Dataset schema: id: int64 (39 to 79M); url: string (32–168 chars); text: string (7–145k chars); source: string (2–105 chars); categories: list (1–6 items); token_count: int64 (3–32.2k); subcategories: list (0–27 items)
8,163,742
https://en.wikipedia.org/wiki/Dehalococcoides
Dehalococcoides is a genus of bacteria within class Dehalococcoidia that obtain energy via the oxidation of hydrogen and subsequent reductive dehalogenation of halogenated organic compounds, in a mode of anaerobic respiration called organohalide respiration. They are well known for their great potential to remediate halogenated ethenes and aromatics. They are the only bacteria known to transform highly chlorinated dioxins and PCBs. In addition, they are the only known bacteria that transform tetrachloroethene (perchloroethene, PCE) to ethene. Microbiology The first member of the genus Dehalococcoides was described in 1997 as Dehalococcoides ethenogenes strain 195 (nom. inval.). Additional Dehalococcoides members were later described as strains CBDB1, BAV1, FL2, VS, and GT. In 2012 all yet-isolated Dehalococcoides strains were summarized under the new taxonomic name D. mccartyi, with strain 195 as the type strain. GTDB release 202 clusters the genus into three species, all labeled Dehalococcoides mccartyi in their NCBI accession. Activities Dehalococcoides are obligately organohalide-respiring bacteria, meaning that they can only grow by using halogenated compounds as electron acceptors. Hydrogen (H2) is often regarded as the only known electron donor supporting growth of Dehalococcoides, although studies have shown that other electron donors, such as formate and methyl viologen, are also effective in promoting growth of various Dehalococcoides species. To perform reductive dehalogenation, electrons are transferred from electron donors through dehydrogenases and ultimately used to reduce halogenated compounds, many of which are human-synthesized chemicals acting as pollutants. Furthermore, it has been shown that a majority of reductive dehalogenase activity lies within the extracellular and membranous components of D. ethenogenes, indicating that dechlorination may function semi-independently from intracellular systems. All known Dehalococcoides strains require acetate for producing cellular material; however, the underlying mechanisms are not well understood, as the strains appear to lack fundamental enzymes that complete biosynthesis cycles found in other organisms. Dehalococcoides can transform many persistent compounds, including tetrachloroethylene (PCE) and trichloroethylene (TCE), which are transformed to ethylene, as well as chlorinated dioxins, vinyl chloride, benzenes, polychlorinated biphenyls (PCBs), phenols and many other aromatic contaminants. Applications Dehalococcoides can uniquely transform many highly toxic and/or persistent compounds that are not transformed by any other known bacteria, in addition to the halogenated compounds that other common organohalide respirers use. For example, common compounds such as chlorinated dioxins, benzenes, PCBs, phenols and many other aromatic substrates can be reduced to less harmful chemical forms. Dehalococcoides are currently the only known dechlorinating bacteria able to degrade the highly recalcitrant tetrachloroethene (PCE) and trichloroethylene (TCE) into forms more benign to the environment, and they are therefore used in bioremediation. Their capacity to grow by using contaminants allows them to proliferate in contaminated soil or groundwater, offering promise for in situ decontamination efforts. The process of transforming halogenated pollutants to non-halogenated compounds involves different reductive enzymes.
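As a textbook-style summary (not drawn from this article's sources), the overall route from PCE to ethene proceeds by stepwise hydrogenolysis: each step consumes one H2, replaces one chlorine atom with hydrogen, and releases HCl, with cis-1,2-dichloroethene (DCE) the commonly observed intermediate isomer:

$$\mathrm{PCE} \xrightarrow{+\,\mathrm{H_2},\ -\,\mathrm{HCl}} \mathrm{TCE} \xrightarrow{+\,\mathrm{H_2},\ -\,\mathrm{HCl}} cis\text{-}\mathrm{DCE} \xrightarrow{+\,\mathrm{H_2},\ -\,\mathrm{HCl}} \mathrm{VC} \xrightarrow{+\,\mathrm{H_2},\ -\,\mathrm{HCl}} \mathrm{ethene}$$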
D. mccartyi strain BAV1 is able to reduce vinyl chloride, a contaminant that usually originates from landfills, to ethene by using a special vinyl chloride reductase thought to be encoded by the bvcA gene. A chlorobenzene reductive dehalogenase has also been identified in strain CBDB1. Several companies worldwide now use Dehalococcoides-containing mixed cultures in commercial remediation efforts. In mixed cultures, other bacteria present can augment the dehalogenation process by producing metabolic products that can be used by Dehalococcoides and others involved in the degradation process. For example, Dehalococcoides sp. strain WL can work alongside Dehalobacter in a step-wise manner to degrade vinyl chloride: Dehalobacter converts 1,1,2-TCA to vinyl chloride, which is subsequently degraded by Dehalococcoides. The addition of electron donors is also needed; these are converted to hydrogen in situ by other bacteria present, and the hydrogen can then be used as an electron source by Dehalococcoides. MEAL (a methanol, ethanol, acetate, and lactate mixture) is documented to have been used as such a substrate. In the US, BAV1 was patented for the in situ reductive dechlorination of vinyl chlorides and dichloroethylenes in 2007. D. mccartyi in high-density dechlorinating bioflocs have also been used in ex situ bioremediation. Although Dehalococcoides have been shown to reduce contaminants such as PCE and TCE, individual strains differ in their dechlorinating capabilities, and this determines the extent to which these compounds are reduced, which has implications for bioremediation strategy. For example, particular strains of Dehalococcoides preferentially produce more soluble intermediates, such as 1,2-dichloroethene isomers and vinyl chloride, which runs counter to bioremediation goals because these intermediates are themselves harmful. Therefore, an important aspect of current bioremediation practice involves the use of multiple dechlorinating organisms, promoting symbiotic relationships within a mixed culture to ensure complete reduction to ethene. Accordingly, studies have focused on the metabolic pathways and environmental factors that regulate reductive dehalogenation in order to better deploy Dehalococcoides in bioremediation. However, not all members of Dehalococcoides can reduce all halogenated contaminants. Certain strains cannot use PCE or TCE as electron acceptors (e.g. CBDB1) and some cannot use vinyl chloride as an electron acceptor (e.g. FL2). D. mccartyi strains 195 and SFB93 are inhibited by high concentrations of acetylene (which builds up in contaminated groundwater sites as a result of TCE degradation) via changes in gene expression that likely disrupt normal electron transport chain function. Even when D. mccartyi strains work well in turning toxic chemicals into harmless ones, treatment times range from months to decades. When selecting Dehalococcoides strains for bioremediation, it is therefore important to consider their metabolic capabilities and their sensitivities to different chemicals. In 2022, the United States National Aeronautics and Space Administration (NASA) co-funded a US$1.9 million multi-year project with Arizona State University, the University of Arizona, and the Florida Institute of Technology to reduce perchlorates (such as those found in the regolith of Mars) so that the soil becomes usable for growing plants.
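The strain-dependent stalling at partially dechlorinated intermediates can be illustrated with a toy sequential-kinetics model. The following is a minimal Python sketch, assuming scipy; the rate constants are hypothetical placeholders, not measured values for any strain:

```python
# Toy model (illustrative only): sequential first-order dechlorination,
# PCE -> TCE -> cis-DCE -> VC -> ethene. Rate constants are hypothetical
# placeholders, not measured values for any Dehalococcoides strain.
from scipy.integrate import solve_ivp

k = {'PCE': 0.05, 'TCE': 0.03, 'cDCE': 0.01, 'VC': 0.002}  # per day

def rhs(t, y):
    pce, tce, cdce, vc, eth = y
    return [
        -k['PCE'] * pce,
        k['PCE'] * pce - k['TCE'] * tce,
        k['TCE'] * tce - k['cDCE'] * cdce,
        k['cDCE'] * cdce - k['VC'] * vc,
        k['VC'] * vc,
    ]

# Ten years of simulated treatment, starting from pure PCE.
sol = solve_ivp(rhs, (0, 3650), [1.0, 0.0, 0.0, 0.0, 0.0])
for name, conc in zip(['PCE', 'TCE', 'cis-DCE', 'VC', 'ethene'], sol.y[:, -1]):
    print(f'{name:8s} {conc:.3f}')
# With a slow VC step, vinyl chloride accumulates for years before the final
# reduction to ethene completes -- the stalling problem the article attributes
# to strains with limited dechlorinating capability.
```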
Genomes Several strains of Dehalococcoides sp. have been sequenced. They contain between 14 and 36 reductive dehalogenase homologous (rdh) operons, each consisting of a gene for the active dehalogenase (rdhA) and a gene for a putative membrane anchor (rdhB). Most rdh operons in Dehalococcoides genomes are preceded by a regulator gene, either of the marR type (rdhR) or a two-component system (rdhST). Dehalococcoides have very small genomes of about 1.4–1.5 million base pairs, among the smallest of any free-living organism. Biochemistry Dehalococcoides strains do not seem to encode quinones but respire with a novel protein-bound electron transport chain. See also Bioaugmentation Bioremediation Biostimulation List of bacterial orders List of bacteria genera References Bacteria genera Bioremediation Monotypic bacteria genera Chloroflexota
Dehalococcoides
[ "Chemistry", "Biology", "Environmental_science" ]
1,798
[ "Biodegradation", "Ecological techniques", "Environmental soil science", "Bioremediation" ]
15,659,323
https://en.wikipedia.org/wiki/Laplace%20operators%20in%20differential%20geometry
In differential geometry there are a number of second-order, linear, elliptic differential operators bearing the name Laplacian. This article provides an overview of some of them. Connection Laplacian The connection Laplacian, also known as the rough Laplacian, is a differential operator acting on the various tensor bundles of a manifold, defined in terms of a Riemannian or pseudo-Riemannian metric. When applied to functions (i.e. tensors of rank 0), the connection Laplacian is often called the Laplace–Beltrami operator. It is defined as the trace of the second covariant derivative: $\Delta T = \operatorname{tr} \nabla^2 T$, where T is any tensor, $\nabla$ is the Levi-Civita connection associated to the metric, and the trace is taken with respect to the metric. Recall that the second covariant derivative of T is defined as $\nabla^2_{X,Y} T = \nabla_X \nabla_Y T - \nabla_{\nabla_X Y} T$. Note that with this definition, the connection Laplacian has negative spectrum. On functions, it agrees with the operator given as the divergence of the gradient. If the connection of interest is the Levi-Civita connection, one can find a convenient formula for the Laplacian of a scalar function in terms of partial derivatives with respect to a coordinate system: $$\Delta \phi = \frac{1}{\sqrt{|g|}} \partial_\mu \left( \sqrt{|g|}\, g^{\mu\nu} \partial_\nu \phi \right),$$ where $\phi$ is a scalar function, $|g|$ is the absolute value of the determinant of the metric (the absolute value is necessary in the pseudo-Riemannian case, e.g. in general relativity) and $g^{\mu\nu}$ denotes the inverse of the metric tensor. Hodge Laplacian The Hodge Laplacian, also known as the Laplace–de Rham operator, is a differential operator acting on differential forms. (Abstractly, it is a second-order operator on each exterior power of the cotangent bundle.) This operator is defined on any manifold equipped with a Riemannian or pseudo-Riemannian metric by $$\Delta = d\delta + \delta d = (d + \delta)^2,$$ where d is the exterior derivative or differential and δ is the codifferential. The Hodge Laplacian on a compact manifold has nonnegative spectrum. The connection Laplacian may also be taken to act on differential forms by restricting it to act on skew-symmetric tensors. The connection Laplacian differs from the Hodge Laplacian by means of a Weitzenböck identity. Bochner Laplacian The Bochner Laplacian is defined differently from the connection Laplacian, but the two turn out to differ only by a sign, whenever the former is defined. Let M be a compact, oriented manifold equipped with a metric. Let E be a vector bundle over M equipped with a fiber metric and a compatible connection $\nabla$. This connection gives rise to a differential operator $\nabla \colon \Gamma(E) \to \Gamma(T^*M \otimes E)$, where $\Gamma(E)$ denotes smooth sections of E, and T*M is the cotangent bundle of M. It is possible to take the $L^2$-adjoint of $\nabla$, giving a differential operator $\nabla^* \colon \Gamma(T^*M \otimes E) \to \Gamma(E)$. The Bochner Laplacian is given by $\Delta = \nabla^* \nabla$, which is a second-order operator acting on sections of the vector bundle E. Note that the connection Laplacian and Bochner Laplacian differ only by a sign: $\nabla^* \nabla = -\operatorname{tr} \nabla^2$. Lichnerowicz Laplacian The Lichnerowicz Laplacian is defined on symmetric tensors by taking $\nabla$ to be the symmetrized covariant derivative. The Lichnerowicz Laplacian is then defined by $\Delta_L = \nabla^* \nabla$, where $\nabla^*$ is the formal adjoint. The Lichnerowicz Laplacian differs from the usual tensor Laplacian by a Weitzenböck formula involving the Riemann curvature tensor, and has natural applications in the study of Ricci flow and the prescribed Ricci curvature problem.
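The coordinate formula given above under Connection Laplacian admits a quick symbolic check. The following is a minimal sketch (assuming only sympy; the unit-sphere metric and the test function are chosen purely for illustration) that evaluates the divergence-of-gradient form of the operator:

```python
# Minimal sketch: the coordinate formula
#   Delta(phi) = (1/sqrt|g|) * d_mu( sqrt|g| * g^{mu nu} * d_nu(phi) )
# evaluated on the round unit 2-sphere in coordinates (theta, phi_c).
import sympy as sp

theta, phi_c = sp.symbols('theta phi_c', positive=True)
coords = [theta, phi_c]

g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # metric of the unit sphere
g_inv = g.inv()
sqrt_det = sp.sin(theta)  # sqrt(det g) = sin(theta) on 0 < theta < pi

def laplace_beltrami(f):
    total = 0
    for mu in range(2):
        for nu in range(2):
            total += sp.diff(sqrt_det * g_inv[mu, nu] * sp.diff(f, coords[nu]),
                             coords[mu])
    return sp.simplify(total / sqrt_det)

# cos(theta) is a degree-1 spherical harmonic, so Delta should return
# -l(l+1) = -2 times the function in this (negative-spectrum) convention.
print(laplace_beltrami(sp.cos(theta)))  # -> -2*cos(theta)
```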
Conformal Laplacian On a Riemannian manifold, one can define the conformal Laplacian as an operator on smooth functions; it differs from the Laplace–Beltrami operator by a term involving the scalar curvature of the underlying metric. In dimension n ≥ 3, the conformal Laplacian, denoted L, acts on a smooth function u by $$Lu = -4\frac{n-1}{n-2}\,\Delta u + Ru,$$ where Δ is the Laplace–Beltrami operator (of negative spectrum) and R is the scalar curvature. This operator often makes an appearance when studying how the scalar curvature behaves under a conformal change of a Riemannian metric. If n ≥ 3, g is a metric and u is a smooth, positive function, then the conformal metric $\tilde g = u^{4/(n-2)} g$ has scalar curvature given by $$\tilde R = u^{-(n+2)/(n-2)}\, Lu.$$ More generally, the action of the conformal Laplacian of $\tilde g$ on smooth functions φ can be related to that of the conformal Laplacian of g via the transformation rule $$L_{\tilde g}(\varphi) = u^{-(n+2)/(n-2)}\, L_g(u\varphi).$$ Complex differential geometry In complex differential geometry, the Laplace operator (also known as the Laplacian) is defined in terms of the complex differential forms. It acts on complex-valued functions of a complex variable and is essentially the complex conjugate of the ordinary partial derivative with respect to $z$. It is important in complex analysis and complex differential geometry for studying functions of complex variables. Comparisons Below is a table summarizing the various Laplacian operators, including the most general vector bundle on which they act, and what structure is required for the manifold and vector bundle. All of these operators are second order, linear, and elliptic. See also Weitzenböck identity References Differential operators Differential geometry
Laplace operators in differential geometry
[ "Mathematics" ]
1,053
[ "Mathematical analysis", "Differential operators" ]
15,660,489
https://en.wikipedia.org/wiki/Vladimir%20Shalaev
Vladimir (Vlad) M. Shalaev is a Distinguished Professor of Electrical and Computer Engineering and Scientific Director for Nanophotonics at the Birck Nanotechnology Center, Purdue University. Education and career Shalaev earned a Master of Science degree in physics (summa cum laude) in 1979 from Krasnoyarsk State University (Russia) and a PhD degree in physics and mathematics in 1983 from the same university. Over the course of his career, Shalaev received a number of awards for his research in the fields of nanophotonics and metamaterials, and he is a Fellow of several professional societies (see the Awards, honors, memberships section below). Prof. Shalaev has written or co-written three books, edited or co-edited four more, and authored over 800 research publications in total. As of May 2024, his h-index is 125, with the total number of citations nearing 70,000, according to Google Scholar. From 2017 to 2023, Prof. Shalaev was on the list of Highly Cited Researchers from the Web of Science Group; he is ranked #9 in the optics category of the Stanford list of the world's top 2% most highly cited scientists (career-long; out of 64,044 entries), and ranked #34 in the US and #58 worldwide in the field of Electronics and Electrical Engineering by Research.com. Research Vladimir M. Shalaev is recognized for his pioneering studies on the linear and nonlinear optics of random nanophotonic composites, which helped mold the research area of composite optical media. He also contributed to the emergence of a new field of engineered, artificial materials: optical metamaterials. Currently, he studies new phenomena resulting from merging metamaterials and plasmonics with quantum nanophotonics. Optical metamaterials Optical metamaterials (MMs) are rationally designed composite nanostructured materials that exhibit unique electromagnetic properties drastically different from the properties of their constituent material components. Metamaterials offer remarkable tailorability of their electromagnetic response via the shape, size, composition and morphology of their nanoscale building blocks, sometimes called 'meta-atoms'. Shalaev proposed and demonstrated the first optical MM exhibiting a negative index of refraction, as well as nanostructures that show artificial magnetism across the entire visible spectrum. (Here and hereafter, only selected, representative papers by Shalaev are cited; for a complete list of Shalaev's publications, visit his website.) He made important contributions to active, nonlinear and tunable metamaterials, which enable new ways of controlling light and accessing new regimes of enhanced light-matter interactions. Shalaev also experimentally realized negative-refractive-index MMs in which an optical gain medium is used to compensate for light absorption (optical loss). He made significant contributions to so-called transformation optics, specifically on optical concentrators and "invisibility cloaks". In collaboration with Noginov, Shalaev demonstrated the smallest (40 nm) nanolaser operating in the visible spectral range. Shalaev also made seminal contributions to two-dimensional, flat metamaterials – metasurfaces – that introduce abrupt changes to the phase of light at a single interface via coupling to nanoscale optical antennas. He realized an extremely compact flat lens, an ultra-thin hologram and a record-small circular dichroism spectrometer compatible with planar optical circuitry. MM designs developed by Shalaev are now broadly employed in research on sub-wavelength optical imaging, nanoscale lasers, and novel sensors.
Shalaev's work has had a strong impact on the whole field of metamaterials. Three of Shalaev's papers remain among the top 50 most-cited of the over 750,000 papers included in the ISI Web of Science OPTICS category since 2005 (as of January 2021). Random composites Shalaev made pioneering contributions to the area of random optical media, including fractal and percolation composites. He predicted the highly localized optical modes ('hot spots') of fractals and percolating films, which were later demonstrated experimentally by Shalaev in collaboration with the Moskovits and Boccara groups. Furthermore, he showed that the hot spots in fractal and percolation random composites are related to the localization of surface plasmons. These localized surface plasmon modes in random systems are sometimes referred to as Shalaev's "hot spots". This research on random composites stemmed from early studies on fractals performed by Shalaev in collaboration with M. I. Stockman; a theory of random metal-dielectric films was worked out in collaboration with A. K. Sarychev. Shalaev also developed fundamental theories of surface-enhanced Raman scattering (SERS) and strongly enhanced optical nonlinearities in fractal and percolation systems, and led experimental studies aimed at verifying the developed theories. Shalaev also predicted that nonlinear phenomena in random systems can be enhanced not only because of the high local fields in hot spots but also due to the rapid, nanoscale spatial variation of these fields in the vicinity of hot spots, which serves as a source of additional momentum and thus enables indirect electronic transitions. Shalaev's contributions to the optics and plasmonics of random media helped transform those concepts into the area of optical metamaterials. Owing to the theory and experimental approaches developed in the area of random composites, optical metamaterials quickly became a mature research field surprisingly rich in new physics. Shalaev's impact on the development of both fields lies in identifying the strong synergy and close connection between these two frontier fields of optics, which unlock an entirely new set of physical properties. New Materials for Nanophotonics and Plasmonics Random composites and metamaterials provide a unique opportunity to tailor their optical properties via the shape, size and composition of their nanoscale building blocks, which often require metals to confine light down to the nanometer scale via the excitation of surface plasmons. To enable practical applications of plasmonics, Shalaev, in collaboration with A. Boltasseva, developed novel plasmonic materials, namely transition metal nitrides and transparent conducting oxides (TCOs), paving the way to durable, low-loss, and CMOS-compatible plasmonic and nanophotonic devices. The proposed plasmonic ceramics, which operate at high temperatures, can offer solutions for highly efficient energy conversion, photocatalysis and data storage technologies. In collaboration with the Faccio group, Shalaev demonstrated ultrafast, strongly enhanced nonlinear responses in TCOs that possess an extremely low (close to zero) linear refractive index – the so-called epsilon-near-zero regime. Independently, the Boyd group obtained equally remarkable results in a TCO material, demonstrating that low-index TCOs hold promise for novel nonlinear optics. Early research Shalaev's PhD work (supervised by Prof. A.K.
Popov) and early research involved theoretical analysis of the resonant interaction of laser radiation with gaseous media, in particular i) Doppler-free multi-photon processes in strong optical fields and their applications in nonlinear optics, spectroscopy and laser physics, as well as ii) the then newly discovered phenomenon of light-induced drift of gases. Awards, honors, memberships Recognized as a Highly Cited Researcher by the Web of Science Group in 2017-2022; Ranked #9 in the optics category of the Stanford list of the world's top 2% most highly cited scientists (career-long; out of 64,044 entries) Ranked #28 in electronics and electrical engineering among top USA researchers, according to Research.com. The 2020 Frank Isakson Prize for Optical Effects in Solids The Optical Society of America Max Born Award, 2010 The Willis E. Lamb Award for Laser Science and Quantum Optics, 2010 IEEE Photonics Society William Streifer Scientific Achievement Award, 2015 Rolf Landauer Medal of the ETOPIM (Electrical, Transport and Optical Properties of Inhomogeneous Media) International Association, 2015 The 2012 Nanotechnology Award from UNESCO The 2014 Goodman Book Award from OSA and SPIE Honorary Doctorate from the University of Southern Denmark, 2015 The 2006 Top 50 Nano Technology Award Winner for "Nanorod Material" The 2009 McCoy Award, Purdue University's highest honor for scientific achievement Fellow of the Materials Research Society (MRS), since 2015 Fellow of the Institute of Electrical and Electronics Engineers (IEEE), since 2010 Fellow of the American Physical Society (APS), since 2002 Fellow of the Optical Society of America (OSA), since 2003 Fellow of the International Society for Optical Engineering (SPIE), since 2005 General co-Chair for 2011 and Program co-Chair for 2009 of the CLEO/QELS conferences Chair of the OSA Technical Group "Photonic Metamaterials", 2004-2010 Reviewing Editor for Science magazine Co-Editor of Applied Physics B - Lasers and Optics, 2006-2013 Topical Editor for the Journal of the Optical Society of America B, 2005-2011 Editorial Board Member for the journal Nanophotonics, since 2012 Editorial Advisory Board Member for Laser and Photonics Reviews, since 2008 Publications Prof. Shalaev has written or co-written three books and edited or co-edited four in the area of his scientific expertise. According to Shalaev's website, over the course of his career he has contributed 30 invited chapters to various scientific anthologies and published a number of invited review articles, over 800 publications in total. He has made over 500 invited presentations at international conferences and leading research centers, including a number of plenary and keynote talks. References 21st-century American physicists Russian physicists Living people American optical physicists Metamaterials scientists Purdue University faculty Fellows of the American Physical Society Fellows of Optica (society) Fellows of SPIE Year of birth missing (living people)
Vladimir Shalaev
[ "Materials_science" ]
2,071
[ "Metamaterials scientists", "Metamaterials" ]
15,666,340
https://en.wikipedia.org/wiki/Consolidated%20Safety-Valve%20Co.%20v.%20Crosby%20Steam%20Gauge%20%26%20Valve%20Co.
Consolidated Safety-Valve Co. v. Crosby Steam Gauge & Valve Co., 113 U.S. 157 (1885), was a patent case determining the validity of patent No. 58,294, granted to George W. Richardson on September 25, 1866, for an improvement in steam safety valves. Technical background Richardson was the first person to make a safety valve which, while it automatically relieved the pressure of steam in the boiler, did not, in effecting that result, reduce the pressure to such an extent as to make the use of the relieving apparatus practically impossible because of the expenditure of time and fuel necessary to bring the steam back up to the proper working standard. His valve was the first which had the strictured orifice to retard the escape of the steam and enable the valve to open with increasing power against the spring and close suddenly, with small loss of pressure in the boiler. Ruling The direction given in the patent that the flange or lip is to be separated from the valve seat by about one sixty-fourth of an inch for an ordinary spring, with less space for a strong spring and more space for a weak spring, to regulate the escape of steam as required, is a sufficient description as a matter of law, and it is not shown to be insufficient as a matter of fact. Letters patent No. 85,963, granted to said Richardson on January 19, 1869, for an improvement in safety valves for steam boilers or generators, are valid. The patents of Richardson were infringed by a valve which produces the same effects in operation by the means described in Richardson's claims, although the valve proper is an annulus and the extended surface is a disc inside of the annulus, the Richardson valve proper being a disc and the extended surface an annulus surrounding the disc, and although the valve proper has two ground joints, and only the steam which passes through one of them goes through the stricture, while, in the Richardson valve, all the steam which passes into the air goes through the stricture, and although the huddling chamber is at the center instead of the circumference, and is in the seat of the valve, under the head, instead of in the head, and the stricture is at the circumference of the seat of the valve instead of being at the circumference of the head. The fact that the prior patented valves were not used, together with the speedy and extensive adoption of Richardson's valve, supports the conclusion as to the novelty of the latter. Suits in equity having been begun in 1879 for the infringement of the two patents, and the circuit court having dismissed the bills, this Court, in reversing the decrees after the first patent had expired but not the second, awarded accounts of profits and damages as to both patents, and a perpetual injunction as to the second patent.
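In modern terms, the pop action described in the Technical background can be summarized with an idealized force balance (a standard engineering simplification, not taken from the Court's opinion). With boiler pressure $P$, seat area $A_\text{seat}$ and extended lip area $A_\text{lip}$, the valve cracks open when $P_\text{open} A_\text{seat}$ equals the spring force; once lifted, steam also acts on the lip, so the opening force jumps to $P (A_\text{seat} + A_\text{lip})$ and the valve pops fully open. Neglecting the change in spring force with lift, the valve then stays open until

$$P_\text{close} \approx P_\text{open}\, \frac{A_\text{seat}}{A_\text{seat} + A_\text{lip}},$$

so a small lip area yields the sudden closure with only a small loss of boiler pressure that the opinion describes.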
See also List of United States Supreme Court cases, volume 113 References External links United States Supreme Court cases United States Supreme Court cases of the Waite Court Steam power 1885 in United States case law United States patent case law Safety valves
Consolidated Safety-Valve Co. v. Crosby Steam Gauge & Valve Co.
[ "Physics", "Engineering" ]
623
[ "Physical quantities", "Steam power", "Power (physics)", "Industrial safety devices", "Safety valves" ]
15,666,618
https://en.wikipedia.org/wiki/Electrical%20outlet%20tester
An electrical outlet tester, receptacle tester, or socket tester is a small device containing a 3-prong power plug and three indicator lights, used for quickly detecting some types of incorrectly wired electrical wall outlets or campsite supplies. Tests and limitations The outlet tester checks that each contact in the outlet appears to be connected to the correct wire in the building's electrical wiring. It can identify several common wiring errors, including swapped phase and neutral, and a failure to connect ground. The tester confirms continuity and polarity of the electrical connections, but it does not verify current-carrying ability, electrical safety (which requires impedance testing), insulation breakdown voltage, or loop connection of ring mains. Simple three-light testers cannot detect some potentially serious house wiring errors, including neutral and ground interchanged at the receptacle. Nor can they detect a "bootleg ground", where the neutral and ground pins have been connected together at the receptacle. These problems can be detected with a multimeter and a test load, to verify that the ground connection is separate from the neutral and is not carrying normal circuit return current, or more typically by using a more sophisticated multifunction tester. A quick supplemental screening test for these simple miswiring errors can be performed using a non-contact voltage tester (NCVT), also called a non-contact voltage detector. If a problem is thus identified, it can be investigated further using more advanced equipment, or the outlet in question can be de-energized and disassembled for careful scrutiny. Some receptacle testers include an additional test button to test the triggering of GFCI devices, which supplements the built-in test button on the GFCI and can be used for testing outlets downstream from a GFCI receptacle. "Plug-in analyzers" may include earth loop impedance and other checks. History An early reference describing the typical outlet tester circuit was published in the March 1967 issue of Popular Mechanics; the circuit consists of two 27 kΩ resistors, one 100 kΩ resistor, and three NE-51 neon lamps with 100 kΩ resistors.
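The three-lamp readout described above can be sketched as a truth table. The following minimal Python model is illustrative only (the fault encoding is an assumption, not any particular product's indicator chart): a neon lamp across each contact pair lights when roughly line voltage appears across it, which also shows why a bootleg ground is indistinguishable from correct wiring:

```python
# Minimal sketch (illustrative): a three-lamp receptacle tester places a neon
# lamp across each pair of contacts; the pattern of lit lamps distinguishes
# several common wiring faults -- but not all of them.

def lamp_pattern(hot, neutral, ground):
    """Each argument is the wire actually present on that contact:
    'H' (energized), 'N' (at neutral/ground potential), or None (open)."""
    def lit(a, b):
        # A neon lamp glows only when line voltage appears across it:
        # exactly one side energized, and neither side open.
        return (a == 'H') != (b == 'H') and a is not None and b is not None
    return {
        'hot-neutral': lit(hot, neutral),
        'hot-ground': lit(hot, ground),
        'neutral-ground': lit(neutral, ground),
    }

print(lamp_pattern('H', 'N', 'N'))   # correct: hot-neutral and hot-ground lit
print(lamp_pattern('N', 'H', 'N'))   # hot/neutral swapped: distinct pattern
print(lamp_pattern('H', 'N', None))  # open ground: only hot-neutral lit
# Bootleg ground (neutral jumpered to ground at the outlet) puts the ground
# contact at neutral potential, so it matches the correct-wiring pattern:
print(lamp_pattern('H', 'N', 'N'))   # indistinguishable from correct wiring
```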
See also Polarized plugs Test light References External links Ways a receptacle tester can mislead (by an electrician) Diagnosing Power Problems at the Receptacle, Duane Smith, EC&M, October 1, 2004 Electronic test equipment Electrical wiring
Electrical outlet tester
[ "Physics", "Technology", "Engineering" ]
510
[ "Electrical systems", "Building engineering", "Electronic test equipment", "Measuring instruments", "Physical systems", "Electrical engineering", "Electrical wiring" ]
15,667,957
https://en.wikipedia.org/wiki/Minkowski%20content
The Minkowski content (named after Hermann Minkowski), or the boundary measure, of a set is a basic concept that uses notions from geometry and measure theory to generalize the length of a smooth curve in the plane, and the area of a smooth surface in space, to arbitrary measurable sets. It is typically applied to fractal boundaries of domains in the Euclidean space, but it can also be used in the context of general metric measure spaces. It is related to, although different from, the Hausdorff measure. Definition For $A \subset \mathbb{R}^n$, and each integer m with $0 \le m \le n$, the m-dimensional upper Minkowski content is $$M^{*m}(A) = \limsup_{r \to 0^+} \frac{\mu\{x : \operatorname{dist}(x, A) < r\}}{\alpha(n-m)\, r^{n-m}}$$ and the m-dimensional lower Minkowski content is defined as $$M_*^m(A) = \liminf_{r \to 0^+} \frac{\mu\{x : \operatorname{dist}(x, A) < r\}}{\alpha(n-m)\, r^{n-m}},$$ where $\alpha(n-m)\, r^{n-m}$ is the volume of the (n−m)-ball of radius r and $\mu$ is the n-dimensional Lebesgue measure. If the upper and lower m-dimensional Minkowski contents of A are equal, then their common value is called the Minkowski content $M^m(A)$. Properties The Minkowski content is (generally) not a measure. In particular, the m-dimensional Minkowski content in Rn is not a measure unless m = 0, in which case it is the counting measure. Indeed, the Minkowski content clearly assigns the same value to a set A as to its closure. If A is a closed m-rectifiable set in Rn, given as the image of a bounded set from Rm under a Lipschitz function, then the m-dimensional Minkowski content of A exists, and is equal to the m-dimensional Hausdorff measure of A.
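As a worked example (a standard one, not taken from this article's references): let A be a straight line segment of length L in $\mathbb{R}^2$, so n = 2 and m = 1. The set of points within distance r of A is a rectangle with two half-disc caps, of area $2Lr + \pi r^2$, and the 1-ball of radius r has 1-volume $\alpha(1)\, r = 2r$, so

$$M^1(A) = \lim_{r \to 0^+} \frac{2Lr + \pi r^2}{2r} = L,$$

recovering the length of the segment, as the definition intends.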
See also Gaussian isoperimetric inequality Geometric measure theory Isoperimetric inequality in higher dimensions Minkowski–Bouligand dimension Footnotes References Measure theory Geometry Analytic geometry Dimension theory Dimension Measures (measure theory) Fractals Hermann Minkowski
Minkowski content
[ "Physics", "Mathematics" ]
373
[ "Geometric measurement", "Functions and mappings", "Mathematical analysis", "Physical quantities", "Measures (measure theory)", "Quantity", "Mathematical objects", "Fractals", "Size", "Mathematical relations", "Geometry", "Theory of relativity", "Dimension" ]
15,668,075
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%205
Cyclin-dependent kinase 5 is a protein, and more specifically an enzyme, that is encoded by the Cdk5 gene. It was discovered in the early 1990s and is prominently expressed in post-mitotic neurons of the central nervous system (CNS). The molecule belongs to the cyclin-dependent kinase family. Kinases are enzymes that catalyze phosphorylation reactions, in which the substrate gains a phosphate group donated by ATP. Phosphorylation is of vital importance in processes such as glycolysis, making kinases essential to the cell through their roles in metabolism, cell signaling, and many other processes. Structure Cdk5 is a proline-directed serine/threonine kinase, first identified as a CDK family member due to its structural similarity to CDC2/CDK1 in humans, a protein that plays a crucial role in the regulation of the cell cycle. The Cdk5 gene contains 12 exons in a region of around 5,000 nucleotides (5 kb), as determined by Ohshima after cloning the mouse Cdk5 gene. Cdk5 has 292 amino acids and presents both α-helix and β-strand structures. Even though Cdk5 has a similar structure to other cyclin-dependent kinases, its activators are highly specific (CDK5R1 and CDK5R2). Some investigations have reported that the active states of protein kinases structurally differ from each other in order to preserve the geometry of their machinery so that catalytic output works properly; the Cdk5 kinase has an original design as well. Cdk5 belongs to the eukaryotic protein kinases (ePKs). A crystal structure of the catalytic domain of cAMP-dependent protein kinase showed that it holds two lobes: a small N-terminal lobe, arranged as an antiparallel β-sheet and containing nucleotide motifs that orient the nucleotide for phospho-transfer, and a large, helix-shaped C-terminal lobe, which helps to identify the substrate and includes residues crucial for the phospho-transfer.
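Because Cdk5 is proline-directed, its phosphoacceptor serine or threonine is followed immediately by a proline; the commonly cited full consensus is (S/T)PX(K/H/R). The following is a minimal Python sketch of scanning a sequence for this motif (the peptide below is invented for illustration, not a real Cdk5 substrate):

```python
# Minimal sketch (illustrative): scan a protein sequence for the
# proline-directed consensus (S/T)PX(K/H/R) commonly cited for Cdk5.
import re

CONSENSUS = re.compile(r'[ST]P.[KHR]')

def candidate_sites(seq: str):
    """Return 1-based positions of Ser/Thr residues matching the motif."""
    return [m.start() + 1 for m in CONSENSUS.finditer(seq)]

demo = "MAGLSPQKRTTPAHHVSPNKL"  # hypothetical peptide, not a real substrate
print(candidate_sites(demo))    # -> [5, 11, 17]
```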
Physiological role Pain Cdk5 has emerged as an essential kinase in sensory pathways, and reports by Pareek et al. suggest that it is necessary for pain signaling. CDK5 is required for the proper development of the brain, and to be activated it must associate with CDK5R1 or CDK5R2. Unlike other cyclin-dependent kinases, CDK5 does not also require phosphorylation on the T-loop; binding with the activator is sufficient to activate the kinase. Neurons Cdk5 is abundant and mainly expressed in neurons, where it phosphorylates neurofilaments (protein polymers of high molecular weight) and the microtubule-associated protein tau, both abundant in the central nervous system (CNS). The enzyme is involved in many aspects of neuronal development and function. The main role of Cdk5 in neurons is to ensure proper neuronal migration. Neurons send out both dendrites and axons to form connections with other neurons in order to transmit information, and Cdk5 regulates this process. To function, Cdk5 needs to be activated by p35 (three of whose amino acids, Asp-259, Asn-266, and Ser-270, are involved in the formation of hydrogen bonds with Cdk5) or p39 (the isoform of p35), which are two of its neuron-specific regulatory subunits. This means that the level of expression of p35 and p39 tracks the activity of the enzyme: if Cdk5 activity is high during brain development, its activators will be highly expressed. Indeed, when studies were conducted on mice lacking p35 and p39, the results were the same as those observed in mice lacking Cdk5: there were clear disruptions of the laminar structures in the cerebral cortex, the olfactory bulb, the hippocampus, and the cerebellum. The proper development and functionality of these areas depend on Cdk5, which in turn relies on the correct expression of p35 and p39. Cdk5 also collaborates with Reelin signaling to ensure proper neuronal migration in the developing brain. Cdk5 is not only implicated in neuronal migration: the enzyme also helps manage neurite extension, synapse formation, and synaptic transmission. It is also worth noting that Cdk5 regulates apoptosis, which is necessary to ensure that the neural connections formed are correct. Moreover, because Cdk5 also intervenes in the regulation of synaptic plasticity, it is implicated in the processes of learning and memory formation, as well as in the development of drug addiction. On top of that, Cdk5 modulates actin-cytoskeleton dynamics by phosphorylating Pak1 and filamin 1, and regulates microtubules by phosphorylating tau, MAP1B, doublecortin, Nudel, and CRMPs, which are all microtubule-associated proteins. Improper expression of Cdk5 generates defects in these substrates that can lead to multiple illnesses. For example, a defect in filamin 1 in humans provokes periventricular heterotopia, and defects in Lis1 and doublecortin cause lissencephaly type 1. Indeed, four members of a consanguineous Israeli Muslim family who suffered from lissencephaly-7 with cerebellar hypoplasia had a splice-site mutation in the Cdk5 gene. Drug abuse Cdk5 has been shown to be directly linked with drug abuse. It is established that drugs act on the reward system by disturbing intracellular signal transduction pathways, with Cdk5 being involved. Upon repeated administration, several components of dopamine signalling are modified, including changes in gene expression and in the circuitry of dopaminoceptive neurons. In the example of cocaine, CREB (cAMP response element binding protein) causes a transient burst in immediate-early gene expression in the striatum, as well as the expression of ΔFosB isoforms, which accumulate and persist in striatal neurons with an extremely long half-life. Many studies have revealed that the overexpression of ΔFosB due to drug abuse causes an upregulation of Cdk5, which lies downstream of ΔFosB expression in the striatum, including the nucleus accumbens. It has been established that with repeated exposure to drugs such as cocaine and overexpression of ΔFosB isoforms, Cdk5 is upregulated, mediated by the upregulation of p35. It has also been demonstrated that this enzyme has an important place in the regulation of dopamine neurotransmission. Indeed, Cdk5 can act on the dopamine system by phosphorylating DARPP-32. As a consequence of Cdk5 upregulation, there is also a rise in the number of dendritic branch points and spines, both in medium spiny neurons in the nucleus accumbens and in pyramidal neurons in the medial prefrontal cortex. Hence its involvement in the reward system and, by extension, addiction. Further analysis of the relationship between Cdk5 levels and drug effects has shown a strong dependence on the dose and frequency of administration.
For instance, if the frequency of cocaine dosing is low, or the dose is administered continuously over a period, the effects of cocaine are present even though Cdk5 production in the nucleus accumbens and ventral tegmental area, and prefrontal cortex activity, do not increase. However, with significantly frequent doses, the effects of cocaine are not displayed despite the enhanced level of Cdk5. These differences can be explained by the fact that Cdk5 upregulation is a transitional state in overexposure to drugs like cocaine. Cdk5 has been suggested as a therapeutic target in addiction management. For example, it has been shown that sustained administration of Cdk5 antagonists inhibits the growth of spiny dendrites in the nucleus accumbens, which could be an avenue for addiction management. Further, Cdk5 could be used as a diagnostic marker for addiction. Pancreas Even though the main role of Cdk5 is related to neuronal migration, its impact on the human body is not limited to the nervous system. Indeed, Cdk5 plays an important part in the control of insulin secretion in the pancreas. The enzyme has been found in pancreatic β cells and has been shown to reduce insulin exocytosis by phosphorylating the L-type voltage-dependent Ca2+ channel (L-VDCC). Immune system During T-cell activation, Cdk5 phosphorylates coronin 1a, a protein that contributes to phagocytosis and regulates actin polarization; the kinase thereby promotes T-cell survival and motility. Cdk5 also takes part in the production by T-cells of interleukin 2 (IL-2), a cytokine involved in cell signaling. To do so, it relieves the repression of interleukin 2 transcription by histone deacetylase 1 (HDAC1) through phosphorylation of the mSin3a protein. This reduces the ability of the HDAC1/mSin3a complex to bind to the IL-2 promoter, which leads to increased interleukin 2 production. Regulation of exocytosis Synaptic vesicle exocytosis is also regulated by Cdk5, through phosphorylation of the Munc-18a protein, which is indispensable for secretion, as it has a great affinity for a derivative of the SNAP receptor (SNARE protein). This phosphorylation was demonstrated by stimulating secretion from neuroendocrine cells, whereupon Cdk5 activity increased; when Cdk5 was removed, norepinephrine secretion decreased. Memory An experiment with mice demonstrated a relation between memory and Cdk5. On the one hand, mice did not show fear learned from a previous experience when Cdk5 was inactivated; on the other hand, when the enzyme's activity was increased in the hippocampus (where memories are stored), the fear reappeared. Remodelling of the actin cytoskeleton in the brain During embryogenesis, Cdk5 is essential for brain development, as it is crucial for the regulation of the cytoskeleton, which in turn is important for remodelling in the brain. Several neuronal processes related to brain development (pain signalling, drug addiction, behavioural changes, the formation of memories and learning) derive from rapid modifications of the cytoskeleton. Negative remodelling of the neuronal cytoskeleton is associated with a loss of synapses and neurodegeneration in brain diseases in which Cdk5 activity is deregulated. Accordingly, most Cdk5 substrates, both physiological and pathological, are related to the actin cytoskeleton.
Some of them have been identified in recent decades: ephexin1, p27, Mst3, CaMKv, kalirin-7, RasGRF2, Pak1, WAVE1, neurabin-1, TrkB, 5-HT6R, talin, drebrin, synapsin I, synapsin III, CRMP1, GKAP, SPAR, PSD-95, and LRRK2. Circadian clock regulation The mammalian circadian clock is controlled by Cdk5 through the phosphorylation of PER2. In the laboratory, when Cdk5 was blocked in the SCN (the suprachiasmatic nuclei, a master oscillator of the circadian system), the free-running period in mice was consequently shortened. During the diurnal period, PER2 is phosphorylated by Cdk5 at serine residue 394; Cryptochrome 1 (CRY1) can then easily interact with it, and the PER2-CRY1 complex enters the nucleus. The molecular circadian cycle and period are properly established thanks to the role of Cdk5 as a nuclear driver of these proteins. Regulator of cell apoptosis and cell survival In addition to all the roles previously mentioned, Cdk5 is involved in numerous cellular functions such as cell motility, survival, apoptosis, and gene regulation. The plasma membrane, cytosol and perinuclear region are the locations where the Cdk5/p35 activator complex is found. Nevertheless, Cdk5 can also be activated by cyclin I; this regulator causes an increase in the expression of Bcl-2 family proteins, which are associated with anti-apoptotic functions. Role in disease The chemistry underlying a wide variety of neurological disorders points to Cdk5: abnormal phosphorylation of tau is a pathological action carried out by this kinase, and neurofibrillary tangles are the consequence. Neurodegenerative diseases Cdk5 plays an essential role in the central nervous system. During embryogenesis, this kinase is necessary for the development of the brain, and in adult brains Cdk5 is needed for many neuronal processes, for instance learning and the formation of memories. Nevertheless, if Cdk5 activity is deregulated, it can lead to severe neurological diseases, including Alzheimer's disease, Parkinson's disease, multiple sclerosis and Huntington's disease. Alzheimer's disease (AD) is responsible for 50-70% of all dementia cases. Studies have shown that excess activity of Cdk5, a proline-directed protein kinase, leads to tau hyperphosphorylation, a process observed in many AD patients. The Cdk5 activators p35 and p39 (both myristoylated proteins anchored to cell membranes) can be cleaved by calcium-activated calpain to p25 and p29. This results in a migration of the proteins from the cell membrane to nuclear and perinuclear regions, and in a deregulation of Cdk5 activity. p25 and p29 have half-lives 5 to 10 times longer than those of p35 and p39. This is problematic because it can lead to the accumulation of Cdk5 activators and an excess of Cdk5 activity, which then causes tau hyperphosphorylation. On top of that, an increase in Aβ levels can also lead to tau hyperphosphorylation by stimulating the production of p25. Therefore, Cdk5 could be a potential drug target for treating patients with AD, because its inhibition could reduce tau hyperphosphorylation and consequently reduce the formation of neurofibrillary tangles (NFTs) and slow the process of neurodegeneration. Huntington's disease (HD) is another neurodegenerative disease linked to the activity of Cdk5. Dynamin-related protein 1 (Drp1) is an essential element in mitochondrial fission.
Cdk5 can alter the subcellular distribution of Drp1 and its activity. Indeed, it has been observed that inhibiting the overly active kinase allows Drp1 to function properly in mitochondrial fragmentation, avoiding neurotoxicity in the brain. Moreover, Cdk5 can influence alterations of mitochondrial morphology and transmembrane potential, which can lead to cell death and neurodegeneration. This means that Cdk5 is a possible therapeutic target for treating the mitochondrial dysfunction that leads to the development of HD. Parkinson's disease (PD): Cdk5 is considered to be tightly involved in Parkinson's disease. This neurodegenerative disease is caused by the progressive loss of nerve cells in, among other regions, the part of the brain called the substantia nigra. Cdk5 is able to form a complex with p25 (a cleavage peptide of p35): Cdk5/p25. p25 leads to hyperactivity of Cdk5, and the result of the formation of this complex is the apoptosis of nerve cells and neuroinflammation. This discovery could be used to treat Parkinson's disease: to inhibit the Cdk5/p25 complex, an antagonist of Cdk5, CIP, can be used. The results of this treatment have been surprisingly positive: not only were parkinsonian symptoms appeased, but CIP also turned out to protect against the loss of dopaminergic neurons in the substantia nigra. Multiple sclerosis (MS) is one of the diseases in which a failure of remyelination can provoke lasting axonal damage and an irreversible loss of function. Cyclin-dependent kinase 5 is involved in this process, as it regulates oligodendrocyte (OL) development and myelination in the CNS. Cdk5 inhibitors impede remyelination and disrupt neural cell activity; low expression of MBP and proteolipid protein and a decrease in the number of myelinated axons indicate the lack of myelin repair. Cancer Cdk5 is involved in invasive cancers, apparently by reducing the activity of the actin regulatory protein caldesmon. Although Cdk5 is not mutated in cancer tissues, its activity and expression are deregulated. The kinase phosphorylates tumor suppressors and transcription factors that are involved in cell cycle progression. Cdk5 is involved in tumor proliferation, migration, angiogenesis, chemotherapy resistance and anti-tumor immunity. It also participates in signalling pathways that lead to metastasis, and it regulates the cytoskeleton and focal adhesions. A possible cancer treatment could consist in targeting Cdk5 and preventing its binding to its activators and substrates. Recent studies of radiation therapy in patients with large-cell lung cancer found that CDK5 depletion diminishes lung cancer development and radiation resistance in vitro and in vivo. It was demonstrated that a decrease in Cdk5 reduced the expression of TAZ, a downstream effector of the Hippo pathway; as a result, this loss mitigates signal activation through the Hippo-TAZ axis. Consequently, Cdk5 can be treated as a target against lung cancer. History CDK5 was originally named NCLK (Neuronal CDC2-Like Kinase) due to its similar phosphorylation motif. CDK5 in combination with an activator was also referred to as Tau Protein Kinase II. Furthermore, Cdk5 has been reported to be involved in T-cell activation and to play an important role in the development of autoimmune disorders, such as multiple sclerosis.
Interactions Cyclin-dependent kinase 5 has been shown to interact with different molecules and substrates. It interacts with LMTK2, NDEL1, CDK5R1, Nestin and PAK1. The gene CABLES1 codes for a cyclin-dependent kinase binding protein whose complete name is Cdk5 and Abl enzyme substrate 1. This binding protein links Cdk5 and c-Abl, a tyrosine kinase. Active c-Abl phosphorylates CDK5 on tyrosine 15, a process enhanced by the CABLES1 protein. As a result, Cdk5/p35 activity in developing neurons increases. CABLES1 and the mentioned phosphorylation may play an important role in axon growth regulation. The gene called CABLES2 codes for another binding protein, Cdk5 and Abl enzyme substrate 2. Although its function is unknown, it may be involved in the G1-S cell cycle transition, the stage between cell growth and DNA replication. Moreover, Cdk5 phosphorylates apoptosis-associated tyrosine kinase (AATK). This protein probably induces growth arrest and apoptosis of myeloid precursor cells, and also activates CDK5R1. The glutathione S-transferase P enzyme, encoded by the GSTP1 gene, causes a negative regulation, or reduction, of Cdk5 activity. This is achieved via p25/p35 translocation, preventing neurodegeneration. Cdk5 binds to the protein histone deacetylase 1 (HDAC1). When Cdk5/p25 deregulates HDAC1, abnormal cell-cycle activity appears and double-strand DNA breaks occur, causing neurotoxicity. The cytoplasmic distribution of Cdk5 is determined by its activators p35 and p39. Both activators have localization motifs, which lead to the presence of Cdk5 in the plasma membrane and in the perinuclear region; myristoylation of p35 and p39 allows Cdk5 to associate with membranes. Cdk5 also interacts with the APEX1 endonuclease: the kinase phosphorylates Thr-233, causing an accumulation of DNA damage and, eventually, neuronal death. Cdk5 phosphorylates and regulates the tumor suppressor protein p53. In apoptotic PC12 cells there is a simultaneous increase in Cdk5 and p53 levels, so it is thought that the mechanism by which Cdk5 induces apoptosis could involve phosphorylation and activation of p53. Once Cdk5 is phosphorylated by the protein EPH receptor A4, it phosphorylates guanine nucleotide exchange factors (NGEF), regulating RhoA and dendritic spine morphogenesis. Cdk5 also phosphorylates focal adhesion kinase (FAK); this may stimulate nuclear translocation, which plays an important role in neuronal migration by regulating a centrosome-associated microtubule structure. 5-Hydroxytryptamine receptor 6 (HTR6), which is believed to control cholinergic neuronal transmission in the brain, manages pyramidal neuron migration during corticogenesis; in order to do so, HTR6 regulates Cdk5 activity. Cdk5 interacts with CTNNB1 and CTNND2 as well. References Further reading External links Cell cycle Proteins EC 2.7.11
Cyclin-dependent kinase 5
[ "Chemistry", "Biology" ]
4,867
[ "Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle" ]
15,668,833
https://en.wikipedia.org/wiki/Season%20cracking
Season cracking is a form of stress-corrosion cracking of brass cartridge cases, originally reported from British forces in India. During the monsoon season, military activity was temporarily reduced, and ammunition was stored in stables until the dry weather returned. Many brass cartridges were subsequently found to be cracked, especially where the case was crimped to the bullet. It was not until 1921 that the phenomenon was explained by Moor, Beckinsale and Mallinson: ammonia from horse urine, combined with the residual stress in the cold-drawn metal of the cartridges, was responsible for the cracking. Season cracking is characterised by deep brittle cracks which penetrate into affected components. If the cracks reach a critical size, the component can suddenly fracture, sometimes with disastrous results. However, if the concentration of ammonia is very high, attack is much more severe and occurs over all exposed surfaces. The problem was solved by annealing the brass cases after forming so as to relieve the residual stresses. Ammonia The attack takes the form of a reaction between ammonia and copper to form the cuprammonium ion, formula [Cu(NH3)4]2+, a chemical complex which is water-soluble and hence washed from the growing cracks. The problem of cracking can therefore also occur in copper and any other copper alloy, such as bronze. The tendency of copper to react with ammonia was exploited in making rayon, and the deep blue colour of the aqueous solution of copper(II) hydroxide in ammonia is known as Schweizer's reagent. Materials Although the problem was first found in brass, any alloy containing copper is susceptible to it. This includes copper itself (as used in pipe, for example), bronzes and other alloys with a significant copper content. Like all problems with hairline cracks, detection in the early stages of attack is difficult, but the characteristic blue coloration may give a clue to attack. Microscopic inspection will often reveal the cracks, and X-ray analysis using the energy-dispersive X-ray (EDX) facility of a scanning electron microscope (SEM) should reveal the presence of elemental nitrogen from ammoniacal traces. See also References Corrosion Engineering concepts Reliability engineering Safety Firearm safety
Season cracking
[ "Chemistry", "Materials_science", "Engineering" ]
435
[ "Systems engineering", "Reliability engineering", "Metallurgy", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]
18,553,798
https://en.wikipedia.org/wiki/CODESYS
Codesys (spelled “CODESYS” by the manufacturer, previously “CoDeSys”) is an integrated development environment for programming controller applications according to the international industrial standard IEC 61131-3. CODESYS is developed and marketed by the CODESYS Group, headquartered in Kempten. The company was founded in 1994 under the name 3S-Smart Software Solutions and was renamed, in steps in 2018 and 2020, to Codesys Group / Codesys GmbH. Version 1.0 of CODESYS was released in 1994. Licenses of the CODESYS Development System are free of charge and can be installed legally without copy protection on further workstations. Integrated use cases The tool covers different aspects of industrial automation in one interface: Engineering The five programming languages for application programming defined in IEC 61131-3 are available in the CODESYS development environment. IL (instruction list) is an assembler-like programming language. The IEC 61131-3 user organization PLCopen has declared this language “deprecated”, meaning it should no longer be used for new projects. ST (structured text) is similar to programming in Pascal or C. LD (ladder diagram) enables programmers to virtually combine relay contacts and coils. FBD (function block diagram) enables users to rapidly program both Boolean and analog expressions. SFC (sequential function chart) is convenient for programming sequential processes and flows. An additional graphical editor available in CODESYS is CFC (Continuous Function Chart), a sort of freehand FBD editor: while the FBD editor works in a network-oriented way and arranges the function blocks automatically, in CFC it is possible to place all function blocks freely and thus also to realize feedback without intermediate variables. This language is therefore also particularly suitable for an overview representation of an application. Integrated compilers transform the application code created by CODESYS into native machine code (binary code), which is then downloaded onto the controller. The most important 32- and 64-bit CPU families are supported, such as TriCore, 80x86/iX, ARM/Cortex, PowerPC, SH and BlackFin. Once CODESYS is connected with the controller, it offers extensive debugging functionality, such as monitoring/writing/forcing variables, setting breakpoints, performing single steps, recording variable values online on the controller in a ring buffer (Sampling Trace), and capturing core dumps during exceptions. CODESYS V3.x is based on the so-called CODESYS Automation Platform, an automation framework that device manufacturers can extend with their own plug-in modules. The CODESYS Professional Developer Edition offers the option to add components subject to licensing, e.g. integrated UML support, a connection to the version control systems Apache Subversion and Git, online runtime performance analysis (“Profiler”), static code analysis of the application code, or script-based automated test execution. With the CODESYS Application Composer, which can partly be used free of charge, users can have complete automation applications generated as part of the IEC 61131-3 tool. To do this, they configure their machine or system on the basis of modules that define, for example, the mechatronic structure or the software function to be used, including the entire functionality. From this configuration, an integrated configurator generates viewable IEC 61131-3 code.
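For illustration, the following is a minimal IEC 61131-3 Structured Text sketch of the kind of application logic written in these editors. Everything in it (program name, variables, timing) is invented for the example rather than taken from CODESYS documentation; TON is the standard IEC 61131-3 on-delay timer function block:

```iecst
PROGRAM Main
VAR
    StartButton : BOOL;   // momentary start input
    StopButton  : BOOL;   // momentary stop input, dominant
    Motor       : BOOL;   // latched output
    DebounceTmr : TON;    // standard IEC 61131-3 on-delay timer
END_VAR

// Require the start button to be held for 50 ms before latching,
// a simple debounce against contact bounce on the input.
DebounceTmr(IN := StartButton, PT := T#50MS);

IF DebounceTmr.Q THEN
    Motor := TRUE;
END_IF;
IF StopButton THEN
    Motor := FALSE;       // stop always wins over start
END_IF;
END_PROGRAM
```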
Runtime After implementing the CODESYS Control Runtime System, intelligent devices can be programmed with CODESYS. A fee-based toolkit provides this runtime system as source and object code, and it can be ported to different platforms. Since the beginning of 2014, a runtime version has also existed for all Raspberry Pi models, although it does not guarantee hard real-time characteristics. In addition to the Ethernet-based fieldbuses, the Raspberry Pi interfaces such as I²C, SPI and 1-Wire are supported. Furthermore, SoftPLC systems for Windows and Linux are available, which turn industrial PCs and other standard device platforms from manufacturers such as Janztec, WAGO, Siemens or Phoenix Contact into CODESYS-compatible controllers. These SoftPLC systems can also be operated as virtual PLCs in virtualization environments, such as software containers and hypervisors, in real time. Fieldbus technology Different fieldbuses can be used directly in the CODESYS programming system. For this purpose, the tool integrates configurators for the most common systems, such as PROFIBUS, CANopen, EtherCAT, PROFINET and EtherNet/IP. For most of these systems, protocol stacks are available in the form of CODESYS libraries which can be loaded onto the supported devices. In addition, the platform provides optional support for application-specific communication protocols, such as BACnet or KNX for building automation. Communication For the exchange of data with other devices in control networks, CODESYS can integrate and use communication protocols. These include proprietary protocols; standardized automation protocols such as OPC and OPC UA; standard protocols for serial and Ethernet interfaces; and standard web-technology protocols such as MQTT or HTTPS. The latter are also offered as encapsulated libraries for simplified access to public clouds from AWS or Microsoft (Azure); a small illustrative MQTT client sketch is given below, after the feature overview. Visualization An integrated editor helps users create complex visualization masks directly in the CODESYS programming system and animate them based on application variables. To simplify the procedure, ready-made visualization elements are available, and canvas (HTML5) elements can also be integrated and animated. An optional toolkit enables users to create their own visualization elements. The masks created are used, among other things, for application tests and commissioning during online operation of the programming system. With optional visualization clients, the created masks can also be used to operate the machine or plant, e.g. on controllers with an integrated display (product name CODESYS TargetVisu), in a separate portable runtime under Windows or Linux (product name CODESYS HMI), or in an HTML5-capable web browser (product name CODESYS WebVisu). For simplified use of CODESYS WebVisu, a free Android app is available (product name CODESYS Web View). Motion CNC Robotics An optional modular solution for controlling complex movements with an IEC 61131-3 programmed controller is also completely integrated in the CODESYS programming system. The modular solution includes: editors for motion planning, e.g. with cams or DIN 66025 CNC descriptions; an axis-group configurator for multiple robot kinematics; and library modules for decoders, interpolators and program execution, e.g.
according to PLCopen MotionControl, as well as for kinematic transformations and visualization templates. Safety Pre-certified software components within CODESYS make it much easier for device manufacturers to have their controllers certified to SIL2 or SIL3 according to IEC 61508. To this end, CODESYS Safety consists of components within both the programming system and the runtime system, while development remains completely integrated in the IEC 61131-3 programming environment. Users of control technology use the safety functions with devices that have already implemented CODESYS Safety. In addition, an add-on product is available with which the certified EtherCAT safety terminals from Beckhoff can be configured within the CODESYS Development System. Automation Server For the administration of compatible devices, an Industry 4.0 platform is available which allows, for example, the storage of projects in source and binary code via web browser and their download to connected devices. The platform is currently hosted only in a public cloud; operation on local, on-premise servers has been announced for 2024. The communication between the cloud and the controllers is performed through special Edge Gateway software, whose security features have been rated A+ by SSL Labs. This connection can therefore be used to communicate securely with devices integrated in the Automation Server without additional VPN tunnels or firewalls, e.g. for displaying web visualizations or for debugging/updating the application software on the device. Additional sources of information and assistance Since 2012, the manufacturer has operated an online forum in which users can communicate with each other. In 2020 it was transferred to "CODESYS Forge", an open-source platform for the development of projects and the sharing of knowledge, a section of which acts as a Q&A forum ("CODESYS Talk"). An Android app is available to simplify the use of the platform. With the CODESYS Store, the manufacturer operates an online shop in which additional options and products are offered. A considerable part of the product offering consists of free sample projects that make it easier to try out features and supported technologies. As in an app store, users can search for and install the offered products and projects directly from the CODESYS Development System without leaving the platform. Industrial usage According to the manufacturer, at least 400 device manufacturers from different industrial sectors offer intelligent automation devices with a CODESYS programming interface. These include devices from global players such as Schneider Electric, Beckhoff, Eaton Corporation, WAGO and Festo, but also niche suppliers of industrial controllers. More than 100,000 end users, such as machine and plant builders around the world, employ CODESYS for different automation tasks and applications. In the CODESYS Store alone, more than 310,000 verified users are registered (as of 10/2023). In a study published in 2019, the independent market research institute IoT Analytics identified CODESYS as the market leader for hardware-agnostic SoftPLCs. Furthermore, numerous educational institutions (vocational schools, colleges, universities) around the world use CODESYS in the training of control and automation technology.
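The Communication section above names MQTT among the web-technology protocols supported for cloud connectivity. As a generic illustration of what a consumer of such data looks like, the sketch below subscribes to a topic using the widely used paho-mqtt Python library (1.x callback API); it is not CODESYS's own library, and the broker address and topic name are invented examples.

```python
# Minimal MQTT subscriber sketch using the paho-mqtt 1.x client API.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # hypothetical broker address
TOPIC = "plant/line1/plc/temperature"  # hypothetical topic published by a controller

def on_connect(client, userdata, flags, rc):
    # rc == 0 indicates a successful connection; subscribe once connected.
    print("connected with result code", rc)
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Payload format is application-defined; here we assume a plain number.
    print(msg.topic, float(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()  # blocks and dispatches the callbacks above
```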
Membership in organisations PLCopen OSADL CAN in Automation OPC Foundation Profibus SERCOS interface EtherCAT IO-Link ODVA The Open Group See also Integrated development environment Process control Programmable logic controller (PLC) Software engineering References Bibliography Kai Stüber: Konzeption und Implementierung der Ansteuerung einer Bohreinrichtung mit einer speicherprogrammierbaren Steuerung und CODESYS (project report), 2023 (e-book). Stefan Henneken: Use of the SOLID principles with the IEC 61131-3 – 5 Principles for Object-Oriented Software Design in the PLC Programming, 2023 (paperback / e-book). Gary L. Pratt: The BOOK of CODESYS, self-published, 2021. Peter Beater: Grundkurs der Steuerungstechnik mit CODESYS: Grundlagen und Einsatz Speicherprogrammierbarer Steuerungen, 2021. Peter Beater: Aufgabensammlung zur Steuerungstechnik: 56 mit Papier und Bleistift oder CoDeSys gelöste Aufgaben, 2019. Karl Schmitt: SPS-Programmierung mit ST: nach IEC 61131 mit CoDeSys und mit Hinweisen zu STEP 7 im TIA-Portal, 2019. Stefan Nothdurft: Projekt Bohreinrichtung. Implementierung einer speicherprogrammierbaren Steuerung mit CoDeSys, 2018. Jochen Petry and Karsten Reinholz: SPS-Programmierung mit CODESYS V2.3: Praxisorientiert – Realitätsnah – Erprobt!, with a foreword by Karsten Reinholz, 2014. Jochen Petry: IEC 61131-3 mit CoDeSys V3: Ein Praxisbuch für SPS-Programmierer, self-published by 3S-Smart Software Solutions, 2011. Karl Schmitt: SPS-Programmierung mit ST nach IEC 61131-3 mit CoDeSys und Hinweisen zu STEP7 V11, Vogel Buchverlag, 2011. Herbert Bernstein: SPS-Workshop mit Programmierung nach IEC 61131 mit vielen praktischen Beispielen, with 2 CD-ROMs, VDE Verlag, 2007. Birgit Vogel-Heuser: Automation & Embedded Systems, Oldenbourg Industrieverlag, 2008. Ulrich Kanngießer: Kleinsteuerungen in Praxis und Anwendung: Erfolgreich messen, steuern, regeln mit LOGO!, easy, Zelio und Millenium 3, Hüthig Verlag. Matthias Seitz: Speicherprogrammierbare Steuerungen, Hanser Fachbuchverlag Leipzig. Heinrich Lepers: SPS-Programmierung nach IEC 61131-3 mit Beispielen für CoDeSys und STEP 7, Franzis Verlag, 2005. Günter Wellenreuther and Dieter Zastrow: Automatisieren mit SPS – Übersichten und Übungsaufgaben, Vieweg Verlag, 2007. Norbert Becker: Automatisierungstechnik, Vogel Buchverlag, 2006. Helmut Greiner: Systematischer Entwurf sequentieller Steuerungen – Grundlagen, Schriftenreihe der Stiftung für Technologie, Innovation und Forschung Thüringen (STIFT). Igor Petrov: Controller Programming: The standard languages and most important development tools, Solon Press, 2007 (in Russian). Marcos de Oliveira Fonseca et al.: Aplicando a norma IEC 61131 na automação de processos, ISA América do Sul, 2008 (in Portuguese). Dag Håkon Hanssen: Programmerbare Logiske Styringer – baser på IEC 61131-3, Tapir Akademisk Forlag, 2008 (in Norwegian). Jürgen Kaftan: Practical Examples with AC500 from ABB: 45 Exercises and Solutions programmed with CoDeSys Software, IKH Didactic Systems. Tom Mejer Antonsen: PLC Controls with Structured Text (ST): IEC 61131-3 and best practice ST programming (available in further languages). External links CODESYS Talk (former CODESYS user forum) CODESYS Forge (open-source projects) http://www.oscat.de/ – open-source library for versions 2 and 3 of CODESYS "OPC UA and IEC 61131-3" – ISA InTech article on the power of CODESYS IEC 61131-3 and OPC UA Codesys PLC Industrial automation Programmable logic controllers
CODESYS
[ "Technology", "Engineering" ]
3,043
[ "Industrial computing", "Industrial engineering", "Automation", "Programmable logic controllers", "Industrial automation" ]
18,555,762
https://en.wikipedia.org/wiki/Srivastava%20code
In coding theory, Srivastava codes, formulated by Professor J. N. Srivastava, form a class of parameterised error-correcting codes which are a special case of alternant codes. Definition The original Srivastava code over GF(q) of length n is defined by an s × n parity check matrix H of alternant form

$$H = \begin{bmatrix} \dfrac{\alpha_1^{\mu} z_1}{\alpha_1 - w_1} & \cdots & \dfrac{\alpha_n^{\mu} z_n}{\alpha_n - w_1} \\ \vdots & \ddots & \vdots \\ \dfrac{\alpha_1^{\mu} z_1}{\alpha_1 - w_s} & \cdots & \dfrac{\alpha_n^{\mu} z_n}{\alpha_n - w_s} \end{bmatrix}$$

where the $\alpha_1, \ldots, \alpha_n$ and $w_1, \ldots, w_s$ are $n + s$ distinct elements of $GF(q^m)$, the $z_1, \ldots, z_n$ are nonzero elements of $GF(q^m)$, and $\mu$ is a fixed integer. Properties The parameters of this code are length n, dimension ≥ n − ms and minimum distance ≥ s + 1. References Error detection and correction Finite fields Coding theory
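The alternant structure above is easy to instantiate numerically. The following Python sketch builds H for the simplest case m = 1, where GF(q^m) is just a prime field GF(p) and division is modular inversion; every concrete parameter in it (p, μ, and the α, w and z values) is an arbitrary choice made for illustration, not part of the definition.

```python
# Toy construction of a Srivastava parity-check matrix over a prime field GF(p)
# (the m = 1 case, so field arithmetic is ordinary arithmetic mod p).
# All parameter choices below (p, mu, alphas, ws, zs) are arbitrary examples.

p = 11          # field size (prime)
mu = 1          # fixed exponent in the alternant form
alphas = [1, 2, 3, 4, 5]   # n = 5 distinct elements of GF(p)
ws = [6, 7]                # s = 2 further elements, distinct from the alphas
zs = [1, 1, 2, 3, 5]       # n nonzero elements of GF(p)

def inv(x, p=p):
    # Multiplicative inverse in GF(p) via Fermat's little theorem.
    return pow(x, p - 2, p)

# H[i][j] = alpha_j^mu * z_j / (alpha_j - w_i)   (mod p)
H = [[pow(a, mu, p) * z * inv((a - w) % p) % p
      for a, z in zip(alphas, zs)]
     for w in ws]

for row in H:
    print(row)
# A codeword c over GF(p) satisfies H c = 0 (mod p); with n = 5 and s = 2 the
# code has dimension >= n - s = 3 and minimum distance >= s + 1 = 3.
```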
Srivastava code
[ "Mathematics", "Engineering" ]
123
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
18,555,843
https://en.wikipedia.org/wiki/SNPedia
SNPedia (pronounced "snipedia") is a wiki-based bioinformatics web site that serves as a database of single nucleotide polymorphisms (SNPs). Each article on a SNP provides a short description, links to scientific articles and personal genomics web sites, as well as microarray information about that SNP. SNPedia may thus support the interpretation of results of personal genotyping from, e.g., 23andMe and similar companies. SNPedia is a semantic wiki, powered by MediaWiki and the Semantic MediaWiki extension. SNPedia was created, and is run by, geneticist Greg Lennon and programmer Mike Cariaso, who at the time of the site's founding were both located in Bethesda, Maryland. The website has 537 medical conditions and 109,729 SNPs in its database. The number of SNPs in SNPedia has doubled roughly once every 14 months since August 2007. On 7 September 2019, MyHeritage announced that it had acquired both SNPedia and Promethease. All non-European raw genetic data files previously uploaded to Promethease, unless deleted by their users by 1 November 2019, were to be copied to MyHeritage, with those users receiving a free MyHeritage account with a paid level of services, including Cousin Matching and Ethnicities. Promethease An associated computer program called Promethease, also developed by the SNPedia team, allows users to compare personal genetics results against the SNPedia database, generating a report with information about a person's attributes, such as propensity to diseases, based on the presence of specific SNPs within their genome. In May 2008 Cariaso, using Promethease, won an online contest sponsored by 23andMe to determine as much information as possible about an anonymous woman based only on her genome. Cariaso won in all three categories of "accuracy, creativity and cleverness". In 2009, the anonymous woman ("Lilly Mendel") was revealed to be 23andMe co-founder Linda Avey, allowing a direct comparison between her actual traits and those predicted by Promethease a year earlier. Reception In a June 2008 article on personal genomics, a doctor from the Southern Illinois University School of Medicine offered an assessment of the site. In January 2011, technology journalist Ronald Bailey posted the full result of his Promethease report online and explained his decision to do so in Reason magazine. Members of the medical community have criticised Promethease for technical complexity and a poorly defined "magnitude" scale that causes misconceptions, confusion and panic among its users. See also dbSNP Online Mendelian Inheritance in Man Full Genome Sequencing Predictive Medicine References External links Michael Cariaso – Next-Gen Sequencing, Wikis and SNPedia, webcast from Bio-ITWorld.com. Biological databases Genomics companies MediaWiki websites Semantic wikis Mutation Single-nucleotide polymorphisms
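The core of a Promethease-style report is a join between a raw genotype file and a table of SNP annotations. The Python sketch below shows that idea under stated assumptions: it parses the widely documented 23andMe-style raw-data format (tab-separated rsid, chromosome, position, genotype), and it looks genotypes up in a small in-memory table whose rsids, magnitudes and summaries are invented placeholders rather than real SNPedia content (which is normally fetched through SNPedia's MediaWiki API).

```python
# Sketch of a Promethease-style lookup: match genotypes from a 23andMe-style
# raw data file against a local table of SNP annotations.
# The annotation entries below are invented placeholders, not real SNPedia data.

ANNOTATIONS = {
    # (rsid, genotype) -> (magnitude, summary) -- hypothetical values
    ("rs0000001", "AA"): (2.5, "example trait association"),
    ("rs0000002", "CT"): (0.0, "common, no known significance"),
}

def parse_raw_file(path):
    """Yield (rsid, genotype) pairs from a 23andMe-style file:
    tab-separated columns rsid, chromosome, position, genotype;
    lines starting with '#' are comments."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            rsid, _chrom, _pos, genotype = line.rstrip("\n").split("\t")
            yield rsid, genotype

def report(path):
    hits = []
    for rsid, genotype in parse_raw_file(path):
        entry = ANNOTATIONS.get((rsid, genotype))
        if entry:
            magnitude, summary = entry
            hits.append((magnitude, rsid, genotype, summary))
    # Sort the way Promethease reports read: most "interesting" first,
    # i.e. highest magnitude at the top.
    for magnitude, rsid, genotype, summary in sorted(hits, reverse=True):
        print(f"{rsid} {genotype} (magnitude {magnitude}): {summary}")
```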
SNPedia
[ "Chemistry", "Biology" ]
632
[ "Single-nucleotide polymorphisms", "Bioinformatics", "Biodiversity", "Molecular biology", "Biological databases" ]
18,557,138
https://en.wikipedia.org/wiki/Justesen%20code
In coding theory, Justesen codes form a class of error-correcting codes that have a constant rate, constant relative distance, and a constant alphabet size. Before the Justesen error correction code was discovered, no error correction code was known that had all three of these parameters as constants. Subsequently, other ECC codes with this property have been discovered, for example expander codes. These codes have important applications in computer science, such as in the construction of small-bias sample spaces. Justesen codes are derived as the code concatenation of a Reed–Solomon code and the Wozencraft ensemble. The Reed–Solomon codes used achieve constant rate and constant relative distance at the expense of an alphabet size that is linear in the message length. The Wozencraft ensemble is a family of codes that achieve constant rate and constant alphabet size, but the relative distance is only constant for most of the codes in the family. The concatenation of the two codes first encodes the message using the Reed–Solomon code, and then encodes each symbol of the codeword further using a code from the Wozencraft ensemble – using a different code of the ensemble at each position of the codeword. This is different from usual code concatenation, where the inner codes are the same for each position. The Justesen code can be constructed very efficiently, using only logarithmic space. Definition The Justesen code is the concatenation of an outer code $C_{out}$ and $N$ different linear inner codes $C_{in}^{(i)}$, for $1 \le i \le N$. More precisely, the concatenation of these codes, denoted by $C_{out} \circ (C_{in}^{(1)}, \ldots, C_{in}^{(N)})$, is defined as follows. Given a message $m$, we compute the codeword produced by the outer code: $C_{out}(m) = (c_1, c_2, \ldots, c_N)$. Then we apply each of the $N$ linear inner codes to the corresponding coordinate of that codeword to produce the final codeword; that is, $C_{out} \circ (C_{in}^{(1)}, \ldots, C_{in}^{(N)})(m) = (C_{in}^{(1)}(c_1), \ldots, C_{in}^{(N)}(c_N))$. Looking back at the definitions of the outer code and the linear inner codes, this definition of the Justesen code makes sense because the codeword of the outer code is a vector with $N$ elements, and we have $N$ linear inner codes to apply to those elements. Here, for the Justesen code, the outer code is chosen to be a Reed–Solomon code over the field $\mathbb{F}_{q^k}$, evaluated over the nonzero field elements, of rate $R$, $0 < R < 1$. The outer code has relative distance $1 - R$ and block length $N = q^k - 1$. The set of inner codes is the Wozencraft ensemble $\{C_{in}^{\alpha} : \alpha \in \mathbb{F}_{q^k} \setminus \{0\}\}$, whose member codes each map $k$ symbols to $2k$ symbols over $\mathbb{F}_q$. Property of the Justesen code As the linear codes in the Wozencraft ensemble have rate $\tfrac{1}{2}$, the Justesen code is a concatenated code with rate $\tfrac{R}{2}$. The following theorem estimates the distance of the concatenated code $C^*$. Theorem Let $\varepsilon > 0$. Then $C^*$ has relative distance at least $(1 - R - \varepsilon) \cdot H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right)$, where $H_q$ denotes the $q$-ary entropy function. Proof In order to prove a lower bound for the distance of a code we prove that the Hamming distance of an arbitrary but distinct pair of codewords has a lower bound. So let $m^{(1)} \ne m^{(2)}$ be two distinct messages, with outer codewords $c^{(1)} = C_{out}(m^{(1)})$ and $c^{(2)} = C_{out}(m^{(2)})$; we want a lower bound for the Hamming distance $\Delta(C^*(m^{(1)}), C^*(m^{(2)}))$. Notice that if $c^{(1)}_i \ne c^{(2)}_i$, then $C_{in}^{(i)}(c^{(1)}_i) \ne C_{in}^{(i)}(c^{(2)}_i)$, and the two inner blocks differ in at least as many positions as the minimum distance of $C_{in}^{(i)}$. Since the outer code has relative distance $1 - R$, the codewords $c^{(1)}$ and $c^{(2)}$ differ in at least $(1 - R)N$ coordinates. Recall that the inner codes form a Wozencraft ensemble. By the Wozencraft ensemble theorem, for any $\varepsilon > 0$, at least $(1 - \varepsilon)N$ of the $N$ inner codes have relative distance at least $H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right)$, that is, minimum distance at least $H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right) \cdot 2k$; at most $\varepsilon N$ of them have smaller distance. Define $S = \{\, i : c^{(1)}_i \ne c^{(2)}_i \text{ and } C_{in}^{(i)} \text{ has minimum distance at least } H_q^{-1}(\tfrac{1}{2} - \varepsilon) \cdot 2k \,\}$. Each $i \in S$ contributes at least $H_q^{-1}(\tfrac{1}{2} - \varepsilon) \cdot 2k$ to the Hamming distance, so $\Delta(C^*(m^{(1)}), C^*(m^{(2)})) \ge |S| \cdot H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right) \cdot 2k$. Since at most $\varepsilon N$ inner codes fail the distance bound, $|S| \ge (1 - R)N - \varepsilon N = (1 - R - \varepsilon)N$. Finally, dividing by the block length $2kN$ of $C^*$, the relative distance is at least $(1 - R - \varepsilon) \cdot H_q^{-1}\left(\tfrac{1}{2} - \varepsilon\right)$. This is true for any arbitrary $\varepsilon > 0$, which completes the proof.
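The inner codes in this construction are particularly simple to realize: the Wozencraft ensemble member indexed by a nonzero field element α maps a message x in the extension field to the pair (x, αx). The Python sketch below implements that map for binary extension fields, representing field elements as integers whose bits are polynomial coefficients; the choice k = 4 and the modulus x⁴ + x + 1 are illustrative assumptions, as are the sample outer-codeword symbols.

```python
# Inner encoding for the Wozencraft ensemble over GF(2^k):
# the ensemble member indexed by a nonzero alpha maps x to (x, alpha * x).
# Field elements are integers whose bits are coefficients of polynomials over GF(2).

K = 4            # illustrative extension degree
MOD = 0b10011    # x^4 + x + 1, an irreducible polynomial over GF(2) (assumed choice)

def gf_mul(a, b, mod=MOD, k=K):
    """Carry-less multiplication of a and b, reduced modulo `mod`."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if (a >> k) & 1:   # reduce as soon as the degree reaches k
            a ^= mod
    return result

def wozencraft_encode(alpha, x):
    """Codeword of the ensemble member C_alpha: the pair (x, alpha*x),
    read off as 2k bits over GF(2)."""
    return (x, gf_mul(alpha, x))

# In a Justesen code, coordinate i of the outer Reed-Solomon codeword is
# encoded with a *different* ensemble member, e.g. the one indexed by i:
rs_codeword = [3, 9, 14, 7]          # hypothetical outer symbols in GF(16)
inner_blocks = [wozencraft_encode(alpha, c)
                for alpha, c in enumerate(rs_codeword, start=1)]
print(inner_blocks)
```

Because each inner encoding is a single field multiplication, the whole concatenated encoder stays strongly explicit, which is the point made in the Comments section below.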
Comments We want to consider the notion of a "strongly explicit code". So the question is what a "strongly explicit code" is. Loosely speaking, for a linear code, the "explicit" property is related to the complexity of constructing its generator matrix G. In effect, it means that we can compute the matrix in logarithmic space without resorting to a brute-force algorithm to verify that the code has a given distance. For other codes that are not linear, we can consider the complexity of the encoding algorithm instead. By this standard, the Wozencraft ensemble and Reed–Solomon codes are strongly explicit. Therefore, we have the following result: Corollary: The concatenated code $C^*$ is an asymptotically good code (that is, rate > 0 and relative distance > 0 for small q) and has a strongly explicit construction. An example of a Justesen code The following slightly different code is referred to as the Justesen code in MacWilliams/MacWilliams. It is the particular case of the above-considered Justesen code for a very particular Wozencraft ensemble: Let R be a Reed–Solomon code of length N = 2m − 1, rank K and minimum weight N − K + 1. The symbols of R are elements of F = GF(2m) and the codewords are obtained by taking every polynomial ƒ over F of degree less than K and listing the values of ƒ on the non-zero elements of F in some predetermined order. Let α be a primitive element of F. For a codeword a = (a1, ..., aN) from R, let b be the vector of length 2N over F given by $b = (a_1, \alpha a_1, a_2, \alpha^2 a_2, \ldots, a_N, \alpha^N a_N)$, and let c be the vector of length 2Nm obtained from b by expressing each element of F as a binary vector of length m. The Justesen code is the linear code containing all such c. The parameters of this code are length 2mN, dimension mK, and a minimum distance bounded below in terms of the greatest integer $\ell$ satisfying a counting condition on the distinct non-zero pairs $(a_j, \alpha^j a_j)$; see MacWilliams/MacWilliams for the precise statement and a proof. See also Concatenated error correction code Linear code Reed-Solomon error correction Wozencraft ensemble References Lecture 28: Justesen Code. Coding theory's course. Prof. Atri Rudra. Lecture 6: Concatenated codes. Forney codes. Justesen codes. Essential Coding Theory. Error detection and correction Finite fields Coding theory
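To make the example concrete, here is the parameter arithmetic for one instantiation, written out in LaTeX. The choices m = 5 and K = 8 are arbitrary illustrations; only the formulas length = 2mN, dimension = mK, and rate = dimension/length from the text above are used.

```latex
% Parameter arithmetic for an illustrative instantiation (m = 5, K = 8 chosen arbitrarily)
\[
N = 2^m - 1 = 2^5 - 1 = 31, \qquad
\text{length} = 2mN = 2 \cdot 5 \cdot 31 = 310,
\]
\[
\text{dimension} = mK = 5 \cdot 8 = 40, \qquad
\text{rate} = \frac{mK}{2mN} = \frac{K}{2N} = \frac{8}{62} \approx 0.129 .
\]
```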
Justesen code
[ "Mathematics", "Engineering" ]
1,242
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
19,593,167
https://en.wikipedia.org/wiki/Heat
In thermodynamics, heat is energy in transfer between a thermodynamic system and its surroundings by modes other than thermodynamic work and transfer of matter. Such modes are microscopic, mainly thermal conduction, radiation, and friction, as distinct from the macroscopic modes, thermodynamic work and transfer of matter. For a closed system (transfer of matter excluded), the heat involved in a process is the difference in internal energy between the final and initial states of the system, minus the work done on the system in the process. For a closed system, this is the formulation of the first law of thermodynamics. Calorimetry is measurement of the quantity of energy transferred as heat by its effect on the states of interacting bodies, for example, by the amount of ice melted or by the change in temperature of a body. In the International System of Units (SI), the unit of measurement for heat, as a form of energy, is the joule (J). The word 'heat' is used with various other meanings in engineering, and it occurs also in ordinary language, but these are not the topic of the present article. Notation and units As a form of energy, heat has the unit joule (J) in the International System of Units (SI). In addition, many applied branches of engineering use other, traditional units, such as the British thermal unit (BTU) and the calorie. The standard unit for the rate of heating is the watt (W), defined as one joule per second. The symbol Q for heat was introduced by Rudolf Clausius and Macquorn Rankine around 1850. Heat released by a system into its surroundings is by convention a negative quantity (Q < 0); when a system absorbs heat from its surroundings, it is positive (Q > 0). Heat transfer rate, or heat flow per unit time, is denoted by $\dot{Q}$; it is not a time derivative of a function of state (which can also be written with the dot notation), since heat is not a function of state. Heat flux is defined as the rate of heat transfer per unit cross-sectional area, in watts per square metre. History In common language, English 'heat' or 'warmth', just as French chaleur, German Hitze or Wärme, Latin calor, Greek θάλπος, etc., refers to either thermal energy or temperature, or the human perception of these. Later, chaleur (as used by Sadi Carnot), 'heat', and Wärme became equivalents also as specific scientific terms at an early stage of thermodynamics. Speculation on 'heat' as a separate form of matter has a long history, involving the phlogiston theory, the caloric theory, and fire. Many careful and accurate historical experiments practically exclude friction, mechanical and thermodynamic work and matter transfer, investigating transfer of energy only by thermal conduction and radiation. Such experiments give impressive rational support to the caloric theory of heat. To account also for changes of internal energy due to friction, and mechanical and thermodynamic work, the caloric theory was, around the end of the eighteenth century, replaced by the "mechanical" theory of heat, which is accepted today. 17th century–early 18th century "Heat is motion" As scientists of the early modern age began to adopt the view that matter consists of particles, a close relationship between heat and the motion of those particles was widely surmised, or even the equivalency of the concepts, boldly expressed by the English philosopher Francis Bacon in 1620. "It must not be thought that heat generates motion, or motion heat (though in some respects this be true), but that the very essence of heat ...
is motion and nothing else." "not a ... motion of the whole, but of the small particles of the body." In The Assayer (published 1623) Galileo Galilei, in turn, described heat as an artifact of our minds. Galileo wrote that heat and pressure are apparent properties only, caused by the movement of particles, which is a real phenomenon. In 1665, and again in 1681, English polymath Robert Hooke reiterated that heat is nothing but the motion of the constituent particles of objects, and in 1675, his colleague, Anglo-Irish scientist Robert Boyle repeated that this motion is what heat consists of. Heat was also discussed in ordinary language by philosophers; an example is the English philosopher John Locke's 1720 description of heat as "a very brisk agitation of the insensible parts of the object". When Bacon, Galileo, Hooke, Boyle and Locke wrote “heat”, they may well have been referring more to what we would now call “temperature”. No clear distinction was made between heat and temperature until the mid-18th century, nor between the internal energy of a body and the transfer of energy as heat until the mid-19th century. Locke's description of heat was repeatedly quoted by English physicist James Prescott Joule. The transfer of heat was likewise explained by the motion of particles. Scottish physicist and chemist Joseph Black wrote: "Many have supposed that heat is a tremulous ... motion of the particles of matter, which ... motion they imagined to be communicated from one body to another." John Tyndall's Heat Considered as a Mode of Motion (1863) was instrumental in popularizing the idea of heat as motion to the English-speaking public. The theory was developed in academic publications in French, English and German. 18th century Heat vs. temperature Unstated distinctions between heat and “hotness” may be very old, with heat seen as something dependent on the quantity of a hot substance and vaguely distinct from the quality of “hotness”. In 1723, the English mathematician Brook Taylor measured the temperature—the expansion of the liquid in a thermometer—of mixtures of various amounts of hot water in cold water. As expected, the increase in temperature was in proportion to the proportion of hot water in the mixture. The distinction between heat and temperature is implicitly expressed in the last sentence of his report. Evaporative cooling In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. The ether boiled, while no heat was withdrawn from it, and its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually falling far below the freezing point of water. Discovery of specific heat In 1756 or soon thereafter, Joseph Black, Cullen's friend and former assistant, began an extensive study of heat. In 1760 Black realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave.
For clarity, he then described a hypothetical but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (both arriving at 120 °F), even though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, which differs for different substances. Black wrote: "Quicksilver [mercury] ... has less capacity for the matter of heat than water." Degrees of heat In his investigations of specific heat, Black used a unit of heat he called "degrees of heat"—as opposed to just "degrees" [of temperature]. This unit was context-dependent and could only be used when circumstances were identical. It was based on change in temperature multiplied by the mass of the substance involved. Discovery of latent heat It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly, and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate whether heat was therefore required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone. In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 − 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F, so the ice had absorbed 8 “degrees of heat”, which Black called sensible heat, manifest as temperature change, which could be felt and measured. The remaining 147 − 8 = 139 “degrees of heat” were absorbed as latent heat, manifest as phase change rather than as temperature change. Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 − 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”). Finally, Black raised the temperature of, and then vaporized, two equal masses of water through even heating. He showed that 830 “degrees of heat” were needed for the vaporization, again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale. First calorimeter A calorimeter is a device used for measuring heat capacity, as well as the heat absorbed or released in chemical reactions or physical changes.
In 1780, French chemist Antoine Lavoisier used such an apparatus—which he named 'calorimeter'—to investigate the heat released by respiration, by observing how this heat melted snow surrounding his apparatus. A so-called ice calorimeter was used in 1782–83 by Lavoisier and his colleague Pierre-Simon Laplace to measure the heat released in various chemical reactions. The heat so released melted a specific amount of ice, and the heat required for the melting of a certain amount of ice was known beforehand. Classical thermodynamics The modern understanding of heat is often partly attributed to Thompson's 1798 mechanical theory of heat (An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction), postulating a mechanical equivalent of heat. In the 1820s, a collaboration between Nicolas Clément and Sadi Carnot (Reflections on the Motive Power of Fire) pursued related thinking along similar lines. In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water. The theory of classical thermodynamics matured in the 1850s to 1860s. Clausius (1850) In 1850, Clausius, responding to Joule's experimental demonstrations of heat production by friction, rejected the caloric doctrine of conservation of heat, writing: If we assume that heat, like matter, cannot be lessened in quantity, we must also assume that it cannot be increased; but it is almost impossible to explain the ascension of temperature brought about by friction otherwise than by assuming an actual increase of heat. The careful experiments of Joule, who developed heat in various ways by the application of mechanical force, establish almost to a certainty, not only the possibility of increasing the quantity of heat, but also the fact that the newly-produced heat is proportional to the work expended in its production. It may be remarked further, that many facts have lately transpired which tend to overthrow the hypothesis that heat is itself a body, and to prove that it consists in a motion of the ultimate particles of bodies. The process function Q was introduced by Rudolf Clausius in 1850. Clausius described it with the German compound Wärmemenge, translated as "amount of heat". James Clerk Maxwell (1871) James Clerk Maxwell in his 1871 Theory of Heat outlines four stipulations for the definition of heat: It is something which may be transferred from one body to another, according to the second law of thermodynamics. It is a measurable quantity, and so can be treated mathematically. It cannot be treated as a material substance, because it may be transformed into something that is not a material substance, e.g., mechanical work. Heat is one of the forms of energy. Bryan (1907) In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications (B.G. Teubner, Leipzig). Bryan was writing when thermodynamics had been established empirically, but people were still interested to specify its logical structure. The 1909 work of Carathéodory also belongs to this historical era. Bryan was a physicist, while Carathéodory was a mathematician.
Bryan started his treatise with an introductory chapter on the notions of heat and of temperature. He gives an example of where the notion of heating as raising a body's temperature contradicts the notion of heating as imparting a quantity of heat to that body. He defined an adiabatic transformation as one in which the body neither gains nor loses heat. This is not quite the same as defining an adiabatic transformation as one that occurs to a body enclosed by walls impermeable to radiation and conduction. He recognized calorimetry as a way of measuring quantity of heat. He recognized water as having a temperature of maximum density. This makes water unsuitable as a thermometric substance around that temperature. He intended to remind readers of why thermodynamicists preferred an absolute scale of temperature, independent of the properties of a particular thermometric substance. His second chapter started with the recognition of friction as a source of heat, by Benjamin Thompson, by Humphry Davy, by Robert Mayer, and by James Prescott Joule. He stated the First Law of Thermodynamics, or Mayer–Joule Principle as follows: When heat is transformed into work or conversely work is transformed into heat, the quantity of heat gained or lost is proportional to the quantity of work lost or gained. He wrote: If heat be measured in dynamical units the mechanical equivalent becomes equal to unity, and the equations of thermodynamics assume a simpler and more symmetrical form. He explained how the caloric theory of Lavoisier and Laplace made sense in terms of pure calorimetry, though it failed to account for conversion of work into heat by such mechanisms as friction and conduction of electricity. Having rationally defined quantity of heat, he went on to consider the second law, including the Kelvin definition of absolute thermodynamic temperature. In section 41, he wrote:          §41. Physical unreality of reversible processes. In Nature all phenomena are irreversible in a greater or less degree. The motions of celestial bodies afford the closest approximations to reversible motions, but motions which occur on this earth are largely retarded by friction, viscosity, electric and other resistances, and if the relative velocities of moving bodies were reversed, these resistances would still retard the relative motions and would not accelerate them as they should if the motions were perfectly reversible. He then stated the principle of conservation of energy. He then wrote: In connection with irreversible phenomena the following axioms have to be assumed.          (1) If a system can undergo an irreversible change it will do so.          (2) A perfectly reversible change cannot take place of itself; such a change can only be regarded as the limiting form of an irreversible change. On page 46, thinking of closed systems in thermal connection, he wrote: We are thus led to postulate a system in which energy can pass from one element to another otherwise than by the performance of mechanical work. On page 47, still thinking of closed systems in thermal connection, he wrote:          §58. Quantity of Heat. Definition. When energy flows from one system or part of a system to another otherwise than by the performance of work, the energy so transferred i[s] called heat. On page 48, he wrote:          § 59. When two bodies act thermically on one another the quantities of heat gained by one and lost by the other are not necessarily equal.          
In the case of bodies at a distance, heat may be taken from or given to the intervening medium.          The quantity of heat received by any portion of the ether may be defined in the same way as that received by a material body. [He was thinking of thermal radiation.]          Another important exception occurs when sliding takes place between two rough bodies in contact. The algebraic sum of the works done is different from zero, because, although the action and reaction are equal and opposite, the velocities of the parts of the bodies in contact are different. Moreover, the work lost in the process does not increase the mutual potential energy of the system, and there is no intervening medium between the bodies. Unless the lost energy can be accounted for in other ways (as when friction produces electrification), it follows from the Principle of Conservation of Energy that the algebraic sum of the quantities of heat gained by the two systems is equal to the quantity of work lost by friction. [This thought was echoed by Bridgman, as above.] Carathéodory (1909) A celebrated and frequent definition of heat in thermodynamics is based on the work of Carathéodory (1909), referring to processes in a closed system. Carathéodory was responding to a suggestion by Max Born that he examine the logical structure of thermodynamics. The internal energy of a body in an arbitrary state $Y$ can be determined by amounts of work adiabatically performed by the body on its surroundings when it starts from a reference state $O$. Such work is assessed through quantities defined in the surroundings of the body. It is supposed that such work can be assessed accurately, without error due to friction in the surroundings; friction in the body is not excluded by this definition. The adiabatic performance of work is defined in terms of adiabatic walls, which allow transfer of energy as work, but no other transfer, of energy or matter. In particular they do not allow the passage of energy as heat. According to this definition, work performed adiabatically is in general accompanied by friction within the thermodynamic system or body. On the other hand, according to Carathéodory (1909), there also exist non-adiabatic, diathermal walls, which are postulated to be permeable only to heat. For the definition of quantity of energy transferred as heat, it is customarily envisaged that an arbitrary state of interest $Y$ is reached from state $O$ by a process with two components, one adiabatic and the other not adiabatic. For convenience one may say that the adiabatic component was the sum of work done by the body through volume change through movement of the walls while the non-adiabatic wall was temporarily rendered adiabatic, and of isochoric adiabatic work. Then the non-adiabatic component is a process of energy transfer through the wall that passes only heat, newly made accessible for the purpose of this transfer, from the surroundings to the body. The change in internal energy to reach the state $Y$ from the state $O$ is the difference of the two amounts of energy transferred. Although Carathéodory himself did not state such a definition, following his work it is customary in theoretical studies to define heat, $Q$, to the body from its surroundings, in the combined process of change to state $Y$ from the state $O$, as the change in internal energy, $\Delta U_Y$, minus the amount of work, $W$, done by the body on its surroundings by the adiabatic process, so that $Q = \Delta U_Y - W$.
In this definition, for the sake of conceptual rigour, the quantity of energy transferred as heat is not specified directly in terms of the non-adiabatic process. It is defined through knowledge of precisely two variables, the change of internal energy and the amount of adiabatic work done, for the combined process of change from the reference state $O$ to the arbitrary state $Y$. It is important that this does not explicitly involve the amount of energy transferred in the non-adiabatic component of the combined process. It is assumed here that the amount of energy required to pass from state $O$ to state $Y$, the change of internal energy, is known, independently of the combined process, by a determination through a purely adiabatic process, like that for the determination of the internal energy of state $Y$ above. The rigour that is prized in this definition is that there is one and only one kind of energy transfer admitted as fundamental: energy transferred as work. Energy transfer as heat is considered as a derived quantity. The uniqueness of work in this scheme is considered to guarantee rigor and purity of conception. The conceptual purity of this definition, based on the concept of energy transferred as work as an ideal notion, relies on the idea that some frictionless and otherwise non-dissipative processes of energy transfer can be realized in physical actuality. The second law of thermodynamics, on the other hand, assures us that such processes are not found in nature. Before the rigorous mathematical definition of heat based on Carathéodory's 1909 paper, historically, heat, temperature, and thermal equilibrium were presented in thermodynamics textbooks as jointly primitive notions. Carathéodory introduced his 1909 paper thus: "The proposition that the discipline of thermodynamics can be justified without recourse to any hypothesis that cannot be verified experimentally must be regarded as one of the most noteworthy results of the research in thermodynamics that was accomplished during the last century." Referring to the "point of view adopted by most authors who were active in the last fifty years", Carathéodory wrote: "There exists a physical quantity called heat that is not identical with the mechanical quantities (mass, force, pressure, etc.) and whose variations can be determined by calorimetric measurements." James Serrin introduces an account of the theory of thermodynamics thus: "In the following section, we shall use the classical notions of heat, work, and hotness as primitive elements, ... That heat is an appropriate and natural primitive for thermodynamics was already accepted by Carnot. Its continued validity as a primitive element of thermodynamical structure is due to the fact that it synthesizes an essential physical concept, as well as to its successful use in recent work to unify different constitutive theories." This traditional kind of presentation of the basis of thermodynamics includes ideas that may be summarized by the statement that heat transfer is purely due to spatial non-uniformity of temperature, and is by conduction and radiation, from hotter to colder bodies. It is sometimes proposed that this traditional kind of presentation necessarily rests on "circular reasoning". This alternative approach to the definition of quantity of energy transferred as heat differs in logical structure from that of Carathéodory, recounted just above: it admits calorimetry as a primary or direct way to measure quantity of energy transferred as heat.
It relies on temperature as one of its primitive concepts, which is used in calorimetry. It is presupposed that enough processes exist physically to allow measurement of differences in internal energies. Such processes are not restricted to adiabatic transfers of energy as work. They include calorimetry, which is the commonest practical way of finding internal energy differences. The needed temperature can be either empirical or absolute thermodynamic. In contrast, the Carathéodory way recounted just above does not use calorimetry or temperature in its primary definition of quantity of energy transferred as heat. The Carathéodory way regards calorimetry only as a secondary or indirect way of measuring quantity of energy transferred as heat. As recounted in more detail just above, the Carathéodory way regards quantity of energy transferred as heat in a process as primarily or directly defined as a residual quantity. It is calculated from the difference of the internal energies of the initial and final states of the system, and from the actual work done by the system during the process. That internal energy difference is supposed to have been measured in advance through processes of purely adiabatic transfer of energy as work, processes that take the system between the initial and final states. By the Carathéodory way it is presupposed as known from experiment that there actually physically exist enough such adiabatic processes, so that there need be no recourse to calorimetry for measurement of quantity of energy transferred as heat. This presupposition is essential but is explicitly labeled neither as a law of thermodynamics nor as an axiom of the Carathéodory way. In fact, the actual physical existence of such adiabatic processes is indeed mostly supposition, and those supposed processes have in most cases not been actually verified empirically to exist. Planck (1926) Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat. Planck criticised Carathéodory for not attending to this. Carathéodory was a mathematician who liked to think in terms of adiabatic processes, and perhaps found friction too tricky to think about, while Planck was a physicist. Heat transfer Heat transfer between two bodies Referring to conduction, Partington writes: "If a hot body is brought in conducting contact with a cold body, the temperature of the hot body falls and that of the cold body rises, and it is said that a quantity of heat has passed from the hot body to the cold body." Referring to radiation, Maxwell writes: "In Radiation, the hotter body loses heat, and the colder body receives heat by means of a process occurring in some intervening medium which does not itself thereby become hot." Maxwell writes that convection as such "is not a purely thermal phenomenon". In thermodynamics, convection in general is regarded as transport of internal energy. If, however, the convection is enclosed and circulatory, then it may be regarded as an intermediary that transfers energy as heat between source and destination bodies, because it transfers only energy and not matter from the source to the destination body. In accordance with the first law for closed systems, energy transferred solely as heat leaves one body and enters another, changing the internal energies of each. Transfer, between bodies, of energy as work is a complementary way of changing internal energies.
Though it is not logically rigorous from the viewpoint of strict physical concepts, a common form of words that expresses this is to say that heat and work are interconvertible. Cyclically operating engines that use only heat and work transfers have two thermal reservoirs, a hot and a cold one. They may be classified by the range of operating temperatures of the working body, relative to those reservoirs. In a heat engine, the working body is at all times colder than the hot reservoir and hotter than the cold reservoir. In a sense, it uses heat transfer to produce work. In a heat pump, the working body, at stages of the cycle, goes both hotter than the hot reservoir, and colder than the cold reservoir. In a sense, it uses work to produce heat transfer. Heat engine In classical thermodynamics, a commonly considered model is the heat engine. It consists of four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A cyclic process leaves the working body in an unchanged state, and is envisaged as being repeated indefinitely often. Work transfers between the working body and the work reservoir are envisaged as reversible, and thus only one work reservoir is needed. But two thermal reservoirs are needed, because transfer of energy as heat is irreversible. A single cycle sees energy taken by the working body from the hot reservoir and sent to the two other reservoirs, the work reservoir and the cold reservoir. The hot reservoir always and only supplies energy, and the cold reservoir always and only receives energy. The second law of thermodynamics requires that no cycle can occur in which no energy is received by the cold reservoir. Heat engines achieve higher efficiency the greater the ratio of the absolute temperature of the hot reservoir to that of the cold reservoir; for a reversible engine this efficiency reaches the Carnot limit, $\eta = 1 - T_{cold}/T_{hot}$. Heat pump or refrigerator Another commonly considered model is the heat pump or refrigerator. Again there are four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A single cycle starts with the working body colder than the cold reservoir, and then energy is taken in as heat by the working body from the cold reservoir. Then the work reservoir does work on the working body, adding more to its internal energy, making it hotter than the hot reservoir. The hot working body passes heat to the hot reservoir, but still remains hotter than the cold reservoir. Then, by allowing it to expand without passing heat to another body, the working body is made colder than the cold reservoir. It can now accept heat transfer from the cold reservoir to start another cycle. The device has transported energy from a colder to a hotter reservoir, but this is not regarded as by an inanimate agency; rather, it is regarded as by the harnessing of work. This is because work is supplied from the work reservoir, not just by a simple thermodynamic process, but by a cycle of thermodynamic operations and processes, which may be regarded as directed by an animate or harnessing agency. Accordingly, the cycle is still in accord with the second law of thermodynamics. The 'efficiency' of a heat pump (which exceeds unity) is best when the temperature difference between the hot and cold reservoirs is least. Functionally, such engines are used in two ways, distinguishing a target reservoir and a resource or surrounding reservoir. A heat pump transfers heat to the hot reservoir as the target from the resource or surrounding reservoir.
A refrigerator transfers heat, from the cold reservoir as the target, to the resource or surrounding reservoir. The target reservoir may be regarded as leaking: when the target leaks heat to the surroundings, heat pumping is used; when the target leaks coldness to the surroundings, refrigeration is used. The engines harness work to overcome the leaks. Macroscopic view According to Planck, there are three main conceptual approaches to heat. One is the microscopic or kinetic theory approach. The other two are macroscopic approaches. One of the macroscopic approaches is through the law of conservation of energy taken as prior to thermodynamics, with a mechanical analysis of processes, for example in the work of Helmholtz. This mechanical view is taken in this article as currently customary for thermodynamic theory. The other macroscopic approach is the thermodynamic one, which admits heat as a primitive concept, which contributes, by scientific induction, to knowledge of the law of conservation of energy. This view is widely taken as the practical one, quantity of heat being measured by calorimetry. Bailyn also distinguishes the two macroscopic approaches as the mechanical and the thermodynamic. The thermodynamic view was taken by the founders of thermodynamics in the nineteenth century. It regards quantity of energy transferred as heat as a primitive concept coherent with a primitive concept of temperature, measured primarily by calorimetry. A calorimeter is a body in the surroundings of the system, with its own temperature and internal energy; when it is connected to the system by a path for heat transfer, changes in it measure heat transfer. The mechanical view was pioneered by Helmholtz and developed and used in the twentieth century, largely through the influence of Max Born. It regards quantity of energy transferred as heat as a derived concept, defined for closed systems as the quantity of energy transferred by mechanisms other than work transfer, the latter being regarded as primitive for thermodynamics, defined by macroscopic mechanics. According to Born, the transfer of internal energy between open systems that accompanies transfer of matter "cannot be reduced to mechanics". It follows that there is no well-founded definition of quantities of energy transferred as heat or as work associated with transfer of matter. Nevertheless, for the thermodynamical description of non-equilibrium processes, it is desired to consider the effect of a temperature gradient established by the surroundings across the system of interest when there is no physical barrier or wall between system and surroundings, that is to say, when they are open with respect to one another. The impossibility of a mechanical definition in terms of work for this circumstance does not alter the physical fact that a temperature gradient causes a diffusive flux of internal energy, a process that, in the thermodynamic view, might be proposed as a candidate concept for transfer of energy as heat. In this circumstance, it may be expected that there may also be other active drivers of diffusive flux of internal energy, such as gradient of chemical potential, which drives transfer of matter, and gradient of electric potential, which drives electric current and iontophoresis; such effects usually interact with diffusive flux of internal energy driven by temperature gradient, and such interactions are known as cross-effects.
If cross-effects that result in diffusive transfer of internal energy were also labeled as heat transfers, they would sometimes violate the rule that pure heat transfer occurs only down a temperature gradient, never up one. They would also contradict the principle that all heat transfer is of one and the same kind, a principle founded on the idea of heat conduction between closed systems. One might try to think narrowly of heat flux driven purely by temperature gradient as a conceptual component of diffusive internal energy flux, in the thermodynamic view, the concept resting specifically on careful calculations based on detailed knowledge of the processes and being indirectly assessed. In these circumstances, if perchance it happens that no transfer of matter is actualized, and there are no cross-effects, then the thermodynamic concept and the mechanical concept coincide, as if one were dealing with closed systems. But when there is transfer of matter, the exact laws by which a temperature gradient drives diffusive flux of internal energy, rather than being exactly knowable, mostly need to be assumed, and in many cases are practically unverifiable. Consequently, when there is transfer of matter, the calculation of the pure 'heat flux' component of the diffusive flux of internal energy rests on practically unverifiable assumptions. This is a reason to think of heat as a specialized concept that relates primarily and precisely to closed systems, and is applicable only in a very restricted way to open systems. In many writings in this context, the term "heat flux" is used when what is meant is more accurately called diffusive flux of internal energy; such usage of the term "heat flux" is a residue of older and now obsolete language usage that allowed that a body may have a "heat content". Microscopic view In the kinetic theory, heat is explained in terms of the microscopic motions and interactions of constituent particles, such as electrons, atoms, and molecules. The immediate meaning of the kinetic energy of the constituent particles is not as heat; it is as a component of internal energy. In microscopic terms, heat is a transfer quantity, and is described by a transport theory, not as steadily localized kinetic energy of particles. Heat transfer arises from temperature gradients or differences, through the diffuse exchange of microscopic kinetic and potential particle energy, by particle collisions and other interactions. An early and vague expression of this was made by Francis Bacon. Precise and detailed versions of it were developed in the nineteenth century. In statistical mechanics, for a closed system (no transfer of matter), heat is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in occupation numbers of the energy levels of the system, without change in the values of the energy levels themselves. It is possible for macroscopic thermodynamic work to alter the occupation numbers without change in the values of the system energy levels themselves, but what distinguishes transfer as heat is that the transfer is entirely due to disordered, microscopic action, including radiative transfer. A mathematical definition can be formulated for small increments of quasi-static adiabatic work in terms of the statistical distribution of an ensemble of microstates. Calorimetry Quantity of heat transferred can be measured by calorimetry, or determined through calculations based on other quantities.
Calorimetry is the empirical basis of the idea of quantity of heat transferred in a process. The transferred heat is measured by changes in a body of known properties, for example, temperature rise, change in volume or length, or phase change, such as melting of ice. A calculation of quantity of heat transferred can rely on a hypothetical quantity of energy transferred as adiabatic work and on the first law of thermodynamics. Such calculation is the primary approach of many theoretical studies of quantity of heat transferred.

Engineering

The discipline of heat transfer, typically considered an aspect of mechanical engineering and chemical engineering, deals with specific applied methods by which thermal energy in a system is generated, or converted, or transferred to another system. Although the definition of heat implicitly means the transfer of energy, the term heat transfer encompasses this traditional usage in many engineering disciplines and in lay language. Heat transfer is generally described as including the mechanisms of heat conduction, heat convection, and thermal radiation, but may also include mass transfer and heat in processes of phase change. Convection may be described as the combined effects of conduction and fluid flow. From the thermodynamic point of view, heat flows into a fluid by diffusion to increase its energy, the fluid then transfers (advects) this increased internal energy (not heat) from one location to another, and this is then followed by a second thermal interaction which transfers heat to a second body or system, again by diffusion. This entire process is often regarded as an additional mechanism of heat transfer, although technically, "heat transfer" and thus heating and cooling occur only on either end of such a conductive flow, but not as a result of flow. Thus, convection can be said to "transfer" heat only as a net result of the process, but may not do so at every time within the complicated convective process.

Latent and sensible heat

In an 1847 lecture entitled On Matter, Living Force, and Heat, James Prescott Joule characterized the terms latent heat and sensible heat as components of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively. He described latent energy as the energy possessed via a distancing of particles where attraction was over a greater distance, i.e. a form of potential energy, and the sensible heat as an energy involving the motion of particles, i.e. kinetic energy. Latent heat is the heat released or absorbed by a chemical substance or a thermodynamic system during a change of state that occurs without a change in temperature. Such a process may be a phase transition, such as the melting of ice or the boiling of water.

Heat capacity

Heat capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often called simply specific heat, is the heat capacity per unit mass of a material. Heat capacity is a physical property of a substance, which means that it depends on the state and properties of the substance under consideration. The specific heats of monatomic gases, such as helium, are nearly constant with temperature. Diatomic gases such as hydrogen display some temperature dependence, and triatomic gases (e.g., carbon dioxide) still more.
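As an illustration of such calorimetric bookkeeping, the following sketch (not taken from the text; the property values are rounded textbook figures for water, assumed here purely for illustration) combines sensible heat, Q = m c dT, with latent heat, Q = m L, across a melting transition:

# Minimal sketch: heat required to warm a sample through a phase change.
def sensible_heat(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """Heat for a temperature change with no phase change: Q = m * c * dT."""
    return mass_kg * specific_heat_j_per_kg_k * delta_t_k

def latent_heat(mass_kg, latent_j_per_kg):
    """Heat for a phase change at constant temperature: Q = m * L."""
    return mass_kg * latent_j_per_kg

C_ICE, C_WATER = 2100.0, 4186.0   # J/(kg*K), rounded literature values
L_FUSION = 3.34e5                 # J/kg, melting of ice, rounded

m = 0.5  # kg of ice starting at -10 C, heated to +20 C (assumed scenario)
q = (sensible_heat(m, C_ICE, 10)       # warm ice from -10 C to 0 C
     + latent_heat(m, L_FUSION)        # melt the ice at 0 C
     + sensible_heat(m, C_WATER, 20))  # warm water from 0 C to 20 C
print(f"Total heat: {q:.0f} J")  # about 2.2e5 J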
Before the development of the laws of thermodynamics, heat was measured by changes in the states of the participating bodies. Some general rules, with important exceptions, can be stated as follows. In general, most bodies expand on heating. In this circumstance, heating a body at a constant volume increases the pressure it exerts on its constraining walls, while heating at a constant pressure increases its volume. Beyond this, most substances have three ordinarily recognized states of matter: solid, liquid, and gas. Some can also exist as a plasma. Many have further, more finely differentiated, states of matter, such as glass and liquid crystal. In many cases, at fixed temperature and pressure, a substance can exist in several distinct states of matter in what might be viewed as the same 'body'. For example, ice may float in a glass of water. Then the ice and the water are said to constitute two phases within the 'body'. Definite rules are known, telling how distinct phases may coexist in a 'body'. Mostly, at a fixed pressure, there is a definite temperature at which heating causes a solid to melt or evaporate, and a definite temperature at which heating causes a liquid to evaporate. In such cases, cooling has the reverse effects. All of these, the commonest cases, fit with a rule that heating can be measured by changes of state of a body. Such cases supply what are called thermometric bodies, which allow the definition of empirical temperatures. Before 1848, all temperatures were defined in this way. There was thus a tight link, apparently logically determined, between heat and temperature, though they were recognized as conceptually thoroughly distinct, especially by Joseph Black in the later eighteenth century. There are important exceptions. They break the obviously apparent link between heat and temperature. They make it clear that empirical definitions of temperature are contingent on the peculiar properties of particular thermometric substances, and are thus precluded from the title 'absolute'. For example, water contracts on being heated near 277 K. It cannot be used as a thermometric substance near that temperature. Also, over a certain temperature range, ice contracts on heating. Moreover, many substances can exist in metastable states, such as with negative pressure, that survive only transiently and in very special conditions. Such facts, sometimes called 'anomalous', are some of the reasons for the thermodynamic definition of absolute temperature. In the early days of measurement of high temperatures, another factor was important, and used by Josiah Wedgwood in his pyrometer. The temperature reached in a process was estimated by the shrinkage of a sample of clay. The higher the temperature, the more the shrinkage. This was the only available more or less reliable method of measurement of temperatures above 1000 °C (1,832 °F). But such shrinkage is irreversible. The clay does not expand again on cooling. That is why it could be used for the measurement. But only once. It is not a thermometric material in the usual sense of the word. Nevertheless, the thermodynamic definition of absolute temperature does make essential use of the concept of heat, with proper circumspection.

"Hotness"

The property of hotness is a concern of thermodynamics that should be defined without reference to the concept of heat. Consideration of hotness leads to the concept of empirical temperature. All physical systems are capable of heating or cooling others.
With reference to hotness, the comparative terms hotter and colder are defined by the rule that heat flows from the hotter body to the colder. If a physical system is inhomogeneous or very rapidly or irregularly changing, for example by turbulence, it may be impossible to characterize it by a temperature, but still there can be transfer of energy as heat between it and another system. If a system has a physical state that is regular enough, and persists long enough to allow it to reach thermal equilibrium with a specified thermometer, then it has a temperature according to that thermometer. An empirical thermometer registers degree of hotness for such a system. Such a temperature is called empirical. For example, Truesdell writes about classical thermodynamics: "At each time, the body is assigned a real number called the temperature. This number is a measure of how hot the body is." Physical systems that are too turbulent to have temperatures may still differ in hotness. A physical system that passes heat to another physical system is said to be the hotter of the two. More is required for the system to have a thermodynamic temperature. Its behavior must be so regular that its empirical temperature is the same for all suitably calibrated and scaled thermometers, and then its hotness is said to lie on the one-dimensional hotness manifold. This is part of the reason why heat is defined following Carathéodory and Born, solely as occurring other than by work or transfer of matter; temperature is advisedly and deliberately not mentioned in this now widely accepted definition. This is also the reason that the zeroth law of thermodynamics is stated explicitly. If three physical systems, A, B, and C are each not in their own states of internal thermodynamic equilibrium, it is possible that, with suitable physical connections being made between them, A can heat B and B can heat C and C can heat A. In non-equilibrium situations, cycles of flow are possible. It is the special and uniquely distinguishing characteristic of internal thermodynamic equilibrium that this possibility is not open to thermodynamic systems (as distinguished amongst physical systems) which are in their own states of internal thermodynamic equilibrium; this is the reason why the zeroth law of thermodynamics needs explicit statement. That is to say, the relation 'is not colder than' between general non-equilibrium physical systems is not transitive, whereas, in contrast, the relation 'has no lower a temperature than' between thermodynamic systems in their own states of internal thermodynamic equilibrium is transitive. It follows from this that the relation 'is in thermal equilibrium with' is transitive, which is one way of stating the zeroth law. Just as temperature may be undefined for a sufficiently inhomogeneous system, so also may entropy be undefined for a system not in its own state of internal thermodynamic equilibrium. For example, 'the temperature of the Solar System' is not a defined quantity. Likewise, 'the entropy of the Solar System' is not defined in classical thermodynamics. It has not been possible to define non-equilibrium entropy, as a simple number for a whole system, in a clearly satisfactory way. 
Classical thermodynamics

Heat and enthalpy

For a closed system (a system from which no matter can enter or exit), one version of the first law of thermodynamics states that the change in internal energy $\Delta U$ of the system is equal to the amount of heat $Q$ supplied to the system minus the amount of thermodynamic work $W$ done by the system on its surroundings:
$$\Delta U = Q - W.$$
The foregoing sign convention for work is used in the present article, but an alternate sign convention, followed by IUPAC, for work, is to consider the work performed on the system by its surroundings as positive. This is the convention adopted by many modern textbooks of physical chemistry, such as those by Peter Atkins and Ira Levine, but many textbooks on physics define work as work done by the system. This formula can be re-written so as to express a definition of quantity of energy transferred as heat, based purely on the concept of adiabatic work, if it is supposed that $\Delta U$ is defined and measured solely by processes of adiabatic work:
$$Q = \Delta U + W.$$
The thermodynamic work done by the system is through mechanisms defined by its thermodynamic state variables, for example, its volume $V$, not through variables that necessarily involve mechanisms in the surroundings. The latter are such as shaft work, and include isochoric work. The internal energy, $U$, is a state function. In cyclical processes, such as the operation of a heat engine, state functions of the working substance return to their initial values upon completion of a cycle. The differential, or infinitesimal increment, for the internal energy in an infinitesimal process is an exact differential $\mathrm{d}U$. The symbol for exact differentials is the lowercase letter $\mathrm{d}$. In contrast, neither of the infinitesimal increments $\delta Q$ nor $\delta W$ in an infinitesimal process represents the change in a state function of the system. Thus, infinitesimal increments of heat and work are inexact differentials. The lowercase Greek letter delta, $\delta$, is the symbol for inexact differentials. The integral of any inexact differential in a process where the system leaves and then returns to the same thermodynamic state does not necessarily equal zero. As recounted below, in the section headed Heat and entropy, the second law of thermodynamics observes that if heat is supplied to a system in a reversible process, the increment of heat $\delta Q$ and the temperature $T$ form the exact differential
$$\mathrm{d}S = \frac{\delta Q}{T},$$
and that $S$, the entropy of the working body, is a state function. Likewise, with a well-defined pressure, $P$, behind a slowly moving (quasistatic) boundary, the work differential, $\delta W$, and the pressure, $P$, combine to form the exact differential
$$\mathrm{d}V = \frac{\delta W}{P},$$
with $V$ the volume of the system, which is a state variable. In general, for systems of uniform pressure and temperature without composition change,
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V.$$
Associated with this differential equation is the concept that the internal energy may be considered to be a function $U(S,V)$ of its natural variables $S$ and $V$. The internal energy representation of the fundamental thermodynamic relation is written as
$$U = U(S,V).$$
If $V$ is constant,
$$T\,\mathrm{d}S = \mathrm{d}U \quad (V \text{ constant}),$$
and if $P$ is constant,
$$T\,\mathrm{d}S = \mathrm{d}H \quad (P \text{ constant}),$$
with the enthalpy $H$ defined by
$$H = U + PV.$$
The enthalpy may be considered to be a function $H(S,P)$ of its natural variables $S$ and $P$. The enthalpy representation of the fundamental thermodynamic relation is written
$$H = H(S,P).$$
The internal energy representation and the enthalpy representation are partial Legendre transforms of one another. They contain the same physical information, written in different ways.
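The step connecting the two representations is short and standard, and is written out here for convenience rather than taken from the source text: differentiating $H = U + PV$ and substituting the internal energy relation gives
$$\mathrm{d}H = \mathrm{d}U + P\,\mathrm{d}V + V\,\mathrm{d}P = (T\,\mathrm{d}S - P\,\mathrm{d}V) + P\,\mathrm{d}V + V\,\mathrm{d}P = T\,\mathrm{d}S + V\,\mathrm{d}P,$$
so the natural variables of $H$ are indeed $S$ and $P$.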
Like the internal energy, the enthalpy stated as a function of its natural variables is a thermodynamic potential and contains all thermodynamic information about a body. If a quantity of heat $Q$ is added to a body while it does only expansion work $W$ on its surroundings, one has
$$\Delta H = \Delta U + \Delta (PV).$$
If this is constrained to happen at constant pressure, i.e. with $\Delta P = 0$, the expansion work done by the body is given by $W = P\,\Delta V$; recalling the first law of thermodynamics, one has
$$\Delta U = Q - W = Q - P\,\Delta V.$$
Consequently, by substitution one has
$$\Delta H = Q \quad (\text{at constant pressure}).$$
In this scenario, the increase in enthalpy is equal to the quantity of heat added to the system. This is the basis of the determination of enthalpy changes in chemical reactions by calorimetry. Since many processes do take place at constant atmospheric pressure, the enthalpy is sometimes given the misleading name of 'heat content' or heat function, while it actually depends strongly on the energies of covalent bonds and intermolecular forces. In terms of the natural variables $S$ and $P$ of the state function $H$, this process of change of state from state 1 to state 2 can be expressed as
$$\Delta H = \int_{S_1}^{S_2} \left(\frac{\partial H}{\partial S}\right)_P \mathrm{d}S \quad (\text{at constant } P).$$
It is known that the temperature $T(S,P)$ is identically stated by
$$\left(\frac{\partial H}{\partial S}\right)_P \equiv T(S,P).$$
Consequently,
$$\Delta H = \int_{S_1}^{S_2} T(S,P)\,\mathrm{d}S \quad (\text{at constant } P).$$
In this case, the integral specifies a quantity of heat transferred at constant pressure.

Heat and entropy

In 1856, Rudolf Clausius, referring to closed systems, in which transfers of matter do not occur, defined the second fundamental theorem (the second law of thermodynamics) in the mechanical theory of heat (thermodynamics): "if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generations of the quantity of heat Q from work at the temperature T, has the equivalence-value:"
$$\frac{Q}{T}.$$
In 1865, he came to define the entropy symbolized by $S$, such that, due to the supply of the amount of heat $Q$ at temperature $T$ the entropy of the system is increased by
$$\Delta S = \frac{Q}{T}. \quad (1)$$
In a transfer of energy as heat without work being done, there are changes of entropy in both the surroundings which lose heat and the system which gains it. The increase, $\Delta S$, of entropy in the system may be considered to consist of two parts, an increment, $\Delta S'$, that matches, or 'compensates', the change, $-\Delta S'$, of entropy in the surroundings, and a further increment, $\Delta S''$, that may be considered to be 'generated' or 'produced' in the system, and is said therefore to be 'uncompensated'. Thus
$$\Delta S = \Delta S' + \Delta S''.$$
This may also be written
$$\Delta S_{\mathrm{system}} = \Delta S_{\mathrm{compensated}} + \Delta S_{\mathrm{uncompensated}}, \quad \text{with}\ \Delta S_{\mathrm{compensated}} = -\Delta S_{\mathrm{surroundings}}.$$
The total change of entropy in the system and surroundings is thus
$$\Delta S_{\mathrm{overall}} = \Delta S' + \Delta S'' - \Delta S' = \Delta S''.$$
This may also be written
$$\Delta S_{\mathrm{overall}} = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} = \Delta S_{\mathrm{uncompensated}}.$$
It is then said that an amount of entropy $\Delta S'$ has been transferred from the surroundings to the system. Because entropy is not a conserved quantity, this is an exception to the general way of speaking, in which an amount transferred is of a conserved quantity. From the second law of thermodynamics it follows that in a spontaneous transfer of heat, in which the temperature of the system is different from that of the surroundings:
$$\Delta S_{\mathrm{overall}} > 0.$$
For purposes of mathematical analysis of transfers, one thinks of fictive processes that are called reversible, with the temperature of the system being hardly less than that of the surroundings, and the transfer taking place at an imperceptibly slow rate. Following the definition above in formula (1), for such a fictive reversible process, a quantity of transferred heat $\delta Q$ (an inexact differential) is analyzed as a quantity $T\,\mathrm{d}S$, with $\mathrm{d}S$ (an exact differential):
$$T\,\mathrm{d}S = \delta Q.$$
This equality is only valid for a fictive transfer in which there is no production of entropy, that is to say, in which there is no uncompensated entropy.
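Before the natural, irreversible case is taken up below, the compensated/uncompensated bookkeeping can be illustrated numerically; the reservoir temperatures and the transferred heat in this sketch are assumed, illustrative values, not figures from the text:

# Entropy bookkeeping for a finite heat transfer Q from hot surroundings
# at T_hot to a system at T_cold, treating both as reservoirs so dS = Q/T
# applies to each side separately.
Q = 1000.0      # J transferred as heat (assumed)
T_hot = 400.0   # K, surroundings (assumed)
T_cold = 300.0  # K, system (assumed)

dS_system = Q / T_cold          # total entropy gained by the system
dS_surroundings = -Q / T_hot    # entropy lost by the surroundings
dS_compensated = Q / T_hot      # the increment that "compensates" the surroundings
dS_uncompensated = dS_system - dS_compensated  # produced in the system
dS_overall = dS_system + dS_surroundings       # equals dS_uncompensated

print(f"dS_system        = {dS_system:+.3f} J/K")
print(f"dS_surroundings  = {dS_surroundings:+.3f} J/K")
print(f"dS_uncompensated = {dS_uncompensated:+.3f} J/K")
print(f"dS_overall       = {dS_overall:+.3f} J/K  (> 0 whenever T_hot > T_cold)")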
If, in contrast, the process is natural, and can really occur, with irreversibility, then there is entropy production, with $\mathrm{d}S'' > 0$ for an infinitesimal step. The quantity $T\,\mathrm{d}S''$ was termed by Clausius the "uncompensated heat", though that does not accord with present-day terminology. Then one has
$$T\,\mathrm{d}S = \delta Q + T\,\mathrm{d}S'' > \delta Q.$$
This leads to the statement
$$T\,\mathrm{d}S \geq \delta Q,$$
which is the second law of thermodynamics for closed systems. In non-equilibrium thermodynamics that makes the approximation of assuming the hypothesis of local thermodynamic equilibrium, there is a special notation for this. The transfer of energy as heat is assumed to take place across an infinitesimal temperature difference, so that the system element and its surroundings have near enough the same temperature $T$. Then one writes
$$\mathrm{d}S = \mathrm{d}S_{\mathrm e} + \mathrm{d}S_{\mathrm i},$$
where by definition
$$\delta Q = T\,\mathrm{d}S_{\mathrm e} \quad\text{and}\quad \mathrm{d}S_{\mathrm i} \equiv \mathrm{d}S''.$$
The second law for a natural process asserts that
$$\mathrm{d}S_{\mathrm i} > 0.$$

See also

Effect of sun angle on climate
Heat death of the Universe
Heat diffusion
Heat equation
Heat exchanger
Heat flux sensor
Heat recovery steam generator
Heat recovery ventilation
Heat transfer coefficient
Heat wave
History of heat
Orders of magnitude (temperature)
Relativistic heat conduction
Renewable heat
Sigma heat
Thermal energy storage
Thermal management of electronic devices and systems
Thermometer
Waste heat
Waste heat recovery unit
Water heat recycling

Bibliography of cited references

Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK.
Atkins, P., de Paula, J. (1978/2010). Physical Chemistry, (first edition 1978), ninth edition 2010, Oxford University Press, Oxford UK.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York.
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig.
Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York.
Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A mostly reliable English translation is to be found in Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Chandrasekhar, S. (1961). Hydrodynamic and Hydromagnetic Stability, Oxford University Press, Oxford UK.
Clausius, R. (1854). Annalen der Physik (Poggendorff's Annalen), Dec. 1854, vol. xciii, p. 481; translated in the Journal de Mathematiques, vol. xx, Paris, 1855, and in the Philosophical Magazine, August 1856, s. 4, vol. xii, p. 81.
Clausius, R. (1865/1867). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies, London: John van Voorst, 1867. Also the second edition translated into English by W.R. Browne (1879).
De Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York.
Denbigh, K. (1955/1981). The Principles of Chemical Equilibrium, Cambridge University Press, Cambridge.
Greven, A., Keller, G., Warnecke, G. (editors) (2003). Entropy, Princeton University Press, Princeton NJ.
Joule, J.P. (1847). On Matter, Living Force, and Heat. Lecture, 5 and 12 May 1847.
Kittel, C., Kroemer, H. (1980). Thermal Physics, second edition, W.H. Freeman, San Francisco.
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester.
Landau, L., Lifshitz, E.M. (1958/1969). Statistical Physics, volume 5 of Course of Theoretical Physics, translated from the Russian by J.B. Sykes, M.J. Kearsley, Pergamon, Oxford.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin.
Lieb, E.H., Yngvason, J. (2003). The Entropy of Classical Thermodynamics, Chapter 8 of Entropy, Greven, A., Keller, G., Warnecke, G. (editors) (2003).
Pippard, A.B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge.
Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, first English edition, Longmans, Green and Co., London.
Planck, M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
Planck, M. (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London.
Shavit, A., Gutfinger, C. (1995). Thermodynamics. From Concepts to Applications, Prentice Hall, London.
Truesdell, C. (1969). Rational Thermodynamics: a Course of Lectures on Selected Topics, McGraw-Hill Book Company, New York.
Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822–1854, Springer, New York.

Further bibliography

Gyftopoulos, E.P., & Beretta, G.P. (1991). Thermodynamics: foundations and applications. Dover Publications.
Hatsopoulos, G.N., & Keenan, J.H. (1981). Principles of general thermodynamics. RE Krieger Publishing Company.

External links

Plasma heat at 2 gigakelvins – Article about extremely high temperature generated by scientists (Foxnews.com)
Correlations for Convective Heat Transfer – ChE Online Resources
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation, which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.

Overview

A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water; Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than fitting experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle. Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the medieval English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories.

History

Theoretical physics began at least 2,300 years ago, under the Pre-socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts: the Trivium (grammar, logic, and rhetoric) and the Quadrivium (arithmetic, geometry, music, and astronomy). During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution. The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, writing Principia Mathematica. In it was contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century.
Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras. Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy, began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light. The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids, and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, more progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel to the applications of relativity to problems in astronomy and cosmology. All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results, often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series. Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.

Mainstream theories

Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation.
There do exist mainstream theories that are generally accepted based solely upon their effects in explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate.

Examples

Big Bang
Chaos theory
Classical mechanics
Classical field theory
Dynamo theory
Field theory
Ginzburg–Landau theory
Kinetic theory of gases
Classical electromagnetism
Perturbation theory (quantum mechanics)
Physical cosmology
Quantum chromodynamics
Quantum complexity theory
Quantum electrodynamics
Quantum field theory
Quantum field theory in curved spacetime
Quantum information theory
Quantum mechanics
Quantum thermodynamics
Relativistic quantum mechanics
Scattering theory
Standard Model
Statistical physics
Theory of relativity
Wave–particle duality

Proposed theories

The proposed theories of physics are usually relatively new theories which deal with the study of physics which include scientific approaches, means for determining the validity of models and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything.

Fringe theories

Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. They can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, and a body of associated predictions have been made according to that theory. Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory.

Examples

Aether (classical element)
Luminiferous aether
Digital physics
Electrogravitics
Stochastic electrodynamics
Tesla's dynamic theory of gravity

Thought experiments vs real experiments

"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
See also

List of theoretical physicists
Philosophy of physics
Symmetry in quantum mechanics
Timeline of developments in theoretical physics
Double field theory

Further reading

Duhem, Pierre. La théorie physique - Son objet, sa structure (in French). 2nd edition, 1914. English translation: The Physical Theory: its Purpose, its Structure. Republished by the Joseph Vrin philosophical bookshop (1981).
Feynman, et al. The Feynman Lectures on Physics (3 vol.). First edition: Addison–Wesley (1964, 1966). Bestselling three-volume textbook covering the span of physics; a reference for both (under)graduate students and professional researchers alike.
Landau et al. Course of Theoretical Physics. Famous series of books dealing with theoretical concepts in physics, covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifschits" or "Landau-Lifschits" in the literature.
Longair, M.S. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2nd edition (4 Dec 2003).
Planck, Max (1909). Eight Lectures on Theoretical Physics. Library of Alexandria. A set of lectures given in 1909 at Columbia University.
Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes. A series of lessons from a master educator of theoretical physicists.

External links

MIT Center for Theoretical Physics
How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft
The Planck constant, or Planck's constant, denoted by $h$, is a fundamental physical constant of foundational importance in quantum mechanics: a photon's energy is equal to its frequency multiplied by the Planck constant, and the wavelength of a matter wave equals the Planck constant divided by the associated particle momentum. The closely related reduced Planck constant, equal to $h/2\pi$ and denoted $\hbar$, is commonly used in quantum physics equations. The constant was postulated by Max Planck in 1900 as a proportionality constant needed to explain experimental black-body radiation. Planck later referred to the constant as the "quantum of action". In 1905, Albert Einstein associated the "quantum" or minimal element of the energy to the electromagnetic wave itself. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". In metrology, the Planck constant is used, together with other constants, to define the kilogram, the SI unit of mass. The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value $h = 6.62607015 \times 10^{-34}\ \mathrm{J{\cdot}s}$.

History

Origin of the constant

Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths. Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, $h$, which is thought to stand for Hilfsgrösse (auxiliary quantity), and which subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance per unit frequency of a body for frequency $\nu$ at absolute temperature $T$ is given by
$$B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu / k_{\mathrm B} T} - 1},$$
where $k_{\mathrm B}$ is the Boltzmann constant, $h$ is the Planck constant, and $c$ is the speed of light in the medium, whether material or vacuum. Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of desperation".
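The spectral radiance expression above lends itself to a quick numerical check; the following sketch (not from the source text; the temperature is an assumed, illustrative value) compares Planck's law with the classical Rayleigh–Jeans formula mentioned above:

import math

H = 6.62607015e-34   # Planck constant, J*s (exact in the SI)
KB = 1.380649e-23    # Boltzmann constant, J/K (exact in the SI)
C = 2.99792458e8     # speed of light in vacuum, m/s (exact)

def planck_radiance(nu_hz, t_kelvin):
    """Spectral radiance B_nu = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    x = H * nu_hz / (KB * t_kelvin)
    return (2.0 * H * nu_hz**3 / C**2) / math.expm1(x)

def rayleigh_jeans(nu_hz, t_kelvin):
    """Classical low-frequency limit, B_nu ~ 2 nu^2 kB T / c^2."""
    return 2.0 * nu_hz**2 * KB * t_kelvin / C**2

T = 5800.0  # K, roughly the solar surface temperature (illustrative)
for nu in (1e11, 1e13, 1e15):  # Hz
    print(f"nu = {nu:.0e} Hz: Planck {planck_radiance(nu, T):.3e}, "
          f"Rayleigh-Jeans {rayleigh_jeans(nu, T):.3e}")
# At 1e11 Hz the two agree closely; at 1e15 Hz the classical formula
# overshoots badly -- the "ultraviolet catastrophe" discussed below.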
One of his new boundary conditions was to interpret $U_N$, the vibrational energy of $N$ oscillators, "not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts". With this new condition, Planck had imposed the quantization of the energy of the oscillators, in his own words, "a purely formal assumption ... actually I did not think much about it", but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":
$$E = h\nu.$$
Planck was able to calculate the value of $h$ from experimental data on black-body radiation: his result, $6.55 \times 10^{-34}\ \mathrm{J{\cdot}s}$, is within 1.2% of the currently defined value. He also made the first determination of the Boltzmann constant $k_{\mathrm B}$ from the same data and theory.

Development and application

The black-body problem was revisited in 1905, when Lord Rayleigh and James Jeans (together) and Albert Einstein independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) to convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".

Photoelectric effect

The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard (Lénárd Fülöp) in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real. Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light. The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured.
This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy. Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation:
$$E = h\nu.$$
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant $h$.

Atomic structure

In 1912 John William Nicholson developed an atomic model and found that the angular momenta of the electrons in the model were related by $h/2\pi$. Nicholson's nuclear quantum atomic model influenced the development of Niels Bohr's atomic model, and Bohr quoted him in his 1913 paper on the Bohr model of the atom. Bohr's model went beyond Planck's abstract harmonic oscillator concept: an electron in a Bohr atom could only have certain defined energies $E_n$, given by
$$E_n = -\frac{h c R_\infty}{n^2},$$
where $c$ is the speed of light in vacuum, $R_\infty$ is an experimentally determined constant (the Rydberg constant) and $n = 1, 2, 3, \dots$. This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants. In discussing the angular momentum of the electrons in his model, Bohr introduced the quantity $\hbar = h/2\pi$, now known as the reduced Planck constant, as the quantum of angular momentum.

Uncertainty principle

The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, $\Delta x$, and the uncertainty in their momentum, $\Delta p_x$, obey
$$\Delta x \, \Delta p_x \geq \frac{\hbar}{2},$$
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise. In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator $\hat{x}$ and the momentum operator $\hat{p}$:
$$[\hat{p}_i, \hat{x}_j] = -i\hbar\,\delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta.

Photon energy

The Planck relation connects the particular photon energy $E$ with its associated wave frequency $\nu$:
$$E = h\nu.$$
This energy is extremely small in terms of ordinarily perceived everyday objects.
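A short sketch of the photoelectric energy balance described above, using Einstein's relation KE_max = h*nu - phi; the work function below is a rounded literature value for cesium, assumed here for illustration only:

H = 6.62607015e-34        # Planck constant, J*s (exact in the SI)
EV = 1.602176634e-19      # J per electronvolt (exact in the SI)
PHI_CESIUM = 2.1 * EV     # cesium work function, ~2.1 eV (rounded, assumed)

def photoelectron_ke_max(frequency_hz, work_function_j):
    """Maximum photoelectron kinetic energy; None below the threshold frequency."""
    e_photon = H * frequency_hz
    return e_photon - work_function_j if e_photon > work_function_j else None

for nu in (4.0e14, 7.0e14):  # red-ish and violet-ish light, Hz (assumed)
    ke = photoelectron_ke_max(nu, PHI_CESIUM)
    if ke is None:
        print(f"nu = {nu:.1e} Hz: below threshold, no photoelectrons")
    else:
        print(f"nu = {nu:.1e} Hz: KE_max = {ke / EV:.2f} eV")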
Since the frequency $\nu$, wavelength $\lambda$, and speed of light $c$ are related by $c = \nu\lambda$, the relation can also be expressed as
$$E = \frac{h c}{\lambda}.$$

de Broglie wavelength

In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics. The de Broglie wavelength $\lambda$ of the particle is given by
$$\lambda = \frac{h}{p},$$
where $p$ denotes the linear momentum of a particle, such as a photon, or any other elementary particle. The energy of a photon with angular frequency $\omega = 2\pi\nu$ is given by
$$E = \hbar\omega,$$
while its linear momentum relates to
$$p = \hbar k,$$
where $k$ is an angular wavenumber. These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors.

Statistical mechanics

Classical statistical mechanics requires the existence of $h$ (but does not define its value). Eventually, following upon Planck's discovery, it was speculated that physical action could not take on an arbitrary value, but instead was restricted to integer multiples of a very small quantity, the "[elementary] quantum of action", now called the Planck constant. This was a significant conceptual part of the so-called "old quantum theory" developed by physicists including Bohr, Sommerfeld, and Ishiwara, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain quantization of energy.

Dimension and value

The Planck constant has the same dimensions as action and as angular momentum. The Planck constant is fixed at $h = 6.62607015 \times 10^{-34}\ \mathrm{J{\cdot}s}$ as part of the definition of the SI units. This value is used to define the SI unit of mass, the kilogram: "the kilogram [...] is defined by taking the fixed numerical value of $h$ to be $6.62607015 \times 10^{-34}$ when expressed in the unit J⋅s, which is equal to kg⋅m²⋅s⁻¹, where the metre and the second are defined in terms of speed of light and duration of hyperfine transition of the ground state of an unperturbed caesium-133 atom." Technologies of mass metrology such as the Kibble balance realize the kilogram by applying the fixed value of the Planck constant.

Significance of the value

The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typical of the order of kilojoules and times are typical of the order of seconds or minutes, the Planck constant is very small. When the product of energy and time for a physical event approaches the Planck constant, quantum effects dominate. Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, in green light (with a wavelength of 555 nanometres or a frequency of about 540 THz) each photon has an energy $E = h\nu \approx 3.58 \times 10^{-19}\ \mathrm{J}$. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules.
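As a numerical illustration of these relations (the 100 V accelerating potential is an assumed, illustrative value, and the mole-of-photons figure anticipates the next paragraph):

# (1) de Broglie wavelength of an electron accelerated through 100 V,
#     lambda = h / p with the non-relativistic momentum p = sqrt(2 m E).
# (2) energy of one mole of 555 nm photons, E_mole = (h c / lambda) * N_A.
import math

H = 6.62607015e-34       # Planck constant, J*s (exact in the SI)
C = 2.99792458e8         # speed of light, m/s (exact)
M_E = 9.1093837015e-31   # electron mass, kg (CODATA, rounded)
EV = 1.602176634e-19     # J per electronvolt (exact in the SI)
N_A = 6.02214076e23      # Avogadro constant, 1/mol (exact in the SI)

p = math.sqrt(2 * M_E * 100 * EV)
print(f"electron at 100 eV: lambda = {H / p:.3e} m")  # about 1.2e-10 m

e_photon = H * C / 555e-9
print(f"mole of 555 nm photons: {e_photon * N_A / 1000:.0f} kJ")  # about 216 kJ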
An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, $N_{\mathrm A} = 6.02214076 \times 10^{23}\ \mathrm{mol}^{-1}$, with the result of about 216 kJ, about the food energy in three apples.

Reduced Planck constant

Many equations in quantum physics are customarily written using the reduced Planck constant, equal to $h/2\pi$ and denoted $\hbar$ (pronounced h-bar). The fundamental equations look simpler when written using $\hbar$ as opposed to $h$, and it is usually $\hbar$ rather than $h$ that gives the most reliable results when used in order-of-magnitude estimates. For example, using dimensional analysis to estimate the ionization energy of a hydrogen atom, the relevant parameters that determine the ionization energy are the mass of the electron $m_{\mathrm e}$, the electron charge $e$, and either the Planck constant $h$ or the reduced Planck constant $\hbar$:
$$E_{\mathrm i} \propto \frac{m_{\mathrm e} e^4}{\left(4\pi\varepsilon_0\right)^2 \hbar^2}.$$
Since both constants have the same dimensions, they will enter the dimensional analysis in the same way, but with $\hbar$ the estimate is within a factor of two, while with $h$ the estimate is off by a factor of about twenty (the two estimates differ by $(2\pi)^2 \approx 39$).

Names and symbols

The reduced Planck constant is known by many other names: the reduced Planck's constant, the rationalized Planck constant (or rationalized Planck's constant), the Dirac constant (or Dirac's constant), the Dirac $h$ (or Dirac's $h$), the Dirac $\hbar$ (or Dirac's $\hbar$), and h-bar. It is also common to refer to this $\hbar$ as "Planck's constant" while retaining the relationship $\hbar = h/2\pi$. By far the most common symbol for the reduced Planck constant is $\hbar$. However, there are some sources that denote it by $h$ instead, in which case they usually refer to it as the "Dirac $h$" (or "Dirac's $h$").

History

The combination $h/2\pi$ appeared in Niels Bohr's 1913 paper, where it was denoted by $M_0$. For the next 15 years, the combination continued to appear in the literature, but normally without a separate symbol. Then, in 1926, in their seminal papers, Schrödinger and Dirac again introduced special symbols for it: $K$ in the case of Schrödinger, and $h$ in the case of Dirac. Dirac continued to use $h$ in this way until 1930, when he introduced the symbol $\hbar$ in his book The Principles of Quantum Mechanics.

See also

Committee on Data of the International Science Council
International System of Units
Introduction to quantum mechanics
List of scientists whose names are used in physical constants
Planck units
Wave–particle duality
Hashgraph

External links

"The role of the Planck constant in physics" – presentation at the 26th CGPM meeting at Versailles, France, November 2018, when the voting took place.
"The Planck constant and its units" – presentation at the 35th Symposium on Chemical Physics at the University of Waterloo, Waterloo, Ontario, Canada, 3 November 2019.
Isotopes are distinct nuclear species (or nuclides) of the same chemical element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but different nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have similar chemical properties, they have different atomic masses and physical properties. The term isotope is derived from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in a 1913 suggestion to the British chemist Frederick Soddy, who popularized the term. The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.

Isotope vs. nuclide

A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example, carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number greatly affects nuclear properties, but its effect on chemical properties is negligible for most elements. Even for the lightest elements, whose ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect although it matters in some circumstances (for hydrogen, the lightest element, the isotope effect is large enough to affect biology strongly). The term isotopes (originally also isotopic elements, now sometimes isotopic nuclides) is intended to imply comparison (like synonyms or isomers). For example, the nuclides $^{12}_{6}\mathrm{C}$, $^{13}_{6}\mathrm{C}$, $^{14}_{6}\mathrm{C}$ are isotopes (nuclides with the same atomic number but different mass numbers), but $^{40}_{18}\mathrm{Ar}$, $^{40}_{19}\mathrm{K}$, $^{40}_{20}\mathrm{Ca}$ are isobars (nuclides with the same mass number). However, isotope is the older term and so is better known than nuclide and is still sometimes used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.

Notation

An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number) followed by a hyphen and the mass number (e.g. helium-3, helium-4, carbon-12, carbon-14, uranium-235 and uranium-239). When a chemical symbol is used, e.g. "C" for carbon, standard notation (now known as "AZE notation" because A is the mass number, Z the atomic number, and E for element) is to indicate the mass number (number of nucleons) with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. $^{3}_{2}\mathrm{He}$, $^{4}_{2}\mathrm{He}$, $^{12}_{6}\mathrm{C}$, $^{14}_{6}\mathrm{C}$, $^{235}_{92}\mathrm{U}$, and $^{239}_{92}\mathrm{U}$).
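A small sketch of the bookkeeping implied by this notation: since the mass number A counts protons plus neutrons, the neutron number is N = A - Z. The symbol table below is a hypothetical stub for illustration; a real implementation would consult a full periodic-table dataset:

ATOMIC_NUMBERS = {"hydrogen": 1, "helium": 2, "carbon": 6, "uranium": 92}

def neutron_count(isotope_name):
    """Parse names like 'carbon-14' and return (Z, A, N) with N = A - Z."""
    element, mass = isotope_name.rsplit("-", 1)
    z = ATOMIC_NUMBERS[element.lower()]
    a = int(mass)
    return z, a, a - z

for name in ("carbon-12", "carbon-13", "carbon-14", "uranium-235"):
    z, a, n = neutron_count(name)
    print(f"{name}: Z={z}, A={a}, N={n}")
# The carbon isotopes give N = 6, 7, 8, matching the text above.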
Because the atomic number is given by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript (e.g. 3He, 4He, 12C, 14C, 235U, and 239U). The letter m (for metastable) is sometimes appended after the mass number to indicate a nuclear isomer, a metastable or energetically excited nuclear state (as opposed to the lowest-energy ground state), for example 180mTa (tantalum-180m). The common pronunciation of the AZE notation is different from how it is written: ⁴₂He is commonly pronounced as helium-four instead of four-two-helium, and ²³⁵₉₂U as uranium two-thirty-five (American English) or uranium-two-three-five (British) instead of 235-92-uranium. Radioactive, primordial, and stable isotopes Some isotopes/nuclides are radioactive, and are therefore referred to as radioisotopes or radionuclides, whereas others have never been observed to decay radioactively and are referred to as stable isotopes or stable nuclides. For example, 14C is a radioactive form of carbon, whereas 12C and 13C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 286 are primordial nuclides, meaning that they have existed since the Solar System's formation. Primordial nuclides include 35 nuclides with very long half-lives (over 100 million years) and 251 that are formally considered as "stable nuclides", because they have not been observed to decay. In most cases, for obvious reasons, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the Solar System. However, in the cases of three elements (tellurium, indium, and rhenium) the most abundant isotope found in nature is actually one (or two) extremely long-lived radioisotope(s) of the element, despite these elements having one or more stable isotopes. Theory predicts that many apparently "stable" nuclides are radioactive, with extremely long half-lives (discounting the possibility of proton decay, which would make all nuclides ultimately unstable). Some stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay products have yet been observed, and so these isotopes are said to be "observationally stable". The predicted half-lives for these nuclides often greatly exceed the estimated age of the universe, and in fact, there are also 31 known radionuclides (see primordial nuclide) with half-lives longer than the age of the universe. Adding in the radioactive nuclides that have been created artificially, there are 3,339 currently known nuclides. These include 905 nuclides that are either stable or have half-lives longer than 60 minutes. See list of nuclides for details. History Radioactive isotopes The existence of isotopes was first suggested in 1913 by the radiochemist Frederick Soddy, based on studies of radioactive decay chains that indicated about 40 different species referred to as radioelements (i.e. radioactive elements) between uranium and lead, although the periodic table only allowed for 11 elements between lead and uranium inclusive. Several attempts to separate these new radioelements chemically had failed. For example, Soddy had shown in 1910 that mesothorium (later shown to be 228Ra), radium (226Ra, the longest-lived isotope), and thorium X (224Ra) are impossible to separate.
Attempts to place the radioelements in the periodic table led Soddy and Kazimierz Fajans independently to propose their radioactive displacement law in 1913, to the effect that alpha decay produced an element two places to the left in the periodic table, whereas beta emission produced an element one place to the right. Soddy recognized that emission of an alpha particle followed by two beta particles led to the formation of an element chemically identical to the initial element but with a mass four units lighter and with different radioactive properties. Soddy proposed that several types of atoms (differing in radioactive properties) could occupy the same place in the table. For example, the alpha-decay of uranium-235 forms thorium-231, whereas the beta decay of actinium-230 forms thorium-230. The term "isotope", Greek for "at the same place", was suggested to Soddy by Margaret Todd, a Scottish physician and family friend, during a conversation in which he explained his ideas to her. He received the 1921 Nobel Prize in Chemistry in part for his work on isotopes. In 1914 T. W. Richards found variations between the atomic weight of lead from different mineral sources, attributable to variations in isotopic composition due to different radioactive origins. Stable isotopes The first evidence for multiple isotopes of a stable (non-radioactive) element was found by J. J. Thomson in 1912 as part of his exploration into the composition of canal rays (positive ions). Thomson channelled streams of neon ions through parallel magnetic and electric fields, measured their deflection by placing a photographic plate in their path, and computed their mass-to-charge ratio using a method that became known as Thomson's parabola method. Each stream created a glowing patch on the plate at the point it struck. Thomson observed two separate parabolic patches of light on the photographic plate, which suggested two species of nuclei with different mass-to-charge ratios. He wrote "There can, therefore, I think, be little doubt that what has been called neon is not a simple gas but a mixture of two gases, one of which has an atomic weight about 20 and the other about 22. The parabola due to the heavier gas is always much fainter than that due to the lighter, so that probably the heavier gas forms only a small percentage of the mixture." F. W. Aston subsequently discovered multiple stable isotopes for numerous elements using a mass spectrograph. In 1919 Aston studied neon with sufficient resolution to show that the two isotopic masses are very close to the integers 20 and 22 and that neither is equal to the known molar mass (20.2) of neon gas. This is an example of Aston's whole number rule for isotopic masses, which states that large deviations of elemental molar masses from integers are primarily due to the fact that the element is a mixture of isotopes. Aston similarly showed in 1920 that the molar mass of chlorine (35.45) is a weighted average of the almost integral masses for the two isotopes 35Cl and 37Cl. Neutrons After the discovery of the neutron by James Chadwick in 1932, the ultimate root cause for the existence of isotopes was clarified, that is, the nuclei of different isotopes for a given element have different numbers of neutrons, albeit having the same number of protons. Variation in properties between isotopes Chemical and molecular properties A neutral atom has the same number of electrons as protons.
Thus different isotopes of a given element all have the same number of electrons and share a similar electronic structure. Because the chemical behaviour of an atom is largely determined by its electronic structure, different isotopes exhibit nearly identical chemical behaviour. The main exception to this is the kinetic isotope effect: due to their larger masses, heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This is most pronounced by far for protium (1H), deuterium (2H), and tritium (3H), because deuterium has twice the mass of protium and tritium has three times the mass of protium. These mass differences also affect the behavior of their respective chemical bonds, by changing the center of gravity (reduced mass) of the atomic systems. However, for heavier elements, the relative mass difference between isotopes is much less so that the mass-difference effects on chemistry are usually negligible. (Heavy elements also have relatively more neutrons than lighter elements, so the ratio of the nuclear mass to the collective electronic mass is slightly greater.) There is also an equilibrium isotope effect. Similarly, two molecules that differ only in the isotopes of their atoms (isotopologues) have identical electronic structures, and therefore almost indistinguishable physical and chemical properties (again with deuterium and tritium being the primary exceptions). The vibrational modes of a molecule are determined by its shape and by the masses of its constituent atoms; so different isotopologues have different sets of vibrational modes. Because vibrational modes allow a molecule to absorb photons of corresponding energies, isotopologues have different optical properties in the infrared range. Nuclear properties and stability Atomic nuclei consist of protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert an attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to bind into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus. For example, although the neutron:proton ratio of 3He is 1:2, the neutron:proton ratio of 238U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide 40Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons. Numbers of isotopes per element Of the 80 elements with a stable isotope, the largest number of stable isotopes observed for any element is ten (for the element tin). No element has nine or eight stable isotopes. Five elements have seven stable isotopes, eight have six stable isotopes, ten have five stable isotopes, nine have four stable isotopes, five have three stable isotopes, 16 have two stable isotopes (counting 180mTa as stable), and 26 elements have only a single stable isotope (of these, 19 are so-called mononuclidic elements, having a single primordial stable isotope that dominates and fixes the atomic weight of the natural element to high precision; 3 radioactive mononuclidic elements occur as well).
In total, there are 251 nuclides that have not been observed to decay. For the 80 elements that have one or more stable isotopes, the average number of stable isotopes is 251/80 ≈ 3.14 isotopes per element. Even and odd nucleon numbers The proton:neutron ratio is not the only factor affecting nuclear stability. Stability depends also on the evenness or oddness of the atomic number Z, the neutron number N, and consequently their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei, generally, less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron emission), electron capture, or other less common decay modes such as spontaneous fission and cluster decay. Most stable nuclides are even-proton-even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton-even-neutron, and even-proton-odd-neutron nuclides. Stable odd-proton-odd-neutron nuclides are the least common. Even atomic number The 146 even-proton, even-neutron (EE) nuclides comprise ~58% of all stable nuclides and all have spin 0 because of pairing. There are also 24 primordial long-lived even-even nuclides. As a result, each of the 41 even-numbered elements from 2 to 82 has at least one stable isotope, and most of these elements have several primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The extreme stability of helium-4 due to a double pairing of 2 protons and 2 neutrons prevents any nuclides containing five (5He, 5Li) or eight (8Be) nucleons from existing long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in stars (see triple alpha process). Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. The first four "odd-odd" nuclides occur in low-mass nuclides, for which changing a proton to a neutron or vice versa would lead to a very lopsided proton-neutron ratio (2H, 6Li, 10B, and 14N; spins 1, 1, 3, 1). The only other entirely "stable" odd-odd nuclide, 180mTa (spin 9), is thought to be the rarest of the 251 stable nuclides, and is the only primordial nuclear isomer, which has not yet been observed to decay despite experimental attempts. Many odd-odd radionuclides (such as the ground state of tantalum-180) with comparatively short half-lives are known. Usually, they beta-decay to their nearby even-even isobars that have paired protons and paired neutrons. Of the nine primordial odd-odd nuclides (five stable and four radioactive with long half-lives), only 14N is the most common isotope of a common element. This is the case because it is a part of the CNO cycle. The nuclides 6Li and 10B are minority isotopes of elements that are themselves rare compared to other light elements, whereas the other six isotopes make up only a tiny percentage of the natural abundance of their elements. Odd atomic number 53 stable nuclides have an even number of protons and an odd number of neutrons. They are a minority in comparison to the even-even isotopes, which are about 3 times as numerous. Among the 41 even-Z elements that have a stable nuclide, only two elements (argon and cerium) have no even-odd stable nuclides. One element (tin) has three. There are 24 elements that have one even-odd nuclide and 13 that have two even-odd nuclides.
Of 35 primordial radionuclides there exist four even-odd nuclides, including the fissile 235U. Because of their odd neutron numbers, the even-odd nuclides tend to have large neutron capture cross-sections, due to the energy that results from neutron-pairing effects. These stable even-proton odd-neutron nuclides tend to be uncommon by abundance in nature, generally because, to form and enter into primordial abundance, they must have escaped capturing neutrons to form yet other stable even-even isotopes, during both the s-process and r-process of neutron capture, during nucleosynthesis in stars. For this reason, only 9Be and 195Pt are the most naturally abundant isotopes of their element. 48 stable odd-proton-even-neutron nuclides, stabilized by their paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd-proton-odd-neutron nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, of which 39 have stable isotopes (technetium (Tc) and promethium (Pm) have no stable isotopes). Of these 39 odd Z elements, 30 elements (including hydrogen-1 where 0 neutrons is even) have one stable odd-even isotope, and nine elements: chlorine (35Cl, 37Cl), potassium (39K, 41K), copper (63Cu, 65Cu), gallium (69Ga, 71Ga), bromine (79Br, 81Br), silver (107Ag, 109Ag), antimony (121Sb, 123Sb), iridium (191Ir, 193Ir), and thallium (203Tl, 205Tl), have two odd-even stable isotopes each. This makes a total of 30 + 2(9) = 48 stable odd-even isotopes. There are also five primordial long-lived radioactive odd-even isotopes, 87Rb, 115In, 187Re, 151Eu, and 209Bi. The last two were only recently found to decay, with half-lives greater than 10^18 years. Odd neutron number Actinides with odd neutron number are generally fissile (with thermal neutrons), whereas those with even neutron number are generally not, though they are fissionable with fast neutrons. All observationally stable odd-odd nuclides have nonzero integer spin. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior. Only 9Be, 14N, and 195Pt have odd neutron number and are the most naturally abundant isotope of their element. Occurrence in nature Elements are composed either of one nuclide (mononuclidic elements), or of more than one naturally occurring isotope. The unstable (radioactive) isotopes are either primordial or postprimordial. Primordial isotopes were a product of stellar nucleosynthesis or another type of nucleosynthesis such as cosmic ray spallation, and have persisted down to the present because their rate of decay is very slow (e.g. uranium-238 and potassium-40). Post-primordial isotopes were created by cosmic ray bombardment as cosmogenic nuclides (e.g., tritium, carbon-14), or by the decay of a radioactive primordial isotope to a radioactive radiogenic nuclide daughter (e.g. uranium to radium). A few isotopes are naturally synthesized as nucleogenic nuclides, by some other natural nuclear reaction, such as when neutrons from natural nuclear fission are absorbed by another atom. As discussed above, only 80 elements have any stable isotopes, and 26 of these have only one stable isotope. Thus, about two-thirds of stable elements occur naturally on Earth in multiple stable isotopes, with the largest number of stable isotopes for an element being ten, for tin (Sn).
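The even/odd bookkeeping of the preceding sections reduces to checking the parity of Z and of N = A - Z; a minimal sketch (not part of the original article), applied to the five stable odd-odd nuclides named above:

```python
# Classify a nuclide by the evenness or oddness of its proton number Z
# and neutron number N = A - Z, as in the even/odd sections above.
def parity_class(z: int, a: int) -> str:
    n = a - z
    z_parity = "even" if z % 2 == 0 else "odd"
    n_parity = "even" if n % 2 == 0 else "odd"
    return f"{z_parity}-Z, {n_parity}-N"

# The five stable odd-odd nuclides named above:
for name, z, a in [("2H", 1, 2), ("6Li", 3, 6), ("10B", 5, 10), ("14N", 7, 14), ("180mTa", 73, 180)]:
    print(name, "->", parity_class(z, a))  # each prints "odd-Z, odd-N"
```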
There are about 94 elements found naturally on Earth (up to plutonium inclusive), though some are detected only in very tiny amounts, such as plutonium-244. Scientists estimate that the elements that occur naturally on Earth (some only as radioisotopes) occur as 339 isotopes (nuclides) in total. Only 251 of these naturally occurring nuclides are stable, in the sense of never having been observed to decay as of the present time. An additional 35 primordial nuclides (to a total of 286 primordial nuclides) are radioactive with known half-lives, but have half-lives longer than 100 million years, allowing them to exist from the beginning of the Solar System. See list of nuclides for details. All the known stable nuclides occur naturally on Earth; the other naturally occurring nuclides are radioactive but occur on Earth due to their relatively long half-lives, or else due to other means of ongoing natural production. These include the aforementioned cosmogenic nuclides, the nucleogenic nuclides, and any radiogenic nuclides formed by ongoing decay of a primordial radioactive nuclide, such as radon and radium from uranium. An additional ~3000 radioactive nuclides not found in nature have been created in nuclear reactors and in particle accelerators. Many short-lived nuclides not found naturally on Earth have also been observed by spectroscopic analysis, being naturally created in stars or supernovae. An example is aluminium-26, which is not naturally found on Earth but is found in abundance on an astronomical scale. The tabulated atomic masses of elements are averages that account for the presence of multiple isotopes with different masses. Before the discovery of isotopes, empirically determined noninteger values of atomic mass confounded scientists. For example, a sample of chlorine contains 75.8% chlorine-35 and 24.2% chlorine-37, giving an average atomic mass of 35.5 atomic mass units. According to generally accepted cosmology theory, only isotopes of hydrogen and helium, traces of some isotopes of lithium and beryllium, and perhaps some boron, were created at the Big Bang, while all other nuclides were synthesized later, in stars and supernovae, and in interactions between energetic particles such as cosmic rays, and previously produced nuclides. (See nucleosynthesis for details of the various processes thought responsible for isotope production.) The respective abundances of isotopes on Earth result from the quantities formed by these processes, their spread through the galaxy, and the rates of decay for isotopes that are unstable. After the initial coalescence of the Solar System, isotopes were redistributed according to mass, and the isotopic composition of elements varies slightly from planet to planet. This sometimes makes it possible to trace the origin of meteorites. Atomic mass of isotopes The atomic mass (mᵣ) of an isotope (nuclide) is determined mainly by its mass number (i.e. number of nucleons in its nucleus). Small corrections are due to the binding energy of the nucleus (see mass defect), the slight difference in mass between proton and neutron, and the mass of the electrons associated with the atom, the latter because the electron:nucleon ratio differs among isotopes. The mass number is a dimensionless quantity. The atomic mass, on the other hand, is measured using the atomic mass unit based on the mass of the carbon-12 atom. It is denoted with symbols "u" (for unified atomic mass unit) or "Da" (for dalton).
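The chlorine figures just quoted illustrate how a tabulated atomic mass is an abundance-weighted average; a minimal sketch (the isotopic masses 34.969 u and 36.966 u are standard reference values, not taken from this article):

```python
# Abundance-weighted average atomic mass for chlorine, using the 75.8% / 24.2%
# split quoted above. The isotopic masses in u are standard reference values,
# not from the article text.
isotopes = [
    (34.969, 0.758),  # 35Cl: (mass in u, fractional abundance)
    (36.966, 0.242),  # 37Cl
]

assert abs(sum(x for _, x in isotopes) - 1.0) < 1e-9  # abundances must sum to 1
average_mass = sum(m * x for m, x in isotopes)
print(f"average atomic mass of chlorine = {average_mass:.2f} u")  # about 35.45 u
```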
The atomic masses of naturally occurring isotopes of an element determine the standard atomic weight of the element. When the element contains N isotopes, the average atomic mass m̄ is given by the weighted sum m̄ = m1x1 + m2x2 + ... + mNxN, where m1, m2, ..., mN are the atomic masses of each individual isotope, and x1, ..., xN are the relative abundances of these isotopes. Applications of isotopes Purification of isotopes Several applications exist that capitalize on the properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual because it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry. Use of chemical and biological properties Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. Isotope analysis is frequently done by isotope ratio mass spectrometry. For biogenic substances in particular, significant variations of isotopes of C, N, and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration in food products or the geographic origins of products using isoscapes. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them. Isotopic substitution can be used to determine the mechanism of a chemical reaction via the kinetic isotope effect. Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling). Isotopes are commonly used to determine the concentration of various elements or substances using the isotope dilution method, whereby known amounts of isotopically substituted compounds are mixed with the samples and the isotopic signatures of the resulting mixtures are determined with mass spectrometry. Use of nuclear properties A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known concentration of isotope existed. The most widely known example is radiocarbon dating used to determine the age of carbonaceous materials. Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The most common nuclides used with NMR spectroscopy are 1H, 2D, 15N, 13C, and 31P.
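A minimal sketch of the radiometric dating idea mentioned above, using the standard exponential decay law (elapsed time t = t_half · log2(N0/N)); the 5,730-year half-life of carbon-14 is a standard reference value, not taken from this article:

```python
import math

# Radiometric dating from the decay law: if a fraction `remaining` of the
# original radionuclide is left, the elapsed time is t = t_half * log2(1/remaining).
def age_from_fraction(remaining: float, half_life_years: float) -> float:
    return half_life_years * math.log2(1.0 / remaining)

T_HALF_C14 = 5730.0  # years; standard carbon-14 half-life, not from the article
print(age_from_fraction(0.50, T_HALF_C14))  # one half-life  -> 5730.0 years
print(age_from_fraction(0.25, T_HALF_C14))  # two half-lives -> 11460.0 years
```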
Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as 57Fe. Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes. Nuclear medicine and radiation oncology utilize radioisotopes respectively for medical diagnosis and treatment. See also Abundance of the chemical elements Bainbridge mass spectrometer Geotraces Isotope hydrology Isotopomer Nuclear isomer List of nuclides List of particles Mass spectrometry Reference materials for stable isotope analysis Table of nuclides References External links The Nuclear Science web portal Nucleonica The Karlsruhe Nuclide Chart National Nuclear Data Center Portal to large repository of free data and analysis programs from NNDC National Isotope Development Center Coordination and management of the production, availability, and distribution of isotopes, and reference information for the isotope community Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program for isotope production and production research and development International Atomic Energy Agency Homepage of International Atomic Energy Agency (IAEA), an Agency of the United Nations (UN) Atomic Weights and Isotopic Compositions for All Elements Static table, from NIST (National Institute of Standards and Technology) Atomgewichte, Zerfallsenergien und Halbwertszeiten aller Isotope Exploring the Table of the Isotopes at the LBNL Current isotope research and information isotope.info Emergency Preparedness and Response: Radioactive Isotopes by the CDC (Centers for Disease Control and Prevention) Chart of Nuclides Interactive Chart of Nuclides (National Nuclear Data Center) Interactive Chart of the nuclides, isotopes and Periodic Table The LIVEChart of Nuclides – IAEA with isotope data. Annotated bibliography for isotopes from the Alsos Digital Library for Nuclear Issues The Valley of Stability (video) – a virtual "flight" through 3D representation of the nuclide chart, by CEA (France) Nuclear physics
Isotope
[ "Physics", "Chemistry" ]
6,506
[ "Isotopes", "Nuclear physics" ]
19,603,150
https://en.wikipedia.org/wiki/Polybutene
Polybutene is an organic polymer made from a mixture of 1-butene, 2-butene, and isobutylene. Ethylene steam cracker C4s are also used as supplemental feed for polybutene. It is similar to polyisobutylene (PIB), which is produced from essentially pure isobutylene made in a C4 complex of a major refinery. The presence of isomers other than isobutylene can have several effects including: 1) lower reactivity due to steric hindrance at the terminal carbon in, e.g., polyisobutenyl succinic anhydride (PIBSA) dispersant manufacture; 2) the molecular weight—viscosity relationships of the two materials may also be somewhat different. Applications Industrial product applications include sealants, adhesives, extenders for putties used for sealing roofs and windows, coatings, polymer modification, tackified polyethylene films, personal care, and polybutene emulsions. Hydrogenated polybutenes are used in a wide variety of cosmetic preparations, such as lipstick and lip gloss. Polybutene is used in adhesives owing to its tackiness. Polybutene finds a niche use in bird and squirrel repellents and is ubiquitous as the active agent in mouse and insect "sticky traps". An important physical property is that higher molecular weight grades thermally degrade to lower-molecular weight polybutenes; those evaporate as well as degrade to butene monomers, which can also evaporate. This depolymerization mechanism, which allows clean and complete volatilization, is in contrast to mineral oils, which leave gum and sludge, or thermoplastics, which melt and spread. The property is very valuable for a variety of applications. For smoke inhibition in two-stroke engine fuels, the lubricant can degrade at temperatures below the combustion temperature. For electrical lubricants and carriers which might be subject to overheating or fires, polybutene does not result in increased insulation (accelerating the overheating) or conductive carbon deposits. See also Polybutene-1 Oligomers Bibliography Decroocq, S and Casserino, M, Polybutenes, Chapter 17 in Rudnick (Ed), Synthetics, Mineral Oils, and Bio-Based Lubricants: Chemistry and Technology, CRC Press (2005). References Polymer chemistry Polymers Plasticizers
Polybutene
[ "Chemistry", "Materials_science", "Engineering" ]
523
[ "Polymers", "Materials science", "Polymer chemistry" ]
19,604,059
https://en.wikipedia.org/wiki/Flow%20limiter
A flow limiter or flow restrictor is a device to restrict the flow of a fluid, in general a gas or a liquid. Some designs use single-stage or multi-stage orifice plates to handle high and low flow rates. Flow limiters are often used in manufacturing plants as well as households. Safety is usually the main purpose of using a flow limiter. For example, manufacturing facilities and laboratories use flow limiters to prevent injury or death from the noxious gases they work with; the limiter does this by reducing the cross-sectional area through which the gas flows. Uses Reduce flow of fluid (velocity) through a system, e.g. to reduce water usage in a shower Reduce the amount of gas passing through a system Reduce pressure in a system Applications in medical instrumentation As a safety valve to provide limited flow after closing in the event of a broken hose. (See Hydraulic fuse). Specifications Orifice diameter Flow tolerance Media temperature Maximum flow rate for liquid or gas Maximum pressure Design Flow limiters are commonly designed around a plate that reduces the cross-sectional area, with a laser drill used to create a small hole in the plate. The diameter of the hole varies with the flow rate, inlet pressure, and outlet pressure (a sizing sketch based on the standard orifice equation follows at the end of this entry). The design can also be created with drilled orifices with threaded ends. Another flow limiter design option is the use of porous media. This design is created with hundreds of pores within a central plug. The porous media design lowers the velocity of flow, and its lifespan is extended by lower rates of erosion. The disadvantages of the porous media design are poor removal of particles and a drop in pressure. Debris buildup, however, is not a concern, since the multiple holes allow liquid or gas to keep flowing with ease. Features Flow limiters have features that may be added to expand on their applications. The device can have bidirectional flow and increased flow control with multiple openings. Flow limiters can also be made of various materials to improve quality and broaden applications, including metals, metal alloys, and thermoplastics. Metals like copper and aluminium conduct heat and electricity well. Stainless steel has the material strength to withstand high pressures and resists chemical corrosion. Flow limiters can also be made of nylon and fluoropolymers. For the connections to the flow limiter, there are a variety of ends featuring plain ends, pipe clamp ends, flanges, and compression fittings. See also Flow control valve Mass flow controller Needle valve References Hydraulic engineering Plumbing Pneumatics
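The article gives no sizing formula, but a common starting point for the drilled-plate design described above is the standard sharp-edged orifice equation Q = Cd · A · sqrt(2·ΔP/ρ); the sketch below rests on that assumption, and the discharge coefficient of 0.6 and the water density are illustrative values, not taken from this article.

```python
import math

# Orifice sizing from the standard sharp-edged orifice equation
#   Q = Cd * A * sqrt(2 * dP / rho)
# The equation and all numbers here are conventional engineering values,
# not taken from the article.
def orifice_diameter(q_target: float, d_p: float, rho: float = 1000.0, cd: float = 0.6) -> float:
    """Orifice diameter in m passing q_target (m^3/s) at pressure drop d_p (Pa)."""
    area = q_target / (cd * math.sqrt(2.0 * d_p / rho))  # required flow area
    return math.sqrt(4.0 * area / math.pi)               # area -> diameter

# Example: limit water flow to 0.1 L/s with a 2 bar drop across the plate.
d = orifice_diameter(q_target=1e-4, d_p=2e5)
print(f"required orifice diameter: {d * 1000:.2f} mm")  # about 3.3 mm
```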
Flow limiter
[ "Physics", "Engineering", "Environmental_science" ]
545
[ "Hydrology", "Plumbing", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
19,606,838
https://en.wikipedia.org/wiki/Grazing-incidence%20small-angle%20scattering
Grazing-incidence small-angle scattering (GISAS) is a scattering technique used to study nanostructured surfaces and thin films. The scattered probe is either photons (grazing-incidence small-angle X-ray scattering, GISAXS) or neutrons (grazing-incidence small-angle neutron scattering, GISANS). GISAS combines the accessible length scales of small-angle scattering (SAS: SAXS or SANS) and the surface sensitivity of grazing incidence diffraction (GID). Applications A typical application of GISAS is the characterisation of self-assembly and self-organization on the nanoscale in thin films. Systems studied by GISAS include quantum dot arrays, growth instabilities formed during in-situ growth, self-organized nanostructures in thin films of block copolymers, silica mesophases, and nanoparticles. GISAXS was introduced by Levine and Cohen to study the dewetting of gold deposited on a glass surface. The technique was further developed by Naudon and coworkers to study metal agglomerates on surfaces and in buried interfaces. With the advent of nanoscience other applications evolved quickly, first in hard matter such as the characterization of quantum dots on semiconductor surfaces and the in-situ characterization of metal deposits on oxide surfaces. This was soon to be followed by soft matter systems such as ultrathin polymer films, polymer blends, block copolymer films and other self-organized nanostructured thin films that have become indispensable for nanoscience and technology. Future challenges of GISAS may lie in biological applications, such as proteins, peptides, or viruses attached to surfaces or in lipid layers. Interpretation As a hybrid technique, GISAS combines concepts from transmission small-angle scattering (SAS), from grazing-incidence diffraction (GID), and from diffuse reflectometry. From SAS it uses the form factors and structure factors. From GID it uses the scattering geometry close to the critical angles of substrate and film, and the two-dimensional character of the scattering, giving rise to diffuse rods of scattering intensity perpendicular to the surface. With diffuse (off-specular) reflectometry it shares phenomena like the Yoneda/Vineyard peak at the critical angle of the sample, and the scattering theory, the distorted wave Born approximation (DWBA). However, while diffuse reflectivity remains confined to the incident plane (the plane given by the incident beam and the surface normal), GISAS explores the whole scattering from the surface in all directions, typically utilizing an area detector. Thus GISAS gains access to a wider range of lateral and vertical structures and, in particular, is sensitive to the morphology and preferential alignment of nanoscale objects at the surface or inside the thin film. As a particular consequence of the DWBA, the refraction of x-rays or neutrons always has to be taken into account in the case of thin film studies, due to the fact that scattering angles are small, often less than 1 deg. The refraction correction applies to the perpendicular component of the scattering vector with respect to the substrate while the parallel component is unaffected. Thus parallel scattering can often be interpreted within the kinematic theory of SAS, while refractive corrections apply to the scattering along perpendicular cuts of the scattering image, for instance along a scattering rod (a numerical sketch of this correction follows at the end of this entry). In the interpretation of GISAS images some complication arises in the scattering from low-Z films e.g.
organic materials on silicon wafers, when the incident angle is in between the critical angles of the film and the substrate. In this case, the reflected beam from the substrate has a similar strength as the incident beam and thus the scattering from the reflected beam from the film structure can give rise to a doubling of scattering features in the perpendicular direction. This as well as interference between the scattering from the direct and the reflected beam can be fully accounted for by the DWBA scattering theory. These complications are often more than offset by the fact that the dynamic enhancement of the scattering intensity is significant. In combination with the straightforward scattering geometry, where all relevant information is contained in a single scattering image, in-situ and real-time experiments are facilitated. Specifically, self-organization during MBE growth and re-organization processes in block copolymer films under the influence of solvent vapor have been characterized on the relevant timescales ranging from seconds to minutes. Ultimately the time resolution is limited by the x-ray flux on the samples necessary to collect an image and the read-out time of the area detector. Experimental practice Dedicated or partially dedicated GISAXS beamlines exist at most synchrotron light sources (for instance Advanced Light Source (ALS), Australian Synchrotron, APS, ELETTRA (Italy), Diamond (UK), ESRF, National Synchrotron Light Source II (NSLS-II), Pohang Light Source (South Korea), SOLEIL (France), Shanghai Synchrotron (PR China), SSRL). At neutron research facilities, GISANS is increasingly used, typically on small-angle (SANS) instruments or on reflectometers. GISAS does not require any specific sample preparation other than thin film deposition techniques. Film thicknesses may range from a few nm to several hundred nm, and such thin films are still fully penetrated by the x-ray beam. The film surface, the film interior, as well as the substrate-film interface are all accessible. By varying the incidence angle the various contributions can be identified. References External links GISAXS and GIWAXS tutorial by Detlef Smilgies GISAXS wiki by Kevin Yager isGISAXS modelling/fitting software by Rémi Lazzari FitGISAXS modelling/fitting software by David Babonneau BornAgain modelling and fitting software by Scientific Computing Group of MLZ Garching HiPGISAXS Massively Parallel GISAXS simulation code by LBNL X-rays Scattering Synchrotron-related techniques Scientific techniques Neutron scattering Nanotechnology
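A sketch of the refraction correction described in the interpretation section above: within the DWBA, the perpendicular wavevector components inside the film follow from Snell's law, which at grazing angles amounts to replacing sin(α) by sqrt(sin²α - sin²αc), with αc the critical angle; the parallel component is left untouched. The wavelength and angles below are illustrative values, not taken from the article.

```python
import math

# Refraction-corrected perpendicular scattering vector q_z in grazing-incidence
# geometry. Each sin(alpha) is reduced to sqrt(sin^2(alpha) - sin^2(alpha_c))
# inside the film; parallel components are unaffected, as noted in the text.
def qz_corrected(alpha_i_deg, alpha_f_deg, alpha_c_deg, wavelength_nm):
    k = 2.0 * math.pi / wavelength_nm  # wavevector magnitude in 1/nm
    si, sf, sc = (math.sin(math.radians(a)) for a in (alpha_i_deg, alpha_f_deg, alpha_c_deg))
    return k * (math.sqrt(si**2 - sc**2) + math.sqrt(sf**2 - sc**2))

# Illustrative numbers: Cu K-alpha (0.154 nm), 0.5 deg in/out, 0.2 deg critical angle.
print(qz_corrected(0.5, 0.5, 0.2, 0.154))                       # corrected q_z, 1/nm
print(2 * (2 * math.pi / 0.154) * math.sin(math.radians(0.5)))  # uncorrected vacuum q_z
```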
Grazing-incidence small-angle scattering
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,254
[ "Spectrum (physical sciences)", "X-rays", "Neutron scattering", "Electromagnetic spectrum", "Materials science", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics", "Nanotechnology" ]
256,141
https://en.wikipedia.org/wiki/Fluorescent%20tag
In molecular biology and biotechnology, a fluorescent tag, also known as a fluorescent label or fluorescent probe, is a molecule that is attached chemically to aid in the detection of a biomolecule such as a protein, antibody, or amino acid. Generally, fluorescent tagging, or labeling, uses a reactive derivative of a fluorescent molecule known as a fluorophore. The fluorophore selectively binds to a specific region or functional group on the target molecule and can be attached chemically or biologically. Various labeling techniques such as enzymatic labeling, protein labeling, and genetic labeling are widely utilized. Ethidium bromide, fluorescein and green fluorescent protein are common tags. The most commonly labelled molecules are antibodies, proteins, amino acids and peptides which are then used as specific probes for detection of a particular target. History The development of methods to detect and identify biomolecules has been motivated by the ability to improve the study of molecular structure and interactions. Before the advent of fluorescent labeling, radioisotopes were used to detect and identify molecular compounds. Since then, safer methods have been developed that involve the use of fluorescent dyes or fluorescent proteins as tags or probes as a means to label and identify biomolecules. Although fluorescent tagging in this regard is a relatively recent development, fluorescence itself was discovered much earlier. Sir George Stokes developed the Stokes law of fluorescence in 1852, which states that the wavelength of fluorescence emission is greater than that of the exciting radiation. Richard Meyer then coined the term fluorophore in 1897 to describe a chemical group associated with fluorescence. Fluorescein was created as a fluorescent dye by Adolf von Baeyer in 1871, and the method of staining was developed and utilized with the development of fluorescence microscopy in 1911. Ethidium bromide and variants were developed in the 1950s, and in 1994, fluorescent proteins or FPs were introduced. Green fluorescent protein or GFP was discovered by Osamu Shimomura in the 1960s and was developed as a tracer molecule by Douglas Prasher in 1987. FPs led to a breakthrough of live cell imaging with the ability to selectively tag genetic protein regions and observe protein functions and mechanisms. For this breakthrough, Shimomura was awarded the Nobel Prize in 2008. New methods for tracking biomolecules have been developed including the use of colorimetric biosensors, photochromic compounds, biomaterials, and electrochemical sensors. Fluorescent labeling is also a common method in which applications have expanded to enzymatic labeling, chemical labeling, protein labeling, and genetic labeling. Methods for tracking biomolecules There are currently several labeling methods for tracking biomolecules. Some of the methods include the following. Isotope markers Common species for which isotope markers are used include proteins. In this case, amino acids with stable isotopes of either carbon, nitrogen, or hydrogen are incorporated into polypeptide sequences. These polypeptides are then put through mass spectrometry. Because of the exactly defined mass change that these isotopes confer on the peptides, it is possible to tell through the spectrometry graph which peptides contained the isotopes. By doing so, one can extract the protein of interest from several others in a group. Isotopic compounds play an important role as photochromes, described below.
Colorimetric biosensors Biosensors are attached to a substance of interest. Normally, this substance would not be able to absorb light, but with the attached biosensor, light can be absorbed and the emission measured on a spectrophotometer. Additionally, biosensors that are fluorescent can be viewed with the naked eye. Some fluorescent biosensors also have the ability to change color in changing environments (e.g., from blue to red). A researcher can thus obtain data about the surrounding environment from the color visibly displayed by the biosensor-molecule hybrid species. Colorimetric assays are normally used to determine how much concentration of one species there is relative to another. Photochromic compounds Photochromic compounds have the ability to switch between a range or variety of colors. Their ability to display different colors lies in how they absorb light. Different isomeric manifestations of the molecule absorb different wavelengths of light, so that each isomeric species can display a different color based on its absorption. These include photoswitchable compounds, which are proteins that can switch from a non-fluorescent state to a fluorescent one in a certain environment. The most common organic molecule to be used as a photochrome is diarylethene. Other examples of photoswitchable proteins include PADRON-C, rs-FastLIME-s and bs-DRONPA-s, which can be used in plant and mammalian cells alike to watch cells move into different environments. Biomaterials Fluorescent biomaterials are a possible way of using external factors to observe a pathway more visibly. The method involves fluorescently labeling peptide molecules that would alter an organism's natural pathway. When this peptide is inserted into the organism's cell, it can induce a different reaction. This method can be used, for example, to treat a patient and then visibly see the treatment's outcome. Electrochemical sensors Electrochemical sensors can be used for label-free sensing of biomolecules. They detect changes and measure current between a probed metal electrode and an electrolyte containing the target analyte. A known potential is then applied to the electrode from a feedback current, and the resulting current can be measured. For example, one technique using electrochemical sensing includes slowly raising the voltage, causing chemical species at the electrode to be oxidized or reduced. Cell current versus voltage is plotted, which can ultimately identify the quantity of chemical species consumed or produced at the electrode. Fluorescent tags can be used in conjunction with electrochemical sensors for ease of detection in a biological system. Fluorescent labels Of the various methods of labeling biomolecules, fluorescent labels are advantageous in that they are highly sensitive even at low concentration and non-destructive to the target molecule folding and function. Green fluorescent protein is a naturally occurring fluorescent protein from the jellyfish Aequorea victoria that is widely used to tag proteins of interest. GFP emits a photon in the green region of the light spectrum when excited by the absorption of light. The chromophore consists of an oxidized tripeptide, Ser65-Tyr66-Gly67, located within a β barrel. GFP catalyzes the oxidation and only requires molecular oxygen. GFP has been modified by changing the wavelength of light absorbed to include other colors of fluorescence.
YFP or yellow fluorescent protein, BFP or blue fluorescent protein, and CFP or cyan fluorescent protein are examples of GFP variants. These variants are produced by the genetic engineering of the GFP gene. Synthetic fluorescent probes can also be used as fluorescent labels. Advantages of these labels include a smaller size with more variety in color. They can be used to tag proteins of interest more selectively by various methods including chemical recognition-based labeling, such as utilizing metal-chelating peptide tags, and biological recognition-based labeling utilizing enzymatic reactions. However, despite their wide array of excitation and emission wavelengths as well as better stability, synthetic probes tend to be toxic to the cell and so are not generally used in cell imaging studies. Fluorescent labels can be hybridized to mRNA to help visualize interaction and activity, such as mRNA localization. An antisense strand labeled with the fluorescent probe is attached to a single mRNA strand, and can then be viewed during cell development to see the movement of mRNA within the cell. Fluorogenic labels A fluorogen is a ligand (fluorogenic ligand) which is not itself fluorescent, but when it is bound by a specific protein or RNA structure becomes fluorescent. For instance, FAST is a variant of photoactive yellow protein which was engineered to bind chemical mimics of the GFP tripeptide chromophore. Likewise, the Spinach aptamer is an engineered RNA sequence which can bind GFP chromophore chemical mimics, thereby conferring conditional and reversible fluorescence on RNA molecules containing the sequence. Use of tags in fluorescent labeling Fluorescent labeling is known for its non-destructive nature and high sensitivity. This has made it one of the most widely used methods for labeling and tracking biomolecules. Several techniques of fluorescent labeling can be utilized depending on the nature of the target. Enzymatic labeling In enzymatic labeling, a DNA construct is first formed, using a gene and the DNA of a fluorescent protein. After transcription, a hybrid RNA + fluorescent product is formed. The object of interest is attached to an enzyme that can recognize this hybrid DNA. Usually fluorescein is used as the fluorophore. Chemical labeling Chemical labeling or the use of chemical tags utilizes the interaction between a small molecule and a specific genetic amino acid sequence. Chemical labeling is sometimes used as an alternative for GFP. Synthetic proteins that function as fluorescent probes are smaller than GFPs, and therefore can function as probes in a wider variety of situations. Moreover, they offer a wider range of colors and photochemical properties. With recent advancements in chemical labeling, chemical tags are preferred over fluorescent proteins due to the architectural and size limitations of the fluorescent protein's characteristic β-barrel. Alterations of fluorescent proteins would lead to loss of fluorescent properties. Protein labeling Protein labeling uses a short tag to minimize disruption of protein folding and function. Transition metals are used to link specific residues in the tags to site-specific targets such as the N-termini, C-termini, or internal sites within the protein. Examples of tags used for protein labeling include biarsenical tags, histidine tags, and FLAG tags.
Genetic labeling Fluorescence in situ hybridization (FISH) is an example of a genetic labeling technique that utilizes probes that are specific for chromosomal sites along the length of a chromosome, also known as chromosome painting. Multiple fluorescent dyes that each have a distinct excitation and emission wavelength are bound to a probe which is then hybridized to chromosomes. A fluorescence microscope can detect the dyes present and send the information to a computer that can reveal the karyotype of a cell. This technique allows abnormalities such as deletions and duplications to be revealed. Cell imaging Chemical tags have been tailored for imaging technologies more so than fluorescent proteins because chemical tags can localize photosensitizers closer to the target proteins. Proteins can then be labeled and detected with imaging such as super-resolution microscopy, Ca2+-imaging, pH sensing, hydrogen peroxide detection, chromophore assisted light inactivation, and multi-photon light microscopy. In vivo imaging studies in live animals have been performed for the first time with the use of the Halo-tag, a monomeric protein derived from a bacterial haloalkane dehalogenase. The Halo-tag covalently links to its ligand and allows for better expression of soluble proteins. Advantages Although fluorescent dyes may not have the same sensitivity as radioactive probes, they are able to show real-time activity of molecules in action. Moreover, radiation safety and handling are no longer concerns. With the development of fluorescent tagging, fluorescence microscopy has allowed the visualization of specific proteins in both fixed and live cell images. Localization of specific proteins has led to important concepts in cellular biology such as the functions of distinct groups of proteins in cellular membranes and organelles. In live cell imaging, fluorescent tags enable movements of proteins and their interactions to be monitored. Recent advances in methods involving fluorescent tags have led to the visualization of mRNA and its localization within various organisms. Live cell imaging of RNA can be achieved by introducing synthesized RNA that is chemically coupled with a fluorescent tag into living cells by microinjection. This technique was used to show how the oskar mRNA in the Drosophila embryo localizes to the posterior region of the oocyte. See also Molecular tagging velocimetry Spectrophotometer for Nucleic Acid Measurements Protein tags Notes External links Molecular biology Fluorescence techniques
Fluorescent tag
[ "Chemistry", "Biology" ]
2,529
[ "Biochemistry", "Fluorescence techniques", "Molecular biology" ]
256,310
https://en.wikipedia.org/wiki/John%20Tyndall
John Tyndall (2 August 1820 – 4 December 1893) was an Irish physicist and chemist. His scientific fame arose in the 1850s from his study of diamagnetism. Later he made discoveries in the realms of infrared radiation and the physical properties of air, proving the connection between atmospheric CO2 and what is now known as the greenhouse effect in 1859. Tyndall also published more than a dozen science books which brought state-of-the-art 19th century experimental physics to a wide audience. From 1853 to 1887 he was professor of physics at the Royal Institution of Great Britain in London. He was elected as a member to the American Philosophical Society in 1868. Early years and education Tyndall was born in Leighlinbridge, County Carlow, Ireland. His father was a local police constable, descended from Gloucestershire emigrants who settled in southeast Ireland around 1670. Tyndall attended the local schools (Ballinabranna Primary School) in County Carlow until his late teens, and was probably an assistant teacher near the end of his time there. Subjects learned at school notably included technical drawing and mathematics with some applications of those subjects to land surveying. He was hired as a draftsman by the Ordnance Survey of Ireland in his late teens in 1839, and moved to work for the Ordnance Survey for Great Britain in 1842. In the decade of the 1840s, a railway-building boom was in progress, and Tyndall's land surveying experience was valuable and in demand by the railway companies. Between 1844 and 1847, he was lucratively employed in railway construction planning. In 1847, Tyndall opted to become a mathematics and surveying teacher at Queenwood College, a boarding school in Hampshire. Recalling this decision later, he wrote: "the desire to grow intellectually did not forsake me; and, when railway work slackened, I accepted in 1847 a post as master in Queenwood College." Another recently arrived young teacher at Queenwood was Edward Frankland, who had previously worked as a chemical laboratory assistant for the British Geological Survey. Frankland and Tyndall became good friends. On the strength of Frankland's prior knowledge, they decided to go to Germany to further their education in science. Among other things, Frankland knew that certain German universities were ahead of any in Britain in experimental chemistry and physics. (British universities were still focused on classics and mathematics and not laboratory science.) The pair moved to Germany in summer 1848 and enrolled at the University of Marburg, attracted by the reputation of Robert Bunsen as a teacher. Tyndall studied under Bunsen for two years. Perhaps more influential for Tyndall at Marburg was Professor Hermann Knoblauch, with whom Tyndall maintained communications by letter for many years afterwards. Tyndall's Marburg dissertation was a mathematical analysis of screw surfaces in 1850 (under Friedrich Ludwig Stegmann). Tyndall stayed in Germany for a further year doing research on magnetism with Knoblauch, including some months' visit at the Berlin laboratory of Knoblauch's main teacher, Heinrich Gustav Magnus. It is clear today that Bunsen and Magnus were among the very best experimental science instructors of the era. Thus, when Tyndall returned to live in England in summer 1851, he probably had as good an education in experimental science as anyone in England.
Early scientific work Tyndall's early original work in physics was his experiments on magnetism and diamagnetic polarity, on which he worked from 1850 to 1856. His two most influential reports were the first two, co-authored with Knoblauch. One of them was entitled "The magneto-optic properties of crystals, and the relation of magnetism and diamagnetism to molecular arrangement", dated May 1850. The two described an inspired experiment, with an inspired interpretation. These and other magnetic investigations very soon made Tyndall known among the leading scientists of the day. He was elected a Fellow of the Royal Society in 1852. In his search for a suitable research appointment, he was able to ask the longtime editor of the leading German physics journal (Poggendorff) and other prominent men to write testimonials on his behalf. In 1853, he attained the prestigious appointment of Professor of Natural Philosophy (Physics) at the Royal Institution in London, due in no small part to the esteem his work had garnered from Michael Faraday, the leader of magnetic investigations at the Royal Institution. About a decade later Tyndall was appointed the successor to the positions held by Michael Faraday at the Royal Institution on Faraday's retirement. Alpine mountaineering and glaciology Tyndall visited the Alps mountains in 1856 for scientific reasons and ended up becoming a pioneering mountain climber. He visited the Alps almost every summer from 1856 onward, was a member of the very first mountain-climbing team to reach the top of the Weisshorn (1861), and led one of the early teams to reach the top of the Matterhorn (1868). His is one of the names associated with the "Golden age of alpinism" — the mid-Victorian years when the more difficult of the Alpine peaks were summited for the first time. In the Alps, Tyndall studied glaciers, and especially glacier motion. His explanation of glacial flow brought him into dispute with others, particularly James David Forbes. Much of the early scientific work on glacier motion had been done by Forbes, but Forbes at that time did not know of the phenomenon of regelation, which was discovered a little later by Michael Faraday. Regelation played a key role in Tyndall's explanation. Forbes did not see regelation in the same way at all. Complicating their debate, a disagreement arose publicly over who deserved to get investigator credit for what. Articulate friends of Forbes, as well as Forbes himself, thought that Forbes should get the credit for most of the good science, whereas Tyndall thought the credit should be distributed more widely. Tyndall commented: "The idea of semi-fluid motion belongs entirely to Louis Rendu; the proof of the quicker central flow belongs in part to Rendu, but almost wholly to Louis Agassiz and Forbes; the proof of the retardation of the bed belongs to Forbes alone; while the discovery of the locus of the point of maximum motion belongs, I suppose, to me." When Forbes and Tyndall were in the grave, their disagreement was continued by their respective official biographers. Everyone tried to be reasonable, but agreement was not attained. More disappointingly, aspects of glacier motion remained not understood or not proved. Numerous landforms and geographical features are named for John Tyndall, including Tyndall Glacier in Chile, Tyndall Glacier in Colorado, Tyndall Glacier in Alaska, Mount Tyndall in California, and Mount Tyndall in Tasmania. 
Main scientific work Work on glaciers alerted Tyndall to the research of Horace Bénédict de Saussure into the heating effect of sunlight, and the concept of Joseph Fourier, developed by Claude Pouillet and William Hopkins, that heat from the sun penetrates the atmosphere more easily than the "obscure heat" (infrared "terrestrial radiation") from the warmed Earth, causing what we now call the greenhouse effect. In the spring of 1859, Tyndall began research into how thermal radiation, both visible and obscure, affects different gases and aerosols. He developed differential absorption spectroscopy using the electro-magnetic thermopile devised by Macedonio Melloni. Tyndall began intensive experiments on 9 May 1859, at first without significant results, then improved the sensitivity of the apparatus and on 18 May wrote in his journal "Experimented all day; the subject is completely in my hands!" On 26 May he gave the Royal Society a note which described his methods, and stated "With the exception of the celebrated memoir of M. Pouillet on Solar Radiation through the atmosphere, nothing, so far as I am aware, has been published on the transmission of radiant heat through gaseous bodies. We know nothing of the effect even of air upon heat radiated from terrestrial sources." On 10 June, he demonstrated the research in a Royal Society lecture, noting that coal gas and ether strongly absorbed (infrared) radiant heat, and his experimental confirmation of the (greenhouse effect) concept; that solar heat crosses an atmosphere, but "when the heat is absorbed by the planet, it is so changed in quality that the rays emanating from the planet cannot get with the same freedom back into space. Thus the atmosphere admits of the entrance of solar heat; but checks its exit, and the result is a tendency to accumulate heat at the surface of the planet." Tyndall's studies of the action of radiant energy on the constituents of air led him onto several lines of inquiry, and his original research results included the following: Tyndall explained the heat in the Earth's atmosphere in terms of the capacities of the various gases in the air to absorb radiant heat, in the form of infrared radiation. His measuring device, which used thermopile technology, is an early landmark in the history of absorption spectroscopy of gases. He was the first to correctly measure the relative infrared absorptive powers of the gases nitrogen, oxygen, water vapour, carbon dioxide, ozone, methane, and other trace gases and vapours. He concluded that water vapour is the strongest absorber of radiant heat in the atmosphere and is the principal gas controlling air temperature. Absorption by the other gases is not negligible but relatively small. Prior to Tyndall it was widely surmised that the Earth's atmosphere warms the surface in what was later called a greenhouse effect, but he was the first to prove it. The proof was that water vapour strongly absorbed infrared radiation. Three years earlier, in 1856, the American scientist Eunice Newton Foote had announced experiments demonstrating that water vapour and carbon dioxide absorb heat from solar radiation, but she did not differentiate the effects of infrared. Relatedly, Tyndall in 1860 was first to demonstrate and quantify that visually transparent gases are infrared emitters. He devised demonstrations that advanced the question of how radiant heat is absorbed and emitted at the molecular level. 
He appears to be the first person to have demonstrated experimentally that emission of heat in chemical reactions has its physical origination within the newly created molecules (1864). He produced instructive demonstrations involving the incandescent conversion of infrared into visible light at the molecular level, which he called calorescence (1865), in which he used materials that are transparent to infrared and opaque to visible light or vice versa. He usually referred to infrared as "radiant heat", and sometimes as "ultra-red undulations", as the word "infrared" did not start coming into use until the 1880s. His main reports of the 1860s were republished as a 450-page collection in 1872 under the title Contributions to Molecular Physics in the Domain of Radiant Heat. In the investigations on radiant heat in air it had been necessary to use air from which all traces of floating dust and other particulates had been removed. A very sensitive way to detect particulates is to bathe the air with intense light. The scattering of light by particulate impurities in air and other gases, and in liquids, is known today as the Tyndall effect or Tyndall scattering. In studying this scattering during the late 1860s Tyndall was a beneficiary of recent improvements in electric-powered lights. He also had the use of good light concentrators. He developed the nephelometer and similar instruments that show properties of aerosols and colloids through concentrated light beams against a dark background and are based on exploiting the Tyndall effect. (When combined with microscopes, the result is the ultramicroscope, which was developed later by others.) He was the first to observe and report the phenomenon of thermophoresis in aerosols. He spotted it surrounding hot objects while investigating the Tyndall effect with focused lightbeams in a dark room. He devised a better way to demonstrate it, and then simply reported it (1870), without investigating the physics of it in depth. In radiant-heat experiments that called for much laboratory expertise in the early 1860s, he showed for a variety of readily vaporisable liquids that, molecule for molecule, the vapour form and the liquid form have essentially the same power to absorb radiant heat. (In modern experiments using narrow-band spectra, some small differences are found that Tyndall's equipment was unable to detect; see e.g. absorption spectrum of H2O.) He consolidated and enhanced the results of Paul-Quentin Desains, James D. Forbes, Hermann Knoblauch and others demonstrating that the principal properties of visible light can be reproduced for radiant heat – namely reflection, refraction, diffraction, polarisation, depolarisation, double refraction, and rotation in a magnetic field. Using his expertise about radiant heat absorption by gases, he invented a system for measuring the amount of carbon dioxide in a sample of exhaled human breath (1862, 1864). The basic principle of Tyndall's system is in daily use in hospitals today for monitoring patients under anaesthesia. (See capnometry.) When studying the absorption of radiant heat by ozone, he came up with a demonstration that helped confirm or reaffirm that ozone is an oxygen cluster (1862). In the lab he came up with the following simple way to obtain "optically pure" air, i.e. air that has no visible signs of particulate matter. He built a square wooden box with a couple of glass windows on it. Before closing the box, he coated the inside walls and floor of the box with glycerin, which is a sticky syrup. 
He found that after a few days' wait the air inside the box was entirely particulate-free when examined with strong light beams through the glass windows. The various floating-matter particulates had all ended up getting stuck to the walls or settling on the sticky floor. Now, in the optically pure air there were no signs of any "germs", i.e. no signs of floating micro-organisms. Tyndall sterilised some meat-broths by simply boiling them, and then compared what happened when he let these meat-broths sit in the optically pure air, and in ordinary air. The broths sitting in the optically pure air remained "sweet" (as he said) to smell and taste after many months of sitting, while the ones in ordinary air started to become putrid after a few days. This demonstration extended Louis Pasteur's earlier demonstrations that the presence of micro-organisms is a precondition for biomass decomposition. However, the next year (1876) Tyndall failed to consistently reproduce the result. Some of his supposedly heat-sterilized broths rotted in the optically pure air. From this Tyndall was led to find viable bacterial spores (endospores) in supposedly heat-sterilized broths. He discovered the broths had been contaminated with dry bacterial spores from hay in the lab. Ordinary bacteria are killed by simple boiling, but some bacteria have a spore form that can survive boiling, he correctly contended, citing research by Ferdinand Cohn. Tyndall found a way to eradicate the bacterial spores that came to be known as "Tyndallization". Tyndallization historically was the earliest known effective way to destroy bacterial spores. At the time, it affirmed the "germ theory" against a number of critics whose experimental results had been defective from the same cause. During the mid-1870s Pasteur and Tyndall were in frequent communication. He also invented a better fireman's respirator, a hood that filtered smoke and noxious gas from the air (1871, 1874). In the late 1860s and early 1870s he wrote an introductory book about sound propagation in air, and was a participant in a large-scale British project to develop a better foghorn. In laboratory demonstrations motivated by foghorn issues, Tyndall established that sound is partially reflected (i.e. partially bounced back like an echo) at the location where an air mass of one temperature meets another air mass of a different temperature; and more generally when a body of air contains two or more air masses of different densities or temperatures, the sound travels poorly because of reflections occurring at the interfaces between the air masses, and very poorly when many such interfaces are present. (He then argued, though inconclusively, that this is the usual main reason why the same distant sound, e.g. foghorn, can be heard stronger or fainter on different days or at different times of day.) An index of 19th-century scientific research journals credits John Tyndall as the author of more than 147 papers, with practically all of them dated between 1850 and 1884 – an average of more than four papers a year over that 35-year period. In his lectures at the Royal Institution Tyndall put a great value on, and was talented at producing, lively, visible demonstrations of physics concepts. In one lecture, Tyndall demonstrated the propagation of light down through a stream of falling water via total internal reflection of the light. It was referred to as the "light fountain". 
It is historically significant today because it demonstrates the scientific foundation for modern fibre optic technology. During the second half of the 20th century Tyndall was usually credited with being the first to make this demonstration. However, Jean-Daniel Colladon published a report of it in Comptes Rendus in 1842, and there is some suggestive evidence that Tyndall's knowledge of it came ultimately from Colladon and no evidence that Tyndall claimed to have originated it himself. Molecular physics of radiant heat Tyndall was an experimenter and laboratory apparatus builder, not an abstract model builder. But in his experiments on radiation and the heat-absorptive power of gases, he had an underlying agenda to understand the physics of molecules. Tyndall said in 1879: "During nine years of labour on the subject of radiation [in the 1860s], heat and light were handled throughout by me, not as ends, but as instruments by the aid of which the mind might perchance lay hold upon the ultimate particles of matter." This agenda is explicit in the title he picked for his 1872 book Contributions to Molecular Physics in the Domain of Radiant Heat. It is present less explicitly in the spirit of his widely read 1863 book Heat Considered as a Mode of Motion. Besides heat he also saw magnetism and sound propagation as reducible to molecular behaviours. Invisible molecular behaviours were the ultimate basis of all physical activity. With this mindset, and his experiments, he outlined an account whereby differing types of molecules have differing absorptions of infrared radiation because their molecular structures give them differing oscillating resonances. He had arrived at the idea of oscillating resonances because he had seen that any one type of molecule has differing absorptions at differing radiant frequencies, and he was entirely persuaded that the only difference between one frequency and another is the frequency. He had also seen that the absorption behaviour of molecules is quite different from that of the atoms composing the molecules. For example, the gas nitric oxide (NO) absorbed more than a thousand times more infrared radiation than either nitrogen (N2) or oxygen (O2). He had also seen in several kinds of experiments that even a gas that is a weak absorber of broad-spectrum radiant heat will strongly absorb the radiant heat coming from a separate body of the same type of gas. That demonstrated a kinship between the molecular mechanisms of absorption and emission. Such a kinship was also in evidence in experiments by Balfour Stewart and others, cited and extended by Tyndall, that showed with respect to broad-spectrum radiant heat that molecules that are weak absorbers are weak emitters and strong absorbers are strong emitters. (For example, rock-salt is an exceptionally poor absorber of heat via radiation, and a good absorber of heat via conduction. When a plate of rock-salt is heated via conduction and let stand on an insulator, it takes an exceptionally long time to cool down; i.e., it is a poor emitter of infrared.) The kinship between absorption and emission was also consistent with some generic or abstract features of resonators. The chemical decomposition of molecules by lightwaves (photochemical effect) convinced Tyndall that the resonator could not be the molecule as a whole unit; it had to be some substructure, because otherwise the photochemical effect would be impossible. 
But he was without testable ideas as to the form of this substructure, and did not partake in speculation in print. His promotion of the molecular mindset, and his efforts to experimentally expose what molecules are, has been discussed by one historian under the title "John Tyndall, The Rhetorician of Molecularity". Educator Besides being a scientist, John Tyndall was a science teacher and evangelist for the cause of science. He spent a significant amount of his time disseminating science to the general public. He gave hundreds of public lectures to non-specialist audiences at the Royal Institution in London. When he went on a public lecture tour in the US in 1872, large crowds of non-scientists paid fees to hear him lecture about the nature of light. A typical statement of Tyndall's reputation at the time is this from a London publication in 1878: "Following the precedent set by Faraday, Professor Tyndall has succeeded not only in original investigation and in teaching science soundly and accurately, but in making it attractive.... When he lectures at the Royal Institution the theatre is crowded." Tyndall said of the occupation of teacher "I do not know a higher, nobler, and more blessed calling." His greatest audience was gained ultimately through his books, most of which were not written for experts or specialists. He published more than a dozen science books. From the mid-1860s on, he was one of the world's most famous living physicists, due firstly to his skill and industry as a tutorialist. Most of his books were translated into German and French with his main tutorials staying in print in those languages for decades. As an indicator of his teaching attitude, here are his concluding remarks to the reader at the end of a 200-page tutorial book for a "youthful audience", The Forms of Water (1872): "Here, my friend, our labours close. It has been a true pleasure to me to have you at my side so long. In the sweat of our brows we have often reached the heights where our work lay, but you have been steadfast and industrious throughout, using in all possible cases your own muscles instead of relying upon mine. Here and there I have stretched an arm and helped you to a ledge, but the work of climbing has been almost exclusively your own. It is thus that I should like to teach you all things; showing you the way to profitable exertion, but leaving the exertion to you.... Our task seems plain enough, but you and I know how often we have had to wrangle resolutely with the facts to bring out their meaning. The work, however, is now done, and you are master of a fragment of that sure and certain knowledge which is founded on the faithful study of nature.... Here then we part. And should we not meet again, the memory of these days will still unite us. Give me your hand. Good bye." As another indicator, here is the opening paragraph of his 350-page tutorial entitled Sound (1867): "In the following pages I have tried to render the science of acoustics interesting to all intelligent persons, including those who do not possess any special scientific culture. The subject is treated experimentally throughout, and I have endeavoured so to place each experiment before the reader that he should realise it as an actual operation." In the preface to the 3rd edition of this book, he reports that earlier editions were translated into Chinese at the expense of the Chinese government and translated into German under the supervision of Hermann von Helmholtz (a big name in the science of acoustics). 
His first published tutorial, which was about glaciers (1860), similarly states: "The work is written with a desire to interest intelligent persons who may not possess any special scientific culture." His most widely praised tutorial, and probably his biggest seller, was the 550-page "Heat: a Mode of Motion" (1863; updated editions until 1880). It was in print for at least 50 years, and is in print today. Its primary feature is that, as James Clerk Maxwell said in 1871, "the doctrines of the science [of heat] are forcibly impressed on the mind by well-chosen illustrative experiments." Tyndall's three longest tutorials, namely Heat (1863), Sound (1867), and Light (1873), represented state-of-the-art experimental physics at the time they were written. Much of their contents comprised recent major innovations in the understanding of their respective subjects, which Tyndall was the first writer to present to a wider audience. One caveat is called for about the meaning of "state of the art". The books were devoted to laboratory science and they avoided mathematics. In particular, they contain absolutely no infinitesimal calculus. Mathematical modelling using infinitesimal calculus, especially differential equations, was a component of the state-of-the-art understanding of heat, light and sound at the time. Demarcation of science from religion The majority of the progressive and innovative British physicists of Tyndall's generation were conservative and orthodox on matters of religion. These included, for example, James Joule, Balfour Stewart, James Clerk Maxwell, George Gabriel Stokes and Lord Kelvin – all of them investigating heat or light contemporaneously with Tyndall. These conservatives believed, and sought to strengthen the basis for believing, that religion and science were consistent and harmonious with each other. Tyndall, however, was a member of a club that vocally supported Charles Darwin's theory of evolution and sought to strengthen the barrier, or separation, between religion and science. The most prominent member of this club was the anatomist Thomas Henry Huxley. Tyndall first met Huxley in 1851 and the two had a lifelong friendship. Chemist Edward Frankland and mathematician Thomas Archer Hirst, both of whom Tyndall had known since before going to university in Germany, were members too. Others included the social philosopher Herbert Spencer. Though not nearly so prominent as Huxley in controversy over philosophical problems, Tyndall played his part in communicating to the educated public what he thought were the virtues of having a clear separation between science (knowledge & rationality) and religion (faith & spirituality). As the elected president of the British Association for the Advancement of Science in 1874, he gave a long keynote speech at the Association's annual meeting held that year in Belfast. The speech gave a favourable account of the history of evolutionary theories, mentioning Darwin's name favourably more than 20 times, and concluded by asserting that religious sentiment should not be permitted to "intrude on the region of knowledge, over which it holds no command". This was a hot topic. The newspapers carried the report of it on their front pages – in Britain, Ireland & North America, even the European Continent – and many critiques of it appeared soon after. The attention and scrutiny won new friends for the evolutionists' philosophical position and brought it closer to mainstream ascendancy. 
In Rome in 1864, Pope Pius IX in his Syllabus of Errors decreed that it was an error that "reason is the ultimate standard by which man can and ought to arrive at knowledge" and an error that "divine revelation is imperfect" in the Bible – and anyone maintaining those errors was to be "anathematized" – and in 1888 Pope Leo XIII decreed as follows: "The fundamental doctrine of rationalism is the supremacy of the human reason, which, refusing due submission to the divine and eternal reason, proclaims its own independence... A doctrine of such character is most hurtful both to individuals and to the State... It follows that it is quite unlawful to demand, to defend, or to grant, unconditional [or promiscuous] freedom of thought, speech, writing, or religion." Those principles and Tyndall's principles were profound enemies. Fortunately for Tyndall, he did not need to contest them directly in Britain. Even in Italy, Huxley and Darwin were awarded honorary medals and most of the Italian governing class was hostile to the papacy. But in Ireland during Tyndall's lifetime the majority of the population grew increasingly doctrinaire and vigorous in its Roman Catholicism and also grew stronger politically. Between 1886 and 1893, Tyndall was active in the debate in England about whether to give the Catholics of Ireland more freedom to go their own way. Like the great majority of Irish-born scientists of the 19th century he opposed the Irish Home Rule Movement. He had ardent views about it, which were published in newspapers and pamphlets. For example, in an opinion piece in The Times on 27 December 1890 he saw priests and Catholicism as "the heart and soul of this movement" and wrote that placing the non-Catholic minority under the dominion of "the priestly horde" would be "an unspeakable crime". He tried unsuccessfully to get the UK's premier scientific society to denounce the Irish Home Rule proposal as contrary to the interests of science. In several essays included in his book Fragments of Science for Unscientific People, Tyndall attempted to dissuade people from believing in the potential effectiveness of prayer. At the same time, though, he was not broadly anti-religious. Many of his readers interpreted Tyndall to be a confirmed agnostic, though he never explicitly declared himself to be so. The following statement, made in 1867 and reiterated in 1878, is an example of Tyndall's agnostic mindset: "The phenomena of matter and force come within our intellectual range... but behind, and above, and around us the real mystery of the universe lies unsolved, and, as far as we are concerned, is incapable of solution.... Let us lower our heads, and acknowledge our ignorance, priest and philosopher, one and all." Private life Tyndall did not marry until age 55. His bride, Louisa Hamilton, was the 30-year-old daughter of a member of parliament (Lord Claud Hamilton, M.P.). The following year, 1877, they built a summer chalet at Belalp in the Swiss Alps. Before getting married Tyndall had been living for many years in an upstairs apartment at the Royal Institution and continued living there after marriage until 1885, when he and Louisa moved to a house near Haslemere, 45 miles southwest of London. The marriage was a happy one and without children. He retired from the Royal Institution at age 66, citing ill health. Tyndall became financially well-off from sales of his popular books and fees from his lectures (but there is no evidence that he owned commercial patents). 
For many years he received non-trivial payments for being a part-time scientific advisor to a couple of quasi-governmental agencies, and partly donated the payments to charity. His successful lecture tour of the United States in 1872 netted him a substantial sum, all of which he promptly donated to a trustee for fostering science in America. Late in life his money donations went most visibly to the Irish Unionist political cause. When he died, his wealth was £22,122. For comparison's sake, the income of a police constable in London was about £80 per year at the time. Death In his last years Tyndall often took chloral hydrate to treat his insomnia. When bedridden and ailing, he died from an accidental overdose of this drug in 1893 at the age of 73, and was buried at Haslemere. The overdose was administered by his wife Louisa. "My darling," said Tyndall when he realized what had happened, "you have killed your John." Afterwards, Tyndall's wife took possession of his papers and assigned herself supervisor of an official biography of him. She procrastinated on the project, however, and it was still unfinished when she died in 1940 aged 95. The book eventually appeared in 1945, written by A. S. Eve and C. H. Creasey, whom Louisa Tyndall had authorised shortly before her death. John Tyndall is commemorated by a memorial (the Tyndalldenkmal) erected on the mountain slopes above the village of Belalp, where he had his holiday home, and in sight of the Aletsch Glacier, which he had studied. John Tyndall's books Tyndall, J. (1860), The glaciers of the Alps, Being a narrative of excursions and ascents, an account of the origin and phenomena of glaciers and an exposition of the physical principles to which they are related, (1861 edition) Ticknor and Fields, Boston Tyndall, J. (1862), Mountaineering in 1861. A vacation tour, Longman, Green, Longman, and Roberts, London Tyndall, J. (1865), On Radiation: One Lecture (40 pages) Tyndall, J. (1868), Heat: A mode of motion, (1869 edition) D. Appleton, New York Tyndall, J. (1869), Natural Philosophy in Easy Lessons (180 pages) (a physics book intended for use in secondary schools) Tyndall, J. (1870), Faraday as a discoverer, Longmans, Green, London Tyndall, J. (1870), Three Scientific Addresses by Prof. John Tyndall (75 pages) Tyndall, J. (1870), Notes of a Course of Nine Lectures on Light (80 pages) Tyndall, J. (1870), Notes of a Course of Seven Lectures on Electrical Phenomena and Theories (50 pages) Tyndall, J. (1870), Researches on diamagnetism and magne-crystallic action: including the question of diamagnetic polarity, (a compilation of 1850s research reports), Longmans, Green, London Tyndall, J. (1871), Hours of exercise in the Alps, Longmans, Green, and Co., London Tyndall, J. (1871), Fragments of Science: A Series of Detached Essays, Lectures, and Reviews, (1872 edition), Longmans, Green, London Tyndall, J. (1872), Contributions to Molecular Physics in the Domain of Radiant Heat, (a compilation of 1860s research reports), (1873 edition), D. Appleton and Company, New York Tyndall, J. (1873), The forms of water in clouds & rivers, ice & glaciers, H. S. King & Co., London Tyndall, J. (1873), Six Lectures on Light (290 pages) Tyndall, J. (1876), Lessons in Electricity at the Royal Institution (100 pages), (intended for secondary school students) Tyndall, J. (1878), Sound; delivered in eight lectures, (1969 edition), Greenwood Press, New York Tyndall, J. 
(1882), Essays on the floating matter of the air, in relation to putrefaction and infection, D. Appleton, New York Tyndall, J. (1887), Light and electricity: notes of two courses of lectures before the Royal Institution of Great Britain, D. Appleton and Company, New York Tyndall, J. (1892), New Fragments (miscellaneous essays for a broad audience), D. Appleton, New York See also Ice sheet dynamics Greenhouse gas John Tyndall's system for measuring radiant heat absorption in gases Notes Sources Biographies of John Tyndall A. S. Eve and C. H. Creasey, Life and Work of John Tyndall (1945), 430 pages. This is the "official" biography. William Tulloch Jeans wrote a 100-page biography of Professor Tyndall in 1887 (the year Tyndall retired from the Royal Institution). Downloadable. See also The Lives of Electricians: Professors Tyndall, Wheatstone, and Morse. (1887, Whittaker & Co.) Louisa Charlotte Tyndall, his wife, wrote an 8-page biography of John Tyndall that was published in 1899 in Dictionary of National Biography (volume 57). It is readable online (and a 1903 republication of the same biography is also readable online). Edward Frankland, a longtime friend, wrote a 16-page biography of John Tyndall as an obituary in 1894 in a scientific journal. It is readable online, and gives an account of Tyndall's vocational development prior to 1853. Arthur Whitmore Smith, a professor of physics, wrote a 10-page biography of John Tyndall in 1920 in a scientific monthly. Readable online. John Walter Gregory, a naturalist, wrote a 9-page obituary of John Tyndall in 1894 in a natural science journal. Readable online. An early, 8-page profile of John Tyndall appeared in 1864 in Portraits of Men of Eminence in Literature, Science and Art, Volume II, pages 25–32. A brief profile of Tyndall based on information supplied by Tyndall himself appeared in 1874. Claud Schuster, John Tyndall as a Mountaineer, 56-page essay included in Schuster's book Postscript to Adventure, year 1950 (New Alpine Library: Eyre & Spottiswoode, London). Roland Jackson, The Ascent of John Tyndall (2018), the first major biography of Tyndall since 1945. Further reading External links A blog maintained by a historian who is involved in transcribing Tyndall's letters. The Tyndall Correspondence Project website 1820 births 1893 deaths Atmospheric physicists Experimental physicists University of Marburg alumni 19th-century Irish physicists Optical physicists Glaciologists Irish mountain climbers Scientists from County Carlow Royal Medal winners Fellows of the Royal Society Drug-related deaths in England People from Leighlinbridge
John Tyndall
[ "Physics" ]
7,937
[ "Experimental physics", "Experimental physicists" ]
256,322
https://en.wikipedia.org/wiki/Peter%20Guthrie%20Tait
Peter Guthrie Tait (28 April 1831 – 4 July 1901) was a Scottish mathematical physicist and early pioneer in thermodynamics. He is best known for the mathematical physics textbook Treatise on Natural Philosophy, which he co-wrote with Lord Kelvin, and his early investigations into knot theory. His work on knot theory contributed to the eventual formation of topology as a mathematical discipline. His name is known in graph theory mainly for Tait's conjecture on cubic graphs. He is also one of the namesakes of the Tait–Kneser theorem on osculating circles. Early life Tait was born in Dalkeith on 28 April 1831, the only son of Mary Ronaldson and John Tait, secretary to the 5th Duke of Buccleuch. He was educated at Dalkeith Grammar School then Edinburgh Academy, where he began his lifelong friendship with James Clerk Maxwell. He studied mathematics and physics at the University of Edinburgh, and then went to Peterhouse, Cambridge, graduating as senior wrangler and first Smith's prizeman in 1852. As a fellow and lecturer of his college he remained at the university for a further two years, before leaving to take up the professorship of mathematics at Queen's College, Belfast; there he made the acquaintance of Thomas Andrews, whom he joined in researches on the density of ozone and the action of the electric discharge on oxygen and other gases. Andrews also introduced him to Sir William Rowan Hamilton and quaternions. Middle years In 1860, Tait succeeded his old master, James D. Forbes, as professor of natural philosophy at the University of Edinburgh. He occupied the Chair until shortly before his death. The first scientific paper under Tait's name alone was published in 1860. His earliest work dealt mainly with mathematical subjects, and especially with quaternions, of which he was the leading exponent after their originator, William Rowan Hamilton. He was the author of two textbooks on them – one an Elementary Treatise on Quaternions (1867), written with the advice of Hamilton, though not published till after Hamilton's death, and the other an Introduction to Quaternions (1873), in which he was aided by Philip Kelland (1808–1879). Kelland was one of his teachers and colleagues at the University of Edinburgh. Quaternions was also one of the themes of his address as president of the mathematical and physical section of the British Association for the Advancement of Science in 1871. Tait also collaborated with Lord Kelvin on Treatise on Natural Philosophy in 1867. Tait also produced original work in mathematical and experimental physics. In 1864, he published a short paper on thermodynamics, and from that time his contributions to that and kindred departments of science became frequent and important. In 1871, he emphasised the significance and future importance of the principle of the dissipation of energy (second law of thermodynamics). In 1873 he took thermoelectricity for the subject of his discourse as Rede lecturer at Cambridge, and in the same year he presented the first sketch of his well-known thermoelectric diagram before the Royal Society of Edinburgh. Two years later, researches on "Charcoal Vacua" with James Dewar led him to see the true dynamical explanation of the Crookes radiometer in the large mean free path of the molecule of the highly rarefied air. From 1879 to 1888, he engaged in difficult experimental investigations. These began with an inquiry into what corrections were required for thermometers operating at great pressure. 
This was for the benefit of thermometers employed by the Challenger expedition for observing deep-sea temperatures, and the work was extended to include the compressibility of water, glass, and mercury. This work led to the first formulation of the Tait equation, which is widely used to fit liquid density to pressure. Between 1886 and 1892 he published a series of papers on the foundations of the kinetic theory of gases, the fourth of which contained what was, according to Lord Kelvin, the first proof ever given of the Waterston–Maxwell theorem (equipartition theorem) of the average equal partition of energy in a mixture of two gases. About the same time he carried out investigations into impact and its duration. Many other inquiries conducted by him might be mentioned, and some idea may be gained of his scientific activity from the fact that a selection only from his papers, published by the Cambridge University Press, fills three large volumes. This mass of work was done in the time he could spare from his professorial teaching in the university. For example, in 1880 he worked on the Four color theorem and proved that it was true if and only if no snarks were planar. Later years In addition, he was the author of a number of books and articles. Of the former, the first, published in 1856, was on the dynamics of a particle; and afterwards there followed a number of concise treatises on thermodynamics, heat, light, properties of matter and dynamics, together with an admirably lucid volume of popular lectures on Recent Advances in Physical Science. With Lord Kelvin, he collaborated in writing the well-known Treatise on Natural Philosophy. "Thomson and Tait", as it is familiarly called ("T and T′" was the authors' own formula), was planned soon after Lord Kelvin became acquainted with Tait, on the latter's appointment to his professorship in Edinburgh, and it was intended to be an all-comprehensive treatise on physical science, the foundations being laid in kinematics and dynamics, and the structure completed with the properties of matter, heat, light, electricity and magnetism. But the literary partnership ceased after about eighteen years, when only the first portion of the plan had been completed, because each of the members felt he could work to better advantage separately than jointly. The friendship, however, endured for the remaining twenty-three years of Tait's life. Tait collaborated with Balfour Stewart in the Unseen Universe, which was followed by Paradoxical Philosophy. It was in his 1875 review of The Unseen Universe that William James first put forth his Will to Believe doctrine. Tait's articles include those he wrote for the ninth edition of the Encyclopædia Britannica on light, mechanics, quaternions, radiation, and thermodynamics, and the biographical notices of Hamilton and James Clerk Maxwell. Death He died in Edinburgh on 4 July 1901, aged 70. He is buried in the second terrace down from Princes Street in the burial ground of St John's Episcopal Church, Edinburgh. Topology The Tait conjectures are three conjectures made by Tait in his study of knots. The Tait conjectures involve concepts in knot theory such as alternating knots, chirality, and writhe. All of the Tait conjectures have been solved, the most recent being the Flyping conjecture, proved by Morwen Thistlethwaite and William Menasco in 1991. Publications Dynamics of a Particle (1856) Treatise on Natural Philosophy (1867); v. 1 and v. 2 (PDF/DjVu at the Internet Archive). 
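The Tait equation survives in fluid physics in a "modified" form relating a liquid's volume (or density) to pressure. The sketch below is a minimal, hedged illustration of that common modern form, V(P) = V0·[1 − C·log10((B + P)/(B + P0))]; the constants B and C are liquid-specific empirical fit parameters, and the numerical values used here are hypothetical placeholders rather than measured data.

```python
import math

def tait_volume(p, v0, b, c, p0=1.0):
    """Modified Tait equation: volume of a liquid at pressure p.

    v0     -- volume at the reference pressure p0
    b, c   -- empirical, liquid-specific fit constants
    All pressures must share one unit (e.g. bar).
    """
    return v0 * (1.0 - c * math.log10((b + p) / (b + p0)))

# Hypothetical constants for a water-like liquid (illustration only):
v0, b, c = 1.000, 3000.0, 0.1368
for p in (1.0, 500.0, 1000.0):
    print(p, tait_volume(p, v0, b, c))  # volume shrinks slowly with pressure
```

In practice B and C would be obtained by least-squares fitting against measured density–pressure data, which is exactly the use Tait's compressibility work anticipated.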
An elementary treatise on quaternions (1867); PDF/DjVu Copy of the 1st ed. at the Internet Archive and PDF/DjVu Copy of the 3rd ed. at the Internet Archive. Elements of Natural Philosophy (1872); (PDF/DjVu at the Internet Archive). A "non-mathematical portion of Treatise on Natural Philosophy". Sketch of Thermodynamics (1877); PDF/DjVu Copy at the Internet Archive. Recent Advances in Physical Science (1876); PDF/DjVu Copy at the Internet Archive. Heat (1884); PDF/DjVu Copy at the Internet Archive. Light (1884); PDF/DjVu Copy at the Internet Archive. Properties of Matter (1885); PDF/DjVu Copy at the Internet Archive. Dynamics (1895); PDF/DjVu Copy at the Internet Archive. The Unseen Universe (1875; new edition, 1901) Scientific papers vol. 1 (1898–1900) PDF/DjVu Copy at the Internet Archive. Scientific papers vol. 2 (1898–1900) PDF/DjVu Copy at the Internet Archive. Private life In 1857 Tait married Margaret Archer Porter (1839–1926). She was the sister of (1) William Archer Porter, a lawyer and educationist who served as the Principal of Government Arts College, Kumbakonam, and tutor and secretary to the Maharaja of Mysore, (2) James Porter (Master of Peterhouse, Cambridge), and (3) Jane Bailie Porter, who married Alexander Crum Brown, the Scottish organic chemist. Tait was an enthusiastic golfer and, of his seven children, two, Frederick Guthrie Tait (1870–1900) and John Guthrie Tait (1861–1945), went on to become gifted amateur golf champions. (In 1891, Tait invoked the Magnus effect to explain the influence of spin on the flight of a golf ball.) He was an all-round sportsman and represented Scotland at international level in rugby union. His daughter, Edith, married Rev. Harry Reid, who later became Bishop of Edinburgh. Another son, William, was a civil engineer. Recognition Tait was a lifelong friend of James Clerk Maxwell, and a portrait of Tait by Harrington Mann is held in the James Clerk Maxwell Foundation museum in Edinburgh. There are several portraits of Tait by Sir George Reid. One, painted about 1883, is owned by the National Galleries of Scotland, to which it was given by the artist in 1902. Another portrait was unveiled at Peterhouse, Cambridge in October 1902, paid for by the Master and Fellows of Peterhouse, where Tait had been an Honorary Fellow. One of the chairs in the Department of Physics at the University of Edinburgh is the Tait professorship. Peter Guthrie Tait Road at the University of Edinburgh King's Buildings complex is named in his honour. He was also given the following honours: Fellow of the Royal Society of Edinburgh; General Secretary of the Royal Society of Edinburgh, 1879 until 1901; Gunning Victoria Jubilee Prize; Keith Prize (twice); Royal Medal from the Royal Society of London, in 1886; honorary degrees from the University of Glasgow and the University of Ireland; honorary membership of the academies of Denmark, Holland, Sweden and Ireland. See also Dowker–Thistlethwaite notation Four color theorem Homoeoid Medial graph Nabla symbol References Further reading External links Pritchard, Chris. "Provisional Bibliography of Peter Guthrie Tait". British Society for the History of Mathematics. An Elementary Treatise on Quaternions, 1890, Cambridge University Press. Scanned PDF, HTML version (in progress) "Knot Theory" Website of Andrew Ranicki in Edinburgh. 
University of Edinburgh website, Life and Scientific Work of Peter Guthrie Tait, online book by Cargill Gilston Knott (1898) Scottish physicists Scottish Episcopalians Thermodynamicists Fellows of the Royal Society of Edinburgh Alumni of the University of Edinburgh Alumni of Peterhouse, Cambridge Fellows of Peterhouse, Cambridge People educated at Edinburgh Academy 1831 births 1901 deaths Royal Medal winners Senior Wranglers People from Dalkeith Mathematical physicists Academics of Queen's University Belfast Academics of the University of Edinburgh 19th-century Scottish mathematicians 20th-century Scottish mathematicians
Peter Guthrie Tait
[ "Physics", "Chemistry" ]
2,318
[ "Thermodynamics", "Thermodynamicists" ]
256,363
https://en.wikipedia.org/wiki/Experimental%20mathematics
Experimental mathematics is an approach to mathematics in which computation is used to investigate mathematical objects and identify properties and patterns. It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental (in either the Galilean, Baconian, Aristotelian or Kantian sense) exploration of conjectures and more informal beliefs and a careful analysis of the data acquired in this pursuit." As expressed by Paul Halmos: "Mathematics is not a deductive science—that's a cliché. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does." History Mathematicians have always practiced experimental mathematics. Existing records of early mathematics, such as Babylonian mathematics, typically consist of lists of numerical examples illustrating algebraic identities. However, modern mathematics, beginning in the 17th century, developed a tradition of publishing results in a final, formal and abstract presentation. The numerical examples that may have led a mathematician to originally formulate a general theorem were not published, and were generally forgotten. Experimental mathematics as a separate area of study re-emerged in the twentieth century, when the invention of the electronic computer vastly increased the range of feasible calculations, with a speed and precision far greater than anything available to previous generations of mathematicians. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of the Bailey–Borwein–Plouffe formula for the binary digits of π. This formula was discovered not by formal reasoning, but instead by numerical searches on a computer; only afterwards was a rigorous proof found. Objectives and uses The objectives of experimental mathematics are "to generate understanding and insight; to generate and confirm or confront conjectures; and generally to make mathematics more tangible, lively and fun for both the professional researcher and the novice". The uses of experimental mathematics have been defined as follows: Gaining insight and intuition. Discovering new patterns and relationships. Using graphical displays to suggest underlying mathematical principles. Testing and especially falsifying conjectures. Exploring a possible result to see if it is worth formal proof. Suggesting approaches for formal proof. Replacing lengthy hand derivations with computer-based derivations. Confirming analytically derived results. Tools and techniques Experimental mathematics makes use of numerical methods to calculate approximate values for integrals and infinite series. Arbitrary precision arithmetic is often used to establish these values to a high degree of precision – typically 100 significant figures or more. Integer relation algorithms are then used to search for relations between these values and mathematical constants. Working with high precision values reduces the possibility of mistaking a mathematical coincidence for a true relation. A formal proof of a conjectured relation will then be sought – it is often easier to find a formal proof once the form of a conjectured relation is known. 
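The integer-relation workflow just described can be reproduced with off-the-shelf tools. Below is a minimal sketch using the mpmath library's pslq routine; the target identity (Machin's 1706 arctangent formula for π) is our own choice of illustration, not an example taken from the text.

```python
from mpmath import mp, pslq, pi, atan, mpf

mp.dps = 50  # work to 50 significant digits

# Look for small integers (a, b, c) with
#   a*pi + b*atan(1/5) + c*atan(1/239) = 0.
vals = [pi, atan(mpf(1) / 5), atan(mpf(1) / 239)]
print(pslq(vals))
# -> [1, -16, 4] (up to an overall sign), i.e.
#    pi = 16*atan(1/5) - 4*atan(1/239)   (Machin's formula)
```

Raising mp.dps and rerunning is the standard guard against mistaking a numerical coincidence for a true relation, exactly as the passage above recommends.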
If a counterexample is being sought or a large-scale proof by exhaustion is being attempted, distributed computing techniques may be used to divide the calculations between multiple computers. Frequent use is made of general mathematical software or domain-specific software written for attacks on problems that require high efficiency. Experimental mathematics software usually includes error detection and correction mechanisms, integrity checks and redundant calculations designed to minimise the possibility of results being invalidated by a hardware or software error. Applications and examples Applications and examples of experimental mathematics include: Searching for a counterexample to a conjecture Roger Frye used experimental mathematics techniques to find the smallest counterexample to Euler's sum of powers conjecture. The ZetaGrid project was set up to search for a counterexample to the Riemann hypothesis. Tomás Oliveira e Silva searched for a counterexample to the Collatz conjecture. Finding new examples of numbers or objects with particular properties The Great Internet Mersenne Prime Search is searching for new Mersenne primes. The Great Periodic Path Hunt is searching for new periodic paths. distributed.net's OGR project searched for optimal Golomb rulers. The PrimeGrid project is searching for the smallest Riesel and Sierpiński numbers. Finding serendipitous numerical patterns Edward Lorenz found the Lorenz attractor, an early example of a chaotic dynamical system, by investigating anomalous behaviours in a numerical weather model. The Ulam spiral was discovered by accident. The pattern in the Ulam numbers was discovered by accident. Mitchell Feigenbaum's discovery of the Feigenbaum constant was based initially on numerical observations, followed by a rigorous proof. Use of computer programs to check a large but finite number of cases to complete a computer-assisted proof by exhaustion Thomas Hales's proof of the Kepler conjecture. Various proofs of the four colour theorem. Clement Lam's proof of the non-existence of a finite projective plane of order 10. Gary McGuire proved that a minimum uniquely solvable Sudoku requires 17 clues. Symbolic validation (via computer algebra) of conjectures to motivate the search for an analytical proof Solutions to a special case of the quantum three-body problem known as the hydrogen molecule-ion were found using standard quantum chemistry basis sets before it was realized that they all lead to the same unique analytical solution in terms of a generalization of the Lambert W function. Related to this work is the isolation of a previously unknown link between gravity theory and quantum mechanics in lower dimensions (see quantum gravity and references therein). In the realm of relativistic many-body mechanics, namely the time-symmetric Wheeler–Feynman absorber theory: the equivalence between an advanced Liénard–Wiechert potential of particle j acting on particle i and the corresponding potential for particle i acting on particle j was demonstrated exhaustively up to a given order before being proved mathematically. The Wheeler–Feynman theory has regained interest because of quantum nonlocality. In the realm of linear optics, verification of the series expansion of the envelope of the electric field for ultrashort light pulses travelling in non-isotropic media. Previous expansions had been incomplete: the outcome revealed an extra term vindicated by experiment. 
Evaluation of infinite series, infinite products and integrals (also see symbolic integration), typically by carrying out a high precision numerical calculation, and then using an integer relation algorithm (such as the Inverse Symbolic Calculator) to find a linear combination of mathematical constants that matches this value. For example, the following identity was rediscovered by Enrico Au-Yeung, a student of Jonathan Borwein, using computer search and the PSLQ algorithm in 1993: $\sum_{k=1}^{\infty} \frac{1}{k^2}\left(1 + \frac{1}{2} + \cdots + \frac{1}{k}\right)^2 = \frac{17\pi^4}{360}$. Visual investigations In Indra's Pearls, David Mumford and others investigated various properties of Möbius transformations and the Schottky group using computer-generated images of the groups which, in their words, "furnished convincing evidence for many conjectures and lures to further exploration". Plausible but false examples Some plausible relations hold to a high degree of accuracy, but are still not true. One example is: $\int_0^{\infty} \cos(2x) \prod_{n=1}^{\infty} \cos\left(\frac{x}{n}\right)\,dx \approx \frac{\pi}{8}$. The two sides of this expression actually differ after the 42nd decimal place. Another example is that the maximum height (maximum absolute value of coefficients) of all the factors of x^n − 1 appears to be the same as the height of the nth cyclotomic polynomial. This was shown by computer to be true for n < 10000 and was expected to be true for all n. However, a larger computer search showed that this equality fails to hold for n = 14235, when the height of the nth cyclotomic polynomial is 2, but the maximum height of the factors is 3. Practitioners The following mathematicians and computer scientists have made significant contributions to the field of experimental mathematics: Fabrice Bellard David H. Bailey Jonathan Borwein David Epstein Helaman Ferguson Ronald Graham Thomas Callister Hales Donald Knuth Clement Lam Oren Patashnik Simon Plouffe Eric Weisstein Stephen Wolfram Doron Zeilberger A.J. Han Vinck See also Borwein integral Computer-aided proof Proofs and Refutations Experimental Mathematics (journal) Institute for Experimental Mathematics References External links Experimental Mathematics (Journal) Centre for Experimental and Constructive Mathematics (CECM) at Simon Fraser University Collaborative Group for Research in Mathematics Education at University of Southampton Recognizing Numerical Constants by David H. Bailey and Simon Plouffe Psychology of Experimental Mathematics Experimental Mathematics Website (Links and resources) The Great Periodic Path Hunt Website (Links and resources) An Algorithm for the Ages: PSLQ, A Better Way to Find Integer Relations (Alternative link ) Experimental Algorithmic Information Theory Sample Problems of Experimental Mathematics by David H. Bailey and Jonathan M. Borwein Ten Problems in Experimental Mathematics by David H. Bailey, Jonathan M. Borwein, Vishaal Kapoor, and Eric W. Weisstein Institute for Experimental Mathematics at University of Duisburg-Essen
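The cyclotomic-height counterexample at n = 14235 can, in principle, be rechecked with a computer algebra system. A minimal sketch using SymPy follows; it relies on the fact that x^n − 1 factors over the integers as the product of the cyclotomic polynomials Φ_d for the divisors d of n. The expected outputs in the comments restate the text's claim, not an independently verified run.

```python
from sympy import cyclotomic_poly, divisors, Poly, symbols

x = symbols('x')

def height(expr):
    """Maximum absolute value of a polynomial's integer coefficients."""
    return max(abs(c) for c in Poly(expr, x).all_coeffs())

def max_factor_height(n):
    # The irreducible factors of x**n - 1 over Z are Phi_d(x) for d | n.
    return max(height(cyclotomic_poly(d, x)) for d in divisors(n))

n = 14235  # = 3 * 5 * 13 * 73
print(height(cyclotomic_poly(n, x)))  # claimed: 2 (height of Phi_n)
print(max_factor_height(n))           # claimed: 3 (some divisor d gives more)
```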
Experimental mathematics
[ "Mathematics" ]
1,812
[ "Experimental mathematics" ]
256,662
https://en.wikipedia.org/wiki/Terminal%20velocity
Terminal velocity is the maximum speed attainable by an object as it falls through a fluid (air is the most common example). It is reached when the sum of the drag force (Fd) and the buoyancy is equal to the downward force of gravity (FG) acting on the object. Since the net force on the object is zero, the object has zero acceleration. For objects falling through air at normal pressure, the buoyant force is usually dismissed and not taken into account, as its effects are negligible. As the speed of an object increases, so does the drag force acting on it, which also depends on the substance it is passing through (for example air or water). At some speed, the drag or force of resistance will be equal to the gravitational pull on the object. At this point the object stops accelerating and continues falling at a constant speed called the terminal velocity (also called settling velocity). An object moving downward faster than the terminal velocity (for example because it was thrown downwards, it fell from a thinner part of the atmosphere, or it changed shape) will slow down until it reaches the terminal velocity. Drag depends on the projected area, here represented by the object's cross-section or silhouette in a horizontal plane. An object with a large projected area relative to its mass, such as a parachute, has a lower terminal velocity than one with a small projected area relative to its mass, such as a dart. In general, for the same shape and material, the terminal velocity of an object increases with size. This is because the downward force (weight) is proportional to the cube of the linear dimension, but the air resistance is approximately proportional to the cross-section area which increases only as the square of the linear dimension. For very small objects such as dust and mist, the terminal velocity is easily overcome by convection currents which can prevent them from reaching the ground at all, and hence they can stay suspended in the air for indefinite periods. Air pollution and fog are examples. Examples Based on air resistance, for example, the terminal speed of a skydiver in a belly-to-earth (i.e., face down) free fall position is about . This speed is the asymptotic limiting value of the speed, and the forces acting on the body balance each other more and more closely as the terminal speed is approached. In this example, a speed of 50.0% of terminal speed is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99%, and so on. Higher speeds can be attained if the skydiver pulls in their limbs (see also freeflying). In this case, the terminal speed increases to about , which is almost the terminal speed of the peregrine falcon diving down on its prey. The same terminal speed is reached for a typical .30-06 bullet dropping downwards—when it is returning to the ground having been fired upwards or dropped from a tower—according to a 1920 U.S. Army Ordnance study. Competition speed skydivers fly in a head-down position and can reach speeds of . The current record is held by Felix Baumgartner who jumped from an altitude of and reached , though he achieved this speed at high altitude where the density of the air is much lower than at the Earth's surface, producing a correspondingly lower drag force. The biologist J. B. S. 
Haldane wrote, in his essay On Being the Right Size, about how sharply terminal velocity scales with an animal's size, so that a small animal can survive a fall that would kill a large one. Physics For terminal velocity in falling through air, where viscosity is negligible compared to the drag force, and without considering buoyancy effects, terminal velocity is given by $V_t = \sqrt{\frac{2mg}{\rho A C_d}}$, where $V_t$ represents terminal velocity, $m$ is the mass of the falling object, $g$ is the acceleration due to gravity, $C_d$ is the drag coefficient, $\rho$ is the density of the fluid through which the object is falling, and $A$ is the projected area of the object. In reality, an object approaches its terminal speed asymptotically. Buoyancy effects, due to the upward force on the object by the surrounding fluid, can be taken into account using Archimedes' principle: the mass $m$ has to be reduced by the displaced fluid mass $\rho V$, with $V$ the volume of the object. So instead of $m$ use the reduced mass $m_r = m - \rho V$ in this and subsequent formulas. The terminal speed of an object changes due to the properties of the fluid, the mass of the object and its projected cross-sectional surface area. Air density increases with decreasing altitude, at about 1% per 80 metres (see barometric formula). For objects falling through the atmosphere, for every 160 metres of fall, the terminal speed decreases 1%. After reaching the local terminal velocity, while continuing the fall, the speed decreases in step with the local terminal speed. Using mathematical terms, defining down to be positive, the net force acting on an object falling near the surface of Earth is (according to the drag equation): $F_{net} = m\frac{dv}{dt} = mg - \frac{1}{2}\rho v^2 A C_d$, with $v(t)$ the velocity of the object as a function of time $t$. At equilibrium, the net force is zero ($F_{net} = 0$) and the velocity becomes the terminal velocity $v = V_t$: $mg - \frac{1}{2}\rho V_t^2 A C_d = 0$. Solving for $V_t$ yields: $V_t = \sqrt{\frac{2mg}{\rho A C_d}}$. The drag equation is – assuming $\rho$, $g$ and $C_d$ to be constants: $m\frac{dv}{dt} = mg - \frac{1}{2}\rho v^2 A C_d$. Although this is a Riccati equation that can be solved by reduction to a second-order linear differential equation, it is easier to separate variables. A more practical form of this equation can be obtained by making the substitution $\alpha^2 = \frac{\rho A C_d}{2mg}$. Dividing both sides by $m$ gives $\frac{dv}{dt} = g\left(1 - \alpha^2 v^2\right)$. The equation can be re-arranged into $\frac{dv}{1 - \alpha^2 v^2} = g\,dt$. Taking the integral of both sides yields $\int \frac{dv}{1 - \alpha^2 v^2} = \int g\,dt$. After integration, this becomes $\frac{1}{2\alpha}\ln\frac{1 + \alpha v}{1 - \alpha v} = gt + C$, or in a simpler form $\frac{1}{\alpha}\operatorname{artanh}(\alpha v) = gt + C$, with artanh the inverse hyperbolic tangent function. Alternatively, $v = \frac{1}{\alpha}\tanh(\alpha g t + \alpha C)$, with tanh the hyperbolic tangent function. Assuming that $g$ is positive (which it was defined to be), and substituting $\alpha$ back in, the speed $v$ of an object starting from rest becomes $v = \sqrt{\frac{2mg}{\rho A C_d}} \tanh\left(t \sqrt{\frac{g \rho A C_d}{2m}}\right)$. Using the formula for terminal velocity the equation can be rewritten as $v(t) = V_t \tanh\left(\frac{gt}{V_t}\right)$. As time tends to infinity ($t \to \infty$), the hyperbolic tangent tends to 1, resulting in the terminal speed $V_t$. For very slow motion of the fluid, the inertia forces of the fluid are negligible (assumption of massless fluid) in comparison to other forces. Such flows are called creeping or Stokes flows and the condition to be satisfied for the flows to be creeping flows is the Reynolds number, $Re \ll 1$. The equation of motion for creeping flow (simplified Navier–Stokes equation) is given by: $\nabla p = \mu \nabla^2 \mathbf{v}$, where $\mathbf{v}$ is the fluid velocity vector field, $p$ is the fluid pressure field, and $\mu$ is the liquid/fluid viscosity. The analytical solution for the creeping flow around a sphere was first given by Stokes in 1851. From Stokes' solution, the drag force acting on the sphere of diameter $d$ can be obtained as $F_d = 3\pi\mu d V$, equivalently $C_d = \frac{24}{Re}$, where the Reynolds number $Re = \frac{\rho d V}{\mu}$. This expression for the drag force is called Stokes' law. When this value of $C_d$ is substituted into the terminal-velocity formula, we obtain the expression for terminal speed of a spherical object moving under creeping flow conditions: $V_t = \frac{g d^2}{18\mu}\left(\rho_s - \rho\right)$, where $\rho_s$ is the density of the object. 
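The derivation above is easy to check numerically: integrate the drag equation with a simple Euler step and compare the result against both the closed-form terminal speed and the tanh solution. The sketch below uses rough skydiver-like parameter values chosen purely for illustration, not data from the article.

```python
import math

def simulate_fall(m, area, cd, rho=1.225, g=9.81, dt=1e-3, t_end=30.0):
    """Euler integration of dv/dt = g - (rho*cd*area/(2m)) * v**2."""
    k = rho * cd * area / (2.0 * m)
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - k * v * v) * dt
        t += dt
    return v

m, area, cd, rho, g = 80.0, 0.7, 1.0, 1.225, 9.81  # illustrative assumptions
vt = math.sqrt(2 * m * g / (rho * area * cd))       # closed-form V_t (~43 m/s)

print(vt)                                  # terminal speed from the formula
print(simulate_fall(m, area, cd))          # numeric v(30 s), approaches vt
print(vt * math.tanh(g * 30.0 / vt))       # analytic v(t) = V_t*tanh(g t/V_t)
```

All three printed values should agree to several decimal places, illustrating the asymptotic approach to terminal velocity described in the text.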
Applications

The creeping flow results can be applied in order to study the settling of sediments near the ocean bottom and the fall of moisture drops in the atmosphere. The principle is also applied in the falling sphere viscometer, an experimental device used to measure the viscosity of highly viscous fluids, for example oil, paraffin, tar etc.

Terminal velocity in the presence of buoyancy force

When the buoyancy effects are taken into account, an object falling through a fluid under its own weight can reach a terminal velocity (settling velocity) if the net force acting on the object becomes zero. When the terminal velocity is reached the weight of the object is exactly balanced by the upward buoyancy force and drag force. That is

W = F_b + D

where W is the weight of the object, F_b is the buoyancy force acting on the object, and D is the drag force acting on the object. If the falling object is spherical in shape, the expressions for the three forces are given below:

W = \frac{\pi}{6} d^3 \rho_s g

F_b = \frac{\pi}{6} d^3 \rho g

D = \frac{1}{2} C_d \rho V^2 A

where d is the diameter of the spherical object, g is the gravitational acceleration, ρ is the density of the fluid, ρ_s is the density of the object, A = πd²/4 is the projected area of the sphere, C_d is the drag coefficient, and V is the characteristic velocity (taken as terminal velocity, V_t). Substituting these expressions into the force balance and solving for terminal velocity yields

V_t = \sqrt{\frac{4 g d (\rho_s - \rho)}{3 C_d \rho}}

In this result, it is assumed that the object is denser than the fluid. If not, the sign of the drag force should be made negative since the object will be moving upwards, against gravity. Examples are bubbles formed at the bottom of a champagne glass and helium balloons. The terminal velocity in such cases will have a negative value, corresponding to the rate of rising up.

See also Stokes's law Terminal ballistics

References

External links
Terminal Velocity Interactive Tool – NASA site, Beginners Guide to Aeronautics
Onboard video of Space Shuttle Solid Rocket Boosters rapidly decelerating to terminal velocity on entry to the thicker atmosphere, slowing to 220 mph by 6:45 in the video, when the parachutes are deployed, 90 seconds after the deceleration begins at 5:15—NASA video and sound, @ io9.com.
Terminal settling velocity of a sphere at all realistic Reynolds Numbers, by Heywood Tables approach.

Falling Fluid dynamics Velocity
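The force balance just derived reduces to a few lines of code. A small sketch with assumed example values (a steel sphere in water and an air bubble; the drag coefficient is a rough illustrative number, not measured data); the sign convention follows the text, with negative values meaning the sphere rises:

```python
import math

def sphere_terminal_velocity(d, rho_s, rho, c_d, g=9.81):
    """V_t = sqrt(4 g d (rho_s - rho) / (3 C_d rho)), from W = F_b + D.
    The sign of (rho_s - rho) is carried onto the result: a negative value
    means the sphere rises, per the convention described in the text."""
    delta = rho_s - rho
    v = math.sqrt(4 * g * d * abs(delta) / (3 * c_d * rho))
    return math.copysign(v, delta)

# A 1 cm steel sphere falling in water (assumed C_d ~ 0.47 for a sphere):
print(sphere_terminal_velocity(d=0.01, rho_s=7800.0, rho=1000.0, c_d=0.47))
# An air bubble of the same size rises, so the result comes out negative:
print(sphere_terminal_velocity(d=0.01, rho_s=1.2, rho=1000.0, c_d=0.47))
```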
Terminal velocity
[ "Physics", "Chemistry", "Engineering" ]
1,863
[ "Physical phenomena", "Physical quantities", "Chemical engineering", "Motion (physics)", "Vector physical quantities", "Piping", "Velocity", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
256,738
https://en.wikipedia.org/wiki/Tensor%20field
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in material objects, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on a vector fields set X(M) over a module M, we call A a tensor field on M. Many mathematical structures called "tensors" are also tensor fields. For example, the Riemann curvature tensor is a tensor field as it associates a tensor to each point of a Riemannian manifold, which is a topological space.

Definition

Let M be a manifold, for instance Euclidean space R^n. A tensor field of type (p, q) on M assigns a tensor smoothly to each point; equivalently, it is a collection of elements T_x ∈ V_x^{⊗p} ⊗ (V_x^*)^{⊗q} for all points x ∈ M, arranging into a smooth map T : M → V^{⊗p} ⊗ (V^*)^{⊗q}. Elements T_x are called tensors. Often we take V = TM to be the tangent bundle of M.

Geometric introduction

Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface.

Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g, such that given any two vectors v, w at point x, their inner product is g_x(v, w). The field g could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix.

In general, we want to specify tensor fields in a coordinate-independent way: they should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates.

Via coordinate transitions

Following the classical treatments of the subject, the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems.

For example, coordinates belonging to the n-dimensional real coordinate space R^n may be subjected to arbitrary affine transformations:

\bar{x}^i = A^i{}_j x^j + b^i

(with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions v_i that transforms under this affine transformation by the rule

\bar{v}_i = (A^{-1})^j{}_i \, v_j

The list of Cartesian coordinate basis vectors e_i transforms as a covector, since under the affine transformation \bar{e}_i = (A^{-1})^j{}_i \, e_j. A contravariant vector is a system of functions u^i of the coordinates that, under such an affine transformation, undergoes a transformation

\bar{u}^i = A^i{}_j u^j

This is precisely the requirement needed to ensure that the quantity u^i v_i is an invariant object that does not depend on the coordinate system chosen. More generally, a tensor of valence (p, q) has p downstairs indices and q upstairs indices, with the transformation law being

\bar{T}^{i_1 \dots i_q}{}_{j_1 \dots j_p} = A^{i_1}{}_{k_1} \cdots A^{i_q}{}_{k_q} \, (A^{-1})^{l_1}{}_{j_1} \cdots (A^{-1})^{l_p}{}_{j_p} \, T^{k_1 \dots k_q}{}_{l_1 \dots l_p}

The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.).
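The transformation rules above can be checked numerically. A short numpy sketch, using an arbitrary random matrix as the linear part of the change of frame (all data below are throwaway test values), verifies that the contraction u^i v_i and the scalar g(u, w) built from a valence-(2,0) tensor are frame-independent:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))        # linear part of the change of frame
A_inv = np.linalg.inv(A)           # (a random matrix is invertible almost surely)

u = rng.normal(size=3)             # contravariant components u^j
v = rng.normal(size=3)             # covariant components v_j

u_bar = A @ u                      # u'^i = A^i_j u^j
v_bar = A_inv.T @ v                # v'_i = (A^-1)^j_i v_j

# The contraction u^i v_i does not depend on the coordinate system chosen:
print(np.isclose(u @ v, u_bar @ v_bar))                    # True

# A valence-(2,0) tensor (two downstairs indices), e.g. a metric, picks up
# two copies of the inverse Jacobian: g'_{ij} = (A^-1)^k_i (A^-1)^l_j g_{kl}
g = rng.normal(size=(3, 3))
g_bar = A_inv.T @ g @ A_inv
w = rng.normal(size=3)
print(np.isclose(u @ g @ w, u_bar @ g_bar @ (A @ w)))      # True: g(u, w) invariant
```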
A covector field is a function of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field transforms by the inverse Jacobian.

Tensor bundles

A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle.

The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector v_m in V_m, where V_m is the vector space "at" m.

Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction.

We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) This then has guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space

V^{⊗p} ⊗ (V^*)^{⊗q}

where V is the tangent space at that point and V^* is the cotangent space. See also tangent bundle and cotangent bundle.

Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of E^* ⊗ F if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are.

Notation

The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as

T^1_0(M) = T(M) = TM

to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation T^1_0(V); in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M.

Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus,

𝒯^m_n(M)

are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set.

Tensor fields as multilinear forms

There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field").
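The C∞(M)-linearity criterion A(fs) = fA(s) stated above can be made concrete in coordinates. A symbolic sketch on R² (sympy; the particular fields and function are arbitrary test data) confirms that pairing a 1-form with a vector field is C∞-linear, while the Lie bracket fails the criterion by exactly the Leibniz term −(Yf)X, in line with the remark that the Lie derivative is not a tensor:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def pair(omega, X):
    """Pointwise pairing of a covector field with a vector field on R^2."""
    return sum(o * v for o, v in zip(omega, X))

def lie_bracket(X, Y):
    """[X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j (components on R^2)."""
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(2))
        for i in range(2))

f = sp.sin(x) * y                   # an arbitrary smooth function
X = (y, x**2)                       # arbitrary vector fields (components)
Y = (sp.exp(x), 1 + y)
omega = (x * y, sp.cos(y))          # an arbitrary covector field

# The pairing is C-infinity(M)-linear: omega(f X) - f omega(X) == 0.
fX = tuple(f * c for c in X)
print(sp.simplify(pair(omega, fX) - f * pair(omega, X)))   # 0

# The Lie bracket is not: [fX, Y] - f [X, Y] equals the Leibniz term -(Yf) X.
Yf = Y[0] * sp.diff(f, x) + Y[1] * sp.diff(f, y)
lhs = lie_bracket(fX, Y)
rhs = tuple(f * c for c in lie_bracket(X, Y))
print([sp.simplify(l - r + Yf * c) for l, r, c in zip(lhs, rhs, X)])  # [0, 0]
```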
First, we may consider the set of all smooth (C∞) vector fields on M, 𝒯^1_0(M) (see the section on notation above), as a single space — a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring.

As a motivating example, consider the space of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define

(ω(X))(p) = ω(p)(X(p))

Because of the pointwise nature of everything involved, the action of ω on X is a C∞(M)-linear map, that is,

(ω(fX))(p) = f(p)\,ω(p)(X(p)) = (fω(X))(p)

for any p in M and smooth function f. Thus we can regard covector fields not just as sections of the cotangent bundle, but also linear mappings of vector fields into functions. By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there).

In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M (k contravariant, l covariant) as C∞(M)-multilinear maps defined on k copies of the space of covector fields and l copies of the space of vector fields into C∞(M). Now, given any arbitrary mapping T from a product of k copies of the covector fields and l copies of the vector fields into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). Namely, the C∞(M)-module of tensor fields of type (k,l) over M is canonically isomorphic to the C∞(M)-module of such C∞(M)-multilinear forms.

This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously.

A frequent example application of this general rule is showing that the Levi-Civita connection, which is a mapping of smooth vector fields taking a pair of vector fields to a vector field, does not define a tensor field on M. This is because it is only ℝ-linear in Y: in place of full C∞(M)-linearity, it satisfies the Leibniz rule

∇_X(fY) = (Xf)Y + f∇_X(Y)

Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation.

Applications

The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity. In electromagnetism, the electric and magnetic fields are combined into an electromagnetic tensor field. Differential forms, used in defining integration on manifolds, are a type of tensor field.

Tensor calculus

In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus. Even to formulate such equations requires a fresh notion, the covariant derivative. This handles the formulation of variation of a tensor field along a vector field. The original absolute differential calculus notion, which was later called tensor calculus, led to the isolation of the geometric concept of connection.

Twisting by a line bundle

An extension of the tensor field idea incorporates an extra line bundle L on M.
If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. (To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article.

One feature of the bundle of densities L (again assuming orientability) is that L^s is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = 1/2. In general we can take sections of W, the tensor product of V with L^s, and consider tensor density fields with weight s.

Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization.

The flat case

When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion.

Cocycles and chain rules

As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields.

Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory.

Generalizations

Tensor densities

The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R.
Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case are the scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted . The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see density on a manifold. See also Notes References . . . . . . . . Multilinear algebra Differential geometry Differential topology Tensors Functions and mappings
Tensor field
[ "Mathematics", "Engineering" ]
3,395
[ "Mathematical analysis", "Functions and mappings", "Tensors", "Mathematical objects", "Topology", "Differential topology", "Mathematical relations" ]
258,221
https://en.wikipedia.org/wiki/David%20Chalmers
David John Chalmers (born 20 April 1966) is an Australian philosopher and cognitive scientist specializing in the philosophy of mind and the philosophy of language. He is a professor of philosophy and neural science at New York University, as well as co-director of NYU's Center for Mind, Brain and Consciousness (along with Ned Block). In 2006, he was elected a Fellow of the Australian Academy of the Humanities. In 2013, he was elected a Fellow of the American Academy of Arts & Sciences. Chalmers is best known for formulating the hard problem of consciousness, and for popularizing the philosophical zombie thought experiment. Chalmers and David Bourget co-founded PhilPapers, a database of journal articles for philosophers.

Early life and education

David Chalmers was born in Sydney, New South Wales, and subsequently grew up in Adelaide, South Australia, where he attended Unley High School. As a child, he experienced synesthesia. He began coding and playing computer games at the age of 10 on a PDP-10 at a medical center. He also performed exceptionally in mathematics, securing a bronze medal at the International Mathematical Olympiad. When Chalmers was 13, he read Douglas Hofstadter's 1979 book Gödel, Escher, Bach, which awakened an interest in philosophy.

Chalmers received his undergraduate degree in pure mathematics from the University of Adelaide. After graduating, Chalmers spent six months reading philosophy books while hitchhiking across Europe, before continuing his studies at the University of Oxford, where he was a Rhodes Scholar but eventually withdrew from the course. In 1993, Chalmers received his PhD in philosophy and cognitive science from Indiana University Bloomington under Douglas Hofstadter, writing a doctoral thesis entitled Toward a Theory of Consciousness. He was a postdoctoral fellow in the Philosophy-Neuroscience-Psychology program directed by Andy Clark at Washington University in St. Louis from 1993 to 1995.

Career

In 1994, Chalmers presented a lecture at the inaugural Toward a Science of Consciousness conference. According to the Chronicle of Higher Education, this "lecture established Chalmers as a thinker to be reckoned with and goosed a nascent field into greater prominence." He went on to co-organize the conference (renamed "The Science of Consciousness") for some years with Stuart Hameroff, but stepped away when he felt it became too divergent from mainstream science. Chalmers is a founding member of the Association for the Scientific Study of Consciousness and one of its past presidents.

Having established his reputation, Chalmers received his first professorship at UC Santa Cruz, from August 1995 to December 1998. In 1996, he published the widely cited book The Conscious Mind. Chalmers was subsequently appointed Professor of Philosophy (1999–2004) and then Director of the Center for Consciousness Studies (2002–2004) at the University of Arizona. In 2004, Chalmers returned to Australia, encouraged by an ARC Federation Fellowship, becoming professor of philosophy and director of the Center for Consciousness at the Australian National University. Chalmers accepted a part-time professorship at the philosophy department of New York University in 2009, becoming a full-time professor in 2014. In 2013, Chalmers was elected a Fellow of the American Academy of Arts & Sciences. He is an editor on topics in the philosophy of mind for the Stanford Encyclopedia of Philosophy.
In May 2018, it was announced that he would serve on the jury for the Berggruen Prize. In 2023, Chalmers won a bet—made in 1998, for a case of wine—with neuroscientist Christof Koch that the neural underpinnings for consciousness would not be resolved by the year 2023, while Koch had bet that they would. Philosophical work Philosophy of mind Chalmers is best known for formulating what he calls the "hard problem of consciousness," in both his 1995 paper "Facing Up to the Problem of Consciousness" and his 1996 book The Conscious Mind. He makes a distinction between "easy" problems of consciousness, such as explaining object discrimination or verbal reports, and the single hard problem, which could be stated "why does the feeling which accompanies awareness of sensory information exist at all?" The essential difference between the (cognitive) easy problems and the (phenomenal) hard problem is that the former are at least theoretically answerable via the dominant strategy in the philosophy of mind: physicalism. Chalmers argues for an "explanatory gap" from the objective to the subjective, and criticizes physicalist explanations of mental experience, making him a dualist. Chalmers characterizes his view as "naturalistic dualism": naturalistic because he believes mental states supervene "naturally" on physical systems (such as brains); dualist because he believes mental states are ontologically distinct from and not reducible to physical systems. He has also characterized his view by more traditional formulations such as property dualism. In support of this, Chalmers is famous for his commitment to the logical (though, not natural) possibility of philosophical zombies. These zombies are complete physical duplicates of human beings, lacking only qualitative experience. Chalmers argues that since such zombies are conceivable to us, they must therefore be logically possible. Since they are logically possible, then qualia and sentience are not fully explained by physical properties alone; the facts about them are further facts. Instead, Chalmers argues that consciousness is a fundamental property ontologically autonomous of any known (or even possible) physical properties, and that there may be lawlike rules which he terms "psychophysical laws" that determine which physical systems are associated with which types of qualia. He further speculates that all information-bearing systems may be conscious, leading him to entertain the possibility of conscious thermostats and a qualified panpsychism he calls panprotopsychism. Chalmers maintains a formal agnosticism on the issue, even conceding that the viability of panpsychism places him at odds with the majority of his contemporaries. According to Chalmers, his arguments are similar to a line of thought that goes back to Leibniz's 1714 "mill" argument; the first substantial use of philosophical "zombie" terminology may be Robert Kirk's 1974 "Zombies vs. Materialists". After the publication of Chalmers's landmark paper, more than twenty papers in response were published in the Journal of Consciousness Studies. These papers (by Daniel Dennett, Colin McGinn, Francisco Varela, Francis Crick, and Roger Penrose, among others) were collected and published in the book Explaining Consciousness: The Hard Problem. John Searle critiqued Chalmers's views in The New York Review of Books. With Andy Clark, Chalmers has written "The Extended Mind", an article about the borders of the mind. 
According to Chalmers, systems that have the same functional organization "at a fine enough grain" (that are "functionally isomorphic") will have "qualitatively identical conscious experiences". In 1995, he proposed the reductio ad absurdum "fading qualia" thought experiment. It involves progressively replacing each neuron of a brain with a functional equivalent, for example implemented on a silicon chip. Since each substitute neuron performs the same function as the original, the subject would not notice any change. But, Chalmers argues, if qualia (for example, the perceived color of objects) were to fade or disappear, the brain's owner could notice the difference, which would alter the brain's functional profile, leading to a contradiction. He concludes that such fading qualia are impossible in practice, and that after each neuron is replaced, the resulting functionally isomorphic robotic brain would be as conscious as the original biological one. In addition, Chalmers proposed a similar thought experiment, "dancing qualia", which concludes that a robotic brain that is functionally isomorphic to a biological one would not only be as conscious, but would also have the same conscious experiences (e.g., the same perception of color when seeing an object). In 2023, he analyzed whether large language models could be conscious, and suggested that they were probably not conscious, but could become serious candidates for consciousness within a decade.

Philosophy of language

Chalmers has published works on the "theory of reference" concerning how words secure their referents. He, together with others such as Frank Jackson, played a major role in developing two-dimensional semantics.

Background

Before Saul Kripke delivered his famous lecture series Naming and Necessity in 1970, the descriptivism advocated by Gottlob Frege and Bertrand Russell was the orthodoxy. Descriptivism suggests that a name is an abbreviation of a description, which is a set of properties. This name secures its reference by a process of property fitting: whichever object best fits the description is the referent of the name. Therefore, the description provides the sense of the name, and it is through this sense that the reference of the name is determined.

However, as Kripke argued in Naming and Necessity, a name does not secure its reference via any process of description fitting. Rather, a name determines its reference via a historical-causal link tracing back to the process of naming. And thus, Kripke thinks that a name does not have a sense, or, at least, does not have a sense which is rich enough to play the reference-determining role. Moreover, a name, in Kripke's view, is a rigid designator, which refers to the same object in all possible worlds. Following this line of thought, Kripke suggests that any scientific identity statement such as "Water is H2O" is also a necessary statement, i.e. true in all possible worlds. Kripke thinks that this is a phenomenon that descriptivism cannot explain. And, as also proposed by Hilary Putnam and Kripke himself, Kripke's view on names can also be applied to the reference of natural kind terms. The kind of theory of reference that is advocated by Kripke and Putnam is called the direct reference theory.

Two-dimensional semantics

Chalmers disagrees with Kripke, and direct reference theorists in general. He thinks that there are two kinds of intension of a natural kind term, a stance called two-dimensionalism.
For example, the statement "Water is H2O" expresses two distinct propositions, often referred to as a primary intension and a secondary intension, which together form its meaning. The primary intension of a word or sentence is its sense, i.e., is the idea or method by which we find its referent. The primary intension of "water" might be a description, such as "the substance with water-like properties". The entity identified by this intension could vary in different hypothetical worlds. In the twin Earth thought experiment, for example, inhabitants might use "water" to mean their equivalent of water, even if its chemical composition is not H2O. Thus, for that world, "water" does not refer to H2O. The secondary intension of "water" is whatever "water" refers to in this world. When considered according to its secondary intension, water means H2O in every world. Through this concept, Chalmers provides a way to explain how reference is determined by distinguishing between epistemic possibilities (primary intension) and metaphysical necessities (secondary intension), ensuring that the referent (H2O) is uniquely identified across all metaphysically possible worlds. Philosophy of verbal disputes In some more recent work, Chalmers has concentrated on verbal disputes. He argues that a dispute is best characterized as "verbal" when it concerns some sentence S which contains a term T such that (i) the parties to the dispute disagree over the meaning of T, and (ii) the dispute arises solely because of this disagreement. In the same work, Chalmers proposes certain procedures for the resolution of verbal disputes. One of these he calls the "elimination method", which involves eliminating the contentious term and observing whether any dispute remains. Technology and virtual reality Chalmers addressed the issue of virtual and non-virtual worlds in his 2022 book Reality+. While Chalmers recognises that virtual reality is not the same as non-virtual reality, he does not consider virtual reality to be an illusion, but rather a "genuine reality" in its own right. Chalmers sees virtual reality as potentially offering as meaningful a life as non-virtual reality, and argues that we could already be inhabitants of a simulation without knowing it. Chalmers proposes that computers are forming a form of "exo-cortex", where a part of human cognition is 'outsourced' to corporations such as Apple and Google. Chalmers was featured in the 2012 documentary film entitled The Singularity by filmmaker Doug Wolens, which focuses on the theory proposed by techno-futurist Ray Kurzweil, of that "point in time when computer intelligence exceeds human intelligence." He was a featured philosopher in the 2020 Daily Nous series on GPT-3, which he described as "one of the most interesting and important AI systems ever produced." Personal life Chalmers was the lead singer of the Zombie Blues band, which performed at the music festival Qualia Fest in 2012 in New York. Regarding religion, Chalmers said in 2011: "I have no religious views myself and no spiritual views, except watered-down humanistic, spiritual views. And consciousness is just a fact of life. It's a natural fact of life". Bibliography The Conscious Mind: In Search of a Fundamental Theory (1996). Oxford University Press. hardcover: , paperback: Toward a Science of Consciousness III: The Third Tucson Discussions and Debates (1999). Stuart R. Hameroff, Alfred W. Kaszniak and David J. Chalmers (Editors). The MIT Press. 
Philosophy of Mind: Classical and Contemporary Readings (2002). (Editor). Oxford University Press. The Character of Consciousness (2010). Oxford University Press. Constructing the World (2012). Oxford University Press. Reality+: Virtual Worlds and the Problems of Philosophy (2022). W. W. Norton & Company.

Notes

External links
An in-depth autobiographical interview with David Chalmers
"The Singularity", a documentary film featuring Chalmers
The Moscow Center for Consciousness Studies video interview with David Chalmers

1966 births 20th-century Australian philosophers 21st-century Australian philosophers Academic staff of the Australian National University Alumni of Lincoln College, Oxford American consciousness researchers and theorists Analytic philosophers Australian Rhodes Scholars Australian humanists Consciousness researchers and theorists Epistemologists Fellows of the American Academy of Arts and Sciences Fellows of the Australian Academy of the Humanities Indiana University Bloomington alumni International Mathematical Olympiad participants Australian lecturers Living people New York University faculty Ontologists Philosophers of language Philosophers of mind Philosophers of technology Philosophy writers Quantum mind University of Adelaide alumni Washington University in St. Louis fellows
David Chalmers
[ "Physics" ]
3,028
[ "Quantum mind", "Quantum mechanics" ]
258,827
https://en.wikipedia.org/wiki/Dynamic%20mechanical%20analysis
Dynamic mechanical analysis (abbreviated DMA) is a technique used to study and characterize materials. It is most useful for studying the viscoelastic behavior of polymers. A sinusoidal stress is applied and the strain in the material is measured, allowing one to determine the complex modulus. The temperature of the sample or the frequency of the stress are often varied, leading to variations in the complex modulus; this approach can be used to locate the glass transition temperature of the material, as well as to identify transitions corresponding to other molecular motions.

Theory

Viscoelastic properties of materials

Polymers composed of long molecular chains have unique viscoelastic properties, which combine the characteristics of elastic solids and Newtonian fluids. The classical theory of elasticity describes the mechanical properties of elastic solids where stress is proportional to strain in small deformations. Such response to stress is independent of strain rate. The classical theory of hydrodynamics describes the properties of viscous fluid, for which stress response depends on strain rate. This solidlike and liquidlike behaviour of polymers can be modelled mechanically with combinations of springs and dashpots, making for both elastic and viscous behaviour of viscoelastic materials such as bitumen.

Dynamic moduli of polymers

The viscoelastic property of a polymer is studied by dynamic mechanical analysis where a sinusoidal force (stress σ) is applied to a material and the resulting displacement (strain) is measured. For a perfectly elastic solid, the resulting strain and the stress will be perfectly in phase. For a purely viscous fluid, there will be a 90 degree phase lag of strain with respect to stress. Viscoelastic polymers have characteristics in between, where some phase lag will occur during DMA tests. When a sinusoidal strain is applied and the stress response is shifted by a phase angle δ, the following equations hold:

Stress: \sigma = \sigma_0 \sin(\omega t + \delta)

Strain: \varepsilon = \varepsilon_0 \sin(\omega t)

where ω is the frequency of strain oscillation, t is time, and δ is the phase lag between stress and strain.

Consider the purely elastic case, where stress is proportional to strain given by Young's modulus E. We have

\sigma(t) = E\,\varepsilon(t)

and the phase lag δ = 0. Now for the purely viscous case, where stress is proportional to strain rate,

\sigma(t) = K\,\frac{d\varepsilon}{dt}

and the phase lag δ = 90°.

The storage modulus measures the stored energy, representing the elastic portion, and the loss modulus measures the energy dissipated as heat, representing the viscous portion. The tensile storage and loss moduli are defined as follows:

Storage modulus: E' = \frac{\sigma_0}{\varepsilon_0}\cos\delta

Loss modulus: E'' = \frac{\sigma_0}{\varepsilon_0}\sin\delta

Phase angle: \tan\delta = \frac{E''}{E'}

Similarly, in the shearing instead of tension case, we also define shear storage and loss moduli, G' and G''.

Complex variables can be used to express the moduli E* and G* as follows:

E^* = E' + iE'', \qquad G^* = G' + iG''

where i is the imaginary unit (i² = −1).

Derivation of dynamic moduli

Shear stress of a finite element in one direction can be expressed with the relaxation modulus G(t − t') and the strain rate, integrated over all past times t' up to the current time t:

\sigma(t) = \int_{-\infty}^{t} G(t - t')\,\dot\gamma(t')\,dt'

With strain rate \dot\gamma(t') = \gamma_0\,\omega\cos(\omega t') and the substitution s = t − t', one obtains

\sigma(t) = \gamma_0\,\omega \int_{0}^{\infty} G(s)\cos(\omega(t - s))\,ds

Application of the trigonometric addition theorem cos(ω(t − s)) = cos(ωt)cos(ωs) + sin(ωt)sin(ωs) leads to the expression

\sigma(t) = \gamma_0\left[G'(\omega)\sin(\omega t) + G''(\omega)\cos(\omega t)\right]

with converging integrals, if G(s) → 0 for s → ∞,

G'(\omega) = \omega\int_0^\infty G(s)\sin(\omega s)\,ds, \qquad G''(\omega) = \omega\int_0^\infty G(s)\cos(\omega s)\,ds

which depend on frequency but not on time. Extension of \sigma(t) = \sigma_0 \sin(\omega t + \delta) with the trigonometric identity sin(ωt + δ) = sin(ωt)cos δ + cos(ωt)sin δ leads to

\sigma(t) = \sigma_0\cos\delta\,\sin(\omega t) + \sigma_0\sin\delta\,\cos(\omega t)

Comparison of the two equations leads to the definition of G' and G'':

G' = \frac{\sigma_0}{\gamma_0}\cos\delta, \qquad G'' = \frac{\sigma_0}{\gamma_0}\sin\delta

Applications

Measuring glass transition temperature

One important application of DMA is measurement of the glass transition temperature of polymers.
Amorphous polymers have different glass transition temperatures, above which the material will have rubbery properties instead of glassy behavior and the stiffness of the material will drop dramatically along with a reduction in its viscosity. At the glass transition, the storage modulus decreases dramatically and the loss modulus reaches a maximum. Temperature-sweeping DMA is often used to characterize the glass transition temperature of a material.

Polymer composition

Varying the composition of monomers and cross-linking can add or change the functionality of a polymer in ways that alter the results obtained from DMA. An example of such changes can be seen by blending ethylene propylene diene monomer (EPDM) with styrene-butadiene rubber (SBR) and different cross-linking or curing systems. Nair et al. abbreviate blends as E0S, E20S, etc., where the number gives the weight percent of EPDM in the blend and S denotes sulfur as the curing agent. Increasing the amount of SBR in the blend decreased the storage modulus due to intermolecular and intramolecular interactions that can alter the physical state of the polymer. Within the glassy region, EPDM shows the highest storage modulus due to stronger intermolecular interactions (SBR has more steric hindrance that makes it less crystalline). In the rubbery region, SBR shows the highest storage modulus resulting from its ability to resist intermolecular slippage. When compared to sulfur, the higher storage modulus occurred for blends cured with dicumyl peroxide (DCP) because of the relative strengths of C-C and C-S bonds. Incorporation of reinforcing fillers into the polymer blends also increases the storage modulus at the expense of limiting the loss tangent peak height. DMA can also be used to effectively evaluate the miscibility of polymers. The E40S blend had a much broader transition with a shoulder instead of a steep drop-off in a storage modulus plot of varying blend ratios, indicating that there are areas that are not homogeneous.

Instrumentation

The instrumentation of a DMA consists of a displacement sensor such as a linear variable differential transformer, which measures a change in voltage as a result of the instrument probe moving through a magnetic core, a temperature control system or furnace, a drive motor (a linear motor for probe loading which provides load for the applied force), a drive shaft support and guidance system to act as a guide for the force from the motor to the sample, and sample clamps in order to hold the sample being tested. Depending on what is being measured, samples will be prepared and handled differently. A general schematic of the primary components of a DMA instrument is shown in figure 3.

Types of analyzers

There are two main types of DMA analyzers used currently: forced resonance analyzers and free resonance analyzers. Free resonance analyzers measure the free damped oscillations of the sample being tested by suspending and swinging the sample. A restriction of free resonance analyzers is that they are limited to rod- or rectangular-shaped samples, but samples that can be woven or braided are also applicable. Forced resonance analyzers are the more common type of analyzers available in instrumentation today. These types of analyzers force the sample to oscillate at a certain frequency and are reliable for performing a temperature sweep. Analyzers are made for both stress (force) and strain (displacement) control.
In strain control, the probe is displaced and the resulting stress of the sample is measured by implementing a force balance transducer, which utilizes different shafts. The advantages of strain control include a better short-time response for materials of low viscosity, and stress-relaxation experiments are done with relative ease. In stress control, a set force is applied to the sample and several other experimental conditions (temperature, frequency, or time) can be varied. Stress control is typically less expensive than strain control because only one shaft is needed, but this also makes it harder to use. Some advantages of stress control include the fact that the structure of the sample is less likely to be destroyed and that longer relaxation times and longer creep studies can be done with much more ease. Characterizing low-viscosity materials comes at the disadvantage of short time responses that are limited by inertia. Stress- and strain-control analyzers give about the same results as long as characterization is within the linear region of the polymer in question. However, stress control lends a more realistic response because polymers have a tendency to resist a load.

Stress and strain can be applied via torsional or axial analyzers. Torsional analyzers are mainly used for liquids or melts but can also be implemented for some solid samples since the force is applied in a twisting motion. The instrument can do creep-recovery, stress–relaxation, and stress–strain experiments. Axial analyzers are used for solid or semisolid materials. They can do flexure, tensile, and compression testing (even shear and liquid specimens if desired). These analyzers can test higher-modulus materials than torsional analyzers. The instrument can do thermomechanical analysis (TMA) studies in addition to the experiments that torsional analyzers can do. Figure 4 shows the general difference between the two applications of stress and strain.

Changing sample geometry and fixtures can make stress and strain analyzers virtually indistinguishable from one another except at the extreme ends of sample phases, i.e. very fluid or very rigid materials. Common geometries and fixtures for axial analyzers include three-point and four-point bending, dual and single cantilever, parallel plate and variants, bulk, extension/tensile, and shear plates and sandwiches. Geometries and fixtures for torsional analyzers consist of parallel plates, cone-and-plate, Couette, and torsional beam and braid.

In order to utilize DMA to characterize materials, one must address the fact that small dimensional changes can also lead to large inaccuracies in certain tests. Inertia and shear heating can affect the results of either forced or free resonance analyzers, especially in fluid samples.

Test modes

Two major kinds of test modes can be used to probe the viscoelastic properties of polymers: temperature sweep and frequency sweep tests. A third, less commonly studied test mode is dynamic stress–strain testing.

Temperature sweep

A common test method involves measuring the complex modulus at low constant frequency while varying the sample temperature. A prominent peak in tan δ appears at the glass transition temperature of the polymer. Secondary transitions can also be observed, which can be attributed to the temperature-dependent activation of a wide variety of chain motions. In semi-crystalline polymers, separate transitions can be observed for the crystalline and amorphous sections. Similarly, multiple transitions are often found in polymer blends.
For instance, blends of polycarbonate and poly(acrylonitrile-butadiene-styrene) were studied with the intention of developing a polycarbonate-based material without polycarbonate's tendency towards brittle failure. Temperature-sweeping DMA of the blends showed two strong transitions coincident with the glass transition temperatures of PC and PABS, consistent with the finding that the two polymers were immiscible.

Frequency sweep

A sample can be held at a fixed temperature and can be tested at varying frequency. Peaks in tan δ and in E'' with respect to frequency can be associated with the glass transition, which corresponds to the ability of chains to move past each other. This implies that the glass transition is dependent on strain rate in addition to temperature. Secondary transitions may be observed as well.

The Maxwell model provides a convenient, if not strictly accurate, description of viscoelastic materials. Applying a sinusoidal stress to a Maxwell model gives:

E'(\omega) = \frac{E\,\omega^2\tau^2}{1 + \omega^2\tau^2}, \qquad E''(\omega) = \frac{E\,\omega\tau}{1 + \omega^2\tau^2}

where τ is the Maxwell relaxation time. Thus, a peak in E'' is observed at the frequency ω = 1/τ. A real polymer may have several different relaxation times associated with different molecular motions.

Dynamic stress–strain studies

By gradually increasing the amplitude of oscillations, one can perform a dynamic stress–strain measurement. The variation of storage and loss moduli with increasing stress can be used for materials characterization, and to determine the upper bound of the material's linear stress–strain regime.

Combined sweep

Because glass transitions and secondary transitions are seen in both frequency studies and temperature studies, there is interest in multidimensional studies, where temperature sweeps are conducted at a variety of frequencies or frequency sweeps are conducted at a variety of temperatures. This sort of study provides a rich characterization of the material, and can lend information about the nature of the molecular motion responsible for the transition. For instance, studies of polystyrene (Tg ≈ 110 °C) have noted a secondary transition near room temperature. Temperature-frequency studies showed that the transition temperature is largely frequency-independent, suggesting that this transition results from a motion of a small number of atoms; it has been suggested that this is the result of the rotation of the phenyl group around the main chain.

See also Maxwell material Standard linear solid material Thermomechanical analysis Dielectric thermal analysis Time–temperature superposition Electroactive polymers

References

External links
Dynamical Mechanical Analysis Retrieved May 21, 2019.

Materials science Scientific techniques
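To make the frequency-sweep discussion concrete, here is a short Python sketch with assumed illustration values (the modulus E and relaxation time τ below are not from any cited study). It generates the stress response of a single Maxwell element at several drive frequencies, recovers E' and E'' from the sampled signals by projecting onto sine and cosine at the drive frequency (a lock-in style analysis of the kind DMA instruments perform), and confirms the loss-modulus peak at ω = 1/τ noted above:

```python
import numpy as np

def maxwell_moduli(omega, E=1.0e9, tau=0.1):
    """Single Maxwell element (spring E in series with a dashpot; assumed
    values): E' = E(wt)^2/(1+(wt)^2), E'' = E(wt)/(1+(wt)^2), wt = omega*tau."""
    wt = omega * tau
    return E * wt**2 / (1 + wt**2), E * wt / (1 + wt**2)

def lockin_moduli(omega, t, strain, stress):
    """Recover E' and E'' by projecting sampled signals onto sin/cos at the
    drive frequency (exact when the record spans whole periods)."""
    eps0 = 2.0 * np.mean(strain * np.sin(omega * t))
    e1 = 2.0 * np.mean(stress * np.sin(omega * t)) / eps0
    e2 = 2.0 * np.mean(stress * np.cos(omega * t)) / eps0
    return e1, e2

tau = 0.1
for omega in (1.0, 1.0 / tau, 100.0):                 # rad/s, around 1/tau
    E1, E2 = maxwell_moduli(omega, tau=tau)
    t = np.linspace(0.0, 20 * 2 * np.pi / omega, 20000, endpoint=False)
    strain = 0.01 * np.sin(omega * t)
    stress = 0.01 * (E1 * np.sin(omega * t) + E2 * np.cos(omega * t))
    print(omega, lockin_moduli(omega, t, strain, stress))
# E'' is largest at omega = 1/tau (peak value E/2), as stated above.
```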
Dynamic mechanical analysis
[ "Physics", "Materials_science", "Engineering" ]
2,537
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
258,833
https://en.wikipedia.org/wiki/Thermal%20analysis
Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature. Several methods are commonly used – these are distinguished from one another by the property which is measured:

Dielectric thermal analysis: dielectric permittivity and loss factor
Differential thermal analysis: temperature difference versus temperature or time
Differential scanning calorimetry: heat flow changes versus temperature or time
Dilatometry: volume changes with temperature change
Dynamic mechanical analysis: measures storage modulus (stiffness) and loss modulus (damping) versus temperature, time and frequency
Evolved gas analysis: analysis of gases evolved during heating of a material, usually decomposition products
Isothermal titration calorimetry
Isothermal microcalorimetry
Laser flash analysis: thermal diffusivity and thermal conductivity
Thermogravimetric analysis: mass change versus temperature or time
Thermomechanical analysis: dimensional changes versus temperature or time
Thermo-optical analysis: optical properties
Derivatography: a complex method in thermal analysis

Simultaneous thermal analysis generally refers to the simultaneous application of thermogravimetry and differential scanning calorimetry to one and the same sample in a single instrument. The test conditions are perfectly identical for the thermogravimetric analysis and differential scanning calorimetry signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the simultaneous thermal analysis instrument to an Evolved Gas Analyzer like Fourier transform infrared spectroscopy or mass spectrometry.

Other, less common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. The essence of all these techniques is that the sample's response is recorded as a function of temperature (and time).

It is usual to control the temperature in a predetermined way – either by a continuous increase or decrease in temperature at a constant rate (linear heating/cooling) or by carrying out a series of determinations at different temperatures (stepwise isothermal measurements). More advanced temperature profiles have been developed which use an oscillating (usually sine or square wave) heating rate (Modulated Temperature Thermal Analysis) or modify the heating rate in response to changes in the system's properties (Sample Controlled Thermal Analysis); a minimal code sketch of such profiles follows at the end of this section.

In addition to controlling the temperature of the sample, it is also important to control its environment (e.g. atmosphere). Measurements may be carried out in air or under an inert gas (e.g. nitrogen or helium). Reducing or reactive atmospheres have also been used and measurements are even carried out with the sample surrounded by water or other liquids. Inverse gas chromatography is a technique which studies the interaction of gases and vapours with a surface - measurements are often made at different temperatures so that these experiments can be considered to come under the auspices of Thermal Analysis. Atomic force microscopy uses a fine stylus to map the topography and mechanical properties of surfaces to high spatial resolution. By controlling the temperature of the heated tip and/or the sample a form of spatially resolved thermal analysis can be carried out.
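The temperature programmes described above (linear ramps, stepwise isothermal holds, modulated profiles) are simple to express in code. A minimal sketch, with assumed example values for the ramp rate and modulation:

```python
import numpy as np

def temperature_program(t, T0=25.0, beta=10.0, A=0.0, period=60.0):
    """Sample temperature (deg C) at time t (s): a linear ramp of beta K/min,
    optionally modulated by a sine of amplitude A (K) and the given period (s),
    i.e. the modulated-temperature mode described above. Values are assumed."""
    T = T0 + beta * t / 60.0
    if A:
        T = T + A * np.sin(2 * np.pi * t / period)
    return T

t = np.linspace(0.0, 600.0, 601)                  # a 10-minute segment
linear = temperature_program(t)                   # plain 10 K/min ramp
modulated = temperature_program(t, A=1.0)         # +/- 1 K sine on the ramp
print(linear[-1])                                 # 125.0 deg C after 10 min
```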
Thermal analysis is also often used as a term for the study of heat transfer through structures. Many of the basic engineering data for modelling such systems come from measurements of heat capacity and thermal conductivity.

Polymers

Polymers represent another large area in which thermal analysis finds strong applications. Thermoplastic polymers are commonly found in everyday packaging and household items; differential scanning calorimetry is used to analyse the raw materials, to assess the effects of the many additives used (including stabilisers and colours), and to fine-tune the moulding or extrusion processing. An example is oxidation induction time by differential scanning calorimetry, which can determine the amount of oxidation stabiliser present in a thermoplastic (usually a polyolefin) polymer material. Compositional analysis is often made using thermogravimetric analysis, which can separate fillers, polymer resin and other additives. Thermogravimetric analysis can also give an indication of thermal stability and the effects of additives such as flame retardants. (See J. H. Flynn and L. A. Wall, "General Treatment of the Thermogravimetry of Polymers, Part A", J. Res. Nat. Bur. Standards, 1966, Vol. 70A, No. 5, 487.)

Thermal analysis of composite materials, such as carbon fibre composites or glass epoxy composites, is often carried out using dynamic mechanical analysis, which can measure the stiffness of materials by determining the modulus and damping (energy absorbing) properties of the material. Aerospace companies often employ these analysers in routine quality control to ensure that products being manufactured meet the required strength specifications. Formula 1 racing car manufacturers also have similar requirements. Differential scanning calorimetry is used to determine the curing properties of the resins used in composite materials, and can also confirm whether a resin can be cured and how much heat is evolved during that process. Application of predictive kinetics analysis can help to fine-tune manufacturing processes. Another example is that thermogravimetric analysis can be used to measure the fibre content of composites by heating a sample to remove the resin by application of heat and then determining the mass remaining.

Metals

Production of many metals (cast iron, grey iron, ductile iron, compacted graphite iron, 3000 series aluminium alloys, copper alloys, silver, and complex steels) is aided by a production technique also referred to as thermal analysis. A sample of liquid metal is removed from the furnace or ladle and poured into a sample cup with a thermocouple embedded in it. The temperature is then monitored, and the phase diagram arrests (liquidus, eutectic, and solidus) are noted. From this information chemical composition based on the phase diagram can be calculated, or the crystalline structure of the cast sample can be estimated, especially for silicon morphology in hypo-eutectic Al-Si cast alloys. Strictly speaking these measurements are cooling curves and a form of sample controlled thermal analysis whereby the cooling rate of the sample is dependent on the cup material (usually bonded sand) and sample volume, which is normally a constant due to the use of standard-sized sample cups. To detect phase evolution and corresponding characteristic temperatures, the cooling curve and its first derivative curve should be considered simultaneously. Examination of cooling and derivative curves is done by using appropriate data analysis software.
The process consists of plotting, smoothing and curve fitting, as well as identifying the reaction points and characteristic parameters. This procedure is known as Computer-Aided Cooling Curve Thermal Analysis. Advanced techniques use differential curves to locate endothermic inflection points, such as gas holes and shrinkage, or exothermic phases, such as carbides, beta crystals, intercrystalline copper, magnesium silicide, iron phosphides and other phases, as they solidify. Detection limits seem to be around 0.01% to 0.03% of volume. In addition, integration of the area between the zero curve and the first derivative is a measure of the specific heat of that part of the solidification, which can lead to rough estimates of the percent volume of a phase. (Something has to be either known or assumed about the specific heat of the phase versus the overall specific heat.) In spite of this limitation, this method is better than estimates from two-dimensional microanalysis, and a lot faster than chemical dissolution.

Foods

Most foods are subjected to variations in their temperature during production, transport, storage, preparation and consumption, e.g., pasteurization, sterilization, evaporation, cooking, freezing, chilling, etc. Temperature changes cause alterations in the physical and chemical properties of food components which influence the overall properties of the final product, e.g., taste, appearance, texture and stability. Chemical reactions such as hydrolysis, oxidation or reduction may be promoted, or physical changes, such as evaporation, melting, crystallization, aggregation or gelation may occur. A better understanding of the influence of temperature on the properties of foods enables food manufacturers to optimize processing conditions and improve product quality. It is therefore important for food scientists to have analytical techniques to monitor the changes that occur in foods when their temperature varies. These techniques are often grouped under the general heading of thermal analysis. In principle, most analytical techniques can be used, or easily adapted, to monitor the temperature-dependent properties of foods, e.g., spectroscopic (nuclear magnetic resonance, UV-visible, infrared spectroscopy, fluorescence), scattering (light, X-rays, neutrons), physical (mass, density, rheology, heat capacity) etc. Nevertheless, at present the term thermal analysis is usually reserved for a narrow range of techniques that measure changes in the physical properties of foods with temperature (TG/DTG, differential thermal analysis, differential scanning calorimetry and transition temperature).

Printed circuit boards

Power dissipation is an important issue in present-day PCB design. Power dissipation will result in temperature differences and pose a thermal problem to a chip. In addition to the issue of reliability, excess heat will also negatively affect electrical performance and safety. The working temperature of an IC should therefore be kept below the maximum allowable limit of the worst case. In general, the temperatures of junction and ambient are 125 °C and 55 °C, respectively. The ever-shrinking chip size causes the heat to concentrate within a small area and leads to high power density. Furthermore, denser transistors in a monolithic chip and higher operating frequencies worsen the power dissipation. Removing the heat effectively becomes the critical issue to be resolved.
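Returning to the metals section: the computer-aided cooling curve procedure described there (plot, smooth, differentiate, locate arrests) can be sketched in a few lines. The cooling curve below is synthetic, and the plateau-detection threshold is an arbitrary assumption:

```python
import numpy as np

# Synthetic cooling curve: Newtonian cooling with one solidification arrest
# (all numbers are assumed for illustration).
t = np.linspace(0.0, 300.0, 3001)
T = 700.0 * np.exp(-t / 400.0) + 25.0
arrest = (t > 60.0) & (t < 110.0)
T[arrest] = T[600]                                  # latent-heat plateau from t = 60 s
T += np.random.default_rng(1).normal(0.0, 0.2, t.size)   # measurement noise

# Smooth, differentiate, and find where dT/dt returns to ~0 (the arrest):
kernel = np.ones(51) / 51.0                         # simple boxcar smoother
T_smooth = np.convolve(T, kernel, mode='same')
dTdt = np.gradient(T_smooth, t)
plateau = np.where(np.abs(dTdt[100:-100]) < 0.05)[0] + 100
print("arrest temperature ~", round(float(T_smooth[plateau].mean()), 1), "deg C")
```

Real analysis software fits baselines ("zero curves") and integrates between them and the derivative, as described above, but the arrest-finding step is essentially this derivative thresholding.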
References External links Thermal Analysis, Cambridge University International Confederation for Thermal Analysis and Calorimetry Biological processes Calorimetry Chemical processes Heat transfer Materials science
Thermal analysis
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,050
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Materials science", "Chemical processes", "Thermodynamics", "nan", "Chemical process engineering" ]
258,980
https://en.wikipedia.org/wiki/Ergodic%20hypothesis
In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time. Liouville's theorem states that, for a Hamiltonian system, the local density of microstates following a particle path through phase space is constant as viewed by an observer moving with the ensemble (i.e., the convective time derivative is zero). Thus, if the microstates are uniformly distributed in phase space initially, they will remain so at all times. But Liouville's theorem does not imply that the ergodic hypothesis holds for all Hamiltonian systems. The ergodic hypothesis is often assumed in the statistical analysis of computational physics. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. This assumption—that it is as good to simulate a system over a long time as it is to make many independent realizations of the same system—is not always correct. (See, for example, the Fermi–Pasta–Ulam–Tsingou experiment of 1953.) Assumption of the ergodic hypothesis allows proof that certain types of perpetual motion machines of the second kind are impossible. Systems that are ergodic are said to have the property of ergodicity; a broad range of systems in geometry, physics, and probability are ergodic. Ergodic systems are studied in ergodic theory. Phenomenology In macroscopic systems, the timescales over which a system can truly explore the entirety of its own phase space can be sufficiently large that the thermodynamic equilibrium state exhibits some form of ergodicity breaking. A common example is that of spontaneous magnetisation in ferromagnetic systems, whereby below the Curie temperature the system preferentially adopts a non-zero magnetisation even though the ergodic hypothesis would imply that no net magnetisation should exist by virtue of the system exploring all states whose time-averaged magnetisation should be zero. The fact that macroscopic systems often violate the literal form of the ergodic hypothesis is an example of spontaneous symmetry breaking. However, complex disordered systems such as a spin glass show an even more complicated form of ergodicity breaking where the properties of the thermodynamic equilibrium state seen in practice are much more difficult to predict purely by symmetry arguments. Also conventional glasses (e.g. window glasses) violate ergodicity in a complicated manner. In practice this means that on sufficiently short time scales (e.g. fractions of a second, minutes, or a few hours) the systems may behave as solids, i.e. with a positive shear modulus, but on extremely long scales, e.g. over millennia or eons, as liquids, or with two or more time scales and plateaux in between. Ergodic hypothesis in finance Models used in finance and investment assume ergodicity, explicitly or implicitly. The ergodic hypothesis is prevalent in modern portfolio theory, discounted cash flow (DCF) models, and aggregate indicator models that infuse macroeconomics, among others. The situations modeled by these theories can be useful. But often they are only useful during much, but not all, of any particular time period under study.
They can therefore miss some of the largest deviations from the standard model, such as financial crises, debt crises and systemic risk in the banking system that occur only infrequently. Nassim Nicholas Taleb has argued that a very important part of empirical reality in finance and investment is non-ergodic. An even statistical distribution of probabilities, where the system returns to every possible state an infinite number of times, is simply not the case we observe in situations where "absorbing states" are reached, a state where ruin is seen. The death of an individual, or total loss of everything, or the devolution or dismemberment of a nation state and the legal regime that accompanied it, are all absorbing states. Thus, in finance, path dependence matters. A path where an individual, firm or country hits a "stop"—an absorbing barrier, "anything that prevents people with skin in the game from emerging from it, and to which the system will invariably tend. Let us call these situations ruin, as the entity cannot emerge from the condition. The central problem is that if there is a possibility of ruin, cost benefit analyses are no longer possible."—will be non-ergodic. All traditional models based on standard probabilistic statistics break down in these extreme situations. The emerging field of ergodicity economics is beginning to show how including non-ergodic dynamics addresses some of the criticisms of neoclassical and pluralist economics; and, practically, what investors and entrepreneurs can do to correct for the typical outcome of a business or investment fund (under non-ergodic capital dynamics) being less than the expectation value. This correction is necessary for the regenerative economy described by regenerative economic theory to work in practice. Ergodic hypothesis in social science In the social sciences, the ergodic hypothesis corresponds to the assumption that individuals are representative of groups, and vice versa, that group averages can adequately characterize what might be seen in an individual. This appears to not be the case: group level data often gives a poor indication of individual level variation, as individual standard deviations (SDs) tend to be almost eight times larger than group level SDs of the same people. Consequently, a third of the individual observations fall outside a 99.9% confidence interval of group level data. See also Ergodic process Ergodic theory, a branch of mathematics concerned with a more general formulation of ergodicity Ergodicity Loschmidt's paradox Poincaré recurrence theorem Lindy effect References Ergodic theory Hypotheses Statistical mechanics Philosophy of thermal and statistical physics Concepts in physics
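The gap between ensemble averages and time averages discussed in the finance section above can be made concrete with a standard toy model from the ergodicity-economics literature, the 50/50 multiplicative gamble. The payoffs and sample sizes below are illustrative choices, not taken from this article.

```python
import numpy as np

# Each round, wealth is multiplied by 1.5 on heads and 0.6 on tails.
# The ensemble (expectation) grows 5% per round (0.5*1.5 + 0.5*0.6 = 1.05),
# yet the time-average growth factor of a typical trajectory is
# sqrt(1.5*0.6) ~ 0.949, i.e. decay -- the process is not ergodic.
rng = np.random.default_rng(1)
players, rounds = 100_000, 30
wealth = np.ones(players)
for _ in range(rounds):
    heads = rng.random(players) < 0.5
    wealth *= np.where(heads, 1.5, 0.6)

print(f"ensemble average wealth: {wealth.mean():.2f}")    # ~ 1.05**30 ~ 4.3
print(f"median (typical) wealth: {np.median(wealth):.3f}")  # ~ 0.9**15 ~ 0.21
```

The mean is pulled up by a handful of lucky trajectories while almost every individual path decays, which is exactly the distinction between expectation value and typical outcome described above.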
Ergodic hypothesis
[ "Physics", "Chemistry", "Mathematics" ]
1,279
[ "Philosophy of thermal and statistical physics", "Ergodic theory", "Thermodynamics", "nan", "Statistical mechanics", "Dynamical systems" ]
258,986
https://en.wikipedia.org/wiki/Ergodic%20theory
Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc. Thus, the statistics with which we are concerned are properties of the dynamics. Ergodic theory, like probability theory, is based on general notions of measure theory. Its initial development was motivated by problems of statistical physics. A central concern of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long time. The first result in this direction is the Poincaré recurrence theorem, which claims that almost all points in any subset of the phase space eventually revisit the set. Systems for which the Poincaré recurrence theorem holds are conservative systems; thus all ergodic systems are conservative. More precise information is provided by various ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. Two of the most important theorems are those of Birkhoff (1931) and von Neumann which assert the existence of a time average along each trajectory. For the special class of ergodic systems, this time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state. Stronger properties, such as mixing and equidistribution, have also been extensively studied. The problem of metric classification of systems is another important part of the abstract ergodic theory. An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems. The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space. Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature. Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions). Ergodic transformations Ergodic theory is often concerned with ergodic transformations. The intuition behind such transformations, which act on a given set, is that they do a thorough job "stirring" the elements of that set. E.g. if the set is a quantity of hot oatmeal in a bowl, and if a spoonful of syrup is dropped into the bowl, then iterations of the inverse of an ergodic transformation of the oatmeal will not allow the syrup to remain in a local subregion of the oatmeal, but will distribute the syrup evenly throughout. 
At the same time, these iterations will not compress or dilate any portion of the oatmeal: they preserve the measure that is density. The formal definition is as follows: Let T : X → X be a measure-preserving transformation on a measure space (X, Σ, μ), with μ(X) = 1. Then T is ergodic if for every E in Σ with μ(T^(−1)(E) Δ E) = 0 (that is, E is essentially invariant), either μ(E) = 0 or μ(E) = 1. The operator Δ here is the symmetric difference of sets, equivalent to the exclusive-or operation with respect to set membership. The condition that the symmetric difference be measure zero is called being essentially invariant. Examples An irrational rotation of the circle R/Z, T: x → x + θ, where θ is irrational, is ergodic. This transformation has even stronger properties of unique ergodicity, minimality, and equidistribution. By contrast, if θ = p/q is rational (in lowest terms) then T is periodic, with period q, and thus cannot be ergodic: for any interval I of length a, 0 < a < 1/q, its orbit under T (that is, the union of I, T(I), ..., T^(q−1)(I), which contains the image of I under any number of applications of T) is a T-invariant mod 0 set that is a union of q intervals of length a, hence it has measure qa strictly between 0 and 1. Let G be a compact abelian group, μ the normalized Haar measure, and T a group automorphism of G. Let G* be the Pontryagin dual group, consisting of the continuous characters of G, and T* be the corresponding adjoint automorphism of G*. The automorphism T is ergodic if and only if the equality (T*)^n(χ) = χ is possible only when n = 0 or χ is the trivial character of G. In particular, if G is the n-dimensional torus and the automorphism T is represented by a unimodular matrix A then T is ergodic if and only if no eigenvalue of A is a root of unity. A Bernoulli shift is ergodic. More generally, ergodicity of the shift transformation associated with a sequence of i.i.d. random variables and some more general stationary processes follows from Kolmogorov's zero–one law. Ergodicity of a continuous dynamical system means that its trajectories "spread around" the phase space. A system with a compact phase space which has a non-constant first integral cannot be ergodic. This applies, in particular, to Hamiltonian systems with a first integral I functionally independent from the Hamilton function H and a compact level set X = {(p,q): H(p,q) = E} of constant energy. Liouville's theorem implies the existence of a finite invariant measure on X, but the dynamics of the system is constrained to the level sets of I on X, hence the system possesses invariant sets of positive but less than full measure. A property of continuous dynamical systems that is the opposite of ergodicity is complete integrability. Ergodic theorems Let T: X → X be a measure-preserving transformation on a measure space (X, Σ, μ) and suppose ƒ is a μ-integrable function, i.e. ƒ ∈ L1(μ). Then we define the following averages: Time average: This is defined as the average (if it exists) over iterations of T starting from some initial point x: f̂(x) = lim(n→∞) (1/n) Σ(k=0 to n−1) ƒ(T^k x). Space average: If μ(X) is finite and nonzero, we can consider the space or phase average of ƒ: f̄ = (1/μ(X)) ∫ ƒ dμ. In general the time average and space average may be different. But if the transformation is ergodic, and the measure is invariant, then the time average is equal to the space average almost everywhere. This is the celebrated ergodic theorem, in an abstract form due to George David Birkhoff. (Actually, Birkhoff's paper considers not the abstract general case but only the case of dynamical systems arising from differential equations on a smooth manifold.)
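A minimal numerical illustration of the theorem, using the irrational rotation listed among the examples above; the golden-mean angle and the particular observable are arbitrary choices for this sketch.

```python
import numpy as np

# Birkhoff's theorem on the irrational rotation T: x -> x + theta (mod 1),
# which is ergodic for Lebesgue measure. The observable f is arbitrary.
theta = (np.sqrt(5.0) - 1.0) / 2.0          # irrational rotation angle
f = lambda x: np.cos(2.0 * np.pi * x) ** 2  # observable on the circle

# Time average along a single orbit ...
n = 1_000_000
orbit = (0.1 + theta * np.arange(n)) % 1.0
time_avg = f(orbit).mean()

# ... versus the space average, the integral of f over [0, 1), which is 1/2.
x = np.linspace(0.0, 1.0, 1_000_001)
space_avg = np.trapz(f(x), x)

print(f"time average:  {time_avg:.6f}")   # both ~ 0.500000
print(f"space average: {space_avg:.6f}")
```

The two printed numbers agree to several decimal places, as the theorem predicts for almost every starting point.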
The equidistribution theorem is a special case of the ergodic theorem, dealing specifically with the distribution of probabilities on the unit interval. More precisely, the pointwise or strong ergodic theorem states that the limit in the definition of the time average of ƒ exists for almost every x and that the (almost everywhere defined) limit function f̂ is integrable: f̂ ∈ L1(μ). Furthermore, f̂ is T-invariant, that is to say f̂ ∘ T = f̂ holds almost everywhere, and if μ(X) is finite, then the normalization is the same: ∫ f̂ dμ = ∫ ƒ dμ. In particular, if T is ergodic, then f̂ must be a constant (almost everywhere), and so one has that f̂ = f̄ almost everywhere. Joining the first to the last claim and assuming that μ(X) is finite and nonzero, one has that lim(n→∞) (1/n) Σ(k=0 to n−1) ƒ(T^k x) = (1/μ(X)) ∫ ƒ dμ for almost all x, i.e., for all x except for a set of measure zero. For an ergodic transformation, the time average equals the space average almost surely. As an example, assume that the measure space (X, Σ, μ) models the particles of a gas as above, and let ƒ(x) denote the velocity of the particle at position x. Then the pointwise ergodic theorem says that the average velocity of all particles at some given time is equal to the average velocity of one particle over time. A generalization of Birkhoff's theorem is Kingman's subadditive ergodic theorem. Probabilistic formulation: Birkhoff–Khinchin theorem Birkhoff–Khinchin theorem. Let ƒ be measurable, E(|ƒ|) < ∞, and T be a measure-preserving map. Then with probability 1: lim(n→∞) (1/n) Σ(k=0 to n−1) ƒ(T^k x) = E(ƒ | C)(x), where E(ƒ | C) is the conditional expectation given the σ-algebra C of invariant sets of T. Corollary (Pointwise Ergodic Theorem): In particular, if T is also ergodic, then C is the trivial σ-algebra, and thus with probability 1: lim(n→∞) (1/n) Σ(k=0 to n−1) ƒ(T^k x) = E(ƒ). Mean ergodic theorem Von Neumann's mean ergodic theorem holds in Hilbert spaces. Let U be a unitary operator on a Hilbert space H; more generally, an isometric linear operator (that is, a not necessarily surjective linear operator satisfying ‖Ux‖ = ‖x‖ for all x in H, or equivalently, satisfying U*U = I, but not necessarily UU* = I). Let P be the orthogonal projection onto {ψ ∈ H | Uψ = ψ} = ker(I − U). Then, for any x in H, we have: lim(N→∞) (1/N) Σ(n=0 to N−1) U^n x = Px, where the limit is with respect to the norm on H. In other words, the sequence of averages (1/N) Σ(n=0 to N−1) U^n converges to P in the strong operator topology. Indeed, it is not difficult to see that in this case any x ∈ H admits an orthogonal decomposition into parts from ker(I − U) and the closure of the range of (I − U) respectively. The former part is invariant in all the partial sums as N grows, while for the latter part, writing x = y − Uy, from the telescoping series one would have: (1/N) Σ(n=0 to N−1) U^n (y − Uy) = (1/N)(y − U^N y) → 0 in norm. This theorem specializes to the case in which the Hilbert space H consists of L2 functions on a measure space and U is an operator of the form (Uƒ)(x) = ƒ(T(x)), where T is a measure-preserving endomorphism of X, thought of in applications as representing a time-step of a discrete dynamical system. The ergodic theorem then asserts that the average behavior of a function ƒ over sufficiently large time-scales is approximated by the orthogonal component of ƒ which is time-invariant. In another form of the mean ergodic theorem, let Ut be a strongly continuous one-parameter group of unitary operators on H. Then the operator (1/T) ∫(0 to T) Ut dt converges in the strong operator topology as T → ∞. In fact, this result also extends to the case of a strongly continuous one-parameter semigroup of contractive operators on a reflexive space. Remark: Some intuition for the mean ergodic theorem can be developed by considering the case where complex numbers of unit length are regarded as unitary transformations on the complex plane (by left multiplication).
If we pick a single complex number of unit length (which we think of as U), it is intuitive that its powers will fill up the circle. Since the circle is symmetric around 0, it makes sense that the averages of the powers of U will converge to 0. Also, 0 is the only fixed point of U, and so the projection onto the space of fixed points must be the zero operator (which agrees with the limit just described). Convergence of the ergodic means in the Lp norms Let (X, Σ, μ) be as above a probability space with a measure preserving transformation T, and let 1 ≤ p ≤ ∞. The conditional expectation with respect to the sub-σ-algebra ΣT of the T-invariant sets is a linear projector ET of norm 1 of the Banach space Lp(X, Σ, μ) onto its closed subspace Lp(X, ΣT, μ). The latter may also be characterized as the space of all T-invariant Lp-functions on X. The ergodic means, as linear operators on Lp(X, Σ, μ), also have unit operator norm; and, as a simple consequence of the Birkhoff–Khinchin theorem, converge to the projector ET in the strong operator topology of Lp if 1 ≤ p < ∞, and in the weak operator topology if p = ∞. More is true: if 1 < p ≤ ∞, then the Wiener–Yoshida–Kakutani ergodic dominated convergence theorem states that the ergodic means of ƒ ∈ Lp are dominated in Lp; however, if ƒ ∈ L1, the ergodic means may fail to be equidominated in Lp. Finally, if ƒ is assumed to be in the Zygmund class, that is, |ƒ| log+(|ƒ|) is integrable, then the ergodic means are even dominated in L1. Sojourn time Let (X, Σ, μ) be a measure space such that μ(X) is finite and nonzero. The time spent in a measurable set A is called the sojourn time. An immediate consequence of the ergodic theorem is that, in an ergodic system, the relative measure of A is equal to the mean sojourn time: μ(A)/μ(X) = lim(n→∞) (1/n) Σ(k=0 to n−1) χA(T^k x) for all x except for a set of measure zero, where χA is the indicator function of A. The occurrence times of a measurable set A are defined as the set k1, k2, k3, ..., of times k such that T^k(x) is in A, sorted in increasing order. The differences between consecutive occurrence times Ri = ki − ki−1 are called the recurrence times of A. Another consequence of the ergodic theorem is that the average recurrence time of A is inversely proportional to the measure of A, assuming that the initial point x is in A, so that k0 = 0: (R1 + ... + Rn)/n → μ(X)/μ(A) almost surely as n → ∞. (See almost surely.) That is, the smaller A is, the longer it takes to return to it. Ergodic flows on manifolds The ergodicity of the geodesic flow on compact Riemann surfaces of variable negative curvature and on compact manifolds of constant negative curvature of any dimension was proved by Eberhard Hopf in 1939, although special cases had been studied earlier: see for example, Hadamard's billiards (1898) and Artin billiard (1924). The relation between geodesic flows on Riemann surfaces and one-parameter subgroups on SL(2, R) was described in 1952 by S. V. Fomin and I. M. Gelfand. The article on Anosov flows provides an example of ergodic flows on SL(2, R) and on Riemann surfaces of negative curvature. Much of the development described there generalizes to hyperbolic manifolds, since they can be viewed as quotients of the hyperbolic space by the action of a lattice in the semisimple Lie group SO(n,1). Ergodicity of the geodesic flow on Riemannian symmetric spaces was demonstrated by F. I. Mautner in 1957. In 1967 D. V. Anosov and Ya. G. Sinai proved ergodicity of the geodesic flow on compact manifolds of variable negative sectional curvature.
A simple criterion for the ergodicity of a homogeneous flow on a homogeneous space of a semisimple Lie group was given by Calvin C. Moore in 1966. Many of the theorems and results from this area of study are typical of rigidity theory. In the 1930s G. A. Hedlund proved that the horocycle flow on a compact hyperbolic surface is minimal and ergodic. Unique ergodicity of the flow was established by Hillel Furstenberg in 1972. Ratner's theorems provide a major generalization of ergodicity for unipotent flows on the homogeneous spaces of the form Γ \ G, where G is a Lie group and Γ is a lattice in G. In the last 20 years, there have been many works trying to find a measure-classification theorem similar to Ratner's theorems but for diagonalizable actions, motivated by conjectures of Furstenberg and Margulis. An important partial result (solving those conjectures with an extra assumption of positive entropy) was proved by Elon Lindenstrauss, and he was awarded the Fields Medal in 2010 for this result. See also Chaos theory Ergodic hypothesis Ergodic process Kruskal principle Lindy effect Lyapunov time – the time limit to the predictability of the system Maximal ergodic theorem Ornstein isomorphism theorem Statistical mechanics Symbolic dynamics References Vladimir Igorevich Arnol'd and André Avez, Ergodic Problems of Classical Mechanics. New York: W.A. Benjamin, 1968. Leo Breiman, Probability. Original edition published by Addison–Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. (See Chapter 6.) Karl Petersen, Ergodic Theory (Cambridge Studies in Advanced Mathematics). Cambridge: Cambridge University Press, 1990. Françoise Pène, Stochastic properties of dynamical systems, Cours spécialisés de la SMF, Volume 30, 2022. Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis (1993), appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference (1995), Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge. (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.) A. N. Shiryaev, Probability, 2nd ed., Springer, 1996, Sec. V.3. Andrzej Lasota and Michael C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, Second Edition, Springer, 1994. Manfred Einsiedler and Thomas Ward, Ergodic Theory with a view towards Number Theory, Springer, 2011. Jane Hawkins, Ergodic Dynamics: From Basic Theory to Applications, Springer, 2021. External links Ergodic Theory (16 June 2015) Notes by Cosma Rohilla Shalizi Ergodic theorem passes the test From Physics World
Ergodic theory
[ "Mathematics" ]
3,995
[ "Ergodic theory", "Dynamical systems" ]
23,764,437
https://en.wikipedia.org/wiki/Stem%20Cell%20Network
The Stem Cell Network (SCN) is a Canadian non-profit that supports stem cell and regenerative medicine research, trains the next generation of highly qualified personnel, and delivers outreach activities across Canada. The Network has been supported by the Government of Canada since its inception in 2001. SCN has catalyzed 25 clinical trials and 21 start-up companies, incubated several international and Canadian research networks and organizations, and established the Till & McCulloch Meetings, Canada's foremost stem cell research event. The organization is based in Ottawa, Ontario. Activities Annual Scientific Conference Since 2001, SCN has hosted an annual scientific conference. This conference is open to SCN investigators and trainees, and provides a forum to share new research. The conference takes place in a different Canadian city each year. In 2012, the annual conference was re-branded as the Till & McCulloch Meetings. The establishment of the Meetings ensured that the country's stem cell and regenerative medicine research community would continue to have a venue for collaboration and the sharing of important research. The Till & McCulloch Meetings are Canada's largest stem cell and regenerative medicine conference. Research Funding Programs Training The SCN training program includes studentships, fellowships, research grants and workshops. Since 2001, SCN has offered training opportunities to more than 5,000 trainees. Organization Member institutions SCN and its membership engage in collaborative funding and research activities. Current member institutions include: Partners References Medical and health organizations based in Ontario Stem cell research
Stem Cell Network
[ "Chemistry", "Biology" ]
310
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
12,825,821
https://en.wikipedia.org/wiki/Power%20system%20simulation
Electrical power system simulation involves power system modeling and network simulation in order to analyze electrical power systems using design/offline or real-time data. Power system simulation software is a class of computer simulation programs that focus on the operation of electrical power systems. These types of computer programs are used in a wide range of planning and operational situations for electric power systems. Applications of power system simulation include: long-term generation and transmission expansion planning, short-term operational simulations, and market analysis (e.g. price forecasting). These programs typically make use of mathematical optimization techniques such as linear programming, quadratic programming, and mixed integer programming. Multiple elements of a power system can be modelled. A power-flow study calculates the loading on transmission lines and the power necessary to be generated at generating stations, given the required loads to be served. A short circuit study or fault analysis calculates the short-circuit current that would flow at various points of interest in the system under study, for short-circuits between phases or from energized wires to ground. A coordination study allows selection and setting of protective relays and fuses to rapidly clear a short-circuit fault while minimizing effects on the rest of the power system. Transient or dynamic stability studies show the effect of events such as sudden load changes, short-circuits, or accidental disconnection of load on the synchronization of the generators in the system. Harmonic or power quality studies show the effect of non-linear loads such as lighting on the waveform of the power system, and allow recommendations to be made to mitigate severe distortion. An optimal power-flow study establishes the best combination of generating plant output to meet a given load requirement, so as to minimize production cost while maintaining desired stability and reliability; such models may be updated in near-real-time to allow guidance to system operators on the lowest-cost way to achieve economic dispatch. There are many power simulation software packages in commercial and non-commercial forms that range from utility-scale software to study tools. Load flow calculation The load-flow calculation is the most common network analysis tool for examining the undisturbed and disturbed network within the scope of operational and strategic planning. Using network topology, transmission line parameters, transformer parameters, generator location and limits, and load location and compensation, the load-flow calculation can provide voltage magnitudes and angles for all nodes and loading of network components, such as cables and transformers. With this information, compliance with operating limitations, such as those stipulated by voltage ranges and maximum loads, can be examined. This is, for example, important for determining the transmission capacity of underground cables, where the influence of cable bundling on the load capability of each cable also has to be taken into account. Due to the ability to determine losses and reactive-power allocation, load-flow calculation also supports the planning engineer in the investigation of the most economical operation mode of the network. When changing over from single and/or multi-phase infeed low-voltage meshed networks to isolated networks, load-flow calculation is essential for operational and economical reasons; a toy example of the linearised ("DC") variant of the calculation is sketched below.
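The following Python sketch solves a DC (linearised) power flow on an invented three-bus network. It is illustrative only: real load-flow studies solve the full AC equations, including voltage magnitudes and reactive power, and all line parameters and injections below are made up.

```python
import numpy as np

# Toy DC power flow. Lines: (from_bus, to_bus, susceptance in per-unit).
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]
injections = np.array([0.9, -0.4, -0.5])  # net injection per bus (p.u.), sums to 0

# Build the bus susceptance matrix B.
n = 3
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix bus 0 as the slack/reference (theta = 0) and solve B * theta = P
# on the remaining buses.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

# Line flows: P_ij = b_ij * (theta_i - theta_j); flows balance at each bus.
for i, j, b in lines:
    print(f"flow {i}->{j}: {b * (theta[i] - theta[j]):+.3f} p.u.")
```

Running this gives flows of roughly +0.541, +0.141 and +0.359 p.u., which balance the stated injections at every bus; checking such flows against line limits is exactly the compliance examination described above.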
Load-flow calculation is also the basis of all further network studies, such as motor start-up or investigation of scheduled or unscheduled outages of equipment within the outage simulation. Especially when investigating motor start-up, the load-flow calculation results give helpful hints, for example, of whether the motor can be started in spite of the voltage drop caused by the start-up current. Short circuit analysis Short circuit analysis analyzes the power flow after a fault occurs in a power network. The faults may be three-phase short circuit, one-phase grounded, two-phase short circuit, two-phase grounded, one-phase break, two-phase break or complex faults. Results of such an analysis may help determine the following: Magnitude of the fault current Circuit breaker capacity Rise in voltage in a single line due to ground fault Residual voltage and relay settings Interference due to power lines. Transient stability simulation The goal of transient stability simulation of power systems is to analyse the stability of a power system from sub-second to several tens of seconds. Stability in this sense is the ability of the system to quickly return to a stable operating condition after being exposed to a disturbance, such as a tree falling over an overhead line resulting in the automatic disconnection of that line by its protection systems. In engineering terms, a power system is deemed stable if the substation voltage levels and the rotational speeds of motors and generators return to their normal values in a quick and continuous manner. Models typically use the following inputs: Number, size and type of generators with any available mechanical, electrical, and control (governor, voltage regulation, etc.) parameters, a mix of residential, commercial and industrial load at each bus, location and specifications for distributed control devices such as tap-changing transformers, switched shunt compensation, static Var compensators, flexible AC transmission systems, etc., location and specifications for protection devices such as relays and load shedding, and location and specifications of any other relevant control and/or protection devices. The acceptable amount of time it takes grid voltages to return to their intended levels depends on the magnitude of the voltage disturbance; the most common standard is specified by the CBEMA curve. This curve informs both electronic equipment design and grid stability data reporting. Unit commitment The problem of unit commitment involves finding the least-cost dispatch of available generation resources to meet the electrical load. Generating resources can include a wide range of types: Nuclear Thermal (using coal, gas, other fossil fuels, or biomass) Renewables (including hydro, wind, wave-power, and solar) The key decision variables that are decided by the computer program are: Generation level (in megawatts) Number of generating units on The latter decisions are binary {0,1}, which means that the mathematical problem is not continuous. In addition, generating plants are subject to a number of complex technical constraints, including: Minimum stable operating level Maximum rate of ramping up or down Minimum time period the unit is up and/or down These constraints have many different variants; all this gives rise to a large class of mathematical optimization problems. Optimal power flow Electricity flows through an AC network according to Kirchhoff's Laws.
Transmission lines are subject to thermal limits (simple megawatt limits on flow), as well as voltage and electrical stability constraints. The simulator must calculate the flows in the AC network that result from any given combination of unit commitment and generator megawatt dispatch, and ensure that AC line flows are within both the thermal limits and the voltage and stability constraints. This may include contingencies such as the loss of any one transmission or generation element - a so-called security-constrained optimal power flow (SCOPF), and if the unit commitment is optimized inside this framework we have a security-constrained unit commitment (SCUC). In optimal power flow (OPF) the generalised scalar objective to be minimised is given by f(u0, x0), where u is a set of the control variables, x is a set of independent variables, and the subscript 0 indicates that the variable refers to the pre-contingency power system. The SCOPF is bound by equality and inequality constraint limits. The equality constraint limits are given by the pre- and post-contingency power-flow equations g^k(u^k, x^k) = 0, where k refers to the kth contingency case (k = 0 being the pre-contingency system). The equipment and operating limits are given by the following inequalities: u_min ≤ u^k ≤ u_max represent hard constraints on controls; x_min ≤ x^k ≤ x_max represent hard/soft constraints on variables; h(u^k, x^k) ≤ h_max represent other constraints, such as reactive reserve limits. The objective function in OPF can take on different forms relating to active or reactive power quantities that we wish to either minimise or maximise. For example, we may wish to minimise transmission losses or minimise real power generation costs on a power network. Other power flow solution methods like stochastic optimization incorporate the uncertainty found in modeling power systems by using the probability distributions of certain variables whose exact values are not known. When uncertainties in the constraints are present, such as for dynamic line ratings, chance constrained optimization can be used where the probability of violating a constraint is limited to a certain value. Another technique to model variability is the Monte Carlo method, in which different combinations of inputs and resulting outputs are considered based on the probability of their occurrence in the real world. This method can be applied to simulations for system security and unit commitment risk, and it is increasingly being used to model probabilistic load flow with renewable and/or distributed generation. Models of competitive behavior The cost of producing a megawatt of electrical energy is a function of: fuel price generation efficiency (the rate at which potential energy in the fuel is converted to electrical energy) operations and maintenance costs In addition to this, generating plants incur fixed costs including: plant construction costs, and fixed operations and maintenance costs Assuming perfect competition, the market-based price of electricity would be based purely on the cost of producing the next megawatt of power, the so-called short-run marginal cost (SRMC). This price however might not be sufficient to cover the fixed costs of generation, and thus power market prices rarely show purely SRMC pricing. In most established power markets, generators are free to offer their generation capacity at prices of their choosing. Competition and use of financial contracts keep these prices close to SRMC, but inevitably offers priced above SRMC do occur (for example during the California energy crisis of 2001).
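As a toy illustration of SRMC-based price formation, a merit-order dispatch can be sketched as follows. The generator fleet and demand figure are invented for the example; real markets add network constraints, ramping, contracts and strategic behaviour.

```python
# Merit-order dispatch: cheapest units run first, and the short-run
# marginal cost of the last unit dispatched sets the clearing price.
# (name, capacity in MW, SRMC in $/MWh) -- all values hypothetical.
generators = [("nuclear", 800, 8.0), ("coal", 600, 25.0),
              ("gas_ccgt", 500, 40.0), ("gas_peaker", 200, 95.0)]
demand_mw = 1_650.0

dispatch, remaining, price = {}, demand_mw, 0.0
for name, cap, srmc in sorted(generators, key=lambda g: g[2]):  # cheapest first
    take = min(cap, remaining)
    if take > 0:
        dispatch[name] = take
        price = srmc            # marginal unit sets the price
        remaining -= take

print(dispatch)                          # nuclear 800, coal 600, gas_ccgt 250
print(f"clearing price: {price} $/MWh")  # SRMC of the marginal gas unit: 40.0
```

Note that at this price the marginal gas unit recovers none of its fixed costs, which is the cost-recovery problem mentioned above and one reason observed prices deviate from pure SRMC.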
In the context of power system simulation, a number of techniques have been applied to simulate imperfect competition in electrical power markets: Cournot competition Bertrand competition Supply function equilibrium Residual Supply Index analysis Various heuristics have also been applied to this problem. The aim is to provide realistic forecasts of power market prices, given the forecast supply-demand situation. Long-term optimization Power system long-term optimization focuses on optimizing the multi-year expansion and retirement plan for generation, transmission, and distribution facilities. The optimization problem will typically consider the long-term investment cash flow and a simplified version of OPF / UC (unit commitment), to make sure the power system operates in a secure and economic way. This area can be categorized as: Generation expansion optimization Transmission expansion optimization Generation-transmission expansion co-optimization Distribution network optimization Study specifications A well-defined power systems study requirement is critical to the success of any project, as it reduces the difficulty of selecting a qualified service provider and the right analysis software. The system study specification describes the project scope, analysis types, and the required deliverable. The study specification must be written to match the specific project and industry requirements and will vary based on the type of analysis. Power system simulation software Over the years, several power system simulation packages have been used for various analyses. The first software with a graphical user interface was built by the University of Manchester in 1974 and was called IPSA - Interactive Power Systems Analysis (now owned by TNEI Services Ltd). The recently reformatted cinefilm 'A Blueprint for Power', shot in 1979, shows how this software bridged the gap between user-friendly interfaces and the precision required for intricate network analyses. General Electric's MAPS (Multi-Area Production Simulation) is a production simulation model used by various Regional Transmission Organizations and Independent System Operators in the United States to plan for the economic impact of proposed electric transmission and generation facilities in FERC-regulated electric wholesale markets. Portions of the model may also be used for the commitment and dispatch phase (updated on 5 minute intervals) in operation of wholesale electric markets for RTO and ISO regions. Hitachi Energy's PROMOD is a similar software package. These ISO and RTO regions also utilize a GE software package called MARS (Multi-Area Reliability Simulation) to ensure the power system meets reliability criteria (a loss of load expectation (LOLE) of no greater than 0.1 days per year). Further, a GE software package called PSLF (Positive Sequence Load Flow), Siemens software packages called PSSE (Power System Simulation for Engineering) as well as PSS SINCAL (Siemens Network Calculator), and the Electrical Transient Analyzer Program (ETAP) by Operation Technology Inc. analyze load flow on the power system for short-circuits and stability during preliminary planning studies by RTOs and ISOs. References Electric power
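Returning to the unit commitment problem described earlier, the following brute-force sketch enumerates the binary on/off patterns for a hypothetical three-unit fleet over a single period. All unit data are invented; production studies use mixed-integer programming over thousands of units and time periods, with ramping and minimum up/down-time constraints.

```python
from itertools import product

# (name, min MW when on, max MW, no-load cost $/h, marginal cost $/MWh)
units = [("A", 100, 400, 500.0, 20.0),
         ("B", 50, 300, 300.0, 32.0),
         ("C", 20, 150, 100.0, 50.0)]
demand = 550.0

best = None
for on in product([0, 1], repeat=len(units)):          # every on/off pattern
    lo = sum(u[1] for u, s in zip(units, on) if s)
    hi = sum(u[2] for u, s in zip(units, on) if s)
    if not (lo <= demand <= hi):
        continue                                       # pattern cannot meet load
    # Pay no-load plus minimum-generation cost for committed units, then
    # dispatch the remaining load in merit order up to each unit's maximum.
    rest = demand - lo
    cost = sum(u[3] + u[4] * u[1] for u, s in zip(units, on) if s)
    for u, s in sorted(zip(units, on), key=lambda p: p[0][4]):
        if s:
            extra = min(u[2] - u[1], rest)
            cost += u[4] * extra
            rest -= extra
    if best is None or cost < best[0]:
        best = (cost, on)

print(f"cheapest commitment {best[1]} at ${best[0]:,.0f}/h")  # (1, 1, 0), $13,600/h
```

The exponential number of on/off patterns is exactly why the text notes that unit commitment gives rise to a large class of (mixed-integer) optimization problems.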
Power system simulation
[ "Physics", "Engineering" ]
2,518
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
22,268,226
https://en.wikipedia.org/wiki/Andrea%20Prosperetti
Andrea Prosperetti is the Distinguished Professor of Mechanical Engineering at the University of Houston and the Berkhoff Professor of Applied Physics at the University of Twente in the Netherlands, and was elected a member of the National Academy of Engineering in 2012 ("for contributions to the fundamentals and applications of multiphase flows"). He is known for his work in the field of multiphase flows, including bubble dynamics and cavitation. He was the editor-in-chief of the International Journal of Multiphase Flow and serves on the editorial board of the Annual Review of Fluid Mechanics. He completed his doctoral work in 1974 at the California Institute of Technology under the supervision of Milton Plesset (of the Rayleigh–Plesset equation and Møller–Plesset perturbation theory) and holds a B.S. in Physics from the Università di Milano, Italy (1968). Prosperetti was awarded the Fluid Dynamics Prize (the highest award in fluid mechanics) by the American Physical Society in 2003 "for breakthroughs in the theory of multiphase flows, the dynamics of bubble oscillations, underwater sound, and free-surface flows and for providing elegant explanations of paradoxical phenomena in these fields". In 2012, the Acoustical Society of America awarded him the Silver Medal in Physical Acoustics "for contributions to bubble dynamics and multiphase flow." In addition, Prosperetti won the 2014 EUROMECH Fluid Mechanics Prize (administered by the Council of the European Mechanics Society), the Lifetime Achievement Award in 2001 from the Japan Society of Multiphase Flow, and the Fluids Engineering Award in 2005 from the American Society of Mechanical Engineers. He is a fellow of the Acoustical Society of America, the American Physical Society, and the American Society of Mechanical Engineers. He has been a foreign member of the Royal Netherlands Academy of Arts and Sciences since 2000. He is the author of "Advanced Mathematics for Applications", a reference textbook for graduate-level engineers, and also of "Computational Methods for Multiphase Flows", both published by Cambridge University Press. References External links National Academy of Engineering Profile Personal Website Fluid dynamicists Johns Hopkins University faculty California Institute of Technology alumni Computational fluid dynamicists Fellows of the American Physical Society Fellows of the American Society of Mechanical Engineers Living people Members of the Royal Netherlands Academy of Arts and Sciences Members of the United States National Academy of Engineering Fellows of the Acoustical Society of America Year of birth missing (living people)
Andrea Prosperetti
[ "Chemistry" ]
505
[ "Fluid dynamicists", "Fluid dynamics" ]
22,269,179
https://en.wikipedia.org/wiki/Aster%20%28cell%20biology%29
An aster is a cellular structure shaped like a star, consisting of a centrosome and its associated microtubules during the early stages of mitosis in an animal cell. Asters do not form during mitosis in plants. Astral rays, composed of microtubules, radiate from the centrosphere and look like a cloud. Astral rays are one variant of microtubule which comes out of the centrosome; others include kinetochore microtubules and polar microtubules. During mitosis, there are five stages of cell division: Prophase, Prometaphase, Metaphase, Anaphase, and Telophase. During prophase, two aster-covered centrosomes migrate to opposite sides of the nucleus in preparation for mitotic spindle formation. During prometaphase there is fragmentation of the nuclear envelope and formation of the mitotic spindles. During metaphase, the kinetochore microtubules extending from each centrosome connect to the centromeres of the chromosomes. Next, during anaphase, the kinetochore microtubules pull the sister chromatids apart into individual chromosomes and pull them towards the centrosomes, located at opposite ends of the cell. This allows the cell to divide properly, with each daughter cell containing a full set of chromosomes. In some cells, the orientation of the asters determines the plane of division upon which the cell will divide. Astral microtubules Astral microtubules are a subpopulation of microtubules which only exist during and immediately before mitosis. They are defined as any microtubule originating from the centrosome which does not connect to a kinetochore. Astral microtubules develop in the actin skeleton and interact with the cell cortex to aid in spindle orientation. They are organized into radial arrays around the centrosomes. The turn-over rate of this population of microtubules is higher than that of any other population. The role of astral microtubules is assisted by dyneins specific to this role. These dyneins have their light chains (static portion) attached to the cell membrane, and their globular parts (dynamic portions) attached to the microtubules. The globular chains attempt to move towards the centrosome, but as they are bound to the cell membrane, this results in pulling the centrosomes towards the membrane, thus assisting cytokinesis. Astral microtubules are not required for the progression of mitosis, but they are required to ensure the fidelity of the process. The function of astral microtubules can be generally considered as determination of cell geometry. They are absolutely required for correct positioning and orientation of the mitotic spindle apparatus, and are thus involved in determining the cell division site based on the geometry and polarity of the cells. The maintenance of astral microtubules is dependent on centrosomal integrity. It is also dependent on several microtubule-associated proteins such as EB1 and adenomatous polyposis coli (APC). Growth of Microtubules Asters grow through nucleation and polymerization. At their minus ends, astral microtubules are nucleated (seeded) and anchored by the centrosome; polymerization occurs at the plus ends. Cortical dynein, a motor protein, moves along the microtubules of the cell and plays a key role in the growth and inhibition of aster microtubules. A dynein that is barrier-attached can both inhibit and trigger growth. References Ishihara, Keisuke, et al. "Physical basis of large microtubule aster growth." eLife, vol. 5, 2016.
Laan, Liedewij, et al. "Cortical Dynein Controls Microtubule Dynamics to Generate Pulling Forces That Position Microtubule Asters." Cell (Cambridge) 148.3 (2012): 502–514. See also Mitosis Centrosome Centriole Chromosome Cell biology Cell cycle Mitosis
Aster (cell biology)
[ "Biology" ]
905
[ "Cell biology", "Cell cycle", "Cellular processes", "Mitosis" ]
22,269,613
https://en.wikipedia.org/wiki/Prp24
Prp24 (precursor RNA processing, gene 24) is a protein that forms part of the pre-messenger RNA splicing process and aids the binding of U6 snRNA to U4 snRNA during the formation of spliceosomes. Found in eukaryotes ranging from yeast and other fungi to humans, Prp24 was initially discovered to be an important element of RNA splicing in 1989. Mutations in Prp24 were later discovered in 1991 to suppress mutations in U4 that resulted in cold-sensitive strains of yeast, indicating its involvement in the reformation of the U4/U6 duplex after the catalytic steps of splicing. Biological Role The process of spliceosome formation involves the U4 and U6 snRNPs associating and forming a di-snRNP in the cell nucleus. This di-snRNP then recruits another member (U5) to become a tri-snRNP. U6 must then dissociate from U4 to bond with U2 and become catalytically active. Once splicing has been done, U6 must dissociate from the spliceosome and bond back with U4 to restart the cycle. Prp24 has been shown to promote the binding of U4 and U6 snRNPs. Removing Prp24 results in the accumulation of free U4 and U6, and the subsequent addition of Prp24 regenerates U4/U6 and reduces the amount of free U4 and U6. Naked U6 snRNA is very compact and has little room to form base pairs with other RNA. However, when U6 snRNP associates with proteins such as Prp24, the structure is much more open, thus facilitating the binding to U4. Prp24 is not present in the U6/U4 duplex itself, and it has been suggested that Prp24 must leave the complex in order for proper base pairs to be formed. It has also been suggested that Prp24 may play a role in destabilizing U4/U6 in order for U6 to pair bases with U2. Structure Prp24 has a molecular weight of 50 kDa and has been shown to contain four RNA recognition motifs (RRMs) and a conserved 12-amino acid sequence at the C-terminus. RRMs 1 and 2 have been shown to be important for high-affinity binding of U6, while RRMs 3 and 4 bind at lower affinity sites on U6. The first three RRMs interact extensively with each other and contain canonical folds that contain a four-stranded beta-sheet and two alpha-helices. The electropositive surface of RRMs 1 and 2 is an RNA annealing domain, while the cleft between RRMs 1 and 2, including the beta-sheet face of RRM2, is a sequence-specific RNA binding site. The C-terminal motif is required for association with LSm proteins and contributes to substrate (U6) binding and not the catalytic rate of splicing. Interactions Prp24 interacts with the U6 snRNA via its RRMs. It has been shown through chemical modification testing that nucleotides 39–57 of U6 (40–43 in particular) are involved in binding Prp24. The LSm proteins are in a consistent configuration on the U6 RNA. It has been proposed that the LSm proteins and Prp24 interact both physically and functionally, and the C-terminal motif of Prp24 is important for this interaction. The binding of Prp24 to U6 is enhanced by the binding of LSm proteins to U6, as is binding of U4 and U6. It was revealed by electron microscopy that Prp24 may interact with the LSm protein ring at LSm2. Homologs Prp24 has a human homolog, SART3. SART3 is a tumor rejection antigen (SART3 stands for "squamous cell carcinoma antigen recognized by T cells, gene 3"). The RRMs 1 and 2 in yeast are similar to RRMs in human SART3. The C-terminal domain is also highly conserved from yeast to humans. This protein, like Prp24, interacts with the LSm proteins for the recycling of U6 into the U4/U6 snRNP.
It has been proposed that SART3 targets U6 to a Cajal body or a nuclear inclusion as the site of assembly of the U4/U6 snRNP. SART3 is located on chromosome 12, and a mutation in it is likely the cause of disseminated superficial actinic porokeratosis. References External links Biological Sciences at Lancaster University Explanation of pre-mRNA splicing Gene expression Molecular genetics Spliceosome
Prp24
[ "Chemistry", "Biology" ]
987
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
22,269,627
https://en.wikipedia.org/wiki/Pieri%27s%20formula
In mathematics, Pieri's formula, named after Mario Pieri, describes the product of a Schubert cycle by a special Schubert cycle in the Schubert calculus, or the product of a Schur polynomial by a complete symmetric function. In terms of Schur functions sλ indexed by partitions λ, it states that sμ · hr = Σλ sλ, where hr is a complete homogeneous symmetric polynomial and the sum is over all partitions λ obtained from μ by adding r elements, no two in the same column. By applying the ω involution on the ring of symmetric functions, one obtains the dual Pieri rule for multiplying an elementary symmetric polynomial with a Schur polynomial: sμ · er = Σλ sλ. The sum is now taken over all partitions λ obtained from μ by adding r elements, no two in the same row. Pieri's formula implies Giambelli's formula. The Littlewood–Richardson rule is a generalization of Pieri's formula giving the product of any two Schur functions. Monk's formula is an analogue of Pieri's formula for flag manifolds. References Symmetric functions
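A worked instance of the rule, as a routine check (not taken from the article itself): take μ = (2,1) and r = 2.

```latex
\[
  s_{(2,1)} \cdot h_2 \;=\; s_{(4,1)} + s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)}
\]
% The four partitions on the right are exactly those obtained from (2,1)
% by adding two boxes with no two added boxes in the same column; for
% example (2,1,1,1) is absent, because both added boxes would lie in the
% first column.
```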
Pieri's formula
[ "Physics", "Mathematics" ]
219
[ "Algebra", "Symmetric functions", "Symmetry" ]
22,269,939
https://en.wikipedia.org/wiki/Giambelli%27s%20formula
In mathematics, Giambelli's formula, named after Giovanni Giambelli, expresses Schubert classes as determinants in terms of special Schubert classes. It states that σλ = det(σλi+j−i)1≤i,j≤r, where σλ is the Schubert class of a partition λ = (λ1, λ2, ..., λr) and the entries σλi+j−i on the right-hand side are special Schubert classes (with the conventions σ0 = 1 and σk = 0 for k < 0). Giambelli's formula may be derived as a consequence of Pieri's formula. The Porteous formula is a generalization to morphisms of vector bundles over a variety. In the theory of symmetric functions, the same identity, known as the first Jacobi–Trudi identity, expresses Schur functions as determinants in terms of complete symmetric functions. There is also the dual second Jacobi–Trudi identity, which expresses Schur functions as determinants in terms of elementary symmetric functions. The corresponding identity also holds for Schubert classes. There is another Giambelli identity, expressing Schur functions as determinants of matrices whose entries are Schur functions corresponding to hook partitions contained within the same Young diagram. This too is valid for Schubert classes, as are all Schur function identities. For instance, hook partition Schur functions can be expressed bilinearly in terms of elementary and complete symmetric functions, and Schubert classes satisfy these same relations. See also Schubert calculus - includes examples References Symmetric functions
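For instance (a routine check, not from the article itself), for λ = (2,1) the first Jacobi–Trudi identity gives a 2 × 2 determinant:

```latex
\[
  s_{(2,1)}
  = \det\begin{pmatrix} h_2 & h_3 \\ h_0 & h_1 \end{pmatrix}
  = h_2 h_1 - h_3, \qquad h_0 = 1.
\]
% Entries follow h_{\lambda_i + j - i}: row i = 1 gives (h_2, h_3),
% row i = 2 gives (h_0, h_1).
```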
Giambelli's formula
[ "Physics", "Mathematics" ]
252
[ "Algebra", "Symmetric functions", "Symmetry" ]
22,270,008
https://en.wikipedia.org/wiki/Slit%20%28protein%29
Slit is a family of secreted extracellular matrix proteins which play an important signalling role in the neural development of most bilaterians (animals with bilateral symmetry). While lower animal species, including insects and nematode worms, possess a single Slit gene, humans, mice and other vertebrates possess three Slit homologs: Slit1, Slit2 and Slit3. Human Slits have been shown to be involved in certain pathological conditions, such as cancer and inflammation. The ventral midline of the central nervous system is a key place where axons can either decide to cross and laterally project or stay on the same side of the brain. The main function of Slit proteins is to act as midline repellents, preventing the crossing of longitudinal axons through the midline of the central nervous system of most bilaterian animal species, including mice, chickens, humans, insects, nematode worms and planarians. It also prevents the recrossing of commissural axons. Its canonical receptor is Robo, but it may have other receptors. The Slit protein is produced and secreted by cells within the floor plate (in vertebrates) or by midline glia (in insects) and diffuses outward. Slit/Robo signaling is important in pioneer axon guidance. Discovery Slit mutations were first discovered in the Nuesslein-Volhard/Wieschaus patterning screen, where they were seen to affect the external midline structures in the embryos of Drosophila melanogaster, also known as the common fruit fly. In this experiment, researchers screened for different mutations in D. melanogaster embryos that affected the neural development of axons in the central nervous system. They found that mutations in commissureless genes (Slit genes) led to the growth cones that typically cross the midline remaining on their own side. The findings from this screening suggest that Slit genes are responsible for repulsive signaling along the neuronal midline. Structure Slit1, Slit2, and Slit3 each have the same basic structure. A major identifying feature of the Slit protein is the four leucine-rich repeat (LRR) domains at the N-terminus. Slits are one of only two protein families that contain multiple LRR domains. These LRRs are followed by six repeats similar to epidermal growth factor (EGF) as well as a β-sandwich domain similar to laminin G. Directly after these sequences, invertebrates have one EGF repeat, whereas vertebrates have three EGF repeats. In each case, the EGF is followed by a C-terminal cystine knot (CT) domain. It is possible for Slits to be cleaved into N-terminal and C-terminal fragments at a presumed proteolytic site between the fifth and sixth EGFs in Drosophila Slit, Caenorhabditis elegans Slit, rat Slit1, rat Slit3 and human Slit2. LRR domains Slit LRR domains are thought to assist in controlling neurite outgrowth. The domains consist of five to seven LRRs, each with disulfide-rich cap segments. Each LRR motif contains a LXXLXLXXN sequence (where L = leucine, N = asparagine, X = any amino acid) which forms one strand of a parallel β-sheet on the concave face of the LRR domain, while the back side of the domain consists of irregular loops. The four domains of Slit are connected by short "linkers" which attach to the domains via a disulfide bridge, allowing the LRR region of Slit to remain very compact. Vertebrate homologs Slit1, Slit2, and Slit3 are all human homologs of the 'Slit' gene found in Drosophila.
Each of these genes encodes a secreted protein containing protein–protein interaction regions with leucine-rich repeats and EGFs. Slit2 is mainly expressed in the spinal cord, where it repels motor axons. Slit1 functions in the brain, and Slit3 in the thyroid. Both Slit1 and Slit2 are found in the murine postnatal septum as well as in the neocortex. Further, Slit2 participates in inhibiting leukocyte chemotaxis. In rats, Slit1 was found in the neurons of adult and fetal forebrains. This shows that Slit proteins in mammals most likely contribute to the process of forming and maintaining the endocrine and nervous systems through interactions between proteins. Slit3 is primarily expressed in the thyroid, in human umbilical vein endothelial cells (HUVECs), as well as in endothelial cells from the lung and diaphragm of the mouse. Slit3 interacts with Robo1 and Robo4. Function Guidance molecules Guidance molecules act as cues by carrying information to receptive cells, delivering information that tells the cell and its components how to align properly. Slit proteins behave as such when working in axonal guidance during the development of the nervous system. Similarly, these proteins help to orchestrate the development of various networks of tissues throughout the body. This role, also described as cell migration, is the primary role of Slit when interacting with Robo. It is most commonly found acting in neurons, endothelial cells and cancer cells. Axon guidance Chemorepellents help to direct growing axons toward the correct regions by steering them away from inappropriate regions. Slit genes, as well as their Roundabout receptors, act as chemorepellents by helping prevent the wrong types of axons from crossing the midline of the central nervous system during establishment or remodeling of the neural circuits. The binding of Slit to any member of the Roundabout receptor family results in axon repulsion through changes in the axon growth cone. The resulting repelling of axons is collectively termed axonal guidance. Slit1 and Slit2 have both been seen to collapse and repel olfactory axons. Further evidence suggests that Slit also directs interneurons, particularly acting in the cortex. Positive effects are also correlated with Slits. Slit2 begins the formation of axon branches through nerve growth factor genes of the dorsal root ganglia. Organogenesis Several studies have shown that the interaction of Slit with its receptors is crucial in regulating the processes involved with the formation of organs. As previously discussed, these interactions play a key role in cell migration. Not surprisingly then, these genes have been found to be expressed during the development of tightly regulated tissues, such as the heart, lungs, gonads, and ovaries. For example, in early development of the heart tube in Drosophila, Slit and two of its Robo receptors guide migrating cardioblasts and pericardial cells in the dorsal midline. In addition, research on mice has shown that Slit3 and its interaction with Robo1 may be crucial to the development and maturation of lung tissue. Similarly, the expression of Slit3 is upregulated when aligning airway epithelium with endothelium. Due to its regulating function in tissue development, absence or mutations in the expression of these genes can result in abnormalities of these tissues. Several studies in mice and other vertebrates have shown that this deficit results in death almost immediately after birth. 
Angiogenesis The Slit2 protein has recently been discovered to be associated with the development of new blood vessels from pre-existing vessels, or angiogenesis. Recent research has debated whether this gene inhibits or stimulates this process. There is significant evidence that both are true, depending on the context. It has been concluded that the role of Slit in this process depends on which receptor it binds, the cellular context of its target cells, and/or other environmental factors. Slit2 has been implicated in promoting angiogenesis in mice (both in vitro and in vivo), in the human placenta, and in tumorigenesis. Clinical importance Because of their part in forebrain development, during which they contribute to axonal guidance and guiding signals in the movement of cortical interneurons, Slit-Robo signal transduction mechanisms could possibly be used in therapy and treatment of neurological disorders and certain types of cancer. Procedures have been found in which Slit genes allow for precise control over vascular guidance cues influencing the organization of blood vessels during development. Slit also plays a large role in angiogenesis. With increased knowledge of this relationship, treatments could be developed for complications with the development of embryonic vasculature, female reproductive cycling, tumor growth and metastasis, ischemic cardiovascular diseases, or ocular disorders. Cancer Due to their pivotal role in controlling cell migration, abnormalities or absences in the expression of Slit1, Slit2 and Slit3 are associated with a variety of cancers. In particular, Slit-Robo interaction has been implicated in reproductive and hormone-dependent cancers, particularly in females. Under normal function, these genes act as tumor suppressors. Therefore, deletion or lack of expression of these genes is associated with tumorigenesis, particularly tumors within the epithelium of the ovaries, endometrium, and cervix. Samples of surface epithelium from cancerous ovaries have shown decreased expression of Slit2 and Slit3. In addition, absence of these genes allows the migration of cancer cells and thus is associated with increased cancer progression and increased metastasis. The role of these genes in cancer development and treatment is steadily being unraveled, and it is proving increasingly complex. References Developmental neuroscience Protein families
Slit (protein)
[ "Biology" ]
1,981
[ "Protein families", "Protein classification" ]
22,272,874
https://en.wikipedia.org/wiki/Covalent%20organic%20framework
Covalent organic frameworks (COFs) are a class of porous polymers that form two- or three-dimensional structures through reactions between organic precursors resulting in strong, covalent bonds to afford porous, stable, and crystalline materials. COFs emerged as a field from the overarching domain of organic materials as researchers optimized both synthetic control and precursor selection. These improvements to coordination chemistry enabled non-porous and amorphous organic materials such as organic polymers to advance into the construction of porous, crystalline materials with rigid structures that granted exceptional material stability in a wide range of solvents and conditions. Through the development of reticular chemistry, precise synthetic control was achieved and resulted in ordered, nano-porous structures with highly preferential structural orientation and properties which could be synergistically enhanced and amplified. With judicious selection of COF secondary building units (SBUs), or precursors, the final structure can be predetermined and modified with exceptional control, enabling fine-tuning of emergent properties. This level of control allows COF materials to be designed, synthesized, and utilized in various applications, often with performance metrics matching or surpassing those of current state-of-the-art approaches. History While at the University of Michigan, Omar M. Yaghi (currently at UC Berkeley) and Adrien P. Cote published the first paper on COFs in 2005, reporting a series of 2D COFs. They reported the design and successful synthesis of COFs by condensation reactions of phenyl diboronic acid (C6H4[B(OH)2]2) and hexahydroxytriphenylene (C18H6(OH)6). Powder X-ray diffraction studies of the highly crystalline products having empirical formulas (C3H2BO)6·(C9H12)1 (COF-1) and C9H4BO2 (COF-5) revealed 2-dimensional expanded porous graphitic layers that have either staggered conformation (COF-1) or eclipsed conformation (COF-5). Their crystal structures are entirely held together by strong bonds between B, C, and O atoms to form rigid porous architectures with pore sizes ranging from 7 to 27 angstroms. COF-1 and COF-5 exhibit high thermal stability (up to 500 to 600 °C), permanent porosity, and high surface areas (711 and 1590 square meters per gram, respectively). The synthesis of 3D COFs had been hindered by longstanding practical and conceptual challenges until it was first achieved in 2007 by Omar M. Yaghi and colleagues, work which received the Newcomb Cleveland Prize. The research team designed and synthesized the first 3D COFs, COF-103 and COF-108, helping to open up this new field. Unlike 0D and 1D systems, which are soluble, the insolubility of 2D and 3D structures precludes the use of stepwise synthesis, making their isolation in crystalline form very difficult. This first challenge, however, was overcome by judiciously choosing building blocks and using reversible condensation reactions to crystallize COFs. Structure Porous crystalline solids consist of secondary building units (SBUs) which assemble to form a periodic and porous framework. An almost infinite number of frameworks can be formed through various SBU combinations, leading to unique material properties for applications in separations, storage, and heterogeneous catalysis. Types of porous crystalline solids include zeolites, metal-organic frameworks (MOFs), and covalent organic frameworks (COFs). Zeolites are microporous, aluminosilicate minerals commonly used as commercial adsorbents. 
MOFs are a class of porous polymeric materials consisting of metal ions linked together by organic bridging ligands; they are a development at the interface between molecular coordination chemistry and materials science. COFs are another class of porous polymeric materials: porous, crystalline networks built entirely from covalent bonds that usually have rigid structures, exceptional thermal stabilities (to temperatures up to 600 °C), stability in water, and low densities. They exhibit permanent porosity with specific surface areas surpassing those of well-known zeolites and porous silicates. Secondary building units The term 'secondary building unit' has been used for some time to describe conceptual fragments of zeolites, which can be compared to the bricks used to build a house; in the context of this page it refers to the geometry of the units defined by the points of extension. Reticular synthesis Reticular synthesis enables facile bottom-up synthesis of the framework materials to introduce precise perturbations in chemical composition, resulting in the highly controlled tunability of framework properties. Through a bottom-up approach, a material is built from atomic or molecular components synthetically, as opposed to a top-down approach, which forms a material from the bulk through approaches such as exfoliation, lithography, or other varieties of post-synthetic modification. The bottom-up approach is especially advantageous with respect to materials such as COFs because the synthetic methods are designed to directly result in an extended, highly crosslinked framework that can be tuned with exceptional control at the nanoscale level. Geometrical and dimensional principles govern the framework's resulting topology as the SBUs combine to form predetermined structures. This level of synthetic control has also been termed "molecular engineering", following the concept coined by Arthur R. von Hippel in 1956. It has been established in the literature that, when integrated into an isoreticular framework, such as a COF, properties from monomeric compounds can be synergistically enhanced and amplified. COF materials possess the unique ability for bottom-up reticular synthesis to afford robust, tunable frameworks that synergistically enhance the properties of the precursors, which, in turn, offers many advantages in terms of improved performance in different applications. As a result, the COF material is highly modular and tuned efficiently by varying the SBUs' identity, length, and functionality depending on the desired property change on the framework scale. Thus, diverse functionality can be introduced directly into the framework scaffold, allowing for a variety of functions which would be cumbersome, if not impossible, to achieve through a top-down method, such as lithographic approaches or chemical-based nanofabrication. Through reticular synthesis, it is possible to molecularly engineer modular, framework materials with highly porous scaffolds that exhibit unique electronic, optical, and magnetic properties while simultaneously integrating desired functionality into the COF skeleton. Reticular synthesis is different from retrosynthesis of organic compounds, because the structural integrity and rigidity of the building blocks in reticular synthesis remain unaltered throughout the construction process—an important aspect that could help to fully realize the benefits of design in crystalline solid-state frameworks. 
Similarly, reticular synthesis should be distinguished from supramolecular assembly, because in the former, building blocks are linked by strong bonds throughout the crystal. Synthetic chemistry Reticular synthesis was used by Yaghi and coworkers in 2005 to construct the first two COFs reported in the literature: COF-1, using a dehydration reaction of benzenediboronic acid (BDBA), and COF-5, via a condensation reaction between hexahydroxytriphenylene (HHTP) and BDBA. These framework scaffolds were interconnected through the formation of boroxine and boronate linkages, respectively, using solvothermal synthetic methods. COF linkages Since Yaghi and coworkers' seminal work in 2005, COF synthesis has expanded to include a wide range of organic connectivity such as boron-, nitrogen-, and other atom-containing linkages. The linkages described here are not comprehensive, as other COF linkages exist in the literature, especially for the formation of 3D COFs. Boron condensation The most popular COF synthesis route is a boron condensation reaction, which is a molecular dehydration reaction between boronic acids. In the case of COF-1, three boronic acid molecules converge to form a planar six-membered B3O3 (boroxine) ring with the elimination of three water molecules. Triazine-based trimerization Another class of high-performance polymer frameworks with regular porosity and high surface area is based on triazine materials, which can be obtained by a dynamic trimerization reaction of simple, cheap, and abundant aromatic nitriles under ionothermal conditions (molten zinc chloride at high temperature, 400 °C). CTF-1 is a good example of this chemistry. Imine condensation The imine condensation reaction, which eliminates water (exemplified by reacting aniline with benzaldehyde using an acid catalyst), can be used as a synthetic route to reach a new class of COFs. The 3D COF called COF-300 and the 2D COF named TpOMe-DAQ are good examples of this chemistry. When 1,3,5-triformylphloroglucinol (TFP) is used as one of the SBUs, two complementary tautomerizations occur (an enol to keto and an imine to enamine) which result in a β-ketoenamine moiety as depicted in the DAAQ-TFP framework. Both DAAQ-TFP and TpOMe-DAQ COFs are stable in acidic aqueous conditions and contain the redox-active linker 2,6-diaminoanthraquinone, which enables these materials to reversibly store and release electrons within a characteristic potential window. Consequently, both of these COFs have been investigated as electrode materials for potential use in supercapacitors. Solvothermal synthesis The solvothermal approach is the most commonly used in the literature but typically requires long reaction times, due to the insolubility of the organic SBUs in nonorganic media and the time necessary to reach thermodynamic COF products. Templated synthesis Morphological control on the nanoscale is still limited, as COFs lack synthetic control in higher dimensions due to the lack of dynamic chemistry during synthesis. To date, researchers have attempted to establish better control through different synthetic methods such as solvothermal synthesis, interface-assisted synthesis, and solid templation, as well as seeded growth. In templated synthesis, one of the precursors is first deposited onto the solid support, followed by the introduction of the second precursor in vapor form. This results in the deposition of the COF as a thin film on the solid support. 
Properties Porosity A defining advantage of COFs is the exceptional porosity that results from the substitution of analogous SBUs of varying sizes. Pore sizes range from 7 to 23 Å and feature a diverse range of shapes and dimensionalities that remain stable during the evacuation of solvent. The rigid scaffold of the COF structure enables the material to be evacuated of solvent and retain its structure, resulting in high surface areas as seen by Brunauer–Emmett–Teller analysis. This high surface-area-to-volume ratio and remarkable stability enable COFs to serve as exceptional materials for gas storage and separation. Crystallinity Several COF single crystals have been synthesized to date, and a variety of techniques are employed to improve the crystallinity of COFs. The use of modulators, monofunctional versions of the precursors, serves to slow COF formation and allows a more favorable balance between kinetic and thermodynamic control, thereby enabling crystalline growth. This was employed by Yaghi and coworkers for 3D imine-based COFs (COF-300, COF-303, LZU-79, and LZU-111). However, the vast majority of COFs are not able to crystallize into single crystals but instead are insoluble powders. The crystallinity of these polycrystalline materials can be improved by tuning the reversibility of the linkage formation to allow for corrective particle growth and self-healing of defects that arise during COF formation. Conductivity Integration of SBUs into a covalent framework results in the synergistic emergence of conductivities much greater than the monomeric values. The nature of the SBUs can improve conductivity. Through the use of highly conjugated linkers throughout the COF scaffold, the material can be engineered to be fully conjugated, enabling high charge carrier density as well as through- and in-plane charge transport. For instance, Mirica and coworkers synthesized a COF material (NiPc-Pyr COF) from nickel phthalocyanine (NiPc) and pyrene organic linkers that had a conductivity of 2.51 x 10−3 S/m, which was several orders of magnitude larger than that of the undoped molecular NiPc, 10−11 S/m. A similar COF structure made by Jiang and coworkers, CoPc-Pyr COF, exhibited a conductivity of 3.69 x 10−3 S/m. In both previously mentioned COFs, the 2D lattice allows for full π-conjugation in the x and y directions as well as π-conduction along the z axis, due to the fully conjugated, aromatic scaffold and π-π stacking, respectively. Emergent electrical conductivity in COF structures is especially important for applications such as catalysis and energy storage, where quick and efficient charge transport is required for optimal performance. Characterization A wide range of characterization methods exists for COF materials. For the few COF single crystals synthesized to date, X-ray diffraction (XRD) is a powerful tool capable of determining COF crystal structure. The majority of COF materials suffer from decreased crystallinity, so powder X-ray diffraction (PXRD) is used instead. In conjunction with simulated powder packing models, PXRD can determine COF crystal structure. In order to verify and analyze COF linkage formation, various techniques can be employed, such as infrared (IR) spectroscopy and nuclear magnetic resonance (NMR) spectroscopy. 
Comparison of precursor and COF IR spectra enables verification that certain key bonds present in the COF linkages appear and that peaks of precursor functional groups disappear. In addition, solid-state NMR enables probing of linkage formation as well, and is well suited for large, insoluble materials like COFs. Gas adsorption-desorption studies quantify the porosity of the material via calculation of the Brunauer–Emmett–Teller (BET) surface area and pore diameter from gas adsorption isotherms. Electron imaging techniques such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can resolve surface structure and morphology, and microstructural information, respectively. Scanning tunneling microscopy (STM) and atomic force microscopy (AFM) have also been used to characterize COF microstructural information. Additionally, methods like X-ray photoelectron spectroscopy (XPS), inductively coupled plasma mass spectrometry (ICP-MS), and combustion analysis can be used to identify elemental composition and ratios. Applications Gas storage and separation Due to the exceptional porosity of COFs, they have been used extensively in the storage and separation of gases such as hydrogen and methane. Hydrogen storage Omar M. Yaghi and William A. Goddard III reported COFs as exceptional hydrogen storage materials. They predicted, by grand canonical Monte Carlo (GCMC) simulations as a function of temperature and pressure, that the highest excess H2 uptakes at 77 K are 10.0 wt % at 80 bar for COF-105, and 10.0 wt % at 100 bar for COF-108, which have higher surface area and free volume. This is the highest value reported for associative H2 storage of any material. Thus 3D COFs are among the most promising new candidates in the quest for practical H2 storage materials. In 2012, the lab of William A. Goddard III reported the uptake for COF-102, COF-103, and COF-202 at 298 K, and they also proposed new strategies to obtain higher interaction with H2. Such a strategy consists of metalating the COF with alkali metals such as Li. These complexes composed of Li, Na and K with benzene ligands (such as 1,3,5-benzenetribenzoate, the ligand used in MOF-177) have been synthesized by Krieck et al., and Goddard showed that THF is important to their stability. If metalation with alkali metals is performed on the COFs, Goddard et al. calculated that some COFs can reach the 2010 DOE gravimetric target of 4.5 wt % in delivery units at 298 K: COF-102-Li (5.16 wt %), COF-103-Li (4.75 wt %), COF-102-Na (4.75 wt %) and COF-103-Na (4.72 wt %). COFs also perform better in delivery units than MOFs, because the best volumetric performance is for COF-102-Na (24.9), COF-102-Li (23.8), COF-103-Na (22.8), and COF-103-Li (21.7), all in delivery g H2/L units for 1–100 bar. These are the highest gravimetric molecular hydrogen uptakes for a porous material under these thermodynamic conditions. Methane storage Omar M. Yaghi and William A. Goddard III also reported COFs as exceptional methane storage materials. The best COF in terms of total volume of CH4 per unit volume of COF adsorbent is COF-1, which can store 195 v/v at 298 K and 30 bar, exceeding the U.S. Department of Energy target for CH4 storage of 180 v/v at 298 K and 35 bar. The best COFs on a delivery amount basis (volume adsorbed from 5 to 100 bar) are COF-102 and COF-103 with values of 230 and 234 v(STP: 298 K, 1.01 bar)/v, respectively, making these promising materials for practical methane storage. 
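The delivery amounts quoted above are differences between the uptake at the storage pressure and the residual uptake at the discharge pressure. A minimal sketch of that bookkeeping follows; the single-site Langmuir isotherm and all parameters are assumptions for illustration, not data for any real COF.

```python
# Illustrative only: deliverable methane capacity from an adsorption
# isotherm. The Langmuir form and the parameters below are assumptions,
# not measurements for any specific COF.

def langmuir_uptake(p_bar, q_sat, b):
    """Volumetric uptake q(p) = q_sat * b * p / (1 + b * p), in v(STP)/v."""
    return q_sat * b * p_bar / (1.0 + b * p_bar)

def deliverable(q_sat, b, p_store=100.0, p_discharge=5.0):
    """Usable capacity between storage and discharge pressures (v/v)."""
    return langmuir_uptake(p_store, q_sat, b) - langmuir_uptake(p_discharge, q_sat, b)

q_sat, b = 300.0, 0.05  # hypothetical saturation uptake (v/v) and affinity (1/bar)
print(f"uptake at 100 bar: {langmuir_uptake(100.0, q_sat, b):.0f} v/v")
print(f"uptake at   5 bar: {langmuir_uptake(5.0, q_sat, b):.0f} v/v")
print(f"deliverable      : {deliverable(q_sat, b):.0f} v/v")
```

The same arithmetic shows why weak low-pressure binding helps: a smaller affinity lowers the residual uptake at 5 bar more than it lowers the uptake at 100 bar, increasing the difference.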
More recently, new COFs with better delivery amounts have been designed in the lab of William A. Goddard III, and they have been shown to be stable and to surpass the DOE target on a delivery basis. COF-103-Eth-trans and COF-102-Ant are found to exceed the DOE target of 180 v(STP)/v at 35 bar for methane storage. The authors reported that using thin vinyl bridging groups aids performance by minimizing the methane–COF interaction at low pressure. Gas separation In addition to storage, COF materials are exceptional at gas separation. For instance, COFs like the imine-linked COF LZU1 and the azine-linked COF ACOF-1 were used as a bilayer membrane for the selective separation of the following mixtures: H2/CO2, H2/N2, and H2/CH4. The COFs outperformed molecular sieves due to the inherent thermal and operational stability of the structures. It has also been shown that COFs inherently act as adsorbents, adhering to the gaseous molecules to enable storage and separation. Optical properties A highly ordered π-conjugated TP-COF, consisting of pyrene and triphenylene functionalities alternately linked in a mesoporous hexagonal skeleton, is highly luminescent, harvests a wide wavelength range of photons, and allows energy transfer and migration. Furthermore, TP-COF is electrically conductive and capable of repetitive on–off current switching at room temperature. Porosity/surface-area effects Most studies to date have focused on the development of synthetic methodologies with the aim of maximizing pore size and surface area for gas storage. This means that other functions of COFs have not yet been well explored; beyond storage, COFs can be used as catalysts, for gas separation, and more. Carbon capture In 2015 the use of highly porous, catalyst-decorated COFs for converting carbon dioxide into carbon monoxide was reported. MOFs under solvent-free conditions can also be used for catalytic activity in the cycloaddition of CO2 and epoxides into cyclic organic carbonates, with enhanced catalyst recyclability. Sensing Due to defining molecule-framework interactions, COFs can be used as chemical sensors in a wide range of environments and applications. Properties of the COF change when their functionalities interact with various analytes, enabling the materials to serve as devices in various conditions: as chemiresistive sensors, as well as electrochemical sensors for small molecules. Catalysis Due to the ability to introduce diverse functionality into COFs' structure, catalytic sites can be fine-tuned in conjunction with other advantageous properties like conductivity and stability to afford efficient and selective catalysts. COFs have been used as heterogeneous catalysts in organic, electrochemical, as well as photochemical reactions. Electrocatalysis COFs have been studied as non-metallic electrocatalysts for energy-related catalysis, including carbon dioxide electro-reduction and the water splitting reaction. However, such research is still at a very early stage. Most efforts have focused on solving key issues such as conductivity and stability in electrochemical processes. Energy storage A few COFs possess the stability and conductivity necessary to perform well in energy storage applications like lithium-ion batteries and various other metal-ion batteries and cathodes. Water filtration A prototype 2-nanometer-thick COF layer on a graphene substrate was used to filter dye from industrial wastewater. Once full, the COF can be cleaned and reused. 
Pharmaceutical drug delivery A 3D COF was created, characterised by an interconnected mesoporous scaffold that showed effective drug loading and release in a simulated body fluid environment, making it useful as a nanocarrier for pharmaceutical drugs. See also Jose L. Mendoza-Cortes Reticular chemistry Conjugated microporous polymer Omar M. Yaghi Metal-organic framework Zeolite Hydrogen-bonded organic framework References External links Welcome to the Yaghi Laboratory Website Porous polymers
Covalent organic framework
[ "Chemistry", "Materials_science", "Engineering" ]
4,731
[ "Porous polymers", "Porous media", "Polymer chemistry", "Materials science" ]
22,275,273
https://en.wikipedia.org/wiki/Metachronal%20rhythm
A metachronal rhythm or metachronal wave refers to wavy movements produced by the sequential action (as opposed to synchronized action) of structures such as cilia, segments of worms, or legs. These movements produce the appearance of a travelling wave. A Mexican wave is a large-scale example of a metachronal wave. This pattern is found widely in nature, for example on the cilia of many aquatic organisms such as ctenophores, molluscs and ciliates, as well as on the epithelial surfaces of many body organs. Individual cilia, when part of a metachronal wave used for protist locomotion, beat in a pattern similar to the planar stroke of a flagellum. The difference is that the recovery stroke is at 90 degrees to the power stroke, so that the cilia avoid hitting each other. Metachronal rhythms may be seen in the coordinated movements of the legs of millipedes and other multi-legged land invertebrates, as well as in the coordinated movements of social insects. Such metachronal motion has been shown to enhance fluid transport properties in natural cilia. Metachronal motion has also been replicated in synthetic microfluidic systems using magnetic filaments. See also Beta movement Phi phenomenon References External links Metachronal swimming Cilia Mathematical model of millipede gaits Animal locomotion Waves Articles containing video clips
Metachronal rhythm
[ "Physics", "Biology" ]
292
[ "Animal locomotion", "Physical phenomena", "Animals", "Behavior", "Waves", "Motion (physics)", "Ethology" ]
22,275,501
https://en.wikipedia.org/wiki/Lexitropsin
Lexitropsins are members of a family of semi-synthetic DNA-binding ligands. They are structural analogs of the natural antibiotics netropsin and distamycin. Antibiotics of this group can bind in the minor groove of DNA with differing sequence selectivity. Lexitropsins form complexes with DNA with 1:1 and 2:1 stoichiometry. Ligands with high sequence selectivity have been obtained based on the 2:1 complexes. See also Hoechst 33258 Pentamidine DNA binding ligand Single-strand binding protein Comparison of nucleic acid simulation software References Molecular biology DNA-binding substances
Lexitropsin
[ "Chemistry", "Biology" ]
132
[ "Biochemistry", "DNA-binding substances", "Genetics techniques", "Molecular biology" ]
22,276,716
https://en.wikipedia.org/wiki/Apparent%20molar%20property
In thermodynamics, an apparent molar property of a solution component in a mixture or solution is a quantity defined with the purpose of isolating the contribution of each component to the non-ideality of the mixture. It shows the change in the corresponding solution property (for example, volume) per mole of that component added, when all of that component is added to the solution. It is described as apparent because it appears to represent the molar property of that component in solution, provided that the properties of the other solution components are assumed to remain constant during the addition. However this assumption is often not justified, since the values of apparent molar properties of a component may be quite different from its molar properties in the pure state. For instance, the volume of a solution containing two components identified as solvent and solute is given by

$V = V_0 + n_1\,{}^{\phi}V_1 = n_0 V_0^{\circ} + n_1\,{}^{\phi}V_1,$

where $V_0 = n_0 V_0^{\circ}$ is the volume of the pure solvent before adding the solute and $V_0^{\circ}$ its molar volume (at the same temperature and pressure as the solution), $n_0$ is the number of moles of solvent, ${}^{\phi}V_1$ is the apparent molar volume of the solute, and $n_1$ is the number of moles of the solute in the solution. This equation serves as the definition of ${}^{\phi}V_1$: the first term is equal to the volume of the same quantity of solvent with no solute, and the second term is the change of volume on addition of the solute. Dividing this relation by the molar amount of one component yields a relation between the apparent molar property of a component and the mixing ratio of the components. ${}^{\phi}V_1$ may then be considered as the molar volume of the solute if it is assumed that the molar volume of the solvent is unchanged by the addition of solute. However this assumption must often be considered unrealistic, as shown in the examples below, so that ${}^{\phi}V_1$ is described only as an apparent value. An apparent molar quantity can be similarly defined for the component identified as solvent, ${}^{\phi}V_0$. Some authors have reported apparent molar volumes of both (liquid) components of the same solution. This procedure can be extended to ternary and multicomponent mixtures. Apparent quantities can also be expressed using mass instead of number of moles. This expression produces apparent specific quantities, like the apparent specific volume

${}^{\phi}v_1 = \frac{V - m_0 v_0^{\circ}}{m_1},$

where the specific quantities are denoted with small letters ($m_0$ and $m_1$ are the masses of solvent and solute, and $v_0^{\circ}$ is the specific volume of the pure solvent). Apparent (molar) properties are not constants (even at a given temperature), but are functions of the composition. At infinite dilution, an apparent molar property and the corresponding partial molar property become equal. Some apparent molar properties that are commonly used are apparent molar enthalpy, apparent molar heat capacity, and apparent molar volume. Relation to molality The apparent (molal) volume of a solute can be expressed as a function of the molality $b$ of that solute (and of the densities of the solution and solvent). The volume of solution per mole of solute is

$\frac{V}{n_1} = \frac{1 + b M_1}{b\,\rho},$

with the molality $b$ in moles of solute per kilogram of solvent, the molar mass $M_1$ in kg/mol, and $\rho$ the density of the solution. Subtracting the volume of pure solvent per mole of solute, $1/(b\,\rho_0^{\circ})$, gives the apparent molal volume:

${}^{\phi}V_1 = \frac{1}{b}\left(\frac{1 + b M_1}{\rho} - \frac{1}{\rho_0^{\circ}}\right) = \frac{M_1}{\rho} - \frac{\rho - \rho_0^{\circ}}{b\,\rho\,\rho_0^{\circ}}.$

For more solutes the above equality is modified with the mean molar mass $\bar M$ of the solutes, as if they were a single solute with molality $b_T = \sum_i b_i$:

${}^{\phi}V_{1\ldots n} = \frac{1}{b_T}\left(\frac{1 + b_T \bar M}{\rho} - \frac{1}{\rho_0^{\circ}}\right).$

The sum of the products of molalities and apparent molar volumes of the solutes in their binary solutions equals the product of the sum of molalities of the solutes and the apparent molar volume in the ternary or multicomponent solution mentioned above:

$\sum_i b_i\,{}^{\phi}V_i = b_T\,{}^{\phi}V_{1\ldots n}.$
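As a concrete illustration of the density relation above, here is a short Python sketch; the numbers are placeholders loosely modeled on dilute aqueous NaCl, not measured data.

```python
# Illustrative sketch (assumptions mine) of the molality relation above:
# apparent molar volume of a single solute from measured densities.
# Units: b in mol/kg, M in kg/mol, densities in kg/m^3.

def apparent_molar_volume(b, M, rho, rho0):
    """phi_V = M/rho - (rho - rho0)/(b * rho * rho0), in m^3/mol."""
    return M / rho - (rho - rho0) / (b * rho * rho0)

# Hypothetical numbers, placeholders rather than experimental values.
b, M = 0.1, 0.05844          # molality (mol/kg) and molar mass (kg/mol)
rho, rho0 = 1002.2, 998.2    # solution and pure-water densities (kg/m^3)
phi_V = apparent_molar_volume(b, M, rho, rho0)
print(f"{phi_V * 1e6:.1f} cm^3/mol")  # same order as the ~16.6 quoted below
```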
Relation to mixing ratio A relation between the apparent molar property of a component of a mixture and the molar mixing ratio can be obtained by dividing the definition relation by the number of moles of one component. For the solute, this gives the following relation:

${}^{\phi}V_1 = \left(\frac{V}{n_0} - V_0^{\circ}\right)\frac{n_0}{n_1}.$

Relation to partial (molar) quantities Note the contrasting definitions between partial molar quantity and apparent molar quantity: in the case of partial molar volumes $\bar V_i$, defined by the partial derivatives

$\bar V_i = \left(\frac{\partial V}{\partial n_i}\right)_{T,P,n_{j\neq i}},$

one can write $V = n_0 \bar V_0 + n_1 \bar V_1$, and so this decomposition always holds. In contrast, in the definition of apparent molar volume, the molar volume of the pure solvent, $V_0^{\circ}$, is used instead, which can be written as

$V_0^{\circ} = \left(\frac{\partial V}{\partial n_0}\right)\bigg|_{n_1 = 0}$

for comparison. In other words, we assume that the volume of the solvent does not change, and we use the partial molar volume where the number of moles of the solute is exactly zero ("the molar volume"). Thus, in the defining expression for apparent molar volume,

$V = n_0 V_0^{\circ} + n_1\,{}^{\phi}V_1,$

the term $n_0 V_0^{\circ}$ is attributed to the pure solvent, while the "leftover" excess volume, $V - n_0 V_0^{\circ}$, is considered to originate from the solute. At high dilution, with $n_1/n_0 \rightarrow 0$, we have $\bar V_0 \rightarrow V_0^{\circ}$, and so the apparent molar volume and the partial molar volume of the solute also converge: ${}^{\phi}V_1 \rightarrow \bar V_1$. Quantitatively, the relation between partial molar properties and the apparent ones can be derived from the definition of the apparent quantities and of the molality. For volume,

$\bar V_1 = {}^{\phi}V_1 + b\left(\frac{\partial\,{}^{\phi}V_1}{\partial b}\right)_{T,P}.$

Relation to the activity coefficient of an electrolyte and its solvation shell number The ratio $r_a$ between the apparent molar volume of a dissolved electrolyte in a concentrated solution and the molar volume of the solvent (water) can be linked to the statistical component of the activity coefficient of the electrolyte and its solvation shell number $h$; the expression involves $\nu$, the number of ions due to dissociation of the electrolyte, and the molality $b$ as above. Examples Electrolytes The apparent molar volume of a salt is usually less than the molar volume of the solid salt. For instance, solid NaCl has a volume of 27 cm3 per mole, but the apparent molar volume at low concentrations is only 16.6 cm3/mol. In fact, some aqueous electrolytes have negative apparent molar volumes: NaOH −6.7, LiOH −6.0, and Na2CO3 −6.7 cm3/mol. This means that their solutions in a given amount of water have a smaller volume than the same amount of pure water. (The effect is small, however.) The physical reason is that nearby water molecules are strongly attracted to the ions, so that they occupy less space. Alcohol Another example in which the apparent molar volume of the second component is less than its molar volume as a pure substance is the case of ethanol in water. For example, at 20 mass percent ethanol, the solution has a volume of 1.0326 liters per kg at 20 °C, while pure water is 1.0018 L/kg (1.0018 cm3/g). The apparent volume of the added ethanol is 1.0326 L − 0.8 kg × 1.0018 L/kg = 0.2312 L. The number of moles of ethanol is 0.2 kg / (0.04607 kg/mol) = 4.341 mol, so that the apparent molar volume is 0.2312 L / 4.341 mol = 0.0532 L/mol = 53.2 cm3/mol (1.16 cm3/g). However pure ethanol has a molar volume at this temperature of 58.4 cm3/mol (1.27 cm3/g). If the solution were ideal, its volume would be the sum of the volumes of the unmixed components. The volume of 0.2 kg pure ethanol is 0.2 kg × 1.27 L/kg = 0.254 L, and the volume of 0.8 kg pure water is 0.8 kg × 1.0018 L/kg = 0.80144 L, so the ideal solution volume would be 0.254 L + 0.80144 L = 1.055 L. The nonideality of the solution is reflected by a slight decrease (roughly 2.2%, 1.0326 rather than 1.055 L/kg) in the volume of the combined system upon mixing. 
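The ethanol–water arithmetic just worked out can be packaged as a short script; the values are taken directly from the example above.

```python
# The ethanol-water arithmetic above as a short script (20 mass % ethanol
# at 20 degrees C; values taken from the worked example).

m_eth, m_w = 0.2, 0.8   # kg of ethanol and water per kg of solution
v_solution = 1.0326     # L per kg of solution
v_water = 1.0018        # L per kg of pure water
M_eth = 0.04607         # kg/mol

v_apparent = v_solution - m_w * v_water  # volume attributed to the ethanol
n_eth = m_eth / M_eth                    # moles of ethanol
phi_V = v_apparent / n_eth               # apparent molar volume

print(f"apparent volume of ethanol : {v_apparent:.4f} L")           # 0.2312 L
print(f"apparent molar volume      : {phi_V * 1000:.1f} cm^3/mol")  # 53.2
```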
As the percent ethanol goes up toward 100%, the apparent molar volume rises to the molar volume of pure ethanol. Electrolyte – non-electrolyte systems Apparent quantities can highlight interactions in electrolyte – non-electrolyte systems which show effects like salting in and salting out, but they also give insight into ion–ion interactions, especially through their dependence on temperature. Multicomponent mixtures or solutions For multicomponent solutions, apparent molar properties can be defined in several ways. For the volume of a ternary (3-component) solution with one solvent and two solutes as an example, there would still be only one equation,

$V = n_0 V_0^{\circ} + n_1\,{}^{\phi}V_1 + n_2\,{}^{\phi}V_2,$

which is insufficient to determine the two apparent volumes. (This is in contrast to partial molar properties, which are well-defined intensive properties of the materials and therefore unambiguously defined in multicomponent systems. For example, partial molar volume is defined for each component i as $\bar V_i = \left(\frac{\partial V}{\partial n_i}\right)_{T,P,n_{j\neq i}}$.) One description of ternary aqueous solutions considers only the weighted mean apparent molar volume of the solutes, defined as

${}^{\phi}V_{12} = \frac{V - n_0 V_0^{\circ}}{n_1 + n_2},$

where $V$ is the solution volume and $n_0 V_0^{\circ}$ the volume of pure water. This method can be extended for mixtures with more than 3 components. The sum of the products of molalities and apparent molar volumes of the solutes in their binary solutions equals the product of the sum of molalities of the solutes and the apparent molar volume in the ternary or multicomponent solution mentioned above:

$\sum_i b_i\,{}^{\phi}V_i = b_T\,{}^{\phi}V_{1\ldots n}.$

Another method is to treat the ternary system as pseudobinary and define the apparent molar volume of each solute with reference to a binary system containing both other components: water and the other solute. The apparent molar volumes of each of the two solutes are then

${}^{\phi}V_1 = \frac{V - V_{02}}{n_1} \quad\text{and}\quad {}^{\phi}V_2 = \frac{V - V_{01}}{n_2},$

where $V_{0j}$ is the volume of the binary solution of water and solute $j$ taken as the reference. The apparent molar volume of the solvent is:

${}^{\phi}V_0 = \frac{V - V_{12}}{n_0},$

where $V_{12}$ is the volume of the binary mixture of the two solutes. However, this is an unsatisfactory description of volumetric properties. The apparent molar volume of two components or solutes considered as one pseudocomponent, ${}^{\phi}V_{ij}$, is not to be confused with the volumes of partial binary mixtures with one common component, $V_{ij}$, $V_{jk}$, which mixed in a certain mixing ratio form a certain ternary mixture $V$ or $V_{ijk}$. Of course the complement volume of a component with respect to the other components of the mixture can be defined as a difference between the volume of the mixture and the volume of a binary submixture of a given composition, like

$V_{k(ij)} = V_{ijk} - V_{ij}.$

There are situations when there is no rigorous way to define which is solvent and which is solute, as in the case of liquid mixtures (say water and ethanol) that may or may not dissolve a solid like sugar or salt. In these cases apparent molar properties can and must be ascribed to all components of the mixture. See also Volume fraction Ideal solution Regular solution Enthalpy change of solution Enthalpy of mixing Block design Heat of dilution Hydration energy Ion transport number Solvation shell Partial molar property Excess molar quantity Salting in Ternary plot Thermodynamic activity Notes References External links Apparent Molar Properties: Solutions: Background The (p,ρ,T) Properties and Apparent Molar Volumes of ethanol solutions of LiI or ZnCl2 Apparent molar volumes and apparent molar heat capacities of Pr(NO3)3(aq), Gd(NO3)3(aq), Ho(NO3)3(aq), and Y(NO3)3(aq) at T = (288.15, 298.15, 313.15, and 328.15) K and p = 0.1 MPa Isotopic effects for electrolytes apparent properties Physical chemistry Thermodynamic properties Molar quantities
Apparent molar property
[ "Physics", "Chemistry", "Mathematics" ]
2,349
[ "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Intensive quantities", "Thermodynamics", "nan", "Physical chemistry", "Molar quantities" ]
1,240,093
https://en.wikipedia.org/wiki/Effective%20action
In quantum field theory, the quantum effective action is a modified expression for the classical action taking into account quantum corrections while ensuring that the principle of least action applies, meaning that extremizing the effective action yields the equations of motion for the vacuum expectation values of the quantum fields. The effective action also acts as a generating functional for one-particle irreducible correlation functions. The potential component of the effective action is called the effective potential, with the expectation value of the true vacuum being the minimum of this potential rather than of the classical potential, making it important for studying spontaneous symmetry breaking. It was first defined perturbatively by Jeffrey Goldstone and Steven Weinberg in 1962, while the non-perturbative definition was introduced by Bryce DeWitt in 1963 and independently by Giovanni Jona-Lasinio in 1964. The article describes the effective action for a single scalar field; however, similar results exist for multiple scalar or fermionic fields. Generating functionals These generating functionals also have applications in statistical mechanics and information theory, with slightly different factors of $i$ and sign conventions. A quantum field theory with action $S[\phi]$ can be fully described in the path integral formalism using the partition functional

$Z[J] = \int \mathcal{D}\phi \; e^{\,i S[\phi] + i\int d^4x\, J(x)\phi(x)}.$

Since it corresponds to vacuum-to-vacuum transitions in the presence of a classical external current $J(x)$, it can be evaluated perturbatively as the sum of all connected and disconnected Feynman diagrams. It is also the generating functional for correlation functions

$\langle \hat\phi(x_1)\cdots\hat\phi(x_n)\rangle = (-i)^n\,\frac{1}{Z[J]}\,\frac{\delta^n Z[J]}{\delta J(x_1)\cdots\delta J(x_n)}\bigg|_{J=0},$

where the scalar field operators are denoted by $\hat\phi(x)$. One can define another useful generating functional, $W[J] = -i\ln Z[J]$, responsible for generating connected correlation functions, which is calculated perturbatively as the sum of all connected diagrams. Here connected is interpreted in the sense of the cluster decomposition, meaning that the correlation functions approach zero at large spacelike separations. General correlation functions can always be written as a sum of products of connected correlation functions. The quantum effective action is defined using the Legendre transformation of $W[J]$,

$\Gamma[\phi] = W[J_\phi] - \int d^4x\, J_\phi(x)\,\phi(x),$

where $J_\phi$ is the source current for which the scalar field has the expectation value $\phi(x)$, often called the classical field, defined implicitly as the solution to

$\phi(x) = \langle \hat\phi(x)\rangle_J = \frac{\delta W[J]}{\delta J(x)}.$

As an expectation value, the classical field can be thought of as the weighted average over quantum fluctuations in the presence of a current $J(x)$ that sources the scalar field. Taking the functional derivative of the Legendre transformation with respect to $\phi(x)$ yields

$\frac{\delta \Gamma[\phi]}{\delta \phi(x)} = -J(x).$

In the absence of a source ($J(x) = 0$), the above shows that the vacuum expectation value of the fields extremizes the quantum effective action rather than the classical action. This is nothing more than the principle of least action in the full quantum field theory. The reason why the quantum theory requires this modification comes from the path integral perspective, since all possible field configurations contribute to the path integral, while in classical field theory only the classical configurations contribute. The effective action is also the generating functional for one-particle irreducible (1PI) correlation functions. 1PI diagrams are connected graphs that cannot be disconnected into two pieces by cutting a single internal line. Therefore, we have

$\Gamma[\phi] = \sum_n \frac{1}{n!}\int d^4x_1 \cdots d^4x_n\; \Gamma^{(n)}(x_1,\dots,x_n)\,\phi(x_1)\cdots\phi(x_n),$

with $\Gamma^{(n)}$ being the sum of all 1PI Feynman diagrams with $n$ external legs. 
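These definitions can be made concrete in a zero-dimensional toy model, where the "path integral" is an ordinary integral and W and Γ can be computed numerically. The Euclidean conventions, action, and coupling values below are illustrative choices of mine, not anything fixed by the text.

```python
# Zero-dimensional toy model (assumptions mine): the path integral reduces
# to an ordinary integral, so Z, W and the Legendre transform defining the
# effective action can be evaluated numerically. Euclidean conventions:
#   Z(J) = integral dphi exp(-S(phi) + J*phi),   W(J) = log Z(J),
#   Gamma(phi) = sup_J [ J*phi - W(J) ]   (convex by construction).

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

m2, lam = 1.0, 0.5  # hypothetical mass^2 and quartic coupling

def S(phi):
    """Classical Euclidean action of the toy model."""
    return 0.5 * m2 * phi**2 + lam * phi**4 / 24.0

def W(J):
    """Connected generating functional W(J) = log Z(J)."""
    Z, _ = quad(lambda p: np.exp(-S(p) + J * p), -10.0, 10.0)
    return np.log(Z)

def Gamma(phi):
    """Effective action from the Legendre transform of W."""
    res = minimize_scalar(lambda J: -(J * phi - W(J)), bounds=(-10.0, 10.0), method="bounded")
    return -res.fun

# Gamma''(0) is the inverse propagator; it differs from the classical m2
# by quantum corrections (~ lam/(2*m2) at one loop in this toy model).
eps = 1e-2
curvature = (Gamma(eps) - 2.0 * Gamma(0.0) + Gamma(-eps)) / eps**2
print(f"classical m^2 : {m2}")
print(f"Gamma''(0)    : {curvature:.3f}")
```

The sup over J in the Legendre transform is what makes the resulting Γ convex, mirroring the convexity discussion that follows.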
The close connection between $W[J]$ and $\Gamma[\phi]$ means that there are a number of very useful relations between their correlation functions. For example, the two-point correlation function, which is nothing less than the propagator $\Delta(x,y) = \delta^2 W/\delta J(x)\,\delta J(y)$, is the inverse of the 1PI two-point correlation function,

$\int d^4z\;\frac{\delta^2 W}{\delta J(x)\,\delta J(z)}\,\frac{\delta^2 \Gamma}{\delta \phi(z)\,\delta \phi(y)} = -\delta^4(x-y).$

Methods for calculating the effective action A direct way to calculate the effective action perturbatively as a sum of 1PI diagrams is to sum over all 1PI vacuum diagrams acquired using the Feynman rules derived from the shifted action $S[\phi + \phi']$, expanded about the background configuration $\phi$. This works because any place where $\phi$ appears in any of the propagators or vertices is a place where an external $\phi$ line could be attached. This is very similar to the background field method, which can also be used to calculate the effective action. Alternatively, the one-loop approximation to the action can be found by considering the expansion of the partition function around the classical vacuum expectation value field configuration, yielding

$\Gamma[\phi] = S[\phi] + \frac{i}{2}\operatorname{Tr}\Big[\ln \frac{\delta^2 S[\phi]}{\delta\phi(x)\,\delta\phi(y)}\Big] + \cdots.$

Symmetries Symmetries of the classical action $S[\phi]$ are not automatically symmetries of the quantum effective action $\Gamma[\phi]$. If the classical action has a continuous symmetry depending on some functional $F[x,\phi]$,

$\phi(x) \rightarrow \phi(x) + \epsilon\,F[x,\phi],$

then this directly imposes the constraint

$\int d^4x\;\langle F[x,\phi]\rangle_{J_\phi}\,\frac{\delta \Gamma[\phi]}{\delta \phi(x)} = 0.$

This identity is an example of a Slavnov–Taylor identity. It is identical to the requirement that the effective action is invariant under the symmetry transformation

$\phi(x) \rightarrow \phi(x) + \epsilon\,\langle F[x,\phi]\rangle_{J_\phi}.$

This symmetry is identical to the original symmetry for the important class of linear symmetries,

$F[x,\phi] = a(x) + \int d^4y\; b(x,y)\,\phi(y).$

For non-linear functionals the two symmetries generally differ, because the average of a non-linear functional is not equivalent to the functional of an average. Convexity For a spacetime with volume $\mathcal{V}$, the effective potential is defined as $V(\phi) = -\Gamma[\phi]/\mathcal{V}$, evaluated at constant field configurations $\phi(x) = \phi$. With a Hamiltonian $\hat H$, the effective potential $V(\phi)$ always gives the minimum of the expectation value of the energy density for the set of states $|\Omega\rangle$ satisfying $\langle\Omega|\hat\phi|\Omega\rangle = \phi$. This definition over multiple states is necessary because multiple different states, each of which corresponds to a particular source current, may result in the same expectation value. It can further be shown that the effective potential is necessarily a convex function, $V''(\phi) \geq 0$. Calculating the effective potential perturbatively can sometimes yield a non-convex result, such as a potential that has two local minima. However, the true effective potential is still convex, becoming approximately linear in the region where the apparent effective potential fails to be convex. The contradiction occurs in calculations around unstable vacua, since perturbation theory necessarily assumes that the vacuum is stable. For example, consider an apparent effective potential $V_0(\phi)$ with two local minima whose expectation values $\phi_1$ and $\phi_2$ are the expectation values for the states $|\Omega_1\rangle$ and $|\Omega_2\rangle$, respectively. Then any $\phi$ in the non-convex region of $V_0(\phi)$ can also be acquired for some $\lambda \in [0,1]$ using

$|\Omega\rangle = \sqrt{\lambda}\,|\Omega_1\rangle + \sqrt{1-\lambda}\,|\Omega_2\rangle, \qquad \phi = \lambda\,\phi_1 + (1-\lambda)\,\phi_2.$

However, the energy density of this state is

$\lambda\,V_0(\phi_1) + (1-\lambda)\,V_0(\phi_2) < V_0(\phi),$

meaning $V_0$ cannot be the correct effective potential at $\phi$, since it did not minimize the energy density. Rather the true effective potential is equal to or lower than this linear construction, which restores convexity. See also Background field method Correlation function Path integral formulation Renormalization group Spontaneous symmetry breaking References Further reading Das, A. 
: Field Theory: A Path Integral Approach, World Scientific Publishing 2006 Schwartz, M.D.: Quantum Field Theory and the Standard Model, Cambridge University Press 2014 Toms, D.J.: The Schwinger Action Principle and Effective Action, Cambridge University Press 2007 Weinberg, S.: The Quantum Theory of Fields: Modern Applications, Vol.II, Cambridge University Press 1996 Quantum field theory
Effective action
[ "Physics" ]
1,263
[ "Quantum field theory", "Quantum mechanics" ]
1,240,378
https://en.wikipedia.org/wiki/Symmetry%20breaking
In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. This collapse is often one of many possible bifurcations that a particle can take as it approaches a lower energy state. Due to the many possibilities, an observer may assume the result of the collapse to be arbitrary. This phenomenon is fundamental to quantum field theory (QFT) and, further, to contemporary understandings of physics. Specifically, it plays a central role in the Glashow–Weinberg–Salam model, which forms part of the Standard Model modelling the electroweak sector. In an infinite system (Minkowski spacetime) symmetry breaking occurs; however, in a finite system (that is, any real super-condensed system) the system is less predictable, and in many cases quantum tunneling occurs. Symmetry breaking and tunneling relate through the collapse of a particle into a non-symmetric state as it seeks a lower energy. Symmetry breaking can be distinguished into two types, explicit and spontaneous. They are characterized by whether the equations of motion fail to be invariant, or the ground state fails to be invariant. Non-technical description This section describes spontaneous symmetry breaking. This is the idea that for a physical system, the lowest energy configuration (the vacuum state) is not the most symmetric configuration of the system. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality. An example of a system with discrete symmetry is given by a double-well curve: consider a particle moving on this curve, subject to gravity. Such a curve is given, for example, by the function $y = x^4 - x^2$. This system is symmetric under reflection in the y-axis. There are three possible stationary states for the particle: the top of the hill at $x = 0$, or the bottom, at $x = \pm 1/\sqrt{2}$. When the particle is at the top, the configuration respects the reflection symmetry: the particle stays in the same place when reflected. However, the lowest energy configurations are those at $x = \pm 1/\sqrt{2}$. When the particle is in either of these configurations, it is no longer fixed under reflection in the y-axis: reflection swaps the two vacuum states. An example with continuous symmetry is given by a 3d analogue of the previous example, from rotating the graph around an axis through the top of the hill, or equivalently given by the graph $z = (x^2 + y^2)^2 - (x^2 + y^2)$. This is essentially the graph of the Mexican hat potential. This has a continuous symmetry given by rotation about the axis through the top of the hill (as well as a discrete symmetry by reflection through any radial plane). Again, if the particle is at the top of the hill it is fixed under rotations, but it has higher gravitational energy at the top. At the bottom, it is no longer invariant under rotations but minimizes its gravitational potential energy. Furthermore rotations move the particle from one energy minimizing configuration to another. There is a novelty here not seen in the previous example: from any of the vacuum states it is possible to access any other vacuum state with only a small amount of energy, by moving around the trough at the bottom of the hill, whereas in the previous example, to access the other vacuum, the particle would have to cross the hill, requiring a large amount of energy. A numerical sketch of the discrete double-well example is given below. 
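The following minimal Python sketch uses the same double-well curve y = x**4 − x**2 as above (that specific functional form is a reconstruction; any symmetric double well illustrates the same point).

```python
# Numerical sketch (assumptions mine) of the discrete example above: the
# double-well y = x**4 - x**2 is symmetric under x -> -x, but its minima
# are not at the symmetric point x = 0.

from scipy.optimize import minimize

def V(x):
    return x**4 - x**2

# x = 0 is a stationary point but a local maximum, so a small nudge to
# either side selects one of the two degenerate minima at x = +/- 1/sqrt(2).
for x0 in (+0.1, -0.1):
    res = minimize(V, x0)
    print(f"start {x0:+.1f} -> minimum at x = {res.x[0]:+.4f}, V = {res.fun:.4f}")
# Both minima have V = -1/4; the reflection x -> -x maps one onto the other.
```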
Gauge symmetry breaking is the most subtle, but has important physical consequences. Roughly speaking, for the purposes of this section a gauge symmetry is an assignment of systems with continuous symmetry to every point in spacetime. Gauge symmetry forbids mass generation for gauge fields, yet massive gauge fields (W and Z bosons) have been observed. Spontaneous symmetry breaking was developed to resolve this inconsistency. The idea is that in an early stage of the universe it was in a high energy state, analogous to the particle being at the top of the hill, and so had full gauge symmetry and all the gauge fields were massless. As it cooled, it settled into a choice of vacuum, thus spontaneously breaking the symmetry, removing the gauge symmetry and allowing mass generation of those gauge fields. A full explanation is highly technical: see electroweak interaction. Spontaneous symmetry breaking In spontaneous symmetry breaking (SSB), the equations of motion of the system are invariant, but any vacuum state (lowest energy state) is not. For an example with two-fold symmetry, if there is some atom that has two vacuum states, occupying either one of these states breaks the two-fold symmetry. This act of selecting one of the states as the system reaches a lower energy is SSB. When this happens, the atom is no longer symmetric (reflectively symmetric) and has collapsed into a lower energy state. Such a symmetry breaking is parametrized by an order parameter. A special case of this type of symmetry breaking is dynamical symmetry breaking. In the Lagrangian setting of quantum field theory (QFT), the Lagrangian $L$ is a functional of quantum fields which is invariant under the action of a symmetry group $G$. However, the vacuum expectation value formed when the particle collapses to a lower energy may not be invariant under $G$. In this instance, it will partially break the symmetry of $G$ into a subgroup $H$. This is spontaneous symmetry breaking. Within the context of gauge symmetry, however, SSB is the phenomenon by which gauge fields 'acquire mass' despite gauge-invariance enforcing that such fields be massless. This is because the SSB of gauge symmetry breaks gauge-invariance, and such a break allows for the existence of massive gauge fields. This is an important exemption from Goldstone's theorem, where a Nambu–Goldstone boson can gain mass, becoming a Higgs boson in the process. Further, in this context the usage of 'symmetry breaking', while standard, is a misnomer, as gauge 'symmetry' is not really a symmetry but a redundancy in the description of the system. Mathematically, this redundancy is a choice of trivialization, somewhat analogous to redundancy arising from a choice of basis. Spontaneous symmetry breaking is also associated with phase transitions. For example, in the Ising model, as the temperature of the system falls below the critical temperature the symmetry of the vacuum is broken, giving a phase transition of the system. Explicit symmetry breaking In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In Hamiltonian mechanics or Lagrangian mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry. In the Hamiltonian setting, this is often studied when the Hamiltonian can be written $H = H_0 + H_{\text{int}}$. Here $H_0$ is a 'base Hamiltonian', which has some manifest symmetry. More explicitly, it is symmetric under the action of a (Lie) group $G$. Often this is an integrable Hamiltonian. The $H_{\text{int}}$ is a perturbation or interaction Hamiltonian. 
This is not invariant under the action of $G$. It is often proportional to a small, perturbative parameter. This is essentially the paradigm for perturbation theory in quantum mechanics. An example of its use is in finding the fine structure of atomic spectra. Examples Symmetry breaking can cover any of the following scenarios: The breaking of an exact symmetry of the underlying laws of physics by the apparently random formation of some structure; A situation in physics in which a minimal energy state has less symmetry than the system itself; Situations where the actual state of the system does not reflect the underlying symmetries of the dynamics because the manifestly symmetric state is unstable (stability is gained at the cost of local asymmetry); Situations where the equations of a theory may have certain symmetries, though their solutions may not (the symmetries are "hidden"). One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the kinetic energy compared to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the non-axially symmetric Jacobi ellipsoids instead of the Maclaurin spheroids. See also Higgs mechanism QCD vacuum 1964 PRL symmetry breaking papers References External links Symmetry Pattern formation Theoretical physics Quantum field theory Standard Model
Symmetry breaking
[ "Physics", "Mathematics" ]
1,749
[ "Standard Model", "Quantum field theory", "Theoretical physics", "Quantum mechanics", "Particle physics", "Geometry", "Symmetry" ]
1,240,666
https://en.wikipedia.org/wiki/Parity%20%28physics%29
In physics, a parity transformation (also called parity inversion) is the flip in the sign of one spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):

$\mathbf{P}:\; (x, y, z) \mapsto (-x, -y, -z).$

It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image. All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force. By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions. A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation. In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions. Simple symmetry relations Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group. Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true; therefore the projective representation condition on quantum states is weaker than the representation condition on classical states. The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors, and so quantum states may transform not only as tensors but also as spinors. If one adds to this a classification by parity, these can be extended, for example, into notions of scalars ($P = +1$) and pseudoscalars ($P = -1$), which are rotationally invariant; and vectors ($P = -1$) and axial vectors (also called pseudovectors) ($P = +1$), which both transform as vectors under rotation. One can define reflections such as

$V_x:\; (x, y, z) \mapsto (-x, y, z),$

which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing x-, y-, and z-reflections) one can recover the particular parity transformation defined earlier. 
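These determinant statements are easy to check numerically; here is a small sketch using numpy (the matrices are the obvious coordinate reflections, chosen by me for illustration).

```python
# Quick numerical check (sketch, assumptions mine) of the determinant
# statements above.

import numpy as np

# Flipping all coordinates: det = (-1)**d, so it is a parity transformation
# (det = -1) in 3 dimensions but an ordinary rotation (det = +1) in 2.
for d in (2, 3):
    print(f"det(-I_{d}) = {np.linalg.det(-np.eye(d)):+.0f}")

# A reflection of a single coordinate always has det = -1 ...
Vx, Vy, Vz = (np.diag(v) for v in ([-1.0, 1.0, 1.0], [1.0, -1.0, 1.0], [1.0, 1.0, -1.0]))
print(f"det(Vx) = {np.linalg.det(Vx):+.0f}")

# ... and composing the x-, y- and z-reflections recovers the point reflection.
print(np.allclose(Vx @ Vy @ Vz, -np.eye(3)))  # True
```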
The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used. Parity forms the abelian group $\mathbb{Z}_2$ due to the relation $\hat{P}^2 = \hat{1}$. All Abelian groups have only one-dimensional irreducible representations. For $\mathbb{Z}_2$, there are two irreducible representations: one is even under parity, $\hat{P}\phi = +\phi$, the other is odd, $\hat{P}\phi = -\phi$. These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations and so in principle a parity transformation may rotate a state by any phase. Representations of O(3) An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism $\rho : O(3) \to GL(V)$ which defines the representation. For a matrix $R \in O(3)$: scalars: $\rho(R) = 1$, the trivial representation; pseudoscalars: $\rho(R) = \det R$; vectors: $\rho(R) = R$, the fundamental representation; pseudovectors: $\rho(R) = \det(R)\, R$. When the representation is restricted to $SO(3)$, scalars and pseudoscalars transform identically, as do vectors and pseudovectors. Classical mechanics Newton's equation of motion $\vec{F} = m\vec{a}$ (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity. However, angular momentum $\vec{L}$ is an axial vector: $\vec{L} = \vec{r} \times \vec{p}$, so $\hat{P}(\vec{L}) = (-\vec{r}) \times (-\vec{p}) = \vec{L}$. In classical electrodynamics, the charge density $\rho$ is a scalar, the electric field $\vec{E}$ and current $\vec{j}$ are vectors, but the magnetic field $\vec{H}$ is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector. Effect of spatial inversion on some variables of classical physics The two major divisions of classical physical variables have either even or odd parity. The way in which particular variables and vectors sort into either category depends on whether the number of dimensions of space is an odd or even number. The categories of odd or even given below for the parity transformation are a different, but intimately related, issue. The answers given below are correct for 3 spatial dimensions. In a 2 dimensional space, for example, when constrained to remain on the surface of a planet, some of the variables switch sides. Odd Classical variables whose signs flip under spatial inversion are predominantly vectors. They include: the position vector, velocity, acceleration, linear momentum, force, and the electric field. Even Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include: time, mass, energy, electric charge, angular momentum (both orbital and spin), and the magnetic field. Quantum mechanics Possible eigenvalues In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, $\hat{P}$, is a unitary operator, in general acting on a state $\psi$ as follows: $\hat{P}\, \psi(r) = e^{i\phi/2}\, \psi(-r)$. One must then have $\hat{P}^2\, \psi(r) = e^{i\phi}\, \psi(r)$, since an overall phase is unobservable. The operator $\hat{P}^2$, which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases $e^{i\phi}$. If $e^{i\phi}$ is an element of a continuous U(1) symmetry group of phase rotations, then $e^{-i\phi/2}$ is part of this U(1) and so is also a symmetry. In particular, we can define $\hat{P}' \equiv \hat{P}\, e^{-i\phi/2}$, which is also a symmetry, and so we can choose to call $\hat{P}'$ our parity operator, instead of $\hat{P}$. Note that $\hat{P}'^2 = 1$ and so $\hat{P}'$ has eigenvalues $\pm 1$. Wave functions with eigenvalue $+1$ under a parity transformation are even functions, while eigenvalue $-1$ corresponds to odd functions.
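This ±1 eigenvalue structure can be made concrete: any wave function splits into even and odd parts, ψ±(x) = (ψ(x) ± ψ(−x))/2, each a parity eigenstate. A minimal Python sketch (NumPy assumed; the Gaussian below is an arbitrary illustration, with x → −x implemented as index reversal on a symmetric grid):

```python
import numpy as np

x = np.linspace(-5, 5, 1001)          # symmetric grid, so x -> -x is index reversal
psi = np.exp(-(x - 1.0) ** 2)         # a wave function with no definite parity

psi_reflected = psi[::-1]             # (P psi)(x) = psi(-x) on this grid
psi_even = 0.5 * (psi + psi_reflected)   # eigenvalue +1 component
psi_odd  = 0.5 * (psi - psi_reflected)   # eigenvalue -1 component

# The components are parity eigenstates and reassemble the original state.
print(np.allclose(psi_even[::-1],  psi_even))   # True
print(np.allclose(psi_odd[::-1],  -psi_odd))    # True
print(np.allclose(psi_even + psi_odd, psi))     # True
```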
However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than $\pm 1$. For electronic wavefunctions, even states are usually indicated by a subscript g for gerade (German: even) and odd states by a subscript u for ungerade (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled $1\sigma_g$ and the next-closest (higher) energy level is labelled $1\sigma_u$. The wave functions of a particle moving in an external potential which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric to the origin) either remain invariable or change signs: these two possible states are called the even state or odd state of the wave functions. The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariable in the process of ensemble evolution. However, this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity. The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum. Consequences of parity symmetry When parity generates the Abelian group $\mathbb{Z}_2$, one can always take linear combinations of quantum states such that they are either even or odd under parity. Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number. In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if $\hat{P}$ commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential, i.e., $V = V(r)$, hence the potential is spherically symmetric. The following facts can be easily proven: If $|\varphi\rangle$ and $|\psi\rangle$ have the same parity, then $\langle \varphi | \hat{X} | \psi \rangle = 0$, where $\hat{X}$ is the position operator. For a state $|\vec{L}, L_z\rangle$ of orbital angular momentum $\vec{L}$ with z-axis projection $L_z$, then $\hat{P}\, |\vec{L}, L_z\rangle = (-1)^{L}\, |\vec{L}, L_z\rangle$. If $[\hat{H}, \hat{P}] = 0$, then atomic dipole transitions only occur between states of opposite parity. If $[\hat{H}, \hat{P}] = 0$, then a non-degenerate eigenstate of $\hat{H}$ is also an eigenstate of the parity operator; i.e., a non-degenerate eigenfunction of $\hat{H}$ is either invariant to $\hat{P}$ or is changed in sign by $\hat{P}$. Some of the non-degenerate eigenfunctions of $\hat{H}$ are unaffected (invariant) by parity and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute: $\hat{P}\, \psi = c\, \psi$, where $c$ is a constant, the eigenvalue of $\hat{P}$. Many-particle systems: atoms, molecules, nuclei The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules. Atoms Atomic orbitals have parity (−1)ℓ, where the exponent ℓ is the azimuthal quantum number. The parity is odd for orbitals p, f, ... with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity.
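The (−1)ℓ counting rule lends itself to a short sketch. The following Python helper (a hypothetical function written for illustration, not a standard library routine) computes the parity of an electron configuration from its orbital letters:

```python
# Azimuthal quantum number for each orbital letter.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def configuration_parity(config):
    """Parity (-1)**(sum of l over all electrons) for a configuration
    given as a list of (subshell, occupancy) pairs."""
    total_l = sum(L_OF[sub[-1]] * n for sub, n in config)
    return +1 if total_l % 2 == 0 else -1

# Ground state of nitrogen, 1s2 2s2 2p3: three p electrons -> odd parity.
print(configuration_parity([("1s", 2), ("2s", 2), ("2p", 3)]))              # -1

# Excited configuration 1s2 2s2 2p2 3s1: two p electrons -> even parity.
print(configuration_parity([("1s", 2), ("2s", 2), ("2p", 2), ("3s", 1)]))   # +1
```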
However, the third excited term, at about 83,300 cm−1 above the ground state, with electron configuration 1s22s22p23s, has even parity since there are only two 2p electrons, and its term symbol is 4P (without an o superscript). Molecules The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or − as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass. Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene, benzene, xenon tetrafluoride and sulphur hexafluoride. For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions. Nuclei In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model. As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d5/2 shell, which has even parity since ℓ = 2 for a d orbital. Quantum field theory If one can show that the vacuum state is invariant under parity, $\hat{P}\, |0\rangle = |0\rangle$, the Hamiltonian is parity invariant and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction. To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations.
The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator: $\hat{P}\, a(\vec{p}, \pm)\, \hat{P}^{-1} = -a(-\vec{p}, \mp)$, where $\vec{p}$ denotes the momentum of a photon and $\pm$ refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity. A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, $\hat{P}\, \phi(\vec{r}, t)\, \hat{P}^{-1} = \phi(-\vec{r}, t)$, since $\hat{P}\, a(\vec{p})\, \hat{P}^{-1} = a(-\vec{p})$. This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation, where it is shown that fermions and antifermions have opposite intrinsic parity.) With fermions, there is a slight complication because there is more than one spin group. Parity in the Standard Model Fixing the global symmetries Applying the parity operator twice leaves the coordinates unchanged, meaning that $\hat{P}^2$ must act as one of the internal symmetries of the theory, at most changing the phase of a state. For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number $B$, the lepton number $L$, and the electric charge $Q$. Therefore, the parity operator satisfies $\hat{P}^2 = e^{i\alpha B + i\beta L + i\gamma Q}$ for some choice of $\alpha$, $\beta$, and $\gamma$. This operator is also not unique in that a new parity operator can always be constructed by multiplying it by an internal symmetry such as $e^{i\alpha B}$ for some $\alpha$. To see if the parity operator can always be defined to satisfy $\hat{P}^2 = 1$, consider the general case when $\hat{P}^2 = \hat{Q}$ for some internal symmetry $\hat{Q}$ present in the theory. The desired parity operator would be $\hat{P}' = \hat{P}\, \hat{Q}^{-1/2}$. If $\hat{Q}$ is part of a continuous symmetry group then $\hat{Q}^{-1/2}$ exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible. The Standard Model exhibits a $(-1)^F$ symmetry, where $F$ is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy $F = B + L$, the discrete symmetry is also part of the continuous symmetry group. If the parity operator satisfied $\hat{P}^2 = (-1)^F$, then it can be redefined to give a new parity operator satisfying $\hat{P}^2 = 1$. But if the Standard Model is extended by incorporating Majorana neutrinos, which have $F = 1$ and $B + L = 0$, then the discrete symmetry $(-1)^F$ is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies $\hat{P}^4 = 1$, so the Majorana neutrinos would have intrinsic parities of $\pm i$. Parity of the pion In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity. They studied the decay of an "atom" made from a deuteron ($d$) and a negatively charged pion ($\pi^-$) in a state with zero orbital angular momentum ($L = 0$) into two neutrons ($n$). Neutrons are fermions and so obey Fermi–Dirac statistics, which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero together with the antisymmetry of the final state they concluded that the two neutrons must have orbital angular momentum $L = 1$. The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function, $(-1)^L$. Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign.
A deuteron nucleus is made from a proton and a neutron, and so using the aforementioned convention that protons and neutrons have intrinsic parities equal to $+1$, they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron, explicitly $P_\pi = \frac{-P_n^2}{P_p P_n} = -1$, from which they concluded that the pion is a pseudoscalar particle. Parity violation Although parity is conserved in electromagnetism and gravity, it is violated in weak interactions, and perhaps, to some degree, in strong interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way. An obscure 1928 experiment, undertaken by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but, since the appropriate concepts had not yet been developed, those results had no impact. In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli, because it implied parity violation. By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards. Wu, Ambler, Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60. As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of their positive results, and, saying the results needed further examination, she asked them not to publicize them first. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. Three of them, R. L. Garwin, L. M. Lederman, and R. M. Weinrich, modified an existing cyclotron experiment, and immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal. The discovery of parity violation explained the outstanding τ–θ puzzle in the physics of kaons. In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark–gluon plasmas. An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation manifests itself via the chiral magnetic effect.
Intrinsic parity of hadrons To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions. See also C-symmetry CP violation Electroweak theory Mirror matter Molecular symmetry T-symmetry References Footnotes Citations Sources Physical quantities Quantum mechanics Quantum field theory Nuclear physics Conservation laws Quantum numbers Asymmetry
Parity (physics)
[ "Physics", "Chemistry", "Mathematics" ]
4,489
[ "Quantum field theory", "Physical phenomena", "Quantum chemistry", "Equations of physics", "Physical quantities", "Asymmetry", "Conservation laws", "Theoretical physics", "Quantity", "Quantum mechanics", "Quantum numbers", "Nuclear physics", "Physical properties", "Symmetry", "Physics th...
1,240,836
https://en.wikipedia.org/wiki/Terahertz%20time-domain%20spectroscopy
In physics, terahertz time-domain spectroscopy (THz-TDS) is a spectroscopic technique in which the properties of matter are probed with short pulses of terahertz radiation. The generation and detection scheme is sensitive to the sample's effect on both the amplitude and the phase of the terahertz radiation. Explanation Typically, an ultrashort pulsed laser is used in the terahertz pulse generation process. In the use of low-temperature grown GaAs as an antenna, the ultrashort pulse creates charge carriers that are accelerated to create the terahertz pulse. In the use of non-linear crystals as a source, a high-intensity ultrashort pulse produces THz radiation from the crystal. A single terahertz pulse can contain frequency components covering much of the terahertz range, often from 0.05 to 4 THz, though the use of an air plasma can yield frequency components up to 40 THz. After THz pulse generation, the pulse is directed by optical techniques, focused through a sample, then measured. THz-TDS requires generation of an ultrafast (thus, large bandwidth) terahertz pulse from an even faster femtosecond optical pulse, typically from a Ti:sapphire laser. That optical pulse is first split to provide a probe pulse whose path length is adjusted using an optical delay line. The probe pulse strobes the detector, which is sensitive to the electric field of the resulting terahertz signal at the time the optical probe pulse arrives at it. By varying the path length traversed by the probe pulse, the test signal is thereby measured as a function of time—the same principle as a sampling oscilloscope (technically, the measurement obtains the convolution of the test signal and the time-domain response of the strobed detector). To obtain the resulting frequency domain response using the Fourier transform, the measurement must cover each point in time (delay-line offset) of the resulting test pulse. The response of a test sample can be calibrated by dividing its spectrum so obtained by the spectrum of the terahertz pulse obtained with the sample removed, for instance. Components Components of a typical THz-TDS instrument include an infrared laser, optical beamsplitters, beam steering mirrors, delay stages, a terahertz generator, terahertz beam focusing and collimating optics such as parabolic mirrors, and a detector. Ti:sapphire laser Constructing a THz-TDS experiment using low-temperature grown GaAs (LT-GaAs) based antennas requires a laser whose photon energy exceeds the band gap of the material. Ti:sapphire lasers tuned to around 800 nm, matching the energy gap in LT-GaAs, are ideal as they can generate optical pulses as short as 10 fs. These lasers are available as commercial, turnkey systems. Steering mirrors Silver-coated mirrors are optimum for use as steering mirrors for infrared pulses around 800 nm. Their reflectivity is higher than gold and much higher than aluminum at that wavelength. Beamsplitters A beamsplitter is used to divide a single ultrashort optical pulse into two separate beams. A 50/50 beamsplitter is often used, supplying equal optical power to the terahertz generator and detector, though it is common to provide the terahertz generation path with more power given the inefficiency of the terahertz generation process compared to the detection efficiency of infrared (typically 800 nm wavelength) light. Delay stage An optical delay line is implemented using a movable stage to vary the path length of one of the two beam paths.
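A back-of-the-envelope sketch of this sampling scheme in Python with NumPy (the scan range, step size, and pulse shapes below are hypothetical illustration values, not taken from the article): moving the stage by dx lengthens the folded path by 2·dx, the resulting delay axis fixes the frequency grid of the Fourier transform, and dividing the sample spectrum by the reference spectrum yields the calibrated complex response.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical delay-stage scan: 1.5 mm of travel in 5 um steps.
stage_m = np.arange(0.0, 1.5e-3, 5e-6)
delay_s = 2.0 * stage_m / C                 # retroreflector doubles the path change
dt = delay_s[1] - delay_s[0]                # ~33 fs per step
freqs = np.fft.rfftfreq(len(delay_s), d=dt) # ~0.1 THz spacing, up to ~15 THz

# Toy waveforms: a reference pulse, and a "sample" pulse that is
# attenuated by half and delayed by 1 ps relative to the reference.
ref = np.exp(-((delay_s - 3e-12) / 0.3e-12) ** 2)
sam = 0.5 * np.exp(-((delay_s - 4e-12) / 0.3e-12) ** 2)

# Calibrated response: ratio of sample and reference spectra.
H = np.fft.rfft(sam) / np.fft.rfft(ref)
print(abs(H[5]))        # ~0.5 amplitude ratio (the attenuation)
print(np.angle(H[5]))   # phase at this frequency encodes the 1 ps delay
```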
A delay stage uses a moving retroreflector to redirect the beam along a well-defined output path but following a delay. Movement of the stage holding the retroreflector corresponds to an adjustment of path length and consequently of the time at which the terahertz detector is gated relative to the source terahertz pulse. Purge box A purge box is typically used so that absorption of THz radiation by gaseous water molecules is minimized. A dry air source is often used for this purpose; however, a nitrogen gas source may also be used. Water is known to have many discrete absorptions in the THz region, which are rotational modes of the water molecule. By contrast, nitrogen, as a diatomic molecule, has no electric dipole moment and does not (for the purposes of typical THz-TDS) absorb THz radiation. Thus, a purge box may be filled with nitrogen gas so that no unintended discrete absorptions in the THz frequency range occur. Parabolic mirrors Off-axis parabolic mirrors are commonly used to collimate and focus THz radiation. Radiation from an effective point source, such as a low-temperature gallium arsenide (LT-GaAs) antenna (active region ~5 μm), incident on an off-axis parabolic mirror becomes collimated, while collimated radiation incident on a parabolic mirror is focused to a point. Terahertz radiation can thus be manipulated spatially using optical components such as metal-coated mirrors as well as lenses made from materials that are transparent at THz wavelengths. Samples for spectroscopy are commonly placed at a focus where the terahertz beam is most concentrated. Uses of THz radiation THz radiation has several distinct advantages for use in spectroscopy. Many materials are transparent at terahertz wavelengths, and this radiation is safe for biological tissue, being non-ionizing (as opposed to X-rays). Many interesting materials have unique spectral fingerprints in the terahertz range that may be used for identification. Demonstrated examples include several different types of explosives, dynamic fingerprinting of DNA and protein molecules using polarization-varying anisotropic terahertz microspectroscopy, polymorphic forms of many compounds used as active pharmaceutical ingredients (API) in commercial medications, as well as several illegal narcotic substances. Since many materials are transparent to THz radiation, underlying materials can be accessed through visually opaque intervening layers. Though THz-TDS is not strictly a spectroscopic technique in this role, the ultrashort width of THz radiation pulses allows for measurements (e.g., thickness, density, defect location) on difficult-to-probe materials like foam. These measurement capabilities share many similarities with those of pulsed ultrasonic systems, as the depth of buried structures can be inferred through the timing of their reflections of these short terahertz pulses. THz generation There are three widely used techniques for generating terahertz pulses, all based on ultrashort pulses from titanium-sapphire lasers or mode-locked fiber lasers. Surface emitters When an ultra-short (100 femtoseconds or shorter) optical pulse illuminates a semiconductor and its wavelength (energy) is above the energy band gap of the material, it photogenerates mobile carriers. Most carriers are generated near the surface of the material (typically within 1 micrometre) because pulses are absorbed exponentially with respect to depth. This has two main effects.
Firstly, it generates a band bending that has the effect of accelerating carriers of different signs in opposite directions (normal to the surface), creating a dipole. This effect is known as surface field emission. Secondly, the presence of a surface creates a break of symmetry that causes carriers to move (on average) only into the bulk of the semiconductor. This phenomenon, combined with the difference of mobilities of electrons and holes, also produces a dipole. This is known as the photo-Dember effect and is particularly strong in high-mobility semiconductors such as indium arsenide. Photoconductive emitters When generating THz radiation via a photoconductive emitter, an ultrafast pulse (typically 100 femtoseconds or shorter) creates charge carriers (electron-hole pairs) in a semiconductor material. This incident laser pulse abruptly changes the antenna from an insulating state into a conducting state. Due to an electric bias applied across the antenna, a sudden electric current flows across the antenna. This changing current lasts for about a picosecond and thus emits terahertz radiation, since the Fourier transform of a picosecond-length signal will contain THz components. Typically the two antenna electrodes are patterned on a low-temperature gallium arsenide (LT-GaAs), semi-insulating gallium arsenide (SI-GaAs), or other semiconductor (such as InP) substrate. In a commonly used scheme, the electrodes are formed into the shape of a simple dipole antenna with a gap of a few micrometers and have a bias voltage up to 40 V between them. The ultrafast laser pulse must have a wavelength that is short enough to excite electrons across the bandgap of the semiconductor substrate. This scheme is suitable for illumination with a Ti:sapphire oscillator laser with photon energies of 1.55 eV and pulse energies of about 10 nJ. For use with amplified Ti:sapphire lasers with pulse energies of about 1 mJ, the electrode gap can be increased to several centimeters with a bias voltage of up to 200 kV. More recent advances towards cost-efficient and compact THz-TDS systems are based on mode-locked fiber laser sources emitting at a center wavelength of 1550 nm. Therefore, the photoconductive emitters must be based on semiconductor materials with smaller band gaps of approximately 0.74 eV, such as Fe-doped indium gallium arsenide or indium gallium arsenide/indium aluminum arsenide heterostructures. The short duration of the THz pulses generated (typically ~2 ps) is primarily due to the rapid rise of the photo-induced current in the semiconductor and the short carrier lifetimes of the semiconductor materials (e.g., LT-GaAs). This current may persist for only a few hundred femtoseconds to several nanoseconds, depending on the substrate material. This is not the only means of generation but is currently the most common. Pulses produced by this method have average power levels on the order of several tens of microwatts. The peak power during the pulses can be many orders of magnitude higher due to the low duty cycle of mostly <1%, which depends on the repetition rate of the laser source. The maximum bandwidth of the resulting THz pulse is primarily limited by the duration of the laser pulse, while the frequency position of the maximum of the Fourier spectrum is determined by the carrier lifetime of the semiconductor.
It is a nonlinear-optical process, where an appropriate crystal material is quickly electrically polarized at high optical intensities. This changing electrical polarization emits terahertz radiation. Because of the high laser intensities that are necessary, this technique is mostly used with amplified Ti:sapphire lasers. Typical crystal materials are zinc telluride, gallium phosphide, and gallium selenide. The bandwidth of pulses generated by optical rectification is limited by the laser pulse duration, terahertz absorption in the crystal material, the thickness of the crystal, and a mismatch between the propagation speed of the laser pulse and the terahertz pulse inside the crystal. Typically, a thicker crystal will generate higher intensities, but lower THz frequencies. With this technique, it is possible to boost the generated frequencies to 40 THz (7.5 μm) or higher, although 2 THz (150 μm) is more commonly used since it requires less complex optical setups. THz detection The electric field of terahertz pulses is measured in a detector simultaneously illuminated with an ultrashort laser pulse. Two common detection schemes are used in THz-TDS: photoconductive sampling and electro-optical sampling. The power of THz pulses can be detected by bolometers (heat detectors cooled to liquid-helium temperatures), but since bolometers can only measure the total energy of a terahertz pulse rather than its electric field over time, they are unsuitable for THz-TDS. Because the measurement technique is coherent, it naturally rejects incoherent radiation. Additionally, because the time slice of the measurement is extremely narrow, the noise contribution to the measurement is extremely low. The signal-to-noise ratio (S/N) of the resulting time-domain waveform depends on experimental conditions (e.g., averaging time). However, due to the coherent sampling techniques described, high S/N values (>70 dB) are routinely observed with 1-minute averaging times. Downmixing The original problem responsible for the "terahertz gap" (the colloquial term for the lack of techniques in the THz frequency range) was that electronics routinely have limited operation at frequencies at and above 10^12 Hz. Two experimental parameters make such measurement possible in THz-TDS with LT-GaAs antennas: the femtosecond "gating" pulses and the <1 ps lifetimes of the charge carriers in the antenna (effectively determining the antenna's "on" time). When all optical path lengths have fixed length, an effective dc current results at the detection electronics due to their low time resolution. Picosecond time resolution does not come from fast electronic or optical techniques, but from the ability to adjust optical path lengths on the micrometer (μm) scale. To measure a particular segment of a THz pulse, the optical path lengths are fixed and the (effective dc) current at the detector due to that particular segment of the THz pulse's electric field is recorded. THz-TDS measurements are typically not single-shot measurements. Photoconductive detection Photoconductive detection is similar to photoconductive generation. Here, the voltage bias across the antenna leads is generated by the electric field of the THz pulse focused onto the antenna, rather than by an external source. The THz electric field drives current across the antenna leads, which is usually amplified with a low-bandwidth amplifier. This amplified current is the measured parameter that corresponds to the THz field strength.
Again, the carriers in the semiconductor substrate have an extremely short lifetime. Thus, the THz electric field strength is only sampled for an extremely narrow slice (femtoseconds) of the entire electric field waveform. Electro-optical sampling The materials used for generation of terahertz radiation by optical rectification can also be used for its detection by using the Pockels effect, where particular crystalline materials become birefringent in the presence of an electric field. The birefringence caused by the electric field of a terahertz pulse leads to a change in the optical polarization of the detection pulse, proportional to the terahertz electric-field strength. With the help of polarizers and photodiodes, this polarization change is measured. As with the generation, the bandwidth of the detection is dependent on the laser pulse duration, material properties, and crystal thickness. Advantages THz-TDS measures the electric field of a pulse and not just the power. Thus, THz-TDS measures both the amplitude and phase information of the frequency components it contains. In contrast, measuring only the power at each frequency is essentially a photon counting technique; information regarding the phase of the light is not obtained. Thus, the waveform is not uniquely determined by such a power measurement. Even when measuring only the power reflected from a sample, the complex optical response constant of the material can be obtained. This is so because the complex nature of an optical constant is not arbitrary. The real and imaginary parts of an optical constant are related by the Kramers–Kronig relations. There is a difficulty in applying the Kramers-Kronig relations as written, because information about the sample (reflected power, for example) must be obtained at all frequencies. In practice, far separated frequency regions do not have significant influence on each other, and reasonable limiting conditions can be applied at high and low frequency, outside of the measured range. THz-TDS, in contrast, does not require use of Kramers-Kronig relations. By measuring the electric field of a THz pulse in the time-domain, the amplitude and phase of each frequency component of the THz pulse are known (in contrast to the single piece of information known by a power measurement). Thus the real and imaginary parts of an optical constant can be known at every frequency within the usable bandwidth of a THz pulse, without need of frequencies outside the usable bandwidth or Kramers-Kronig relations. See also Time resolved microwave conductivity References Further reading Spectroscopy Terahertz technology Explosive detection
Terahertz time-domain spectroscopy
[ "Physics", "Chemistry" ]
3,485
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electromagnetic spectrum", "Spectroscopy", "Terahertz technology" ]
1,241,092
https://en.wikipedia.org/wiki/Pi%20backbonding
In chemistry, π backbonding is a π-bonding interaction between a filled (or half filled) orbital of a transition metal atom and a vacant orbital on an adjacent ion or molecule. In this type of interaction, electrons from the metal are used to bond to the ligand, which dissipates excess negative charge and stabilizes the metal. It is common in transition metals with low oxidation states that have ligands such as carbon monoxide, olefins, or phosphines. The ligands involved in π backbonding can be broken into three groups: carbonyls and nitrogen analogs, alkenes and alkynes, and phosphines. Compounds where π backbonding is prominent include Ni(CO)4, Zeise's salt, and molybdenum and iron dinitrogen complexes. Metal carbonyls, nitrosyls, and isocyanides The electrons are partially transferred from a d-orbital of the metal to anti-bonding molecular orbitals of CO (and its analogs). This electron-transfer strengthens the metal–C bond and weakens the C–O bond. The strengthening of the M–CO bond is reflected in increases of the vibrational frequencies for the M–C bond (often outside of the range for the usual IR spectrophotometers). Furthermore, the M–CO bond length is shortened. The weakening of the C–O bond is indicated by a decrease in the wavenumber of the νCO band(s) from that for free CO (2143 cm−1), for example to 2060 cm−1 in Ni(CO)4 and 1981 cm−1 in Cr(CO)6, and 1790 cm−1 in the anion [Fe(CO)4]2−. For this reason, IR spectroscopy is an important diagnostic technique in metal–carbonyl chemistry. The article infrared spectroscopy of metal carbonyls discusses this in detail. Many ligands other than CO are strong "backbonders". Nitric oxide is an even stronger π-acceptor than CO and νNO is a diagnostic tool in metal–nitrosyl chemistry. Isocyanides, RNC, are another class of ligands that are capable of π-backbonding. In contrast with CO, the σ-donor lone pair on the C atom of isocyanides is antibonding in nature and upon complexation the CN bond is strengthened and the νCN increased. At the same time, π-backbonding lowers the νCN. Depending on the balance of σ-bonding versus π-backbonding, the νCN can either be raised (for example, upon complexation with weak π-donor metals, such as Pt(II)) or lowered (for example, upon complexation with strong π-donor metals, such as Ni(0)). For the isocyanides, an additional parameter is the MC=N–C angle, which deviates from 180° in highly electron-rich systems. Other ligands have weak π-backbonding abilities, which creates a labilization effect of CO, which is described by the cis effect. Metal–alkene and metal–alkyne complexes As in metal–carbonyls, electrons are partially transferred from a d-orbital of the metal to antibonding molecular orbitals of the alkenes and alkynes. This electron transfer strengthens the metal–ligand bond and weakens the C–C bonds within the ligand. In the case of metal-alkenes and alkynes, the strengthening of the M–C2R4 and M–C2R2 bond is reflected in bending of the C–C–R angles which assume greater sp3 and sp2 character, respectively. Thus strong π backbonding causes a metal-alkene complex to assume the character of a metallacyclopropane. Alkenes and alkynes with electronegative substituents exhibit greater π backbonding. Some strong π backbonding ligands are tetrafluoroethylene, tetracyanoethylene, and hexafluoro-2-butyne. Metal-phosphine complexes Phosphines accept electron density from metal p or d orbitals into combinations of P–C σ* antibonding orbitals that have π symmetry. 
When phosphines bond to electron-rich metal atoms, backbonding would be expected to lengthen P–C bonds as P–C σ* orbitals become populated by electrons. The expected lengthening of the P–C distance is often hidden by an opposing effect: as the phosphorus lone pair is donated to the metal, P(lone pair)–R(bonding pair) repulsions decrease, which acts to shorten the P–C bond. The two effects have been deconvoluted by comparing the structures of pairs of metal-phosphine complexes that differ only by one electron. Oxidation of R3P–M complexes results in longer M–P bonds and shorter P–C bonds, consistent with π-backbonding. In early work, phosphine ligands were thought to utilize 3d orbitals to form M–P pi-bonding, but it is now accepted that d-orbitals on phosphorus are not involved in bonding as they are too high in energy. IUPAC definition of Back Donation The full IUPAC definition of back donation is as follows: A description of the bonding of π-conjugated ligands to a transition metal which involves a synergic process with donation of electrons from the filled π-orbital or lone electron pair orbital of the ligand into an empty orbital of the metal (donor–acceptor bond), together with release (back donation) of electrons from an nd orbital of the metal (which is of π-symmetry with respect to the metal–ligand axis) into the empty π*-antibonding orbital of the ligand. See also Bridging carbonyl Dewar–Chatt–Duncanson model 18-electron rule Ligand field theory Pi-donor ligands References Chemical bonding Coordination chemistry Organometallic chemistry
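As an illustration of the IR-spectroscopy reasoning above: in the simplest picture the C–O stretch can be treated as an isolated diatomic harmonic oscillator, so a quoted wavenumber translates into a force constant via k = μ(2πcν̃)². The Python sketch below uses that deliberately simplified model (it ignores coupling to the metal and between carbonyls); the wavenumbers are the ones quoted above for free CO, Ni(CO)4, and [Fe(CO)4]2−:

```python
import math

def co_force_constant(wavenumber_cm):
    """Harmonic-oscillator force constant (N/m) for a C-O stretch
    observed at the given wavenumber (cm^-1): k = mu * (2*pi*c*nu)^2."""
    c = 2.99792458e10                                 # speed of light, cm/s
    amu = 1.66053907e-27                              # kg per atomic mass unit
    mu = (12.000 * 15.995) / (12.000 + 15.995) * amu  # C-O reduced mass
    omega = 2 * math.pi * c * wavenumber_cm           # angular frequency, s^-1
    return mu * omega ** 2

print(round(co_force_constant(2143)))  # free CO       ~1855 N/m
print(round(co_force_constant(2060)))  # Ni(CO)4       ~1714 N/m
print(round(co_force_constant(1790)))  # [Fe(CO)4]2-   ~1294 N/m
```

Even in this crude model the trend is clear: the more π backbonding populates the CO π* orbitals, the softer the C–O bond becomes.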
Pi backbonding
[ "Physics", "Chemistry", "Materials_science" ]
1,275
[ "Coordination chemistry", "Condensed matter physics", "nan", "Chemical bonding", "Organometallic chemistry" ]
1,241,750
https://en.wikipedia.org/wiki/Actinism
Actinism is the property of solar radiation that leads to the production of photochemical and photobiological effects. Actinism is derived from the Ancient Greek ἀκτίς, ἀκτῖνος ("ray, beam"). The word actinism is found, for example, in the terminology of imaging technology (esp. photography), medicine (concerning sunburn), and chemistry (concerning containers that protect from photo-degradation), and the concept of actinism is applied, for example, in chemical photography and X-ray imaging. Actinic chemicals include silver salts used in photography and other light-sensitive chemicals. In chemistry In chemical terms, actinism is the property of radiation that lets it be absorbed by a molecule and cause a photochemical reaction as a result. Albert Einstein was the first to correctly theorize that each photon would be able to cause only one molecular reaction. This distinction separates photochemical reactions from exothermic reduction reactions triggered by radiation. For general purposes, photochemistry is the commonly used vernacular rather than actinic or actino-chemistry, which are more commonly used for photography or imaging. In medicine In medicine, actinic effects are generally described in terms of the dermis or outer layers of the body, such as the eyes (see: actinic conjunctivitis) and upper tissues that the sun would normally affect, rather than deeper tissues that higher-energy, shorter-wavelength radiation such as X-rays and gamma rays might affect. Actinic is also used to describe medical conditions that are triggered by exposure to light, especially UV light (see actinic keratosis). The term actinic rays is used to refer to this phenomenon. In biology In biology, actinic light denotes light from solar or other sources that can cause photochemical reactions such as photosynthesis in a species. In photography Actinic light was first commonly used in early photography to distinguish light that would expose the monochrome films from light that would not. A non-actinic safe-light (e.g., red or amber) could be used in a darkroom without risk of exposing (fogging) light-sensitive films, plates or papers. Early "non colour-sensitive" (NCS) films, plates and papers were only sensitive to the high-energy end of the visible spectrum from green to UV (shorter-wavelength light). This would render a print of the red areas as a very dark tone because the red light was not actinic. Typically, light from xenon flash lamps is highly actinic, as is daylight, as both contain significant green-to-UV light. In the first half of the 20th century, developments in film technology produced films sensitive to red and yellow light, known as orthochromatic and panchromatic, and extended that through to near-infrared light. These gave a truer reproduction of human perception of lightness across the color spectrum. In photography, therefore, actinic light must now be referenced to the photographic material in question. In manufacturing Actinic inspection of masks in computer chip manufacture refers to inspecting the mask with the same wavelength of light that the lithography system will use. In aquaculture Actinic lights are also common in the reef aquarium industry. They are used to promote coral and invertebrate growth. They are also used to accentuate the fluorescence of fluorescent fish. Actinic lighting is also used to limit algae growth in the aquarium.
Since algae (like many other plants) flourish in shallower warm water but cannot effectively photosynthesize blue and violet light, actinic light minimizes their photosynthetic benefit. Actinic lighting is also a great alternative to black lights, as it provides a "night environment" for the fish while still allowing enough light for coral and other marine life to grow. Aesthetically, actinic lights make fluorescent coral "pop" to the eye, and in some cases they also promote the growth of deeper-water coral adapted to photosynthesis in regions of the ocean dominated by blue light. In artificial lighting "Actinic" lights are a high-color-temperature blue light. They are also used in electric fly killers to attract flies. The center wavelength for most actinic light products is 420 nanometers, with longer wavelengths regarded as "royal blue" (450 nm), sky blue (470 nm) and cyan (490 nm), and shorter wavelengths regarded as "violet" (400 nm) and blacklight (365 nm). Actinic light centered at 420 nm may appear to the naked eye as a color between deep blue and violet. See also Spectral sensitivity is commonly used to describe the actinic responsivity of photographic materials. Ionizing radiation References Electromagnetic radiation Physical chemistry Radiation Science of photography Lighting
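The photon-energy arithmetic behind these wavelength labels is simple, E = hc/λ; a short Python sketch (CODATA constants; the wavelengths are the nominal values quoted above):

```python
# Photon energy at representative "actinic" wavelengths, E = h*c/lambda.
H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

for name, lam_nm in [("blacklight", 365), ("violet", 400), ("actinic blue", 420)]:
    energy_ev = H * C / (lam_nm * 1e-9) / EV
    print(f"{name} ({lam_nm} nm): {energy_ev:.2f} eV")
# Shorter wavelengths carry more energy per photon, which is why the
# blue/UV end of the spectrum dominates photochemical (actinic) effects.
```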
Actinism
[ "Physics", "Chemistry" ]
1,012
[ "Transport phenomena", "Physical phenomena", "Applied and interdisciplinary physics", "Electromagnetic radiation", "Waves", "Radiation", "nan", "Physical chemistry" ]
1,242,892
https://en.wikipedia.org/wiki/Hodge%20index%20theorem
In mathematics, the Hodge index theorem for an algebraic surface V determines the signature of the intersection pairing on the algebraic curves C on V. It says, roughly speaking, that the space spanned by such curves (up to linear equivalence) has a one-dimensional subspace on which it is positive definite (not uniquely determined), and decomposes as a direct sum of some such one-dimensional subspace, and a complementary subspace on which it is negative definite. In a more formal statement, specify that V is a non-singular projective surface, and let H be the divisor class on V of a hyperplane section of V in a given projective embedding. Then the self-intersection $H \cdot H = d$, where d is the degree of V (in that embedding). Let D be the vector space of rational divisor classes on V, up to algebraic equivalence. The dimension of D is finite and is usually denoted by ρ(V). The Hodge index theorem says that the subspace spanned by H in D has a complementary subspace on which the intersection pairing is negative definite. Therefore, the signature (often also called index) is (1, ρ(V) − 1). The abelian group of divisor classes up to algebraic equivalence is now called the Néron-Severi group; it is known to be a finitely-generated abelian group, and the result is about its tensor product with the rational number field. Therefore, ρ(V) is equally the rank of the Néron-Severi group (which can have a non-trivial torsion subgroup, on occasion). This result was proved in the 1930s by W. V. D. Hodge, for varieties over the complex numbers, after it had been a conjecture for some time of the Italian school of algebraic geometry (in particular, Francesco Severi, who in this case showed that ρ < ∞). Hodge's methods were the topological ones brought in by Lefschetz. The result holds over general (algebraically closed) fields. References: see Ch. V.1. Algebraic surfaces Geometry of divisors Intersection theory Theorems in algebraic geometry
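A standard concrete instance of the theorem, sketched in LaTeX:

```latex
% Worked example: V = \mathbb{P}^1 \times \mathbb{P}^1, with \rho(V) = 2.
% The two ruling classes A, B satisfy A^2 = B^2 = 0 and A \cdot B = 1,
% so on D the intersection pairing has Gram matrix
\[
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad \text{eigenvalues } +1 \text{ and } -1,
\]
% i.e. signature (1, \rho(V) - 1) = (1, 1), as the theorem predicts; the
% positive direction is spanned by the hyperplane class H = A + B of the
% Segre embedding, with H \cdot H = 2 = \deg V.
```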
Hodge index theorem
[ "Mathematics" ]
440
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
23,629,444
https://en.wikipedia.org/wiki/Mixed%20linear%20complementarity%20problem
In mathematical optimization theory, the mixed linear complementarity problem, often abbreviated as MLCP or LMCP, is a generalization of the linear complementarity problem to include free variables. References Complementarity problems Algorithms for complementarity problems and generalized equations An Algorithm for the Approximate and Fast Solution of Linear Complementarity Problems Linear algebra Mathematical optimization
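One common way of stating the problem (a sketch; notation and sign conventions vary across the literature) is the following:

```latex
% Sketch of a common MLCP statement: given M in R^{n x n}, q in R^n, and a
% partition of the indices {1, ..., n} into a free set F and a
% complementarity set C, find z such that
\[
w = Mz + q, \qquad w_F = 0 \;(z_F \text{ free}), \qquad
z_C \ge 0, \quad w_C \ge 0, \quad z_C^{\mathsf{T}} w_C = 0 .
\]
% Taking F = \emptyset recovers the ordinary linear complementarity
% problem; taking C = \emptyset reduces to a square linear system.
```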
Mixed linear complementarity problem
[ "Mathematics" ]
73
[ "Mathematical analysis", "Mathematical analysis stubs", "Linear algebra", "Algebra", "Mathematical optimization" ]
23,634,474
https://en.wikipedia.org/wiki/Normal%20crossing%20singularity
In algebraic geometry a normal crossing singularity is a singularity similar to a union of coordinate hyperplanes. The term can be confusing because normal crossing singularities are not usually normal schemes (in the sense of the local rings being integrally closed). Normal crossing divisors In algebraic geometry, normal crossing divisors are a class of divisors which generalize the smooth divisors. Intuitively they cross only in a transversal way. Let A be an algebraic variety, and $Z = \bigcup_i Z_i$ a reduced Cartier divisor, with $Z_i$ its irreducible components. Then Z is called a smooth normal crossing divisor if either (i) A is a curve, or (ii) all $Z_i$ are smooth, and for each component $Z_k$, the restriction $(Z - Z_k)|_{Z_k}$ is a smooth normal crossing divisor. Equivalently, one says that a reduced divisor has normal crossings if each point étale locally looks like the intersection of coordinate hyperplanes. Normal crossing singularity In algebraic geometry a normal crossings singularity is a point in an algebraic variety that is locally isomorphic to a normal crossings divisor. Simple normal crossing singularity In algebraic geometry a simple normal crossings singularity is a point in an algebraic variety, the latter having smooth irreducible components, that is locally isomorphic to a normal crossings divisor. Examples The normal crossing points in the algebraic variety called the Whitney umbrella are not simple normal crossings singularities. The origin in the algebraic variety defined by $xy = 0$ is a simple normal crossings singularity. The variety itself, seen as a subvariety of the two-dimensional affine plane, is an example of a normal crossings divisor. Any variety which is the union of smooth varieties which all have smooth intersections is a variety with normal crossing singularities. For example, let $f$ and $g$ be irreducible polynomials defining smooth hypersurfaces such that the ideal $(f, g)$ defines a smooth curve. Then the hypersurface defined by $f \cdot g = 0$ is a surface with normal crossing singularities. References Robert Lazarsfeld, Positivity in algebraic geometry, Springer-Verlag, Berlin, 1994. Algebraic geometry Geometry of divisors
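A LaTeX sketch of the local models behind these examples (standard facts, recorded here for illustration):

```latex
% Local model: a simple normal crossing divisor looks, etale locally, like
%   x_1 x_2 \cdots x_k = 0   in affine n-space (k coordinate hyperplanes).
% Whitney umbrella: the irreducible surface
\[
W = \{\, x^2 = y^2 z \,\} \subset \mathbb{A}^3 .
\]
% Along the z-axis with z > 0 it has two smooth sheets, x = \pm y \sqrt{z},
% crossing transversally (normal crossing points), but W is irreducible and
% its single component is not smooth, so these points are not *simple*
% normal crossings, matching the first example above.
```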
Normal crossing singularity
[ "Mathematics" ]
418
[ "Fields of abstract algebra", "Algebraic geometry" ]
2,577,767
https://en.wikipedia.org/wiki/B-staging
B-staging is a process that utilizes heat or UV light to remove the majority of solvent from an adhesive, thereby allowing a construction to be “staged”. In between adhesive application, assembly and curing, the product can be held for a period of time, without sacrificing performance. Attempts to use traditional epoxies in IC packaging often created expensive production bottlenecks, because, as soon as the epoxy adhesive was applied, the components had to be assembled and cured immediately. B-staging eliminates these bottlenecks by allowing the IC manufacturing to proceed efficiently, with each step performed on larger batches of product. B stage laminates are also used in the electronic circuit board industry, where the laminates are reinforced with woven glass fibers called prepregs. This allows manufacturers to have clean and accurate setup for multilayer pressing of cores and prepregs for production of PCBs, without the need to hassle with liquid uncured epoxies. Semiconductor device fabrication
B-staging
[ "Materials_science" ]
206
[ "Semiconductor device fabrication", "Microtechnology" ]
2,578,582
https://en.wikipedia.org/wiki/Conserved%20sequence
In evolutionary biology, conserved sequences are identical or similar sequences in nucleic acids (DNA and RNA) or proteins across species (orthologous sequences), or within a genome (paralogous sequences), or between donor and receptor taxa (xenologous sequences). Conservation indicates that a sequence has been maintained by natural selection. A highly conserved sequence is one that has remained relatively unchanged far back up the phylogenetic tree, and hence far back in geological time. Examples of highly conserved sequences include the RNA components of ribosomes present in all domains of life, the homeobox sequences widespread amongst eukaryotes, and the tmRNA in bacteria. The study of sequence conservation overlaps with the fields of genomics, proteomics, evolutionary biology, phylogenetics, bioinformatics and mathematics. History The discovery of the role of DNA in heredity, and observations by Frederick Sanger of variation between animal insulins in 1949, prompted early molecular biologists to study taxonomy from a molecular perspective. Studies in the 1960s used DNA hybridization and protein cross-reactivity techniques to measure similarity between known orthologous proteins, such as hemoglobin and cytochrome c. In 1965, Émile Zuckerkandl and Linus Pauling introduced the concept of the molecular clock, proposing that steady rates of amino acid replacement could be used to estimate the time since two organisms diverged. While initial phylogenies closely matched the fossil record, observations that some genes appeared to evolve at different rates led to the development of theories of molecular evolution. Margaret Dayhoff's 1966 comparison of ferredoxin sequences showed that natural selection would act to conserve and optimise protein sequences essential to life. Mechanisms Over many generations, nucleic acid sequences in the genome of an evolutionary lineage can gradually change over time due to random mutations and deletions. Sequences may also recombine or be deleted due to chromosomal rearrangements. Conserved sequences are sequences which persist in the genome despite such forces, and have slower rates of mutation than the background mutation rate. Conservation can occur in coding and non-coding nucleic acid sequences. Highly conserved DNA sequences are thought to have functional value, although the role for many highly conserved non-coding DNA sequences is poorly understood. The extent to which a sequence is conserved can be affected by varying selection pressures, its robustness to mutation, population size and genetic drift. Many functional sequences are also modular, containing regions which may be subject to independent selection pressures, such as protein domains. Coding sequence In coding sequences, the nucleic acid and amino acid sequence may be conserved to different extents, as the degeneracy of the genetic code means that synonymous mutations in a coding sequence do not affect the amino acid sequence of its protein product. Amino acid sequences can be conserved to maintain the structure or function of a protein or domain. Conserved proteins undergo fewer amino acid replacements, or are more likely to substitute amino acids with similar biochemical properties. Within a sequence, amino acids that are important for folding, structural stability, or that form a binding site may be more highly conserved. The nucleic acid sequence of a protein coding gene may also be conserved by other selective pressures. 
The codon usage bias in some organisms may restrict the types of synonymous mutations in a sequence. Nucleic acid sequences that cause secondary structure in the mRNA of a coding gene may be selected against, as some structures may negatively affect translation, or conserved where the mRNA also acts as a functional non-coding RNA. Non-coding Non-coding sequences important for gene regulation, such as the binding or recognition sites of ribosomes and transcription factors, may be conserved within a genome. For example, the promoter of a conserved gene or operon may also be conserved. As with proteins, nucleic acids that are important for the structure and function of non-coding RNA (ncRNA) can also be conserved. However, sequence conservation in ncRNAs is generally poor compared to protein-coding sequences, and base pairs that contribute to structure or function are often conserved instead. Identification Conserved sequences are typically identified by bioinformatics approaches based on sequence alignment. Advances in high-throughput DNA sequencing and protein mass spectrometry have substantially increased the availability of protein sequences and whole genomes for comparison since the early 2000s. Homology search Conserved sequences may be identified by homology search, using tools such as BLAST, HMMER, OrthologR, and Infernal. Homology search tools may take an individual nucleic acid or protein sequence as input, or use statistical models generated from multiple sequence alignments of known related sequences. Statistical models such as profile-HMMs, and RNA covariance models which also incorporate structural information, can be helpful when searching for more distantly related sequences. Input sequences are then aligned against a database of sequences from related individuals or other species. The resulting alignments are then scored based on the number of matching amino acids or bases, and the number of gaps or deletions generated by the alignment. Acceptable conservative substitutions may be identified using substitution matrices such as PAM and BLOSUM. Highly scoring alignments are assumed to be from homologous sequences. The conservation of a sequence may then be inferred by detection of highly similar homologs over a broad phylogenetic range. Multiple sequence alignment Multiple sequence alignments can be used to visualise conserved sequences. The CLUSTAL format includes a plain-text key to annotate conserved columns of the alignment, denoting conserved sequence (*), conservative mutations (:), semi-conservative mutations (.), and non-conservative mutations (a space). Sequence logos can also show conserved sequence by representing the proportions of characters at each point in the alignment by height. Genome alignment Whole genome alignments (WGAs) may also be used to identify highly conserved regions across species. Currently, the accuracy and scalability of WGA tools remains limited due to the computational complexity of dealing with rearrangements, repeat regions and the large size of many eukaryotic genomes. However, WGAs of 30 or more closely related bacteria (prokaryotes) are now increasingly feasible. Scoring systems Other approaches use measurements of conservation based on statistical tests that attempt to identify sequences which mutate differently to an expected background (neutral) mutation rate. The GERP (Genomic Evolutionary Rate Profiling) framework scores conservation of genetic sequences across species.
This approach estimates the rate of neutral mutation in a set of species from a multiple sequence alignment, and then identifies regions of the sequence that exhibit fewer mutations than expected. These regions are then assigned scores based on the difference between the observed mutation rate and expected background mutation rate. A high GERP score then indicates a highly conserved sequence. LIST (Local Identity and Shared Taxa) is based on the assumption that variations observed in species closely related to humans are more significant when assessing conservation compared to those in distantly related species. Thus, LIST utilizes the local alignment identity around each position to identify relevant sequences in the multiple sequence alignment (MSA) and then it estimates conservation based on the taxonomic distances of these sequences to humans. Unlike other tools, LIST ignores the count/frequency of variations in the MSA. Aminode combines multiple alignments with phylogenetic analysis to analyze changes in homologous proteins and produce a plot that indicates the local rates of evolutionary changes. This approach identifies the Evolutionarily Constrained Regions in a protein, which are segments that are subject to purifying selection and are typically critical for normal protein function. Other approaches such as PhyloP and PhyloHMM incorporate statistical phylogenetics methods to compare probability distributions of substitution rates, which allows the detection of both conservation and accelerated mutation. First, a background probability distribution is generated of the number of substitutions expected to occur for a column in a multiple sequence alignment, based on a phylogenetic tree. The estimated evolutionary relationships between the species of interest are used to calculate the significance of any substitutions (i.e. a substitution between two closely related species may be less likely to occur than one between distantly related species, and therefore more significant). To detect conservation, a probability distribution is calculated for a subset of the multiple sequence alignment, and compared to the background distribution using a statistical test such as a likelihood-ratio test or score test. P-values generated from comparing the two distributions are then used to identify conserved regions. PhyloHMM uses hidden Markov models to generate probability distributions. The PhyloP software package compares probability distributions using a likelihood-ratio test or score test, as well as using a GERP-like scoring system. Extreme conservation Ultra-conserved elements Ultra-conserved elements or UCEs are sequences that are highly similar or identical across multiple taxonomic groupings. These were first discovered in vertebrates, and have subsequently been identified within widely differing taxa. While the origin and function of UCEs are poorly understood, they have been used to investigate deep-time divergences in amniotes, insects, and between animals and plants. Universally conserved genes The most highly conserved genes are those that can be found in all organisms. These consist mainly of the ncRNAs and proteins required for transcription and translation, which are assumed to have been conserved from the last universal common ancestor of all life. Genes or gene families that have been found to be universally conserved include GTP-binding elongation factors, methionine aminopeptidase 2, serine hydroxymethyltransferase, and ATP transporters. 
Components of the transcription machinery, such as RNA polymerase and helicases, and of the translation machinery, such as ribosomal RNAs, tRNAs and ribosomal proteins, are also universally conserved. Applications Phylogenetics and taxonomy Sets of conserved sequences are often used for generating phylogenetic trees, as it can be assumed that organisms with similar sequences are closely related. The choice of sequences may vary depending on the taxonomic scope of the study. For example, the most highly conserved genes such as 16S rRNA and other ribosomal sequences are useful for reconstructing deep phylogenetic relationships and identifying bacterial phyla in metagenomics studies. Sequences that are conserved within a clade but undergo some mutations, such as housekeeping genes, can be used to study species relationships. The internal transcribed spacer (ITS) region, which is required for spacing conserved rRNA genes but undergoes rapid evolution, is commonly used to classify fungi and strains of rapidly evolving bacteria. Medical research As highly conserved sequences often have important biological functions, they can be a useful starting point for identifying the cause of genetic diseases. Many congenital metabolic disorders and lysosomal storage diseases are the result of changes to individual conserved genes, resulting in missing or faulty enzymes that are the underlying cause of the symptoms of the disease. Genetic diseases may be predicted by identifying sequences that are conserved between humans and lab organisms such as mice or fruit flies, and studying the effects of knock-outs of these genes. Genome-wide association studies can also be used to identify variation in conserved sequences associated with disease or health outcomes. More than two dozen novel potential susceptibility loci have been discovered for Alzheimer's disease. Functional annotation Identifying conserved sequences can be used to discover and predict functional sequences such as genes. Conserved sequences with a known function, such as protein domains, can also be used to predict the function of a sequence. Databases of conserved protein domains such as Pfam and the Conserved Domain Database can be used to annotate functional domains in predicted protein coding genes. See also Evolutionary developmental biology NAPP (database) Segregating site Sequence alignment Sequence alignment software UCbase Ultra-conserved element References Computational phylogenetics Nucleic acids Protein structure Population genetics Molecular genetics Evolutionary developmental biology
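The column-wise scoring idea behind sequence logos and conservation tracks can be illustrated with a short sketch. The C++ fragment below is illustrative only: the toy alignment is a made-up example, and the entropy-based score is a generic measure of column variability, not the algorithm of GERP, LIST, or any other tool named above.

// Illustrative sketch: score each column of a toy multiple sequence alignment
// as 1 - (Shannon entropy / maximum entropy). Real conservation scorers such
// as GERP or PhyloP also weight substitutions by a phylogenetic tree.
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical pre-computed, gap-free alignment (assumed example data).
    std::vector<std::string> msa = {"MKTAYIA", "MKTAHIA", "MKSAYLA", "MKTAYIA"};
    for (std::size_t col = 0; col < msa[0].size(); ++col) {
        std::map<char, int> counts;
        for (const std::string& seq : msa) ++counts[seq[col]];
        double entropy = 0.0;
        for (const auto& kv : counts) {
            double p = static_cast<double>(kv.second) / msa.size();
            entropy -= p * std::log2(p);
        }
        // Normalize by the 20-letter amino acid alphabet, log2(20).
        double conservation = 1.0 - entropy / std::log2(20.0);
        std::cout << "column " << col << ": " << conservation << '\n';
    }
}

A score of 1 marks an invariant column, while lower scores mark variable columns; phylogenetically aware methods additionally weight each observed substitution by the evolutionary distance between the sequences involved.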
Conserved sequence
[ "Chemistry", "Biology" ]
2,383
[ "Genetics techniques", "Biomolecules by chemical classification", "Computational phylogenetics", "Bioinformatics", "Molecular genetics", "Structural biology", "Molecular biology", "Phylogenetics", "Protein structure", "Nucleic acids" ]
2,578,746
https://en.wikipedia.org/wiki/Homogeneity%20%28physics%29
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components is scaled to different values, for example, by multiplication or addition. A cumulative distribution fits this description: "the state of having identical cumulative distribution function or values". Context The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate that is laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of the same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies conservation of energy. Homogeneous alloy An example in the context of composite metals is an alloy. A blend of a metal with one or more metallic or nonmetallic materials is an alloy. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rust proof (stainless steel). Homogeneous cosmology Homogeneity, in another context, plays a role in cosmology. From the perspective of 19th-century cosmology (and before), the universe was infinite, unchanging, homogeneous, and therefore filled with stars. However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and as bright as day; this is known as Olbers' paradox. Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The faulty premise, unknown in Olbers' time, was that the universe is infinite, static, and homogeneous. The Big Bang cosmology replaced this model (expanding, finite, and inhomogeneous universe). 
However, modern astronomers supply reasonable explanations to answer this question. One of at least several explanations is that distant stars and galaxies are redshifted, which weakens their apparent light and makes the night sky dark. However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is, that the Universe has not existed forever, is the solution to the paradox. The fact that the night sky is dark is thus an indication for the Big Bang. Translation invariance By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system. Fundamental laws of physics should not (explicitly) depend on position in space. That would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible. This principle is true for all laws of mechanics (Newton's laws, etc.), electrodynamics, quantum mechanics, etc. In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend upon its position (potential wells, etc.). This only stems from the fact that the objects creating these external fields are not considered as a "dynamical" part of the system. Translational invariance as described above is equivalent to shift invariance in system analysis, although here it is most commonly used in linear systems, whereas in physics the distinction is not usually made. The notion of isotropy, for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy, since the field singles out one "preferred" direction. Consequences In the Lagrangian formalism, homogeneity in space implies conservation of momentum, and homogeneity in time implies conservation of energy. This is shown, using variational calculus, in standard textbooks like the classical reference text of Landau & Lifshitz. This is a particular application of Noether's theorem. Dimensional homogeneity As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of the same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formulae or calculations. For example, if one is calculating a speed, units must always combine to [length]/[time]; if one is calculating an energy, units must always combine to [mass][length]²/[time]², etc. For example, the following formulae could be valid expressions for some energy: mc², ½mv², pv, pc, or hc/λ, if m is a mass, v and c are velocities, p is a momentum, h is the Planck constant, and λ a length. On the other hand, if the units of the right hand side do not combine to [mass][length]²/[time]², it cannot be a valid expression for some energy. Being homogeneous does not necessarily mean the equation will be true, since it does not take into account numerical factors. 
For example, ½mv² could be or could not be the correct formula for the energy of a particle of mass m traveling at speed v, and one cannot know if hc/λ should be divided or multiplied by 2π. Nevertheless, this is a very powerful tool in finding characteristic units of a given problem; see dimensional analysis. See also Translational invariance Miscibility Phase (matter) References Dimensional analysis Concepts in physics
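Dimensional homogeneity can also be checked mechanically. The sketch below is a minimal illustration, with the Quantity template and unit aliases being assumed names rather than a standard library facility: it encodes the exponents of [mass], [length], and [time] as C++ template parameters so that a dimensionally inhomogeneous sum fails to compile.

// Illustrative sketch: dimensions [mass]^M [length]^L [time]^T carried in the
// type system, so inhomogeneous expressions are rejected by the compiler.
#include <iostream>

template <int M, int L, int T>
struct Quantity {
    double value;
};

// Multiplying quantities adds dimension exponents.
template <int M1, int L1, int T1, int M2, int L2, int T2>
Quantity<M1 + M2, L1 + L2, T1 + T2>
operator*(Quantity<M1, L1, T1> a, Quantity<M2, L2, T2> b) {
    return {a.value * b.value};
}

// Addition is only defined for identical dimensions (homogeneity).
template <int M, int L, int T>
Quantity<M, L, T> operator+(Quantity<M, L, T> a, Quantity<M, L, T> b) {
    return {a.value + b.value};
}

using Mass     = Quantity<1, 0, 0>;   // kg
using Velocity = Quantity<0, 1, -1>;  // m/s
using Energy   = Quantity<1, 2, -2>;  // kg m^2 / s^2

int main() {
    Mass m{2.0};
    Velocity v{3.0};
    Energy e = m * v * v;       // [mass][length]^2/[time]^2: compiles
    // Energy bad = m + v;      // inhomogeneous sum: does not compile
    std::cout << e.value << " J\n";
}

As the text notes, such a check catches inhomogeneous expressions but not wrong numerical factors: mv² and ½mv² carry the same dimensions and hence the same type.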
Homogeneity (physics)
[ "Physics", "Engineering" ]
1,531
[ "Dimensional analysis", "Mechanical engineering", "nan" ]
2,580,396
https://en.wikipedia.org/wiki/Deaerator
A deaerator is a device that is used for the removal of dissolved gases like oxygen from a liquid. Thermal deaerators are commonly used to remove dissolved gases in feedwater for steam-generating boilers. The deaerator is part of the feedwater heating system. Dissolved oxygen in feedwater will cause serious corrosion damage in a boiler by attaching to the walls of metal piping and other equipment forming oxides (like rust). Dissolved carbon dioxide combines with water to form carbonic acid that may cause further corrosion. Most deaerators are designed to remove oxygen down to levels of 7 parts per billion by weight or less, as well as essentially eliminating carbon dioxide. Vacuum deaerators are used to remove dissolved gases from products such as food, personal care products, cosmetic products, chemicals, and pharmaceuticals to increase the dosing accuracy in the filling process, to increase product shelf stability, to prevent oxidative effects (e.g. discolouration, changes of smell or taste, rancidity), to alter pH, and to reduce packaging volume. Manufacturing of deaerators started in the 1800s and continues to the present day. History Manufacturing of deaerators started in the 1800s. They were used to purify water used in the ice manufacturing process. Feed water heaters were used for marine applications. In 1899, George M Kleucker received a patent for an improved method of de-aerating water. Two sister ships, Olympic and Titanic (1912), had contact feed heaters on board. In 1934 the US Navy purchased an atomizing deaerator. During the 1920s, feedwater heater and deaerator designs improved. Between 1921 and 1933, George Gibson, Percy Lyon, and Victor Rohlin of Cochrane received deaerator / degasification patents for bubbling steam through liquid. In 1926, Brown Stanley received a patent for reducing oxygen and nitrogen gases (deaeration). In 1937 Samuel B Applebaum of Permutit received a water deaerator and purifier patent. Deaerators continue to be used today for many applications. Principles Oxygen and nitrogen are two non-condensable gases that are removed by deaeration. Henry's law describes the relationship between the concentration of a dissolved gas and its partial pressure. Thermal deaeration relies on the principle that the solubility of a gas in water decreases as the water temperature increases and approaches its boiling point. In the deaerator, water is heated up to close to its boiling point with a minimum pressure drop and minimum vent. Deaeration is done by spraying feedwater into a chamber to increase its surface area, and may involve flow over multiple layers of trays. Scrubbing (or stripping) steam is fed to the bottom of the deaeration section of the deaerator. When steam contacts the feedwater, it heats it up to its boiling point and dissolved gases are released from the feedwater and vented from the deaerator through the vent. The treated water falls into a storage tank below the deaerator. Oxygen scavenging chemicals are very often added to the deaerated boiler feedwater to remove any last traces of oxygen that were not removed by the deaerator. The type of chemical added depends on whether the location uses a volatile or non-volatile water treatment program. Most lower pressure systems use non-volatile treatment programs. The most commonly used oxygen scavenger for lower pressure systems is sodium sulfite (Na2SO3). It is very effective and rapidly reacts with traces of oxygen to form sodium sulfate (Na2SO4) which is non-scaling. 
Most higher pressure systems, and all systems where certain highly alloyed materials are present, are now using volatile programs, as many phosphate-based treatment programs are being phased out. Volatile programs are further broken down into oxidizing or reducing programs (AVT(O) or AVT(R)) depending on whether the environment requires an oxidizing or reducing environment to reduce the incidence of flow-accelerated corrosion. Flow-accelerated corrosion related failures have caused numerous accidents in which significant loss of property and life has occurred. Hydrazine (N2H4) is an oxygen scavenger commonly used in volatile treatment programs. Other scavengers include carbohydrazide, diethylhydroxylamine, nitrilotriacetic acid, ethylenediaminetetraacetic acid, and hydroquinone. Thermal deaerators Thermal deaerators are commonly used to remove dissolved gases in feedwater for steam-generating boilers. The deaerators in the steam generating systems of most thermal power plants use low pressure steam obtained from an extraction point in their steam turbine system. However, the steam generators in many large industrial facilities such as petroleum refineries may use whatever low-pressure steam is available. Tray-type The tray-type deaerator has a vertical domed deaeration section mounted above a horizontal boiler feedwater storage vessel. Boiler feedwater enters the vertical deaeration section through spray valves above the perforated trays and then flows downward through the perforations. Low-pressure deaeration steam enters below the perforated trays and flows upward through the perforations. The combined action of spray valves and trays provides very high performance because of the longer contact time between steam and water. Some designs use various types of packed beds, rather than perforated trays, to provide good contact and mixing between the steam and the boiler feed water. The steam strips the dissolved gas from the boiler feedwater and exits via the vent valve at the top of the domed section. If this vent valve has not been opened sufficiently, the deaerator will not work properly, resulting in feed water with a high oxygen content going to the boilers. Should the boiler not have an oxygen-content analyzer, a high chloride level in the boiler may indicate that the vent valve is not open far enough. Some designs may include a vent condenser to trap and recover any water entrained in the vented gas. The vent line usually includes a valve and just enough steam is allowed to escape with the vented gases to provide a small visible telltale plume of steam. The deaerated water flows down into the horizontal storage vessel from where it is pumped to the steam generating boiler system. Low-pressure heating steam, which enters the horizontal vessel through a sparge pipe in the bottom of the vessel, is provided to keep the stored boiler feedwater warm. Stainless steel is recommended for the sparge pipe. External insulation of the vessel is typically provided to minimize heat loss. 
Spray-type The typical spray-type deaerator is a horizontal vessel which has a preheating section and a deaeration section. The two sections are separated by a baffle. Low-pressure steam enters the vessel through a sparger in the bottom of the vessel. The boiler feedwater is sprayed into the preheating section, where it is preheated by the rising steam from the sparger. The purpose of the feedwater spray nozzle and the preheat section is to heat the boiler feedwater to its saturation temperature to facilitate stripping out the dissolved gases in the following deaeration section. The preheated feedwater then flows into the deaeration section, where it is deaerated by the steam rising from the sparger system. The gases stripped out of the water exit via the vent at the top of the vessel. Again, some designs may include a vent condenser to trap and recover any water entrained in the vented gas. Also again, the vent line usually includes a valve and just enough steam is allowed to escape with the vented gases to provide a small and visible telltale plume of steam. The deaerated boiler feedwater is pumped from the bottom of the vessel to the steam generating boiler system. Optional silencers can be used to reduce venting noise levels in deaerator equipment. Vacuum deaerators Vacuum deaerators are used to remove dissolved gases from products such as food, personal care products, cosmetic products, chemicals, and pharmaceuticals, for the purposes described above. They are also used in the petrochemical field. In 1921, a tank with a vacuum pump for removing gases was used in Pittsburgh. In 1934 and 1940, tanks with vacuum pumps for removing gases were used in Indiana. Vacuum deaerators can be rubber-lined on the inside to protect the steel heads and shell from corrosion. Rotating disc In a typical design, the product is distributed as a thin layer on a high-speed spinning disc via a special feed system. The centrifugal force slings it through a perforated screen onto the inner wall of the vessel, which is under vacuum. Air (gas) pockets are released in the process and are drawn off by the vacuum. A discharge pump carries the deaerated product to the next process in the production line. For highly viscous products, the rotating disc is replaced with a static one. Other types Sound waves from ultrasonic equipment can be used to assist in deaerating water. Production Welding of the steel pressure vessels during the manufacturing process sometimes requires post-weld heat treatment and X-ray, dye-penetrant, ultrasonic, and other types of non-destructive testing. ASME Boiler and Pressure Vessel Code, NACE International, and HEI (Heat Exchange Institute) have recommendations on the type of testing required. Older fabrication techniques also used cast iron for the shell and heads. Thermal insulation is sometimes required after fabrication or after installation at the project site. Insulation is used to reduce heat losses. Inspection and maintenance NACE International (now known as the Association for Materials Protection and Performance (AMPP)) and CIBO (Council of Industrial Boiler Owners) have several recommendations to increase the life of the deaerator unit. First, regularly inspect (and test) the pressure vessel for weld cracking, and repair any weld defects. 
Second, maintain proper water chemistry to reduce deaerator deterioration. Third, minimize temperature and pressure fluctuations. Fourth, inspect internals and accessories for proper operation. NACE created a Corrosion Task Group in 1984 that studied causes of corrosion and provided recommendations; NACE still provides recommendations to improve operation of the equipment. Manufacturers Stickle, Cochrane, and Permutit are three of the oldest deaerator manufacturers in the USA. In 1929, a court case between Elliott Company (no longer in business) and H.S.B.W. Cochrane Corporation allowed both businesses to continue manufacturing deaerators. In 1909, Weir was manufacturing contact feed heaters (for de-aerating) in Europe. By 1937 Permutit was manufacturing deaerators. In 1939, Cochrane, Darby, Elliott, Groeschel, Stearns-Rogers, Worthington, and others were competing against each other for business. In 1949 Chicago Heater was formed and became a leading deaerator manufacturer. In 1954, Allis-Chalmers, Chicago Heater, Cochrane, Elliott, Graver, Swartwout, Worthington, and others were in business. Applications Deaerators are used in many industries, such as co-generation plants, hospitals, larger laundry facilities, oil fields, oil refineries, off-shore platforms, paper mills, power plants, prisons, and steel mills. See also References Sources Further reading Betz Handbook of Industrial Water Conditioning, Chapter 9, boiler feedwater deaeration. 8th Edition, copyright 1980, LOC 79-56368. NEA (National Environmental Agency) Paper, "Energy Best Practice Guide for Oil Refining" External links Association of Water Technologies "Deaerator design Petrochemical and Chemical Plants", June 2021 US Dept of Energy, Deaerators in Industrial Steam Systems National Board "System Design, Specifications, Operation, and Inspection of Deaerators" April 1988 Power station technology Chemical equipment Nuclear power plant components Gas-liquid separation Industrial water treatment
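The Henry's law principle described under Principles above can be made concrete with a small numerical sketch. The constants below are rough assumed values for oxygen in water, chosen only for illustration; actual deaerator design relies on measured solubility data.

// Illustrative sketch of the deaeration principle: Henry's law, c = kH * p,
// with a temperature-dependent Henry constant. The parameters are rough
// assumed values for oxygen in water, for demonstration only.
#include <cmath>
#include <iostream>

// Approximate Henry solubility of O2 in water, mol/(L*atm), via a van 't
// Hoff-style correction around 298.15 K (assumed reference values).
double henry_o2(double temp_kelvin) {
    const double k_ref = 1.3e-3;  // mol/(L*atm) at 298.15 K
    const double c = 1700.0;      // K, temperature-dependence parameter
    return k_ref * std::exp(c * (1.0 / temp_kelvin - 1.0 / 298.15));
}

int main() {
    const double p_o2 = 0.21;  // atm, partial pressure of O2 in air
    const double temps_c[] = {25.0, 60.0, 95.0};
    for (double t_c : temps_c) {
        double c_mol_per_l = henry_o2(t_c + 273.15) * p_o2;
        // Convert to parts per billion by weight (32 g/mol O2, ~1 kg/L water).
        double ppb = c_mol_per_l * 32.0 * 1e6;
        std::cout << t_c << " C: ~" << ppb << " ppb dissolved O2\n";
    }
}

The estimated dissolved oxygen falls steeply with temperature, which is why heating feedwater to near its boiling point, while venting the released gases, strips dissolved oxygen so effectively.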
Deaerator
[ "Chemistry", "Engineering" ]
2,552
[ "Separation processes by phases", "Water treatment", "Chemical equipment", "Industrial water treatment", "nan", "Gas-liquid separation" ]
2,580,900
https://en.wikipedia.org/wiki/Liquefaction%20of%20gases
Liquefaction of gases is the physical conversion of a gas into a liquid state (condensation). The liquefaction of gases is a complicated process that uses various compressions and expansions to achieve high pressures and very low temperatures, using, for example, turboexpanders. Uses Liquefaction processes are used for scientific, industrial and commercial purposes. Many gases can be put into a liquid state at normal atmospheric pressure by simple cooling; a few, such as carbon dioxide, require pressurization as well. Liquefaction is used for analyzing the fundamental properties of gas molecules (intermolecular forces), or for the storage of gases, for example: LPG, and in refrigeration and air conditioning. There the gas is liquefied in the condenser, where the heat of vaporization is released, and evaporated in the evaporator, where the heat of vaporization is absorbed. Ammonia was the first such refrigerant, and is still in widespread use in industrial refrigeration, but it has largely been replaced by compounds derived from petroleum and halogens in residential and commercial applications. Liquid oxygen is provided to hospitals for conversion to gas for patients with breathing problems, and liquid nitrogen is used in the medical field for cryosurgery, by inseminators to freeze semen, and by field and lab scientists to preserve samples. Liquefied chlorine is transported for eventual solution in water, after which it is used for water purification, sanitation of industrial waste, sewage and swimming pools, bleaching of pulp and textiles, and the manufacture of carbon tetrachloride, glycol and numerous other organic compounds as well as phosgene gas. Liquefaction of helium (4He) with the precooled Hampson–Linde cycle led to a Nobel Prize for Heike Kamerlingh Onnes in 1913. At ambient pressure the boiling point of liquefied helium is about 4.2 K (−269 °C). Below 2.17 K liquid 4He becomes a superfluid (Nobel Prize 1978, Pyotr Kapitsa) and shows characteristic properties such as heat conduction through second sound, zero viscosity and the fountain effect, among others. The liquefaction of air is used to obtain nitrogen, oxygen, argon, and other atmospheric noble gases by separating the air components by fractional distillation in a cryogenic air separation unit. History Liquid air Linde's process Air is liquefied by the Linde process, in which air is alternately compressed, cooled, and expanded; each expansion results in a considerable reduction in temperature. With the lower temperature the molecules move more slowly and occupy less space, so the air changes phase to become liquid. Claude's process Air can also be liquefied by Claude's process, in which the gas is allowed to expand isentropically twice in two chambers. While expanding, the gas has to do work as it is led through an expansion turbine. The gas is kept from becoming liquid at this stage, since liquid would destroy the turbine. Commercial air liquefaction plants bypass this problem by expanding the air at supercritical pressures. Final liquefaction takes place by isenthalpic expansion in a thermal expansion valve. 
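The repeated compress-cool-expand idea can be given a rough numerical illustration. The sketch below is a toy model only: the constant Joule-Thomson coefficient, per-stage pressure drop, and starting temperature are assumed round numbers, and a real plant recycles the cold low-pressure gas through a counterflow heat exchanger rather than cooling in independent stages.

// Toy illustration of Linde-style cooling by repeated isenthalpic expansion.
// All numbers are assumed, round example values, not plant data.
#include <iostream>

int main() {
    double temperature = 290.0;          // K, after compression and pre-cooling
    const double mu_jt = 0.25;           // K/bar, assumed Joule-Thomson coefficient
    const double pressure_drop = 180.0;  // bar, assumed drop per throttling stage
    const double boiling_point = 77.0;   // K, nitrogen at atmospheric pressure

    int stage = 0;
    while (temperature > boiling_point) {
        temperature -= mu_jt * pressure_drop;  // isenthalpic expansion cools the gas
        std::cout << "after expansion " << ++stage << ": "
                  << temperature << " K\n";
    }
    std::cout << "gas reaches its boiling point and begins to liquefy\n";
}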
See also Air Liquide Air Products & Chemicals Air separation The BOC Group Chemical engineer Compressibility factor Fischer–Tropsch process Gas separation Gas to liquids Hampson–Linde cycle Industrial gases The Linde Group Liquefaction Liquefaction point Louis Paul Cailletet Messer Group Praxair Siemens cycle Turboexpander References External links Liquefaction of Gases History of Liquefying Hydrogen - NASA Phases of matter Industrial processes Gas technologies Industrial gases
Liquefaction of gases
[ "Physics", "Chemistry" ]
767
[ "Chemical process engineering", "Phases of matter", "Matter", "Industrial gases" ]
2,581,209
https://en.wikipedia.org/wiki/Sensory%20analysis
Sensory analysis (or sensory evaluation) is a scientific discipline that applies principles of experimental design and statistical analysis to the use of human senses (sight, smell, taste, touch and hearing) for the purposes of evaluating consumer products. This method of testing products is generally used during the marketing and advertising phase. The discipline requires panels of human assessors, on whom the products are tested and whose responses are recorded. By applying statistical techniques to the results it is possible to make inferences and gain insights about the products under test. Most large consumer goods companies have departments dedicated to sensory analysis. Sensory analysis can mainly be broken down into three sub-sections: Analytical testing (dealing with objective facts about products) Affective testing (dealing with subjective facts such as preferences) Perception (the biochemical and psychological aspects of sensation) Analytical testing This type of testing is concerned with obtaining objective facts about products. This could range from basic discrimination testing (e.g. Do two or more products differ from each other?) to descriptive analysis (e.g. What are the characteristics of two or more products?). The type of panel required for this type of testing would normally be a trained panel. There are several types of sensory tests. The most classic is the sensory profile. In this test, each taster describes each product by means of a questionnaire. The questionnaire includes a list of descriptors (e.g., bitterness, acidity, etc.). The taster rates each descriptor for each product depending on the intensity of the descriptor they perceive in the product (e.g., 0 = very weak to 10 = very strong). In the method of free choice profiling, each taster builds their own questionnaire. Another family of methods is known as holistic, as they focus on the product as a whole. This is the case for categorization and napping. Affective testing Also known as consumer testing, this type of testing concerns obtaining subjective data, or how well products are likely to be accepted. Usually, large (50 or more) panels of untrained personnel are recruited for this type of testing, although smaller focus groups can be utilized to gain insights into products. The range of testing can vary from simple comparative testing (e.g. Which do you prefer, A or B?) to structured questioning regarding the magnitude of acceptance of individual characteristics (e.g. Please rate the "fruity aroma": dislike|neither|like). Affective testing is generally used by larger companies distributing products on a larger scale, such as cereal brands, clothing brands, and accessories used in daily life. For example, a small company on the verge of a breakthrough for a specific medicine would not use wide-scale affective testing to see if the medicine would work. A company such as this would instead use a specific panel of judges who require the medicine to test whether or not it works. See also European Sensory Network Food Quality and Preference Journal of Sensory Studies Just-About-Right scale Pangborn Sensory Science Symposium Notes and references Bibliography ASTM MNL14 The Role of Sensory Analysis in Quality Control, 1992 ISO 16820 Sensory Analysis - Methodology - Sequential Analysis ISO 5495 Sensory Analysis - Methodology - Paired Comparisons ISO 13302 Sensory Analysis - Methods for assessing modifications to the flavour of foodstuffs due to packaging Sensory Evaluation Techniques- Morten C. 
Meilgaard, Gail Vance Civille, B. Thomas Carr - 4th edition, 2007 External links ISO 67.240 – Sensory analysis – A series of ISO standards Sensory evaluation practice; Herbert Stone, Joel L. Sidel Product testing Psychophysics
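The statistical side of discrimination testing can be illustrated with a short sketch. In a triangle test an assessor picks the odd sample out of three, so pure guessing succeeds one time in three; the C++ fragment below computes the one-sided binomial p-value for an observed number of correct picks. The panel size and result are hypothetical example numbers, and the calculation is a generic textbook approach rather than the procedure of any specific standard cited above.

// Illustrative sketch: significance of a triangle (odd-sample-out) test.
// Panel size and correct-response count below are made-up example values.
#include <cmath>
#include <iostream>

// log of the binomial coefficient, via lgamma for numerical stability
double log_choose(int n, int k) {
    return std::lgamma(n + 1.0) - std::lgamma(k + 1.0) - std::lgamma(n - k + 1.0);
}

// P(X >= observed) for X ~ Binomial(n, p)
double binomial_tail(int n, int observed, double p) {
    double tail = 0.0;
    for (int k = observed; k <= n; ++k) {
        tail += std::exp(log_choose(n, k) + k * std::log(p)
                         + (n - k) * std::log(1.0 - p));
    }
    return tail;
}

int main() {
    const int panelists = 36;  // hypothetical panel size
    const int correct = 19;    // hypothetical number of correct picks
    double p_value = binomial_tail(panelists, correct, 1.0 / 3.0);
    std::cout << "one-sided p-value: " << p_value << '\n';
    if (p_value < 0.05)
        std::cout << "panel distinguishes the products at the 5% level\n";
}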
Sensory analysis
[ "Physics" ]
745
[ "Psychophysics", "Applied and interdisciplinary physics" ]
2,581,605
https://en.wikipedia.org/wiki/Concurrent%20computing
Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts. This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete. Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare. Introduction The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing, although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel. For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant. Concurrent computations may be executed in parallel, for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network. The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2: T1 may be executed and finished before T2 or vice versa (serial and sequential) T1 and T2 may be executed alternately (serial and concurrent) T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent) The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs. A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control. 
Coordinating access to shared resources The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:

bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}

Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms (a lock-based sketch appears below). Advantages The advantages of concurrent computing include: Increased program throughput—parallel execution of a concurrent algorithm allows the number of tasks completed in a given time to increase proportionally to the number of processors according to Gustafson's law High responsiveness for input/output—input/output-intensive programs mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task. More appropriate program structure—some problems and problem domains are well-suited to representation as concurrent tasks or processes. For example MVCC. Models Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies. Input/output automata were introduced in 1987. Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems. Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses. Consistency models Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced. One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program". 
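Returning to the withdraw example above, the conventional lock-based form of concurrency control serializes the check-then-act sequence. The following minimal sketch, an illustration using C++11 threads rather than part of the original example, guards the shared balance with a mutex so that the harmful interleaving can no longer occur.

// Illustrative sketch: the withdraw example made thread-safe with a mutex.
#include <iostream>
#include <mutex>
#include <thread>

int balance = 500;
std::mutex balance_mutex;

bool withdraw(int withdrawal) {
    std::lock_guard<std::mutex> guard(balance_mutex);  // held until return
    if (balance >= withdrawal) {
        balance -= withdrawal;
        return true;
    }
    return false;
}

int main() {
    std::thread t1([] { withdraw(300); });
    std::thread t2([] { withdraw(350); });
    t1.join();
    t2.join();
    std::cout << "final balance: " << balance << '\n';  // never negative
}

Coarse-grained locking like this restores correctness at the cost of some concurrency; non-blocking alternatives typically rely on atomic compare-and-swap operations instead.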
Implementation A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process. Interaction and communication In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes: Shared memory communication Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually needs the use of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads. A program that properly implements any of these is said to be thread-safe. Message passing communication Concurrent components communicate by exchanging messages (exemplified by MPI, Go, Scala, Erlang and occam). The exchange of messages may be carried out asynchronously, or may use a synchronous "rendezvous" style in which the sender blocks until the message is received. Asynchronous message passing may be reliable or unreliable (sometimes referred to as "send and pray"). Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust form of concurrent programming. A wide variety of mathematical theories to understand and analyze message-passing systems are available, including the actor model, and various process calculi. Message passing can be efficiently implemented via symmetric multiprocessing, with or without shared memory cache coherence. Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors. History Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s). The academic study of concurrent algorithms started in the 1960s, with credited with being the first paper in this field, identifying and solving mutual exclusion. Prevalence Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow. At the programming language level: Channel Coroutine Futures and promises At the operating system level: Computer multitasking, including both cooperative multitasking and preemptive multitasking Time-sharing, which replaced sequential batch processing of jobs with concurrent use of a system Process Thread At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices. Languages supporting concurrent programming Concurrent programming languages are programming languages that use language constructs for concurrency. 
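To make the message-passing style concrete, a channel can itself be built on top of shared memory, in line with the remark above that message passing can be implemented with or without shared memory. The sketch below is illustrative: the Channel type is an assumed name rather than a standard C++ facility, and languages such as Go and Erlang provide equivalent primitives natively.

// Illustrative sketch: a minimal message-passing channel built from a queue,
// a mutex and a condition variable.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

template <typename T>
class Channel {
public:
    void send(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        ready_.notify_one();
    }
    T receive() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }
private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable ready_;
};

int main() {
    Channel<int> channel;
    std::thread producer([&] {
        for (int i = 1; i <= 3; ++i) channel.send(i);
    });
    for (int i = 0; i < 3; ++i)
        std::cout << "received " << channel.receive() << '\n';
    producer.join();
}

Here the producer and consumer threads share no data directly; all communication flows through send and receive, which is what makes this style easier to reason about.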
These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL). Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present. Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities: Ada—general purpose, with native support for message passing and monitor based concurrency Alef—concurrent, with threads and message passing, for system programming in early versions of Plan 9 from Bell Labs Alice—extension to Standard ML, adds support for concurrency via futures Ateji PX—extension to Java with parallel primitives inspired from π-calculus Axum—domain specific, concurrent, based on actor model and .NET Common Language Runtime using a C-like syntax BMDFM—Binary Modular DataFlow Machine C++—thread and coroutine support libraries Cω (C omega)—for research, extends C#, uses asynchronous communication C#—supports concurrent computing using lock and yield, and since version 5.0 the async and await keywords Clojure—modern, functional dialect of Lisp on the Java platform Concurrent Clean—functional programming, similar to Haskell Concurrent Collections (CnC)—Achieves implicit parallelism independent of memory model by explicitly defining flow of data and control Concurrent Haskell—lazy, pure functional language operating concurrent processes on shared memory Concurrent ML—concurrent extension of Standard ML Concurrent Pascal—by Per Brinch Hansen Curry D—multi-paradigm system programming language with explicit support for concurrent programming (actor model) E—uses promises to preclude deadlocks ECMAScript—uses promises for asynchronous operations Eiffel—through its SCOOP mechanism based on the concepts of Design by Contract Elixir—dynamic and functional meta-programming aware language running on the Erlang VM. Erlang—uses synchronous or asynchronous message passing with no shared memory FAUST—real-time functional, for signal processing, compiler provides automatic parallelization via OpenMP or a specific work-stealing scheduler Fortran—coarrays and do concurrent are part of Fortran 2008 standard Go—for system programming, with a concurrent programming model based on CSP Haskell—concurrent, and parallel functional programming language Hume—functional, concurrent, for bounded space and time environments where automata processes are described by synchronous channels patterns and message passing Io—actor-based concurrency Janus—features distinct askers and tellers to logical variables, bag channels; is purely declarative Java—thread class or Runnable interface Julia—"concurrent programming primitives: Tasks, async-wait, Channels." JavaScript—via web workers, in a browser environment, promises, and callbacks. 
JoCaml—concurrent and distributed channel based, extension of OCaml, implements the join-calculus of processes Join Java—concurrent, based on Java language Joule—dataflow-based, communicates by message passing Joyce—concurrent, teaching, built on Concurrent Pascal with features from CSP by Per Brinch Hansen LabVIEW—graphical, dataflow, functions are nodes in a graph, data is wires between the nodes; includes object-oriented language Limbo—relative of Alef, for system programming in Inferno (operating system) Locomotive BASIC—Amstrad variant of BASIC contains EVERY and AFTER commands for concurrent subroutines MultiLisp—Scheme variant extended to support parallelism Modula-2—for system programming, by N. Wirth as a successor to Pascal with native support for coroutines Modula-3—modern member of Algol family with extensive support for threads, mutexes, condition variables Newsqueak—for research, with channels as first-class values; predecessor of Alef occam—influenced heavily by communicating sequential processes (CSP) occam-π—a modern variant of occam, which incorporates ideas from Milner's π-calculus ooRexx—object-based, message exchange for communication and synchronization Orc—heavily concurrent, nondeterministic, based on Kleene algebra Oz-Mozart—multiparadigm, supports shared-state and message-passing concurrency, and futures ParaSail—object-oriented, parallel, free of pointers, race conditions PHP—multithreading support with parallel extension implementing message passing inspired from Go Pict—essentially an executable implementation of Milner's π-calculus Python — uses thread-based parallelism and process-based parallelism Raku includes classes for threads, promises and channels by default Reia—uses asynchronous message passing between shared-nothing objects Red/System—for system programming, based on Rebol Rust—for system programming, using message-passing with move semantics, shared immutable memory, and shared mutable memory. Scala—general purpose, designed to express common programming patterns in a concise, elegant, and type-safe way SequenceL—general purpose functional, main design objectives are ease of programming, code clarity-readability, and automatic parallelization for performance on multicore hardware, and provably free of race conditions SR—for research SuperPascal—concurrent, for teaching, built on Concurrent Pascal and Joyce by Per Brinch Hansen Swift—built-in support for writing asynchronous and parallel code in a structured way Unicon—for research TNSDL—for developing telecommunication exchanges, uses asynchronous message passing VHSIC Hardware Description Language (VHDL)—IEEE STD-1076 XC—concurrency-extended subset of C language developed by XMOS, based on communicating sequential processes, built-in constructs for programmable I/O Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list. See also Asynchronous I/O Chu space Flow-based programming Java ConcurrentMap Ptolemy Project Structured concurrency Transaction processing Notes References Sources Further reading External links Concurrent Systems Virtual Library Operating system technology
Concurrent computing
[ "Technology" ]
3,301
[ "Computing platforms", "Concurrent computing", "IT infrastructure" ]
2,582,609
https://en.wikipedia.org/wiki/Nikolay%20Bogolyubov
Nikolay Nikolayevich (Mykola Mykolayovych) Bogolyubov (21 August 1909 – 13 February 1992) was a Soviet, Ukrainian and Russian mathematician and theoretical physicist known for significant contributions to quantum field theory, classical and quantum statistical mechanics, and the theory of dynamical systems; he was the recipient of the 1992 Dirac Medal for his works and studies. Biography Early life in Ukraine (1909–1921) Nikolay Bogolyubov was born on 21 August 1909 in Nizhny Novgorod, Russian Empire to Russian Orthodox Church priest and seminary teacher of theology, psychology and philosophy Nikolay Mikhaylovich Bogolyubov, and Olga Nikolayevna Bogolyubova, a teacher of music. Six months after Mykola's birth, the family moved to Nizhyn, a city in Chernihiv Oblast, Ukraine, where his father taught until 1913. From 1913 to 1918, the family lived in Kyiv. Mykola received his initial education at home. His father taught him the basics of arithmetic, as well as German, French, and English. At the age of six, he attended the preparatory class of the Kyiv Gymnasium. However, he did not stay long in the gymnasium—during the years of the Ukrainian War of Independence from 1917 to 1921, the family moved to the village of Velyka Krucha (now in Poltava Oblast, Ukraine). From 1919 to 1921, he studied at the Velykokruchanska seven-year school – the only educational institution he graduated from. Kyiv period (1921–1940) The family soon moved to Kyiv in 1921, where they continued to live in poverty as the elder Nikolay Bogolyubov only found a position as a priest in 1923. After finishing the seven-year school, Bogolyubov independently studied physics and mathematics, and by the age of 14, he was already participating in the seminar of the Department of Mathematical Physics at Kyiv University under the supervision of Academician Dmitry Grave. In 1924, at the age of 15, Nikolay Bogolyubov wrote his first published scientific paper On the behavior of solutions of linear differential equations at infinity. In 1925 he entered the Ph.D. program at the Academy of Sciences of the Ukrainian SSR under the supervision of the well-known contemporary mathematician Nikolay Krylov and obtained the degree of Kandidat Nauk (Candidate of Sciences, equivalent to a Ph.D.) in 1928, at the age of 19, with the doctoral thesis titled On direct methods of variational calculus. In 1930, at the age of 21, he obtained the degree of Doktor nauk (Doctor of Sciences, equivalent to Habilitation), the highest degree in the Soviet Union, which requires the recipient to have made a significant independent contribution to his or her scientific field. This early period of Bogolyubov's work in science was concerned with such mathematical problems as direct methods of the calculus of variations, the theory of almost periodic functions, methods of approximate solution of differential equations, and dynamical systems. This earlier research had already earned him recognition. One of his essays was awarded the Bologna Academy of Sciences Prize in 1930, and the author was awarded the academic degree of doctor of mathematics. This was the period when the scientific career of the young Nikolay Bogolyubov began, later producing new scientific trends in modern mathematics, physics, and mechanics. From 1931, Krylov and Bogolyubov worked together on the problems of nonlinear mechanics and nonlinear oscillations. 
They were the key figures in the "Kyiv school of nonlinear oscillation research", where their cooperation resulted in the paper "On the quasiperiodic solutions of the equations of nonlinear mechanics" (1934) and the book Introduction to Nonlinear Mechanics (1937; translated to English in 1947), leading to the creation of the large field of non-linear mechanics. Distinctive features of the Kyiv School approach included an emphasis on the computation of solutions (not just a proof of their existence), approximations of periodic solutions, use of the invariant manifolds in the phase space, and applications of a single unified approach to many different problems. From a control engineering point of view, the key achievement of the Kyiv School was the development by Krylov and Bogolyubov of the describing function method for the analysis of nonlinear control problems. In 1936, M. M. Bogolyubov was awarded the title of professor, and from 1936 to 1940, he chaired the Department of Mathematical Physics at Kyiv University. In 1939, he was elected a corresponding member of the Academy of Sciences of the Ukrainian SSR (since 1994 – National Academy of Sciences of Ukraine). In 1940, after the reunification of Northern Bukovyna with Ukraine, Nikolay Bogolyubov was sent to Chernivtsi to organize mathematical departments at the Faculty of Physics and Mathematics of Chernivtsi State University. In evacuation (1941–1943) After the German attack against the Soviet Union on 22 June 1941 (beginning of the Eastern front of World War II), most institutes and universities from the western part of the country were evacuated into the eastern regions, far from the battle lines. Nikolay Bogolyubov moved to Ufa, where he became Head of the Departments of Mathematical Analysis at Ufa State Aviation Technical University and at Ufa Pedagogical Institute, remaining in these positions from July 1941 to August 1943. Moscow (1943–?) In autumn 1943, Bogolyubov came from evacuation to Moscow and on 1 November 1943 he accepted a position in the Department of Theoretical Physics at Moscow State University (MSU). At that time the Head of the Department was Anatoly Vlasov (for a short period in 1944 the Head of the Department was Vladimir Fock). Theoretical physicists working in the department in that period included Dmitri Ivanenko, Arseny Sokolov, and other physicists. In the period 1943–1946, Bogolyubov's research was essentially concerned with the theory of stochastic processes and asymptotic methods. In his work a simple example of an anharmonic oscillator driven by a superposition of incoherent sinusoidal oscillations with continuous spectrum was used to show that depending on a specific approximation time scale the evolution of the system can be either deterministic, or a stochastic process satisfying the Fokker–Planck equation, or even a process which is neither deterministic nor stochastic. In other words, he showed that depending on the choice of the time scale for the corresponding approximations the same stochastic process can be regarded as both dynamical and Markovian, and in the general case as a non-Markov process. This work was the first to introduce the notion of time hierarchy in non-equilibrium statistical physics, which then became the key concept in all further development of the statistical theory of irreversible processes. In 1945, Bogolyubov proved a fundamental theorem on the existence and basic properties of a one-parameter integral manifold for a system of non-linear differential equations. 
He investigated periodic and quasi-periodic solutions lying on a one-dimensional manifold, thus forming the foundation for a new method of non-linear mechanics, the method of integral manifolds. In 1946, he published in JETP two works on equilibrium and non-equilibrium statistical mechanics which became the essence of his fundamental monograph Problems of Dynamical Theory in Statistical Physics (Moscow, 1946). On 26 January 1953, Nikolay Bogolyubov became the Head of the Department of Theoretical Physics at MSU, after Anatoly Vlasov decided to leave the position on 2 January 1953. Steklov Institute (1947–?) In 1947, Nikolay Bogolyubov organized and became the Head of the Department of Theoretical Physics at the Steklov Institute of Mathematics. In 1969, the Department of Theoretical Physics was separated into the Departments of Mathematical Physics (Head: Vasily Vladimirov), of Statistical Mechanics, and of Quantum Field Theory (Head: Mikhail Polivanov). While working in the Steklov Institute, Nikolay Bogolyubov and his school contributed to science with many important works, including works on renormalization theory, the renormalization group, axiomatic S-matrix theory, and the theory of dispersion relations. In the late 1940s and 1950s, Bogolyubov worked on the theory of superfluidity and superconductivity, where he developed the BBGKY hierarchy method for the derivation of kinetic equations, formulated the microscopic theory of superfluidity, and made other essential contributions; the excitation spectrum he derived for a weakly interacting Bose gas is illustrated in the sketch below. Later he worked on quantum field theory, where he introduced the Bogoliubov transformation, formulated and proved Bogoliubov's edge-of-the-wedge theorem and the Bogoliubov–Parasyuk theorem (with Ostap Parasyuk), and obtained other significant results. In the 1960s his attention turned to the quark model of hadrons; in 1965 he was among the first scientists to study the new quantum number of color charge. In 1946, Nikolay Bogolyubov was elected a Corresponding Member of the Academy of Sciences of the Soviet Union. He was elected a full member (academician) of the Academy of Sciences of the Ukrainian SSR, and a full member of the Academy of Sciences of the USSR in 1953. Dubna (1956–1992) From 1956, he worked at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, where he was a founder (together with Dmitry Blokhintsev) and the first director of the Laboratory of Theoretical Physics. This laboratory, where Nikolay Bogolyubov worked for a long time, has traditionally been the home of prominent Russian schools in quantum field theory, theoretical nuclear physics, statistical physics, and nonlinear mechanics. Nikolay Bogolyubov was Director of the JINR in the period 1966–1988. Work in Ukraine after World War II In the post-war years, Bogolyubov worked as the dean of the Faculty of Mechanics and Mathematics at Kyiv University and headed the Department of Probability Theory at the Institute of Mathematics of the Academy of Sciences of the Ukrainian SSR (now the NASU Institute of Mathematics). His first students in nonlinear mechanics were Yurii Mitropolskyi and Yu. V. Blagoveshchensky, and in probability theory and mathematical statistics, I. I. Gikhman. In the first half of the 1960s, Bogolyubov worked on organizing the Institute for Theoretical Physics of the Academy of Sciences of the Ukrainian SSR (now the Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine), and from 1966 to 1973 he served as its director.
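As an illustration of the microscopic theory of superfluidity mentioned above, the sketch below evaluates the Bogoliubov excitation spectrum of a weakly interacting Bose gas, E(p) = sqrt(ε_p² + 2gnε_p) with ε_p = p²/2m, and shows the phonon-like linear behavior at small momentum that underlies the analogy with helium II. The parameter values m, g, n are arbitrary placeholders chosen for the demonstration (ħ = 1).

```python
import numpy as np

# Bogoliubov dispersion for a weakly interacting Bose gas (hbar = 1):
#   E(p) = sqrt(eps_p**2 + 2*g*n*eps_p),   eps_p = p**2 / (2*m)
# m, g, n below are illustrative placeholders, not measured values.
m, g, n = 1.0, 0.5, 1.0

def bogoliubov_energy(p):
    eps = p**2 / (2.0 * m)
    return np.sqrt(eps**2 + 2.0 * g * n * eps)

sound_speed = np.sqrt(g * n / m)  # slope of the linear (phonon) regime
for p in (0.01, 0.1, 5.0):
    E = bogoliubov_energy(p)
    print(f"p={p}: E={E:.5f}, phonon approx c*p={sound_speed * p:.5f}, "
          f"free-particle eps_p + g*n={p**2 / (2 * m) + g * n:.5f}")
```

At small p the exact spectrum tracks the phonon line c·p, while at large p it approaches the shifted free-particle form, which is precisely the structure that makes the spectrum helium-II-like.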
When the institute was established in 1966, it consisted of three departments: Mathematical Methods in Theoretical Physics (Head: Academician Ostap Parasyuk), Theory of the Nucleus (Head: Oleksandr Davydov), and Theory of Elementary Particles (Head: Albert Tavkhelidze). In 1968, the institute organized the Department of Nuclear Reaction Theory (Head: Oleksiy Sytenko). Family Nikolay Bogolyubov was married (from 1937) to Evgenia Pirashkova. They had two sons, Pavel and Nikolay (Jr.). Nikolay Bogolyubov (Jr.) is a theoretical physicist working in the fields of mathematical physics and statistical mechanics. Pavel was a theoretical physicist, Doctor of Physical and Mathematical Sciences, senior researcher, and head of a sector at the Laboratory of Theoretical Physics of the Joint Institute for Nuclear Research. Students Nikolay Bogolyubov was the scientific supervisor of Yurii Mitropolskiy, Dmitry Shirkov, Selim Krein, Iosif Gihman, Tofik Mamedov, Kirill Gurov, Mikhail Polivanov, Naftul Polsky, Galina Biryuk, Sergei Tyablikov, Dmitry Zubarev, Vladimir Kadyshevsky, and many other students. His method of teaching, based on the creation of a warm atmosphere, politeness and kindness, is famous in Russia and is known as the "Bogolyubov approach". Awards Nikolay Bogolyubov received various high USSR honors and international awards. Soviet Two Stalin Prizes (1947, 1953) USSR State Prize (1984) Lenin Prize (1958) Hero of Socialist Labour, twice (1969, 1979) Six Orders of Lenin (1953, 1959, 1967, 1969, 1975, 1979) Order of the October Revolution (1984) Order of the Red Banner of Labour, twice (1948, 1954) Order of the Badge of Honour, twice (1944, 1944) Foreign awards Order of Cyril and Methodius, 1st class (Bulgaria, 1969) Order "For Merits", 2nd class (Poland, 1977) Academic awards Award of the Bologna Academy of Sciences (1930) Heineman Prize for Mathematical Physics (American Physical Society, 1966) Helmholtz Gold Medal (Academy of Sciences of the German Democratic Republic, 1969) Max Planck Medal (1973) Franklin Medal (1974) Gold Medal "For Service to Science and Humanity" (Slovak Academy of Sciences, 1975) Karpinski Prize (Germany, 1981) Lavrentyev Gold Medal (1983) – for his work "On stochastic processes in dynamical systems" Lomonosov Gold Medal (1985) – for outstanding achievements in mathematics and theoretical physics Lyapunov Gold Medal (1989) – for his work on stability, critical phenomena and phase transitions in the theory of many interacting particles Dirac Medal (1992, posthumously) Academic recognition Foreign Honorary Member of the National Academy of Sciences (United States, 1959), American Academy of Arts and Sciences (1960), Bulgarian Academy of Sciences (1961); a foreign member of the Polish Academy of Sciences (1962), GDR Academy of Sciences (1966), Hungarian Academy of Sciences (1970), Academy of Sciences in Heidelberg (1968), Czechoslovak Academy of Sciences (1980), Indian Academy of Sciences (1983), Mongolian Academy of Sciences (1983) Honorary Doctor of the universities of Allahabad, India (1958), Berlin (East Germany, 1960), Chicago (USA, 1967), Turin (Italy, 1969), Wroclaw (Poland, 1970), Bucharest (Romania, 1971), Helsinki (Finland, 1973), Ulan Bator (Mongolia, 1977), and Warsaw (Poland, 1977) Memory Institutions, awards and locations have been named in Bogolyubov's memory: N.N.
Bogolyubov Institute for Theoretical Problems of Microphysics (Moscow State University) Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine (Kyiv, Ukraine) Bogoliubov Laboratory of Theoretical Physics (Joint Institute for Nuclear Research, Dubna) Bogolyubov Prize (Joint Institute for Nuclear Research) for scientists with outstanding contributions to theoretical physics and applied mathematics Bogolyubov Prize for young scientists (Joint Institute for Nuclear Research) Bogolyubov Prize (National Academy of Sciences of Ukraine) for scientists with outstanding contributions to theoretical physics and applied mathematics Bogolyubov Gold Medal (Russian Academy of Sciences) Bust of Academician N. N. Bogolyubov (Nizhny Novgorod) Bust of Academician N. N. Bogolyubov (Dubna) Bogolyubov Prospekt (Dubna's central street) Commemorative plaque at the entrance of the Physics Department of Moscow State University In 2009, the centenary of Nikolay Bogolyubov's birth was celebrated with two conferences in Russia and Ukraine: the International Bogolyubov Conference: Problems of Theoretical and Mathematical Physics, 21–27 August, Moscow–Dubna, Russia; and the Bogolyubov Kyiv Conference: Modern Problems of Theoretical and Mathematical Physics, 15–18 September, Kyiv, Ukraine. Research Fundamental works of Nikolay Bogolyubov were devoted to asymptotic methods of nonlinear mechanics, quantum field theory, statistical field theory, variational calculus, approximation methods in mathematical analysis, equations of mathematical physics, the theory of stability, the theory of dynamical systems, and many other areas. He built a new theory of scattering matrices, formulated the concept of microscopic causality, obtained important results in quantum electrodynamics, and investigated the dispersion relations in elementary particle physics on the basis of the edge-of-the-wedge theorem. He suggested a new synthesis of the Bohr theory of quasiperiodic functions and developed methods for asymptotic integration of nonlinear differential equations which describe oscillating processes. Mathematics and non-linear mechanics In 1932–1943, in the early stage of his career, he worked in collaboration with Nikolay Krylov on mathematical problems of nonlinear mechanics and developed mathematical methods for asymptotic integration of non-linear differential equations. He also applied these methods to problems of statistical mechanics. In 1937, jointly with Nikolay Krylov, he proved the Krylov–Bogolyubov theorems. In 1956, at the International Conference on Theoretical Physics in Seattle, USA (September 1956), he presented the formulation and the first proof of the edge-of-the-wedge theorem. This theorem in the theory of functions of several complex variables has important implications for the dispersion relations in elementary particle physics. Statistical mechanics 1939: Jointly with Nikolay Krylov, gave the first consistent microscopic derivation of the Fokker–Planck equation in a single scheme of classical and quantum mechanics. 1945: Suggested the idea of a hierarchy of relaxation times, which is significant for the statistical theory of irreversible processes. 1946: Developed a general method for the microscopic derivation of kinetic equations for classical systems. The method was based on the hierarchy of equations for multi-particle distribution functions now known as the Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy. 1947: Jointly with K. P.
Gurov, extended this method to the derivation of kinetic equations for quantum systems on the basis of the quantum BBGKY hierarchy. 1947–1948: Introduced kinetic equations in the theory of superfluidity, computed the excitation spectrum for a weakly imperfect Bose gas, showed that this spectrum has the same properties as the spectrum of helium II, and used this analogy for a theoretical description of the superfluidity of helium II. 1958: Formulated a microscopic theory of superconductivity and established an analogy between the superconductivity and superfluidity phenomena; this contribution was discussed in detail in the book A New Method in the Theory of Superconductivity (co-authors V. V. Tolmachev and D. V. Shirkov, Moscow, Academy of Sciences Press, 1958). Quantum theory 1955: Developed an axiomatic theory for the scattering matrix (S-matrix) in quantum field theory and introduced the causality condition for the S-matrix in terms of variational derivatives. 1955: Jointly with Dmitry Shirkov, developed the renormalization group method. 1955: Jointly with Ostap Parasyuk, proved the theorem on the finiteness and uniqueness (for renormalizable theories) of the scattering matrix in any order of perturbation theory (the Bogoliubov–Parasyuk theorem) and developed a procedure (the R-operation) for the practical subtraction of singularities in quantum field theory. 1965: Jointly with Boris Struminsky and Albert Tavkhelidze, and independently of Moo-Young Han, Yoichiro Nambu and Oscar W. Greenberg, suggested a triplet quark model and introduced a new quantum degree of freedom (later called color charge) for quarks. Suggested the first proof of dispersion relations in quantum field theory. Publications Books Mathematics and Non-linear Mechanics: N. M. Krylov and N. N. Bogoliubov (1934): On Various Formal Expansions of Non-linear Mechanics. Kyiv, Izdat. Zagal'noukr. Akad. Nauk. N. M. Krylov and N. N. Bogoliubov (1947): Introduction to Nonlinear Mechanics. Princeton, Princeton University Press. N. N. Bogoliubov, Y. A. Mitropolsky (1961): Asymptotic Methods in the Theory of Non-Linear Oscillations. New York, Gordon and Breach. Statistical Mechanics: N. N. Bogoliubov (1945): On Some Statistical Methods in Mathematical Physics. Kyiv. N. N. Bogoliubov, V. V. Tolmachev, D. V. Shirkov (1959): A New Method in the Theory of Superconductivity. New York, Consultants Bureau. N. N. Bogoliubov (1960): Problems of Dynamic Theory in Statistical Physics. Oak Ridge, Tenn., Technical Information Service. N. N. Bogoliubov (1967–1970): Lectures on Quantum Statistics. Problems of Statistical Mechanics of Quantum Systems. New York, Gordon and Breach. N. N. Bogolubov and N. N. Bogolubov, Jnr. (1992): Introduction to Quantum Statistical Mechanics. Gordon and Breach. Quantum Field Theory: N. N. Bogoliubov, B. V. Medvedev, M. K. Polivanov (1958): Problems in the Theory of Dispersion Relations. Institute for Advanced Study, Princeton. N. N. Bogoliubov, D. V. Shirkov (1959): The Theory of Quantized Fields. New York, Interscience. The first textbook on the renormalization group theory. N. N. Bogoliubov, A. A. Logunov and I. T. Todorov (1975): Introduction to Axiomatic Quantum Field Theory. Reading, Mass.: W. A. Benjamin, Advanced Book Program. N. N. Bogoliubov, D. V. Shirkov (1980): Introduction to the Theory of Quantized Fields. John Wiley & Sons Inc; 3rd edition. N. N. Bogoliubov, D. V. Shirkov (1982): Quantum Fields. Benjamin-Cummings Pub. Co. N. N. Bogoliubov, A. A. Logunov, A. I. Oksak, I. T.
Todorov (1990): General Principles of Quantum Field Theory. Dordrecht [Holland]; Boston, Kluwer Academic Publishers. Selected works N. N. Bogoliubov, Selected Works. Part I. Dynamical Theory. Gordon and Breach, New York, 1990. N. N. Bogoliubov, Selected Works. Part II. Quantum and Classical Statistical Mechanics. Gordon and Breach, New York, 1991. N. N. Bogoliubov, Selected Works. Part III. Nonlinear Mechanics and Pure Mathematics. Gordon and Breach, Amsterdam, 1995. N. N. Bogoliubov, Selected Works. Part IV. Quantum Field Theory. Gordon and Breach, Amsterdam, 1995. Selected papers "On the Question of the Superfluidity Condition in the Nuclear Matter Theory" (in Russian), Doklady Akademii Nauk USSR, 119, 52, 1958. "On One Variational Principle in the Many Body Problem" (in Russian), Doklady Akademii Nauk USSR, 119, N2, 244, 1959. "On the Compensation Principle in the Method of the Self-Consistent Field" (in Russian), Uspekhi Fizicheskikh Nauk, 67, N4, 549, 1959. "The Quasi-averages in Problems of Statistical Mechanics" (in Russian), Preprint D-781, JINR, Dubna, 1961. "On the Hydrodynamics of a Superfluid Liquid" (in Russian), Preprint P-1395, JINR, Dubna, 1963. See also Bogoliubov approximation Bogolyubov–Born–Green–Kirkwood–Yvon hierarchy Bogoliubov causality condition Bogolyubov's edge-of-the-wedge theorem Bogolyubov inequality Bogoliubov inner product Bogolyubov's lemma Bogoliubov–Parasyuk theorem Bogoliubov quasiparticle Bogoliubov transformation Describing function method Goldstone boson Krylov–Bogoliubov averaging method Krylov–Bogolyubov theorem Landau pole Peierls–Bogoliubov inequality Quantum triviality Notes References Further reading External links Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine. Bogolyubov Institute for Theoretical Problems of Microphysics at the Lomonosov Moscow State University, Russia. Bogolyubov Laboratory of Theoretical Physics at the Joint Institute for Nuclear Research, Dubna, Russia. Department of Theoretical Physics in the Steklov Mathematical Institute, Moscow, Russia (created by Nikolay Bogolyubov). The role of Nikolay Bogoliubov in Dubna's Russian Orthodox Christian church (in Russian).
Author profile in the database zbMATH 1909 births 1992 deaths Fellows of the American Academy of Arts and Sciences Foreign associates of the National Academy of Sciences Foreign fellows of the Indian National Science Academy Foreign members of the Bulgarian Academy of Sciences Full Members of the Russian Academy of Sciences Full Members of the USSR Academy of Sciences Members of the German Academy of Sciences at Berlin Members of the National Academy of Sciences of Ukraine Academic staff of Moscow State University Taras Shevchenko National University of Kyiv alumni Academic staff of the Taras Shevchenko National University of Kyiv Seventh convocation members of the Soviet of the Union Eighth convocation members of the Soviet of the Union Ninth convocation members of the Soviet of the Union Tenth convocation members of the Soviet of the Union Eleventh convocation members of the Soviet of the Union Heroes of Socialist Labour Recipients of the Stalin Prize Recipients of the Lenin Prize Recipients of the Lomonosov Gold Medal Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Recipients of the USSR State Prize Winners of the Max Planck Medal Control theorists Mathematical physicists Quantum physicists Soviet physicists Soviet mathematicians Soviet inventors Theoretical physicists Superfluidity Burials at Novodevichy Cemetery Recipients of Franklin Medal Russian scientists
Nikolay Bogolyubov
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
5,442
[ "Physical phenomena", "Phase transitions", "Quantum physicists", "Phases of matter", "Quantum mechanics", "Theoretical physics", "Superfluidity", "Recipients of the Lomonosov Gold Medal", "Control engineering", "Condensed matter physics", "Control theorists", "Exotic matter", "Science and te...
3,523,787
https://en.wikipedia.org/wiki/Piezoresistive%20effect
The piezoresistive effect is a change in the electrical resistivity of a semiconductor or metal when mechanical strain is applied. In contrast to the piezoelectric effect, the piezoresistive effect causes a change only in electrical resistance, not in electric potential. History The change of electrical resistance in metal devices due to an applied mechanical load was first discovered in 1856 by Lord Kelvin. With single crystal silicon becoming the material of choice for the design of analog and digital circuits, the large piezoresistive effect in silicon and germanium was first discovered in 1954 (Smith 1954). Mechanism In conducting and semi-conducting materials, changes in inter-atomic spacing resulting from strain affect the bandgaps, making it easier (or harder, depending on the material and strain) for electrons to be raised into the conduction band. This results in a change in resistivity of the material. Within a certain range of strain this relationship is linear, so that the piezoresistive coefficient ρσ = (∂ρ/ρ)/ε, where ∂ρ is the change in resistivity, ρ the original resistivity, and ε the strain, is constant. Piezoresistivity in metals Usually the resistance change in metals is mostly due to the change of geometry resulting from the applied mechanical stress. Even though the piezoresistive effect is small in those cases, it is often not negligible. In cases where it is negligible, the resistance change can be calculated from the simple resistance equation derived from Ohm's law, R = ρℓ/A, where ℓ is the conductor length [m] and A the cross-sectional area of the current flow [m²]. Some metals display a piezoresistivity that is much larger than the resistance change due to geometry. In platinum alloys, for instance, the piezoresistivity is more than a factor of two larger, combining with the geometry effects to give a strain gauge sensitivity more than three times as large as that due to geometry effects alone. Pure nickel's piezoresistivity is -13 times larger, completely dwarfing and even reversing the sign of the geometry-induced resistance change. A gauge-factor calculation along these lines is sketched below. Piezoresistive effect in bulk semiconductors The piezoresistive effect of semiconductor materials can be several orders of magnitude larger than the geometrical effect and is present in materials like germanium, polycrystalline silicon, amorphous silicon, silicon carbide, and single crystal silicon. Hence, semiconductor strain gauges with a very high coefficient of sensitivity can be built. For precision measurements they are more difficult to handle than metal strain gauges, because semiconductor strain gauges are generally sensitive to environmental conditions (especially temperature). For silicon, gauge factors can be two orders of magnitude larger than those observed in most metals (Smith 1954). The resistance of n-conducting silicon mainly changes due to a shift of the three different conducting valley pairs. The shifting causes a redistribution of the carriers between valleys with different mobilities. This results in varying mobilities dependent on the direction of current flow. A minor effect is due to the effective mass change related to the changing shapes of the valleys. In p-conducting silicon the phenomena are more complex and also result in mass changes and hole transfer. Giant piezoresistance in metal-silicon hybrid structures A giant piezoresistive effect – where the piezoresistive coefficient exceeds the bulk value – was reported for a microfabricated silicon-aluminium hybrid structure. The effect has been applied to silicon-based sensor technologies.
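As a minimal numerical sketch of the metal strain-gauge discussion above: the gauge factor combines the purely geometric contribution 1 + 2ν with the piezoresistive term (∂ρ/ρ)/ε. The Poisson ratio and the piezoresistive contribution below are assumed illustrative values, not data for any particular alloy.

```python
# Resistance of a conductor: R = rho * L / A.  Under axial strain eps the
# geometry changes (L grows, A shrinks via Poisson's ratio nu) and, for a
# piezoresistive material, rho itself changes.  Gauge factor:
#   GF = (dR/R) / eps = 1 + 2*nu + (d_rho/rho)/eps
nu = 0.3        # Poisson's ratio (assumed illustrative value)
pi_term = 0.4   # (d_rho/rho)/eps, piezoresistive contribution (assumed)
eps = 1e-3      # 0.1 % axial strain

gauge_factor = 1 + 2 * nu + pi_term
dR_over_R = gauge_factor * eps
print(f"gauge factor = {gauge_factor:.2f}, dR/R at 0.1% strain = {dR_over_R:.2e}")
```

For a metal dominated by geometry, GF is close to 1 + 2ν ≈ 1.6; the cases quoted in the text (platinum alloys, nickel) correspond to pi_term values large enough to dominate or even flip the sign of the total.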
Giant piezoresistive effect in silicon nanostructures The longitudinal piezoresistive coefficient of top-down fabricated silicon nanowires was measured to be 60% larger than in bulk silicon. In 2006, giant piezoresistance was reported in bottom-up fabricated silicon nanowires – a more than 30-fold increase in the longitudinal piezoresistive coefficient compared to bulk silicon was reported. The suggestion of a giant piezoresistance in nanostructures has since stimulated much effort into a physical understanding of the effect, not only in silicon but also in other functional materials. Piezoresistive silicon devices The piezoresistive effect of semiconductors has been used for sensor devices employing all kinds of semiconductor materials such as germanium, polycrystalline silicon, amorphous silicon, and single crystal silicon. Since silicon is today the material of choice for integrated digital and analog circuits, the use of piezoresistive silicon devices has been of great interest. It enables the easy integration of stress sensors with bipolar and CMOS circuits. This has enabled a wide range of products using the piezoresistive effect. Many commercial devices such as pressure sensors and acceleration sensors employ the piezoresistive effect in silicon. But due to its magnitude, the piezoresistive effect in silicon has also attracted the attention of research and development for all other devices using single crystal silicon. Semiconductor Hall sensors, for example, were capable of achieving their current precision only after employing methods which eliminate signal contributions due to applied mechanical stress. Piezoresistors Piezoresistors are resistors made from a piezoresistive material and are usually used for the measurement of mechanical stress. They are the simplest form of piezoresistive devices. Fabrication Piezoresistors can be fabricated using a wide variety of piezoresistive materials. The simplest form of piezoresistive silicon sensors are diffused resistors. Piezoresistors consist of simple two-contact diffused n- or p-wells within a p- or n-substrate. As the typical square resistances of these devices are in the range of several hundred ohms, additional p+ or n+ diffusions are a potential method to facilitate ohmic contacts to the device. Schematic cross-section of the basic elements of a silicon n-well piezoresistor. Physics of operation For typical stress values in the MPa range, the stress-dependent voltage drop along the resistor, Vr, can be considered to be linear. A piezoresistor aligned with the x-axis as shown in the figure may be described by Vr = R0 I [1 + πl σxx + πt (σyy + σzz)], where R0, I, πt, πl, and σxx, σyy, σzz denote the stress-free resistance, the applied current, the transverse and longitudinal piezoresistive coefficients, and the three tensile stress components, respectively; this relation is evaluated numerically in the sketch below. The piezoresistive coefficients vary significantly with the sensor orientation with respect to the crystallographic axes and with the doping profile. Despite the fairly large stress sensitivity of simple resistors, they are preferably used in more complex configurations eliminating certain cross sensitivities and drawbacks. Piezoresistors have the disadvantage of being highly sensitive to temperature changes while featuring comparatively small relative stress-dependent signal amplitude changes. Other piezoresistive devices In silicon the piezoresistive effect is used in piezoresistors, transducers, piezo-FETs, solid state accelerometers and bipolar transistors.
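The piezoresistor relation just given can be evaluated directly. In the sketch below, the coefficient magnitudes are of the order reported for p-type silicon in the literature, but the exact numbers, along with the resistance, current, and stress values, are placeholders chosen for illustration.

```python
# Stress-dependent voltage drop across a piezoresistor aligned with x:
#   Vr = R0 * I * (1 + pi_l * s_xx + pi_t * (s_yy + s_zz))
R0 = 1_000.0                  # stress-free resistance [ohm] (assumed)
I = 1e-3                      # applied current [A] (assumed)
pi_l, pi_t = 72e-11, -66e-11  # longitudinal/transverse coefficients [1/Pa],
                              # order-of-magnitude typical of p-type silicon
s_xx, s_yy, s_zz = 50e6, 0.0, 0.0  # tensile stress components [Pa] (assumed)

Vr = R0 * I * (1 + pi_l * s_xx + pi_t * (s_yy + s_zz))
print(f"Vr = {Vr:.4f} V (vs {R0 * I:.4f} V stress-free)")
```

With these numbers a 50 MPa uniaxial stress shifts the voltage drop by a few percent, which is consistent with the "fairly large stress sensitivity" of simple diffused resistors described above.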
The electrically-conductive packaging material Velostat is used by hobbyists to make pressure sensors due to its piezoresistive properties and low cost. See also Piezoelectricity Electrical resistance References S. Middelhoek and S. A. Audet, Silicon Sensors, Delft, The Netherlands: Delft University Press, 1994. A. L. Window, Strain Gauge Technology, 2nd ed, London, England: Elsevier Applied Science, 1992. S. M. Sze, Semiconductor Sensors, New York: Wiley, 1994. Electrical phenomena
Piezoresistive effect
[ "Physics" ]
1,530
[ "Physical phenomena", "Electrical phenomena" ]
3,524,206
https://en.wikipedia.org/wiki/DriveSpace
DriveSpace (initially known as DoubleSpace) is a disk compression utility supplied with MS-DOS starting from version 6.0 in 1993 and ending in 2000 with the release of Windows Me. The purpose of DriveSpace is to increase the amount of data the user could store on disks by transparently compressing and decompressing data on-the-fly. It is primarily intended for use with hard drives, but use for floppy disks is also supported. The feature was removed in Windows XP and later. Overview In the most common usage scenario, the user would have one hard drive in the computer, with all the space allocated to one partition (usually as drive C:). The software would compress the entire partition contents into one large file in the root directory. On booting the system, the driver would allocate this large file as drive C:, enabling files to be accessed as normal. Microsoft's decision to add disk compression to MS-DOS 6.0 was influenced by the fact that the competing DR DOS had earlier started to include disk compression software, since version 6.0 in 1991. Instead of developing its own product from scratch, Microsoft licensed the technology for the DoubleDisk product developed by Vertisoft and adapted it to become DoubleSpace. For instance, the loading of the driver controlling the compression/decompression (DBLSPACE.BIN) became more deeply integrated into the operating system (being loaded through the undocumented pre-load API even before the CONFIG.SYS file). Microsoft had originally sought to license the technology from Stac Electronics, which had a similar product called Stacker, but these negotiations had failed. Microsoft was later successfully sued for patent infringement by Stac Electronics for violating some of its compression patents. During the court case Stac Electronics claimed that Microsoft had refused to pay any money when it attempted to license Stacker, offering only the possibility for Stac Electronics to develop enhancement products. Memory consumption and compatibility A few computer programs, particularly games, were incompatible with DoubleSpace because they effectively bypassed the DoubleSpace driver. DoubleSpace also consumed a significant amount of conventional memory, making it difficult to run memory-intensive programs. Bugs and data loss Shortly after its release, reports of data loss emerged. A company called Blossom Software claimed to have found a bug that could lead to data corruption. The bug occurred when writing files to heavily fragmented disks and was demonstrated by a program the company distributed. The company sold a program called DoubleCheck that could be used to check for the fragmentation condition that could lead to the error. Microsoft's position was that the error only occurred under unlikely conditions, but it fixed the problem in MS-DOS 6.2. The fragmentation condition was related to the way DoubleSpace compresses individual clusters (of size, say, 8 K) and fits them on the disk, occupying fewer sectors (size 512 bytes) than the fixed number required without DoubleSpace (16 sectors in this example). This created the possibility of a kind of internal fragmentation issue, where DoubleSpace would be unable to find enough consecutive sectors for storing a compressed cluster even if plenty of space was available (see the sketch below). Another potential cause of data loss was the corruption of DoubleSpace's memory areas by other programs; these memory areas were not protected, because MS-DOS ran in real mode.
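The internal-fragmentation failure described above can be reproduced with a toy model. The following sketch is not DoubleSpace's actual allocator; it simply shows how a request for consecutive 512-byte sectors can fail on a checkered free-space map even though the total free space is ample.

```python
# Toy model of the DoubleSpace internal-fragmentation issue: an 8 KiB
# cluster compresses to some number of 512-byte sectors, and the driver
# must find that many *consecutive* free sectors.
SECTOR = 512
CLUSTER = 8 * 1024  # uncompressed cluster size (16 sectors if uncompressed)

def sectors_needed(compressed_bytes):
    return -(-compressed_bytes // SECTOR)  # ceiling division

def find_run(free_map, length):
    """Return the start index of `length` consecutive free sectors, or None."""
    run_len = 0
    for i, free in enumerate(free_map):
        run_len = run_len + 1 if free else 0
        if run_len == length:
            return i - length + 1
    return None

# A heavily fragmented map: every other sector is free (50% free overall).
free_map = [i % 2 == 0 for i in range(64)]
need = sectors_needed(3_000)  # a cluster that compressed to ~3 KB -> 6 sectors
print(f"need {need} consecutive sectors; total free = {sum(free_map)}")
print("allocation:", find_run(free_map, need))  # None, despite ample free space
```

Running this prints a request for 6 consecutive sectors that fails even though 32 of 64 sectors are free, which is the shape of the condition DoubleCheck was sold to detect.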
Microsoft attempted to remedy this in the MS-DOS 6.2 version of DoubleSpace (via a feature called DoubleGuard that would check for such corruption). The fact that the compressed contents of a compressed drive were stored in a single file implied the possibility of a user accidentally deleting all of their data by deleting just that file. This could happen if the user inadvertently got access to the host drive containing this file. The host drive was usually mapped to the letter H: by the compression driver. However, if the compression driver had failed to load, the user might see it as drive C:. Turning off the computer before DoubleSpace could finish updating its data structures could also result in data loss. This problem was compounded by Microsoft making write caching enabled by default in the disk cache software that came with MS-DOS 6.0. Because of this change, after exiting an application, the MS-DOS prompt might appear before all data had been written to the disk. However, due to the lack of a controlled shutdown procedure (as found in modern operating systems), many users saw the appearance of the MS-DOS prompt as an indication that it was safe to switch off the computer, which was typically the case prior to MS-DOS 6.0. Microsoft addressed this issue in MS-DOS 6.2, where write caching was still enabled by default, but where the cache would be flushed before allowing the command prompt to reappear. Add-ons AddStor, Inc. offered an add-on product called Double Tools for DoubleSpace. It contained a number of tools to enhance the functions of the version of DoubleSpace that came with MS-DOS 6.0. This included various diagnostic features, the ability to have compressed removable media auto-mounted as they were used, as well as support for background defragmentation of DoubleSpace compressed drives. To defragment files in the background, it was possible to let Double Tools replace the low-level DoubleSpace driver (DBLSPACE.BIN) with one supplied by Double Tools. Replacing the driver also enabled other enhanced functionality of the product, such as the use of 32-bit code paths when it detected an Intel 80386 or higher CPU and caching capabilities; in addition to supporting the use of the Upper Memory Area, it also permitted the use of Extended Memory for some of its buffers (reducing the driver's total footprint in conventional and upper memory, albeit at the cost of somewhat reduced speed). Another function was the ability to split a compressed volume over multiple floppy disks, being able to see the entire volume with only the first disk inserted (and being prompted to change disks as necessary). It was also possible to share a compressed volume with a remote computer. Double Tools also had the capability to put a special utility on compressed floppy disks that made it possible to access the compressed data even on computers that didn't have DoubleSpace (or Double Tools). Vertisoft, the company that developed the DoubleDisk program that Microsoft subsequently licensed and turned into DoubleSpace, developed and sold a DoubleSpace add-on program called SpaceManager, which contained a number of usability enhancements. It also offered improved compression ratios. Other products, like later versions of Stacker from Stac Electronics, were capable of converting existing DoubleSpace compressed drives into their own format. Later versions MS-DOS 6.2 MS-DOS 6.2 featured a new and improved version of DoubleSpace. The ability to remove DoubleSpace was added.
The program SCANDISK, introduced in this release, was able to scan both non-compressed and compressed drives, including checks of the internal DoubleSpace structures. Security features (known as DoubleGuard) were added to prevent memory corruption from leading to data loss. The memory footprint of the DoubleSpace driver was reduced compared to the version shipped in MS-DOS 6.0. A fix was made for the fragmentation issue discussed above. MS-DOS 6.21 Following a successful lawsuit by Stac Electronics regarding demonstrated patent infringement, Microsoft released MS-DOS 6.21 without DoubleSpace. A court injunction also prevented any further distribution of the previous versions of MS-DOS that included DoubleSpace. MS-DOS 6.22 MS-DOS 6.22 contained a reimplemented version of the disk compression software, this time released under the name DriveSpace. The software was essentially identical to the MS-DOS 6.2 version of DoubleSpace from a user point of view, and was compatible with previous versions. DriveSpace in Windows 95 Windows 95 had full support for DoubleSpace/DriveSpace via a native 32-bit driver for accessing the compressed drives, along with a graphical version of the software tools. MS-DOS DriveSpace users could upgrade to Windows 95 without any trouble. Furthermore, the Microsoft Plus! for Windows 95 pack contained version 3 of DriveSpace. This version introduced new compression formats (HiPack and UltraPack) with different performance characteristics for even greater compression ratios, along with a tool that could recompress the files on the disk using the different formats, depending on how frequently the files were used. One could upgrade from DriveSpace 2 to DriveSpace 3, but there was no downgrade path back to DriveSpace 2. One could, however, decompress a DriveSpace 3 drive. The DOS device driver of DriveSpace 3 had a memory footprint of around 150 KB because of all these new features. This caused difficulty for users rebooting into the MS-DOS mode of Windows 95 to run games, because of the reduced amount of conventional memory that was available. DriveSpace 3 also shipped with Windows 95 OSR2, but many features were disabled unless Plus! was also installed. DriveSpace could also not be used with FAT32, making it of little use on PCs with large hard drives. DriveSpace in Windows 98 Windows 98 shipped with DriveSpace 3 as part of the operating system. Functionality was the same as in Windows 95 with Plus!. DriveSpace in Windows Me Because of the removal of real mode support, FAT32 going mainstream, and the decreasing popularity of DriveSpace, DriveSpace in Windows Me had only limited support. DriveSpace no longer supported hard disk compression, but still supported reading and writing compressed removable media, although the only other DriveSpace operations supported were deleting and reallocating compressed drives. It is possible to restore full function of DriveSpace 3 (unofficially) in Windows Me by copying the executable file from a Windows 98 installation and using it to replace the executable included with Windows Me. After that, one could compress new drives as on Windows 98. Support outside Microsoft DMSDOS, a Linux kernel driver, was developed in the late 1990s to support both the reading and writing of DoubleSpace/DriveSpace disks. However, reading and especially writing to compressed filesystems is reliable only in specific releases of the 2.0, 2.1 or 2.2 versions of the kernel.
While DR-DOS supported its own disk compression technology (originally based on SuperStor, later on Stacker), Novell DOS 7 in 1993 and higher introduced an emulation of the undocumented pre-load API in order to provide seamless support for DoubleSpace as well. Since the DR-DOS drivers were DPMS-enabled whereas the MS-DOS ones were not, this did not offer any advantages for DR-DOS users, but allowed easier coexistence or migration due to the possibility of shared use of already existing compressed volumes in multi-boot scenarios. DR-DOS 7.02 and higher also added support for DriveSpace in 1998. References Further reading External links DoubleSpace Overview Mapping DOS FAT to MDFAT DoubleSpace Compressed Volume File Layout Microsoft Real-time Compression Interface (MRCI) Data compression Compression file systems DOS technology Windows 95 Windows 98 Discontinued Windows components
DriveSpace
[ "Technology" ]
2,273
[ "Windows commands", "Computing commands" ]
3,525,599
https://en.wikipedia.org/wiki/Custodial%20symmetry
In particle physics, a symmetry that remains after spontaneous symmetry breaking and that can prevent higher-order radiative corrections from spoiling some property of a theory is called a custodial symmetry. Motivation In the Standard Model of particle physics, the custodial symmetry is a residual global SU(2) symmetry of the Higgs potential beyond the basic SU(2)×U(1) gauge symmetry of the weak interaction that prevents higher-order radiative corrections from driving the Standard Model parameter ρ away from ρ ≈ 1 after spontaneous symmetry breaking. (Note: ρ = m²W/(m²Z cos²θW) is a ratio involving the masses of the weak bosons and the Weinberg angle; the construction is summarized in the sketch below.) With one or more electroweak Higgs doublets in the Higgs sector, the effective action term |H†DμH|²/Λ², which generically arises with physics beyond the Standard Model at the scale Λ, contributes to the Peskin–Takeuchi parameter T. Current precision electroweak measurements restrict Λ to more than a few TeV. Attempts to solve the gauge hierarchy problem generically require the addition of new particles below that scale, however. What is custodial symmetry? Before electroweak symmetry breaking there was a global SU(2)×SU(2) symmetry in the Higgs potential, which is broken to just SU(2) after electroweak symmetry breaking. This remnant symmetry is called custodial symmetry. The full Standard Model Lagrangian would be custodial symmetric if the Yukawa couplings were the same, i.e. Yu = Yd, and the hypercharge coupling were zero. Terms which violate custodial symmetry are therefore an important place to look for effects beyond the Standard Model. Construction The preferred way of preventing the |H†DμH|² term from being generated is to introduce an approximate symmetry which acts upon the Higgs sector. In addition to the gauged SU(2)W which acts exactly upon the Higgs doublets, we will also introduce another approximate global SU(2)R symmetry which also acts upon the Higgs doublet. The Higgs doublet is now a real representation (2,2) of SU(2)L × SU(2)R with four real components. Here, we have relabeled W as L following the standard convention. ("L" stands for "Left", both because the weak interaction only couples to the "left-handed" components of the fermion degrees of freedom, and also because SU(2)L acts on the Higgs matrix M from the left; contrariwise, SU(2)R acts on M from the right.) Such a symmetry will not forbid Higgs kinetic terms like Tr[(DμM)†(DμM)], or tachyonic mass terms like −μ²Tr(M†M), or self-coupling terms like λ[Tr(M†M)]² (fortunately!), but will prevent |H†DμH|². Such an SU(2)R symmetry can never be exact and unbroken because otherwise the up-type and the down-type Yukawa couplings would be exactly identical. SU(2)R does not map the hypercharge symmetry U(1)Y to itself, but the hypercharge gauge coupling strength is small and in the limit as it goes to zero we won't have a problem. U(1)Y is said to be weakly gauged, and this explicitly breaks SU(2)R. After the Higgs doublet acquires a nonzero vacuum expectation value, the (approximate) SU(2)L × SU(2)R symmetry is spontaneously broken to the (approximate) diagonal subgroup SU(2)V. This approximate symmetry is called the custodial symmetry. See also Peskin–Takeuchi parameter left-right model little Higgs References External links Rodolfo A. Diaz and R. Martínez, "The Custodial Symmetry", arXiv:hep-ph/0302058. Electroweak theory
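A minimal sketch of the bidoublet construction described above, with conventions assumed (normalizations and signs vary between references):

```latex
% The Higgs doublet H and its conjugate packed into a 2x2 "bidoublet" M,
% on which SU(2)_L acts from the left and SU(2)_R from the right:
\[
  M = \bigl(\, i\sigma^{2} H^{*},\; H \,\bigr),
  \qquad
  M \;\to\; U_L \, M \, U_R^{\dagger},
  \quad U_L \in SU(2)_L,\; U_R \in SU(2)_R .
\]
% Invariants such as
\[
  \operatorname{Tr}\!\left(M^{\dagger} M\right) \;=\; 2\, H^{\dagger} H
\]
% build the allowed kinetic, mass, and self-coupling terms, while the
% custodial-violating operator
\[
  \frac{c}{\Lambda^{2}}\,\bigl|H^{\dagger} D_{\mu} H\bigr|^{2}
\]
% is forbidden by SU(2)_R.  At tree level the unbroken custodial SU(2)_V
% then enforces
\[
  \rho \;\equiv\; \frac{m_W^{2}}{m_Z^{2}\cos^{2}\theta_W} \;=\; 1 .
\]
```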
Custodial symmetry
[ "Physics" ]
770
[ "Physical phenomena", "Fundamental interactions", "Electroweak theory" ]
3,525,881
https://en.wikipedia.org/wiki/Thermal%20quantum%20field%20theory
In theoretical physics, thermal quantum field theory (thermal field theory for short) or finite temperature field theory is a set of methods to calculate expectation values of physical observables of a quantum field theory at finite temperature. In the Matsubara formalism, the basic idea (due to Felix Bloch) is that the expectation values of operators in a canonical ensemble, ⟨A⟩ = Tr[exp(−βH) A] / Tr[exp(−βH)], may be written as expectation values in ordinary quantum field theory where the configuration is evolved by an imaginary time τ = it with 0 ≤ τ ≤ β. One can therefore switch to a spacetime with Euclidean signature, where the above trace (Tr) leads to the requirement that all bosonic and fermionic fields be periodic and antiperiodic, respectively, with respect to the Euclidean time direction with periodicity β = 1/T (we are assuming natural units ħ = kB = 1). This allows one to perform calculations with the same tools as in ordinary quantum field theory, such as functional integrals and Feynman diagrams, but with compact Euclidean time. Note that the definition of normal ordering has to be altered. In momentum space, this leads to the replacement of continuous frequencies by discrete imaginary (Matsubara) frequencies, ωn = 2πn/β for bosons and ωn = (2n + 1)π/β for fermions, and, through the de Broglie relation, to a discretized thermal energy spectrum En = ωn; a numerical illustration of a Matsubara frequency sum is given below. This has been shown to be a useful tool in studying the behavior of quantum field theories at finite temperature. It has been generalized to theories with gauge invariance and was a central tool in the study of a conjectured deconfining phase transition of Yang–Mills theory. In this Euclidean field theory, real-time observables can be retrieved by analytic continuation. The Feynman rules for gauge theories in the Euclidean time formalism were derived by C. W. Bernard. The Matsubara formalism, also referred to as the imaginary time formalism, can be extended to systems with thermal variations. In this approach, the variation in the temperature is recast as a variation in the Euclidean metric. Analysis of the partition function leads to an equivalence between thermal variations and the curvature of the Euclidean space. The alternative to the use of fictitious imaginary times is to use a real-time formalism, which comes in two forms. A path-ordered approach to real-time formalisms includes the Schwinger–Keldysh formalism and more modern variants. The latter involves replacing a straight time contour from a (large negative) real initial time ti to ti − iβ by one that first runs to a (large positive) real time tf and then suitably back to ti − iβ. In fact all that is needed is one section running along the real time axis, as the route to the end point ti − iβ is less important. The piecewise composition of the resulting complex time contour leads to a doubling of fields and more complicated Feynman rules, but obviates the need for analytic continuations of the imaginary-time formalism. The alternative approach to real-time formalisms is an operator-based approach using Bogoliubov transformations, known as thermo field dynamics. As well as Feynman diagrams and perturbation theory, other techniques such as dispersion relations and the finite temperature analog of the Cutkosky rules can also be used in the real-time formulation. An alternative approach which is of interest to mathematical physics is to work with KMS states. See also Matsubara frequency Polyakov loop Quantum thermodynamics Quantum statistical mechanics References Quantum field theory Statistical mechanics
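To make the Matsubara frequencies concrete, the sketch below numerically checks a standard bosonic frequency sum against its closed form; the temperature and energy are arbitrary illustrative values, and the sum is truncated at a large cutoff, so the agreement is only to truncation accuracy.

```python
import numpy as np

# Numerical check of a standard bosonic Matsubara sum (natural units):
#   T * sum_n 1/(w_n**2 + E**2) = coth(E/(2T)) / (2E),   w_n = 2*pi*n*T
T, E = 1.0, 3.0  # illustrative values

n = np.arange(-200_000, 200_001)          # truncated sum over integers
w_n = 2.0 * np.pi * n * T                 # bosonic Matsubara frequencies
matsubara_sum = T * np.sum(1.0 / (w_n**2 + E**2))

closed_form = 1.0 / (2.0 * E * np.tanh(E / (2.0 * T)))  # coth(E/2T)/(2E)
print(f"sum = {matsubara_sum:.6f}, closed form = {closed_form:.6f}")
```

This identity is the finite-temperature counterpart of the zero-temperature frequency integral for a free propagator, and sums of exactly this shape appear whenever a Feynman diagram is evaluated with compact Euclidean time.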
Thermal quantum field theory
[ "Physics" ]
682
[ "Quantum field theory", "Statistical mechanics", "Quantum mechanics" ]
3,526,032
https://en.wikipedia.org/wiki/Seiberg%E2%80%93Witten%20theory
In theoretical physics, Seiberg–Witten theory is an N = 2 supersymmetric gauge theory with an exact low-energy effective action (for massless degrees of freedom), of which the kinetic part coincides with the Kähler potential of the moduli space of vacua. Before taking the low-energy effective action, the theory is known as N = 2 supersymmetric Yang–Mills theory, as the field content is a single N = 2 vector supermultiplet, analogous to the field content of Yang–Mills theory being a single vector gauge field (in particle theory language) or connection (in geometric language). The theory was studied in detail by Nathan Seiberg and Edward Witten. Seiberg–Witten curves In general, effective Lagrangians of supersymmetric gauge theories are largely determined by their holomorphic (really, meromorphic) properties and their behavior near the singularities. In gauge theory with N = 2 extended supersymmetry, the moduli space of vacua is a special Kähler manifold and its Kähler potential is constrained by the above conditions. In the original approach, by Seiberg and Witten, holomorphy and electric-magnetic duality constraints are strong enough to almost uniquely constrain the prepotential (a holomorphic function which defines the theory), and therefore the metric of the moduli space of vacua, for theories with SU(2) gauge group. More generally, consider the example with gauge group SU(n). The classical potential is V(φ) = (1/g²) Tr [φ, φ†]², where φ is a scalar field appearing in an expansion of superfields in the theory. The potential must vanish on the moduli space of vacua by definition, but φ need not. The vacuum expectation value of φ can be gauge rotated into the Cartan subalgebra, making it a traceless diagonal complex matrix a. Because the fields no longer have vanishing vacuum expectation value, other fields become massive due to the Higgs mechanism (spontaneous symmetry breaking). They are integrated out in order to find the effective U(1) gauge theory. Its two-derivative, four-fermions low-energy action is given by a Lagrangian which can be expressed in terms of a single holomorphic function F on N = 1 superspace as follows: L = (1/4π) Im[ ∫d⁴θ (∂F/∂A) Ā + ∫d²θ ½ (∂²F/∂A²) Wα W^α ], where A is a chiral superfield on N = 1 superspace which fits inside the N = 2 chiral multiplet, and F has the expansion F(A) = (i/2π) A² ln(A²/Λ²) + Σk≥1 Fk (Λ/A)^{4k} A². The first term is a perturbative loop calculation and the second is the instanton part, where k labels fixed instanton numbers. In theories whose gauge groups are products of unitary groups, F can be computed exactly using localization and the limit shape techniques. The Kähler potential is the kinetic part of the low energy action, and explicitly is written in terms of F as K = Im(Ā ∂F/∂A). From F we can get the mass of the BPS particles. One way to interpret this is that these variables, a and its dual aD, can be expressed as periods of a meromorphic differential on a Riemann surface called the Seiberg–Witten curve. N = 2 supersymmetric Yang–Mills theory Before the low energy, or infrared, limit is taken, the action can be given in terms of a Lagrangian over N = 2 superspace with field content Ψ, which is a single N = 2 vector/chiral superfield in the adjoint representation of the gauge group, and a holomorphic function F of Ψ called the prepotential. Then the Lagrangian is given by L = (1/4π) Im ∫d²θ d²θ̃ F(Ψ), where θ, θ̃ are coordinates for the spinor directions of superspace. Once the low energy limit is taken, the N = 1 superfield inside Ψ is typically labelled A instead. The so-called minimal theory is given by a specific choice of F, namely F(Ψ) = ½τΨ², where τ is the complex coupling constant. The minimal theory can be written out in components on Minkowski spacetime, with a complex scalar and a Weyl fermion making up the chiral multiplet.
Geometry of the moduli space For this section fix the gauge group as SU(2). A low-energy vacuum solution is an N = 1 vector superfield solving the equations of motion of the low-energy Lagrangian, for which the scalar part φ has vanishing potential, which as mentioned earlier holds if [φ, φ†] = 0 (which exactly means φ is a normal operator, and therefore diagonalizable). The scalar φ transforms in the adjoint, that is, it can be identified as an element of su(2)ℂ, the complexification of su(2). Thus φ is traceless and diagonalizable, so it can be gauge rotated to (is in the conjugacy class of) a matrix of the form aσ³ (where σ³ is the third Pauli matrix) for complex a. However, a and −a give conjugate matrices (corresponding to the fact that the Weyl group of SU(2) is ℤ/2), so both label the same vacuum. Thus the gauge invariant quantity labelling inequivalent vacua is u = Tr φ², which is proportional to a². The (classical) moduli space of vacua is a one-dimensional complex manifold (Riemann surface) parametrized by u, although the Kähler metric is given in terms of a as ds² = Im(τ) da dā, where τ = ∂aD/∂a and aD = ∂F/∂a. This is not invariant under an arbitrary change of coordinates, but due to the symmetry in a and aD, switching to the local coordinate aD gives a metric of a similar form but with a different harmonic function replacing Im τ. The switching of the two coordinates can be interpreted as an instance of electric-magnetic duality. Under the minimal assumption that there are only three singularities in the moduli space, at u = ±1 and u = ∞ (in units where Λ = 1), with prescribed monodromy data at each point derived from quantum field theoretic arguments, the moduli space was found to be ℍ/Γ(2), where ℍ is the hyperbolic half-plane and Γ(2) is the second principal congruence subgroup, the subgroup of matrices congruent to 1 mod 2, generated by the matrices [[1,2],[0,1]] and [[1,0],[2,1]] (together with −1). This space is a six-fold cover of the fundamental domain of the modular group and admits an explicit description as parametrizing a space of elliptic curves, given by the vanishing of y² − (x − 1)(x + 1)(x − u), which are the Seiberg–Witten curves. The curve becomes singular precisely when u = 1 or u = −1; a check of this via the discriminant is sketched below. Monopole condensation and confinement The theory exhibits physical phenomena involving and linking magnetic monopoles, confinement, an attained mass gap and strong-weak duality, described in section 5.6 of Seiberg and Witten's original paper. The study of these physical phenomena also motivated the theory of Seiberg–Witten invariants. The low-energy action is described by the chiral multiplet A with gauge group U(1), the residual unbroken gauge symmetry of the original SU(2). This description is weakly coupled for large u, but strongly coupled for small u. However, at the strongly coupled point the theory admits a dual description which is weakly coupled. The dual theory has different field content, with two chiral superfields M, M̃, and gauge field the dual photon AD, with a potential that gives equations of motion which are Witten's monopole equations, also known as the Seiberg–Witten equations, at the critical points where the monopoles become massless. In the context of Seiberg–Witten invariants, one can view Donaldson invariants as coming from a twist of the original theory, giving a topological field theory. On the other hand, Seiberg–Witten invariants come from twisting the dual theory. In theory, such invariants should receive contributions from all finite u, but in fact they can be localized to the two critical points, and topological invariants can be read off from solution spaces to the monopole equations. Relation to integrable systems The special Kähler geometry on the moduli space of vacua in Seiberg–Witten theory can be identified with the geometry of the base of a complex completely integrable system.
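The singularity structure quoted above can be checked symbolically. The sketch below uses one common presentation of the SU(2) Seiberg–Witten curve (conventions for u and Λ vary between references) and computes the discriminant of its right-hand side, whose zeros reproduce the two strong-coupling singularities.

```python
import sympy as sp

# One common presentation of the SU(2) Seiberg-Witten curve (units Lambda = 1):
#   y**2 = (x - 1)*(x + 1)*(x - u)
# The moduli-space singularities are where the elliptic curve degenerates,
# i.e. where the discriminant of the cubic in x vanishes.
x, u = sp.symbols('x u')
cubic = sp.expand((x - 1) * (x + 1) * (x - u))

disc = sp.factor(sp.discriminant(cubic, x))
print(disc)                # 4*(u - 1)**2*(u + 1)**2
print(sp.solve(disc, u))   # [-1, 1]  -> the monopole and dyon points
```

Together with the singularity at u = ∞ (where the semiclassical description breaks down), these are exactly the three punctures around which the prescribed monodromies act.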
The total phase space of this complex completely integrable system can be identified with the moduli space of vacua of the 4d theory compactified on a circle. The relation between Seiberg–Witten theory and integrable systems has been reviewed by Eric D'Hoker and D. H. Phong. See Hitchin system. Seiberg–Witten prepotential via instanton counting Using supersymmetric localisation techniques, one can explicitly determine the instanton partition function of N = 2 super Yang–Mills theory. The Seiberg–Witten prepotential can then be extracted using the localization approach of Nikita Nekrasov. It arises in the flat space limit ε₁ → 0, ε₂ → 0 of the partition function of the theory subject to the so-called Ω-background. The latter is a specific background of four dimensional N = 2 supergravity. It can be engineered, formally, by lifting the N = 2 super Yang–Mills theory to six dimensions, then compactifying on a 2-torus while twisting the four dimensional spacetime around the two non-contractible cycles. In addition, one twists fermions so as to produce covariantly constant spinors generating unbroken supersymmetries. The two parameters ε₁, ε₂ of the Ω-background correspond to the angles of the spacetime rotation. In the Ω-background, all the non-zero modes can be integrated out, so the path integral with the boundary condition at infinity can be expressed as a sum over instanton number of the products and ratios of fermionic and bosonic determinants, producing the so-called Nekrasov partition function. In the limit where ε₁, ε₂ approach 0, this sum is dominated by a unique saddle point, and in the same limit the relation F = lim ε₁ε₂ log Z (up to a sign convention) recovers the Seiberg–Witten prepotential. See also Ginzburg–Landau theory Donaldson theory References (See Section 7.2) Supersymmetric quantum field theory Gauge theories
Seiberg–Witten theory
[ "Physics" ]
1,909
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry" ]
3,526,988
https://en.wikipedia.org/wiki/Paper%20drilling
Paper drilling is a technique used in binderies for providing large quantities of paper with round holes. The paper can be processed as loose leaves and in brochures (stitched, perfectly bound). The holes can be drilled for storage purposes, such as filing, or sometimes for decorative purposes. Terminology Paper drilling describes a technology for providing paper with round holes. For this purpose, paper-drilling machines are used. Paper-drilling machine is the generic term for manual, motorized, and fully automated paper drills. A paper-drilling system is an automated paper-drilling machine that usually integrates several production steps into one continuous workflow. The phrase paper drill is used both for paper-drilling machines and for the tools used for paper drilling. Technology Paper drilling is a method to drill round holes into paper and other materials. For this purpose, hollow paper drill bits are clamped into a driven spindle that drills into the pile of paper. Paper drill bits are available for different hole sizes and in different coating qualities. Unlike hole punching, where only one or a few sheets may be processed, a large number of sheets can be processed with a paper-drilling machine. Depending on the type of paper drill, either the paper drill bits are lowered into the pile or the table is lifted. Paper-drilling machines can be equipped with a different number of spindles, which are each built into one paper drill head. The range starts with one- and two-spindle paper drills for small volumes and office purposes and reaches up to paper-drilling platforms with more than 20 spindles/paper drill heads. Applications Applications for paper drilling include file holes for different ring binders, loose leaf collections, rows of holes for wire comb binding, and tags. Many products processed on a paper drilling machine are stationery. Additionally, catalogues, manuals, and brochures are drilled on a paper-drilling system to be able to file them in a binder. Sometimes, drilled holes are used for decorative purposes. In addition to different stock types, such as offset paper, bond paper, glossy paper, and coated paper, a modern paper-drilling machine can drill many other materials like plastic films, cardboard, foils, etc. Some casinos use a paper drill to deface used decks of playing cards, a process known as canceling. Cards are canceled so that they cannot be marked by cheaters outside of the casino and surreptitiously brought back into play. Paper drills can also be used domestically in place of a standard hole punch. Users of paper drills Paper-drilling machines are used in trade binderies, commercial print shops with finishing departments, in-house print shops, and domestic and copy shops. Depending on the volumes, these companies operate many different types of paper-drilling machines, from simple one- and two-spindle hand-operated tabletop paper drills to standard motorized four-spindle paper-drilling machines and fully automated, integrated paper-drilling systems. These high-performance paper drills can run in line with other finishing equipment. References Book design Paper art
Paper drilling
[ "Engineering" ]
669
[ "Book design", "Design" ]
3,527,984
https://en.wikipedia.org/wiki/Fluid%20catalytic%20cracking
Fluid catalytic cracking (FCC) is the conversion process used in petroleum refineries to convert the high-boiling-point, high-molecular-weight hydrocarbon fractions of petroleum (crude oils) into gasoline, alkene gases, and other petroleum products. The cracking of petroleum hydrocarbons was originally done by thermal cracking, now virtually replaced by catalytic cracking, which yields greater volumes of high-octane-rating gasoline and produces by-product gases, with more carbon-carbon double bonds (i.e. alkenes), that are of greater economic value than the gases produced by thermal cracking. The feedstock to the FCC conversion process usually is heavy gas oil (HGO), which is that portion of the petroleum (crude oil) that has an initial boiling-point temperature of about 340 °C or higher, at atmospheric pressure, and that has an average molecular weight that ranges from about 200 to 600 or higher; heavy gas oil also is known as "heavy vacuum gas oil" (HVGO). In the fluid catalytic cracking process, the HGO feedstock is heated to a high temperature and a moderate pressure, and then is placed in contact with a hot, powdered catalyst, which breaks the long-chain molecules of the high-boiling-point hydrocarbon liquids into short-chain molecules, which then are collected as a vapor. Economics Oil refineries use fluid catalytic cracking to correct the imbalance between the market demand for gasoline and the excess of heavy, high-boiling-range products resulting from the distillation of crude oil. As of 2006, FCC units were in operation at 400 petroleum refineries worldwide, and about one-third of the crude oil refined in those refineries is processed in an FCC to produce high-octane gasoline and fuel oils. During 2007, the FCC units in the United States processed a total of about 5.3 million barrels of feedstock per day, and FCC units worldwide processed about twice that amount. FCC units are less common in Europe, the Middle East and Africa (EMEA) because those regions have high demand for diesel and kerosene, which can be satisfied with hydrocracking. In the US, fluid catalytic cracking is more common because the demand for gasoline is higher. Flow diagram and process description Modern FCC units are all continuous processes which operate 24 hours a day for as long as 3 to 5 years between scheduled shutdowns for routine maintenance. There are several different proprietary designs that have been developed for modern FCC units. Each design is available under a license that must be purchased from the design developer by any petroleum refining company desiring to construct and operate an FCC of a given design. There are two different configurations for an FCC unit: the "stacked" type, where the reactor and the catalyst regenerator are contained in a single vessel with the reactor above the regenerator and a skirt between the two sections allowing the regenerator off-gas piping to connect to the top of the regenerator, and the "side-by-side" type, where the reactor and catalyst regenerator are in two separate vessels. The stacked configuration occupies less physical space in the refinery area. These are the major FCC designers and licensors: Side-by-side configuration: Lummus Technology ExxonMobil Research and Engineering (EMRE) Shell Global Solutions Axens / Stone & Webster Process Technology — currently owned by Technip UOP LLC - A Honeywell Company Stacked configuration: Kellogg Brown & Root (KBR) Each of the proprietary design licensors claims to have unique features and advantages.
A complete discussion of the relative advantages of each of the processes is beyond the scope of this article. Reactor and regenerator The reactor and regenerator are considered to be the heart of the fluid catalytic cracking unit. The schematic flow diagram of a typical modern FCC unit in Figure 1 below is based upon the "side-by-side" configuration. The preheated high-boiling petroleum feedstock (at about 315 to 430 °C), consisting of long-chain hydrocarbon molecules, is combined with recycle slurry oil from the bottom of the distillation column and injected into the catalyst riser, where it is vaporized and cracked into smaller molecules of vapor by contact and mixing with the very hot powdered catalyst from the regenerator. All of the cracking reactions take place in the catalyst riser within a period of 2–4 seconds. The hydrocarbon vapors "fluidize" the powdered catalyst, and the mixture of hydrocarbon vapors and catalyst flows upward to enter the reactor at a temperature of about 535 °C and a pressure of about 1.72 bar. The reactor is a vessel in which (a) the cracked product vapors are separated from the spent catalyst by flowing through a set of two-stage cyclones within the reactor, and (b) the spent catalyst flows downward through a steam stripping section to remove any hydrocarbon vapors before it returns to the catalyst regenerator. The flow of spent catalyst to the regenerator is regulated by a slide valve in the spent catalyst line. Since the cracking reactions produce some carbonaceous material (referred to as catalyst coke) that deposits on the catalyst and very quickly reduces the catalyst activity, the catalyst is regenerated by burning off the deposited coke with air blown into the regenerator. The regenerator operates at a temperature of about 715 °C and a pressure of about 2.41 bar, hence the regenerator operates at about 0.7 bar higher pressure than the reactor. The combustion of the coke is exothermic and produces a large amount of heat that is partially absorbed by the regenerated catalyst and provides the heat required for the vaporization of the feedstock and the endothermic cracking reactions that take place in the catalyst riser. For that reason, FCC units are often referred to as being 'heat balanced'. The hot catalyst (at about 715 °C) leaving the regenerator flows into a catalyst withdrawal well, where any entrained combustion flue gases are allowed to escape and flow back into the upper part of the regenerator. The flow of regenerated catalyst to the feedstock injection point below the catalyst riser is regulated by a slide valve in the regenerated catalyst line. The hot flue gas exits the regenerator after passing through multiple sets of two-stage cyclones that remove entrained catalyst from the flue gas. The amount of catalyst circulating between the regenerator and the reactor amounts to about 5 kg per kg of feedstock, which is equivalent to about 4.66 kg per litre of feedstock. Thus, an FCC unit processing about 75,000 barrels per day (12,000 m³ per day) of feedstock will circulate about 55,900 tonnes per day of catalyst. Main column The reaction product vapors (at 535 °C and a pressure of 1.72 bar) flow from the top of the reactor to the bottom section of the main column (commonly referred to as the main fractionator), where feed splitting takes place and where they are distilled into the FCC end products of cracked petroleum naphtha, fuel oil, and offgas. After further processing for removal of sulfur compounds, the cracked naphtha becomes a high-octane component of the refinery's blended gasolines. 
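As a rough cross-check of the catalyst-circulation figures quoted above, here is a minimal Python sketch; the 75,000 barrel-per-day feed rate and the litres-per-barrel conversion are the only inputs beyond the article's 4.66 kg-per-litre rule of thumb:

```python
# Rough catalyst-circulation estimate for an FCC unit, following the
# rule of thumb quoted above (~5 kg catalyst per kg feed, i.e. ~4.66 kg
# of catalyst per litre of feedstock). Figures are illustrative.

LITRES_PER_BARREL = 158.987

def catalyst_circulation_tonnes_per_day(feed_barrels_per_day: float,
                                        kg_catalyst_per_litre_feed: float = 4.66) -> float:
    litres_per_day = feed_barrels_per_day * LITRES_PER_BARREL
    return litres_per_day * kg_catalyst_per_litre_feed / 1000.0  # kg -> tonnes

# ~55,600 tonnes/day, matching the "about 55,900" figure above to within rounding
print(round(catalyst_circulation_tonnes_per_day(75_000)))
```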
The main fractionator offgas is sent to what is called a gas recovery unit where it is separated into butanes and butylenes, propane and propylene, and lower molecular weight gases (hydrogen, methane, ethylene and ethane). Some FCC gas recovery units may also separate out some of the ethane and ethylene. Although the schematic flow diagram above depicts the main fractionator as having only one sidecut stripper and one fuel oil product, many FCC main fractionators have two sidecut strippers and produce a light fuel oil and a heavy fuel oil. Likewise, many FCC main fractionators produce a light cracked naphtha and a heavy cracked naphtha. The terminology light and heavy in this context refers to the product boiling ranges, with light products having a lower boiling range than heavy products. The bottom product oil from the main fractionator contains residual catalyst particles which were not completely removed by the cyclones in the top of the reactor. For that reason, the bottom product oil is referred to as a slurry oil. Part of that slurry oil is recycled back into the main fractionator above the entry point of the hot reaction product vapors so as to cool and partially condense the reaction product vapors as they enter the main fractionator. The remainder of the slurry oil is pumped through a slurry settler. The bottom oil from the slurry settler contains most of the slurry oil catalyst particles and is recycled back into the catalyst riser by combining it with the FCC feedstock oil. The clarified slurry oil or decant oil is withdrawn from the top of the slurry settler for use elsewhere in the refinery, as a heavy fuel oil blending component, or as carbon black feedstock. Regenerator flue gas Depending on the choice of FCC design, the combustion in the regenerator of the coke on the spent catalyst may or may not be complete combustion to carbon dioxide (CO2). The combustion air flow is controlled so as to provide the desired ratio of carbon monoxide (CO) to carbon dioxide for each specific FCC design. In the design shown in Figure 1, the coke has only been partially combusted to CO. The combustion flue gas (containing CO and CO2) at 715 °C and at a pressure of 2.41 bar is routed through a secondary catalyst separator containing swirl tubes designed to remove 70 to 90 percent of the particulates in the flue gas leaving the regenerator. This is required to prevent erosion damage to the blades in the turbo-expander that the flue gas is next routed through. The expansion of the flue gas through a turbo-expander provides sufficient power to drive the regenerator's combustion air compressor. The electrical motor–generator can consume or produce electrical power. If the expansion of the flue gas does not provide enough power to drive the air compressor, the electric motor–generator provides the needed additional power. If the flue gas expansion provides more power than needed to drive the air compressor, then the electric motor–generator converts the excess power into electric power and exports it to the refinery's electrical system. The expanded flue gas is then routed through a steam-generating boiler (referred to as a CO boiler), where the carbon monoxide in the flue gas is burned as fuel to provide steam for use in the refinery as well as to comply with any applicable environmental regulatory limits on carbon monoxide emissions. 
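The motor–generator logic described above reduces to a simple power balance. A hedged sketch follows; all figures are illustrative, not from the source:

```python
# Sketch of the regenerator power-recovery balance described above.
# If the turbo-expander recovers more power from the flue gas than the
# combustion air compressor needs, the motor-generator exports the surplus
# to the refinery grid; otherwise it makes up the shortfall.

def motor_generator_duty_mw(expander_power_mw: float, compressor_demand_mw: float) -> float:
    """Positive result: power exported to the refinery grid.
    Negative result: power the motor-generator must supply."""
    return expander_power_mw - compressor_demand_mw

print(motor_generator_duty_mw(22.0, 18.5))   # 3.5 MW exported
print(motor_generator_duty_mw(15.0, 18.5))   # -3.5 MW drawn from the grid
```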
The flue gas is finally processed through an electrostatic precipitator (ESP) to remove residual particulate matter to comply with any applicable environmental regulations regarding particulate emissions. The ESP removes particulates in the size range of 2 to 20 μm from the flue gas. Particulate filter systems, known as Fourth Stage Separators (FSS), are sometimes required to meet particulate emission limits. These can replace the ESP when particulate emissions are the only concern. The steam turbine in the flue gas processing system (shown in the above diagram) is used to drive the regenerator's combustion air compressor during start-ups of the FCC unit until there is sufficient combustion flue gas to take over that task. Mechanism and products of catalytic cracking The fluid catalytic cracking process breaks large hydrocarbons by converting them to carbocations, which undergo myriad rearrangements. Figure 2 is a very simplified schematic diagram that exemplifies how the process breaks high-boiling, straight-chain alkane (paraffin) hydrocarbons into smaller straight-chain alkanes as well as branched-chain alkanes, branched alkenes (olefins) and cycloalkanes (naphthenes). The breaking of the large hydrocarbon molecules into smaller molecules is more technically referred to by organic chemists as scission of the carbon-to-carbon bonds. As depicted in Figure 2, some of the smaller alkanes are then broken and converted into even smaller alkenes and branched alkenes such as the gases ethylene, propylene, butylenes, and isobutylenes. Those olefinic gases are valuable for use as petrochemical feedstocks. The propylene, butylene and isobutylene are also valuable feedstocks for certain petroleum refining processes that convert them into high-octane gasoline blending components. As also depicted in Figure 2, the cycloalkanes (naphthenes) formed by the initial breakup of the large molecules are further converted to aromatics such as benzene, toluene, and xylenes, which boil in the gasoline boiling range and have much higher octane ratings than alkanes. In the cracking process, carbon is also produced and gets deposited on the catalyst (catalyst coke). The carbon-formation tendency, or amount of carbon in a crude or FCC feed, is measured with methods such as micro carbon residue, Conradson carbon residue, or Ramsbottom carbon residue. Catalysts FCC units continuously withdraw and replace some of the catalyst in order to maintain a steady level of activity. Modern FCC catalysts are fine powders with a bulk density of 0.80 to 0.96 g/cm3, a particle size distribution ranging from 10 to 150 μm, and an average particle size of 60 to 100 μm. The design and operation of an FCC unit is largely dependent upon the chemical and physical properties of the catalyst. The desirable properties of an FCC catalyst are: good stability to high temperature and to steam; high activity; large pore sizes; good resistance to attrition; and low coke production. A modern FCC catalyst has four major components: crystalline zeolite, matrix, binder, and filler. Zeolite is the active component and can comprise from about 15% to 50%, by weight, of the catalyst. Faujasite (also known as Type Y) is the zeolite used in FCC units. The zeolites are strong solid acids (equivalent to 90% sulfuric acid). The alumina matrix component of an FCC catalyst also contributes to catalytic activity sites. The binder and filler components provide the physical strength and integrity of the catalyst. 
The binder is usually silica sol and the filler is usually a clay (kaolin). The predominant suppliers of FCC catalysts worldwide are Albemarle Corporation, W.R. Grace Company, and BASF Catalysts (formerly Engelhard). History The first commercial use of catalytic cracking occurred in 1915 when Almer M. McAfee of Gulf Refining Company developed a batch process using aluminium chloride (a Friedel–Crafts catalyst known since 1877) to catalytically crack heavy petroleum oils. However, the prohibitive cost of the catalyst prevented the widespread use of McAfee's process at that time. In 1922, a French mechanical engineer named Eugene Jules Houdry and a French pharmacist named E. A. Prudhomme set up a laboratory near Paris to develop a catalytic process for converting lignite coal to gasoline. Supported by the French government, they built a small demonstration plant in 1929 that processed about 60 tons per day of lignite coal. The results indicated that the process was not economically viable, and it was subsequently shut down. Houdry had found that Fuller's earth, a clay mineral containing aluminosilicates, could convert oil derived from the lignite to gasoline. He then began to study the catalysis of petroleum oils and had some success in converting vaporized petroleum oil to gasoline. In 1930, the Vacuum Oil Company invited him to come to the United States and he moved his laboratory to Paulsboro, New Jersey. In 1931, the Vacuum Oil Company merged with Standard Oil of New York (Socony) to form the Socony-Vacuum Oil Company. In 1933, a small Houdry unit began processing petroleum oil. Because of the economic depression of the early 1930s, Socony-Vacuum was no longer able to support Houdry's work and gave him permission to seek help elsewhere. In 1933, Houdry and Socony-Vacuum joined with Sun Oil Company in developing the Houdry process. Three years later, in 1936, Socony-Vacuum converted an older thermal cracking unit in their Paulsboro refinery in New Jersey to a small demonstration unit using the Houdry process to catalytically crack petroleum oil. In 1937, Sun Oil began operation of a new Houdry unit at their Marcus Hook refinery in Pennsylvania. The Houdry process at that time used reactors with a fixed bed of catalyst and was a semi-batch operation involving multiple reactors, with some of the reactors in operation while other reactors were in various stages of regenerating the catalyst. Motor-driven valves were used to switch the reactors between online operation and offline regeneration, and a cycle timer managed the switching. Almost 50 percent of the cracked product was gasoline, as compared with about 25 percent from the thermal cracking processes. By 1938, when the Houdry process was publicly announced, Socony-Vacuum had eight additional units under construction. Licensing the process to other companies also began, and by 1940 there were 14 Houdry units in operation. The next major step was to develop a continuous process rather than the semi-batch Houdry process. That step was implemented by the advent of the moving-bed process known as the Thermofor Catalytic Cracking (TCC) process, which used a bucket conveyor-elevator to move the catalyst from the regeneration kiln to the separate reactor section. A small semi-commercial demonstration TCC unit was built in Socony-Vacuum's Paulsboro refinery in 1941 and operated successfully. 
A full-scale commercial TCC unit then began operation in 1943 at the Beaumont, Texas refinery of Magnolia Oil Company, an affiliate of Socony-Vacuum. By the end of World War II in 1945, a substantial TCC processing capacity was in operation. It is said that the Houdry and TCC units were a major factor in the winning of World War II by supplying the high-octane gasoline needed by the air forces of Great Britain and the United States for the more efficient higher-compression-ratio engines of the Spitfire and the Mustang. Supplies of American aviation gasoline also negated the deficit of high-octane gasoline for the Red Army Air Force. In the years immediately after World War II, the Houdriflow process and the air-lift TCC process were developed as improved variations on the moving-bed theme. Just like Houdry's fixed-bed reactors, the moving-bed designs were prime examples of good engineering, developing a method of continuously moving the catalyst between the reactor and regeneration sections. The first air-lift TCC unit began operation in October 1950 at the Beaumont, Texas refinery. The fluid catalytic cracking process had first been investigated in the 1920s by Standard Oil of New Jersey, but research on it was abandoned during the economic depression years of 1929 to 1939. In 1938, when the success of Houdry's process had become apparent, Standard Oil of New Jersey resumed the project, in the hope of competing with Houdry, as part of a consortium that included five oil companies (Standard Oil of New Jersey, Standard Oil of Indiana, Anglo-Iranian Oil, Texas Oil and Royal Dutch Shell), two engineering-construction companies (M. W. Kellogg Limited and Universal Oil Products) and a German chemical company (I.G. Farben). The consortium was called Catalytic Research Associates (CRA) and its purpose was to develop a catalytic cracking process which would not impinge on Houdry's patents. Chemical engineering professors Warren K. Lewis and Edwin R. Gilliland of the Massachusetts Institute of Technology (MIT) suggested to the CRA researchers that a low-velocity gas flow through a powder might "lift" it enough to cause it to flow in a manner similar to a liquid. Focused on that idea of a fluidized catalyst, researchers Donald Campbell, Homer Martin, Eger Murphree and Charles Tyson of Standard Oil of New Jersey (now ExxonMobil) developed the first fluidized catalytic cracking unit. Their U.S. Patent No. 2,451,804, A Method of and Apparatus for Contacting Solids and Gases, describes their milestone invention. Based on their work, M. W. Kellogg Company constructed a large pilot plant in the Baton Rouge, Louisiana refinery of Standard Oil of New Jersey. The pilot plant began operation in May 1940. Based on the success of the pilot plant, the first commercial fluid catalytic cracking plant (known as the Model I FCC) began processing petroleum oil in the Baton Rouge refinery on May 25, 1942, just four years after the CRA consortium was formed and in the midst of World War II. A little more than a month later, in July 1942, its processing rate had already been raised substantially. In 1963, that first Model I FCC unit was shut down after 21 years of operation and subsequently dismantled. In the many decades since the Model I FCC unit began operation, the fixed-bed Houdry units have all been shut down, as have most of the moving-bed units (such as the TCC units), while hundreds of FCC units have been built. 
During those decades, many improved FCC designs have evolved and cracking catalysts have been greatly improved, but the modern FCC units are essentially the same as that first Model I FCC unit. See also Cracking (chemistry) References External links Valero Refinery Tour (Houston, TX) Description and diagram of power train CD Tech website discussion of Lummus FCC and hydrotreating of catalytically cracked naphtha. The FCC Network Recovery of CO from a FCC using the COPureSM Process North American Catalysis Society Fluid Catalytic Cracking (University of British Columbia, Quak Foo, Lee) CFD Simulation of a Full-Scale Commercial FCC Regenerator Chemical processes Oil refining Fluidization
Fluid catalytic cracking
[ "Chemistry" ]
4,527
[ "Fluidization", "Petroleum technology", "Chemical processes", "Oil refining", "nan", "Chemical process engineering" ]
3,528,740
https://en.wikipedia.org/wiki/Potassium%20aluminium%20borate
Potassium aluminium borate (K2Al2B2O7) is an ionic compound composed of potassium ions, aluminium ions, and borate ions. Its crystal form exhibits nonlinear optical properties. An ultraviolet beam at 266 nm can be obtained by fourth harmonic generation (FHG) of 1064 nm Nd:YAG laser radiation through a nonlinear crystal, K2Al2B2O7 (KABO). References Nonlinear optical materials Borates Potassium compounds Aluminium compounds Crystals
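The wavelength arithmetic behind fourth harmonic generation is a standard relation (added for clarity, not from the source article): quadrupling the optical frequency divides the vacuum wavelength by four.

```latex
\lambda_{4\omega} \;=\; \frac{\lambda_{\omega}}{4} \;=\; \frac{1064\ \text{nm}}{4} \;=\; 266\ \text{nm}
```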
Potassium aluminium borate
[ "Chemistry", "Materials_science" ]
97
[ "Crystallography", "Crystals", "Inorganic compounds", "Inorganic compound stubs" ]
10,553,773
https://en.wikipedia.org/wiki/Marine%20chronometer
A marine chronometer is a precision timepiece that is carried on a ship and employed in the determination of the ship's position by celestial navigation. It is used to determine longitude by comparing Greenwich Mean Time (GMT) with the time at the current location, found from observations of celestial bodies. When first developed in the 18th century, it was a major technical achievement, as accurate knowledge of the time over a long sea voyage was vital for effective navigation in an era lacking electronic or communications aids. The first true chronometer was the life work of one man, John Harrison, spanning 31 years of persistent experimentation and testing that revolutionized naval (and later aerial) navigation. The term chronometer was coined from the Greek words chronos (meaning time) and metron (meaning measure). The 1713 book Physico-Theology by the English cleric and scientist William Derham includes one of the earliest theoretical descriptions of a marine chronometer. The term has recently become more commonly used to describe watches tested and certified to meet certain precision standards. History To determine a position on the Earth's surface, using classical models, it is necessary and sufficient to know the latitude, longitude, and altitude. Altitude considerations can naturally be ignored for vessels operating at sea level. Until the mid-1750s, accurate navigation at sea out of sight of land was an unsolved problem due to the difficulty in calculating longitude. Navigators could determine their latitude by measuring the sun's angle at noon (i.e., when it reached its highest point in the sky, or culmination) or, in the Northern Hemisphere, by measuring the angle of Polaris (the North Star) from the horizon (usually during twilight). To find their longitude, however, they needed a time standard that would work aboard a ship. Observation of regular celestial motions, such as Galileo's method based on observing Jupiter's natural satellites, was usually not possible at sea due to the ship's motion. The lunar distances method, initially proposed by Johannes Werner in 1514, was developed in parallel with the marine chronometer. The Dutch scientist Gemma Frisius was the first to propose the use of a chronometer to determine longitude, in 1530. The purpose of a chronometer is to measure accurately the time of a known fixed location. This is particularly important for navigation. Since the Earth rotates at a regular, predictable rate, the time difference between the chronometer and the ship's local time, if known accurately enough, can be used to calculate the longitude of the ship relative to the Prime Meridian (defined as 0°) or another reference meridian, using spherical trigonometry. Practical celestial navigation usually requires a marine chronometer to measure time, a sextant to measure the angles, an almanac giving schedules of the coordinates of celestial objects, a set of sight reduction tables to help perform the height and azimuth computations, and a chart of the region. With sight reduction tables, the only calculations required are addition and subtraction. Most people can master simpler celestial navigation procedures after a day or two of instruction and practice, even using manual calculation methods. The use of a marine chronometer to determine longitude by chronometer permits navigators to obtain a reasonably accurate position fix. 
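The underlying arithmetic is straightforward: the Earth turns 360° in 24 hours, or 15° per hour. A minimal Python sketch follows (illustrative, not from the source); it also quantifies the east–west position error caused by a clock error, which the next paragraph states as a rule of thumb:

```python
# Longitude by chronometer: the difference between Greenwich time and
# local apparent time, at 15 degrees per hour, gives the longitude.
from math import cos, radians

def longitude_deg(gmt_hours: float, local_hours: float) -> float:
    """Positive result = west of Greenwich (local time behind GMT)."""
    return (gmt_hours - local_hours) * 15.0

def east_west_error_nm(clock_error_seconds: float, latitude_deg: float) -> float:
    """4 s of clock error = 1 arcminute of longitude; one arcminute spans
    ~1 nautical mile at the equator, shrinking with cos(latitude)."""
    arcminutes = clock_error_seconds / 4.0
    return arcminutes * cos(radians(latitude_deg))

print(longitude_deg(15.0, 12.0))        # local noon at 15:00 GMT -> 45 degrees west
print(east_west_error_nm(4.0, 0.0))     # ~1.0 NM at the equator
print(east_west_error_nm(4.0, 60.0))    # ~0.5 NM at 60 degrees latitude
```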
For every four seconds that the time source is in error, the east–west position may be off by up to just over one nautical mile; the error is greatest at the equator, since the Earth's angular rotation rate is constant but the surface distance spanned by a minute of longitude shrinks with increasing latitude. The creation of a timepiece which would work reliably at sea was difficult. Until the 20th century, the best timekeepers were pendulum clocks, but both the rolling of a ship at sea and the up to 0.2% variations in the gravity of Earth made a simple gravity-based pendulum useless both in theory and in practice. First examples Christiaan Huygens, following his invention of the pendulum clock in 1656, made the first attempt at a marine chronometer in 1673 in France, under the sponsorship of Jean-Baptiste Colbert. In 1675, Huygens, who was receiving a pension from Louis XIV, invented a chronometer that employed a balance wheel and a spiral spring for regulation, instead of a pendulum, opening the way to marine chronometers and modern pocket watches and wristwatches. He obtained a patent for his invention from Colbert, but his clock remained imprecise at sea. Huygens' attempt in 1675 to obtain an English patent from Charles II stimulated Robert Hooke, who claimed to have conceived of a spring-driven clock years earlier, to attempt to produce one and patent it. During 1675, Huygens and Hooke each delivered two such devices to Charles, but none worked well and neither Huygens nor Hooke received an English patent. It was during this work that Hooke formulated Hooke's law. The first published use of the term chronometer was in 1684, in a theoretical work by Kiel professor Matthias Wasmuth. This was followed by a further theoretical description of a chronometer in works published by the English scientist William Derham in 1713. Derham's principal work, Physico-theology, or a demonstration of the being and attributes of God from his works of creation, also proposed the use of vacuum sealing to ensure greater accuracy in the operation of clocks. Attempts to construct a working marine chronometer were begun by Jeremy Thacker in England in 1714, and by Henry Sully in France two years later. Sully published his work in 1726, but neither his nor Thacker's models were able to resist the rolling of the seas and keep precise time in shipboard conditions. In 1714, the British government offered a longitude prize for a method of determining longitude at sea, with the awards ranging from £10,000 to £20,000 (£2 million to £4 million in modern terms) depending on accuracy. John Harrison, a Yorkshire carpenter, submitted a project in 1730, and in 1735 completed a clock based on a pair of counter-oscillating weighted beams connected by springs whose motion was not influenced by gravity or the motion of a ship. His first two sea timepieces, H1 and H2 (completed in 1741), used this system, but he realised that they had a fundamental sensitivity to centrifugal force, which meant that they could never be accurate enough at sea. Construction of his third machine, designated H3, in 1759 included novel circular balances and the invention of the bi-metallic strip and caged roller bearings, inventions which are still widely used. However, H3's circular balances still proved too inaccurate and he eventually abandoned the large machines. Harrison solved the precision problems with his much smaller H4 chronometer design in 1761. H4 looked much like a large five-inch (12 cm) diameter pocket watch. In 1761, Harrison submitted H4 for the £20,000 longitude prize. 
His design used a fast-beating balance wheel controlled by a temperature-compensated spiral spring. These features remained in use until stable electronic oscillators allowed very accurate portable timepieces to be made at affordable cost. In 1767, the Board of Longitude published a description of his work in The Principles of Mr. Harrison's time-keeper. A French expedition under Charles-François-César Le Tellier de Montmirail performed the first measurement of longitude using marine chronometers aboard Aurore in 1767. Further development In France in 1748, Pierre Le Roy invented the detent escapement characteristic of modern chronometers. In 1766, he created a revolutionary chronometer that incorporated a detent escapement, the temperature-compensated balance and the isochronous balance spring: Harrison showed the possibility of having a reliable chronometer at sea, but these developments by Le Roy are considered by Rupert Gould to be the foundation of the modern chronometer. Le Roy's innovations made the chronometer a much more accurate instrument than had been anticipated. Ferdinand Berthoud in France, as well as Thomas Mudge in Britain, also successfully produced marine timekeepers. Although none were simple, they proved that Harrison's design was not the only answer to the problem. The greatest strides toward practicality came at the hands of Thomas Earnshaw and John Arnold, who in 1780 developed and patented simplified, detached, "spring detent" escapements, moved the temperature compensation to the balance, and improved the design and manufacturing of balance springs. This combination of innovations served as the basis of marine chronometers until the electronic era. The new technology was initially so expensive that not all ships carried chronometers, as illustrated by the fateful last journey of the East Indiaman Arniston, shipwrecked with the loss of 372 lives. However, by 1825, the Royal Navy had begun routinely supplying its vessels with chronometers. Beginning in 1820, the British Royal Observatory in Greenwich tested marine chronometers in an Admiralty-instigated trial or "chronometer competition" program intended to encourage the improvement of chronometers. In 1840, a new series of trials in a different format was begun by the seventh Astronomer Royal, George Biddell Airy. These trials continued in much the same format until the outbreak of World War I in 1914, at which point they were suspended. Although the formal trials ceased, the testing of chronometers for the Royal Navy did not. Marine chronometer makers looked to a phalanx of astronomical observatories located in Western Europe to conduct accuracy assessments of their timepieces. Once mechanical timepiece movements developed sufficient precision to allow for adequately accurate marine navigation, these third-party independent assessments also developed into what became known as "chronometer competitions" at the astronomical observatories located in Western Europe. The Neuchâtel Observatory, Geneva Observatory, Besançon Observatory, Kew Observatory, German Naval Observatory Hamburg and Glashütte Observatory are prominent examples of observatories that certified the accuracy of mechanical timepieces. The observatory testing regime typically lasted for 30 to 50 days and contained accuracy standards that were far more stringent and difficult than modern standards such as those set by the Contrôle Officiel Suisse des Chronomètres (COSC). 
When a movement passed the observatory test, it became certified as an observatory chronometer and received a Bulletin de Marche from the observatory, stipulating the performance of the movement. It was common for ships at the time to observe a time ball, such as the one at the Royal Observatory, Greenwich, to check their chronometers before departing on a long voyage. Every day, ships would anchor briefly in the River Thames at Greenwich, waiting for the ball at the observatory to drop at precisely 1 p.m. This practice was in small part responsible for the subsequent adoption of Greenwich Mean Time as an international standard. (Time balls became redundant around 1920 with the introduction of radio time signals, which have themselves largely been superseded by GPS time.) In addition to setting their time before departing on a voyage, ship chronometers were also routinely checked for accuracy while at sea by carrying out lunar or solar observations. In typical use, the chronometer would be mounted in a sheltered location below decks to avoid damage and exposure to the elements. Mariners would use the chronometer to set a so-called hack watch, which would be carried on deck to make the astronomical observations. Though much less accurate (and less expensive) than the chronometer, the hack watch would be satisfactory for a short period of time after setting it (i.e., long enough to make the observations). Rationalizing production methods Although industrial production methods began revolutionizing watchmaking in the middle of the 19th century, chronometer manufacture remained craft-based much longer and was dominated by British and Swiss manufacturers. Around the turn of the 20th century, Swiss makers such as Ulysse Nardin made great strides toward incorporating modern production methods and using fully interchangeable parts, but it was only with the onset of World War II that the Hamilton Watch Company in the United States perfected the process of mass production, which enabled it to produce thousands of its Hamilton Model 21 and Model 22 chronometers from 1942 onwards for the branches of the United States military and merchant marine as well as other Allied forces during World War II. The Hamilton Model 21 marine chronometer had a chain-drive fusee, and its second hand advanced in discrete increments over a sub-dial marked to 60 seconds. In Germany, where marine chronometers had been imported or built with key foreign components, a so-called Einheitschronometer (a unified chronometer design built around a three-pillar movement) was developed through a collaboration between the Wempe Chronometerwerke and A. Lange & Söhne companies to make more efficient production possible. The development of a precise and inexpensive unified chronometer was an initiative driven by the German naval command and the Aviation Ministry in 1939. Serial production began in 1942. All parts were made in Germany and were interchangeable. During the course of World War II, modifications that became necessary when raw materials became scarce were applied, and work was compulsorily, and sometimes voluntarily, shared between various German manufacturers to speed up production. The production of German unified-design chronometers with their harmonized components continued long after World War II, both in Germany and in the Soviet Union, which confiscated the original technical drawings and set up a production line in Moscow in 1949 that produced the first Soviet MX6 chronometers containing German-made movements. 
From 1952 until 1997, MX6 chronometers with minor alterations devised by NII Chasprom (the horological institute of the Soviet era) were produced from components all made in the Soviet Union. The German Einheitschronometer ultimately became the mechanical marine timekeeper design produced in the highest volume, with about 58,000 units produced. Of these, fewer than 3,000 were produced during World War II, about 5,000 after the war in West and East Germany, and about 50,000 in the Soviet Union and later post-Soviet Russia. About 13,000 units of the Hamilton Model 21 marine chronometer were produced during and after World War II. Despite the success of the Einheitschronometer and the Hamilton chronometers, chronometers made in the old way never disappeared from the marketplace during the era of mechanical timekeepers. Thomas Mercer Chronometers was among the companies that continued to make them. Historical significance Ships' marine chronometers are the most exact portable mechanical timepieces ever produced, and in a static environment were surpassed only by the non-portable precision pendulum clocks used in observatories. They served, alongside the sextant, to determine the location of ships at sea. The seafaring nations invested richly in the development of these precision instruments, as pinpointing location at sea gave a decisive naval advantage. Without their accuracy, and the accuracy of the feats of navigation that marine chronometers enabled, it is arguable that the ascendancy of the Royal Navy, and by extension that of the British Empire, might not have occurred so overwhelmingly; the formation of the empire by wars and conquests of colonies abroad took place in a period in which British vessels had reliable navigation due to the chronometer, while their Portuguese, Dutch, and French opponents did not. For example, the French were well established in India and other places before Britain, but were defeated by naval forces in the Seven Years' War. Rating and maintaining marine chronometers was deemed important well into the 20th century: after World War I, the work of the British Royal Observatory's Chronometer Department became largely confined to the rating of chronometers and watches that the Admiralty already owned and to providing acceptance testing. In 1937, a workshop was set up for the first time by the Time Department for the repair and adjustment of chronometers and watches issued to the British armed forces. These maintenance activities had previously been outsourced to commercial workshops. From about the 1960s onwards, mechanical spring-detent marine chronometers were gradually supplanted by chronometers based on electrical and electronic technologies. In 1985, the British Ministry of Defence invited bids by tender for the disposal of their mechanical Hamilton Model 21 marine chronometers. The US Navy kept their Hamilton Model 21 marine chronometers in service as backups to the Loran-C hyperbolic radio navigation system until 1988, when the GPS global navigation satellite system was approved as reliable. At the end of the 20th century, the production of mechanical marine chronometers had declined to the point where only a few were being made to special order by the First Moscow Watch Factory 'Kirov' (Poljot) in Russia, Wempe in Germany and Mercer in England. The most complete international collection of marine chronometers, including Harrison's H1 to H4, is at the Royal Observatory, Greenwich, in London, UK. Characteristics The crucial problem was to find a resonator that remained unaffected by the changing conditions met by a ship at sea. 
The balance wheel, harnessed to a spring, solved most of the problems associated with the ship's motion. Unfortunately, the elasticity of most balance spring materials changes with temperature. To compensate for this ever-changing spring strength, the majority of chronometer balances used bi-metallic strips to move small weights toward and away from the centre of oscillation, thus altering the period of the balance to match the changing force of the spring. The balance spring problem was solved with a nickel-steel alloy named Elinvar for its invariable elasticity at normal temperatures. The inventor was Charles Édouard Guillaume, who won the 1920 Nobel Prize in Physics in recognition of his metallurgical work. The escapement serves two purposes. First, it allows the train to advance fractionally and record the balance's oscillations. At the same time, it supplies minute amounts of energy to counter tiny losses from friction, thus maintaining the momentum of the oscillating balance. The escapement is the part that ticks. Since the natural resonance of an oscillating balance serves as the heart of a chronometer, chronometer escapements are designed to interfere with the balance as little as possible. There are many constant-force and detached escapement designs, but the most common are the spring detent and pivoted detent. In both of these, a small detent locks the escape wheel and allows the balance to swing completely free of interference except for a brief moment at the centre of oscillation, when it is least susceptible to outside influences. At the centre of oscillation, a roller on the balance staff momentarily displaces the detent, allowing one tooth of the escape wheel to pass. The escape wheel tooth then imparts its energy to a second roller on the balance staff. Since the escape wheel turns in only one direction, the balance receives impulse in only one direction. On the return oscillation, a passing spring on the tip of the detent allows the unlocking roller on the staff to move by without displacing the detent. The weakest link of any mechanical timekeeper is the escapement's lubrication. When the oil thickens through age or temperature, or dissipates through humidity or evaporation, the rate will change, sometimes dramatically, as the balance motion decreases through higher friction in the escapement. A detent escapement has a strong advantage over other escapements in that it needs no lubrication. The impulse from the escape wheel to the impulse roller is nearly dead-beat, meaning there is little sliding action needing lubrication. Chronometer escape wheels and passing springs are typically gold, owing to the metal's lower sliding friction compared with brass and steel. Chronometers often included other innovations to increase their efficiency and precision. Hard stones such as ruby and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Diamond was often used as the cap stone for the lower balance staff pivot to prevent wear from years of the heavy balance turning on the small pivot end. Until the end of mechanical chronometer production in the third quarter of the 20th century, makers continued to experiment with things like ball bearings and chrome-plated pivots. The timepieces were normally protected from the elements and kept below decks in a fixed position in a traditional box suspended in gimbals (a set of rings connected by bearings). 
This keeps the chronometer isolated in a horizontal "dial up" position, to counter the timing errors that ship inclination (rocking) movements would otherwise induce in the balance wheel. Marine chronometers always contain a maintaining power, which keeps the chronometer going while it is being wound, and a power reserve indicator to show how long the chronometer will continue to run without being wound. These technical provisions usually yield timekeeping in mechanical marine chronometers accurate to within 0.5 second per day. Chronometer rating In strictly horological terms, "rating" a chronometer means that, prior to the instrument entering service, the average rate at which it gains or loses time per day is observed and recorded on a rating certificate which accompanies the instrument. This daily rate is used in the field to correct the time indicated by the instrument to get an accurate time reading. Even the best-made chronometer with the finest temperature compensation etc. exhibits two types of error: (1) random and (2) consistent. The quality of design and manufacture of the instrument keeps the random errors small. In principle, the consistent errors should be amenable to elimination by adjustment, but in practice it is not possible to make the adjustment so precisely that this error is completely eliminated, so the technique of rating is used. The rate will also change while the instrument is in service, due to, e.g., thickening of the oil, so on long expeditions the instrument's rate would be periodically checked against accurate time determined by astronomical observations. Marine chronometer use today Since the 1990s, boats and ships can use several Global Navigation Satellite Systems (GNSS) to navigate all the world's lakes, seas and oceans. Maritime GNSS units include functions useful on water, such as "man overboard" (MOB) functions that allow instantly marking the location where a person has fallen overboard, which simplifies rescue efforts. GNSS may be connected to the ship's self-steering gear and chartplotters using the NMEA 0183 interface, and can also improve the security of shipping traffic by enabling Automatic Identification Systems (AIS). Even with these convenient 21st-century technological tools, modern practical navigators usually use celestial navigation using electric-powered time sources in combination with satellite navigation. Small handheld computers, laptops, navigational calculators and even scientific calculators enable modern navigators to "reduce" sextant sights in minutes, by automating all the calculation and/or data lookup steps. Using multiple independent position-fix methods, without relying solely on subject-to-failure electronic systems, helps the navigator detect errors. Professional mariners are still required to be proficient in traditional piloting and celestial navigation, which requires the use of a precisely adjusted and rated chronometer, either autonomous or periodically corrected by external time signals. These abilities are still a requirement for certain international mariner certifications, such as Officer in Charge of Navigational Watch, and Master and Chief Mate deck officers, and they are also valuable to offshore yachtmasters on long-distance private cruising yachts. Modern marine chronometers can be based on quartz clocks that are corrected periodically by satellite time signals or radio time signals (see radio clock). These quartz chronometers are not always the most accurate quartz clocks when no signal is received, and their signals can be lost or blocked. 
However, there are autonomous quartz movements, even in wrist watches, that are accurate to within 5 to 20 seconds per year. At least one quartz chronometer made for advanced navigation utilizes multiple quartz crystals, which are corrected by a computer using an average value, in addition to GPS time signal corrections. See also Celestial navigation Sextant Clockmaker Thomas Earnshaw, inventor of the standard chronometer escapement Larcum Kendall Noon Gun Time ball Time signal Railroad chronometer Rupert Gould, author of an important history of the marine chronometer Radio-controlled watch Watchmaker Timeline of invention References External links National Maritime Museum, Greenwich Henri MOTEL n°258 Chronomètre de Marine 40 heures Marine Chronometer Kaliber 100 - Presentation of marine chronometers of "Glashütter Uhrenbetriebe VEB" with picture and explanation A working chronometer, National Museum of Australia. Short MPEG film showing an 1825 Barraud chronometer in action. Marine chronometer meaning at Wiktionary Web chronometer Clocks Horology Navigational equipment
Marine chronometer
[ "Physics", "Technology", "Engineering" ]
5,067
[ "Machines", "Physical quantities", "Time", "Horology", "Clocks", "Measuring instruments", "Physical systems", "Spacetime" ]
10,554,103
https://en.wikipedia.org/wiki/Chronometer%20watch
A chronometer (from Greek khronómetron, "time measurer") is an extraordinarily accurate mechanical timepiece, with an original focus on the needs of maritime navigation. In Switzerland, timepieces certified by the Contrôle Officiel Suisse des Chronomètres (COSC) may be marked as Certified Chronometer or Officially Certified Chronometer. Outside Switzerland, equivalent bodies, such as the Japan Chronometer Inspection Institute, have in the past certified timepieces to similar standards, although use of the term has not always been strictly controlled. History The term chronometer was coined by Jeremy Thacker of Beverley, England in 1714, referring to his invention of a clock ensconced in a vacuum chamber. The term chronometer is also used to describe a marine chronometer used for celestial navigation and determination of longitude. The marine chronometer was invented by John Harrison in 1730. This was the first of a series of chronometers that enabled accurate marine navigation. From then on, an accurate chronometer was essential to open-ocean marine or air navigation out of sight of land. Early in the 20th century, the advent of radiotelegraphy time signals supplemented the onboard marine chronometer for marine and air navigation, and various radio navigation systems were invented, developed, and implemented during and following the Second World War (e.g., Gee, Sonne (a.k.a. Consol), LORAN-A and -C, the Decca Navigator System and the Omega Navigation System) that significantly reduced the need for positioning using an onboard marine chronometer. These culminated in the development and implementation of global navigation satellite systems such as GPS in the last quarter of the 20th century. The marine chronometer is no longer used as the primary means for navigation at sea, although it is still required as a backup, since radio systems and their associated electronics can fail for a variety of reasons. Once mechanical timepiece movements developed sufficient precision to allow for accurate marine navigation, there eventually developed what became known as "chronometer competitions" at astronomical observatories located in Europe. The Neuchâtel Observatory, Geneva Observatory, Besançon Observatory, and Kew Observatory are prominent examples of observatories that certified the accuracy of mechanical timepieces. The observatory testing regime typically lasted for 30 to 50 days and contained accuracy standards that were far more stringent and difficult than modern standards such as those set by COSC. When a movement passed the observatory trial, it became certified as an observatory chronometer and received a Bulletin de Marche from the observatory, stipulating the performance of the movement. Because only very few movements were ever given the attention and manufacturing level necessary to pass the observatory standards, there are very few observatory chronometers in existence. Most observatory chronometers had movements so specialized for accuracy that they could never withstand being used as wristwatches in normal usage. They were useful only for accuracy competitions, and so never were sold to the public for usage. However, in 1966 and 1967, Girard-Perregaux manufactured approximately 670 wristwatches with the Calibre 32A movement, which became observatory chronometers certified by the Neuchâtel Observatory, while in 1968, 1969 and 1970, Seiko had 226 wristwatches with its 4520 and 4580 calibres certified. 
These observatory chronometers were then sold to the public for normal usage as wristwatches, and some examples may still be found today. The observatory competitions ended with the advent of the quartz watch movement in the late 1960s and early 1970s, which generally has superior accuracy at far lower cost. In 2009, the Watch Museum of Le Locle renewed the tradition and launched a new chronometry contest based on ISO 3159 certification. In 2017, the Observatory Chronometer Database (OCD) went online; it contains all mechanical timepieces ("chronometres-mecaniques") certified as observatory chronometers by the observatory in Neuchâtel from 1945 to 1967, due to a successful participation in the competition which resulted in the issuance of a Bulletin de Marche. All database entries are submissions to the wristwatch category ("chronometres-bracelet") at the observatory competition. The term chronometer is often wrongly used by the general public to refer to timekeeping instruments fitted with an additional mechanism that may be set in motion by pushbuttons to enable measurement of the duration of an event. Such an instrument, typically called a stopwatch, is in fact a chronograph or chronoscope. It may be chronometer certified, provided it meets the criteria set for the standard. Mechanical chronometers A mechanical chronometer is a spring-driven escapement timekeeper, like a watch, but its parts are more massively built. Changes in the elasticity of the balance spring caused by variations in temperature are compensated for by devices built into it. Chronometers often included other innovations to increase their efficiency and precision. Hard stones such as diamond, ruby, and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Chronometer makers also took advantage of the physical properties of rare metals such as gold, platinum, and palladium. Complications In horological terms, a complication in a mechanical watch is a special feature that causes the design of the watch movement to become more complicated. Examples of complications include: tourbillon, perpetual calendar, minute repeater, equation of time, power reserve, moon phases, chronograph, rattrapante, and grande sonnerie. More recent times Quartz and atomic timepieces have made mechanical chronometers obsolete as time standards used scientifically and industrially. Most watchmakers do still produce them; however, they are now mostly status symbols, promoted by luxury watchmakers as a mark of fine craftsmanship and aesthetics. Certified chronometers More than 1.8 million officially certified chronometer certificates, mostly for mechanical wristwatch chronometers (wristwatches) with sprung balance oscillators, are delivered each year for movements that have passed COSC's most exacting tests, each singly identified by an officially recorded individual serial number. According to COSC, an officially certified chronometer is a high-precision watch capable of displaying the seconds and housing a movement that has been tested over several days, in different positions, and at different temperatures, by an official, neutral body (COSC). Each movement is individually tested for several consecutive days, in five positions and at three temperatures. Any watch with the denomination "certified chronometer" or "officially certified chronometer" contains a certified movement and matches the criteria in ISO 3159, Timekeeping instruments—wristwatch chronometers with spring balance oscillator. 
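As an illustration of a certification-style rate check, here is a minimal Python sketch. The −4/+6 seconds-per-day band used here is the commonly cited COSC limit for the mean daily rate of mechanical wristwatch movements, the measurement series is invented, and ISO 3159 defines several further criteria (positions, temperatures, rate variation) that are not modeled:

```python
# Sketch: check a movement's mean daily rate against an assumed
# -4/+6 s/day band. Real COSC/ISO 3159 testing involves several
# additional criteria beyond the mean daily rate checked here.

def mean_daily_rate(daily_deviations_s: list[float]) -> float:
    """Average of observed daily gains (+) or losses (-), in seconds."""
    return sum(daily_deviations_s) / len(daily_deviations_s)

def passes_mean_rate(daily_deviations_s: list[float],
                     low: float = -4.0, high: float = 6.0) -> bool:
    return low <= mean_daily_rate(daily_deviations_s) <= high

print(passes_mean_rate([+2.1, +1.8, +2.4, +1.9, +2.2]))  # True (~+2.1 s/day)
print(passes_mean_rate([+7.0, +6.5, +7.2, +6.8, +7.1]))  # False (~+6.9 s/day)
```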
References External links American Watchmakers-Clockmakers Institute Federation of the Swiss Watch Industry Contrôle Officiel Suisse des Chronomètres - COSC Accuracy of wristwatches Observatory Chronometer Database (OCD) Chronometer certification Cronosurf - The online chronometer watch - Web Chronograph Chronometer web version Clocks Watches
Chronometer watch
[ "Physics", "Technology", "Engineering" ]
1,493
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
10,557,570
https://en.wikipedia.org/wiki/Parity-check%20matrix
In coding theory, a parity-check matrix of a linear block code C is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword and is also used in decoding algorithms. Definition Formally, a parity check matrix H of a linear code C is a generator matrix of the dual code, C⊥. This means that a codeword c is in C if and only if the matrix-vector product Hc⊤ = 0 (some authors would write this in an equivalent form, cH⊤ = 0). The rows of a parity check matrix are the coefficients of the parity check equations. That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix H = [[0, 0, 1, 1], [1, 1, 0, 0]] compactly represents the parity check equations x3 + x4 = 0 and x1 + x2 = 0, which must be satisfied for the vector (x1, x2, x3, x4) to be a codeword of C. From the definition of the parity-check matrix it directly follows that the minimum distance of the code is the minimum number d such that every d − 1 columns of a parity-check matrix H are linearly independent while there exist d columns of H that are linearly dependent. Creating a parity check matrix The parity check matrix for a given code can be derived from its generator matrix (and vice versa). If the generator matrix for an [n,k]-code is in standard form G = [I_k | P], then the parity check matrix is given by H = [−P⊤ | I_{n−k}], because GH⊤ = P − P = 0. Negation is performed in the finite field Fq. Note that if the characteristic of the underlying field is 2 (i.e., 1 + 1 = 0 in that field), as in binary codes, then −P = P, so the negation is unnecessary. For example, if a binary code has the generator matrix G = [[1, 0, 1, 1], [0, 1, 0, 1]] (standard form with P = [[1, 1], [0, 1]]), then its parity check matrix is H = [[1, 0, 1, 0], [1, 1, 0, 1]]. It can be verified that G is a k × n matrix, while H is an (n − k) × n matrix; here both are 2 × 4. Syndromes For any (row) vector x of the ambient vector space, s = Hx⊤ is called the syndrome of x. The vector x is a codeword if and only if s = 0. The calculation of syndromes is the basis for the syndrome decoding algorithm. See also Hamming code Notes References Coding theory
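A minimal sketch of the syndrome computation (Python with NumPy; the matrices are the reconstructed examples above, and the arithmetic is over GF(2)):

```python
# Deciding codeword membership with a parity-check matrix via syndromes.
import numpy as np

H = np.array([[1, 0, 1, 0],
              [1, 1, 0, 1]])  # parity-check matrix of the [4,2] example above

def syndrome(x: np.ndarray) -> np.ndarray:
    """s = H x^T over GF(2); x is a codeword iff s is all zeros."""
    return H @ x % 2

print(syndrome(np.array([1, 0, 1, 1])))  # [0 0] -> codeword (first row of G)
print(syndrome(np.array([1, 1, 1, 1])))  # [0 1] -> nonzero, not a codeword
```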
Parity-check matrix
[ "Mathematics" ]
462
[ "Discrete mathematics", "Coding theory" ]
10,559,785
https://en.wikipedia.org/wiki/Crude%20drug
Crude drugs are drugs of plant, animal and microbial origin that contain natural substances that have undergone only the processes of collection and drying. The term natural substances refers to those substances found in nature that have not had man-made changes made in their molecular structure. They are used as medicine for humans and animals, internally and externally, for curing diseases, e.g., senna and cinchona. A crude drug is any naturally occurring, unrefined substance derived from organic or inorganic sources such as plants, animals, bacteria, organs or whole organisms, intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease in humans or other animals. Overview Crude drugs are unrefined natural medications in their raw forms. Prior to the 1950s, every pharmacy student learned about crude drugs in pharmacognosy class. Pharmacognosy is the study of the proper horticulture, harvesting and uses of the raw medications found in nature. Raising, harvesting and selling crude drugs was how many large pharmaceutical companies started out. Companies such as Eli Lilly and Company sold crude drugs to pharmacists to save them time and money, but the early pharmacy graduate would know how to raise their own crude drugs if need be. Morphology and organoleptic characters of crude drugs Identification of the crude drug by organoleptic characters is one of the important aspects of pharmacognostical study. Morphological study follows a special terminology which must be known to a pharmacognosist. The morphological terminology is derived from botany and zoology, depending upon the source of the crude drug. In general, the color, odor, taste, size, shape, and special features, like touch, texture, fracture, presence of trichomes, and presence of ridges, of crude drugs are studied under morphology. The aromatic odor of umbelliferous fruits and the sweet taste of liquorice are examples of this type of evaluation. The study of the form of a crude drug is morphology, while the description of the form is morphography. However, the shape and size of crude drugs as described in official books should only be considered as guidelines and may vary depending upon several factors. For example, the color of the crude drug may fade if it is exposed to sunlight for a very long duration or if the drug is not stored properly. The conditions under which the drug is grown or cultivated (i.e., the availability of proper irrigation and fertilizers, or exposure to high temperature) may also influence its size, and crude drugs grown in adverse conditions may be of small size. The color of the flowers, as in the case of Catharanthus roseus and Catharanthus alba; the presence of thorns in Asparagus racemosus and their absence in Asparagus officinalis; and the arrangement of flowers, as in Withania somnifera versus Withania coagulans, can help in differentiating the varieties of the same plant. The arrangement of cracks and wrinkles in the stem bark of Cinchona varieties, or the color of aloe, can likewise separate varieties. The adulteration of the seed of Strychnos nux-vomica with the seed of Strychnos nux-blanda or Strychnos potatorum, of caraway with Indian dill, and of Alexandrian senna with dog senna or palthe senna is identified by morphological means. In the case of unorganized (acellular) drugs, the form of the drug depends totally on the method of preparation of the drug. Thus, gum acacia is found in the form of ovoid tears, while tragacanth is marketed as vermiform ribbons with longitudinal striations. 
Evaluation To evaluate a drug means to identify it and to determine its quality and purity. The identity of a drug can be established by actual collection of the drug from a plant or animal that has been positively identified. The evaluation of a drug involves a number of methods that may be classified as follows: Organoleptic and morphological evaluation: evaluation by means of the organs of the senses, noting the color, odor, taste, size, shape and special features such as texture. Microscopic: for identification of the pure powdered drug. This method allows more detailed examination of a drug and its identification by known histological characters. The microscope, by virtue of its ability to magnify, permits minute sections under study to be enlarged so that leaf constants, stomatal index and palisade ratio can be determined. Biologic: pharmacological activities of drugs are evaluated by bioassays. When the potency of a crude drug or its preparations is estimated by measuring its effect on living organisms, such as bacteria, fungal growth, or animal tissue, compared to a standard drug, this is known as biological evaluation. By these methods a crude drug can be assessed and further clinical trials can be recommended. Chemical: chemical assays are best for determining potency and active constituents. They comprise different tests and assays. The isolation, purification and identification of active constituents are methods of evaluation. Quantitative chemical tests, such as acid value and saponification value, are also covered under these techniques. Physical: physical constants are applied to active principles. These are helpful in evaluation with reference to moisture content, specific gravity, density, optical rotation, etc. History The usage of crude drugs dates to prehistoric times. Traditional medicine often incorporates the gathering and preparation of material from natural sources, particularly herbs. In such practice, the active ingredients and method of action are largely unknown to the practitioner. In recent history, the development of modern chemistry and the application of the scientific method shaped the use of crude drugs. The use of crude drugs eventually reached a zenith in the early 1900s and gave way to the use of purified active ingredients from the natural source. Currently the use and exploration of crude drugs has again gained prominence in the medical community. The realization that many completely unknown substances are yet to be discovered from crude drugs has created new interest in pharmacognosy and has led to many medical breakthroughs. In 1907, the Pure Food and Drug Act was implemented and standardization of crude drugs took place. Often the USP would specify what percentage of active ingredient was needed to claim that a crude drug met USP standards. An example of standardization is as follows (from the United States Pharmacopeia): Opium is the air-dried milky exudate obtained by incising the unripe capsules of Papaver somniferum Linne or its variety album De Candolle (Fam. Papaveraceae). Opium in its normal air-dried condition yields not less than 9.5 percent of anhydrous morphine. Use in Chinese medicine Crude medicine (also known as crude drug in the Chinese materia medica) refers to bulk drugs from the Chinese materia medica that have undergone basic processing and treatment. References See also Traditional Chinese medicine Chinese herbology Chinese patent medicine Pharmacognosy Traditional Chinese medicine
Crude drug
[ "Chemistry" ]
1,427
[ "Pharmacology", "Pharmacognosy" ]
10,559,845
https://en.wikipedia.org/wiki/Test%20method
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a definitive procedure that produces a test result. In order to ensure accurate and relevant test results, a test method should be "explicit, unambiguous, and experimentally feasible", as well as effective and reproducible. A test can be considered an observation or experiment that determines one or more characteristics of a given sample, product, process, or service. The purpose of testing involves a prior determination of expected observation and a comparison of that expectation to what one actually observes. The results of testing can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument. Usually the test result is the dependent variable, the measured response based on the particular conditions of the test or the level of the independent variable. Some tests, however, may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable. Importance In software development, engineering, science, manufacturing, and business, developers, researchers, manufacturers, and related personnel must understand and agree upon methods of obtaining data and making measurements. It is common for a physical property to be strongly affected by the precise method of testing or measuring that property. As such, fully documenting experiments and measurements while providing needed documentation and descriptions of specifications, contracts, and test methods is vital. Using a standardized test method, perhaps published by a respected standards organization, is a good place to start. Sometimes it is more useful to modify an existing test method or to develop a new one, though such home-grown test methods should be validated and, in certain cases, demonstrate technical equivalency to primary, standardized methods. Again, documentation and full disclosure are necessary. A well-written test method is important. However, even more important is choosing a method of measuring the correct property or characteristic. Not all tests and measurements are equally useful: usually a test result is used to predict or imply suitability for a certain purpose. For example, if a manufactured item has several components, test methods may have several levels of connections: test results of a raw material should connect with tests of a component made from that material; test results of a component should connect with performance testing of a complete item; and results of laboratory performance testing should connect with field performance. These connections or correlations may be based on published literature, engineering studies, or formal programs such as quality function deployment. Validation of the suitability of the test method is often required. Content Quality management systems usually require full documentation of the procedures used in a test. The document for a test method might include: a descriptive title; the scope over which class(es) of items, policies, etc. may be evaluated; the date of last effective revision and revision designation; a reference to the most recent test method validation; the person, office, or agency responsible for questions on the test method, updates, and deviations; the significance or importance of the test method and its intended use; terminology and definitions to clarify the meanings of the test method; the types of apparatus and measuring instrument (sometimes the specific device) required to conduct the test; sampling procedures (how samples are to be obtained and prepared, as well as the sample size); safety precautions; required calibrations and metrology systems; natural environment concerns and considerations; testing environment concerns and considerations; detailed procedures for conducting the test; calculation and analysis of data; interpretation of data and test method output; and report format, content, data, etc. Validation Test methods are often scrutinized for their validity, applicability, and accuracy. It is very important that the scope of the test method be clearly defined, and any aspect included in the scope be shown to be accurate and repeatable through validation. Test method validations often encompass the following considerations: accuracy and precision (demonstration of accuracy may require the creation of a reference value if none is yet available); repeatability and reproducibility, sometimes in the form of a Gauge R&R; range, or a continuum scale over which the test method would be considered accurate (e.g., a 10 N to 100 N force test); measurement resolution, be it spatial, temporal, or otherwise; curve fitting, typically for linearity, which justifies interpolation between calibrated reference points; robustness, or the insensitivity to potentially subtle variables in the test environment or setup which may be difficult to control; usefulness to predict end-use characteristics and performance; measurement uncertainty; interlaboratory or round robin tests; and other types of measurement systems analysis. See also Certified reference materials Data analysis Design of experiments Document management system EPA Methods Integrated test facility Measurement systems analysis Measurement uncertainty Metrication Observational error Replication (statistics) Sampling (statistics) Specification (technical standard) Test management approach Verification and validation References General references, books Pyzdek, T., "Quality Engineering Handbook", 2003. Godfrey, A. B., "Juran's Quality Handbook", 1999. Kimothi, S. K., "The Uncertainty of Measurements: Physical and Chemical Metrology: Impact and Analysis", 2002. Related standards ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods; ASTM E691 Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method; ASTM E1488 Standard Guide for Statistical Procedures to Use in Developing and Applying Test Methods; ASTM E2282 Standard Guide for Defining the Test Result of a Test Method; ASTM E2655 Standard Guide for Reporting Uncertainty of Test Results and Use of the Term Measurement Uncertainty in ASTM Test Methods Metrology Measurement Quality control
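To illustrate the repeatability and reproducibility consideration mentioned in the validation list above, here is a minimal sketch in Python. The data and the simplified variance split are my own illustration (a full Gauge R&R study uses an ANOVA with a part-operator interaction term, which is omitted here): each operator measures each part several times; scatter within a cell estimates repeatability (equipment variation), and scatter between operator means estimates reproducibility (appraiser variation).

from statistics import mean, pvariance

# measurements[operator][part] -> repeated readings (hypothetical numbers)
measurements = {
    "op_A": {"part1": [10.1, 10.2, 10.1], "part2": [12.0, 11.9, 12.1]},
    "op_B": {"part1": [10.4, 10.3, 10.4], "part2": [12.3, 12.2, 12.4]},
}

# Repeatability: pooled variance of repeated readings about their cell mean
cell_vars = [pvariance(reads)
             for per_part in measurements.values()
             for reads in per_part.values()]
repeatability_var = mean(cell_vars)

# Reproducibility: variance of each operator's overall mean about the others
op_means = [mean(r for per_part in per_op.values() for r in per_part)
            for per_op in measurements.values()]
reproducibility_var = pvariance(op_means)

print(f"repeatability variance   ~ {repeatability_var:.4f}")
print(f"reproducibility variance ~ {reproducibility_var:.4f}")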
Test method
[ "Physics", "Mathematics" ]
1,165
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
10,560,323
https://en.wikipedia.org/wiki/Rytov%20number
The Rytov number is a fundamental scaling parameter for laser propagation through atmospheric turbulence. Rytov numbers greater than 0.2 are generally considered to indicate strong scintillation. A Rytov number of 0 would indicate no turbulence, and thus no scintillation of the beam. References Wave mechanics
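For orientation, the following is an addition, not from the article: a common convention in the beam-propagation literature identifies this scaling parameter with the log-amplitude variance of a plane wave in Kolmogorov turbulence, computed with the Rytov approximation. Under that assumption,

\sigma_\chi^2 = 0.307 \, C_n^2 \, k^{7/6} \, L^{11/6},

where C_n^2 is the refractive-index structure constant, k = 2\pi/\lambda is the optical wavenumber, and L is the path length. The closely related Rytov variance is \sigma_R^2 = 4\sigma_\chi^2 \approx 1.23 \, C_n^2 \, k^{7/6} \, L^{11/6}, and weak-scintillation theory breaks down as these quantities grow past thresholds of the kind quoted above.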
Rytov number
[ "Physics" ]
59
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
10,562,114
https://en.wikipedia.org/wiki/Faying%20surface
A faying surface is one of the surfaces that are in contact at a joint. Faying surfaces may be connected to each other by bolts or rivets, adhesives, welding, or soldering. An example would be steel pipe flanges, in which the connected ends of the flanges would be faying surfaces. References Welding http://www.eaa1000.av.org/technicl/corrosion/faysurf.htm
Faying surface
[ "Engineering" ]
100
[ "Welding", "Mechanical engineering" ]
10,562,182
https://en.wikipedia.org/wiki/Oversize%20permit
An oversize permit is a document obtained from a state, county, city or province (Canada) to authorize travel in the specified jurisdiction for oversize/overweight truck movement. In most cases it will list the hauler's name, the description of the load and its dimensions, and a route they are required to travel. They may be obtained from the transportation department/agency of the issuing jurisdiction or from a company specializing in transportation permits, called a permit service. Road transport
Oversize permit
[ "Physics" ]
100
[ "Physical systems", "Transport", "Transport stubs" ]
10,565,476
https://en.wikipedia.org/wiki/Domino%20tiling
In geometry, a domino tiling of a region in the Euclidean plane is a tessellation of the region by dominoes, shapes formed by the union of two unit squares meeting edge-to-edge. Equivalently, it is a perfect matching in the grid graph formed by placing a vertex at the center of each square of the region and connecting two vertices when they correspond to adjacent squares. Height functions For some classes of tilings on a regular grid in two dimensions, it is possible to define a height function associating an integer to the vertices of the grid. For instance, draw a chessboard and fix a node A0 with height 0; then for any node A there is a path from A0 to it. On this path define the height of each node (i.e., each corner of the squares) to be the height of the previous node plus one if the square on the right of the path is black, and minus one otherwise. More details can be found in the literature. Thurston's height condition describes a test for determining whether a simply-connected region, formed as the union of unit squares in the plane, has a domino tiling. He forms an undirected graph that has as its vertices the points (x,y,z) in the three-dimensional integer lattice, where each such point is connected to four neighbors: if x + y is even, then (x,y,z) is connected to (x + 1,y,z + 1), (x − 1,y,z + 1), (x,y + 1,z − 1), and (x,y − 1,z − 1), while if x + y is odd, then (x,y,z) is connected to (x + 1,y,z − 1), (x − 1,y,z − 1), (x,y + 1,z + 1), and (x,y − 1,z + 1). The boundary of the region, viewed as a sequence of integer points in the (x,y) plane, lifts uniquely (once a starting height is chosen) to a path in this three-dimensional graph. A necessary condition for this region to be tileable is that this path must close up to form a simple closed curve in three dimensions; however, this condition is not sufficient. Using a more careful analysis of the boundary path, Thurston gave a criterion for tileability of a region that is sufficient as well as necessary. Counting tilings of regions The number of ways to cover an m × n rectangle with dominoes, calculated independently by Temperley and Fisher and by Kasteleyn in 1961, is given by the product formula \prod_{j=1}^{\lceil m/2 \rceil} \prod_{k=1}^{\lceil n/2 \rceil} \left( 4\cos^2\frac{\pi j}{m+1} + 4\cos^2\frac{\pi k}{n+1} \right). When both m and n are odd, the formula correctly reduces to zero possible domino tilings. A special case occurs when tiling the 2 × n rectangle with n dominoes: the sequence reduces to the Fibonacci sequence. Another special case happens for squares with m = n = 0, 2, 4, 6, 8, 10, 12, ...: the number of tilings is 1, 2, 36, 6728, 12988816, 258584046368, ... These numbers can be found by writing them as the Pfaffian of an mn × mn skew-symmetric matrix whose eigenvalues can be found explicitly. This technique may be applied in many mathematics-related subjects, for example, in the classical, 2-dimensional computation of the dimer-dimer correlator function in statistical mechanics. The number of tilings of a region is very sensitive to boundary conditions, and can change dramatically with apparently insignificant changes in the shape of the region. This is illustrated by the number of tilings of an Aztec diamond of order n, where the number of tilings is 2^(n(n+1)/2). If this is replaced by the "augmented Aztec diamond" of order n with 3 long rows in the middle rather than 2, the number of tilings drops to the much smaller number D(n,n), a Delannoy number, which has only exponential rather than super-exponential growth in n. For the "reduced Aztec diamond" of order n with only one long middle row, there is only one tiling. Tatami Tatami are Japanese floor mats in the shape of a domino (1×2 rectangle). They are used to tile rooms, but with additional rules about how they may be placed.
In particular, junctions where three tatami meet are typically considered auspicious, while junctions where four meet are inauspicious, so a proper tatami tiling is one in which only three tatami meet at any corner. The problem of tiling an irregular room by tatami that meet three to a corner is NP-complete. Applications in statistical physics There is a one-to-one correspondence between a periodic domino tiling and a ground state configuration of the fully-frustrated Ising model on a two-dimensional periodic lattice. At the ground state, each plaquette of the spin model must contain exactly one frustrated interaction. Therefore, viewed from the dual lattice, each frustrated edge must be "covered" by a 1×2 rectangle, such that the rectangles span the entire lattice and do not overlap; this is a domino tiling of the dual lattice. See also Gaussian free field, the scaling limit of the height function in the generic situation (e.g., inside the inscribed disk of a large Aztec diamond) Mutilated chessboard problem, a puzzle concerning domino tiling of a 62-square area of a standard 8×8 chessboard (or checkerboard) Statistical mechanics Notes References Further reading Combinatorics Exactly solvable models Lattice models Matching (graph theory) Statistical mechanics Tiling puzzles Rectangular subdivisions
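As a concrete check on the counting results above, here is a minimal sketch (my own illustration, not from the article) of the standard column-by-column bitmask dynamic program for counting domino tilings; it reproduces the Fibonacci behaviour of the 2 × n strip and the 12,988,816 tilings of the 8 × 8 board quoted above.

from functools import lru_cache

def count_domino_tilings(m: int, n: int) -> int:
    """Count tilings of an m x n rectangle by 1x2 dominoes."""
    if m * n % 2:
        return 0          # odd area: no tiling exists
    if m > n:
        m, n = n, m       # keep the short side as the column height

    @lru_cache(maxsize=None)
    def fill(col: int, mask: int) -> int:
        # mask: cells of column `col` already occupied by horizontal
        # dominoes protruding from column col-1.
        if col == n:
            return 1 if mask == 0 else 0

        def place(row: int, mask: int, nxt: int) -> int:
            # Complete column `col`; `nxt` collects protrusions into col+1.
            if row == m:
                return fill(col + 1, nxt)
            if mask & (1 << row):                 # cell already covered
                return place(row + 1, mask, nxt)
            total = 0
            # vertical domino covering (row, row+1) within this column
            if row + 1 < m and not mask & (1 << (row + 1)):
                total += place(row + 2, mask | (1 << (row + 1)), nxt)
            # horizontal domino protruding into the next column
            total += place(row + 1, mask, nxt | (1 << row))
            return total

        return place(0, mask, 0)

    return fill(0, 0)

# Spot checks against the counts quoted above:
assert count_domino_tilings(2, 10) == 89          # Fibonacci-type growth
assert count_domino_tilings(8, 8) == 12988816     # the 8x8 chessboard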
Domino tiling
[ "Physics", "Materials_science", "Mathematics" ]
1,147
[ "Discrete mathematics", "Tessellation", "Recreational mathematics", "Rectangular subdivisions", "Lattice models", "Computational physics", "Graph theory", "Combinatorics", "Mathematical relations", "Condensed matter physics", "Tiling puzzles", "Matching (graph theory)", "Statistical mechanic...
10,566,027
https://en.wikipedia.org/wiki/Rupture%20disc
A rupture disc, also known as a pressure safety disc, burst disc, bursting disc, or burst diaphragm, is a non-reclosing pressure relief safety device that, in most uses, protects a pressure vessel, equipment or system from overpressurization or potentially damaging vacuum conditions. A rupture disc is a type of sacrificial part because it has a one-time-use membrane that fails at a predetermined differential pressure, either positive or vacuum, at a coincident temperature. The membrane is usually made out of metal, but nearly any material (or different materials in layers) can be used to suit a particular application. Rupture discs provide instant response (within milliseconds, or microseconds in very small sizes) to an increase or decrease in system pressure, but once the disc has ruptured it will not reseal. Major advantages of rupture discs compared to pressure relief valves include leak-tightness, cost, response time, size constraints, flow area, and ease of maintenance. Rupture discs are commonly used in petrochemical, aerospace, aviation, defense, medical, railroad, nuclear, chemical, pharmaceutical, food processing and oil field applications. They can be used as single protection devices or as a secondary relief device for a conventional safety valve; if the pressure increases and the safety valve fails to operate or cannot relieve enough pressure fast enough, the rupture disc will burst. Rupture discs are very often used in combination with safety relief valves, isolating the valves from the process, thereby saving on valve maintenance and creating a leak-tight pressure relief solution. It is sometimes possible, and preferable for highest reliability though at higher initial cost, to avoid the use of emergency pressure relief devices by developing an intrinsically safe mechanical design that provides containment in all cases. Although commonly manufactured in disc form, the devices are also manufactured as rectangular panels ('rupture panels', 'vent panels' or explosion vents) and used to protect buildings, enclosed conveyor systems or any very large space from overpressurization, typically due to an explosion. Rupture disc sizes range from a few millimetres to well over a metre across, depending upon the industry application. Rupture discs and vent panels are constructed from carbon steel, stainless steel, Hastelloy, graphite, and other materials, as required by the specific use environment. Rupture discs are widely accepted throughout industry and specified in most global pressure equipment design codes (American Society of Mechanical Engineers (ASME), Pressure Equipment Directive (PED), etc.). Rupture discs can be used to protect installations against unacceptably high pressures, or can be designed to act as one-time valves or triggering devices that initiate, with high reliability and speed, a required sequence of actions. Two disc technologies All rupture discs use one of two technologies: forward-acting (tension-loaded) or reverse buckling (compression-loaded). Both technologies can be paired with a bursting disc indicator to provide a visual and electrical indication of failure. In the traditional forward-acting design, the loads are applied to the concave side of a domed rupture disc, stretching the dome until the tensile forces exceed the ultimate tensile stress of the material and the disc bursts. Flat rupture discs do not have a dome but, when pressure is applied, are still subject to tensile loading and are thus also forward-acting discs.
The thickness of the raw material used in manufacturing (also known as web thickness in graphite discs) and the diameter of the disc determine the burst pressure. Most forward-acting discs are installed in systems with an 80% or lower operating ratio. In later iterations of forward-acting disc designs, precision-cut or laser scores made in the material during manufacturing were used to precisely weaken it, allowing finer control of the burst pressure. This approach to rupture discs, while effective, does have limitations. Forward-acting discs are prone to metal fatigue caused by pressure cycling and by operating conditions that can spike past the recommended limits for the disc, causing the disc to burst below its marked burst pressure. Low burst pressures also pose a problem for this disc technology. As the burst pressure lowers, the material thickness decreases. This can lead to extremely thin discs (similar to tin foil) that are highly prone to damage and have a higher chance of forming pinhole leaks due to corrosion. These discs are still successfully used today and are preferred in some situations. Reverse buckling rupture discs are the inversion of the forward-acting disc. The dome is inverted and the pressure is loaded on the convex side of the disc. Once the reversal threshold is met, the dome collapses and snaps through to create a dome in the opposite direction. While that is happening, the disc is opened by knife blades or points of metal located along the score line on the downstream side of the disc. By loading the reverse buckling disc in compression, it is able to resist pressure cycling or pulsating conditions. The material thickness of a reverse buckling disc is significantly higher than that of a forward-acting disc of the same size and burst pressure. The result is greater longevity, accuracy and reliability over time. Correct installation of reverse buckling discs is essential. If installed upside down, the device will act as a forward-acting disc and, due to the greater material thickness, may burst at much higher than the marked burst pressure. Blowout panel Blowout panels, also called blow-off panels, are areas with intentionally weakened structure, used in enclosures, buildings or vehicles where a sudden overpressure may occur. By failing in a predictable manner, they channel the overpressure or pressure wave in a direction where it causes controlled, directed, minimal harm, instead of causing a catastrophic failure of the structure. An example is a deliberately weakened wall in a room used to store compressed gas cylinders; in the event of a fire or other accident, the tremendous energy stored in the (possibly flammable) compressed gas is directed in a "safe" direction, rather than potentially collapsing the structure in a manner similar to a thermobaric weapon. Military applications Blow-off panels are used in ammunition compartments of some tanks to protect the crew in case of ammunition explosion, turning a catastrophic kill into a lesser firepower kill. Blowout panels are installed in several modern main battle tanks, including the M1 Abrams. In military ammunition storage, blowout panels are included in the design of the bunkers which house explosives. Such bunkers are typically designed with concrete walls on four sides and a roof made of a lighter material covered with earth. In some cases this lighter material is wood, though metal sheeting is also employed.
The design is such that if an explosion or fire were to occur in the ammunition bunker (also called a locker), the force of the blast would be directed vertically, away from other structures and personnel. Blowout panels have in the past been considered as a possible solution to magazine explosions on battleships. However, battleship designs since the 1920s instead used the all-or-nothing armor scheme, particularly its armored citadel encompassing the battleship's vitals, including machinery and magazines; in the case of magazine penetration the only recourse is to flood the magazine. The lack of blowout panels has resulted in catastrophic damage during the magazine explosions of several battleships, including Tirpitz and Yamato. Applications in biology Some models of gene gun also use a rupture disc, but not as a safety device. Instead, its function is part of the normal operation of the device, allowing precise pressure-based control of particle application to a sample. In these devices, the rupture disc is designed to fail within an optimal range of gas pressure that has been empirically associated with successful particle integration into tissue or cell culture. Different disc strengths are available for some gene gun models. References Fluid technology Piping Safety valves
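To make the thickness/diameter relationship quoted in the forward-acting discussion above concrete, here is a minimal sketch. The formula is an assumption on my part, not from the text: it treats a domed disc as a thin-walled hemispherical shell, for which membrane theory gives a failure pressure of roughly 4 times ultimate tensile strength times thickness over diameter. Real discs are scored, temperature-dependent and manufacturer-rated, so this is an order-of-magnitude estimate only.

def burst_pressure_estimate(uts_pa: float, thickness_m: float,
                            diameter_m: float) -> float:
    """First-order burst pressure of a domed disc modeled as a thin
    hemispherical shell: membrane stress sigma = P*D/(4*t) at failure,
    so P_burst ~ 4*sigma_ult*t/D (simplified assumption)."""
    return 4.0 * uts_pa * thickness_m / diameter_m

# Example: 316 stainless foil (UTS ~515 MPa), 0.05 mm thick, 25 mm disc
p = burst_pressure_estimate(515e6, 0.05e-3, 25e-3)
print(f"{p / 1e5:.1f} bar")   # roughly 41 bar under these assumptions

Note how the estimate scales: halving the foil thickness or doubling the diameter halves the burst pressure, which is why very low burst pressures force the tin-foil-thin discs described above.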
Rupture disc
[ "Chemistry", "Engineering" ]
1,630
[ "Building engineering", "Chemical engineering", "Fluid technology", "Mechanical engineering by discipline", "Mechanical engineering", "Piping", "Industrial safety devices", "Safety valves" ]
10,567,389
https://en.wikipedia.org/wiki/Hydrogenase%20mimic
A hydrogenase mimic or bio-mimetic is an enzyme mimic of hydrogenases. Bio-mimetic compounds inspired by hydrogenases One of the more interesting applications of hydrogenases is hydrogen production, owing to their capacity to catalyze the redox reaction 2H+ + 2e− ⇌ H2. In the field of hydrogen production, the incorporation of chemical compounds into electrochemical devices to produce molecular hydrogen has been a topic of great interest in recent years, due to the possibility of using hydrogen as a replacement for fossil fuels as an energy carrier. This approach of using materials inspired by natural models to perform the same function as their natural counterparts is called the bio-mimetic approach. This approach has recently received a major boost from the availability of high-resolution crystal structures of several hydrogenases obtained with different techniques. The structural details of these hydrogenases are stored in electronic databases available to anyone interested. This information has allowed scientists to determine the parts of the enzyme necessary to catalyze the reaction and to map the reaction pathway in great detail, giving a good understanding of what is necessary to catalyze the same reaction using artificial components. Examples of bio-mimetic compounds inspired by hydrogenase Several studies have demonstrated the possibility of developing chemical cells inspired by biological models to produce molecular hydrogen. For example: Selvaggi et al. explored the possibility of using energy captured by the PSII, developing for that goal an organic-inorganic hybrid system replacing the PSII protein complex with microspheres of TiO2, a photo-inducible compound. In order to achieve hydrogen production, the TiO2 microspheres were covered with hydrogenases extracted from the marine thermophile Pyrococcus furiosus; in that way the energy of the light was captured by the TiO2 microspheres and used to generate protons and electrons from water, with the subsequent production of 29 μmol of H2 per hour. Results obtained from the immobilization of hydrogenases on the surface of electrodes have demonstrated the viability of incorporating these enzymes in electrochemical cells, due to their ability to produce gaseous hydrogen through a redox reaction (Hallenbeck and Benemann). This opens the possibility of using biomimetic compounds in electrodes to generate hydrogen. To date, several bio-mimetic compounds have been developed: Collman et al. produced ruthenium porphyrins, in addition to the bio-mimetic compounds published by the research teams of Rauchfuss, Darensbourg and Pickett (in Artero and Fontecave), who developed bio-mimetic compounds of the [Fe] hydrogenase. More recently, Manor and Rauchfuss presented a very interesting mimic compound based on the [NiFe] hydrogenase with bidirectional properties; this compound carries two borane-protected cyanide ligands at the iron atom. Some work on bio-mimetic compounds of hydrogenases is summarized in Table 1. Table 1. Bio-mimetic compounds of hydrogenases (table not reproduced in this extract). However, obtaining bio-mimetic compounds able to produce hydrogen on an industrial scale remains elusive. For that reason, research on this topic is a hot spot in science which has drawn the efforts of researchers around the world. Recently, a review of the work done on bio-mimetic compounds was published by Schilter et al.,
showing that some studies have achieved promising results with bio-mimetic compounds synthesized in the laboratory. Molecular modeling of bio-mimetic compounds of hydrogenases assisted by software Recently, the possibility of studying such compounds using molecular modeling assisted by computational software has opened new possibilities in the study of the redox reaction of biomimetic compounds. For example, density functional theory (DFT) computer modeling made it possible to propose a pathway for H2 binding and splitting on the catalytic center of a hydrogenase active-site model (Greco). Another example of the application of computational modeling in the study of hydrogenases is the work done by Breglia et al., whose results show the chemical pathway by which oxygen inhibits the redox reaction of [NiFe] hydrogenases. Bio-mimetic compounds inspired by [Fe] hydrogenases The Fe-only hydrogenases are particularly common enzymes for synthetic organometallic chemists to mimic. This interest is motivated by the inclusion of high-field ligands like cyano and CO (metal carbonyl) in the first coordination sphere of the pertinent di-iron cluster. Free cyanide and carbonyl ligands are toxic to many biological systems, so their inclusion in this system suggests they play pivotal roles. These high-field ligands may ensure the iron centers at the active site remain in a low-spin state throughout the catalytic cycle. In addition, there is a bridging dithiolate between the two iron centers. This dithiolate has a three-atom backbone in which the identity of the central atom is still undetermined; it models crystallographically as a CH2, NH or O group. There is reason to believe that this central atom is an amine which functions as a Lewis base. This amine, combined with Lewis-acidic iron centers, makes the enzyme a bifunctional catalyst which can split hydrogen between a proton acceptor and a hydride acceptor, or produce hydrogen from a proton and a hydride. Since none of the ligands on the iron centers are part of the enzyme's amino acid backbone, they cannot be investigated through site-directed mutagenesis, but enzyme mimicry is a feasible approach. Breadth Many elegant structural mimics have been synthesized reproducing the atomic content and connectivity of the active site. The work by Pickett is a prime example of this field. The catalytic activity of these mimics does not, however, compare to that of the native enzyme. In contrast, functional mimics, also known as bio-inspired catalysts, aim to reproduce only the functional features of an enzyme, often through the use of atomic content and connectivity different from that found in the native enzymes. Functional mimics have advanced the reactive chemistry, have implications for the mechanistic understanding of the enzyme, and act as catalysts in their own right. References Biochemistry
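As a flavor of the DFT modeling mentioned above, here is a minimal sketch (my own illustration, not the cluster models used by Greco or Breglia et al.) of a single-point DFT calculation on the H2 molecule with the open-source PySCF package; real active-site models involve large Fe/Ni–S clusters and much heavier methodology.

# Toy DFT single-point energy for H2, illustrating the kind of electronic
# structure calculation used (at far greater scale) for hydrogenase models.
from pyscf import gto, dft

mol = gto.M(
    atom="H 0 0 0; H 0 0 0.74",  # ~equilibrium H-H bond length, angstrom
    basis="def2-svp",
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"        # a hybrid functional common in this literature
energy = mf.kernel()   # total electronic energy in hartree
print(f"E(H2, B3LYP/def2-SVP) = {energy:.6f} Eh")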
Hydrogenase mimic
[ "Chemistry", "Biology" ]
1,302
[ "Biochemistry", "nan" ]
10,567,426
https://en.wikipedia.org/wiki/Underwater%20acoustics
Underwater acoustics (also known as hydroacoustics) is the study of the propagation of sound in water and the interaction of the mechanical waves that constitute sound with the water, its contents and its boundaries. The water may be in the ocean, a lake, a river or a tank. Typical frequencies associated with underwater acoustics are between 10 Hz and 1 MHz. The propagation of sound in the ocean at frequencies lower than 10 Hz is usually not possible without penetrating deep into the seabed, whereas frequencies above 1 MHz are rarely used because they are absorbed very quickly. Hydroacoustics, using sonar technology, is most commonly used for monitoring of underwater physical and biological characteristics. Hydroacoustics can be used to detect the depth of a water body (bathymetry), as well as the presence or absence, abundance, distribution, size, and behavior of underwater plants and animals. Hydroacoustic sensing involves "passive acoustics" (listening for sounds) or "active acoustics" (making a sound and listening for the echo); hence the common name for the device, echo sounder or echosounder. There are a number of different causes of noise from shipping. These can be subdivided into those caused by the propeller, those caused by machinery, and those caused by the movement of the hull through the water. The relative importance of these three different categories will depend, amongst other things, on the ship type. One of the main causes of hydroacoustic noise from fully submerged lifting surfaces is the unsteady separated turbulent flow near the surface's trailing edge, which produces pressure fluctuations on the surface and unsteady oscillatory flow in the near wake. The relative motion between the surface and the ocean creates a turbulent boundary layer (TBL) that surrounds the surface. The noise is generated by the fluctuating velocity and pressure fields within this TBL. The field of underwater acoustics is closely related to a number of other fields of acoustic study, including sonar, transduction, signal processing, acoustical oceanography, bioacoustics, and physical acoustics.
Between 1912 and 1914, a number of echolocation patents were granted in Europe and the U.S., culminating in Reginald A. Fessenden's echo-ranger in 1914. Pioneering work was carried out during this time in France by Paul Langevin and in Britain by A. B. Wood and associates. The development of both active ASDIC and passive sonar (SOund Navigation And Ranging) proceeded apace during the war, driven by the first large-scale deployments of submarines. Other advances in underwater acoustics included the development of acoustic mines. In 1919, the first scientific paper on underwater acoustics was published, theoretically describing the refraction of sound waves produced by temperature and salinity gradients in the ocean. The range predictions of the paper were experimentally validated by propagation loss measurements. The next two decades saw the development of several applications of underwater acoustics. The fathometer, or depth sounder, was developed commercially during the 1920s. Originally natural materials were used for the transducers, but by the 1930s sonar systems incorporating piezoelectric transducers made from synthetic materials were being used for passive listening systems and for active echo-ranging systems. These systems were used to good effect during World War II by both submarines and anti-submarine vessels. Many advances in underwater acoustics were made, and were summarised later in the series Physics of Sound in the Sea, published in 1946. After World War II, the development of sonar systems was driven largely by the Cold War, resulting in advances in the theoretical and practical understanding of underwater acoustics, aided by computer-based techniques. Theory Sound waves in water A sound wave propagating underwater consists of alternating compressions and rarefactions of the water. These compressions and rarefactions are detected by a receiver, such as the human ear or a hydrophone, as changes in pressure. These waves may be man-made or naturally generated. Speed of sound, density and impedance The speed of sound c (i.e., the longitudinal motion of wavefronts) is related to the frequency f and wavelength λ of a wave by c = f·λ. This is different from the particle velocity u, which refers to the motion of molecules in the medium due to the sound, and which relates the plane-wave pressure p to the fluid density ρ and sound speed c by u = p/(ρc). The product of ρ and c from the above formula is known as the characteristic acoustic impedance. The acoustic power (energy per second) crossing unit area is known as the intensity of the wave, and for a plane wave the average intensity is given by I = q²/(ρc), where q is the root mean square acoustic pressure. Sometimes the term "sound velocity" is used, but this is incorrect as the quantity is a scalar. The large impedance contrast between air and water (the ratio is about 3600) and the scale of surface roughness mean that the sea surface behaves as an almost perfect reflector of sound at frequencies below 1 kHz. Sound speed in water exceeds that in air by a factor of 4.4, and the density ratio is about 820. Absorption of sound Absorption of low frequency sound is weak (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator). The main cause of sound attenuation in fresh water, and at high frequency in sea water (above 100 kHz), is viscosity. Important additional contributions at lower frequency in seawater are associated with the ionic relaxation of boric acid (up to c. 10 kHz) and magnesium sulfate (c. 10 kHz–100 kHz).
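Stepping back to the plane-wave relations in the Speed of sound section above, here is a minimal sketch (with illustrative numbers of my choosing) that evaluates the characteristic impedance, wavelength, particle velocity and intensity for a 1 kHz plane wave in seawater:

# Plane-wave quantities in seawater, using the relations c = f*lambda,
# u = p/(rho*c) and I = q^2/(rho*c) quoted above.
RHO_SEAWATER = 1030.0   # kg/m^3 (approximate value from the article)
C_SEAWATER = 1500.0     # m/s    (approximate value from the article)

def wavelength(frequency_hz: float, c: float = C_SEAWATER) -> float:
    return c / frequency_hz

def particle_velocity(p_rms_pa: float, rho: float = RHO_SEAWATER,
                      c: float = C_SEAWATER) -> float:
    return p_rms_pa / (rho * c)

def plane_wave_intensity(p_rms_pa: float, rho: float = RHO_SEAWATER,
                         c: float = C_SEAWATER) -> float:
    return p_rms_pa ** 2 / (rho * c)

z = RHO_SEAWATER * C_SEAWATER        # characteristic impedance, ~1.5e6 rayl
print(f"impedance = {z:.3g} rayl")
print(f"lambda    = {wavelength(1000.0):.3f} m")        # 1 kHz -> 1.5 m
print(f"u         = {particle_velocity(1.0):.3e} m/s")  # for 1 Pa rms
print(f"I         = {plane_wave_intensity(1.0):.3e} W/m^2")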
Sound may be absorbed by losses at the fluid boundaries. Near the surface of the sea losses can occur in a bubble layer or in ice, while at the bottom sound can penetrate into the sediment and be absorbed. Sound reflection and scattering Boundary interactions Both the water surface and bottom are reflecting and scattering boundaries. Surface For many purposes the sea-air surface can be thought of as a perfect reflector. The impedance contrast is so great that little energy is able to cross this boundary. Acoustic pressure waves reflected from the sea surface experience a reversal in phase, often stated as either a "pi phase change" or a "180 deg phase change". This is represented mathematically by assigning a reflection coefficient of minus 1 instead of plus one to the sea surface. At high frequency (above about 1 kHz) or when the sea is rough, some of the incident sound is scattered, and this is taken into account by assigning a reflection coefficient whose magnitude is less than one. For example, close to normal incidence, the coherent reflection coefficient becomes −exp(−2k²h²), where h is the rms wave height and k is the acoustic wavenumber. A further complication is the presence of wind-generated bubbles or fish close to the sea surface. The bubbles can also form plumes that absorb some of the incident and scattered sound, and scatter some of the sound themselves. Seabed The acoustic impedance mismatch between water and the bottom is generally much less than at the surface and is more complex. It depends on the bottom material types and the depth of the layers. Theories have been developed for predicting the sound propagation in the bottom in this case, for example by Biot and by Buckingham. At target The reflection of sound at a target whose dimensions are large compared with the acoustic wavelength depends on its size and shape as well as the impedance of the target relative to that of water. Formulae have been developed for the target strength of various simple shapes as a function of the angle of sound incidence. More complex shapes may be approximated by combining these simple ones. Propagation of sound Underwater acoustic propagation depends on many factors. The direction of sound propagation is determined by the sound speed gradients in the water. These speed gradients transform the sound wave through refraction, reflection, and dispersion. In the sea the vertical gradients are generally much larger than the horizontal ones. Combining this with a tendency towards increasing sound speed at increasing depth, due to the increasing pressure in the deep sea, causes a reversal of the sound speed gradient in the thermocline, creating an efficient waveguide at the depth corresponding to the minimum sound speed. The sound speed profile may cause regions of low sound intensity called "shadow zones", and regions of high intensity called "caustics". These may be found by ray tracing methods. At the equator and temperate latitudes in the ocean, the surface temperature is high enough to reverse the pressure effect, such that a sound speed minimum occurs at a depth of a few hundred meters. The presence of this minimum creates a special channel known as the deep sound channel, or SOFAR (sound fixing and ranging) channel, permitting guided propagation of underwater sound for thousands of kilometers without interaction with the sea surface or the seabed. Another phenomenon in the deep sea is the formation of sound focusing areas, known as convergence zones. In this case sound is refracted downward from a near-surface source and then back up again.
The horizontal distance from the source at which this occurs depends on the positive and negative sound speed gradients. A surface duct can also occur in both deep and moderately shallow water when there is upward refraction, for example due to cold surface temperatures. Propagation is by repeated sound bounces off the surface. In general, as sound propagates underwater there is a reduction in the sound intensity over increasing ranges, though in some circumstances a gain can be obtained due to focusing. Propagation loss (sometimes referred to as transmission loss) is a quantitative measure of the reduction in sound intensity between two points, normally the sound source and a distant receiver. If Is is the far-field intensity of the source referred to a point 1 m from its acoustic center and Ir is the intensity at the receiver, then the propagation loss is given by PL = 10 log10(Is/Ir). In this equation Ir is not the true acoustic intensity at the receiver, which is a vector quantity, but a scalar equal to the equivalent plane wave intensity (EPWI) of the sound field. The EPWI is defined as the magnitude of the intensity of a plane wave of the same RMS pressure as the true acoustic field. At short range the propagation loss is dominated by spreading, while at long range it is dominated by absorption and/or scattering losses. An alternative definition is possible in terms of pressure instead of intensity, giving PL = 20 log10(ps/pr), where ps is the RMS acoustic pressure in the far-field of the projector, scaled to a standard distance of 1 m, and pr is the RMS pressure at the receiver position. These two definitions are not exactly equivalent because the characteristic impedance at the receiver may be different from that at the source. Because of this, the use of the intensity definition leads to a different sonar equation from the definition based on a pressure ratio. If the source and receiver are both in water, the difference is small. Propagation modelling The propagation of sound through water is described by the wave equation, with appropriate boundary conditions. A number of models have been developed to simplify propagation calculations. These models include ray theory, normal mode solutions, and parabolic equation simplifications of the wave equation. Each set of solutions is generally valid and computationally efficient in a limited frequency and range regime, and may involve other limits as well. Ray theory is more appropriate at short range and high frequency, while the other solutions function better at long range and low frequency. Various empirical and analytical formulae have also been derived from measurements that are useful approximations. Reverberation Transient sounds result in a decaying background that can be of much longer duration than the original transient signal. The cause of this background, known as reverberation, is partly due to scattering from rough boundaries and partly due to scattering from fish and other biota. For an acoustic signal to be detected easily, it must exceed the reverberation level as well as the background noise level. Doppler shift If an underwater object is moving relative to an underwater receiver, the frequency of the received sound is different from that of the sound radiated (or reflected) by the object. This change in frequency is known as a Doppler shift. The shift can be easily observed in active sonar systems, particularly narrow-band ones, because the transmitter frequency is known, and the relative motion between sonar and object can be calculated.
Sometimes the frequency of the radiated noise (a tonal) may also be known, in which case the same calculation can be done for passive sonar. For active systems the change in frequency is 0.69 Hz per knot per kHz, and half this for passive systems as propagation is only one way. The shift corresponds to an increase in frequency for an approaching target. Intensity fluctuations Though acoustic propagation modelling generally predicts a constant received sound level, in practice there are both temporal and spatial fluctuations. These may be due to both small- and large-scale environmental phenomena. These can include sound speed profile fine structure and frontal zones, as well as internal waves. Because in general there are multiple propagation paths between a source and receiver, small phase changes in the interference pattern between these paths can lead to large fluctuations in sound intensity. Non-linearity In water, especially with air bubbles, the change in density due to a change in pressure is not exactly linearly proportional. As a consequence, for a sinusoidal wave input, additional harmonic and subharmonic frequencies are generated. When two sinusoidal waves are input, sum and difference frequencies are generated. The conversion process is greater at high source levels than at low ones. Because of the non-linearity there is a dependence of sound speed on the pressure amplitude, so that large changes travel faster than small ones. Thus a sinusoidal waveform gradually becomes a sawtooth one with a steep rise and a gradual tail. Use is made of this phenomenon in parametric sonar, and theories have been developed to account for it, e.g. by Westervelt. Measurements Sound in water is measured using a hydrophone, which is the underwater equivalent of a microphone. A hydrophone measures pressure fluctuations, and these are usually converted to sound pressure level (SPL), which is a logarithmic measure of the mean square acoustic pressure. Measurements are usually reported in one of two forms: RMS acoustic pressure in pascals (or sound pressure level (SPL) in dB re 1 μPa); or spectral density (mean square pressure per unit bandwidth) in pascals squared per hertz (dB re 1 μPa2/Hz). The scale for acoustic pressure in water differs from that used for sound in air. In air the reference pressure is 20 μPa rather than 1 μPa. For the same numerical value of SPL, the intensity of a plane wave (power per unit area, proportional to mean square sound pressure divided by acoustic impedance) in air is about 20² × 3600 = 1 440 000 times higher than in water. Similarly, the intensity is about the same if the SPL is 61.6 dB higher in the water. The 2017 standard ISO 18405 defines terms and expressions used in the field of underwater acoustics, including the calculation of underwater sound pressure levels. Sound speed Approximate values for fresh water and seawater, respectively, at atmospheric pressure are 1450 and 1500 m/s for the sound speed, and 1000 and 1030 kg/m3 for the density. The speed of sound in water increases with increasing pressure, temperature and salinity. The maximum speed in pure water under atmospheric pressure is attained at about 74 °C; sound travels more slowly in hotter water after that point; the maximum increases with pressure. Absorption Many measurements have been made of sound absorption in lakes and the ocean (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator).
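Several of the quantitative statements above can be tied together in a small sketch. The specific formulas below are assumptions on my part: Medwin's simplified sound-speed equation and Thorp's empirical attenuation formula are standard in the field but are not named in this article, and spreading is taken as spherical.

import math

def sound_speed_medwin(T_c: float, S_ppt: float, z_m: float) -> float:
    """Medwin's (1975) simplified sound speed in seawater, m/s.
    T in deg C, salinity S in parts per thousand, depth z in metres."""
    return (1449.2 + 4.6*T_c - 0.055*T_c**2 + 0.00029*T_c**3
            + (1.34 - 0.010*T_c)*(S_ppt - 35.0) + 0.016*z_m)

def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Thorp's empirical attenuation (f in kHz, result in dB/km), capturing
    the boric acid and magnesium sulfate relaxations noted above."""
    f2 = f_khz**2
    return (0.11*f2/(1.0 + f2) + 44.0*f2/(4100.0 + f2)
            + 2.75e-4*f2 + 0.003)

def propagation_loss_db(r_m: float, f_khz: float) -> float:
    """PL = spherical spreading + absorption, referenced to 1 m."""
    return 20.0*math.log10(r_m) + thorp_absorption_db_per_km(f_khz)*r_m/1000.0

def doppler_shift_hz(v_knots: float, f_hz: float, c: float = 1500.0,
                     active: bool = True) -> float:
    """Two-way (active) or one-way (passive) Doppler shift."""
    v = v_knots * 0.5144   # knots -> m/s
    return (2.0 if active else 1.0) * v * f_hz / c

def spl_water_to_air_equivalent_db(spl_water_db: float) -> float:
    """SPL (re 1 uPa, water) converted to the SPL (re 20 uPa, air) of a
    plane wave of equal intensity: subtract 10*log10(20^2 * 3600) ~ 61.6 dB."""
    return spl_water_db - 61.6

print(sound_speed_medwin(10.0, 35.0, 100.0))   # ~1492 m/s
print(propagation_loss_db(10_000.0, 10.0))     # ~92 dB at 10 km, 10 kHz
print(doppler_shift_hz(1.0, 1000.0))           # ~0.69 Hz

The last print reproduces the 0.69 Hz per knot per kHz rule of thumb quoted above, and the SPL conversion function encodes the 61.6 dB air/water offset derived in the Measurements section.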
Ambient noise Measurement of acoustic signals is possible if their amplitude exceeds a minimum threshold, determined partly by the signal processing used and partly by the level of background noise. Ambient noise is that part of the received noise that is independent of the source, receiver and platform characteristics. Thus it excludes reverberation and towing noise, for example. The background noise present in the ocean, or ambient noise, has many different sources and varies with location and frequency. At the lowest frequencies, from about 0.1 Hz to 10 Hz, ocean turbulence and microseisms are the primary contributors to the noise background. Typical noise spectrum levels decrease with increasing frequency, from about 140 dB re 1 μPa2/Hz at 1 Hz to about 30 dB re 1 μPa2/Hz at 100 kHz. Distant ship traffic is one of the dominant noise sources in most areas for frequencies of around 100 Hz, while wind-induced surface noise is the main source between 1 kHz and 30 kHz. At very high frequencies, above 100 kHz, thermal noise of water molecules begins to dominate. The thermal noise spectral level at 100 kHz is 25 dB re 1 μPa2/Hz. The spectral density of thermal noise increases by 20 dB per decade (approximately 6 dB per octave). Transient sound sources also contribute to ambient noise. These can include intermittent geological activity, such as earthquakes and underwater volcanoes, rainfall on the surface, and biological activity. Biological sources include cetaceans (especially blue, fin and sperm whales), certain types of fish, and snapping shrimp. Rain can produce high levels of ambient noise. However, the numerical relationship between rain rate and ambient noise level is difficult to determine because measurement of rain rate is problematic at sea. Reverberation Many measurements have been made of sea surface, bottom and volume reverberation. Empirical models have sometimes been derived from these. A commonly used expression for the band 0.4 to 6.4 kHz is that by Chapman and Harris. It is found that a sinusoidal waveform is spread in frequency due to the surface motion. For bottom reverberation, Lambert's law is often found to apply approximately; for example, see Mackenzie. Volume reverberation is usually found to occur mainly in layers, which change depth with the time of day; e.g., see Marshall and Chapman. The under-surface of ice can produce strong reverberation when it is rough; see for example Milne. Bottom loss Bottom loss has been measured as a function of grazing angle for many frequencies in various locations, for example those by the US Marine Geophysical Survey. The loss depends on the sound speed in the bottom (which is affected by gradients and layering) and on roughness. Graphs have been produced for the loss to be expected in particular circumstances. In shallow water, bottom loss often has the dominant impact on long range propagation. At low frequencies sound can propagate through the sediment then back into the water. Underwater hearing Comparison with airborne sound levels As with airborne sound, sound pressure level underwater is usually reported in units of decibels, but there are some important differences that make it difficult (and often inappropriate) to compare SPL in water with SPL in air. These differences include: difference in reference pressure: 1 μPa (one micropascal, or one millionth of a pascal) instead of 20 μPa;
difference in interpretation: there are two schools of thought, one maintaining that pressures should be compared directly, and the other that one should first convert to the intensity of an equivalent plane wave; difference in hearing sensitivity: any comparison with (A-weighted) sound in air needs to take into account the differences in hearing sensitivity, either of a human diver or of another animal. Human hearing Hearing sensitivity The lowest audible SPL for a human diver with normal hearing is about 67 dB re 1 μPa, with greatest sensitivity occurring at frequencies around 1 kHz. This corresponds to a sound intensity 5.4 dB, or 3.5 times, higher than the threshold in air (see Measurements above). Safety thresholds High levels of underwater sound create a potential hazard to human divers. Guidelines for exposure of human divers to underwater sound are reported by the SOLMAR project of the NATO Undersea Research Centre. Human divers exposed to SPL above 154 dB re 1 μPa in the frequency range 0.6 to 2.5 kHz are reported to experience changes in their heart rate or breathing frequency. Diver aversion to low frequency sound is dependent upon sound pressure level and center frequency. Other species Aquatic mammals Dolphins and other toothed whales are known for their acute hearing sensitivity, especially in the frequency range 5 to 50 kHz. Several species have hearing thresholds between 30 and 50 dB re 1 μPa in this frequency range. For example, the hearing threshold of the killer whale occurs at an RMS acoustic pressure of 0.02 mPa (at a frequency of 15 kHz), corresponding to an SPL threshold of 26 dB re 1 μPa. High levels of underwater sound create a potential hazard to marine and amphibious animals. The effects of exposure to underwater noise are reviewed by Southall et al. Fish The hearing sensitivity of fish is reviewed by Ladich and Fay. The hearing threshold of the soldier fish is 0.32 mPa (50 dB re 1 μPa) at 1.3 kHz, whereas the lobster has a hearing threshold of 1.3 Pa at 70 Hz (122 dB re 1 μPa). The effects of exposure to underwater noise are reviewed by Popper et al. Aquatic birds Several aquatic bird species have been observed to react to underwater sound in the 1–4 kHz range, which matches the frequency range of best hearing sensitivity of birds in air. Seaducks and cormorants have been trained to respond to sounds of 1–4 kHz, with lowest hearing thresholds (highest sensitivity) of 71 dB re 1 μPa (cormorants) and 105 dB re 1 μPa (seaducks). Diving species show several morphological differences in the ear relative to terrestrial species, suggesting some adaptation of the ear in diving birds to aquatic conditions. Applications of underwater acoustics Sonar Sonar is the name given to the acoustic equivalent of radar. Pulses of sound are used to probe the sea, and the echoes are then processed to extract information about the sea, its boundaries and submerged objects. An alternative use, known as passive sonar, attempts to do the same by listening to the sounds radiated by underwater objects. Underwater communication The need for underwater acoustic telemetry exists in applications such as data harvesting for environmental monitoring, communication with and between crewed and uncrewed underwater vehicles, transmission of diver speech, etc. A related application is underwater remote control, in which acoustic telemetry is used to remotely actuate a switch or trigger an event.
A prominent example of underwater remote control is the acoustic release, a device used to return sea-floor-deployed instrument packages or other payloads to the surface on remote command at the end of a deployment. Acoustic communications form an active field of research, with significant challenges to overcome, especially in horizontal, shallow-water channels. Compared with radio telecommunications, the available bandwidth is reduced by several orders of magnitude. Moreover, the low speed of sound causes multipath propagation to stretch over time-delay intervals of tens or hundreds of milliseconds, as well as significant Doppler shifts and spreading. Often acoustic communication systems are not limited by noise, but by reverberation and time variability beyond the capability of receiver algorithms. The fidelity of underwater communication links can be greatly improved by the use of hydrophone arrays, which allow processing techniques such as adaptive beamforming and diversity combining. Underwater navigation and tracking Underwater navigation and tracking is a common requirement for exploration and work by divers, ROVs, autonomous underwater vehicles (AUVs), crewed submersibles and submarines alike. Unlike most radio signals, which are quickly absorbed, sound propagates far underwater and at a rate that can be precisely measured or estimated. It can thus be used to measure distances between a tracked target and one or more reference (baseline) stations precisely, and to triangulate the position of the target, sometimes with centimeter accuracy. Starting in the 1960s, this has given rise to underwater acoustic positioning systems, which are now widely used. Seismic exploration Seismic exploration involves the use of low frequency sound (< 100 Hz) to probe deep into the seabed. Despite the relatively poor resolution due to their long wavelength, low frequency sounds are preferred because high frequencies are heavily attenuated when they travel through the seabed. Sound sources used include airguns, vibroseis and explosives. Weather and climate observation Acoustic sensors can be used to monitor the sound made by wind and precipitation. For example, an acoustic rain gauge is described by Nystuen. Lightning strikes can also be detected. Acoustic Thermometry of Ocean Climate (ATOC) uses low frequency sound to measure the global ocean temperature. Acoustical oceanography Acoustical oceanography is the use of underwater sound to study the sea, its boundaries and its contents. History Interest in developing echo ranging systems began in earnest following the sinking of the RMS Titanic in 1912. By sending a sound wave ahead of a ship, the theory went, a return echo bouncing off the submerged portion of an iceberg should give early warning of collisions. By directing the same type of beam downwards, the depth to the bottom of the ocean could be calculated. The first practical deep-ocean echo sounder was invented by Harvey C. Hayes, a U.S. Navy physicist. For the first time, it was possible to create a quasi-continuous profile of the ocean floor along the course of a ship. The first such profile was made by Hayes on board the U.S.S. Stewart, a Navy destroyer that sailed from Newport to Gibraltar between June 22 and 29, 1922. During that week, 900 deep-ocean soundings were made. Using a refined echo sounder, the German survey ship Meteor made several passes across the South Atlantic from the equator to Antarctica between 1925 and 1927, taking soundings every 5 to 20 miles.
Their work created the first detailed map of the Mid-Atlantic Ridge. It showed that the Ridge was a rugged mountain range, and not the smooth plateau that some scientists had envisioned. Since that time, both naval and research vessels have operated echo sounders almost continuously while at sea. Important contributions to acoustical oceanography have been made by: Leonid Brekhovskikh Walter Munk Herman Medwin John L. Spiesberger C.C. Leroy David E. Weston D. Van Holliday Charles Greenlaw Equipment used The earliest and most widespread use of sound and sonar technology to study the properties of the sea is the use of an echo sounder to measure water depth. Echo sounders were the devices used to map the many miles of the Santa Barbara Harbor ocean floor until 1993. A fathometer measures water depth by electronically transmitting sound from a ship and receiving the sound waves that bounce back from the bottom of the ocean. A paper chart moves through the fathometer and is calibrated to record the depth. As technology advanced, the development of high-resolution sonars in the second half of the 20th century made it possible not just to detect underwater objects but to classify and even image them. Electronic sensors and cameras are now attached to remotely operated vehicles (ROVs), which are carried by ships and robot submarines, giving oceanographers clear and precise images. Images can also be produced by sonar, using sound reflected off the ocean surroundings. Sound waves often reflect off animals, giving information which can be documented in further studies of animal behaviour. Marine biology Due to its excellent propagation properties, underwater sound is used as a tool to aid the study of marine life, from microplankton to the blue whale. Echo sounders are often used to provide data on marine life abundance, distribution, and behavior. Echo sounding, also referred to as hydroacoustics, is also used for fish location, quantity, size, and biomass estimation. Acoustic telemetry is also used for monitoring fish and marine wildlife. An acoustic transmitter is attached to the fish (sometimes internally) while an array of receivers listens to the information conveyed by the sound wave. This enables the researchers to track the movements of individuals on a small-to-medium scale. Pistol shrimp create sonoluminescent cavitation bubbles that reach extremely high temperatures as they collapse. Particle physics A neutrino is a fundamental particle that interacts very weakly with other matter. For this reason, it requires detection apparatus on a very large scale, and the ocean is sometimes used for this purpose. In particular, it is thought that ultra-high energy neutrinos in seawater can be detected acoustically.
Other applications Other applications include: rain rate measurement wind speed measurement global thermometry monitoring of ocean-atmospheric gas exchange Surveillance Towed Array Sensor System Acoustic Doppler current profiler for water speed measurement Acoustic camera Liquid sound Passive acoustic monitoring See also Underwater Audio, an electronics company Notes References Bibliography Further reading External links Acoustics Sound
Underwater acoustics
[ "Physics" ]
6,653
[ "Classical mechanics", "Acoustics" ]
10,568,323
https://en.wikipedia.org/wiki/Metolachlor
Metolachlor is an organic compound that is widely used as an herbicide. It is a derivative of aniline and is a member of the chloroacetanilide family of herbicides. It is highly effective toward grasses. Agricultural use Metolachlor was developed by Ciba-Geigy. It acts by inhibition of elongases and of the geranylgeranyl pyrophosphate (GGPP) cyclases, which are part of the gibberellin pathway. It is used for grass and broadleaf weed control in corn, soybean, peanuts, sorghum, and cotton. It is also used in combination with other herbicides. Metolachlor is a popular herbicide in the United States. As originally formulated, metolachlor was applied as a racemate, a 1:1 mixture of the (S)- and (R)-stereoisomers. The (R)-enantiomer is less active, and modern production methods afford a higher concentration of S-metolachlor, thus current application rates are far lower than original formulations. Production and basic structure Metolachlor is produced from 2-ethyl-6-methylaniline (MEA) via condensation with methoxyacetone. The resulting imine is hydrogenated to give primarily the S-stereoisomeric amine. This secondary amine is acetylated with chloroacetyl chloride. Because of the steric effects of the 2,6-disubstituted aniline, rotation about the aryl-C to N bond is restricted. Thus, both the (R)- and the (S)-enantiomers exist as atropisomers. Both atropisomers of (S)-metolachlor exhibit the same biological activity. Safety and ecological effects The European Chemicals Agency classified metolachlor as a suspected human carcinogen (Carcinogen category 2) in 2022. The United States Environmental Protection Agency (US EPA) has classified metolachlor as a Group C, possible human carcinogen, based on liver tumors in rats at the highest dose tested (HDT). Evidence of the bioaccumulation of metolachlor in edible species of fish, as well as its adverse effects on growth and development, has raised concerns about its effects on human and environmental health. For example, products with this active ingredient are restricted to professional licensed applicators in the U.S. state of Massachusetts. Though there is no set maximum concentration (maximum contaminant level, MCL) for metolachlor that is allowed in drinking water, the US EPA does have a health advisory level (HAL) of 0.525 mg/L. Metolachlor has been detected in ground and surface waters in concentrations ranging from 0.08 to 4.5 parts per billion (ppb) throughout the U.S. Metolachlor induces cytotoxic and genotoxic effects in human lymphocytes. Genotoxic effects have also been observed in tadpoles exposed to metolachlor. Evidence also reveals that metolachlor affects cell growth. Cell division in yeast was reduced, and chicken embryos exposed to metolachlor showed a significant decrease in the average body mass compared to the control. See also Acetochlor Alachlor Josiphos ligands References External links Herbicides Acetanilides Ethers Organochlorides Alkyl-substituted benzenes S-Metolachlor
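As an illustrative aside (not part of the original article), the structure described above can be checked computationally. The SMILES string below is my own rendering of 2-chloro-N-(2-ethyl-6-methylphenyl)-N-(1-methoxypropan-2-yl)acetamide and should be verified against an authoritative source; the sketch assumes the RDKit cheminformatics library is installed.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# SMILES written by hand from the systematic name; treat as an
# assumption to verify, not an authoritative registry entry.
metolachlor_smiles = "CCc1cccc(C)c1N(C(C)COC)C(=O)CCl"

mol = Chem.MolFromSmiles(metolachlor_smiles)
print(rdMolDescriptors.CalcMolFormula(mol))  # expected C15H22ClNO2
print(round(Descriptors.MolWt(mol), 1))      # expected ~283.8 g/mol
```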
Metolachlor
[ "Chemistry", "Biology" ]
743
[ "Herbicides", "Functional groups", "Organic compounds", "Ethers", "Biocides" ]
8,174,919
https://en.wikipedia.org/wiki/Thymopoietin
Lamina-associated polypeptide 2 (LAP2), isoforms beta/gamma is a protein that in humans is encoded by the TMPO gene. LAP2 is an inner nuclear membrane (INM) protein. Thymopoietin is a protein involved in the induction of CD90 in the thymus. The thymopoietin (TMPO) gene encodes three alternatively spliced mRNAs encoding proteins of 75 kDa (alpha), 51 kDa (beta) and 39 kDa (gamma), which are ubiquitously expressed in all cells. The human TMPO gene maps to chromosome band 12q22 and consists of eight exons. TMPO alpha is diffusely distributed within the cell nucleus, while TMPO beta and gamma are localized to the nuclear membrane. TMPO beta is a human homolog of the murine protein LAP2. LAP2 plays a role in the regulation of nuclear architecture by binding lamin B1 and chromosomes. This interaction is regulated by phosphorylation during mitosis. Given the nuclear localization of the three TMPO isoforms, it is unlikely that these proteins play any role in CD90 induction. Interactions Thymopoietin has been shown to interact with Barrier to autointegration factor 1, AKAP8L, LMNB1 and LMNA. References Further reading External links Proteins
Thymopoietin
[ "Chemistry" ]
288
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
8,176,527
https://en.wikipedia.org/wiki/Bilin%20%28biochemistry%29
Bilins, bilanes or bile pigments are biological pigments formed in many organisms as a metabolic product of certain porphyrins. Bilin (also called bilichrome) was named as a bile pigment of mammals, but can also be found in lower vertebrates and invertebrates, as well as red algae, green plants and cyanobacteria. Bilins can range in color from red, orange, yellow or brown to blue or green. In chemical terms, bilins are linear arrangements of four pyrrole rings (tetrapyrroles). In human metabolism, bilirubin is a breakdown product of heme. A modified bilane is an intermediate in the biosynthesis of uroporphyrinogen III from porphobilinogen. Examples of bilins are found in animals (cardinal examples are bilirubin and biliverdin), and phycocyanobilin, the chromophore of the photosynthetic pigment phycocyanin, occurs in algae and plants. In plants, bilins also serve as the photopigments of the photoreceptor protein phytochrome. An example of an invertebrate bilin is micromatabilin, which is responsible for the green color of the green huntsman spider, Micrommata virescens. In plants Most photosynthetic, oxygen-producing organisms contain the positive chlorophyll biosynthesis regulator GENOMES UNCOUPLED 4 (GUN4). Research suggests that GUN4 regulates chlorophyll synthesis by activating the enzyme magnesium chelatase, which catalyzes the insertion of Mg2+ into protoporphyrin IX. Bilins noncovalently bind to CrGUN4, an algal GUN4 from Chlamydomonas reinhardtii, which has been shown to participate in retrograde signaling. Bilin-binding protein in butterfly wings Butterfly wings have been identified as a site of porphyrin synthesis and cleavage in which bilin appears, as shown by the expression of the lipocalin bilin-binding protein in Pieris brassicae. The function of the biliprotein during wing development is still unknown, as was the existence of an active pathway for porphyrin synthesis and cleavage in insect wings until it was demonstrated in that work for the first time. The bilin-binding protein from Pieris brassicae, whose crystal structure has been determined, was one of the first members of the lipocalin protein superfamily, which has since grown significantly. It is a blue pigment protein that can be clearly identified by its amino acid sequence and crystal structure. The bilin-binding protein is predominantly present in the hemolymph, fat body, and epidermis of last-instar larvae and in the wings of the adult insect of Pieris brassicae. Although it has recently been discovered that three swallowtail butterfly larval color patterns are correlated with the combination of the bilin-binding protein and the yellow-related gene, additional physiological activities are still unknown. Normally, insect bilins are joined to proteins to create a variety of biliproteins, which have been identified in Lepidoptera and other insects. The presence of blue and yellow pigments contributes to the blue-green hue of some lepidopteran larvae. Blue pigments and yellow carotenoids are thought to work together as camouflage. The bilin-binding protein is a member of the lipocalin family, which includes extracellular proteins with a number of molecular ligand features in common, including the ability to bind small, primarily lipophilic compounds like retinol. Members of the lipocalin family have mostly been classified as transport proteins, but it is clear that they also perform a range of other tasks, including retinol transport, invertebrate cryptic coloring, olfaction, and pheromone transmission.
There is a lot of structural and functional variation in the lipocalin family, both within and between species. See also Bilirubin Biliverdin Biliprotein Phycobilin Phycobiliprotein Phycoerythrobilin Stercobilin Urobilin Gmelin's test References External links Biological pigments Organic pigments Tetrapyrroles Biomolecules
Bilin (biochemistry)
[ "Chemistry", "Biology" ]
898
[ "Natural products", "Organic compounds", "Structural biology", "Biomolecules", "Biochemistry", "Biological pigments", "Pigmentation", "Molecular biology" ]
8,177,392
https://en.wikipedia.org/wiki/Edward%20A.%20Guggenheim
Edward Armand Guggenheim FRS (11 August 1901 – 9 August 1970) was an English physical chemist, noted for his contributions to thermodynamics. Life Guggenheim was born in Manchester on 11 August 1901, the son of Armand Guggenheim and Marguerite Bertha Simon. His father was Swiss, a naturalised British citizen. Guggenheim married Simone Ganzin (died 1954) in 1934, and Ruth Helen Aitkin, born Clarke, a widow, in 1955. They had no children. He died in Reading, Berkshire, on 9 August 1970. Education Guggenheim was educated at Terra Nova School, Southport, Charterhouse School and Gonville and Caius College, Cambridge, where he obtained firsts in both the mathematics part 1 and chemistry part 2 triposes. Unable to gain a fellowship at the college, he went to Denmark, where he studied under J. N. Brønsted at the University of Copenhagen. Career Returning to England, he found a place at University College, London, where he wrote his first book, Modern Thermodynamics by the Methods of Willard Gibbs (1933), which "established his reputation and revolutionized the teaching of the subject". He was also a visiting professor of chemistry at Stanford University, and later became a reader in the chemical engineering department at Imperial College London. During World War II he worked on defence matters for the navy. In 1946 he was appointed professor of chemistry and head of department at Reading University, where he stayed until his retirement in 1966. Publications Guggenheim produced eleven books and more than 100 papers. His first book, Modern Thermodynamics by the Methods of Willard Gibbs (1933), was a 206-page, detailed study, with text, figures, index, and a preface by F. G. Donnan, showing how the analytical thermodynamic methods developed by Willard Gibbs lead in a straightforward manner to relations, concerning phases, equilibrium constants, solutions, systems, and laws, that are unambiguous and exact. This book, together with Gilbert N. Lewis and Merle Randall's 1923 textbook Thermodynamics and the Free Energy of Chemical Substances, is said to be responsible for the inception of the modern science of chemical thermodynamics. Other books included Statistical Thermodynamics with Ralph Fowler (1939), and Thermodynamics – an Advanced Treatment for Chemists and Physicists. In the preface to this book, he states that no thermodynamics book written before 1929 even attempts an account of any of the following matters: The modern definition of heat given by Max Born in 1921. The quantal theory of the entropy of gases and its experimental verification. Peter Debye's formulae for the activity coefficients of electrolytes. The use of electrochemical potentials of ions. The application of thermodynamics to dielectrics and to paramagnetic substances. Honours and awards Guggenheim was elected a Fellow of the Royal Society in 1946. In 1972, the E. A. Guggenheim Memorial Fund was established by friends and colleagues. The income from the fund is used to (a) award an annual prize and (b) provide a biennial or triennial memorial lecture on some topic of chemistry or physics appropriate to the interests of Guggenheim. The Guggenheim Medal was introduced in 2014 by the Institution of Chemical Engineers for significant contributions to research in thermodynamics and / or complex fluids. The first recipient (in 2015) was Professor George Jackson of Imperial College London.
See also Guggenheim scheme Stavermann–Guggenheim equation Bromley equation Entropy (energy dispersal) Non-random two-liquid model Specific ion interaction theory Thermodynamic activity References 1901 births 1970 deaths People educated at Charterhouse School Alumni of Gonville and Caius College, Cambridge English chemists Thermodynamicists Academics of the University of Reading Fellows of the Royal Society
Edward A. Guggenheim
[ "Physics", "Chemistry" ]
784
[ "Thermodynamics", "Thermodynamicists" ]
8,178,277
https://en.wikipedia.org/wiki/Microstructured%20optical%20arrays
Microstructured optical arrays (MOAs) are instruments for focusing x-rays. MOAs use total external reflection at grazing incidence from an array of small channels to bring x-rays to a common focus. This method of focusing means that MOAs exhibit low absorption. MOAs are used in applications that require x-ray focal spots on the order of a few micrometers or below, such as the radiobiology of individual cells. Current MOA-based focusing optics designs have two consecutive array components in order to reduce comatic aberration. Properties MOAs are achromatic (which means the focal properties do not change for radiation of different wavelengths), as they utilize grazing-incidence reflection. This means that they are able to focus chromatic radiation to a common point, unlike zone plates. MOAs are also adjustable, as the optic can be compressed to alter focal properties such as the focal length. Focal length can be calculated for the system in fig. 1 using the geometry shown in fig. 2, where it can be seen that changing the gap between the components (d+D in the figure) or the radius of curvature (R) will have a large effect on the focal length. MOAs have been used in the configurations shown in figs. 1 & 3, whereby one or both components can be adjusted. This has varying effects on the focal properties; in general it has been found that smaller focal spot sizes are apparent when MOAs are used as shown in fig. 1 with only the second component adjusted. The focal length of this system can be calculated from the geometry of the two components (the corresponding figure is not reproduced here). Manufacturing Current microstructured optical arrays are composed of silicon and created via the Bosch process, an example of deep reactive ion etching (not to be confused with the Haber–Bosch process). In the Bosch process the channels are etched into the silicon using a plasma in increments of a few micrometres. In between each etching step the silicon is coated with a polymer in order to preserve the integrity of the channel walls. Applications The focal spot size is important in x-ray microprobe instrumentation, where x-rays are focused onto a biological sample to investigate phenomena such as the bystander effect. To target a specific cell the focal spot size of the system must be around 10 micrometers, whereas to target specific areas of a cell such as the cytoplasm or the cell nucleus it should be no more than a few micrometers. Currently, only MOAs in the configuration shown in fig. 1 are thought to be able to achieve this. MOAs provide a good alternative to zone plates in microprobe use due to their adjustable focal properties (making cell alignment easier) and their ability to focus chromatic radiation to a single point. This is particularly useful when considering the finding that different effects can be observed using radiation of different wavelengths. References X-ray instrumentation Optical devices
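As a rough illustrative sketch (mine, not from the source): for slumped grazing-incidence reflectors of the lobster-eye type, a commonly quoted relation is f ≈ R/2, where R is the radius of curvature of the array. Whether and how this carries over to the two-component MOA geometry discussed above is an assumption made here purely to illustrate how compressing the optic (reducing R) shortens the focal length.

```python
def lobster_eye_focal_length(radius_of_curvature_mm):
    """Focal length under the standard lobster-eye approximation
    f ~ R/2. Treating the MOA as lobster-eye-like is an assumption
    for illustration only, not the source's design formula."""
    return radius_of_curvature_mm / 2

# Compressing the optic to a smaller radius of curvature shortens
# the focal length, consistent with the adjustability described above.
for R in (100.0, 80.0, 60.0):   # hypothetical radii in mm
    print(f"R = {R} mm -> f ~ {lobster_eye_focal_length(R)} mm")
```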
Microstructured optical arrays
[ "Materials_science", "Technology", "Engineering" ]
589
[ "Glass engineering and science", "X-ray instrumentation", "Optical devices", "Measuring instruments" ]
466,164
https://en.wikipedia.org/wiki/Onsager%20reciprocal%20relations
In thermodynamics, the Onsager reciprocal relations express the equality of certain ratios between flows and forces in thermodynamic systems out of equilibrium, but where a notion of local equilibrium exists. "Reciprocal relations" occur between different pairs of forces and flows in a variety of physical systems. For example, consider fluid systems described in terms of temperature, matter density, and pressure. In this class of systems, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. Perhaps surprisingly, the heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics (microscopic reversibility). The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once, with the limitation that "the principle of dynamical reversibility does not apply when (external) magnetic fields or Coriolis forces are present", in which case "the reciprocal relations break down". Though the fluid system is perhaps described most intuitively, the high precision of electrical measurements makes experimental realisations of Onsager's reciprocity easier in systems involving electrical phenomena. In fact, Onsager's 1931 paper refers to thermoelectricity and transport phenomena in electrolytes as well known from the 19th century, including "quasi-thermodynamic" theories by Thomson and Helmholtz respectively. Onsager's reciprocity in the thermoelectric effect manifests itself in the equality of the Peltier (heat flow caused by a voltage difference) and Seebeck (electric current caused by a temperature difference) coefficients of a thermoelectric material. Similarly, the so-called "direct piezoelectric" (electric current produced by mechanical stress) and "reverse piezoelectric" (deformation produced by a voltage difference) coefficients are equal. For many kinetic systems, like the Boltzmann equation or chemical kinetics, the Onsager relations are closely connected to the principle of detailed balance and follow from it in the linear approximation near equilibrium. Experimental verifications of the Onsager reciprocal relations were collected and analyzed by D. G. Miller for many classes of irreversible processes, namely for thermoelectricity, electrokinetics, transference in electrolytic solutions, diffusion, conduction of heat and electricity in anisotropic solids, thermomagnetism and galvanomagnetism. In this classical review, chemical reactions are considered as cases with meager and inconclusive evidence. Further theoretical analysis and experiments support the reciprocal relations for chemical kinetics with transport. Kirchhoff's law of thermal radiation is another special case of the Onsager reciprocal relations applied to the wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium.
For his discovery of these reciprocal relations, Lars Onsager was awarded the 1968 Nobel Prize in Chemistry. The presentation speech referred to the three laws of thermodynamics and then added "It can be said that Onsager's reciprocal relations represent a further law making a thermodynamic study of irreversible processes possible." Some authors have even described Onsager's relations as the "Fourth law of thermodynamics". Example: Fluid system The fundamental equation The basic thermodynamic potential is internal energy. In a simple fluid system, neglecting the effects of viscosity, the fundamental thermodynamic equation is written: $dU = T\,dS - P\,dV + \mu\,dM$, where U is the internal energy, T is temperature, S is entropy, P is the hydrostatic pressure, V is the volume, $\mu$ is the chemical potential, and M is the mass. In terms of the internal energy density $u$, entropy density $s$, and mass density $\rho$, the fundamental equation at fixed volume is written: $du = T\,ds + \mu\,d\rho$. For non-fluid or more complex systems there will be a different collection of variables describing the work term, but the principle is the same. The above equation may be solved for the entropy density: $ds = \frac{1}{T}\,du + \left(-\frac{\mu}{T}\right)d\rho$. The above expression of the first law in terms of entropy change defines the entropic conjugate variables of $u$ and $\rho$, which are $1/T$ and $-\mu/T$ and are intensive quantities analogous to potential energies; their gradients are called thermodynamic forces as they cause flows of the corresponding extensive variables as expressed in the following equations. The continuity equations The conservation of mass is expressed locally by the fact that the flow of mass density $\rho$ satisfies the continuity equation: $\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J}_\rho = 0$, where $\mathbf{J}_\rho$ is the mass flux vector. The formulation of energy conservation is generally not in the form of a continuity equation because it includes contributions both from the macroscopic mechanical energy of the fluid flow and of the microscopic internal energy. However, if we assume that the macroscopic velocity of the fluid is negligible, we obtain energy conservation in the following form: $\frac{\partial u}{\partial t} + \nabla \cdot \mathbf{J}_u = 0$, where $u$ is the internal energy density and $\mathbf{J}_u$ is the internal energy flux. Since we are interested in a general imperfect fluid, entropy is locally not conserved and its local evolution can be given in the form of entropy density as $\frac{\partial s}{\partial t} + \nabla \cdot \mathbf{J}_s = \sigma_s$, where $\sigma_s$ is the rate of increase in entropy density due to the irreversible processes of equilibration occurring in the fluid and $\mathbf{J}_s$ is the entropy flux. The phenomenological equations In the absence of matter flows, Fourier's law is usually written: $\mathbf{J}_u = -k\,\nabla T$, where $k$ is the thermal conductivity. However, this law is just a linear approximation, and holds only for the case where $\nabla T$ is small, with the thermal conductivity possibly being a function of the thermodynamic state variables, but not their gradients or time rate of change. Assuming that this is the case, Fourier's law may just as well be written: $\mathbf{J}_u = k T^2\,\nabla \frac{1}{T}$. In the absence of heat flows, Fick's law of diffusion is usually written: $\mathbf{J}_\rho = -D\,\nabla \rho$, where D is the coefficient of diffusion. Since this is also a linear approximation and since the chemical potential is monotonically increasing with density at a fixed temperature, Fick's law may just as well be written: $\mathbf{J}_\rho = D'\,\nabla\!\left(-\frac{\mu}{T}\right)$, where, again, $D'$ is a function of thermodynamic state parameters, but not their gradients or time rate of change. For the general case in which there are both mass and energy fluxes, the phenomenological equations may be written as: $\mathbf{J}_u = L_{uu}\,\nabla\frac{1}{T} + L_{u\rho}\,\nabla\!\left(-\frac{\mu}{T}\right)$ and $\mathbf{J}_\rho = L_{\rho u}\,\nabla\frac{1}{T} + L_{\rho\rho}\,\nabla\!\left(-\frac{\mu}{T}\right)$, or, more concisely, $\mathbf{J}_\alpha = \sum_\beta L_{\alpha\beta}\,\nabla f_\beta$, where the entropic "thermodynamic forces" conjugate to the "displacements" $u$ and $\rho$ are $f_u = 1/T$ and $f_\rho = -\mu/T$, and $L_{\alpha\beta}$ is the Onsager matrix of transport coefficients.
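The linear flux-force relations above lend themselves to a small numeric illustration (mine, not the article's): with a symmetric, positive semi-definite Onsager matrix L, the fluxes follow from the forces by matrix multiplication, and the quadratic form X·L·X, which the next section identifies with the rate of entropy production, comes out non-negative. All numbers are arbitrary.

```python
import numpy as np

# Arbitrary illustrative Onsager matrix: symmetric (L_ur = L_ru)
# and positive semi-definite, as the theory requires.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Thermodynamic forces grad(1/T) and grad(-mu/T), arbitrary values.
X = np.array([0.3, -0.1])

J = L @ X          # fluxes (J_u, J_rho) from the linear relations
sigma = X @ L @ X  # quadratic form = rate of entropy production

assert np.allclose(L, L.T)                  # Onsager symmetry
assert np.all(np.linalg.eigvalsh(L) >= 0)   # positive semi-definite
assert sigma >= 0                           # entropy production >= 0
print(J, sigma)    # [0.55 0.05] 0.16
```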
The rate of entropy production From the fundamental equation, it follows that: $\frac{\partial s}{\partial t} = \frac{1}{T}\frac{\partial u}{\partial t} + \left(-\frac{\mu}{T}\right)\frac{\partial \rho}{\partial t}$ and $\mathbf{J}_s = \frac{1}{T}\mathbf{J}_u + \left(-\frac{\mu}{T}\right)\mathbf{J}_\rho$. Using the continuity equations, the rate of entropy production may now be written: $\sigma_s = \mathbf{J}_u \cdot \nabla\frac{1}{T} + \mathbf{J}_\rho \cdot \nabla\!\left(-\frac{\mu}{T}\right)$ and, incorporating the phenomenological equations: $\sigma_s = L_{uu}\left(\nabla\frac{1}{T}\right)^2 + \left(L_{u\rho}+L_{\rho u}\right)\nabla\frac{1}{T}\cdot\nabla\!\left(-\frac{\mu}{T}\right) + L_{\rho\rho}\left(\nabla\!\left(-\frac{\mu}{T}\right)\right)^2$. It can be seen that, since the entropy production must be non-negative, the Onsager matrix of phenomenological coefficients $L_{\alpha\beta}$ is a positive semi-definite matrix. The Onsager reciprocal relations Onsager's contribution was to demonstrate that not only is $L_{\alpha\beta}$ positive semi-definite, it is also symmetric, except in cases where time-reversal symmetry is broken. In other words, the cross-coefficients $L_{u\rho}$ and $L_{\rho u}$ are equal. The fact that they are at least proportional is suggested by simple dimensional analysis (i.e., both coefficients are measured in the same units of temperature times mass density). The rate of entropy production for the above simple example uses only two entropic forces, and a 2×2 Onsager phenomenological matrix. The expression for the linear approximation to the fluxes and the rate of entropy production can very often be expressed in an analogous way for many more general and complicated systems. Abstract formulation Let $x_1, x_2, \ldots, x_n$ denote fluctuations from equilibrium values in several thermodynamic quantities, and let $S(x_1, x_2, \ldots, x_n)$ be the entropy. Then, Boltzmann's entropy formula gives for the probability distribution function $w = A\,e^{S/k}$, where A is a constant, since the probability of a given set of fluctuations is proportional to the number of microstates with that fluctuation. Assuming the fluctuations are small, the probability distribution function can be expressed through the second differential of the entropy: $w = A\exp\!\left(-\tfrac{1}{2}\beta_{ik}\,x_i x_k\right)$, where we are using Einstein summation convention and $\beta_{ik}$ is a positive definite symmetric matrix. Using the quasi-stationary equilibrium approximation, that is, assuming that the system is only slightly non-equilibrium, we have $\dot{x}_i = -\lambda_{ik}\,x_k$. Suppose we define thermodynamic conjugate quantities as $X_i = -\frac{1}{k}\frac{\partial S}{\partial x_i}$, which can also be expressed as linear functions (for small fluctuations): $X_i = \beta_{ik}\,x_k$. Thus, we can write $\dot{x}_i = -\gamma_{ik}\,X_k$, where $\gamma_{ik}$ are called kinetic coefficients. The principle of symmetry of kinetic coefficients or Onsager's principle states that $\gamma$ is a symmetric matrix, that is $\gamma_{ik} = \gamma_{ki}$. Proof Define mean values $\xi_i(t)$ and $\Xi_i(t)$ of fluctuating quantities $x_i$ and $X_i$ respectively such that they take given values $x_1, \ldots, x_n$ at $t = 0$. Note that $\dot{\xi}_i(t) = -\gamma_{ik}\,\Xi_k(t)$. Symmetry of fluctuations under time reversal implies that $\langle x_i(t)\,x_k(0)\rangle = \langle x_i(-t)\,x_k(0)\rangle = \langle x_i(0)\,x_k(t)\rangle$, or, with $\xi_i(t)$, we have $\langle \xi_i(t)\,x_k\rangle = \langle x_i\,\xi_k(t)\rangle$. Differentiating with respect to $t$ and substituting, we get $\gamma_{il}\langle \Xi_l(t)\,x_k\rangle = \gamma_{kl}\langle x_i\,\Xi_l(t)\rangle$. Putting $t = 0$ in the above equation, $\gamma_{il}\langle X_l\,x_k\rangle = \gamma_{kl}\langle x_i\,X_l\rangle$. It can be easily shown from the definition that $\langle X_l\,x_k\rangle = \delta_{lk}$, and hence, we have the required result. See also Lars Onsager Langevin equation References Eponymous equations of physics Laws of thermodynamics Non-equilibrium thermodynamics Thermodynamic equations
Onsager reciprocal relations
[ "Physics", "Chemistry", "Mathematics" ]
1,945
[ "Thermodynamic equations", "Equations of physics", "Non-equilibrium thermodynamics", "Eponymous equations of physics", "Thermodynamics", "Laws of thermodynamics", "Dynamical systems" ]
466,192
https://en.wikipedia.org/wiki/Thermal%20equilibrium
Two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. Thermal equilibrium obeys the zeroth law of thermodynamics. A system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant. Systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. If the connection between the systems allows transfer of energy as 'change in internal energy' but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium. Two varieties of thermal equilibrium Relation of thermal equilibrium between two thermally connected bodies The relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer of energy as heat through a selectively permeable partition, one that does not pass matter or work; it is called a diathermal connection. According to Lieb and Yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. It is not included in the essential meaning whether it is or is not transitive. After discussing the semantics of the definition, they postulate a substantial physical axiom, that they call the "zeroth law of thermodynamics", that thermal equilibrium is a transitive relation. They comment that the equivalence classes of systems so established are called isotherms. Internal thermal equilibrium of an isolated body Thermal equilibrium of a body in itself refers to the body when it is isolated. The background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. When it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. It is not implied that it is necessarily in other kinds of internal equilibrium. For example, it is possible that a body might reach internal thermal equilibrium but not be in internal chemical equilibrium; glass is an example. One may imagine an isolated system, initially not in its own state of internal thermal equilibrium. It could be subjected to a fictive thermodynamic operation of partition into two subsystems separated by nothing, no wall. One could then consider the possibility of transfers of energy as heat between the two subsystems. A long time after the fictive partition operation, the two subsystems will reach a practically stationary state, and so be in the relation of thermal equilibrium with each other. Such an adventure could be conducted in indefinitely many ways, with different fictive partitions. All of them will result in subsystems that could be shown to be in thermal equilibrium with each other, testing subsystems from different partitions. For this reason, an isolated system, initially not in its own state of internal thermal equilibrium, but left for a long time, practically always will reach a final state which may be regarded as one of internal thermal equilibrium. Such a final state is one of spatial uniformity or homogeneity of temperature. The existence of such states is a basic postulate of classical thermodynamics. This postulate is sometimes, but not often, called the minus first law of thermodynamics. A notable exception exists for isolated quantum systems which are many-body localized and which never reach internal thermal equilibrium.
Thermal contact Heat can flow into or out of a closed system by way of thermal conduction or of thermal radiation to or from a thermal reservoir, and when this process is effecting net transfer of heat, the system is not in thermal equilibrium. While the transfer of energy as heat continues, the system's temperature can be changing. Bodies prepared with separately uniform temperatures, then put into purely thermal communication with each other If bodies are prepared with separately microscopically stationary states, and are then put into purely thermal connection with each other, by conductive or radiative pathways, they will be in thermal equilibrium with each other just when the connection is followed by no change in either body. But if initially they are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway, conductive or radiative, is available, and this flow will continue until thermal equilibrium is reached and then they will have the same temperature. One form of thermal equilibrium is radiative exchange equilibrium. Two bodies, each with its own uniform temperature, in solely radiative connection, no matter how far apart, or what partially obstructive, reflective, or refractive, obstacles lie in their path of radiative exchange, not moving relative to one another, will exchange thermal radiation, in net the hotter transferring energy to the cooler, and will exchange equal and opposite amounts just when they are at the same temperature. In this situation, Kirchhoff's law of equality of radiative emissivity and absorptivity and the Helmholtz reciprocity principle are in play. Change of internal state of an isolated system If an initially isolated physical system, without internal walls that establish adiabatically isolated subsystems, is left long enough, it will usually reach a state of thermal equilibrium in itself, in which its temperature will be uniform throughout, but not necessarily a state of thermodynamic equilibrium, if there is some structural barrier that can prevent some possible processes in the system from reaching equilibrium; glass is an example. Classical thermodynamics in general considers idealized systems that have reached internal equilibrium, and idealized transfers of matter and energy between them. An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. 
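A minimal numeric sketch (mine, not the article's) of the final state described in the examples above: when two bodies exchange energy only as heat, conservation of energy fixes the common final temperature, and with constant heat capacities it is the capacity-weighted mean of the initial temperatures. The values below are arbitrary.

```python
def equilibrium_temperature(c1, t1, c2, t2):
    """Common final temperature of two bodies in thermal contact,
    assuming constant heat capacities c1, c2 (J/K) and no heat loss:
    energy conservation gives c1*(T - t1) + c2*(T - t2) = 0."""
    return (c1 * t1 + c2 * t2) / (c1 + c2)

# Two equal iron rods, one at 400 K and one at 300 K, settle at
# the capacity-weighted mean, here 350 K.
print(equilibrium_temperature(500.0, 400.0, 500.0, 300.0))
```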
A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature. Such changes in isolated systems are irreversible in the sense that while such a change will occur spontaneously whenever the system is prepared in the same way, the reverse change will practically never occur spontaneously within the isolated system; this is a large part of the content of the second law of thermodynamics. Truly perfectly isolated systems do not occur in nature, and always are artificially prepared. In a gravitational field One may consider a system contained in a very tall adiabatically isolating vessel with rigid walls initially containing a thermally heterogeneous distribution of material, left for a long time under the influence of a steady gravitational field, along its tall dimension, due to an outside body such as the earth. It will settle to a state of uniform temperature throughout, though not of uniform pressure or density, and perhaps containing several phases. It is then in internal thermal equilibrium and even in thermodynamic equilibrium. This means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers. Considerations of kinetic theory or statistical mechanics also support this statement. Distinctions between thermal and thermodynamic equilibria There is an important distinction between thermal and thermodynamic equilibrium. According to Münster (1970), in states of thermodynamic equilibrium, the state variables of a system do not change at a measurable rate. Moreover, "The proviso 'at a measurable rate' implies that we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Also, a state of thermodynamic equilibrium can be described by fewer macroscopic variables than any other state of a given body of matter. A single isolated body can start in a state which is not one of thermodynamic equilibrium, and can change till thermodynamic equilibrium is reached. Thermal equilibrium is a relation between two bodies or closed systems, in which transfers are allowed only of energy and take place through a partition permeable to heat, and in which the transfers have proceeded till the states of the bodies cease to change. An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by C.J. Adkins. He allows that two systems might be allowed to exchange heat but be constrained from exchanging work; they will naturally exchange heat till they have equal temperatures, and reach thermal equilibrium, but in general, will not be in thermodynamic equilibrium. They can reach thermodynamic equilibrium when they are allowed also to exchange work. Another explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring. 
He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process." Thermal equilibrium of planets A planet is in thermal equilibrium when the incident energy reaching it (typically the solar irradiance from its parent star) is equal to the infrared energy radiated away to space. See also Thermal center Thermodynamic equilibrium Radiative equilibrium Thermal oscillator Citations Citation references Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, third edition, McGraw-Hill, London, . Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley. Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London. Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108-248, 343-524, reprinted in The Collected Works of J. Willard Gibbs, Ph.D, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353. Maxwell, J.C. (1867). On the dynamical theory of gases, Phil. Trans. Roy. Soc. London, 157: 49–88. Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London. Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London. Planck, M., (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, first English edition, Longmans, Green and Co., London. Planck, M. (1914). The Theory of Heat Radiation, second edition translated by M. Masius, P. Blakiston's Son and Co., Philadelphia. ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA. Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA. Temperature Physical quantities Heat transfer Thermodynamics
Thermal equilibrium
[ "Physics", "Chemistry", "Mathematics" ]
2,764
[ "Transport phenomena", "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Heat transfer", "SI base quantities", "Intensive quantities", "Quantity", "Thermodynamics", "Wikipedia categories named after physical quantities", "P...
467,047
https://en.wikipedia.org/wiki/Thermal%20energy
The term "thermal energy" is often used ambiguously in physics and engineering. It can denote several different physical concepts, including: Internal energy: The energy contained within a body of matter or radiation, excluding the potential energy of the whole system, and excluding the kinetic energy of the system moving as a whole. Heat: Energy in transfer between a system and its surroundings by mechanisms other than thermodynamic work and transfer of matter. The characteristic energy associated with a single microscopic degree of freedom, where denotes temperature and denotes the Boltzmann constant. Mark Zemansky (1970) has argued that the term “thermal energy” is best avoided due to its ambiguity. He suggests using more precise terms such as “internal energy” and “heat” to avoid confusion. The term is, however, used in some textbooks. Relation between heat and internal energy In thermodynamics, heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity in transfer between systems, not to a property of any one system, or "contained" within it; on the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurs. In contrast, internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there. Macroscopic thermal energy In addition to the microscopic kinetic energies of its molecules, the internal energy of a body includes chemical energy belonging to distinct molecules, and the global joint potential energy involved in the interactions between molecules and suchlike. Thermal energy may be viewed as contributing to internal energy or to enthalpy. Chemical internal energy The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, sometimes convenient to say that "the chemical potential energy has been converted into thermal energy". This is expressed in ordinary traditional language by talking of 'heat of reaction'. Potential energy of internal interactions In a body of material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body. Still, they are not immediately apparent in the kinetic energies of molecules, as manifest in temperature. Such energies of interaction may be thought of as contributions to the global internal microscopic potential energies of the body. Microscopic thermal energy In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is just the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. 
For a gas that does not have particle interactions except for instantaneous collisions, the term "thermal energy" is effectively synonymous with "internal energy". In many statistical physics texts, "thermal energy" refers to $k_{\mathrm{B}}T$, the product of the Boltzmann constant and the absolute temperature, also written as $kT$. Thermal current density When there is no accompanying flow of matter, the term "thermal energy" is also applied to the energy carried by a heat flow. See also Geothermal energy Geothermal heating Geothermal power Heat transfer Ocean thermal energy conversion Orders of magnitude (temperature) Thermal energy storage References Thermodynamic properties Forms of energy Physics
Thermal energy
[ "Physics", "Chemistry", "Mathematics" ]
781
[ "Thermodynamic properties", "Physical quantities", "Quantity", "Forms of energy", "Energy (physics)", "Thermodynamics" ]
467,111
https://en.wikipedia.org/wiki/Ivan%20Vinogradov
Ivan Matveevich Vinogradov (14 September 1891 – 20 March 1983) was a Soviet mathematician, who was one of the creators of modern analytic number theory, and also a dominant figure in mathematics in the USSR. He was born in the Velikiye Luki district, Pskov Oblast. He graduated from the University of St. Petersburg, where in 1920 he became a Professor. From 1934 he was a Director of the Steklov Institute of Mathematics, a position he held for the rest of his life, except for the five-year period (1941–1946) when the institute was directed by Academician Sergei Sobolev. In 1941 he was awarded the Stalin Prize. He was elected to the American Philosophical Society in 1942. In 1951 he became a foreign member of the Polish Academy of Sciences and Letters in Kraków. Mathematical contributions In analytic number theory, Vinogradov's method refers to his main problem-solving technique, applied to central questions involving the estimation of exponential sums. In its most basic form, it is used to estimate sums over prime numbers, or Weyl sums. It is a reduction from a complicated sum to a number of smaller sums which are then simplified. The canonical form for prime number sums is $S = \sum_{p \le N} \exp\big(2\pi i\,f(p)\big)$. With the help of this method, Vinogradov tackled questions such as the ternary Goldbach problem in 1937 (using Vinogradov's theorem), and the zero-free region for the Riemann zeta function. His own use of it was inimitable; in terms of later techniques, it is recognised as a prototype of the large sieve method in its application of bilinear forms, and also as an exploitation of combinatorial structure. In some cases his results resisted improvement for decades. He also used this technique on the Dirichlet divisor problem, allowing him to estimate the number of integer points under an arbitrary curve. This was an improvement on the work of Georgy Voronoy. In 1918 Vinogradov proved the Pólya–Vinogradov inequality for character sums. Personality and career Vinogradov served as director of the Mathematical Institute for 49 years. For his long service he was twice awarded the title of Hero of Socialist Labour. The house where he was born was converted into his memorial – a unique honour among Russian mathematicians. As the head of a leading mathematical institute, Vinogradov enjoyed significant influence in the Academy of Sciences and was regarded as an informal leader of Soviet mathematicians, not always in a positive way: his anti-Semitic feelings led him to hinder the careers of many prominent Soviet mathematicians. Although he was always faithful to the official line, he was never a member of the Communist Party and his overall mindset was nationalistic rather than communist. This can at least partly be attributed to his origins: his father was a priest of the Russian Orthodox Church. Vinogradov was enormously strong: in some recollections it is stated that he could lift a chair with a person sitting on it by holding the leg of the chair in his hands. He was never married and was very attached to his dacha in Abramtsevo, where he spent all his weekends and vacations (together with his sister Nadezhda, also unmarried) enjoying flower gardening. He had friendly relations with the president of the Russian Academy of Sciences Mstislav Keldysh and Mikhail Lavrentyev, both mathematicians whose careers started in his institute. References Bibliography Selected Works, Berlin; New York: Springer-Verlag, 1985. Vinogradov, I. M. Elements of Number Theory. Mineola, NY: Dover Publications, 2003. Vinogradov, I. M.
Method of Trigonometrical Sums in the Theory of Numbers. Mineola, NY: Dover Publications, 2004, . Vinogradov I. M. (Ed.) Matematicheskaya entsiklopediya. Moscow: Sov. Entsiklopediya 1977. Now translated as the Encyclopaedia of Mathematics. External links Vinogradov memorial Memoirs of colleagues DOC PDF Memoirs of his opponent academician Sergei Novikov Vinogradov in Abramtsevo, memoirs 1891 births 1983 deaths People from Velikoluksky District People from Velikoluksky Uyezd Soviet mathematicians Russian mathematicians Number theorists Saint Petersburg State University alumni Academic staff of Perm State University Academic staff of Tomsk State University Academic staff of the Steklov Institute of Mathematics Full Members of the USSR Academy of Sciences Members of the German Academy of Sciences at Berlin Foreign members of the Serbian Academy of Sciences and Arts Foreign members of the Royal Society Recipients of the Stalin Prize Recipients of the USSR State Prize Recipients of the Lenin Prize Heroes of Socialist Labour Recipients of the Order of Lenin Recipients of the Lomonosov Gold Medal Members of the American Philosophical Society Russian scientists
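A small numeric sketch (mine, not from the article) of the prime exponential sum $S = \sum_{p \le N} \exp(2\pi i f(p))$ displayed above, for the linear choice f(p) = αp with an irrational α chosen arbitrarily; what estimation theory addresses is how much cancellation |S| exhibits compared with the trivial bound π(N), the number of terms.

```python
import cmath

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def prime_exp_sum(n, f):
    """S = sum over primes p <= n of exp(2*pi*i*f(p))."""
    return sum(cmath.exp(2j * cmath.pi * f(p)) for p in primes_up_to(n))

alpha = 2 ** 0.5
N = 10_000
S = prime_exp_sum(N, lambda p: alpha * p)
# |S| is typically far below the trivial bound pi(N) = 1229 here,
# reflecting the cancellation that Vinogradov's method quantifies.
print(abs(S), len(primes_up_to(N)))
```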
Ivan Vinogradov
[ "Mathematics", "Technology" ]
999
[ "Science and technology awards", "Number theorists", "Recipients of the Lomonosov Gold Medal", "Number theory" ]
467,183
https://en.wikipedia.org/wiki/Wide-bandgap%20semiconductor
Wide-bandgap semiconductors (also known as WBG semiconductors or WBGSs) are semiconductor materials which have a larger band gap than conventional semiconductors. Conventional semiconductors like silicon and selenium have a bandgap in the range of 0.7 – 1.5 electronvolt (eV), whereas wide-bandgap materials have bandgaps in the range above 2 eV. Generally, wide-bandgap semiconductors have electronic properties which fall in between those of conventional semiconductors and insulators. Wide-bandgap semiconductors permit devices to operate at much higher voltages, frequencies, and temperatures than conventional semiconductor materials like silicon and gallium arsenide. They are the key component used to make short-wavelength (green-UV) LEDs or lasers, and are also used in certain radio frequency applications, notably military radars. Their intrinsic qualities make them suitable for a wide range of other applications, and they are one of the leading contenders for next-generation devices for general semiconductor use. The wider bandgap is particularly important for allowing devices that use them to operate at much higher temperatures, on the order of 300 °C. This makes them highly attractive for military applications, where they have seen a fair amount of use. The high temperature tolerance also means that these devices can be operated at much higher power levels under normal conditions. Additionally, most wide-bandgap materials also have a much higher critical electrical field density, on the order of ten times that of conventional semiconductors. Combined, these properties allow them to operate at much higher voltages and currents, which makes them highly valuable in military, radio, and power conversion applications. The US Department of Energy believes they will be a foundational technology in new electrical grid and alternative energy devices, as well as the robust and efficient power components used in high-power vehicles from plug-in electric vehicles to electric trains. Most wide-bandgap materials also have high free-electron velocities, which allows them to work at higher switching speeds, which adds to their value in radio applications. A single WBG device can be used to make a complete radio system, eliminating the need for separate signal and radio-frequency components, while operating at higher frequencies and power levels. Research and development of wide-bandgap materials lags behind that of conventional semiconductors, which have received massive investment since the 1970s. However, their clear inherent advantages in many applications, combined with some unique properties not found in conventional semiconductors, has led to increasing interest in their use in everyday electronic devices instead of silicon. Their ability to handle higher power density is particularly attractive for attempts to sustain Moore's law – the observed steady rate of increase in the density of transistors on an integrated circuit, which has, over decades, doubled roughly every two years. Conventional technologies, however, appear to be reaching a plateau of transistor density. Use in devices Wide-bandgap materials have several characteristics that make them useful compared to narrower bandgap materials. The higher energy gap gives devices the ability to operate at higher temperatures, as bandgaps typically shrink with increasing temperature, which can be problematic when using conventional semiconductors. For some applications, wide-bandgap materials allow devices to switch larger voltages. 
The wide bandgap also brings the electronic transition energy into the range of the energy of visible light, and hence light-emitting devices such as light-emitting diodes (LEDs) and semiconductor lasers can be made that emit in the visible spectrum, or even produce ultraviolet radiation. Solid-state lighting using wide-bandgap semiconductors has the potential to reduce the amount of energy required to provide lighting compared with incandescent lights, which have a luminous efficacy of less than 20 lumens per watt. The efficacy of LEDs is on the order of 160 lumens per watt.

Wide-bandgap semiconductors can also be used in RF signal processing. Silicon-based power transistors are reaching limits of operating frequency, breakdown voltage, and power density. Wide-bandgap materials can be used in high-temperature and power switching applications.

Materials
- The only wide-bandgap materials in group IV are diamond and silicon carbide (SiC).
- The only semiconducting material in group III is boron, with a wider bandgap than silicon, selenium, and germanium.

There are many III–V and II–VI compound semiconductors with wide bandgaps. In the III–V semiconductor family, aluminium nitride (AlN) is used to fabricate ultraviolet LEDs with wavelengths down to 200–250 nm, gallium nitride (GaN) is used to make blue LEDs and laser diodes, and boron nitride (BN) is proposed for blue LEDs.

Table of common wide-bandgap semiconductors

Materials properties
Bandgap
Quantum mechanics gives rise to a series of distinct electron energy levels, or bands, that vary from material to material. Each band can hold a certain number of electrons; if the atom has more electrons, they are forced into higher energy bands. In the presence of external energy, some of the electrons will gain energy and move up the energy bands, before releasing it and falling back down to a lower band. With the constant application of external energy, like the thermal energy present at room temperature, an equilibrium is reached where the population of electrons moving up and down the bands is equal. Depending on the distribution of the energy bands, and the "band gap" between them, materials will have very different electrical properties. For instance, at room temperature most metals have a series of partially filled bands that allow electrons to be added or removed with little applied energy. When tightly packed together, electrons can easily move from atom to atom, making them excellent conductors. In comparison, most plastic materials have widely spaced energy levels that require considerable energy to move electrons between their atoms, making them natural insulators. Semiconductors are those materials that have both types of bands, and at normal operational temperatures, some electrons are in both bands. In semiconductors, adding a small amount of energy pushes more electrons into the conduction band, making them more conductive and allowing current to flow like a conductor. Reversing the polarity of this applied energy pushes the electrons into the more widely separated bands, making them insulators and stopping the flow. Since the amount of energy needed to push the electrons between these two levels is very small, semiconductors allow switching with very little energy input. However, this switching process depends on the electrons being naturally distributed between the two states, so small inputs cause the population statistics to change rapidly.
As the external temperature changes, due to the Maxwell–Boltzmann distribution, more and more electrons will normally find themselves in one state or the other, causing the switching action to occur on its own, or stop entirely. The size of the atoms and the number of protons in the atom are the primary predictors of the strength and layout of the bandgaps. Materials with small atoms and strong atomic bonds are associated with wide bandgaps. With regard to III–V compounds, nitrides are associated with the largest bandgaps. Bandgaps can be engineered by alloying, and Vegard's law states that there is a linear relation between lattice constant and composition of a solid solution at constant temperature. The position of the conduction band minima versus maxima in the band structure determines whether a bandgap is direct or indirect, where direct bandgap materials absorb light strongly, and indirect bandgap materials absorb less strongly. Likewise, direct bandgap materials emit light strongly, while indirect bandgap semiconductors are poor light emitters, unless dopants are added which couple strongly to light.

Optical properties
The connection between the wavelength and the bandgap is that the energy of the bandgap is the minimum energy that is needed to excite an electron into the conduction band. In order for an unassisted photon to cause this excitation, it must have at least that much energy. In the opposite process, when excited electron-hole pairs undergo recombination, photons are generated with energies that correspond to the magnitude of the bandgap. The bandgap determines the wavelength at which LEDs emit light and the wavelength at which photovoltaics operate most efficiently. Wide-bandgap devices are therefore useful at shorter wavelengths than other semiconductor devices. The bandgap for GaAs of 1.4 eV, for example, corresponds to a wavelength of approximately 890 nm, which is infrared light (the equivalent wavelength for light energy can be determined by dividing the constant 1240 nm·eV by the energy in eV, so 1240 nm·eV / 1.4 eV ≈ 886 nm). Since the highest efficiency would be produced from a photovoltaic cell with layers tuned to the different regions of the solar spectrum, modern multi-junction solar cells have multiple layers with different bandgaps, and wide-bandgap semiconductors are a key component for collecting the part of the spectrum beyond the infrared. The use of LEDs in lighting applications depends particularly on the development of wide-bandgap nitride semiconductors.

Breakdown field
Impact ionization is generally considered the cause of breakdown. At the point of breakdown, electrons in a semiconductor have gained sufficient kinetic energy to produce carriers when they collide with lattice atoms. Wide-bandgap semiconductors are associated with a high breakdown voltage, because a larger electric field is required to generate carriers through impact ionization. At high electric fields, drift velocity saturates due to scattering from optical phonons. A higher optical phonon energy results in fewer optical phonons at a particular temperature; there are therefore fewer scattering centers, and electrons in wide-bandgap semiconductors can achieve high peak velocities. The drift velocity reaches a peak at an intermediate electric field and undergoes a small drop at higher fields.
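As an illustration of the bandgap–wavelength relation described under Optical properties above, the following is a minimal Python sketch of the λ ≈ 1240 nm·eV / E conversion. It is not from the source article; the SiC bandgap value is a textbook figure included here as an assumption, while the GaAs and GaN values appear in the surrounding text.

```python
# Minimal sketch: bandgap energy -> emission/absorption-edge wavelength,
# using the approximation lambda (nm) ~= 1240 / E (eV) from the text above.

def bandgap_to_wavelength_nm(bandgap_ev: float) -> float:
    """Return the photon wavelength (nm) matching a bandgap energy (eV)."""
    if bandgap_ev <= 0:
        raise ValueError("bandgap must be positive")
    return 1240.0 / bandgap_ev

if __name__ == "__main__":
    examples = {
        "GaAs (1.4 eV)": 1.4,          # ~886 nm, infrared (the example in the text)
        "GaN (3.4 eV)": 3.4,           # ~365 nm, ultraviolet
        "SiC (3.3 eV, assumed)": 3.3,  # textbook value, not from the article
    }
    for name, eg in examples.items():
        print(f"{name}: {bandgap_to_wavelength_nm(eg):.0f} nm")
```

Running the sketch reproduces the 886 nm figure quoted in the text for GaAs and shows why materials with bandgaps above roughly 3 eV reach into the ultraviolet.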
Intervalley scattering is an additional scattering mechanism at large electric fields, and it is due to a shift of carriers from the lowest valley of the conduction band to the upper valleys, where the lower band curvature raises the effective mass of the electrons and lowers electron mobility. The drop in drift velocity at high electric fields due to intervalley scattering is small in comparison to the high saturation velocity that results from low optical phonon scattering. The overall result is therefore a higher saturation velocity.

Thermal properties
Silicon and other common materials have a bandgap on the order of 1 to 1.5 electronvolts (eV), which implies that such semiconductor devices can be controlled by relatively low voltages. However, it also implies that they are more readily activated by thermal energy, which interferes with their proper operation. This limits silicon-based devices to operational temperatures below about 100 °C, beyond which the uncontrolled thermal activation of the devices makes it difficult for them to operate correctly. Wide-bandgap materials typically have bandgaps on the order of 2 to 4 eV, allowing them to operate at much higher temperatures, on the order of 300 °C. This makes them highly attractive in military applications, where they have seen a fair amount of use.

Melting temperatures, thermal expansion coefficients, and thermal conductivity can be considered to be secondary properties that are essential in processing, and these properties are related to the bonding in wide-bandgap materials. Strong bonds result in higher melting temperatures and lower thermal expansion coefficients. A high Debye temperature results in a high thermal conductivity. With such thermal properties, heat is easily removed.

Applications
High-power applications
The high breakdown voltage of wide-bandgap semiconductors is a useful property in high-power applications that require large electric fields. Devices for high-power and high-temperature applications have been developed. Both gallium nitride and silicon carbide are robust materials well suited for such applications. Due to its robustness and ease of manufacture, silicon carbide semiconductors are expected to be used widely, creating simpler and higher-efficiency charging for hybrid and all-electric vehicles, reducing energy loss, constructing longer-lasting solar and wind energy power converters, and eliminating bulky grid substation transformers. Cubic boron nitride is used as well. Most of these are for specialist applications in space programmes and military systems. They have not begun to displace silicon from its leading place in the general power semiconductor market.

Light-emitting diodes
White LEDs have replaced incandescent bulbs in many situations because of their greater brightness and longer life. The next generation of DVD players (Blu-ray and HD DVD formats) use GaN-based violet lasers.

Transducers
Large piezoelectric effects allow wide-bandgap materials to be used as transducers.

High-electron-mobility transistor
Very high-speed GaN devices exploit the phenomenon of high interface-charge densities. Due to its cost, aluminium nitride is so far used mostly in military applications.

Challenges
Devices based on wide-bandgap materials are capable of switching at much higher frequencies than silicon versions, and they also tend to have much less leakage. As a result, higher voltages and more sensitive current measurements are required to properly characterize the WBG semiconductor during testing.
Wide-bandgap semiconductor power devices require multiple measurements, including on- and off-state, capacitance–voltage, and dynamic characteristics. In addition to dynamic performance, it is important to test key static parameters to avoid problems throughout the system. WBG testing may also require additional equipment to properly utilize some of the upgraded sensors. High-precision, high-capacity current sensors tend to be larger than less accurate options. As a result, they may require specialized testing structures that allow them to be used with minimal impact on circuit performance. While wide-bandgap semiconductors offer significant advantages, they pose manufacturing challenges. Producing high-quality wide-bandgap materials on a large scale can be difficult and costly. Researchers and manufacturers are working to optimize manufacturing processes to make these materials more affordable.

Important wide-bandgap semiconductors
- Aluminium nitride
- Boron nitride (h-BN and c-BN can form UV LEDs)
- Diamond
- Gallium nitride
- Silicon carbide
- Silicon dioxide

See also
- Band gap
- Direct and indirect band gaps
- Semiconductor (materials)
- Semiconductor device
- List of semiconductor materials

References
Semiconductor material types
Wide-bandgap semiconductor
[ "Chemistry" ]
2,945
[ "Semiconductor material types", "Semiconductor materials" ]
467,198
https://en.wikipedia.org/wiki/Gallium%20nitride
Gallium nitride (GaN) is a binary III/V direct bandgap semiconductor commonly used in blue light-emitting diodes since the 1990s. The compound is a very hard material that has a wurtzite crystal structure. Its wide band gap of 3.4 eV affords it special properties for applications in optoelectronics, high-power and high-frequency devices. For example, GaN is the substrate that makes violet (405 nm) laser diodes possible, without requiring nonlinear optical frequency doubling. Its sensitivity to ionizing radiation is low (like other group III nitrides), making it a suitable material for solar cell arrays for satellites. Military and space applications could also benefit, as devices have shown stability in high-radiation environments. Because GaN transistors can operate at much higher temperatures and work at much higher voltages than gallium arsenide (GaAs) transistors, they make ideal power amplifiers at microwave frequencies. In addition, GaN offers promising characteristics for THz devices. Due to high power density and voltage breakdown limits, GaN is also emerging as a promising candidate for 5G cellular base station applications. Since the early 2020s, GaN power transistors have come into increasing use in power supplies in electronic equipment, converting AC mains electricity to low-voltage DC.

Physical properties
GaN is a very hard (Knoop hardness 14.21 GPa), mechanically stable wide-bandgap semiconductor material with high heat capacity and thermal conductivity. In its pure form it resists cracking and can be deposited as a thin film on sapphire or silicon carbide, despite the mismatch in their lattice constants. GaN can be doped with silicon (Si) or with oxygen to n-type, and with magnesium (Mg) to p-type. However, the Si and Mg atoms change the way the GaN crystals grow, introducing tensile stresses and making them brittle. Gallium nitride compounds also tend to have a high dislocation density, on the order of 10^8 to 10^10 defects per square centimeter. The U.S. Army Research Laboratory (ARL) provided the first measurement of the high-field electron velocity in GaN in 1999. Scientists at ARL experimentally measured a peak steady-state velocity, with a transit time of 2.5 picoseconds, attained at an electric field of 225 kV/cm. With this information, the electron mobility was calculated, thus providing data for the design of GaN devices.

Developments
One of the earliest syntheses of gallium nitride was at the George Herbert Jones Laboratory in 1932. An early synthesis of gallium nitride was by Robert Juza and Harry Hahn in 1938. GaN with a high crystalline quality can be obtained by depositing a buffer layer at low temperatures. Such high-quality GaN led to the discovery of p-type GaN, p–n junction blue/UV-LEDs and room-temperature stimulated emission (essential for laser action). This has led to the commercialization of high-performance blue LEDs and long-lifetime violet laser diodes, and to the development of nitride-based devices such as UV detectors and high-speed field-effect transistors.

LEDs
High-brightness GaN light-emitting diodes (LEDs) completed the range of primary colors, and made possible applications such as daylight-visible full-color LED displays, white LEDs and blue laser devices. The first GaN-based high-brightness LEDs used a thin film of GaN deposited via metalorganic vapour-phase epitaxy (MOVPE) on sapphire. Other substrates used are zinc oxide, with a lattice constant mismatch of only 2%, and silicon carbide (SiC).
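To make the quoted ~2% lattice mismatch concrete, here is a minimal Python sketch. The a-axis lattice constants used (wurtzite GaN ≈ 3.189 Å, wurtzite ZnO ≈ 3.250 Å, 6H-SiC ≈ 3.081 Å) are textbook values supplied here as assumptions, not figures from the article.

```python
# Minimal sketch: percentage lattice mismatch between a film and a substrate,
# computed as (a_substrate - a_film) / a_film.
# The a-axis lattice constants below are textbook values (assumptions).

A_AXIS_ANGSTROM = {
    "GaN": 3.189,     # wurtzite GaN
    "ZnO": 3.250,     # wurtzite ZnO
    "6H-SiC": 3.081,  # one common SiC polytype
}

def mismatch_percent(film: str, substrate: str) -> float:
    a_film = A_AXIS_ANGSTROM[film]
    a_sub = A_AXIS_ANGSTROM[substrate]
    return 100.0 * (a_sub - a_film) / a_film

if __name__ == "__main__":
    for sub in ("ZnO", "6H-SiC"):
        print(f"GaN on {sub}: {mismatch_percent('GaN', sub):+.1f}%")
    # GaN on ZnO comes out near +1.9%, consistent with the ~2% quoted above.
```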
Group III nitride semiconductors are, in general, recognized as one of the most promising semiconductor families for fabricating optical devices in the visible short-wavelength and UV region.

GaN transistors and power ICs
The very high breakdown voltages, high electron mobility, and high saturation velocity of GaN have made it an ideal candidate for high-power and high-temperature microwave applications, as evidenced by its high Johnson's figure of merit. Potential markets for high-power/high-frequency devices based on GaN include microwave radio-frequency power amplifiers (e.g., those used in high-speed wireless data transmission) and high-voltage switching devices for power grids. A potential mass-market application for GaN-based RF transistors is as the microwave source for microwave ovens, replacing the magnetrons currently used. The large band gap means that the performance of GaN transistors is maintained up to higher temperatures (~400 °C) than silicon transistors (~150 °C), because it lessens the effects of thermal generation of charge carriers that are inherent to any semiconductor. The first gallium nitride metal semiconductor field-effect transistors (GaN MESFET) were experimentally demonstrated in 1993, and they are being actively developed. In 2010, the first enhancement-mode GaN transistors became generally available. Only n-channel transistors were available. These devices were designed to replace power MOSFETs in applications where switching speed or power conversion efficiency is critical. These transistors are built by growing a thin layer of GaN on top of a standard silicon wafer, often referred to as GaN-on-Si by manufacturers. This allows the FETs to maintain costs similar to silicon power MOSFETs but with the superior electrical performance of GaN. Another seemingly viable solution for realizing enhancement-mode GaN-channel HFETs is to employ a lattice-matched quaternary AlInGaN layer of acceptably low spontaneous polarization mismatch to GaN.

GaN power ICs monolithically integrate a GaN FET, GaN-based drive circuitry and circuit protection into a single surface-mount device. Integration means that the gate-drive loop has essentially zero impedance, which further improves efficiency by virtually eliminating FET turn-off losses. Academic studies into creating low-voltage GaN power ICs began at the Hong Kong University of Science and Technology (HKUST), and the first devices were demonstrated in 2015. Commercial GaN power IC production began in 2018.

CMOS logic
In 2016 the first GaN CMOS logic using PMOS and NMOS transistors was reported, with gate lengths of 0.5 μm (gate widths of the PMOS and NMOS transistors were 500 μm and 50 μm, respectively).

Applications
LEDs and lasers
GaN-based violet laser diodes are used to read Blu-ray Discs. The mixture of GaN with In (InGaN) or Al (AlGaN), with a band gap dependent on the ratio of In or Al to GaN, allows the manufacture of light-emitting diodes (LEDs) with colors that can go from red to ultraviolet.

Transistors and power ICs
GaN transistors are suitable for high-frequency, high-voltage, high-temperature and high-efficiency applications. GaN is efficient at conducting current, which ultimately means that less energy is lost as heat. GaN high-electron-mobility transistors (HEMT) have been offered commercially since 2006, and have found immediate use in various wireless infrastructure applications due to their high efficiency and high-voltage operation.
A second generation of devices with shorter gate lengths will address higher-frequency telecom and aerospace applications. GaN-based metal–oxide–semiconductor field-effect transistors (MOSFET) and metal–semiconductor field-effect transistors (MESFET) also offer advantages, including lower loss in high-power electronics, especially in automotive and electric car applications. Since 2008 these can be formed on a silicon substrate. High-voltage (800 V) Schottky barrier diodes (SBDs) have also been made.

The higher efficiency and high power density of integrated GaN power ICs allow them to reduce the size, weight and component count of applications including mobile and laptop chargers, consumer electronics, computing equipment and electric vehicles. GaN-based electronics (not pure GaN) have the potential to drastically cut energy consumption, not only in consumer applications but even for power transmission utilities.

Unlike silicon transistors, which are normally off, GaN transistors are typically depletion-mode devices (i.e. on/resistive when the gate–source voltage is zero). Several methods have been proposed to reach normally-off (or E-mode) operation, which is necessary for use in power electronics:
- the implantation of fluorine ions under the gate (the negative charge of the F-ions favors the depletion of the channel)
- the use of a MIS-type gate stack, with recess of the AlGaN
- the integration of a cascode pair consisting of a normally-on GaN transistor and a low-voltage silicon MOSFET
- the use of a p-type layer on top of the AlGaN/GaN heterojunction

Radars
GaN technology is also utilized in military electronics such as active electronically scanned array radars. Thales Group introduced the Ground Master 400 radar in 2010 utilizing GaN technology. In 2021 Thales put into operation more than 50,000 GaN transmitters on radar systems. The U.S. Army funded Lockheed Martin to incorporate GaN active-device technology into the AN/TPQ-53 radar system to replace two medium-range radar systems, the AN/TPQ-36 and the AN/TPQ-37. The AN/TPQ-53 radar system was designed to detect, classify, track, and locate enemy indirect fire systems, as well as unmanned aerial systems. The AN/TPQ-53 radar system provided enhanced performance, greater mobility, increased reliability and supportability, lower life-cycle cost, and reduced crew size compared to the AN/TPQ-36 and the AN/TPQ-37 systems. Lockheed Martin fielded other tactical operational radars with GaN technology in 2018, including the TPS-77 Multi Role Radar System deployed to Latvia and Romania. In 2019, Lockheed Martin's partner ELTA Systems Limited developed a GaN-based ELM-2084 Multi Mission Radar that was able to detect and track aircraft and ballistic targets, while providing fire control guidance for missile interception or air defense artillery. On April 8, 2020, Saab flight tested its new GaN-designed AESA X-band radar in a JAS-39 Gripen fighter. Saab already offers products with GaN-based radars, like the Giraffe radar, Erieye, GlobalEye, and Arexis EW. Saab also delivers major subsystems, assemblies and software for the AN/TPS-80 (G/ATOR). India's Defence Research and Development Organisation is developing the Virupaakhsha radar for the Sukhoi Su-30MKI based on GaN technology. The radar is a further development of the Uttam AESA radar for use on HAL Tejas, which employs GaAs technology.

Nanoscale
GaN nanotubes and nanowires are proposed for applications in nanoscale electronics, optoelectronics and biochemical-sensing applications.
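The LEDs and lasers subsection above notes that alloying GaN with In tunes the emission color. As a rough illustration, the Python sketch below linearly interpolates the InGaN bandgap between the binary endpoints and converts it to a wavelength. The InN bandgap (~0.7 eV) is a textbook value assumed here, and the substantial bowing parameter of real InGaN is deliberately neglected, so treat the output as qualitative only.

```python
# Rough sketch: In(x)Ga(1-x)N bandgap by linear interpolation (Vegard-style),
# then conversion to emission wavelength via lambda ~= 1240 / E.
# Real InGaN has a large bowing parameter, neglected here (assumption).

E_GAN_EV = 3.4   # GaN bandgap, from the article text
E_INN_EV = 0.7   # InN bandgap, textbook value (assumption)

def ingan_bandgap_ev(x_in: float) -> float:
    """Linear interpolation between GaN (x=0) and InN (x=1)."""
    if not 0.0 <= x_in <= 1.0:
        raise ValueError("indium fraction must be in [0, 1]")
    return (1.0 - x_in) * E_GAN_EV + x_in * E_INN_EV

def wavelength_nm(bandgap_ev: float) -> float:
    return 1240.0 / bandgap_ev

if __name__ == "__main__":
    for x in (0.0, 0.1, 0.2, 0.3):
        eg = ingan_bandgap_ev(x)
        print(f"x(In) = {x:.1f}: Eg = {eg:.2f} eV, lambda = {wavelength_nm(eg):.0f} nm")
```

Even this crude model reproduces the qualitative trend in the text: adding indium narrows the gap and pushes the emission from the ultraviolet toward violet and blue.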
Spintronics potential
When doped with a suitable transition metal such as manganese, GaN is a promising spintronics material (magnetic semiconductors).

Synthesis
Bulk substrates
GaN crystals can be grown from a molten Na/Ga melt held under 100 atmospheres of pressure of N2 at 750 °C. As Ga will not react with N2 below 1000 °C, the powder must be made from something more reactive, usually in one of the following ways:
2 Ga + 2 NH3 → 2 GaN + 3 H2
Ga2O3 + 2 NH3 → 2 GaN + 3 H2O
Gallium nitride can also be synthesized by injecting ammonia gas into molten gallium at normal atmospheric pressure.

Metal-organic vapour phase epitaxy
Blue, white and ultraviolet LEDs are grown on an industrial scale by metalorganic vapour-phase epitaxy (MOVPE). The precursors are ammonia with either trimethylgallium or triethylgallium, the carrier gas being nitrogen or hydrogen. Growth temperature ranges between . Introduction of trimethylaluminium and/or trimethylindium is necessary for growing quantum wells and other kinds of heterostructures.

Molecular beam epitaxy
Commercially, GaN crystals can be grown using molecular beam epitaxy or MOVPE. This process can be further modified to reduce dislocation densities. First, an ion beam is applied to the growth surface in order to create nanoscale roughness. Then, the surface is polished. This process takes place in a vacuum. Polishing methods typically employ a liquid electrolyte and UV irradiation to enable mechanical removal of a thin oxide layer from the wafer. More recent methods have been developed that utilize solid-state polymer electrolytes that are solvent-free and require no radiation before polishing.

Safety
GaN dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of gallium nitride sources (such as trimethylgallium and ammonia) and industrial hygiene monitoring studies of MOVPE sources have been reported in a 2004 review. Bulk GaN is non-toxic and biocompatible. Therefore, it may be used in the electrodes and electronics of implants in living organisms.

See also
- Schottky diode
- Semiconductor devices
- Molecular-beam epitaxy
- Epitaxy
- Lithium-ion battery

References

External links
- Ioffe data archive

Nitrides Gallium compounds Inorganic compounds III-V semiconductors Wurtzite structure type
Gallium nitride
[ "Chemistry" ]
2,794
[ "Semiconductor materials", "Inorganic compounds", "III-V semiconductors" ]
467,649
https://en.wikipedia.org/wiki/HACEK%20organisms
The HACEK organisms are a group of fastidious Gram-negative bacteria that are an unusual cause of infective endocarditis, which is an inflammation of the heart due to bacterial infection. HACEK is an abbreviation of the initials of the genera of this group of bacteria: Haemophilus, Aggregatibacter (previously Actinobacillus), Cardiobacterium, Eikenella, Kingella. The HACEK organisms are a normal part of the human microbiota, living in the oral-pharyngeal region. The bacteria were originally grouped because they were thought to be a significant cause of infective endocarditis, but recent research has shown that they are rare, responsible for only 1.4–3.0% of all cases of this disease.

Organisms
HACEK originally referred to Haemophilus parainfluenzae, Haemophilus aphrophilus, Actinobacillus actinomycetemcomitans, Cardiobacterium hominis, Eikenella corrodens, and Kingella kingae. However, taxonomic rearrangements have changed the A to Aggregatibacter species and the H to Haemophilus species to reflect the recategorization and novel identification of many of the species in these genera. Some reviews of medical literature on HACEK organisms use the older classification, but recent papers are using the new classification. A list of HACEK organisms:
- Haemophilus species
  - Haemophilus haemolyticus
  - Haemophilus influenzae: The incidence of endocarditis due to H. influenzae declined after the introduction of the Hib vaccine.
  - Haemophilus parahaemolyticus
  - Haemophilus parainfluenzae
- Aggregatibacter
  - Aggregatibacter actinomycetemcomitans (previously Actinobacillus actinomycetemcomitans)
  - Aggregatibacter aphrophilus (previously Haemophilus aphrophilus)
  - Aggregatibacter paraphrophilus (previously Haemophilus paraphrophilus)
  - Aggregatibacter segnis
- Cardiobacterium
  - Cardiobacterium hominis: This is the most common species in the genus Cardiobacterium.
  - Cardiobacterium valvarum
- Eikenella
  - Eikenella corrodens
- Kingella
  - Kingella denitrificans
  - Kingella kingae: This is the most common species in the genus Kingella.

Presentation
All of these organisms are part of the normal oropharyngeal flora, which grow slowly (up to 14 days), prefer a carbon dioxide–enriched atmosphere, and share an enhanced capacity to produce endocardial infections, especially in young children. Collectively, they account for 5–10% of cases of infective endocarditis involving native valves and are the most common Gram-negative cause of endocarditis among people who do not use drugs intravenously. They have been a frequent cause of culture-negative endocarditis. Culture-negative refers to an inability to produce a colony on regular agar plates because these bacteria are fastidious (require specific nutrients). In addition to valvular infections in the heart, they can also produce other infections, such as bacteremia, abscess, peritonitis, otitis media, conjunctivitis, pneumonia, arthritis, osteomyelitis, and periodontal infections.

Treatment
The treatment of choice for HACEK organisms in endocarditis is the third-generation cephalosporin and β-lactam antibiotic ceftriaxone. Ampicillin (a penicillin), combined with low-dose gentamicin (an aminoglycoside), is another therapeutic option.

References
Bacteria
HACEK organisms
[ "Biology" ]
819
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
467,685
https://en.wikipedia.org/wiki/Transactional%20interpretation
The transactional interpretation of quantum mechanics (TIQM) takes the wave function of the standard quantum formalism, and its complex conjugate, to be retarded (forward in time) and advanced (backward in time) waves that form a quantum interaction as a Wheeler–Feynman handshake or transaction. It was first proposed in 1986 by John G. Cramer, who argues that it helps in developing intuition for quantum processes. He also suggests that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and also resolves various quantum paradoxes. TIQM formed a minor plot point in his science fiction novel Einstein's Bridge. More recently, he has also argued TIQM to be consistent with the Afshar experiment, while claiming that the Copenhagen interpretation and the many-worlds interpretation are not.

The existence of both advanced and retarded waves as admissible solutions to Maxwell's equations was explored in the Wheeler–Feynman absorber theory. Cramer revived their idea of two waves for his transactional interpretation of quantum theory. While the ordinary Schrödinger equation does not admit advanced solutions, its relativistic version does, and these advanced solutions are the ones used by TIQM.

In TIQM, the source emits a usual (retarded) wave forward in time, but it also emits an advanced wave backward in time; furthermore, the receiver, who is later in time, also emits an advanced wave backward in time and a retarded wave forward in time. A quantum event occurs when a "handshake" exchange of advanced and retarded waves triggers the formation of a transaction in which energy, momentum, angular momentum, etc. are transferred. The quantum mechanism behind transaction formation has been demonstrated explicitly for the case of a photon transfer between atoms in Sect. 5.4 of Carver Mead's book Collective Electrodynamics. In this interpretation, the collapse of the wavefunction does not happen at any specific point in time, but is "atemporal" and occurs along the whole transaction, and the emission/absorption process is time-symmetric. The waves are seen as physically real, rather than a mere mathematical device to record the observer's knowledge as in some other interpretations of quantum mechanics. Philosopher and writer Ruth Kastner argues that the waves exist as possibilities outside of physical spacetime and that therefore it is necessary to accept such possibilities as part of reality.

Cramer has used TIQM in teaching quantum mechanics at the University of Washington in Seattle.

Advances over previous interpretations
TIQM is explicitly non-local and, as a consequence, logically consistent with counterfactual definiteness (CFD), the minimum realist assumption. As such it incorporates the non-locality demonstrated by the Bell test experiments and eliminates the observer-dependent reality that has been criticized as part of the Copenhagen interpretation. Cramer states that the key advances over Everett's relative state interpretation are that the transactional interpretation has a physical collapse and is time-symmetric. Cramer also states that TI is consistent with but not dependent upon the notion of an Einsteinian block universe. Kastner claims that by considering the product of the advanced and retarded wavefunctions, the Born rule can be explained ontologically.
The transactional interpretation is superficially similar to the two-state vector formalism (TSVF), which has its origin in work by Yakir Aharonov, Peter Bergmann and Joel Lebowitz of 1964. However, it has important differences: the TSVF lacks the confirmation wave and therefore cannot provide a physical referent for the Born rule (as TI does). Kastner has criticized some other time-symmetric interpretations, including TSVF, as making ontologically inconsistent claims.

Kastner has developed a new Relativistic Transactional Interpretation (RTI), also called the Possibilist Transactional Interpretation (PTI), in which space-time itself emerges by way of transactions. It has been argued that this relativistic transactional interpretation can provide the quantum dynamics for the causal sets program.

Debate
In 1996, Tim Maudlin proposed a thought experiment involving Wheeler's delayed choice experiment that is generally taken as a refutation of TIQM. However, Kastner showed that Maudlin's argument is not fatal to TIQM. In his book, The Quantum Handshake, Cramer has added a hierarchy to the description of pseudo-time to deal with Maudlin's objection and has pointed out that some of Maudlin's arguments are based on the inappropriate application of Heisenberg's knowledge interpretation to the transactional description. The transactional interpretation has also faced other criticisms.

See also
- Retrocausality
- Quantum entanglement
- Quantum nonlocality
- Wheeler–Feynman absorber theory

References

Further reading
- John G. Cramer, The Quantum Handshake: Entanglement, Nonlocality and Transactions, Springer Verlag, 2016.
- Ruth E. Kastner, The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility, Cambridge University Press, 2012.
- Ruth E. Kastner, Understanding Our Unseen Reality: Solving Quantum Riddles, Imperial College Press, 2015.
- Tim Maudlin, Quantum Non-Locality and Relativity, Blackwell Publishers, 2002 (discusses a gedanken experiment designed to refute the TIQM; this has been refuted in Kastner 2012, Chapter 5).
- Carver A. Mead, Collective Electrodynamics: Quantum Foundations of Electromagnetism, 2000.
- John Gribbin, Schrödinger's Kittens and the Search for Reality: solving the quantum mysteries, which has an overview of Cramer's interpretation and says that "with any luck at all it will supersede the Copenhagen interpretation as the standard way of thinking about quantum physics for the next generation of scientists."

External links
- John G. Cramer, professor emeritus of Physics at the University of Washington, presents "The Quantum Handshake Explored." YouTube video dated 1 Feb 2018.
- Pavel V. Kurakin, George G. Malinetskii, How bees can possibly explain quantum paradoxes, Automates Intelligents (February 2, 2005). (This paper describes an attempt to develop TIQM further.)
- Kastner has also applied TIQM to other quantum mechanical issues in "The Transactional Interpretation, Counterfactuals, and Weak Values in Quantum Theory" and "The Quantum Liar Experiment in the Transactional Interpretation"

Interpretations of quantum mechanics Quantum measurement Quantum field theory Theoretical physics
Transactional interpretation
[ "Physics" ]
1,351
[ "Quantum field theory", "Theoretical physics", "Quantum mechanics", "Quantum measurement", "Interpretations of quantum mechanics" ]
467,804
https://en.wikipedia.org/wiki/Partial%20discharge
In electrical engineering, partial discharge (PD) is a localized dielectric breakdown (which does not completely bridge the space between the two conductors) of a small portion of a solid or fluid electrical insulation system under high voltage (HV) stress. While a corona discharge is usually revealed by a relatively steady glow or brush discharge in air, partial discharges within solid insulation systems are not visible.

PD can occur in a gaseous, liquid, or solid insulating medium. It often starts within gas voids, such as voids in solid epoxy insulation or bubbles in transformer oil. Protracted partial discharge can erode solid insulation and eventually lead to breakdown of insulation.

Discharge mechanism
PD usually begins within voids, cracks, or inclusions within a solid dielectric, at conductor-dielectric interfaces within solid or liquid dielectrics, or in bubbles within liquid dielectrics. Since PDs are limited to only a portion of the insulation, the discharges only partially bridge the distance between electrodes. PD can also occur along the boundary between different insulating materials.

Partial discharges within an insulating material are usually initiated within gas-filled voids within the dielectric. Because the dielectric constant of the void is considerably less than that of the surrounding dielectric, the electric field across the void is significantly higher than that across an equivalent distance of dielectric. If the voltage stress across the void is increased above the corona inception voltage (CIV) for the gas within the void, PD activity will start within the void.

PD can also occur along the surface of solid insulating materials if the surface tangential electric field is high enough to cause a breakdown along the insulator surface. This phenomenon commonly manifests itself on overhead line insulators, particularly on contaminated insulators during days of high humidity. Overhead lines use air as their insulation medium.

PD equivalent circuit
The equivalent circuit of a dielectric incorporating a cavity can be modeled as a capacitive voltage divider in parallel with another capacitor. The upper capacitor of the divider represents the parallel combination of the capacitances in series with the void, and the lower capacitor represents the capacitance of the void. The parallel capacitor represents the remaining unvoided capacitance of the sample.

Partial discharge currents
Whenever partial discharge is initiated, high-frequency transient current pulses will appear and persist for nanoseconds to a microsecond, then disappear and reappear repeatedly as the voltage sinewave goes through the zero crossing. PD occurs near the positive and negative peaks of the voltage waveform. PD pulses are easy to measure using the high-frequency current transducer (HFCT) method, in which the current transducer is clamped around the case ground of the component being tested. The severity of the PD is assessed by measuring the burst interval, the time between the end of one burst and the beginning of the next. As the insulation breakdown worsens, the burst interval will shorten, because the breakdown happens at lower voltages. This burst interval will continue to shorten until a critical 2 millisecond point is reached. At this 2 ms point, the discharge occurs very close to the zero crossing, and the insulation will fail with a full-blown discharge and major failure. The HFCT method needs to be used because of the small magnitude and short duration of these PD events.
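To illustrate the burst-interval screening just described, here is a minimal Python sketch. It is not from the source: the 2 ms critical point comes from the text above, but the warning band and the synthetic timestamps are labeled assumptions for illustration only.

```python
# Minimal sketch of burst-interval severity screening, as described above.
# Input: (start, end) times in seconds for each PD burst in one capture.
# The 2 ms critical point comes from the text; the warning band is an
# illustrative assumption, not a standard threshold.

from typing import List, Tuple

CRITICAL_INTERVAL_S = 2e-3   # from the text: near-failure point
WARNING_INTERVAL_S = 10e-3   # assumed screening band for early attention

def shortest_burst_interval(bursts: List[Tuple[float, float]]) -> float:
    """Smallest gap between the end of one burst and the start of the next."""
    bursts = sorted(bursts)
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(bursts, bursts[1:])]
    if not gaps:
        raise ValueError("need at least two bursts")
    return min(gaps)

def severity(bursts: List[Tuple[float, float]]) -> str:
    gap = shortest_burst_interval(bursts)
    if gap <= CRITICAL_INTERVAL_S:
        return f"CRITICAL: {gap * 1e3:.2f} ms between bursts (near zero crossing)"
    if gap <= WARNING_INTERVAL_S:
        return f"WARNING: {gap * 1e3:.2f} ms between bursts"
    return f"OK: {gap * 1e3:.2f} ms between bursts"

if __name__ == "__main__":
    # Synthetic bursts near the peaks of a 50 Hz waveform (peaks 10 ms apart).
    capture = [(0.0040, 0.0052), (0.0138, 0.0151), (0.0240, 0.0253)]
    print(severity(capture))
```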
The HFCT method is done while the component being tested stays energized and loaded. It is completely non-intrusive. Another method of measuring these currents is to put a small current-measuring resistor in series with the sample and then view the generated voltage on an oscilloscope via a matched coaxial cable. When PD, arcing or sparking occurs, electromagnetic waves propagate away from the fault site in all directions, contact the transformer tank, and travel to earth via the ground cable, where the HFCT is located to capture any EMI or EMP within the transformer, breaker, PT, CT, HV cable, MCSG, LTC, LA, generator, large HV motors, etc. Detection of the high-frequency pulses will identify the existence of partial discharge, arcing or sparking. After PD or arcing is detected, the next step is to locate the fault area. With the acoustic emission (AE) method, four or more AE sensors are placed on the transformer shell, and the AE and HFCT wave data are collected at the same time. Bandpass filtering is used to eliminate interference from system noises.

Partial discharge degradation can be caused by various factors, including inadequate stress regulation or the presence of voids or delamination in the ground wall insulation. These issues can eventually result in machine failure.

Discharge detection and measuring systems
With partial discharge measurement, the dielectric condition of high-voltage equipment can be evaluated, and electrical treeing in the insulation can be detected and located. Partial discharge measurement can localize the damaged part of an insulated system. Data collected during partial discharge testing is compared to measurement values of the same cable gathered during the acceptance test or to factory quality control standards. This allows simple and quick classification of the dielectric condition (new, strongly aged, faulty) of the device under test, and appropriate maintenance and repair measures may be planned and organized in advance. Partial discharge measurement is applicable to cables and accessories with various insulation materials, such as polyethylene or paper-insulated lead-covered (PILC) cable. Partial discharge measurement is routinely carried out to assess the condition of the insulation system of rotating machines (motors and generators), transformers, and gas-insulated switchgear.

Partial discharge measurement system
A partial discharge measurement system basically consists of:
- a cable or other object being tested
- a coupling capacitor of low inductance design
- a high-voltage supply with low background noise
- high-voltage connections
- a high-voltage filter to reduce background noise from the power supply
- a partial discharge detector
- PC software for analysis

A partial discharge detection system for in-service, energized electric power equipment (a cable, transformer, or any MV/HV power equipment):
- ultra-high-frequency (UHF) sensor, detection bandwidth 300 MHz–1.5 GHz
- high-frequency current transformer (HFCT), bandwidth 500 kHz–50 MHz
- ultrasonic microphone with center frequency 40 kHz
- acoustic contact sensor with detection bandwidth 20 kHz–300 kHz
- TEV sensor or coupling capacitor, 3 MHz–100 MHz
- phase-resolved analysis system to compare pulse timing to the AC frequency

The principle of partial discharge measurement
A number of discharge detection schemes and partial discharge measurement methods have been invented since the importance of PD was realized in the late 20th century.
Partial discharge currents tend to be of short duration and have rise times in the nanosecond realm. On an oscilloscope, the discharges appear as evenly spaced burst events that occur at the peak of the sinewave; randomly spaced events indicate arcing or sparking. The usual way of quantifying partial discharge magnitude is in picocoulombs. The intensity of partial discharge is displayed versus time.

An automatic analysis of the reflectograms collected during the partial discharge measurement, using a method referred to as time domain reflectometry (TDR), allows insulation irregularities to be located. They are displayed in a partial discharge mapping format. A phase-related depiction of the partial discharges provides additional information, useful for the evaluation of the device under test.

Calibration setup
The actual charge change that occurs due to a PD event is not directly measurable; therefore, apparent charge is used instead. The apparent charge (q) of a PD event is the charge that, if injected between the terminals of the device under test, would change the voltage across the terminals by an amount equivalent to the PD event. This can be modeled by the equation q = C · ΔV, where C is the capacitance of the device under test and ΔV is the voltage change produced by the PD event. Apparent charge is not equal to the actual amount of changing charge at the PD site, but can be directly measured and calibrated. Apparent charge is usually expressed in picocoulombs. It is measured by calibrating the voltage of the spikes against the voltages obtained from a calibration unit discharged into the measuring instrument. The calibration unit is quite simple in operation and merely comprises a square wave generator in series with a capacitor connected across the sample. Usually these are triggered optically to enable calibration without entering a dangerous high-voltage area. Calibrators are usually disconnected during the discharge testing.

Laboratory methods
Wideband PD detection circuits
In wideband detection, the impedance usually comprises a low-Q parallel-resonant RLC circuit. This circuit tends to attenuate the exciting voltage (usually between 50 and 60 Hz) and amplify the voltage generated due to the discharges.

Tuned (narrow band) detection circuits
Differential discharge bridge methods
Acoustic and ultrasonic methods

Field testing methods
Field measurements preclude the use of a Faraday cage, and the energising supply can also be a compromise from the ideal. Field measurements are therefore prone to noise and may be consequently less sensitive. Factory-quality PD tests in the field require equipment that may not be readily available; therefore, other methods have been developed for field measurement which, while not as sensitive or accurate as standardized measurements, are substantially more convenient. By necessity, field measurements have to be quick, safe and simple if they are to be widely applied by owners and operators of MV and HV assets.

Transient Earth Voltages (TEVs) are induced voltage spikes on the surface of the surrounding metalwork. TEVs were first discovered in 1974 by Dr John Reeves of EA Technology. TEVs occur because the partial discharge creates current spikes in the conductor and hence also in the earthed metal surrounding the conductor. Dr John Reeves established that TEV signals are directly proportional to the condition of the insulation for all switchgear of the same type measured at the same point. TEV readings are measured in dBmV. TEV pulses are full of high-frequency components, and hence the earthed metalwork presents a considerable impedance to ground.
Therefore, voltage spikes are generated. These will stay on the inner surface of the surrounding metalwork (to a depth of approximately 0.5 μm in mild steel at 100 MHz) and loop around to the outer surface wherever there is an electrical discontinuity in the metalwork. There is a secondary effect whereby electromagnetic waves generated by the partial discharge also generate TEVs on the surrounding metalwork, which acts like an antenna. TEVs are a very convenient phenomenon for measuring and detecting partial discharges, as they can be detected without making an electrical connection or removing any panels. While this method may be useful to detect some issues in switchgear and surface tracking on internal components, the sensitivity is not likely to be sufficient to detect issues within solid dielectric cable systems.

Ultrasonic measurement relies on the fact that the partial discharge will emit sound waves. The emissions are "white" noise in nature and therefore produce ultrasonic structure-borne waves through the solid or liquid-filled electrical component. Using a structure-borne ultrasonic sensor on the exterior of the item under examination, internal partial discharge can be detected and located when the sensor is placed closest to the source.

HFCT method
This method is ideal for detecting and determining the severity of the PD by burst interval measurement. The closer the bursts get to the zero voltage crossing, the more severe and critical the PD fault is. Location of the fault area is accomplished using the AE method described above.

Electromagnetic field detection picks up the radio waves generated by the partial discharge. As noted before, the radio waves can generate TEVs on the surrounding metalwork. More sensitive measurement, particularly at higher voltages, can be achieved using built-in UHF antennas or external antennas mounted on insulating spacers in the surrounding metalwork.

Directional coupler detection picks up the signals emanating from a partial discharge. This method is ideal for joints and accessories, with the sensors being located on the semicon layers at the joint or accessory.

Effects of partial discharge in insulation systems
Once begun, PD causes progressive deterioration of insulating materials, ultimately leading to electrical breakdown. The effects of PD within high-voltage cables and equipment can be very serious, ultimately leading to complete failure. The cumulative effect of partial discharges within solid dielectrics is the formation of numerous, branching partially conducting discharge channels, a process called treeing. Repetitive discharge events cause irreversible mechanical and chemical deterioration of the insulating material. Damage is caused by the energy dissipated by high-energy electrons or ions, ultraviolet light from the discharges, ozone attacking the void walls, and cracking as the chemical breakdown processes liberate gases at high pressure. The chemical transformation of the dielectric also tends to increase the electrical conductivity of the dielectric material surrounding the voids. This increases the electrical stress in the (thus far) unaffected gap region, accelerating the breakdown process. A number of inorganic dielectrics, including glass, porcelain, and mica, are significantly more resistant to PD damage than organic and polymer dielectrics.

In paper-insulated high-voltage cables, partial discharges begin as small pinholes penetrating the paper windings that are adjacent to the electrical conductor or outer sheath.
As PD activity progresses, the repetitive discharges eventually cause permanent chemical changes within the affected paper layers and impregnating dielectric fluid. Over time, partially conducting carbonized trees are formed. This places greater stress on the remaining insulation, leading to further growth of the damaged region, resistive heating along the tree, and further charring (sometimes called tracking). This eventually culminates in the complete dielectric failure of the cable and, typically, an electrical explosion.

Partial discharges dissipate energy in the form of heat, sound, and light. Localized heating from PD may cause thermal degradation of the insulation. Although the level of PD heating is generally low for DC and power line frequencies, it can accelerate failures within high-voltage high-frequency equipment. The integrity of insulation in high-voltage equipment can be confirmed by monitoring the PD activities that occur through the equipment's life. To ensure supply reliability and long-term operational sustainability, PD in high-voltage electrical equipment should be monitored closely, with early warning signals for inspection and maintenance.

PD can usually be prevented through careful design and material selection. In critical high-voltage equipment, the integrity of the insulation is confirmed using PD detection equipment during the manufacturing stage as well as periodically through the equipment's useful life. PD prevention and detection are essential to ensure reliable, long-term operation of high-voltage equipment used by electric power utilities.

Monitoring partial discharge events in transformers and reactors
Utilizing UHF couplers and sensors, partial discharge signals are detected and carried to a master control unit, where a filtering process is applied to reject interference. The amplitude and frequency of the UHF partial discharge pulses are digitized, analyzed and processed in order to generate an appropriate partial discharge data output and a supervisory control and data acquisition (SCADA) alarm. Depending on the provider of the system, the partial discharge outputs are accessible through a local area network, via modem, or even via a web-based viewer. The ability to differentiate between various types of partial discharge and accurately locate them is crucial for effective PD testing and monitoring. This capability enables corrective repairs to be performed during planned outages, preventing failures that often lead to expensive outages and associated downtime or production loss.
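The phase-resolved analysis mentioned earlier compares pulse timing to the AC cycle. Below is a minimal Python sketch of that idea, an illustrative reduction rather than any vendor's algorithm: it folds pulse timestamps onto the power-frequency phase and bins them, so PD clustered near the voltage peaks can be distinguished from randomly spaced arcing. The synthetic timestamps and bin width are assumptions.

```python
# Minimal sketch of phase-resolved PD (PRPD) binning, as described above.
# Folds each pulse timestamp onto the 0-360 degree phase of the AC cycle
# and histograms the result. Illustrative only; real PRPD systems also
# record pulse magnitude and accumulate many cycles.

from collections import Counter

def prpd_histogram(pulse_times_s, mains_hz=50.0, bin_deg=30):
    """Return {phase_bin_start_deg: pulse_count} for the given pulse times."""
    period = 1.0 / mains_hz
    counts = Counter()
    for t in pulse_times_s:
        phase_deg = (t % period) / period * 360.0
        counts[int(phase_deg // bin_deg) * bin_deg] += 1
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    # Synthetic pulses clustered near the 90 and 270 degree voltage peaks.
    pulses = [0.0048, 0.0051, 0.0149, 0.0152, 0.0248, 0.0351]
    for start, n in prpd_histogram(pulses).items():
        print(f"{start:3d}-{start + 30:3d} deg: {'*' * n}")
```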
International standards and informative guides
- IEC 60060-2:1989 High-voltage test techniques – Part 2: Measuring systems
- IEC 60270:2000/BS EN 60270:2001 "High-Voltage Test Techniques – Partial Discharge Measurements"
- IEC 61934:2006 "Electrical insulating materials and systems - Electrical measurement of PD under short rise time and repetitive voltage impulses"
- IEC 60664-4:2007 "Insulation coordination for equipment within low-voltage systems – Part 4: Consideration of high-frequency voltage stress"
- IEC 60034-27:2007 "Rotating electrical machines – Off-line partial discharge measurements on the stator winding insulation of rotating electrical machines"
- IEEE Std 436-1991 (R2007) "IEEE Guide for Making Corona (Partial Discharge) Measurements on Electronics Transformers"
- IEEE 1434-2000 "IEEE Trial-Use Guide to the Measurement of Partial Discharges in Rotating Machinery"
- IEEE 400-2001 "IEEE Guide for Field Testing and Evaluation of the Insulation of Shielded Power Cable Systems"
- PD IEC/TS 62478:2016 "High-Voltage Test Techniques – Measurement of partial discharges by electromagnetic and acoustic methods"

See also
- Condition-based maintenance
- Condition monitoring
- Dissolved gas analysis
- Electric generator
- Electric motor
- Electric power distribution
- Electric power transmission
- Electrical substation
- Electrical treeing
- Switchgear
- Transformer
- Electrostatic discharge
- Electrical measurements

References

Bibliography
- High Voltage Engineering Fundamentals, E. Kuffel, W. S. Zaengl, pub. Pergamon Press. First edition, 1992
- Engineering Dielectrics, Volume IIA, Electrical Properties of Solid Insulating Materials: Molecular Structure and Electrical Behavior, R. Bartnikas, R. M. Eichhorn, ASTM Special Technical Publication 783, ASTM, 1982
- Engineering Dielectrics, Volume I, Corona Measurement and Interpretation, R. Bartnikas, E. J. McMahon, ASTM Special Technical Publication 669, ASTM, 1979
- Electricity Today, May 2009, pages 28–29
- Pommerenke D., Strehl T., Heinrich R., Kalkner W., Schmidt F., Weißenberg W.: Discrimination between Internal PD and other Pulses using Directional Coupling Sensors on High Voltage Cable Systems, IEEE Transactions on Dielectrics and Electrical Insulation, Vol. 6, No. 6, December 1999, pp. 814–824

External links
- What is Partial Discharge?
- What is Partial Discharge (PD)?
- Measurement and Analysis of Partial Discharge on Typical Defects in GIS
- papers and resources on partial discharge

Electric charge Electrical breakdown Sources of electromagnetic interference
Partial discharge
[ "Physics", "Mathematics" ]
3,682
[ "Physical phenomena", "Physical quantities", "Electric charge", "Quantity", "Electrical phenomena", "Electrical breakdown", "Wikipedia categories named after physical quantities" ]
467,830
https://en.wikipedia.org/wiki/Curveball
In baseball and softball, the curveball is a type of pitch thrown with a characteristic grip and hand movement that imparts forward spin to the ball, causing it to dive as it approaches the plate. Varieties of curveball include the 12–6 curveball, power curveball, and the knuckle curve. Its close relatives are the slider and the slurve. The "curve" of the ball varies from pitcher to pitcher. The expression "to throw a curveball" essentially means to introduce an unexpected deviation from what came before.

Grip and action
The curveball is typically gripped in a manner similar to holding a cup or glass. The pitcher positions the middle finger along and parallel to one of the ball's long seams, while the thumb is placed on the seam opposite, forming a "C shape" when viewed from above, with the horseshoe-shaped seam facing inward toward the palm. The index finger is aligned alongside the middle finger, while the remaining two fingers are folded toward the palm, with the knuckle of the ring finger resting against the leather. Some pitchers may extend these two fingers away from the ball to prevent interference during the throwing motion. The grip and throwing mechanics of the curveball closely resemble those of the slider.

The delivery of a curveball is entirely different from that of most other pitches. The pitcher at the top of the throwing arc will snap the arm and wrist in a downward motion. The ball first leaves contact with the thumb and tumbles over the index finger, thus imparting the forward or "top-spin" characteristic of a curveball. The result is the exact opposite of the four-seam fastball's backspin: all four seams rotate in the direction of the flight path with forward spin, with the axis of rotation perpendicular to the intended flight path, much like a reel mower or a bowling ball. The amount of break on the ball depends on how hard the pitcher can snap the throw off, or how much forward spin can be put on the ball. The harder the snap, the more the pitch will break. Curveballs primarily break downwards, but can also break toward the pitcher's off hand to varying degrees. Unlike the fastball, the apex of the ball's flight path arc does not necessarily need to occur at the pitcher's release point, and often peaks shortly afterwards.

Curveballs are thrown with considerably less velocity than fastballs, because of both the unnatural delivery of the ball and the general rule that pitches thrown with less velocity will break more. A typical curveball at the major collegiate level and above will average between 65 and 80 mph, with the average MLB curve at 77 mph.

From a hitter's perspective, a curveball initially appears to travel toward a specific location, often high in the strike zone, before rapidly dropping as it approaches the plate. The most effective curveballs begin breaking at the apex of their flight path and continue to break increasingly sharply as they approach and pass through the strike zone. A curveball that lacks sufficient spin will fail to break significantly and is commonly referred to as a "hanging curve." These pitches are particularly disadvantageous for pitchers, as their low velocity and minimal movement often leave them high in the strike zone, making them easy for hitters to time and drive with power.

The curveball is a popular and effective pitch in professional baseball, but it is not particularly widespread in leagues with players younger than college level.
This is primarily out of regard for the safety of the pitcher rather than the pitch's difficulty, though it is widely considered difficult to learn, as it requires some degree of mastery and the ability to pinpoint the thrown ball's location. There is generally a greater chance of throwing wild pitches when throwing the curveball. When thrown correctly, it can break from 7 to as much as 20 inches in comparison to the same pitcher's fastball. Safety Due to the unnatural motion required to throw it, the curveball is considered a more advanced pitch and poses an inherent risk of injury to a pitcher's elbow and shoulder. There has been controversy, as reported in The New York Times on March 12, 2012, about whether curveballs alone are responsible for injuries in young pitchers or whether the number of pitches thrown is the predisposing factor. In theory, allowing time for the cartilage and tendons of the arm to fully develop would protect against injuries. While acquisition of proper form might be protective, physician James Andrews is quoted in the article as stating that in many children, insufficient neuromuscular control, lack of proper mechanics, and fatigue make maintenance of proper form unlikely. The parts of the arm most commonly injured by the curveball are the ligaments in the elbow, the biceps, and the forearm muscles. Major elbow injury requires repair through elbow ligament reconstruction, or Tommy John surgery. Variations Curveballs have a variety of trajectories and breaks among pitchers. This chiefly has to do with the arm slot and release point of a given pitcher, which is in turn governed by how comfortable the pitcher is throwing the overhand curveball. Pitchers who can throw a curveball completely overhanded, with the arm slot more or less vertical, will have a curveball that breaks straight downwards. This is called a 12–6 curveball, as the break of the pitch is on a straight path downwards like the hands of a clock at 12 and 6. The axis of rotation of a 12–6 curve is parallel with the level ground and perpendicular to its flight path. Pitchers throwing their curveballs with the arm slot at an angle will throw a curveball that breaks down and toward the pitcher's off-hand. In the most extreme cases, the curve will break very wide laterally. Because the slider and the curveball share nearly the same grip and have the same unique throwing motions, this curveball breaks much like a slider, and is colloquially termed a "slurve". The axis of rotation on a slurve will still be more or less perpendicular to the flight path of the ball; unlike on a 12–6 curve, however, the axis of rotation will not be parallel to the level ground. With some pitchers, the difference between the curveball and other pitches such as the slider and slurve may be difficult to detect or even describe. A less common term for this type of curveball is a 1–7 (outdrop, outcurve, dropping roundhouse) or 2–8 (sweeping roundhouse curveball). A curveball that spins on a vertical axis perpendicular to its flight path, producing complete side spin—3–9 for a right-handed pitcher or 9–3 for a left-handed pitcher—is commonly referred to as a sweeping curveball, flat curveball, or frisbee curveball. While this pitch still drops due to gravity, the absence of significant topspin results in less vertical movement compared to other curveballs, such as the 12–6, 1–7/11–5, or 2–8/10–4 varieties.
Side spin typically occurs when a pitcher throws with a sidearm or low three-quarter arm angle, though it can also result from a higher arm slot if the pitcher twists their hand, causing the fingers to move around the side of the ball rather than over the top. This twisting motion is believed to increase the risk of arm injuries, particularly near the elbow. By contrast, a slider’s spin axis is nearly parallel to the ball's flight path, similar to the rotation of a football or bullet, but slightly tilted upward toward 12 o’clock. When the spin axis shifts to 1 o’clock or 2 o’clock, the pitch becomes a slurve. A slurve often occurs when a pitcher applies excessive force to a curveball with insufficient finesse, resulting in a slight pronation at the release point rather than a full supination. Alternatively, a slurve can develop from over-supination when throwing a slider, leading to what is sometimes referred to as a "slurvy slider." A slurvy slider thrown with the same velocity as a power slider (typically 5–8 mph slower than a fastball) may exhibit greater lateral break. Physics In general terms, the Magnus effect describes the physics that makes a curveball curve. A fastball travels through the air with backspin, which creates a higher pressure zone in the air ahead of and under the baseball. The baseball's raised seams augment the ball's ability to develop a boundary layer and therefore a greater differential of pressure between the upper and lower zones. The effect of gravity is partially counteracted as the ball rides on and into increased pressure. Thus the fastball falls less than a ball thrown without spin (neglecting knuckleball effects) during the 60 feet 6 inches it travels to home plate. On the other hand, a curveball, thrown with topspin, creates a higher pressure zone on top of the ball, which deflects the ball downward in flight. Instead of counteracting gravity, the curveball adds additional downward force, thereby giving the ball an exaggerated drop in flight.
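The pressure-differential picture above can be made concrete with a small numerical sketch. The script below is a rough, illustrative model rather than a validated simulation: it integrates a point mass under gravity, quadratic drag, and a Magnus force taken as proportional to spin axis × velocity. The drag coefficient, lift coefficient, release height, and unit spin axis are all assumed placeholder values, with the lift coefficient standing in for the real dependence on spin rate.

```python
import numpy as np

# Illustrative constants; Cd and Cl are assumed values, not measurements.
m = 0.145            # baseball mass, kg
r = 0.0366           # baseball radius, m
A = np.pi * r ** 2   # cross-sectional area, m^2
rho = 1.2            # air density, kg/m^3
Cd = 0.35            # drag coefficient (assumed)
Cl = 0.15            # Magnus (lift) coefficient (assumed)
g = np.array([0.0, 0.0, -9.81])

def accel(v, spin_axis):
    """Acceleration from gravity, quadratic drag, and the Magnus force."""
    speed = np.linalg.norm(v)
    drag = -0.5 * rho * Cd * A * speed * v / m
    # Magnus force acts along spin_axis x velocity; for a ball moving in +y,
    # topspin corresponds to a spin axis along -x, which pushes the ball down.
    magnus = 0.5 * rho * Cl * A * speed * np.cross(spin_axis, v) / m
    return g + drag + magnus

def height_at_plate(spin_axis, v0=34.4, dt=0.001):
    """Euler-integrate a pitch over the 18.44 m (60 ft 6 in) to the plate."""
    p = np.array([0.0, 0.0, 1.8])   # release point ~1.8 m above the ground
    v = np.array([0.0, v0, 0.0])    # ~77 mph toward home plate
    while p[1] < 18.44:
        v = v + accel(v, spin_axis) * dt
        p = p + v * dt
    return p[2]

topspin = height_at_plate(np.array([-1.0, 0.0, 0.0]))  # curveball-like topspin
no_spin = height_at_plate(np.array([0.0, 0.0, 0.0]))   # spinless reference
print(f"extra drop from topspin: {(no_spin - topspin) * 39.37:.1f} inches")
```

With these assumed coefficients the topspin pitch crosses the plate well over a foot lower than the spinless reference, on the same order as the breaks quoted earlier.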
Real or illusion? There was once a debate on whether a curveball actually curves or is an optical illusion. In 1949, Ralph B. Lightfoot, an aeronautical engineer at Sikorsky Aircraft, used wind tunnel tests to prove that a curveball curves. On whether a curveball is caused by an illusion, Baseball Hall of Fame pitcher Dizzy Dean has been quoted in a number of variations on this basic premise: "Stand behind a tree 60 feet away, and I will whomp you with an optical illusion!" However, an optical illusion caused by the ball's spin may play an important part in what makes curveballs difficult to hit. The curveball's trajectory is smooth, but the batter perceives a sudden, dramatic change in the ball's direction. When an object that is spinning and moving through space is viewed directly, the overall motion is interpreted correctly by the brain. However, as it enters the peripheral vision, the internal spinning motion distorts how the overall motion is perceived. A curveball's trajectory begins in the center of the batter's vision, but overlaps with peripheral vision as it approaches the plate, which may explain the suddenness of the break perceived by the batter. A peer-reviewed article on this hypothesis was published in 2010. Nicknames Popular nicknames for the curveball include "the bender" and "the hook" (both describing the trajectory of the pitch), as well as "the yakker" and "Uncle Charlie". New York Mets pitcher Dwight Gooden threw a curve so deadly that it was nicknamed "Lord Charles", and the great hitter Bill Madlock called it "the yellow hammer", apparently because it came down like a hammer and was too yellow to get hit by a bat. Because catchers frequently use two fingers to signal for a curve, the pitch is also referred to as "the deuce" or "number two". History Candy Cummings, a star pitcher in the 1860s and 1870s, is widely credited with inventing the curveball. In his biography of Cummings, Stephen Katz provides evidence supporting this claim. Several other pitchers of Cummings' era claimed to have invented the curveball. One was Fred Goldsmith. Goldsmith maintained that he gave a demonstration of the pitch on August 16, 1870, at the Capitoline Grounds in Brooklyn, New York, and that renowned sportswriter Henry Chadwick had covered it in the Brooklyn Eagle on August 17, 1870. However, Stephen Katz, in his biography of Cummings, shows that Goldsmith's claim was not credible, and that Goldsmith's reference to an article by Chadwick in the Brooklyn Eagle was likely fabricated. Other claimants to invention of the curveball are shown by Katz to have picked up the curveball only after Cummings, or not to have been pitching curveballs at all. In 1876, the first known collegiate baseball player to perfect the curveball was Clarence Emir Allen of Western Reserve College (now known as Case Western Reserve University), where he never lost a game. Both Allen and teammate pitcher John P. Barden became famous for employing the curve in the late 1870s. In the early 1880s, Clinton Scollard (1860–1932), a pitcher from Hamilton College in New York, became famous for his curve ball and later earned fame as a prolific American poet. In 1885, St. Nicholas, a children's magazine, featured a story entitled "How Science Won the Game". It told of how a boy pitcher mastered the curveball to defeat the opposing batters. The New York Clipper reported, of a September 26, 1863, game at Princeton University (then the College of New Jersey), that F. P. Henry's "slow pitching with a great twist to the ball achieved a victory over fast pitching." However, Katz, in his biography of Cummings, explains that Henry was not actually pitching curveballs. Harvard president Charles Eliot was among those opposed to the curve, claiming it was a dishonest practice unworthy of Harvard students. At an athletics conference at Yale University in 1884 a speaker (thought to be from Harvard, likely Charles Eliot Norton, a cousin of the Harvard president) was reported to have stated: "For the pitcher, instead of delivering the ball to the batter in an honest, straightforward way, that the latter may exert his strength to the best advantage in knocking it, now uses every effort to deceive him by curving—I think that is the word—the ball. And this is looked upon as the last triumph of athletic science and skill. I tell you it is time to call halt! when the boasted progress in athletics is in the direction of fraud and deceit." In the past, major league pitchers Tommy Bridges, Bob Feller, Virgil Trucks, Herb Score, Camilo Pascual, Sandy Koufax, Bert Blyleven, and the aforementioned Dwight Gooden were regarded as having outstanding curveballs. See also References External links Version 1: Candy Cummings was the inventor of the curveball Version 2: Fred Goldsmith: co-inventor of the curveball or pitcher who gave first recorded demonstration of the curveball?
Aerodynamics of a Curveball in Navier-Stokes Flow Trajectory of a Moving Curveball in Viscid Flow How to Throw a Curveball – wikiHow page outlining technique Aerodynamics Baseball pitches Baseball plays Softball pitches
Curveball
[ "Chemistry", "Engineering" ]
2,952
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
467,832
https://en.wikipedia.org/wiki/Electrophile
In chemistry, an electrophile is a chemical species that forms bonds with nucleophiles by accepting an electron pair. Because electrophiles accept electrons, they are Lewis acids. Most electrophiles are positively charged, have an atom that carries a partial positive charge, or have an atom that does not have an octet of electrons. Electrophiles mainly interact with nucleophiles through addition and substitution reactions. Frequently seen electrophiles in organic syntheses include cations such as H+ and NO+, polarized neutral molecules such as HCl, alkyl halides, acyl halides, and carbonyl compounds, polarizable neutral molecules such as Cl2 and Br2, oxidizing agents such as organic peracids, chemical species that do not satisfy the octet rule such as carbenes and radicals, and some Lewis acids such as BH3 and DIBAL. Organic chemistry Addition of halogens These occur between alkenes and electrophiles, often halogens as in halogen addition reactions. Common reactions include the use of bromine water to titrate against a sample to deduce the number of double bonds present. For example, ethene + bromine → 1,2-dibromoethane: C2H4 + Br2 → BrCH2CH2Br This takes the form of three main steps shown below: Forming of a π-complex The electrophilic Br-Br molecule interacts with the electron-rich alkene molecule to form a π-complex 1. Forming of a three-membered bromonium ion The alkene acts as an electron donor and bromine as an electrophile. The three-membered bromonium ion 2, consisting of two carbon atoms and a bromine atom, forms with release of Br−. Attacking of bromide ion The bromonium ion is opened by the attack of Br− from the back side. This yields the vicinal dibromide with an antiperiplanar configuration. When other nucleophiles such as water or alcohol are present, these may attack 2 to give an alcohol or an ether. This process is called the AdE2 mechanism ("addition, electrophilic, second-order"). Iodine (I2), chlorine (Cl2), the sulfenyl ion (RS+), the mercury cation (Hg2+), and dichlorocarbene (:CCl2) also react through similar pathways. The direct conversion of 1 to 3 can occur when Br− is present in large excess in the reaction medium. A β-bromo carbenium ion intermediate may predominate instead of 3 if the alkene has a cation-stabilizing substituent like a phenyl group. The isolation of a bromonium ion of type 2 has been reported. Addition of hydrogen halides Hydrogen halides such as hydrogen chloride (HCl) add to alkenes to give alkyl halides in hydrohalogenation. For example, the reaction of HCl with ethylene furnishes chloroethane. Unlike the halogen addition above, the reaction proceeds through a cation intermediate. An example is shown below: A proton (H+) adds (by working as an electrophile) to one of the carbon atoms on the alkene to form cation 1. A chloride ion (Cl−) combines with the cation 1 to form the adducts 2 and 3. The stereoselectivity of the product, that is, from which side Cl− attacks, depends on the type of alkene and the reaction conditions. Which of the two carbon atoms is attacked by H+ is usually decided by Markovnikov's rule: H+ attacks the carbon atom that carries fewer substituents, so that the more stabilized carbocation (with the more stabilizing substituents) forms. This is another example of an AdE2 mechanism. Hydrogen fluoride (HF) and hydrogen iodide (HI) react with alkenes in a similar manner, giving Markovnikov-type products.
Hydrogen bromide (HBr) also takes this pathway, but sometimes a radical process competes and a mixture of isomers may form. Although introductory textbooks seldom mention this alternative, the AdE2 mechanism is generally competitive with the AdE3 mechanism (described in more detail for alkynes, below), in which transfer of the proton and nucleophilic addition occur in a concerted manner. The extent to which each pathway contributes depends on several factors, such as the nature of the solvent (e.g., polarity), the nucleophilicity of the halide ion, the stability of the carbocation, and steric effects. As brief examples, the formation of a sterically unencumbered, stabilized carbocation favors the AdE2 pathway, while a more nucleophilic bromide ion favors the AdE3 pathway to a greater extent compared to reactions involving the chloride ion. In the case of dialkyl-substituted alkynes (e.g., 3-hexyne), the intermediate vinyl cation that would result from this process is highly unstable. In such cases, the simultaneous protonation (by HCl) and attack of the alkyne by the nucleophile (Cl−) is believed to take place. This mechanistic pathway is known by the Ingold label AdE3 ("addition, electrophilic, third-order"). Because the simultaneous collision of three chemical species in a reactive orientation is improbable, the termolecular transition state is believed to be reached when the nucleophile attacks a reversibly-formed weak association of the alkyne and HCl. Such a mechanism is consistent with the predominantly anti addition (>15:1 anti:syn for the example shown) of the hydrochlorination product and the termolecular rate law, Rate = k[alkyne][HCl]². In support of the proposed alkyne-HCl association, a T-shaped complex of an alkyne and HCl has been characterized crystallographically. In contrast, phenylpropyne reacts by the AdE2ip ("addition, electrophilic, second-order, ion pair") mechanism to give predominantly the syn product (~10:1 syn:anti). In this case, the intermediate vinyl cation is formed by addition of HCl because it is resonance-stabilized by the phenyl group. Nevertheless, the lifetime of this high-energy species is short, and the resulting vinyl cation-chloride anion ion pair immediately collapses, before the chloride ion has a chance to leave the solvent shell, to give the vinyl chloride. The proximity of the anion to the side of the vinyl cation where the proton was added is used to rationalize the observed predominance of syn addition. Hydration One of the more complex hydration reactions utilises sulfuric acid as a catalyst. This reaction occurs in a similar way to the addition reaction but has an extra step in which the OSO3H group is replaced by an OH group, forming an alcohol: C2H4 + H2O → C2H5OH As can be seen, the H2SO4 does take part in the overall reaction; however, it remains unchanged and so is classified as a catalyst. This is the reaction in more detail: The H–OSO3H molecule has a δ+ charge on the initial H atom. This is attracted to and reacts with the double bond in the same way as before. The remaining (negatively charged) −OSO3H ion then attaches to the carbocation, forming ethyl hydrogensulphate (upper way on the above scheme). When water (H2O) is added and the mixture heated, ethanol (C2H5OH) is produced. The "spare" hydrogen atom from the water goes into "replacing" the "lost" hydrogen and, thus, reproduces sulfuric acid. Another pathway, in which a water molecule combines directly with the intermediate carbocation (lower way), is also possible.
This pathway becomes predominant when aqueous sulfuric acid is used. Overall, this process adds a molecule of water to a molecule of ethene. This is an important reaction in industry, as it produces ethanol, whose purposes include fuels and starting material for other chemicals. Chiral derivatives Many electrophiles are chiral and optically stable. Typically chiral electrophiles are also optically pure. One such reagent is the fructose-derived organocatalyst used in the Shi epoxidation. The catalyst can accomplish highly enantioselective epoxidations of trans-disubstituted and trisubstituted alkenes. The Shi catalyst, a ketone, is oxidized by stoichiometric oxone to the active dioxirane form before proceeding in the catalytic cycle. Oxaziridines such as chiral N-sulfonyloxaziridines effect enantioselective ketone alpha oxidation en route to the AB-ring segments of various natural products, including γ-rhodomycionone and α-citromycinone. Polymer-bound chiral selenium electrophiles effect asymmetric selenenylation reactions. The reagents are aryl selenenyl bromides, and they were first developed for solution phase chemistry and then modified for solid phase bead attachment via an aryloxy moiety. The solid-phase reagents were applied toward the selenenylation of various alkenes with good enantioselectivities. The products can be cleaved from the solid support using organotin hydride reducing agents. Solid-supported reagents offer advantages over solution phase chemistry due to the ease of workup and purification. Electrophilicity scale Several methods exist to rank electrophiles in order of reactivity; one of them, devised by Robert Parr, is the electrophilicity index ω, given as ω = χ²/(2η), with χ the electronegativity and η the chemical hardness. This equation is related to the classical equation for electrical power, P = V²/R, where R is the resistance (in ohms, Ω) and V is the voltage. In this sense the electrophilicity index is a kind of electrophilic power. Correlations have been found between the electrophilicity of various chemical compounds and reaction rates in biochemical systems and such phenomena as allergic contact dermatitis. An electrophilicity index also exists for free radicals. Strongly electrophilic radicals such as the halogens react with electron-rich reaction sites, and strongly nucleophilic radicals such as the 2-hydroxypropyl-2-yl and tert-butyl radical react with a preference for electron-poor reaction sites.
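As a numerical illustration of the index just defined, the sketch below evaluates ω from finite-difference estimates of electronegativity and hardness, χ ≈ (I + A)/2 and η ≈ I − A, where I is the ionization energy and A the electron affinity. Two hedges apply: conventions for η differ by a factor of two across the literature, and the I/A values below are approximate atomic figures included purely for illustration.

```python
# Toy calculator for Parr's electrophilicity index, omega = chi**2 / (2 * eta).
# chi and eta are estimated by finite differences from ionization energy I
# and electron affinity A (one common convention; others use eta = (I - A)/2).

def electrophilicity(I, A):
    chi = (I + A) / 2.0   # Mulliken electronegativity, eV
    eta = I - A           # chemical hardness, eV
    return chi ** 2 / (2.0 * eta)

# Approximate atomic ionization energies and electron affinities in eV,
# quoted for illustration only.
species = {"F": (17.42, 3.40), "Cl": (12.97, 3.61), "Br": (11.81, 3.36)}

for name, (I, A) in sorted(species.items(),
                           key=lambda kv: -electrophilicity(*kv[1])):
    print(f"{name:2s}  omega = {electrophilicity(I, A):.2f} eV")
```

Under these assumptions the ranking comes out F > Cl > Br, in line with the qualitative trend of halogen electrophilicity.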
Superelectrophiles Superelectrophiles are defined as cationic electrophilic reagents with greatly enhanced reactivities in the presence of superacids. These compounds were first described by George A. Olah. A superelectrophile forms as a doubly electron-deficient species by protosolvation of a cationic electrophile. As observed by Olah, a mixture of acetic acid and boron trifluoride is able to remove a hydride ion from isobutane when combined with hydrofluoric acid, via the formation of a superacid from BF3 and HF. The responsible reactive intermediate is the [CH3CO2H3]2+ dication. Likewise, methane can be nitrated to nitromethane with nitronium tetrafluoroborate (NO2BF4) only in the presence of a strong acid like fluorosulfuric acid, via the protonated nitronium dication. In gitionic (gitonic) superelectrophiles, charged centers are separated by no more than one atom; for example, the protonitronium ion O=N+=O+—H (a protonated nitronium ion). In distonic superelectrophiles, they are separated by 2 or more atoms; for example, in the fluorination reagent F-TEDA-BF4. See also Nucleophile TRPA1, the sensory neural target for electrophilic irritants in mammals. References Physical organic chemistry
Electrophile
[ "Chemistry" ]
2,607
[ "Physical organic chemistry" ]
467,899
https://en.wikipedia.org/wiki/Systems%20biology
Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research. Particularly from the year 2000 onwards, the concept has been used widely in biology in a variety of contexts. The Human Genome Project is an example of applied systems thinking in biology which has led to new, collaborative ways of working on problems in the biological field of genetics. One of the aims of systems biology is to model and discover emergent properties, properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. These typically involve metabolic networks or cell signaling networks. Overview Systems biology can be considered from a number of different aspects. As a field of study, particularly, the study of the interactions between the components of biological systems, and how these interactions give rise to the function and behavior of that system (for example, the enzymes and metabolites in a metabolic pathway or the heart beats). As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble) As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models. As the application of dynamical systems theory to molecular biology. Indeed, the focus on the dynamics of the studied systems is the main conceptual difference between systems biology and bioinformatics. As a socioscientific phenomenon defined by the strategy of pursuing integration of complex data about the interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel. 
History Although the concept of a systems view of cellular function has been well understood since at least the 1930s, technological limitations made it difficult to make system-wide measurements. The advent of microarray technology in the 1990s opened up an entirely new vista for studying cells at the systems level. In 2000, the Institute for Systems Biology was established in Seattle in an effort to lure "computational" people, who it was felt were not attracted to the academic settings of the university. The institute did not have a clear definition of what the field actually was: roughly, bringing together people from diverse fields to use computers to holistically study biology in new ways. A Department of Systems Biology at Harvard Medical School was launched in 2003. In 2006 it was predicted that the buzz generated by the "very fashionable" new concept would cause all the major universities to need a systems biology department, and thus that there would be careers available for graduates with a modicum of ability in computer programming and biology. In 2006 the National Science Foundation put forward a challenge to build a mathematical model of the whole cell. In 2012 the first whole-cell model of Mycoplasma genitalium was achieved by the Covert Laboratory at Stanford University. The whole-cell model is able to predict the viability of M. genitalium cells in response to genetic mutations. An earlier precursor of systems biology, as a distinct discipline, may have been by systems theorist Mihajlo Mesarovic in 1966 with an international symposium at the Case Institute of Technology in Cleveland, Ohio, titled Systems Theory and Biology. Mesarovic predicted that perhaps in the future there would be such a thing as "systems biology". Other early precursors that focused on the view that biology should be analyzed as a system, rather than a simple collection of parts, were Metabolic Control Analysis, developed by Henrik Kacser and Jim Burns (and later thoroughly revised) and by Reinhart Heinrich and Tom Rapoport, and Biochemical Systems Theory, developed by Michael Savageau. According to Robert Rosen in the 1960s, holistic biology had become passé by the early 20th century, as more empirical science dominated by molecular chemistry had become popular. Echoing him forty years later, Kling wrote in 2006 that the success of molecular biology throughout the 20th century had suppressed holistic computational methods. By 2011 the National Institutes of Health had made grant money available to support over ten systems biology centers in the United States, but by 2012 Hunter wrote that systems biology still had some way to go to achieve its full potential. Nonetheless, proponents hoped that it might one day prove more useful in the future. The international Physiome project has become an important milestone in the development of systems biology.
(i.e., telomere length variation); epigenomics/epigenetics, organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (i.e., DNA methylation, histone acetylation and deacetylation, etc.); transcriptomics, organismal, tissue or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression; interferomics, organismal, tissue, or cell-level transcript correcting factors (i.e., RNA interference); proteomics, organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry). Sub-disciplines include phosphoproteomics, glycoproteomics and other methods to detect chemically modified proteins; glycomics, organismal, tissue, or cell-level measurements of carbohydrates; lipidomics, organismal, tissue, or cell-level measurements of lipids. The molecular interactions within the cell are also studied; this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules. Neuroelectrodynamics, where the computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms; and fluxomics, measurements of the rates of metabolic reactions in a biological system (cell, tissue, or organism). In approaching a systems biology problem there are two main approaches: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network. Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Mechanobiology, forces and physical properties at all scales, their interplay with other regulatory mechanisms; biosemiotics, analysis of the system of sign relations of an organism or other biosystems; physiomics, a systematic study of the physiome in biology. Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study (tumorigenesis and treatment of cancer). It works with specific data (patient samples, high-throughput data with particular attention to characterizing the cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based gene knockdown high-throughput screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is the ability to better diagnose cancer, classify it, and better predict the outcome of a suggested treatment, which is a basis for personalized cancer medicine and, in a more distant perspective, a virtual cancer patient. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours.
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis). Bioinformatics and data analysis Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; network-based approaches for analyzing high-dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also includes identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed. Creating biological models Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolite interactions within it. After determining all of the interactions, mass-action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations. These parameter values will be the various kinetic constants required to fully describe the model. This model determines the behavior of species in biological systems and brings new insight into the specific activities of those systems. Sometimes it is not possible to gather all the reaction rates of a system. Unknown reaction rates can be determined by simulating a model of known parameters against the target behavior, which provides possible parameter values. The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict metabolic phenotypes using genome-scale models. One of these methods is the flux balance analysis (FBA) approach, by which one can study biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing an objective function of interest (e.g. maximizing biomass production to predict growth).
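As a minimal illustration of the workflow just described, the sketch below encodes a hypothetical two-step pathway, S → I → P, as mass-action differential equations and integrates it with SciPy. The pathway, the rate constants, and the initial concentrations are arbitrary placeholders of the kind that would normally come from experiments or parameter fitting.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-step pathway S -> I -> P under mass-action kinetics.
# k1 and k2 are placeholder rate constants (1/s).
k1, k2 = 0.5, 0.2

def rhs(t, y):
    S, I, P = y
    v1 = k1 * S   # rate of S -> I
    v2 = k2 * I   # rate of I -> P
    # Mass conservation: each species' balance is production minus consumption.
    return [-v1, v1 - v2, v2]

y0 = [10.0, 0.0, 0.0]   # initial concentrations (arbitrary units)
sol = solve_ivp(rhs, (0.0, 30.0), y0, dense_output=True)

for t in np.linspace(0.0, 30.0, 7):
    S, I, P = sol.sol(t)
    print(f"t={t:5.1f}  S={S:6.3f}  I={I:6.3f}  P={P:6.3f}")
```

Fitting k1 and k2 to measured time courses, or swapping the mass-action terms for enzyme kinetic rate laws, would follow the same pattern.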
See also Biochemical systems equation Biological computation BioSystems (journal) Computational biology Exposome Interactome List of omics topics in biology List of systems biology modeling software Living systems Metabolic Control Analysis Metabolic network modelling Modelling biological systems Molecular pathological epidemiology Network biology Network medicine Synthetic biology Systems biomedicine Systems immunology Systems medicine TIARA (database) References Further reading provides a comparative review of three books: External links Biological Systems in bio-physics-wiki Bioinformatics Computational fields of study
Systems biology
[ "Technology", "Engineering", "Biology" ]
2,701
[ "Biological engineering", "Computational fields of study", "Bioinformatics", "Computing and society", "Systems biology" ]
467,919
https://en.wikipedia.org/wiki/Medical%20software
Medical software is any software item or system used within a medical context, for example to reduce paperwork or to track patient activity. Examples include standalone software used for diagnostic or therapeutic purposes; software embedded in a medical device (often referred to as "medical device software"); software that drives a medical device or determines how it is used; software that acts as an accessory to a medical device; software used in the design, production, and testing of a medical device; and software that provides quality control management of a medical device. History Medical software has been in use since at least the 1960s, a time when the first computerized information-handling system in the hospital sphere was being considered by Lockheed. As computing became more widespread and useful in the late 1970s and into the 1980s, the concept of "medical software" as a data and operations management tool in the medical industry — including in the physician's office — became more prevalent. Medical software became more prominent in medical devices in fields such as nuclear medicine, cardiology, and medical robotics by the early 1990s, prompting additional scrutiny of the "safety-critical" nature of medical software in the research and legislative communities, in part fueled by the Therac-25 radiation therapy device scandal. The development of the ISO 9000-3 standard as well as the European Medical Devices Directive in 1993 helped bring some harmonization of existing laws with medical devices and their associated software, and the addition of IEC 62304 in 2006 further cemented how medical device software should be developed and tested. The U.S. Food and Drug Administration (FDA) has also offered guidance and driven regulation on medical software, particularly software embedded in and used as medical devices. There was an expansion of medical software innovation with the adoption of electronic health records and the availability of electronic clinical data. In the United States, substantial resources were allocated starting with the HITECH Act of 2009. Medical device software The global IEC 62304 standard on the software life cycle processes of medical device software states it is a "software system that has been developed for the purpose of being incorporated into the medical device being developed or that is intended for use as a medical device in its own right." In the U.S., the FDA states that "any software that meets the legal definition of a [medical] device" is considered medical device software. A similar "software can be a medical device" interpretation was also made by the European Union in 2007 with an update to its European Medical Devices Directive, covering software "used specifically for diagnostic and/or therapeutic purposes." Due to the broad scope covered by these terms, manifold classifications can be proposed for various medical software, based for instance on its technical nature (embedded in a device or standalone), on its level of safety (from the most trivial to the most safety-critical), or on its primary function (treatment, education, diagnostics, and/or data management). Software as a medical device The dramatic increase in smartphone usage in the twenty-first century triggered the emergence of thousands of stand-alone health- and medical-related software apps, many falling into a gray or borderline area in terms of regulation.
While software embedded into a medical device was being addressed, medical software separate from medical hardware — referred to by the International Medical Device Regulators Forum (IMDRF) as "software as a medical device" or "SaMD" — was falling through existing regulatory cracks. In the U.S., the FDA eventually released new draft guidance in July 2011 on "mobile medical applications," with members of the legal community such as Keith Barritt speculating that it should be read as applicable to all software, "since the test for determining whether a mobile application is a regulated mobile 'medical' application is the same test one would use to determine if any software is regulated." Examples of mobile apps potentially covered by the guidance included those that regulate an installed pacemaker; those that analyze images for cancerous lesions, X-rays, MRI scans, or graphic data such as EEG waveforms; and apps functioning as bedside monitors, urine analyzers, glucometers, stethoscopes, spirometers, BMI calculators, heart rate monitors, or body fat calculators. By the time its final guidance was released in late 2013, however, members of Congress had begun to be concerned about how the guidance would be used in the future, in particular what it would mean for the SOFTWARE Act legislation that had recently been introduced. Around the same time, the IMDRF was working on a more global perspective of SaMD with the release of its Key Definitions in December 2013, focused on "[establishing] a common framework for regulators to incorporate converged controls into their regulatory approaches for SaMD." Aside from "not [being] necessary for a hardware medical device to achieve its intended medical purpose," the IMDRF found that SaMD could not drive a medical device, though it could be used as a module of or interfaced with one. The group further developed quality management system principles for SaMD in 2015. International standards IEC 62304 has become the benchmark standard for the development of medical device software, whether standalone software or otherwise, in both the E.U. and the U.S. Ongoing innovation in software technologies has led key industry figures and government regulators to recognize the emergence of numerous standalone medical software products that operate as medical devices. This has been reflected in regulatory changes in the E.U. (European Medical Devices Directive) and the U.S. (various FDA guidance documents). Additionally, quality management system requirements for manufacturing a software medical device, as is the case with any medical device, are described in the U.S. Quality Systems Regulation of the FDA and also in ISO 13485:2003. Manufacturers that operate within the software medical device space must develop their products in accordance with those requirements. Furthermore, though it is not mandatory, they may elect to obtain certification from a notified body after implementing the quality system requirements described in international standards such as ISO 13485:2003. Further reading Babelotzky, W; Bohrt, C.; Choudhuri, J.; Handorn, B.; Heidenreich, G.; Neuder, K.; Neumann, G.; Prinz, T.; Rösch, A.; Spyra, G.; Stephan, S.; Wenner, H.; Wufka, M. (2018) Development and Production of Medical Software: Standards in Medical Engineering. VDE VERLAG GMBH. pp. 1-207. See also Health informatics Health information technology External links References
Medical software
[ "Biology" ]
1,360
[ "Medical software", "Medical technology" ]
468,001
https://en.wikipedia.org/wiki/FASTA%20format
In bioinformatics and biochemistry, the FASTA format is a text-based format for representing either nucleotide sequences or amino acid (protein) sequences, in which nucleotides or amino acids are represented using single-letter codes. The format allows for sequence names and comments to precede the sequences. It originated from the FASTA software package and has since become a near-universal standard in bioinformatics. The simplicity of FASTA format makes it easy to manipulate and parse sequences using text-processing tools and scripting languages. Overview A sequence begins with a greater-than character (">") followed by a description of the sequence (all in a single line). The lines immediately following the description line are the sequence representation, with one letter per amino acid or nucleic acid, and are typically no more than 80 characters in length. For example: >MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA DIDGDGQVNYEEFVQMMTAK* Original format The original FASTA/Pearson format is described in the documentation for the FASTA suite of programs. It can be downloaded with any free distribution of FASTA (see fasta20.doc, fastaVN.doc, or fastaVN.me—where VN is the Version Number). In the original format, a sequence was represented as a series of lines, each of which was no longer than 120 characters and usually did not exceed 80 characters. This probably was to allow for the preallocation of fixed line sizes in software: at the time most users relied on Digital Equipment Corporation (DEC) VT220 (or compatible) terminals which could display 80 or 132 characters per line. Most people preferred the bigger font in 80-character modes and so it became the recommended fashion to use 80 characters or less (often 70) in FASTA lines. Also, the width of a standard printed page is 70 to 80 characters (depending on the font). Hence, 80 characters became the norm. The first line in a FASTA file started either with a ">" (greater-than) symbol or, less frequently, a ";" (semicolon); a line starting with a semicolon was taken as a comment. Subsequent lines starting with a semicolon would be ignored by software. Since the only comment used was the first, it quickly became used to hold a summary description of the sequence, often starting with a unique library accession number, and with time it has become commonplace to always use ">" for the first line and to not use ";" comments (which would otherwise be ignored). Following the initial line (used for a unique description of the sequence) was the actual sequence itself in the standard one-letter character string. Anything other than a valid character would be ignored (including spaces, tabulators, asterisks, etc.). It was also common to end the sequence with an "*" (asterisk) character (in analogy with use in PIR formatted sequences) and, for the same reason, to leave a blank line between the description and the sequence.
Below are a few sample sequences: ;LCBO - Prolactin precursor - Bovine ; a sample sequence in FASTA format MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC* >MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA DIDGDGQVNYEEFVQMMTAK* >gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus] LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX IENY A multiple-sequence FASTA format, or multi-FASTA format, would be obtained by concatenating several single-sequence FASTA files in one file. This does not imply a contradiction with the format as only the first line in a FASTA file may start with a ";" or ">", forcing all subsequent sequences to start with a ">" in order to be taken as separate sequences (and further forcing the exclusive reservation of ">" for the sequence definition line). Thus, the examples above would be a multi-FASTA file if taken together. Modern bioinformatics programs that rely on the FASTA format expect the sequence headers to be preceded by ">". The sequence is generally represented as "interleaved", or on multiple lines as in the above example, but may also be "sequential", or on a single line. Running different bioinformatics programs may require conversions between "sequential" and "interleaved" FASTA formats. Description line The description line (defline) or header/identifier line, which begins with ">", gives a name and/or a unique identifier for the sequence, and may also contain additional information. In a deprecated practice, the header line sometimes contained more than one header, separated by a ^A (Control-A) character. In the original Pearson FASTA format, one or more comments, distinguished by a semi-colon at the beginning of the line, may occur after the header. Some databases and bioinformatics applications do not recognize these comments and follow the NCBI FASTA specification. An example of a multiple sequence FASTA file follows: >SEQUENCE_1 MTEITAAMVKELRESTGAGMMDCKNALSETNGDFDKAVQLLREKGLGKAAKKADRLAAEG LVSVKVSDDFTIAAMRPSYLSYEDLDMTFVENEYKALVAELEKENEERRRLKDPNKPEHK IPQFASRKQLSDAILKEAEEKIKEELKAQGKPEKIWDNIIPGKMNSFIADNSQLDSKLTL MGQFYVMDDKKTVEQVIAEKEKEFGGKIKIVEFICFEVGEGLEKKTEDFAAEVAAQL >SEQUENCE_2 SATVSEINSETDFVAKNDQFIALTKDTTAHIQSNSLQSVEELHSSTINGVKFEEYLKSQI ATIGENLVVRRFATLKAGANGVVNGYIHTNGRVGVVIAAACDSAEVASKSRDLLRQICMH NCBI identifiers The NCBI defined a standard for the unique identifier used for the sequence (SeqID) in the header line. This allows a sequence that was obtained from a database to be labelled with a reference to its database record. The database identifier format is understood by the NCBI tools like makeblastdb and table2asn. The following list describes the NCBI FASTA defined format for sequence identifiers. The vertical bars ("|") in the above list are not separators in the sense of the Backus–Naur form but are part of the format. Multiple identifiers can be concatenated, also separated by vertical bars. Sequence representation Following the header line, the actual sequence is represented. 
Sequences may be protein sequences or nucleic acid sequences, and they can contain gaps or alignment characters (see sequence alignment). Sequences are expected to be represented in the standard IUB/IUPAC amino acid and nucleic acid codes, with these exceptions: lower-case letters are accepted and are mapped into upper-case; a single hyphen or dash can be used to represent a gap character; and in amino acid sequences, U and * are acceptable letters (see below). Numerical digits are not allowed but are used in some databases to indicate the position in the sequence. The supported nucleic acid codes are the standard IUPAC single-letter codes, and the supported amino acid codes cover 22 amino acids plus 3 special codes. FASTA file Filename extension There is no standard filename extension for a text file containing FASTA formatted sequences. Commonly seen extensions include .fasta and .fa (generic), .fna (nucleic acids), .ffn (nucleotide coding regions), .faa (amino acids), and .frn (non-coding RNA). Compression The compression of FASTA files requires a specific compressor to handle both channels of information: identifiers and sequence. For improved compression results, these are mainly divided into two streams where the compression is made assuming independence. For example, the algorithm MFCompress performs lossless compression of these files using context modelling and arithmetic encoding. Genozip, a software package for compressing genomic files, uses an extensible context-based model. Benchmarks of FASTA file compression algorithms have been reported by Hosseini et al. in 2016, and Kryukov et al. in 2020. Encryption The encryption of FASTA files can be performed with various tools, including Cryfa and Genozip. Cryfa uses AES encryption and also enables data compression. Similarly, Genozip can encrypt FASTA files with AES-256 during compression. Extensions FASTQ format is a form of FASTA format extended to indicate information related to sequencing. It was created by the Sanger Centre in Cambridge. A2M/A3M are a family of FASTA-derived formats used for sequence alignments. In A2M/A3M sequences, lowercase characters are taken to mean insertions, which are then indicated in the other sequences as the dot (".") character. The dots can be discarded for compactness without loss of information. As with typical FASTA files used in alignments, the gap ("-") is taken to mean exactly one position. A3M is similar to A2M, with the added rule that gaps aligned to insertions can also be discarded. Working with FASTA files A plethora of user-friendly scripts are available from the community to perform FASTA file manipulations. Online toolboxes, such as FaBox or the FASTX-Toolkit within Galaxy servers, are also available. These can be used to segregate sequence headers/identifiers, rename them, shorten them, or extract sequences of interest from large FASTA files based on a list of wanted identifiers (among other available functions). A tree-based approach to sorting multi-FASTA files (TREE2FASTA) also exists based on the coloring and/or annotation of sequences of interest in the FigTree viewer. Additionally, the Bioconductor Biostrings package can be used to read and manipulate FASTA files in R. Several online format converters exist to rapidly reformat multi-FASTA files to different formats (e.g. NEXUS, PHYLIP) for use with different phylogenetic programs, such as the converter available on phylogeny.fr.
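In the same spirit as the community scripts just mentioned, a toy multi-FASTA reader fits in a few lines. The sketch below is illustrative rather than production code: it recognizes only ">" description lines (deliberately ignoring the legacy ";" comments), concatenates wrapped sequence lines, and strips internal whitespace.

```python
def parse_fasta(lines):
    """Yield (header, sequence) pairs from an iterable of FASTA lines.

    Headers are the text after '>'; the sequence lines that follow are
    concatenated, with whitespace removed, until the next header.
    """
    header, chunks = None, []
    for line in lines:
        line = line.rstrip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:], []
        elif line and header is not None:
            chunks.append("".join(line.split()))  # drop any internal spaces
    if header is not None:
        yield header, "".join(chunks)

# Truncated fragments of the multi-FASTA sample shown earlier.
example = """>SEQUENCE_1
MTEITAAMVKELRESTGAGMMDCKNALSET
NGDFDKAVQLLREKGLGKAAKKADRLAAEG
>SEQUENCE_2
SATVSEINSETDFVAKNDQFIALTKDTTAH"""

for name, seq in parse_fasta(example.splitlines()):
    print(f"{name}: {len(seq)} letters, starting {seq[:10]}...")
```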
See also The FASTQ format, used to represent DNA sequencer reads along with quality scores. The SAM and CRAM formats, used to represent genome sequencer reads that have been aligned to genome sequences. The GVF format (Genome Variation Format), an extension based on the GFF3 format. References External links Bioconductor FASTX-Toolkit FigTree viewer Phylogeny.fr GTO Bioinformatics Biological sequence format
FASTA format
[ "Engineering", "Biology" ]
2,659
[ "Bioinformatics", "Biological engineering", "Biological sequence format" ]
468,117
https://en.wikipedia.org/wiki/Sequence%20clustering
In bioinformatics, sequence clustering algorithms attempt to group biological sequences that are somehow related. The sequences can be either of genomic, "transcriptomic" (ESTs) or protein origin. For proteins, homologous sequences are typically grouped into families. For EST data, clustering is important to group sequences originating from the same gene before the ESTs are assembled to reconstruct the original mRNA. Some clustering algorithms use single-linkage clustering, constructing a transitive closure of sequences with a similarity over a particular threshold. UCLUST and CD-HIT use a greedy algorithm that identifies a representative sequence for each cluster and assigns a new sequence to that cluster if it is sufficiently similar to the representative; if a sequence is not matched then it becomes the representative sequence for a new cluster (a minimal sketch of this greedy scheme appears at the end of this article). The similarity score is often based on sequence alignment. Sequence clustering is often used to make a non-redundant set of representative sequences. Sequence clusters are often synonymous with (but not identical to) protein families. Determining a representative tertiary structure for each sequence cluster is the aim of many structural genomics initiatives. Sequence clustering algorithms and packages CD-HIT UCLUST in USEARCH Starcode: a fast sequence clustering algorithm based on exact all-pairs search. OrthoFinder: a fast, scalable and accurate method for clustering proteins into gene families (orthogroups) Linclust: first algorithm whose runtime scales linearly with input set size, very fast, part of MMseqs2 software suite for fast, sensitive sequence searching and clustering of large sequence sets TribeMCL: a method for clustering proteins into related groups BAG: a graph theoretic sequence clustering algorithm JESAM: Open source parallel scalable DNA alignment engine with optional clustering software component UICluster: Parallel Clustering of EST (Gene) Sequences BLASTClust single-linkage clustering with BLAST Clusterer: extendable java application for sequence grouping and cluster analyses PATDB: a program for rapidly identifying perfect substrings nrdb: a program for merging trivially redundant (identical) sequences CluSTr: A single-linkage protein sequence clustering database from Smith-Waterman sequence similarities; covers over 7 million sequences including UniProt and IPI ICAtools - original (ancient) DNA clustering package with many algorithms useful for artifact discovery or EST clustering Skipredundant EMBOSS tool to remove redundant sequences from a set CLUSS Algorithm to identify groups of structurally, functionally, or evolutionarily related hard-to-align protein sequences. CLUSS webserver CLUSS2 Algorithm for clustering families of hard-to-align protein sequences with multiple biological functions. CLUSS2 webserver Non-redundant sequence databases PISCES: A Protein Sequence Culling Server RDB90 UniRef: A non-redundant UniProt sequence database Uniclust: Clustered UniProtKB sequences at the level of 90%, 50% and 30% pairwise sequence identity. Virus Orthologous Clusters: A viral protein sequence clustering database; contains all predicted genes from eleven virus families organized into ortholog groups by BLASTP similarity See also Cluster analysis Social sequence analysis References Bioinformatics
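As referenced above, the following is a minimal sketch of the greedy, representative-based clustering scheme. It substitutes a crude k-mer Jaccard similarity for the optimized alignment-based heuristics that real tools such as CD-HIT and UCLUST employ, and the threshold and example sequences are arbitrary choices for illustration.

```python
def kmers(seq, k=3):
    """Set of overlapping k-mers; a crude stand-in for sequence alignment."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of k-mer sets (NOT what CD-HIT/UCLUST compute)."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

def greedy_cluster(seqs, threshold=0.8):
    """Assign each sequence to the first sufficiently similar representative;
    otherwise it becomes the representative of a new cluster."""
    clusters = []  # list of (representative, members) pairs
    # Like the real tools, process longer sequences first so that they
    # tend to become the representatives.
    for s in sorted(seqs, key=len, reverse=True):
        for rep, members in clusters:
            if similarity(rep, s) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters

seqs = ["MKTAYIAKQR", "MKTAYIAKQQ", "GWTLNSAGYLLG", "MKTAYIAKQRQ"]
for rep, members in greedy_cluster(seqs):
    print(rep, "->", members)
```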
Sequence clustering
[ "Engineering", "Biology" ]
674
[ "Bioinformatics", "Biological engineering" ]
468,154
https://en.wikipedia.org/wiki/Protein%20family
A protein family is a group of evolutionarily related proteins. In many cases, a protein family has a corresponding gene family, in which each gene encodes a corresponding protein with a 1:1 relationship. The term "protein family" should not be confused with family as it is used in taxonomy.

Proteins in a family descend from a common ancestor and typically have similar three-dimensional structures, functions, and significant sequence similarity. Sequence similarity (usually amino-acid sequence) is one of the most common indicators of homology, or common evolutionary ancestry. Some frameworks for evaluating the significance of similarity between sequences use sequence alignment methods. Proteins that do not share a common ancestor are unlikely to show statistically significant sequence similarity, making sequence alignment a powerful tool for identifying the members of protein families. Families are sometimes grouped together into larger clades called superfamilies based on structural similarity, even if there is no identifiable sequence homology.

Currently, over 60,000 protein families have been defined, although ambiguity in the definition of "protein family" leads different researchers to highly varying numbers.

Terminology and usage
The term protein family has broad usage and can be applied to large groups of proteins with barely detectable sequence similarity as well as narrow groups of proteins with near-identical sequence, function, and structure. To distinguish between these cases, a hierarchical terminology is in use. At the highest level of classification are protein superfamilies, which group distantly related proteins, often based on their structural similarity. Next are protein families, which refer to proteins with a shared evolutionary origin exhibited by significant sequence similarity. Subfamilies can be defined within families to denote closely related proteins that have similar or identical functions. For example, a superfamily like the PA clan of proteases has less sequence conservation than the C04 family within it.

Protein domains and motifs
Protein families were first recognised when most proteins that were structurally understood were small, single-domain proteins such as myoglobin, hemoglobin, and cytochrome c. Since then, many proteins have been found with multiple independent structural and functional units called domains. Due to evolutionary shuffling, different domains in a protein have evolved independently. This has led to a focus on families of protein domains. Several online resources are devoted to identifying and cataloging these domains.

Different regions of a protein have differing functional constraints. For example, the active site of an enzyme requires certain amino-acid residues to be precisely oriented. A protein–protein binding interface may consist of a large surface with constraints on the hydrophobicity or polarity of the amino-acid residues. Functionally constrained regions of proteins evolve more slowly than unconstrained regions such as surface loops, giving rise to blocks of conserved sequence when the sequences of a protein family are compared (see multiple sequence alignment). These blocks are most commonly referred to as motifs, although many other terms are used (blocks, signatures, fingerprints, etc.). Several online resources are devoted to identifying and cataloging protein motifs.

Evolution of protein families
According to current consensus, protein families arise in two ways.
First, the separation of a parent species into two genetically isolated descendant species allows a gene/protein to independently accumulate variations (mutations) in the two lineages. This results in a family of orthologous proteins, usually with conserved sequence motifs. Second, a gene duplication may create a second copy of a gene (termed a paralog). Because the original gene is still able to perform its function, the duplicated gene is free to diverge and may acquire new functions (by random mutation).

Certain gene/protein families, especially in eukaryotes, undergo extreme expansions and contractions in the course of evolution, sometimes in concert with whole genome duplications. Expansions are less likely, and losses more likely, for intrinsically disordered proteins and for protein domains whose hydrophobic amino acids are further from the optimal degree of dispersion along the primary sequence. This expansion and contraction of protein families is one of the salient features of genome evolution, but its importance and ramifications are currently unclear.

Use and importance of protein families
As the total number of sequenced proteins increases and interest expands in proteome analysis, an effort is ongoing to organize proteins into families and to describe their component domains and motifs. Reliable identification of protein families is critical to phylogenetic analysis, functional annotation, and the exploration of the diversity of protein function in a given phylogenetic branch. The Enzyme Function Initiative uses protein families and superfamilies as the basis for development of a sequence/structure-based strategy for large-scale functional assignment of enzymes of unknown function. The algorithmic means for establishing protein families on a large scale are based on a notion of similarity.

Protein family resources
Many biological databases catalog protein families and allow users to match query sequences to known families (see the motif-matching sketch following this article). These include:
Pfam - Protein families database of alignments and HMMs
PROSITE - Database of protein domains, families and functional sites
PIRSF - SuperFamily Classification System
PASS2 - Protein Alignment as Structural Superfamilies v2 - PASS2@NCBS
SUPERFAMILY - Library of HMMs representing superfamilies and database of (superfamily and family) annotations for all completely sequenced organisms
SCOP and CATH - Classifications of protein structures into superfamilies, families and domains

Similarly, many database-searching algorithms exist, for example:
BLAST - DNA sequence similarity search
BLASTp - Protein sequence similarity search
OrthoFinder - Method for clustering proteins into families (orthogroups)

See also
Gene family
Genome annotation
Sequence clustering
Protein families

References

External links

Bioinformatics
Protein classification
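As a hedged illustration of matching a query sequence against a family motif, the sketch below translates a simplified PROSITE-style pattern into a Python regular expression and scans a sequence with it. The helper name prosite_to_regex is my own, the translation handles only a subset of the PROSITE syntax, and the pattern and query are illustrative examples rather than entries taken from the PROSITE database.

```python
import re

# Translate a simplified subset of PROSITE pattern syntax into a regex:
#   x       = any residue          [..] = allowed residues
#   {..}    = forbidden residues   (n) or (n,m) = repetition count
def prosite_to_regex(pattern: str) -> str:
    regex = []
    for token in pattern.split("-"):
        rep = ""
        if "(" in token:                       # e.g. "x(4)" -> token "x", rep "{4}"
            token, count = token[:-1].split("(")
            rep = "{%s}" % count
        if token == "x":                       # any residue
            regex.append("." + rep)
        elif token.startswith("["):            # allowed residues, e.g. [LIVM]
            regex.append(token + rep)
        elif token.startswith("{"):            # forbidden residues, e.g. {P}
            regex.append("[^" + token[1:-1] + "]" + rep)
        else:                                  # literal residue
            regex.append(token + rep)
    return "".join(regex)

# Illustrative Walker A-like motif in PROSITE-style syntax and a toy query.
pattern = "[AG]-x(4)-G-K-[ST]"
query = "MSNLAGKTGSGKSTLLN"

hit = re.search(prosite_to_regex(pattern), query)
print(prosite_to_regex(pattern))            # [AG].{4}GK[ST]
print(hit.group(0) if hit else "no match")  # GKTGSGKS
```

Real family assignment tools go well beyond such fixed patterns: Pfam and SUPERFAMILY, listed above, score queries against profile HMMs, which tolerate insertions, deletions, and position-specific residue preferences that a single regular expression cannot express.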
Protein family
[ "Engineering", "Biology" ]
1,153
[ "Biological engineering", "Protein classification", "Bioinformatics", "Protein families", "Protein superfamilies" ]