Dataset columns (name, dtype, range of values or lengths):
id             int64     39 to 79M
url            string    lengths 32 to 168
text           string    lengths 7 to 145k
source         string    lengths 2 to 105
categories     list      lengths 1 to 6
token_count    int64     3 to 32.2k
subcategories  list      lengths 0 to 27
4,954,757
https://en.wikipedia.org/wiki/HTTP%20compression
HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. HTTP data is compressed before it is sent from the server: compliant browsers announce which methods they support to the server before downloading the correct format; browsers that do not support a compliant compression method download the data uncompressed. The most common compression schemes include gzip and Brotli; a full list of available schemes is maintained by the IANA. There are two different ways compression can be done in HTTP. At a lower level, a Transfer-Encoding header field may indicate the payload of an HTTP message is compressed. At a higher level, a Content-Encoding header field may indicate that a resource being transferred, cached, or otherwise referenced is compressed. Compression using Content-Encoding is more widely supported than Transfer-Encoding, and some browsers do not advertise support for Transfer-Encoding compression to avoid triggering bugs in servers. Compression scheme negotiation The negotiation is done in two steps, described in RFC 2616 and RFC 9110: 1. The web client advertises which compression schemes it supports by including a list of tokens in the HTTP request. For Content-Encoding, the list is in a field called Accept-Encoding; for Transfer-Encoding, the field is called TE. GET /encrypted-area HTTP/1.1 Host: www.example.com Accept-Encoding: gzip, deflate 2. If the server supports one or more compression schemes, the outgoing data may be compressed by one or more methods supported by both parties. If this is the case, the server will add a Content-Encoding or Transfer-Encoding field in the HTTP response with the used schemes, separated by commas. HTTP/1.1 200 OK Date: Mon, 26 Jun 2016 22:38:34 GMT Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux) Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT Accept-Ranges: bytes Content-Length: 438 Connection: close Content-Type: text/html; charset=UTF-8 Content-Encoding: gzip The web server is by no means obligated to use any compression method – this depends on the internal settings of the web server and also may depend on the internal architecture of the website in question. Content-Encoding tokens The official list of tokens available to servers and clients is maintained by IANA, and it includes: br – Brotli, a compression algorithm specifically designed for HTTP content encoding, defined in RFC 7932 and implemented in all modern major browsers. compress – UNIX "compress" program method (historic; deprecated in most applications and replaced by gzip or deflate) deflate – compression based on the deflate algorithm (described in RFC 1951), a combination of the LZ77 algorithm and Huffman coding, wrapped inside the zlib data format (RFC 1950); exi – W3C Efficient XML Interchange gzip – GNU zip format (described in RFC 1952). Uses the deflate algorithm for compression, but the data format and the checksum algorithm differ from the "deflate" content-encoding. This method is the most broadly supported as of March 2011. identity – No transformation is used. This is the default value for content coding.
pack200-gzip – Network Transfer Format for Java Archives zstd – Zstandard compression. In addition to these, a number of unofficial or non-standardized tokens are used in the wild by either servers or clients: bzip2 – compression based on the free bzip2 format, supported by lighttpd lzma – compression based on (raw) LZMA is available in Opera 20, and in elinks via a compile-time option peerdist – Microsoft Peer Content Caching and Retrieval rsync – delta encoding in HTTP, implemented by a pair of rproxy proxies. xpress – Microsoft compression protocol used by Windows 8 and later for Windows Store application updates. LZ77-based compression optionally using a Huffman encoding. xz – LZMA2-based content compression, supported by a non-official Firefox patch; and fully implemented in mget since 2013-12-31. Servers that support HTTP compression SAP NetWeaver Microsoft IIS: built-in or using third-party module Apache HTTP Server, via mod_deflate (despite its name, only supporting gzip), and mod_brotli Hiawatha HTTP server: serves pre-compressed files Cherokee HTTP server, on-the-fly gzip and deflate compression Oracle iPlanet Web Server Zeus Web Server lighttpd nginx – built-in Applications based on Tornado, if "compress_response" is set to True in the application settings (for versions prior to 4.0, set "gzip" to True) Jetty Server – built into default static content serving and available via servlet filter configurations GeoServer Apache Tomcat IBM Websphere AOLserver Ruby Rack, via the Rack::Deflater middleware HAProxy Varnish – built-in. Works also with ESI Armeria – serving pre-compressed files NaviServer – built-in, dynamic and static compression Caddy – built-in via encode Many content delivery networks also implement HTTP compression to speed up the delivery of resources to end users. Compression in HTTP can also be achieved by using the functionality of server-side scripting languages like PHP, or programming languages like Java. Various online tools exist to verify a working implementation of HTTP compression. These online tools usually request multiple variants of a URL, each with different request headers (with varying Accept-Encoding content). HTTP compression is considered to be implemented correctly when the server returns a document in a compressed format. By comparing the sizes of the returned documents, the effective compression ratio can be calculated (even between different compression algorithms). Problems preventing the use of HTTP compression A 2009 article by Google engineers Arvind Jain and Jason Glasgow states that more than 99 person-years are wasted daily due to increased page load time when users do not receive compressed content. This occurs when anti-virus software interferes with connections to force them to be uncompressed, where proxies are used (with overcautious web browsers), where servers are misconfigured, and where browser bugs stop compression being used. Internet Explorer 6, which drops to HTTP 1.0 (without features like compression or pipelining) when behind a proxy – a common configuration in corporate environments – was the mainstream browser most prone to falling back to uncompressed HTTP.
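To make the negotiation described above concrete, the following is a minimal Python sketch of the server-side choice between gzip, deflate and identity. The function name and the fallback behaviour are illustrative assumptions, not part of any particular web server's API; a real server would also honour quality values (q=) and emit a Vary: Accept-Encoding header so caches store the variants separately.

```python
import gzip
import zlib

def negotiate_encoding(accept_encoding_header: str, body: bytes):
    """Pick a Content-Encoding token based on the client's Accept-Encoding header.

    Returns (encoding_token, encoded_body).  Falls back to 'identity'
    (no transformation) when neither gzip nor deflate is offered.
    """
    offered = {token.split(";")[0].strip().lower()
               for token in accept_encoding_header.split(",")}
    if "gzip" in offered:
        return "gzip", gzip.compress(body)
    if "deflate" in offered:
        # zlib.compress produces the zlib-wrapped stream (RFC 1950 framing
        # around RFC 1951 data) that the spec means by "deflate".
        return "deflate", zlib.compress(body)
    return "identity", body

# Example: a client that sent "Accept-Encoding: gzip, deflate"
encoding, payload = negotiate_encoding("gzip, deflate", b"<html>hello</html>" * 100)
print(encoding, len(payload))   # "gzip" and a much smaller byte count
```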
Another problem found while deploying HTTP compression on a large scale is due to the deflate encoding definition: while HTTP 1.1 defines the deflate encoding as data compressed with deflate (RFC 1951) inside a zlib formatted stream (RFC 1950), Microsoft server and client products historically implemented it as a "raw" deflated stream, making its deployment unreliable. For this reason, some software, including the Apache HTTP Server, only implements gzip encoding. Security implications Compression allows a form of chosen plaintext attack to be performed: if an attacker can inject any chosen content into the page, they can tell whether the page contains their given content by observing the size increase of the encrypted stream. If the increase is smaller than expected for random injections, it means that the compressor has found a repeat in the text, i.e. the injected content overlaps the secret information. This is the idea behind CRIME. In 2012, a general attack against the use of data compression, called CRIME, was announced. While the CRIME attack could work effectively against a large number of protocols, including but not limited to TLS, and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined. In 2013, a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was published. A BREACH attack can extract login tokens, email addresses or other sensitive information from TLS encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link. All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used. Unlike previous instances of CRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users. As of 2016, the TIME attack and the HEIST attack are public knowledge. References External links RFC 2616: Hypertext Transfer Protocol – HTTP/1.1 RFC 9110: HTTP Semantics HTTP Content-Coding Values by Internet Assigned Numbers Authority Compression with lighttpd Coding Horror: HTTP Compression on IIS 6.0 Using HTTP Compression by Martin Brown of Server Watch Using HTTP Compression in PHP Dynamic and static HTTP compression with Apache httpd Web development Lossless compression algorithms Hypertext Transfer Protocol de:Hypertext Transfer Protocol#HTTP-Kompression
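The size side-channel exploited by CRIME and BREACH can be illustrated with a few lines of Python. This is only a toy demonstration of the principle (a guess that overlaps a secret compresses better); the secret and page layout are made up, and a real attack must average over many responses and defeat padding and noise.

```python
import zlib

SECRET = b"sessionid=7f3a9c21"          # hypothetical secret reflected in every page

def compressed_length(attacker_injection: bytes) -> int:
    """Length of the compressed page containing both the secret and the injection."""
    page = b"<html>" + SECRET + b"<p>q=" + attacker_injection + b"</p></html>"
    return len(zlib.compress(page))

# A guess identical to the secret is encoded as a back-reference to the earlier
# occurrence, so it compresses noticeably better than an unrelated string of the
# same length, which must be emitted as literals.
print(compressed_length(b"sessionid=7f3a9c21"))   # smaller
print(compressed_length(b"Kq3w9Zr5Lx2Pv8Bn4T"))   # larger
```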
HTTP compression
[ "Engineering" ]
1,955
[ "Software engineering", "Web development" ]
4,954,950
https://en.wikipedia.org/wiki/List%20of%20common%20coordinate%20transformations
This is a list of some of the most commonly used coordinate transformations. 2-dimensional Let be the standard Cartesian coordinates, and the standard polar coordinates. To Cartesian coordinates From polar coordinates From log-polar coordinates By using complex numbers , the transformation can be written as That is, it is given by the complex exponential function. From bipolar coordinates From 2-center bipolar coordinates From Cesàro equation To polar coordinates From Cartesian coordinates Note: solving for returns the resultant angle in the first quadrant (). To find one must refer to the original Cartesian coordinate, determine the quadrant in which lies (for example, (3,−3) [Cartesian] lies in QIV), then use the following to solve for The value for must be solved for in this manner because for all values of , is only defined for , and is periodic (with period ). This means that the inverse function will only give values in the domain of the function, but restricted to a single period. Hence, the range of the inverse function is only half a full circle. Note that one can also use From 2-center bipolar coordinates Where 2c is the distance between the poles. To log-polar coordinates from Cartesian coordinates Arc-length and curvature In Cartesian coordinates In polar coordinates 3-dimensional Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (as , see conventions in spherical coordinates). As φ has a range of 360° the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware for an arctangent. If, in the alternative definition, θ is chosen to run from −90° to +90°, in opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccotangent. In this case in all formulas below all arguments in θ should have sine and cosine exchanged, and as derivative also a plus and minus exchanged. All divisions by zero result in special cases of being directions along one of the main axes and are in practice most easily solved by observation. To Cartesian coordinates From spherical coordinates So for the volume element: From cylindrical coordinates So for the volume element: To spherical coordinates From Cartesian coordinates See also the article on atan2 for how to elegantly handle some edge cases. So for the element: From cylindrical coordinates To cylindrical coordinates From Cartesian coordinates From spherical coordinates Arc-length, curvature and torsion from Cartesian coordinates See also Geographic coordinate conversion Transformation matrix References Coordinate transformations Coordinate systems Hamiltonian mechanics
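Since the article's formulas did not survive extraction, here is a small Python sketch of the standard Cartesian/polar and Cartesian/spherical conversions it describes (physics convention, with the polar angle measured from the +Z axis). The function names are my own; atan2 does the quadrant bookkeeping discussed above.

```python
import math

def cartesian_to_polar(x, y):
    """2-D: returns (r, theta) with theta in (-pi, pi]; atan2 resolves the quadrant."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_spherical(x, y, z):
    """3-D: returns (rho, theta, phi) with theta measured away from the +Z axis."""
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / rho) if rho else 0.0   # polar angle, 0..pi
    phi = math.atan2(y, x)                        # azimuth, -pi..pi
    return rho, theta, phi

def spherical_to_cartesian(rho, theta, phi):
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

# The article's example point (3, -3) lies in quadrant IV: atan2 returns -pi/4.
print(cartesian_to_polar(3, -3))
```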
List of common coordinate transformations
[ "Physics", "Mathematics" ]
578
[ "Functions and mappings", "Theoretical physics", "Mathematical objects", "Classical mechanics", "Hamiltonian mechanics", "Mathematical relations", "Transforms", "Coordinate systems", "Dynamical systems" ]
22,577,530
https://en.wikipedia.org/wiki/Flatness%20%28cosmology%29
In cosmology, flatness is a property of a space without curvature. Such a space is called a "flat space" or Euclidean space. Whether the universe is "flat" could determine its ultimate fate: whether it will expand forever, or ultimately collapse back into itself. The geometry of spacetime has been measured by the Wilkinson Microwave Anisotropy Probe (WMAP) to be nearly flat. According to the WMAP 5-year results and analysis, "WMAP determined that the universe is flat, from which it follows that the mean energy density in the universe is equal to the critical density (within a 1% margin of error). This is equivalent to a mass density of 9.9 × 10⁻³⁰ g/cm³, which is equivalent to only 5.9 protons per cubic meter." The WMAP data are consistent with a flat geometry, with Ω = 1.02 ± 0.02. See also Flatness problem References External links http://archive.ncsa.uiuc.edu/Cyberia/Cosmos/FlatnessProblem.html Physical cosmology
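As a rough check of the quoted numbers, the critical density can be computed from the Friedmann relation ρ_c = 3H₀²/(8πG). The Hubble constant below is an assumed WMAP-era value, not a figure taken from the article.

```python
import math

H0_km_s_Mpc = 71.0         # assumed Hubble constant, km/s/Mpc (not from the article)
Mpc_m = 3.0857e22          # metres per megaparsec
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.6726e-27           # proton mass, kg

H0 = H0_km_s_Mpc * 1e3 / Mpc_m                  # Hubble constant in s^-1
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)    # critical density, kg/m^3

print(f"{rho_crit:.2e} kg/m^3  =  {rho_crit * 1e-3:.2e} g/cm^3")
print(f"about {rho_crit / m_p:.1f} proton masses per cubic metre")
# Prints roughly 9.5e-30 g/cm^3 and about 5.7 protons per cubic metre, close to
# the article's 9.9e-30 g/cm^3 and 5.9 (which correspond to a slightly larger H0).
```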
Flatness (cosmology)
[ "Physics", "Astronomy" ]
236
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
22,578,403
https://en.wikipedia.org/wiki/Bender%E2%80%93Knuth%20involution
In algebraic combinatorics, a Bender–Knuth involution is an involution on the set of semistandard tableaux, introduced by Bender and Knuth in their study of plane partitions. Definition The Bender–Knuth involutions σk are defined for integers k, and act on the set of semistandard skew Young tableaux of some fixed shape μ/ν, where μ and ν are partitions. The involution σk acts by changing some of the entries k of the tableau to k + 1, and some of the entries k + 1 to k, in such a way that the numbers of entries with values k and k + 1 are exchanged. Call an entry of the tableau free if it is k or k + 1 and there is no other entry with value k or k + 1 in the same column. For any i, the free entries of row i are all in consecutive columns, and consist of ai copies of k followed by bi copies of k + 1, for some ai and bi. The Bender–Knuth involution σk replaces them by bi copies of k followed by ai copies of k + 1. Applications Bender–Knuth involutions can be used to show that the number of semistandard skew tableaux of given shape and weight is unchanged under permutations of the weight. In turn this implies that the Schur function of a partition is a symmetric function. Bender–Knuth involutions have also been used to give a short proof of the Littlewood–Richardson rule. References Symmetric functions Algebraic combinatorics Combinatorial algorithms Permutations
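The definition above is algorithmic, so a short sketch may help. The following Python function applies σk to a straight-shape (non-skew) semistandard tableau given as a list of rows; the representation and function name are my own choices under that assumption, not taken from the article.

```python
def bender_knuth(tableau, k):
    """Apply the Bender-Knuth involution sigma_k to a semistandard Young tableau.

    The tableau is a list of weakly increasing rows (left-justified, straight
    shape); columns must be strictly increasing.  An entry k is free unless the
    cell directly below it is k+1, and an entry k+1 is free unless the cell
    directly above it is k; in each row the free entries form a consecutive block
    of a copies of k followed by b copies of k+1, which is replaced by b copies
    of k followed by a copies of k+1.
    """
    t = [list(row) for row in tableau]          # work on a copy

    def below(i, j):
        return t[i + 1][j] if i + 1 < len(t) and j < len(t[i + 1]) else None

    def above(i, j):
        return t[i - 1][j] if i > 0 and j < len(t[i - 1]) else None

    for i, row in enumerate(t):
        free = [j for j, v in enumerate(row)
                if (v == k and below(i, j) != k + 1)
                or (v == k + 1 and above(i, j) != k)]
        b = sum(1 for j in free if row[j] == k + 1)   # free copies of k+1
        for idx, j in enumerate(free):                # write b copies of k, then k+1
            row[j] = k if idx < b else k + 1
    return t

t = [[1, 1, 1], [2]]
print(bender_knuth(t, 1))                         # [[1, 2, 2], [2]]: weight (3,1) -> (1,3)
print(bender_knuth(bender_knuth(t, 1), 1) == t)   # True: it is an involution
```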
Bender–Knuth involution
[ "Physics", "Mathematics" ]
333
[ "Combinatorial algorithms", "Functions and mappings", "Permutations", "Algebra", "Computational mathematics", "Mathematical objects", "Combinatorics", "Symmetric functions", "Fields of abstract algebra", "Mathematical relations", "Algebraic combinatorics", "Symmetry" ]
19,984,031
https://en.wikipedia.org/wiki/Chris%20Dyer%20%28engineer%29
Chris Dyer (born 12 February 1968) is the former head of vehicle performance group at Renault Sport Formula 1 Team and the former race engineer of Michael Schumacher and Kimi Räikkönen at Scuderia Ferrari. Early career Born in Bendigo, Victoria, Dyer worked with the top V8 Supercar outfit, the Tom Walkinshaw owned Holden Racing Team in the mid-1990s alongside drivers like Peter Brock and Craig Lowndes. In 1997, he switched to Walkinshaw's Formula One team Arrows, working as Damon Hill's chief data engineer. In 1998, he stepped up to race engineering, working with drivers like Jos Verstappen. Ferrari For the 2001 season, Dyer moved to Ferrari, working as Schumacher's vehicle engineer, alongside Luca Baldisserri. By the end of 2002, Dyer engineered Schumacher at the tests; after the championship had been won, he also race engineered Schumacher at the last three races at Monza, Indianapolis, and Suzuka. Dyer then race engineered Schumacher to his 2003 and 2004 world titles, appearing with the German on the podium after his triumph at the 2003 Canadian Grand Prix. Dyer was quoted saying: "One of Michael's strengths is that, apart from driving quickly, he has an understanding of the car and how all the systems work." Dyer took over as Räikkönen's race engineer when the Finn moved to Ferrari in 2007. Despite suggestions that the pair did not always get the most from the package, Räikkönen took the title in 2007 at the final race by a single point over Lewis Hamilton and Fernando Alonso. After the disappointing results in the 2008 season, Ferrari announced that Dyer would be replaced by Andrea Stella for the 2009 season, with Dyer promoted to chief track engineer. On 4 January 2011, Ferrari announced that Dyer was replaced as head of race track engineering by former McLaren engineer Pat Fry. This decision was taken after Dyer made the call to bring Alonso in the pit lane to cover off the Australian Mark Webber's pit stop in the final race of the season, the 2010 Abu Dhabi Grand Prix. The decision was blamed for costing Alonso the title in favour of Sebastian Vettel who went on to become champion. In October 2012, it was announced that Dyer was to join BMW's Deutsche Tourenwagen Masters (DTM) programme as chief engineer. Renault On 4 February 2016, it was announced by Renault Sport that Dyer would be returning to Formula One as their head of vehicle performance group. Career 1997: Arrows data engineer 1998–2000: Arrows race engineer 2001–2002: Scuderia Ferrari vehicle engineer 2003–2006: Scuderia Ferrari race engineer, Michael Schumacher 2007–2008: Scuderia Ferrari race engineer, Kimi Räikkönen 2009–2011: Scuderia Ferrari chief track engineer 2011–2012: Scuderia Ferrari team member 2012–2015: BMW DTM chief engineer 2016–2020: Renault Sport F1 Team, head of vehicle performance group References 1968 births Australian engineers Australian motorsport people Data engineers Ferrari people Formula One engineers Living people
Chris Dyer (engineer)
[ "Engineering" ]
614
[ "Data engineers", "Data engineering" ]
19,986,090
https://en.wikipedia.org/wiki/Staged%20reforming
Staged reforming is a thermochemical process to convert organic material or bio waste such as wood, dung or hay into combustible gases containing methane, carbon monoxide and hydrogen. The single-stage reforming of bio materials results in high dust and tar yields in the produced gas, restricting its use; hence the use of staged reforming. After reforming, the output is approximately 80% fuel gas and 20% cokes. In staged reforming technology, gas conversion is a separate stage after pyrolysis. First stage Organic material is decomposed into gas and coal at approximately 600°C. Second stage Gas produced by the first stage is reformed with water vapor and heat energy from the cokes into a dust- and residue-free fuel gas. Second stage process steps: heating in the pre-heater at 1050°C; cooldown in the reformer by chemical reaction at 750°C; further cooling to 550°C by heating the cold cokes using cold mix; re-heating to 1050°C. See also Plasma arc waste disposal Thermochemical conversion References External links Blauer Turm Biogas technology Chemical processes Hydrogen production
Staged reforming
[ "Chemistry", "Biology" ]
232
[ "Biofuels technology", "Chemical processes", "nan", "Chemical process engineering", "Biogas technology" ]
19,989,435
https://en.wikipedia.org/wiki/F%28R%29%20gravity
{{DISPLAYTITLE:f(R) gravity}} In physics, f(R) is a type of modified gravity theory which generalizes Einstein's general relativity. f(R) gravity is actually a family of theories, each one defined by a different function, , of the Ricci scalar, . The simplest case is just the function being equal to the scalar; this is general relativity. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. Some functional forms may be inspired by corrections arising from a quantum theory of gravity. f(R) gravity was first proposed in 1970 by Hans Adolph Buchdahl (although was used rather than for the name of the arbitrary function). It has become an active field of research following work by Alexei Starobinsky on cosmic inflation. A wide range of phenomena can be produced from this theory by adopting different functions; however, many functional forms can now be ruled out on observational grounds, or because of pathological theoretical problems. Introduction In f(R) gravity, one seeks to generalize the Lagrangian of the Einstein–Hilbert action: to where is the determinant of the metric tensor, and f(R) is some function of the Ricci scalar. There are two ways to track the effect of changing R to f(R), i.e., to obtain the theory field equations. The first is to use metric formalism and the second is to use the Palatini formalism. While the two formalisms lead to the same field equations for General Relativity, i.e., when f(R) = R, the field equations may differ when f(R) ≠ R. Metric f(R) gravity Derivation of field equations In metric f(R) gravity, one arrives at the field equations by varying the action with respect to the metric and not treating the connection independently. For completeness we will now briefly mention the basic steps of the variation of the action. The main steps are the same as in the case of the variation of the Einstein–Hilbert action (see the article for more details) but there are also some important differences. The variation of the determinant is as always: The Ricci scalar is defined as Therefore, its variation with respect to the inverse metric is given by For the second step see the article about the Einstein–Hilbert action. Since is the difference of two connections, it should transform as a tensor. Therefore, it can be written as Substituting into the equation above: where is the covariant derivative and is the d'Alembert operator. Denoting , the variation in the action reads: Doing integration by parts on the second and third terms (and neglected the boundary contributions), we get: By demanding that the action remains invariant under variations of the metric, , one obtains the field equations: where is the energy–momentum tensor defined as where is the matter Lagrangian. Generalized Friedmann equations Assuming a Robertson–Walker metric with scale factor we can find the generalized Friedmann equations to be (in units where ): where is the Hubble parameter, the dot is the derivative with respect to the cosmic time , and the terms m and rad represent the matter and radiation densities respectively; these satisfy the continuity equations: Modified gravitational constant An interesting feature of these theories is the fact that the gravitational constant is time and scale dependent. 
To see this, add a small scalar perturbation to the metric (in the Newtonian gauge): where and are the Newtonian potentials and use the field equations to first order. After some lengthy calculations, one can define a Poisson equation in the Fourier space and attribute the extra terms that appear on the right-hand side to an effective gravitational constant eff. Doing so, we get the gravitational potential (valid on sub-horizon scales ): where m is a perturbation in the matter density, is the Fourier scale and eff is: with Massive gravitational waves This class of theories when linearized exhibits three polarization modes for the gravitational waves, of which two correspond to the massless graviton (helicities ±2) and the third (scalar) is coming from the fact that if we take into account a conformal transformation, the fourth order theory () becomes general relativity plus a scalar field. To see this, identify and use the field equations above to get Working to first order of perturbation theory: and after some tedious algebra, one can solve for the metric perturbation, which corresponds to the gravitational waves. A particular frequency component, for a wave propagating in the -direction, may be written as where and g() = d/d is the group velocity of a wave packet centred on wave-vector . The first two terms correspond to the usual transverse polarizations from general relativity, while the third corresponds to the new massive polarization mode of () theories. This mode is a mixture of massless transverse breathing mode (but not traceless) and massive longitudinal scalar mode. The transverse and traceless modes (also known as tensor modes) propagate at the speed of light, but the massive scalar mode moves at a speed G < 1 (in units where  = 1), this mode is dispersive. However, in () gravity metric formalism, for the model (also known as pure model), the third polarization mode is a pure breathing mode and propagate with the speed of light through the spacetime. Equivalent formalism Under certain additional conditions we can simplify the analysis of () theories by introducing an auxiliary field . Assuming for all , let () be the Legendre transformation of () so that and . Then, one obtains the O'Hanlon (1972) action: We have the Euler–Lagrange equations: Eliminating , we obtain exactly the same equations as before. However, the equations are only second order in the derivatives, instead of fourth order. We are currently working with the Jordan frame. By performing a conformal rescaling: we transform to the Einstein frame: after integrating by parts. Defining , and substituting This is general relativity coupled to a real scalar field: using f(R) theories to describe the accelerating universe is practically equivalent to using quintessence. (At least, equivalent up to the caveat that we have not yet specified matter couplings, so (for example) f(R) gravity in which matter is minimally coupled to the metric (i.e., in Jordan frame) is equivalent to a quintessence theory in which the scalar field mediates a fifth force with gravitational strength.) Palatini f(R) gravity In Palatini f(R) gravity, one treats the metric and connection independently and varies the action with respect to each of them separately. The matter Lagrangian is assumed to be independent of the connection. These theories have been shown to be equivalent to Brans–Dicke theory with . 
Due to the structure of the theory, however, Palatini () theories appear to be in conflict with the Standard Model, may violate Solar system experiments, and seem to create unwanted singularities. Metric-affine f(R) gravity In metric-affine f(R) gravity, one generalizes things even further, treating both the metric and connection independently, and assuming the matter Lagrangian depends on the connection as well. Observational tests As there are many potential forms of f(R) gravity, it is difficult to find generic tests. Additionally, since deviations away from General Relativity can be made arbitrarily small in some cases, it is impossible to conclusively exclude some modifications. Some progress can be made, without assuming a concrete form for the function f(R) by Taylor expanding The first term is like the cosmological constant and must be small. The next coefficient 1 can be set to one as in general relativity. For metric f(R) gravity (as opposed to Palatini or metric-affine () gravity), the quadratic term is best constrained by fifth force measurements, since it leads to a Yukawa correction to the gravitational potential. The best current bounds are or equivalently The parameterized post-Newtonian formalism is designed to be able to constrain generic modified theories of gravity. However, () gravity shares many of the same values as General Relativity, and is therefore indistinguishable using these tests. In particular light deflection is unchanged, so () gravity, like General Relativity, is entirely consistent with the bounds from Cassini tracking. Starobinsky gravity Starobinsky gravity has the following form where has the dimensions of mass. Starobinsky gravity provides a mechanism for the cosmic inflation, just after the Big Bang when R was still large. However, it is not suited to describe the present universe acceleration since at present R is very small. This implies that the quadratic term in is negligible, i.e., one tends to f(R) = R which is general relativity with a zero cosmological constant. Gogoi–Goswami gravity Gogoi–Goswami gravity (named after Dhruba Jyoti Gogoi and Umananda Dev Goswami) has the following form where and are two dimensionless positive constants and Rc is a characteristic curvature constant. Tensorial generalization f(R) gravity as presented in the previous sections is a scalar modification of general relativity. More generally, we can have a coupling involving invariants of the Ricci tensor and the Weyl tensor. Special cases are () gravity, conformal gravity, Gauss–Bonnet gravity and Lovelock gravity. Notice that with any nontrivial tensorial dependence, we typically have additional massive spin-2 degrees of freedom, in addition to the massless graviton and a massive scalar. An exception is Gauss–Bonnet gravity where the fourth order terms for the spin-2 components cancel out. See also Extended theories of gravity Gauss–Bonnet gravity Lovelock gravity References Further reading See Chapter 29 in the textbook on "Particles and Quantum Fields" by Kleinert, H. (2016), World Scientific (Singapore, 2016) (also available online) Salvatore Capozziello and Mariafelicia De Laurentis, (2015) "F(R) theories of gravitation". Scholarpedia, doi:10.4249/scholarpedia.31422 Kalvakota, Vaibhav R., (2021) "Investigating f(R)" gravity and cosmologies". Mathematical physics preprint archive, https://web.ma.utexas.edu/mp_arc/c/21/21-38.pdf External links f(R) gravity on arxiv.org Extended Theories of Gravity Theories of gravity
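Because the equations did not survive extraction, the following LaTeX fragment restates the generalized action and the metric-formalism field equations discussed above, as they are usually written in the literature; this is a reconstruction under that convention, not a verbatim restoration of the article's formulas.

```latex
% Generalized Einstein-Hilbert action and the metric-formalism f(R) field equations
S = \frac{1}{2\kappa}\int f(R)\,\sqrt{-g}\,\mathrm{d}^{4}x + S_{\mathrm{matter}},
\qquad \kappa = 8\pi G,
\\[6pt]
F(R)\,R_{\mu\nu} - \tfrac{1}{2}\,f(R)\,g_{\mu\nu}
  + \bigl( g_{\mu\nu}\,\Box - \nabla_{\mu}\nabla_{\nu} \bigr) F(R)
  = \kappa\,T_{\mu\nu},
\qquad F(R) \equiv \frac{\mathrm{d}f}{\mathrm{d}R}.
```

Setting f(R) = R gives F(R) = 1, the derivative terms vanish, and the equations reduce to the ordinary Einstein field equations, which is the statement in the introduction that the simplest choice recovers general relativity.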
F(R) gravity
[ "Physics" ]
2,259
[ "Theoretical physics", "Theories of gravity" ]
508,070
https://en.wikipedia.org/wiki/Telescoping%20series
In mathematics, a telescoping series is a series whose general term is of the form , i.e. the difference of two consecutive terms of a sequence . As a consequence the partial sums of the series only consists of two terms of after cancellation. The cancellation technique, with part of each term cancelling with part of the next term, is known as the method of differences. An early statement of the formula for the sum or partial sums of a telescoping series can be found in a 1644 work by Evangelista Torricelli, De dimensione parabolae. Definition Telescoping sums are finite sums in which pairs of consecutive terms partly cancel each other, leaving only parts of the initial and final terms. Let be the elements of a sequence of numbers. Then If converges to a limit , the telescoping series gives: Every series is a telescoping series of its own partial sums. Examples The product of a geometric series with initial term and common ratio by the factor yields a telescoping sum, which allows for a direct calculation of its limit:when so when The seriesis the series of reciprocals of pronic numbers, and it is recognizable as a telescoping series once rewritten in partial fraction form Let k be a positive integer. Then where Hk is the kth harmonic number. Let k and m with k m be positive integers. Then where denotes the factorial operation. Many trigonometric functions also admit representation as differences, which may reveal telescopic canceling between the consecutive terms. Using the angle addition identity for a product of sines, which does not converge as Applications In probability theory, a Poisson process is a stochastic process of which the simplest case involves "occurrences" at random times, the waiting time until the next occurrence having a memoryless exponential distribution, and the number of "occurrences" in any time interval having a Poisson distribution whose expected value is proportional to the length of the time interval. Let Xt be the number of "occurrences" before time t, and let Tx be the waiting time until the xth "occurrence". We seek the probability density function of the random variable Tx. We use the probability mass function for the Poisson distribution, which tells us that where λ is the average number of occurrences in any time interval of length 1. Observe that the event {Xt ≥ x} is the same as the event {Tx ≤ t}, and thus they have the same probability. Intuitively, if something occurs at least times before time , we have to wait at most for the occurrence. The density function we seek is therefore The sum telescopes, leaving For other applications, see: Proof that the sum of the reciprocals of the primes diverges, where one of the proofs uses a telescoping sum; Fundamental theorem of calculus, a continuous analog of telescoping series; Order statistic, where a telescoping sum occurs in the derivation of a probability density function; Lefschetz fixed-point theorem, where a telescoping sum arises in algebraic topology; Homology theory, again in algebraic topology; Eilenberg–Mazur swindle, where a telescoping sum of knots occurs; Faddeev–LeVerrier algorithm. Related concepts A telescoping product is a finite product (or the partial product of an infinite product) that can be canceled by the method of quotients to be eventually only a finite number of factors. It is the finite products in which consecutive terms cancel denominator with numerator, leaving only the initial and final terms. Let be a sequence of numbers. 
Then, If converges to 1, the resulting product gives: For example, the infinite product simplifies as References Mathematical series
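A quick numerical check of the canonical example above (the series of reciprocals of pronic numbers): the partial sums collapse to 1 − 1/(N+1). The script below is only an illustration; exact rational arithmetic is used so the cancellation is visible without rounding error.

```python
from fractions import Fraction

# Partial sums of sum 1/(n(n+1)) = sum (1/n - 1/(n+1)): everything cancels
# except the first and last terms, so the N-th partial sum is 1 - 1/(N+1).
def partial_sum(N):
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

for N in (5, 50, 500):
    print(N, partial_sum(N), Fraction(1) - Fraction(1, N + 1))
# Both columns agree for every N, and the sums approach 1 as N grows.
```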
Telescoping series
[ "Mathematics" ]
778
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus" ]
508,390
https://en.wikipedia.org/wiki/Brownian%20motor
Brownian motors are nanoscale or molecular machines that use chemical reactions to generate directed motion in space. The theory behind Brownian motors relies on the phenomenon of Brownian motion, random motion of particles suspended in a fluid (a liquid or a gas) resulting from their collision with the fast-moving molecules in the fluid. On the nanoscale (1–100 nm), viscosity dominates inertia, and the extremely high degree of thermal noise in the environment makes conventional directed motion all but impossible, because the forces impelling these motors in the desired direction are minuscule when compared to the random forces exerted by the environment. Brownian motors operate specifically to utilise this high level of random noise to achieve directed motion, and as such are only viable on the nanoscale. The concept of Brownian motors is a recent one, having only been coined in 1995 by Peter Hänggi, but such motors may have existed in nature for a very long time and may help to explain crucial cellular processes that require movement at the nanoscale, such as protein synthesis and muscular contraction. If this is the case, Brownian motors may have implications for the foundations of life itself. In more recent times, humans have attempted to apply this knowledge of natural Brownian motors to solve human problems. The applications of Brownian motors are most obvious in nanorobotics due to its inherent reliance on directed motion. History 20th century The term "Brownian motor" was originally coined by Swiss theoretical physicist Peter Hänggi in 1995. The Brownian motor, like the phenomenon of Brownian motion that underpinned its underlying theory, was also named after 19th-century Scottish botanist Robert Brown, who, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water, famously described the random motion of pollen particles in water in 1827. In 1905, almost eighty years later, theoretical physicist Albert Einstein published a paper in which he modeled the motion of the pollen as being moved by individual water molecules, and this was verified experimentally by Jean Perrin in 1908, who was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". These developments helped to create the fundamentals of the present theories of the nanoscale world. Nanoscience has long remained at the intersection of the physical sciences of physics and chemistry, but more recent developments in research increasingly position it beyond the scope of either of these two traditional fields. 21st century In 2002, a seminal paper on Brownian motors, "Brownian motors" by Dean Astumian and Peter Hänggi, was published in the American Institute of Physics magazine Physics Today. There, they proposed the then-novel concept of Brownian motors and posited that "thermal motion combined with input energy gives rise to a channeling of chance that can be used to exercise control over microscopic systems". Astumian and Hänggi provide in their paper a copy of Wallace Stevens' 1919 poem "The Place of the Solitaires" to elegantly illustrate, from an abstract perspective, the ceaseless nature of noise. A year after the Astumian–Hänggi paper, David Leigh's organic chemistry group reported the first artificial molecular Brownian motors. In 2007 the same team reported a Maxwell's demon-inspired molecular information ratchet.
Another important demonstration of nanoengineering and nanotechnology was the building of a practical artificial Brownian motor by IBM in 2018. Specifically, an energy landscape was created by accurately shaping a nanofluidic slit, and alternate potentials and an oscillating electric field were then used to "rock" nanoparticles to produce directed motion. The experiment successfully made the nanoparticles move along a track in the shape of the outline of the IBM logo and serves as an important milestone in the practical use of Brownian motors and other elements at the nanoscale. Additionally, various institutions around the world, such as the University of Sydney Nano Institute, headquartered at the Sydney Nanoscience Hub (SNH), and the Swiss Nanoscience Institute (SNI) at the University of Basel, are examples of the research activity emerging in the field of nanoscience. Brownian motors remain a central concept in both the understanding of natural molecular motors and the construction of useful nanoscale machines that involve directed motion. Theory The thermal noise on the nanoscale is so great that moving in a particular direction is as difficult as "walking in a hurricane" or "swimming in molasses". The theoretical operation of the Brownian motor can be explained by ratchet theory, wherein strong random thermal fluctuations are allowed to move the particle in the desired direction, while energy is expended to counteract forces that would produce motion in the opposite direction. This motion can be both linear and rotational. In biological systems, where this phenomenon occurs naturally, the required energy input is chemical energy sourced from the molecule adenosine triphosphate (ATP). The Brownian ratchet is an apparent perpetual motion machine that appears to violate the second law of thermodynamics, but was later debunked upon more detailed analysis by Richard Feynman and other physicists. The difference between real Brownian motors and fictional Brownian ratchets is that only in Brownian motors is there an input of energy to provide the necessary force to hold the motor in place and counteract the thermal noise that tries to move the motor in the opposite direction. Because Brownian motors rely on the random nature of thermal noise to achieve directed motion, they are stochastic in nature, in that they can be analysed statistically but not predicted precisely. Examples in nature In biology, much of what we understand to be protein-based molecular motors may also in fact be Brownian motors. These molecular motors facilitate critical cellular processes in living organisms and, indeed, are fundamental to life itself. Researchers have made significant advances in terms of examining these organic processes to gain insight into their inner workings. For example, molecular Brownian motors in the form of several different types of protein exist within humans. Two common biomolecular Brownian motors are ATP synthase, a rotary motor, and myosin II, a linear motor. The motor protein ATP synthase produces rotational torque that facilitates the synthesis of ATP from adenosine diphosphate (ADP) and inorganic phosphate (Pi) through the following overall reaction: ADP + Pi + 3H+out ⇌ ATP + H2O + 3H+in In contrast, the force produced by myosin II is linear and is a basis for the process of muscle contraction. Similar motor proteins include kinesin and dynein, which all convert chemical energy into mechanical work by the hydrolysis of ATP.
Many motor proteins within human cells act as Brownian motors by producing directed motion on the nanoscale; common examples include the proteins discussed above. Applications Nanorobotics The relevance of Brownian motors to the requirement of directed motion in nanorobotics has become increasingly apparent to researchers from both academia and industry. Artificial replications of Brownian motors are informed by and differ from nature, and one specific type is the photomotor, wherein the motor switches states due to pulses of light and generates directed motion. These photomotors, in contrast to their natural counterparts, are inorganic and possess greater efficiency and average velocity, and are thus better suited to human use than existing alternatives, such as organic protein motors. One of the six current "Grand Challenges" of the University of Sydney Nano Institute is to develop nanorobotics for health, a key aspect of which is a "nanoscale parts foundry" that can produce nanoscale Brownian motors for "active transport around the body". The institute predicts that among the implications of this research is a "paradigm shift" in healthcare "away from the 'break-fix' model to a focus on prevention and early intervention," such as in the case of heart disease. Professor Paul Bannon, an adult cardiothoracic surgeon of international standing and leading medical researcher, has summarised the benefits of nanorobotics in health. See also Molecular machines Molecular motor Brownian motion Brownian ratchet Nanoengineering Nanorobotics Robert Brown Peter Hänggi Notes External links Brownian motor on arxiv.org Nanotechnology Thermodynamics
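The ratchet picture described in the Theory section can be illustrated with a minimal simulation: overdamped Brownian particles in a periodically switched ("flashing") asymmetric sawtooth potential acquire a net drift even though the time-averaged force contains no bias. All parameter values below are illustrative choices of mine, not values from the article or from any specific experiment.

```python
import numpy as np

# Units: friction coefficient = 1 and kT = 1, so the diffusion constant D = 1.
rng = np.random.default_rng(0)
a, height = 0.2, 10.0          # sawtooth asymmetry and barrier height (in kT)
dt, D = 2e-4, 1.0
t_on, t_off = 0.10, 0.02       # durations of the potential-on and potential-off phases

def force(x):
    """Minus the gradient of the period-1 sawtooth potential with its peak at x = a."""
    frac = x % 1.0
    return np.where(frac < a, -height / a, height / (1.0 - a))

x = np.zeros(2000)             # ensemble of particles, all starting in one minimum
for _ in range(150):           # 150 flashing cycles (Euler-Maruyama integration)
    for duration, on in ((t_on, True), (t_off, False)):
        for _ in range(int(round(duration / dt))):
            drift = force(x) * dt if on else 0.0
            x += drift + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)

# With these parameters the mean displacement comes out clearly positive, roughly
# 0.1 to 0.2 lattice periods per cycle: directed motion rectified from unbiased noise.
print("mean displacement:", x.mean())
```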
Brownian motor
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,725
[ "Nanotechnology", "Materials science", "Thermodynamics", "Dynamical systems" ]
508,492
https://en.wikipedia.org/wiki/Locked%20nucleic%20acid
A locked nucleic acid (LNA), also known as bridged nucleic acid (BNA), and often referred to as inaccessible RNA, is a modified RNA nucleotide in which the ribose moiety is modified with an extra bridge connecting the 2' oxygen and 4' carbon. The bridge "locks" the ribose in the 3'-endo (North) conformation, which is often found in A-form duplexes. This structure provides for increased stability against enzymatic degradation. LNA also offers improved specificity and affinity in base-pairing as a monomer or a constituent of an oligonucleotide. LNA nucleotides can be mixed with DNA or RNA residues in an oligonucleotide. Synthesis Obika et al. were the first to chemically synthesize LNA in 1997, independently followed by Jesper Wengel's group in 1998. This became possible after Zamecnik and Stephenson laid the groundwork in 1978 on the possibility of oligonucleotides being effective agents for controlling gene expression. To date, two different approaches, referred to as linear and convergent strategies respectively, have been shown to produce high yield and efficient LNAs. The linear strategy of synthesis was first detailed in the works of Obika et al. In this approach, uridine (or any readily available RNA nucleoside) can be used as the starting material. The convergent strategy requires the synthesis of a sugar intermediate which serves as a glycosyl donor necessary for coupling with nucleobases. Commonly, D-glucose is used to produce the sugar intermediate, which is subsequently reacted with nucleobases using a modified Vorbrüggen procedure allowing for stereoselective coupling. The addition of different moieties has remained a possibility while maintaining key physicochemical properties like the high affinity and specificity evident in the originally synthesized LNA. Such oligomers are synthesized chemically and are commercially available. Incorporation into DNA/RNA LNA can be incorporated into DNA and RNA using the promiscuity of certain DNA and RNA polymerases. Phusion DNA polymerase, a commercially designed enzyme based on Pfu DNA polymerase, efficiently incorporates LNA into DNA. Properties LNA offers enhanced biostability compared to biological nucleic acids. LNA-modified oligonucleotides have demonstrated improved thermodynamics in hybridization to RNA, ssDNA, and dsDNA. Applications LNAzymes DNAzymes can be modified to include LNA residues, producing LNAzymes (LNA-modified DNAzymes). These modified oligonucleotides, like their DNAzyme relatives, are generally endonucleases that bind to specific RNA target sequences and cleave the phosphodiester bond that exists between the nucleotides. However, they demonstrate more efficient cleavage of phosphodiester bonds compared to their unmodified counterparts. Modification of the substrate recognition arms of DNAzymes with LNA monomers yields an LNAzyme which recognizes coxsackievirus A21 (CAV-21) and cleaves its RNA target sequence similar to one in the 5' untranslated region (5' UTR) of the human rhinovirus-14 (HRV-14); a sequence unrecognized by unmodified DNAzymes. Therapeutics Using LNA based oligonucleotides therapeutically is an emerging field in biotechnology. A variety of LNA oligonucleotides have been assessed for their pharmacokinetic and toxicity profiles. Studies concluded that LNA toxicity is generally independent of oligonucleotide sequence, and displays a preferential safety profile for translatable therapeutic applications.
LNA has been investigated for its therapeutic properties in treating cancers and infectious diseases. A locked nucleic acid phosphorothioate antisense molecule, termed SPC2996, has been developed to target the mRNA coding for Bcl-2 oncoprotein, a protein that inhibits apoptosis in chronic lymphocytic leukemia cells (CLL). Phase I and II clinical trials demonstrated a dose dependent reduction in circulating CLL cells in approximately 30% of the sample population, suggesting further investigation into SPC2996. LNA has also been applied to Miravirsen, an experimental therapeutic intended for the treatment of Hepatitis C, constituting a 15-nucleotide phosphorothioate sequence with binding specificity for MiR-122 (a miRNA expressed in hepatocytes). Detection and diagnosis Allele-specific PCR using LNA allows for the design of shorter primers, without compromising binding specificity. LNA has been incorporated in fluorescence in situ hybridization (FISH). FISH is a common technique used to visualize genetic material in a variety of cells, but studies noted that this technique has been limited by low probe hybridization efficiency. Conversely, LNA-incorporated probes demonstrated increased hybridization efficiency in both DNA and RNA. The improved efficiency of LNA-incorporated FISH has resulted in FISH analysis of the human chromosome, several types of non-human cells, and microarrays. LNA genotyping assays have been conducted as well, specifically to detect a mutation in apolipoprotein B. For its high affinity for mismatch discrimination, LNA has been studied for its applications in diagnostic tools. Immobilized LNA probes have been introduced in a multiplex SNP genotyping assay. Gene editing LNA-modified ssODNs (synthetic single-stranded DNA oligonucleotides) can be used like ordinary ssODNs for single-base gene editing. Using LNA at or close to the intended site of modification offers evasion of DNA mismatch repair due to the higher thermodynamic stability it has. References Nucleic acids
Locked nucleic acid
[ "Chemistry" ]
1,237
[ "Biomolecules by chemical classification", "Nucleic acids" ]
508,664
https://en.wikipedia.org/wiki/Exergonic%20reaction
In chemical thermodynamics, an exergonic reaction is a chemical reaction where the change in the free energy is negative (there is a net release of free energy). This indicates a spontaneous reaction if the system is closed and initial and final temperatures are the same. For processes that take place in a closed system at constant pressure and temperature, the Gibbs free energy is used, whereas the Helmholtz energy is relevant for processes that take place at constant volume and temperature. Any reaction occurring at constant temperature without input of electrical or photon energy is exergonic, according to the second law of thermodynamics. An example is cellular respiration. Symbolically, the release of free energy, ΔG, in an exergonic reaction (at constant pressure and temperature) is denoted as ΔG < 0. Although exergonic reactions are said to occur spontaneously, this does not imply that the reaction will take place at an observable rate. For instance, the disproportionation of hydrogen peroxide releases free energy but is very slow in the absence of a suitable catalyst. It has been suggested that eager would be a more intuitive term in this context. More generally, the terms exergonic and endergonic relate to the free energy change in any process, not just chemical reactions. By contrast, the terms exothermic and endothermic relate to an enthalpy change in a closed system during a process, usually associated with the exchange of heat. See also Endergonic reaction References Thermochemistry Thermodynamic processes
Exergonic reaction
[ "Physics", "Chemistry" ]
324
[ "Thermochemistry", "Thermodynamic processes", "Thermodynamics" ]
508,681
https://en.wikipedia.org/wiki/Endergonic%20reaction
In chemical thermodynamics, an endergonic reaction (also called a heat-absorbing nonspontaneous reaction or an unfavorable reaction) is a chemical reaction in which the standard change in free energy is positive, and an additional driving force is needed to perform this reaction. In layman's terms, the total amount of useful energy is negative (it takes more energy to start the reaction than what is received out of it), so the total energy is a net negative result, as opposed to a net positive result in an exergonic reaction. Another way to phrase this is that useful energy must be absorbed from the surroundings into the workable system for the reaction to happen. Under constant temperature and constant pressure conditions, this means that the change in the standard Gibbs free energy would be positive for the reaction at standard state (i.e. at standard pressure (1 bar), and standard concentrations (1 molar) of all the reagents). In metabolism, an endergonic process is anabolic, meaning that energy is stored; in many such anabolic processes, energy is supplied by coupling the reaction to adenosine triphosphate (ATP), consequently resulting in a high-energy, negatively charged organic phosphate and positive adenosine diphosphate. Equilibrium constant The equilibrium constant for the reaction is related to ΔG° by the relation K = e^(−ΔG°/RT), where T is the absolute temperature and R is the gas constant. A positive value of ΔG° therefore implies K < 1, so that starting from molar stoichiometric quantities such a reaction would move backwards toward equilibrium, not forwards. Nevertheless, endergonic reactions are quite common in nature, especially in biochemistry and physiology. Examples of endergonic reactions in cells include protein synthesis, and the Na+/K+ pump which drives nerve conduction and muscle contraction. Gibbs free energy for endergonic reactions All physical and chemical systems in the universe follow the second law of thermodynamics and proceed in a downhill, i.e., exergonic, direction. Thus, left to itself, any physical or chemical system will proceed, according to the second law of thermodynamics, in a direction that tends to lower the free energy of the system, and thus to expend energy in the form of work. These reactions occur spontaneously. A chemical reaction is endergonic when it is non-spontaneous; in this type of reaction the Gibbs free energy increases. The entropy is included in any change of the Gibbs free energy. This differs from an endothermic reaction, where the entropy is not included. The Gibbs free energy change is calculated with the Gibbs–Helmholtz equation: ΔG = ΔH − TΔS, where T is the temperature in kelvins (K), ΔG is the change in the Gibbs free energy, ΔS is the change in entropy (at 298 K), and ΔH is the change in enthalpy (at 298 K). A chemical reaction progresses non-spontaneously when the Gibbs free energy increases; in that case ΔG is positive. In exergonic reactions ΔG is negative and in endergonic reactions ΔG is positive: ΔG < 0 for exergonic, ΔG > 0 for endergonic, where ΔG equals the change in the Gibbs free energy after completion of a chemical reaction. Making endergonic reactions happen Endergonic reactions can be achieved if they are either pulled or pushed by an exergonic (stability-increasing, negative change in free energy) process. Of course, in all cases the net reaction of the total system (the reaction under study plus the puller or pusher reaction) is exergonic. Pull Reagents can be pulled through an endergonic reaction, if the reaction products are cleared rapidly by a subsequent exergonic reaction.
The concentration of the products of the endergonic reaction thus always remains low, so the reaction can proceed. A classic example of this might be the first stage of a reaction which proceeds via a transition state. The process of getting to the top of the activation energy barrier to the transition state is endergonic. However, the reaction can proceed because having reached the transition state, it rapidly evolves via an exergonic process to the more stable final products. Push Endergonic reactions can be pushed by coupling them to another reaction which is strongly exergonic, through a shared intermediate. This is often how biological reactions proceed. For example, on its own the reaction may be too endergonic to occur. However it may be possible to make it occur by coupling it to a strongly exergonic reaction – such as, very often, the decomposition of ATP into ADP and inorganic phosphate ions, ATP → ADP + Pi, so that This kind of reaction, with the ATP decomposition supplying the free energy needed to make an endergonic reaction occur, is so common in cell biochemistry that ATP is often called the "universal energy currency" of all living organisms. See also Exergonic Exergonic reaction Exothermic Endothermic Exothermic reaction Endothermic reaction Endotherm Exotherm References Thermochemistry Thermodynamic processes
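A small worked example of the two relations used above, K = e^(−ΔG°/RT) and coupling to ATP hydrolysis. The +30 kJ/mol endergonic step is hypothetical, and −30.5 kJ/mol is a typical textbook value for ATP hydrolysis under standard conditions, not a figure from the article.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
T = 298.0          # kelvin

def equilibrium_constant(delta_g_standard_j_per_mol):
    """K = exp(-dG0 / (R*T)), the relation quoted in the article."""
    return math.exp(-delta_g_standard_j_per_mol / (R * T))

# A hypothetical endergonic step with dG0 = +30 kJ/mol barely proceeds on its own...
print(equilibrium_constant(+30_000))            # ~5.5e-06, i.e. K << 1

# ...but coupled to ATP hydrolysis (dG0' ~ -30.5 kJ/mol, a textbook value),
# the combined standard free-energy change becomes slightly negative.
coupled = +30_000 + (-30_500)
print(coupled, equilibrium_constant(coupled))   # -500 J/mol, K just above 1
```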
Endergonic reaction
[ "Physics", "Chemistry" ]
1,051
[ "Thermochemistry", "Thermodynamic processes", "Thermodynamics" ]
509,256
https://en.wikipedia.org/wiki/Pyruvate%20dehydrogenase%20complex
Pyruvate dehydrogenase complex (PDC) is a complex of three enzymes that converts pyruvate into acetyl-CoA by a process called pyruvate decarboxylation. Acetyl-CoA may then be used in the citric acid cycle to carry out cellular respiration, and this complex links the glycolysis metabolic pathway to the citric acid cycle. Pyruvate decarboxylation is also known as the "pyruvate dehydrogenase reaction" because it also involves the oxidation of pyruvate. This multi-enzyme complex is related structurally and functionally to the oxoglutarate dehydrogenase and branched-chain oxo-acid dehydrogenase multi-enzyme complexes. Reaction The overall reaction catalysed by the pyruvate dehydrogenase complex is: pyruvate + CoA + NAD+ → acetyl-CoA + CO2 + NADH. Structure Pyruvate dehydrogenase (E1) The E1 subunit, called the pyruvate dehydrogenase subunit, is either a homodimer (comprising two “α” chains, e.g. in Escherichia coli) or a heterotetramer of two different chains (two “α” and two “β” chains). A magnesium ion forms a 4-coordinate complex with three polar amino acid residues (Asp, Asn, and Tyr) located on the alpha chain, and the thiamine diphosphate (TPP) cofactor directly involved in decarboxylation of the pyruvate. Dihydrolipoyl transacetylase (E2) The E2 subunit, or dihydrolipoyl acetyltransferase, for both prokaryotes and eukaryotes, is generally composed of three domains. The N-terminal domain (the lipoyl domain) consists of 1–3 lipoyl groups of approximately 80 amino acids each. The peripheral subunit binding domain (PSBD) serves as a selective binding site for other domains of the E1 and E3 subunits. Finally, the C-terminal (catalytic) domain catalyzes the transfer of acetyl groups and acetyl-CoA synthesis. In Gammaproteobacteria, 24 copies of E2 form the cubic core of the pyruvate dehydrogenase complex, in which 8 E2 homotrimers are located at the vertices of the cubic core particle. Dihydrolipoyl dehydrogenase (E3) The E3 subunit, called the dihydrolipoyl dehydrogenase enzyme, is characterized as a homodimer protein wherein two cysteine residues, engaged in disulfide bonding, and the FAD cofactor in the active site facilitate its main purpose as an oxidizing catalyst. One example of E3 structure, found in Pseudomonas putida, is formed such that each individual homodimer subunit contains two binding domains responsible for FAD binding and NAD binding, as well as a central domain and an interface domain. Dihydrolipoyl dehydrogenase binding protein (E3BP) An auxiliary protein unique to most eukaryotes is the E3 binding protein (E3BP), which serves to bind the E3 subunit to the PDC complex. In the case of human E3BP, hydrophobic proline and leucine residues in the BP interact with the surface recognition site formed by the binding of two identical E3 monomers. Mechanism Pyruvate dehydrogenase (E1) Initially, pyruvate and thiamine pyrophosphate (TPP or vitamin B1) are bound by pyruvate dehydrogenase subunits. The thiazolium ring of TPP is in a zwitterionic form, and the anionic C2 carbon performs a nucleophilic attack on the C2 (ketone) carbonyl of pyruvate. The resulting intermediate undergoes decarboxylation to produce an acyl anion equivalent (see cyanohydrin or aldehyde-dithiane umpolung chemistry, as well as benzoin condensation). This anion attacks S1 of an oxidized lipoate species that is attached to a lysine residue. In a ring-opening SN2-like mechanism, S2 is displaced as a sulfide or sulfhydryl moiety. Subsequent collapse of the tetrahedral intermediate ejects thiazole, releasing the TPP cofactor and generating a thioacetate on S1 of lipoate.
The E1-catalyzed process is the rate-limiting step of the whole pyruvate dehydrogenase complex. Dihydrolipoyl transacetylase (E2) At this point, the lipoate-thioester functionality is translocated into the Dihydrolipoyl transacetylase (E2) active site, where a transacylation reaction transfers the acetyl from the "swinging arm" of lipoyl to the thiol of coenzyme A. This produces acetyl-CoA, which is released from the enzyme complex and subsequently enters the citric acid cycle. E2 can also be known as lipoamide reductase-transacetylase. Dihydrolipoyl dehydrogenase (E3) The dihydrolipoate, covalently bound to a lysine residue of the complex, is then transferred to the Dihydrolipoyl dehydrogenase (E3) active site, where it undergoes a flavin-mediated oxidation, similar in chemistry to e.g. thioredoxin reductase. First, FAD oxidizes dihydrolipoate back to its lipoate (disulfide) resting state, producing FADH2. Then, the substrate NAD+ oxidizes FADH2 back to its FAD resting state, producing NADH and H+. Differences in structure and subunit composition between species In all organisms, PDC is a large complex composed of multiple copies of the three catalytic subunits E1, E2 and E3. Another common feature of all PDCs is the fact that the subunit E2 forms the core of the complex to which the peripheral subunits E1 and E3 bind. Eukaryotic PDCs contain an additional, non-catalytic subunit in the core termed E3 binding protein (E3BP) (sometimes also "protein X"). In PDCs with a hetero-oligomeric core with multiple copies of E2 and E3BP, E1 exclusively associates with E2, and E3 only binds to E3BP. In contrast, E1 and E3 compete for binding to E2 in bacterial PDCs with a homo-oligomeric E2 core. While the peripheral enzyme E3 is a homodimer in all organisms, the peripheral enzyme E1 is an alpha2beta2 heterotetramerin eukaryotes. Gram-negative bacteria In Gram-negative bacteria, e.g. Escherichia coli, PDC consists of a central cubic core made up from 24 molecules of dihydrolipoyl transacetylase (E2). Up to 16 homodimers of pyruvate dehydrogenase (E1) and 8 homodimers of dihydrolipoyl dehydrogenase (E3) bind to the 24 peripheral subunit binding domains (PSBDs) of the E2 24-mer. In Gammaproteobacteria, the specificity of PSBD for binding either E1 or E3 is determined by the oligomeric state of PSBD. In each E2 homotrimer, two of the three PSBDs dimerize. While two E1 homodimers cooperatively bind dimeric PSBD, the remaining, unpaired PSBD specifically interacts with one E3 homodimer. PSBD dimerization thus determines the subunit composition of the pyruvate dehydrogenase complex when fully saturated with the peripheral subunits E1 and E3, which has a stoichiometry of E1:E2:E3 (monomers) = 32:24:16 Gram-positive bacteria and eukaryotes In contrast, in Gram-positive bacteria (e.g. Bacillus stearothermophilus) and eukaryotes the central PDC core contains 60 E2 molecules arranged into an icosahedron. This E2 subunit “core” coordinates to 30 subunits of E1 and 12 copies of E3. Eukaryotes also contain 12 copies of an additional core protein, E3 binding protein (E3BP) which bind the E3 subunits to the E2 core. The exact location of E3BP is not completely clear. Cryo-electron microscopy has established that E3BP binds to each of the icosahedral faces in yeast. However, it has been suggested that it replaces an equivalent number of E2 molecules in the bovine PDC core. Up to 60 E1 or E3 molecules can associate with the E2 core from Gram-positive bacteria - binding is mutually exclusive. 
In eukaryotes E1 is specifically bound by E2, while E3 associates with E3BP. It is thought that up to 30 E1 and 6 E3 enzymes are present, although the exact number of molecules can vary in vivo and often reflects the metabolic requirements of the tissue in question. Regulation Pyruvate dehydrogenase is inhibited when one or more of the three following ratios are increased: ATP/ADP, NADH/NAD+ and acetyl-CoA/CoA. In eukaryotes PDC is tightly regulated by its own specific Pyruvate dehydrogenase kinase (PDK) and Pyruvate dehydrogenase phosphatase (PDP), which deactivate and activate it, respectively. PDK phosphorylates three specific serine residues on E1 with different affinities. Phosphorylation of any one of them (using ATP) renders E1 (and in consequence the entire complex) inactive. Dephosphorylation of E1 by PDP reinstates complex activity. Products of the reaction act as allosteric inhibitors of the PDC, because they activate PDK. Substrates in turn inhibit PDK, reactivating PDC. During starvation, PDK increases in amount in most tissues, including skeletal muscle, via increased gene transcription. Under the same conditions, the amount of PDP decreases. The resulting inhibition of PDC prevents muscle and other tissues from catabolizing glucose and gluconeogenesis precursors. Metabolism shifts toward fat utilization, while muscle protein breakdown to supply gluconeogenesis precursors is minimized, and available glucose is spared for use by the brain. Calcium ions have a role in the regulation of PDC in muscle tissue: they activate PDP when released into the cytosol during muscle contraction, thereby stimulating glycolysis. Localization of pyruvate decarboxylation In eukaryotic cells the pyruvate decarboxylation occurs inside the mitochondrial matrix, after transport of the substrate, pyruvate, from the cytosol. The transport of pyruvate into the mitochondria is via the transport protein pyruvate translocase. Pyruvate translocase transports pyruvate in a symport fashion with a proton across the inner mitochondrial membrane, which may be considered a form of secondary active transport, although this description is not firmly established (the transport does appear to be coupled to a proton gradient, per S. Papa et al., 1971, which matches the definition of secondary active transport). Alternative sources say "transport of pyruvate across the outer mitochondrial membrane appears to be easily accomplished via large non-selective channels such as voltage-dependent anion channels, which enable passive diffusion" and transport across the inner mitochondrial membrane is mediated by mitochondrial pyruvate carrier 1 (MPC1) and mitochondrial pyruvate carrier 2 (MPC2). Upon entry into the mitochondrial matrix, the pyruvate is decarboxylated, producing acetyl-CoA (and carbon dioxide and NADH). This irreversible reaction traps the acetyl-CoA within the mitochondria (the acetyl-CoA can only be transported out of the mitochondrial matrix via the citrate shuttle, under conditions of high oxaloacetate, a TCA intermediate that is normally sparse). The carbon dioxide produced by this reaction is nonpolar and small, and can diffuse out of the mitochondria and out of the cell. 
In prokaryotes, which have no mitochondria, this reaction is either carried out in the cytosol, or not at all. Evolutionary history It was found that pyruvate dehydrogenase enzyme found in the mitochondria of eukaryotic cells closely resembles an enzyme from Geobacillus stearothermophilus, which is a species of gram-positive bacteria. Despite similarities of the pyruvate dehydrogenase complex with gram-positive bacteria, there is little resemblance with those of gram-negative bacteria.  Similarities of the quaternary structures between pyruvate dehydrogenase and enzymes in gram-positive bacteria point to a shared evolutionary history which is distinctive from the evolutionary history of corresponding enzymes found in gram-negative bacteria. Through an endosymbiotic event, pyruvate dehydrogenase found in the eukaryotic mitochondria points to ancestral linkages dating back to gram-positive bacteria. Pyruvate dehydrogenase complexes share many similarities with branched chain 2-oxoacid dehydrogenase (BCOADH), particularly in their substrate specificity for alpha-keto acids. Specifically, BCOADH catalyzes the degradation of amino acids and these enzymes would have been prevalent during the periods on prehistoric Earth dominated by rich amino acid environments. The E2 subunit from pyruvate dehydrogenase evolved from the E2 gene found in BCOADH while both enzymes contain identical E3 subunits due to the presence of only one E3 gene. Since the E1 subunits have a distinctive specificity for particular substrates, the E1 subunits of pyruvate dehydrogenase and BCOADH vary but share genetic similarities. The gram-positive bacteria and cyanobacteria that would later give rise to mitochondria and chloroplast found in eukaryotic cells retained the E1 subunits that are genetically related to those found in the BCOADH enzymes. Clinical relevance Pyruvate dehydrogenase deficiency (PDCD) can result from mutations in any of the enzymes or cofactors used to build the complex. Its primary clinical finding is lactic acidosis. Such PDCD mutations, leading to subsequent deficiencies in NAD and FAD production, hinder oxidative phosphorylation processes that are key in aerobic respiration. Thus, acetyl-CoA is instead reduced via anaerobic mechanisms into other molecules like lactate, leading to an excess of bodily lactate and associated neurological pathologies. While pyruvate dehydrogenase deficiency is rare, there are a variety of different genes when mutated or nonfunctional that can induce this deficiency. First, the E1 subunit of pyruvate dehydrogenase contains four different subunits: two alpha subunits designated as E1-alpha and two beta subunits designated as E1-beta. The PDHA1 gene found in the E1-alpha subunits, when mutated, causes 80% of the cases of pyruvate dehydrogenase deficiency because this mutation abridges the E1-alpha protein. Decreased functional E1 alpha prevents pyruvate dehydrogenase from sufficiently binding to pyruvate, thus reducing the activity of the overall complex. When the PDHB gene found in the E1 beta subunit of the complex is mutated, this also leads to pyruvate dehydrogenase deficiency. Likewise, mutations found on other subunits of the complex, like the DLAT gene found on the E2 subunit, the PDHX gene found on the E3 subunit, as well as a mutation on a pyruvate dehydrogenase phosphatase gene, known as PDP1, have all been traced back to pyruvate dehydrogenase deficiency, while their specific contribution to the disease state is unknown. 
See also Pyruvate dehydrogenase deficiency References External links https://web.archive.org/web/20070405211049/http://www.dentistry.leeds.ac.uk/biochem/MBWeb/mb1/part2/krebs.htm#animat1 - animation of the general mechanism of the PDC (link on upper right) at University of Leeds 3D structures , bovine kidney pyruvate dehydrogenase complex , human full-length and truncated E2 (tE2) cores of PDC, expressed in E. coli EC 1.2.1 Cellular respiration Glycolysis
Pyruvate dehydrogenase complex
[ "Chemistry", "Biology" ]
3,703
[ "Carbohydrate metabolism", "Cellular respiration", "Glycolysis", "Biochemistry", "Metabolism" ]
509,331
https://en.wikipedia.org/wiki/Pappus%27s%20centroid%20theorem
In mathematics, Pappus's centroid theorem (also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution. The theorems are attributed to Pappus of Alexandria and Paul Guldin. Pappus's statement of this theorem appears in print for the first time in 1659, but it was known before, by Kepler in 1615 and by Guldin in 1640. The first theorem The first theorem states that the surface area A of a surface of revolution generated by rotating a plane curve C about an axis external to C and on the same plane is equal to the product of the arc length s of C and the distance d traveled by the geometric centroid of C: For example, the surface area of the torus with minor radius r and major radius R is Proof A curve given by the positive function is bounded by two points given by: and If is an infinitesimal line element tangent to the curve, the length of the curve is given by: The component of the centroid of this curve is: The area of the surface generated by rotating the curve around the x-axis is given by: Using the last two equations to eliminate the integral we have: The second theorem The second theorem states that the volume V of a solid of revolution generated by rotating a plane figure F about an external axis is equal to the product of the area A of F and the distance d traveled by the geometric centroid of F. (The centroid of F is usually different from the centroid of its boundary curve C.) That is: For example, the volume of the torus with minor radius r and major radius R is This special case was derived by Johannes Kepler using infinitesimals. Proof 1 The area bounded by the two functions: and bounded by the two lines: and is given by: The component of the centroid of this area is given by: If this area is rotated about the y-axis, the volume generated can be calculated using the shell method. It is given by: Using the last two equations to eliminate the integral we have: Proof 2 Let be the area of , the solid of revolution of , and the volume of . Suppose starts in the -plane and rotates around the -axis. The distance of the centroid of from the -axis is its -coordinate and the theorem states that To show this, let be in the xz-plane, parametrized by for , a parameter region. Since is essentially a mapping from to , the area of is given by the change of variables formula: where is the determinant of the Jacobian matrix of the change of variables. The solid has the toroidal parametrization for in the parameter region ; and its volume is Expanding, The last equality holds because the axis of rotation must be external to , meaning . Now, by change of variables. Generalizations The theorems can be generalized for arbitrary curves and shapes, under appropriate conditions. Goodman & Goodman generalize the second theorem as follows. If the figure moves through space so that it remains perpendicular to the curve traced by the centroid of , then it sweeps out a solid of volume , where is the area of and is the length of . (This assumes the solid does not intersect itself.) In particular, may rotate about its centroid during the motion. However, the corresponding generalization of the first theorem is only true if the curve traced by the centroid lies in a plane perpendicular to the plane of . In n-dimensions In general, one can generate an dimensional solid by rotating an dimensional solid around a dimensional sphere. 
This is called an -solid of revolution of species . Let the -th centroid of be defined by Then Pappus' theorems generalize to: Volume of -solid of revolution of species = (Volume of generating -solid) (Surface area of -sphere traced by the -th centroid of the generating solid) and Surface area of -solid of revolution of species = (Surface area of generating -solid) (Surface area of -sphere traced by the -th centroid of the generating solid) The original theorems are the case with . Footnotes References External links Theorems in calculus Geometric centers Theorems in geometry Area Volume
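The displayed formulas in the article text above (the theorem statements, the torus examples, and the intermediate steps of the proofs) were lost in extraction. A reconstruction of the two theorem statements and the torus examples in standard notation is given below; the proofs' intermediate equations are not reproduced here.

    A = s \, d = 2\pi \bar{R}\, s, \qquad \text{torus: } A = (2\pi r)(2\pi R) = 4\pi^{2} R r

    V = A_{F} \, d = 2\pi \bar{R}\, A_{F}, \qquad \text{torus: } V = (\pi r^{2})(2\pi R) = 2\pi^{2} R r^{2}

Here s is the arc length of the generating curve, A_F the area of the generating figure, \bar{R} the distance from the axis of revolution to the relevant centroid, and d = 2\pi \bar{R} the distance that centroid travels in one revolution.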
Pappus's centroid theorem
[ "Physics", "Mathematics" ]
889
[ "Scalar physical quantities", "Symmetry", "Point (geometry)", "Theorems in mathematical analysis", "Physical quantities", "Theorems in calculus", "Calculus", "Mathematical theorems", "Quantity", "Geometric centers", "Size", "Extensive quantities", "Geometry", "Theorems in geometry", "Vol...
510,060
https://en.wikipedia.org/wiki/Intermediate-mass%20black%20hole
An intermediate-mass black hole (IMBH) is a class of black hole with mass in the range of one hundred to one hundred thousand (102–105) solar masses: significantly higher than stellar black holes but lower than the hundred thousand to more than one billion (105–109) solar mass supermassive black holes. Several IMBH candidate objects have been discovered in the Milky Way galaxy and others nearby, based on indirect gas cloud velocity and accretion disk spectra observations of various evidentiary strength. Observational evidence The gravitational wave signal GW190521, which occurred on 21 May 2019 at 03:02:29 UTC, and was published on 2 September 2020, resulted from the merger of two black holes. They had masses of 85 and 65 solar masses and merged to form a black hole of 142 solar masses, with 8 solar masses radiated away as gravitational waves. Before that, the strongest evidence for IMBHs came from a few low-luminosity active galactic nuclei. Due to their activity, these galaxies almost certainly contain accreting black holes, and in some cases the black hole masses can be estimated using the technique of reverberation mapping. For instance, the spiral galaxy NGC 4395 at a distance of about 4 Mpc appears to contain a black hole with mass of about solar masses. The largest up-to-date sample of intermediate-mass black holes includes 305 candidates selected by sophisticated analysis of one million optical spectra of galaxies collected by the Sloan Digital Sky Survey. X-ray emission was detected from 10 of these candidates confirming their classification as IMBH. Some ultraluminous X-ray sources (ULXs) in nearby galaxies are suspected to be IMBHs, with masses of a hundred to a thousand solar masses. The ULXs are observed in star-forming regions (e.g., in starburst galaxy M82), and are seemingly associated with young star clusters which are also observed in these regions. However, only a dynamical mass measurement from the analysis of the optical spectrum of the companion star can unveil the presence of an IMBH as the compact accretor of the ULX. A few globular clusters have been claimed to contain IMBHs, based on measurements of the velocities of stars near their centers; the figure shows one candidate object. However, none of the claimed detections has stood up to scrutiny. For instance, the data for M31 G1, the object shown in the figure, can be fit equally well without a massive central object. Additional evidence for the existence of IMBHs can be obtained from observation of gravitational radiation, emitted from a binary containing an IMBH and a compact remnant or another IMBH. Finally, the M–sigma relation predicts the existence of black holes with masses of 104 to 106 solar masses in low-luminosity galaxies. The smallest black hole from the M–sigma relation prediction is the nucleus of RGG 118 galaxy with only about 50,000 solar masses. Potential discoveries In November 2004 a team of astronomers reported the discovery of GCIRS 13E, the first intermediate-mass black hole in the Milky Way galaxy, orbiting three light-years from Sagittarius A*. This medium black hole of 1,300 solar masses is within a cluster of seven stars, possibly the remnant of a massive star cluster that has been stripped down by the Galactic Center. This observation may add support to the idea that supermassive black holes grow by absorbing nearby smaller black holes and stars. 
However, in 2005, a German research group claimed that the presence of an IMBH near the galactic center is doubtful, based on a dynamical study of the star cluster in which the IMBH was said to reside. An IMBH near the galactic center could also be detected via its perturbations on stars orbiting around the supermassive black hole. In January 2006 a team led by Philip Kaaret of the University of Iowa announced the discovery of a quasiperiodic oscillation from an intermediate-mass black hole candidate located using NASA's Rossi X-ray Timing Explorer. The candidate, M82 X-1, is orbited by a red giant star that is shedding its atmosphere into the black hole. Neither the existence of the oscillation nor its interpretation as the orbital period of the system are fully accepted by the rest of the scientific community, as the periodicity claimed is based on only about four cycles, meaning that it is possible for this to be random variation. If the period is real, it could be either the orbital period, as suggested, or a super-orbital period in the accretion disk, as is seen in many other systems. In 2009, a team of astronomers led by Sean Farrell discovered HLX-1, an intermediate-mass black hole with a smaller cluster of stars around it, in the galaxy ESO 243–49. This evidence suggested that ESO 243-49 had a galactic collision with HLX-1's galaxy and absorbed the majority of the smaller galaxy's matter. A team at the CSIRO radio telescope in Australia announced on 9 July 2012 that it had discovered the first intermediate-mass black hole. In 2015 a team at Keio University in Japan found a gas cloud (CO-0.40-0.22) with very wide velocity dispersion. They performed simulations and concluded that a model with a black hole of around 100,000 solar masses would be the best fit for the velocity distribution. However, a later work pointed out some difficulties with the association of high-velocity dispersion clouds with intermediate mass black holes and proposed that such clouds might be generated by supernovae. Further theoretical studies of the gas cloud and nearby IMBH candidates have been inconclusive but have reopened the possibility. In 2017, it was announced that a black hole of a few thousand solar masses may be located in the globular cluster 47 Tucanae. This was based on the accelerations and distributions of pulsars in the cluster; however, a later analysis of an updated and more complete data set on these pulsars found no positive evidence for this. In 2018, the Keio University team found several molecular gas streams orbiting around an invisible object near the galactic center, designated HCN-0.009-0.044, suggested that it is a black hole of 32,000 solar masses and, if so, is the third IMBH discovered in the region. Observations in 2019 found evidence for a gravitational wave event (GW190521) arising from the merger of two intermediate-mass black holes, with masses of 66 and 85 times that of the Sun. In September 2020 it was announced that the resulting merged black hole weighed 142 solar masses, with 9 solar masses being radiated away as gravitational waves. In 2020, astronomers reported the possible finding of an intermediate-mass black hole, named 3XMM J215022.4-055108, in the direction of the Aquarius constellation, about 740 million light years from Earth. In 2021 the discovery of a 100,000 solar-mass intermediate-mass black hole in the globular cluster B023-G78 in the Andromeda Galaxy was posted to arXiv in a preprint. 
In 2023, an analysis of proper motions of the closest known globular cluster, Messier 4, revealed an excess mass of roughly 800 solar masses in the center, which appears to not be extended, and could thus be considered as kinematic evidence for an IMBH (even if an unusually compact cluster of compact objects, white dwarfs, neutron stars or stellar-mass black holes cannot be completely discounted). A study from July 10, 2024, examined seven fast-moving stars from the center of the globular cluster Omega Centauri, finding that these stars were consistent with being bound to an intermediate-mass black hole of at least 8,200 solar masses. Origin Intermediate-mass black holes are too massive to be formed by the collapse of a single star, which is how stellar black holes are thought to form. Their environments lack the extreme conditions—i.e., high density and velocities observed at the centers of galaxies—which seemingly lead to the formation of supermassive black holes. There are three postulated formation scenarios for IMBHs. The first is the merging of stellar mass black holes and other compact objects by means of accretion. The second one is the runaway collision of massive stars in dense stellar clusters and the collapse of the collision product into an IMBH. The third is that they are primordial black holes formed in the Big Bang. Scientists have also considered the possibility of the creation of intermediate-mass black holes through mechanisms involving the collapse of a single star, such as the possibility of direct collapse into black holes of stars with pre-supernova helium core mass > (to avoid a pair instability supernova which would completely disrupt the star), requiring an initial total stellar mass of > , but there may be little chance of observing such a high-mass supernova remnant. Recent theories suggest that such massive stars which could lead to the formation of intermediate mass black holes may form in young star clusters via multiple stellar collisions. See also Pair-instability supernova References External links Black Hole Seeds Missing in Cosmic Garden Chandra images of starburst galaxy M82 NASA press release for discovery of IMBHs by Hubble Space Telescope A New Breed of Black Holes, by Davide Castelvecchi Sky & Telescope April 2006 +
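As a rough back-of-the-envelope illustration of the scales quoted above for GW190521 (a remnant of roughly 142 solar masses, with about 8–9 solar masses radiated as gravitational waves), the sketch below converts those figures into a Schwarzschild radius and a radiated energy. It is illustrative only and not taken from the cited analyses.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius_km(mass_solar):
    """Schwarzschild radius r_s = 2GM/c^2, in kilometres."""
    return 2 * G * (mass_solar * M_SUN) / C**2 / 1e3

def radiated_energy_joules(mass_solar):
    """Mass-energy equivalent E = m c^2 of the mass radiated as gravitational waves."""
    return mass_solar * M_SUN * C**2

print(f"r_s(142 M_sun) ~ {schwarzschild_radius_km(142):.0f} km")    # about 420 km
print(f"E(9 M_sun)     ~ {radiated_energy_joules(9):.1e} J")        # about 1.6e48 J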
Intermediate-mass black hole
[ "Physics", "Astronomy" ]
1,956
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Intermediate-mass black holes", "Density", "Stellar phenomena", "Astronomical objects" ]
510,132
https://en.wikipedia.org/wiki/Tap%20%28valve%29
A tap (also spigot or faucet: see usage variations) is a valve controlling the release of a fluid. Nomenclature United Kingdom Tap is used in the United Kingdom and most of the Commonwealth for any everyday type of valve, particularly the fittings that control water supply to bathtubs and sinks. United States Faucet is the most common term in the US, similar in use to "tap" in British English, e.g. "water faucet" (although the term "tap" is also used in the US). Spigot is used by professionals in the trade (such as plumbers), and typically refers to an outdoor fixture. Silcock (and sillcock), same as "spigot", referring to a "cock" (as in stopcock and petcock) that penetrates a foundation sill. Bib (bibcock, and hose bib or hosebibb), usually a freeze-resistant version of a "spigot". Wall hydrant, same as "hosebibb". Tap generally refers to a keg or barrel tap, though also commonly refers to a faucet that supplies either hot or cold water and not both. It also appears as a descriptor in "tap water" (i.e. water purified for domestic use). A single temperature tap is commonly found in a commercial or public restroom where the temperature of the water will be controlled by a separate temperature regulating valve that mixes hot and cold water. The regulating valve may be under the handwashing sink or in a separate mechanical room or service closet. These single taps are less prone to breakage from heavy use or vandalism. Types Liquid Water for baths, sinks and basins can be provided by separate hot and cold taps; this arrangement is common in older installations, particularly in public washrooms/lavatories and utility rooms/laundries. In kitchens and bathrooms, mixer taps are commonly used. In this case, hot and cold water from the two valves is mixed before reaching the outlet, allowing the water to emerge at any temperature between that of the hot and cold water supplies. Mixer taps were invented by Thomas Campbell of Saint John, New Brunswick, and patented in 1880. For baths and showers, mixer taps frequently incorporate some sort of pressure balancing feature so that the hot/cold mixture ratio will not be affected by transient changes in the pressure of one or other of the supplies. This helps avoid scalding or uncomfortable chilling as other water loads occur (such as the flushing of a toilet). Rather than two separate valves, mixer taps frequently use a single, more complex, valve controlled by a single handle (single handle mixer). The handle moves up and down to control the amount of water flow and from side to side to control the temperature of the water. Especially for baths and showers, the latest designs are thermostatic mixing valves that do this using a built-in thermostat, and can be mechanical or electronic. There are also taps with color LEDs to show the temperature of the water. When two pipes are installed, the hot tap generally has a red indicator while the cold tap generally has a blue or green indicator. In the United States, the taps are frequently also labeled with an "H" or "C". In countries with Romance languages, the letters "C" for hot and "F" for cold are used (from French "chaud"/Italian "caldo"/Spanish "caliente" (hot) and French "froid"/Italian "freddo"/Spanish "frio" (cold)). Portuguese would use Q (for "quente", hot) and F. This can create confusion for English-speaking visitors. Mixer taps may have a red-blue stripe or arrows indicating which side will give hot and which cold. In most countries, there is a standard arrangement of hot/cold taps. 
For example, in the United States and many other countries, the hot tap is on the left by building code requirements. Many installations exist where this standard has been ignored (called "crossed connections" by plumbers). Mis-assembly of some single-valve mixer taps will exchange hot and cold even if the fixture has been plumbed correctly. In the United States, the Americans with Disabilities Act provide requirements for faucets, such as requiring less than five pounds of force to operate, and requiring that the user does not have to twist their wrist. Most handles in homes are fastened to the valve shafts with screws, but on many commercial and industrial applications they are fitted with a removable key called a "loose key", "water key", or "sillcock key", which has a square peg and a square-ended key to turn off and on the water; the "loose key" can be removed to prevent vandals from turning on the water. Before the "loose key" was invented it was common for some landlords or caretakers to take off the handle of a tap, which had teeth that would meet up with the gears on the valve shaft. This tooth and cog system is still used on most modern taps. "Loose keys" may also be found outside homes to prevent passers-by from using them. Taps are normally connected to the water supply by means of a "swivel tap connector", which is attached to the end of the water pipe using a soldered or compression fitting, and has a large nut to screw onto the threaded "tail" of the tap, which hangs down underneath the bath, basin or sink. A fibre washer (which expands when wet, aiding the seal) is used between the connector and the tap tail. Tap tails are normally  " or 12 mm in diameter for sinks and  " or 19 mm for baths, although continental Europe sometimes uses a  " (still imperial) size. The same connection method is used for a ballcock. The term tap is widely used to describe the valve used to dispense draft beer from a keg, whether gravity feed or pressurized. Gas A gas tap is a specific form of ball valve used in residential, commercial, and laboratory applications for coarse control of the release of fuel gases (such as natural gas, coal gas, and syngas). Like all ball valves its handle will parallel the gas line when open and be perpendicular when closed, making for easy visual identification of its status. Physics Water and gas taps have adjustable flow: gate valves are more progressive; ball valves more coarse, typically used in on-off applications. Turning a valve knob or lever adjusts flow by varying the aperture of the control device in the valve assembly. The result when opened in any degree is a choked flow. Its rate is independent of the viscosity or temperature of the fluid or gas in the pipe, and depends only weakly on the supply pressure, so that flow rate is stable at a given setting. At intermediate flow settings the pressure at the valve restriction drops nearly to zero from the Venturi effect; in water taps, this causes the water to boil momentarily at room temperature as it passes through the restriction. Bubbles of cool water vapor form and collapse at the restriction, causing the familiar hissing sound. At very low flow settings, the viscosity of the water becomes important and the pressure drop (and hissing noise) vanish; at full flow settings, parasitic drag in the pipes becomes important and the water again becomes silent. Mechanisms The first screw-down tap mechanism was patented and manufactured by the Rotherham brass founders Guest and Chrimes in 1845. 
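The Physics passage above attributes the hissing of a partly open tap to the local pressure at the restriction dropping toward the vapour pressure (momentary "boiling", i.e. cavitation). The following sketch is a minimal Bernoulli-style estimate of that effect for incompressible flow; the supply pressure and throat velocities are assumed, illustrative numbers rather than measured values, and a negative result simply means the simple model predicts cavitation well before that operating point.

RHO = 1000.0       # water density, kg/m^3
P_VAPOR = 2.3e3    # approximate vapour pressure of water at ~20 C, Pa

def throat_pressure(p_upstream, v_upstream, v_throat):
    """Static pressure at the restriction from Bernoulli's equation:
    p_throat = p_upstream + 0.5*rho*(v_upstream**2 - v_throat**2)."""
    return p_upstream + 0.5 * RHO * (v_upstream**2 - v_throat**2)

p_mains = 3.0e5    # assumed ~3 bar absolute supply pressure
for v in (10.0, 20.0, 25.0):   # assumed velocities through the restriction, m/s
    p = throat_pressure(p_mains, v_upstream=1.0, v_throat=v)
    print(f"v_throat = {v:4.1f} m/s -> p_throat ~ {p/1e3:7.1f} kPa, "
          f"cavitation: {p < P_VAPOR}")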
Most older taps use a soft rubber or neoprene washer which is screwed down onto a valve seat in order to stop the flow. This is called a "globe valve" in engineering and, while it gives a leak-proof seal and good fine adjustment of flow, both the rubber washer and the valve seat are subject to wear (and for the seat, also corrosion) over time, so that eventually no tight seal is formed in the closed position, resulting in a leaking tap. The washer can be replaced and the valve seat resurfaced (at least a few times), but globe valves are never maintenance-free. Also, the tortuous S-shaped path the water is forced to follow offers a significant obstruction to the flow. For high pressure domestic water systems this does not matter, but for low pressure systems where flow rate is important, such as a shower fed by a storage tank, a "stop tap" or, in engineering terms, a "gate valve" is preferred. Gate valves use a metal wedge with a circular face, usually the same diameter as the pipe, which is screwed into place perpendicularly to the flow, cutting it off. There is little resistance to flow when the tap is fully open, but this type of tap rarely gives a perfect seal when closed. In the UK this type of tap normally has a wheel-shaped handle rather than a crutch or capstan handle. Cone valves or ball valves are another alternative. These are commonly found as the service shut-off valves in more-expensive water systems and usually found in gas taps (and, incidentally, the cask beer taps referred to above). They can be identified by their range of motion—only 90°—between fully open and closed. Usually, when the handle is in line with the pipe the valve is open, and when the handle is across the pipe it is closed. But it could move in either direction CW or CCW perpendicular to the pipe. S=shut and O=open. A cone valve consists of a shallowly tapering cone in a tight-fitting socket placed across the flow of the fluid. In UK English this is usually known as a taper-plug cock. A ball valve uses a spherical ball instead. In either case, a hole through the cone or ball allows the fluid to pass if it is lined up with the openings in the socket through which the fluid enters and leaves; turning the cone using the handle rotates the passage away, presenting the fluid with the unbroken surface of the cone through which it cannot pass. Valves of this type using a cylinder rather than a cone are sometimes encountered, but using a cone allows a tight fit to be made even with moderate manufacturing tolerances. The ball in ball valves rotates within plastic (usually PTFE) seats. Hands free infrared proximity sensors are replacing the standard valve. Thermostatically controlled electronic dual-purpose mixing or diverting valves are used within industrial applications to automatically provide liquids as required. Foot controlled valves are installed within laboratory and healthcare/hospitals, as well as in industrial settings where extremely dirty hands operating taps might leave residues on them. Modern taps often have aerators at the tip to limit water flow and introduce air in the form of bubbles to reduce splashing. Without an aerator, water usually flows out of the tap in one big stream. An aerator spreads the water flow into many small droplets. In sanitary settings such as hospitals or laboratories "laminar flow devices" are used in place of aerators. 
Laminar flow devices restrict flow and direct the water into a smooth stream without introducing the surrounding air which could contain hazardous bacteria or particles. Modern bathroom and kitchen taps often use ceramic or plastic surfaces sliding against other spring-loaded ceramic surfaces or plastic washers. These taps exploit the uniquely low value of the coefficient of friction of 2 ceramic surfaces in contact, especially in the presence of water as a lubricant. These taps tend to require far less maintenance than traditional globe valves, and when maintenance is required the entire interior of the valve is usually replaced, often as a single pre-assembled cartridge. Of three manufacturers in North America, Moen and American Standard use cartridges (Moen's being O-ring based, American Standard's being ceramic), while Delta uses rubber seats facing the cartridges. Each design has its advantages: Moen cartridges tend to be easiest to find, American Standard cartridges have nearly infinite lifespan in sediment-free municipal water, and Delta's rubber seats tend to be most forgiving of sediment in well water. Backflow prevention Most US jurisdictions now require hose spigots, hosebibbs, and wall hydrants to have a vacuum breaker or backflow preventer, so that water cannot return through the spigot from the hose. This prevents contamination of the building or public water system should there be a pressure drop. In the UK, water regulations require a double check valve; this is often incorporated within the body of the tap itself. ASME A112 Standards on Plumbing Materials and Equipment The American Society of Mechanical Engineers (ASME) publishes several Standards on plumbing. Some are: ASME A112.6.3 – Floor and Trench Drains ASME A112.6.4 – Roof, Deck, and Balcony Drains ASME A112.18.1/CSA B125.1 – Plumbing Supply Fittings ASME A112.19.1/CSA B45.2 – Enameled Cast Iron and Enameled Steel Plumbing Fixtures ASME A112.19.2/CSA B45.1 – Ceramic Plumbing Fixtures See also References External links Plumbing Valves Irrigation Gardening aids Gardening tools la:Clepsydra hu:Csap (áramlástechnika) pl:Bateria wodociągowa
Tap (valve)
[ "Physics", "Chemistry", "Engineering" ]
2,725
[ "Plumbing", "Physical systems", "Construction", "Valves", "Hydraulics", "Piping" ]
510,581
https://en.wikipedia.org/wiki/R-parity
R-parity is a concept in particle physics. In the Minimal Supersymmetric Standard Model, baryon number and lepton number are no longer conserved by all of the renormalizable couplings in the theory. Since baryon number and lepton number conservation have been tested very precisely, these couplings need to be very small in order not to be in conflict with experimental data. R-parity is a symmetry acting on the Minimal Supersymmetric Standard Model (MSSM) fields that forbids these couplings and can be defined as or, equivalently, as where is spin, is baryon number, and is lepton number. All Standard Model particles have R-parity of +1 while supersymmetric particles have R-parity of −1. Note that there are different forms of parity with different effects and principles, one should not confuse this parity with any other parity. Dark matter candidate With R-parity being preserved, the lightest supersymmetric particle (LSP) cannot decay. This lightest particle (if it exists) may therefore account for the observed missing mass of the universe that is generally called dark matter. In order to fit observations, it is assumed that this particle has a mass of to , is neutral and only interacts through weak interactions and gravitational interactions. It is often called a weakly interacting massive particle or WIMP. Typically the dark matter candidate of the MSSM is a mixture of the electroweak gauginos and Higgsinos and is called a neutralino. In extensions to the MSSM it is possible to have a sneutrino be the dark matter candidate. Another possibility is the gravitino, which only interacts via gravitational interactions and does not require strict R-parity. R-parity violating couplings of the MSSM The renormalizable R-parity violating couplings of the MSSM are violates by 1 unit The strongest constraint involving this coupling alone is from the non-observation of neutron–antineutron oscillations. violates by 1 unit The strongest constraint involving this coupling alone is the violation universality of Fermi constant in quark and leptonic charged current decays. violates by 1 unit The strongest constraint involving this coupling alone is the violation universality of Fermi constant in leptonic charged current decays. violates by 1 unit The strongest constraint involving this coupling alone is that it leads to a large neutrino mass. While the constraints on single couplings are reasonably strong, if multiple couplings are combined together, they lead to proton decay. Thus there are further maximal bounds on values of the couplings from maximal bounds on proton decay rate. Proton decay Without baryon and lepton number being conserved and taking couplings for the R-parity violating couplings, the proton can decay in approximately 10−2 seconds or if minimal flavor violation is assumed the proton lifetime can be extended to 1 year. Since the proton lifetime is observed to be greater than 1033 to 1034 years (depending on the exact decay channel), this would highly disfavour the model. R-parity sets all of the renormalizable baryon and lepton number violating couplings to zero and the proton is stable at the renormalizable level and the lifetime of the proton is increased to 1032 years and is nearly consistent with current observational data. Because proton decay involves violating both lepton and baryon number simultaneously, no single renormalizable R-parity violating coupling leads to proton decay. 
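The two equivalent definitions referred to near the start of this article (the displayed formulas appear to have been lost in extraction) are, in the usual convention,

    P_R = (-1)^{3B + L + 2s}

or, equivalently,

    P_R = (-1)^{3(B - L) + 2s}

where s is the spin, B the baryon number and L the lepton number; every Standard Model particle then has P_R = +1 and every superpartner P_R = -1, as stated above.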
This has motivated the study of R-parity violation where only one set of the R-parity violating couplings are non-zero which is sometimes called the single coupling dominance hypothesis. Possible origins of R-parity A very attractive way to motivate R-parity is with a continuous gauge symmetry which is spontaneously broken at a scale inaccessible to current experiments. A continuous forbids renormalizable terms which violate and . If is only broken by scalar vacuum expectation values (or other order parameters) that carry even integer values of , then there exist an exactly conserved discrete remnant subgroup which has the desired properties. The crucial issue is to determine whether the sneutrino (the supersymmetric partner of neutrino), which is odd under R-parity, develops a vacuum expectation value. It can be shown, on phenomenological grounds, that this cannot happen in any theory where is broken at a scale much above the electroweak one. This is true in any theory based on a large-scale seesaw mechanism. As a consequence, in such theories R-parity remains exact at all energies. This phenomenon can arise as an automatic symmetry in SO(10) grand unified theories. This natural occurrence of R-parity is possible because in SO(10) the Standard Model fermions arise from the 16 dimensional spinor representation, while the Higgs arises from a 10 dimensional vector representation. In order to make an SO(10) invariant coupling, one must have an even number of spinor fields (i.e. there is a spinor parity). After GUT symmetry breaking, this spinor parity descends into R-parity so long as no spinor fields were used to break the GUT symmetry. Explicit examples of such SO(10) theories have been constructed. See also R-symmetry References External links Particle physics Supersymmetric quantum field theory
R-parity
[ "Physics" ]
1,128
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry", "Particle physics" ]
511,547
https://en.wikipedia.org/wiki/Quantum%20dot
Quantum dots (QDs) or semiconductor nanocrystals are semiconductor particles a few nanometres in size with optical and electronic properties that differ from those of larger particles via quantum mechanical effects. They are a central topic in nanotechnology and materials science. When a quantum dot is illuminated by UV light, an electron in the quantum dot can be excited to a state of higher energy. In the case of a semiconducting quantum dot, this process corresponds to the transition of an electron from the valence band to the conduction band. The excited electron can drop back into the valence band releasing its energy as light. This light emission (photoluminescence) is illustrated in the figure on the right. The color of that light depends on the energy difference between the discrete energy levels of the quantum dot in the conduction band and the valence band. Nanoscale semiconductor materials tightly confine either electrons or electron holes. The confinement is similar to a three-dimensional particle in a box model. The quantum dot absorption and emission features correspond to transitions between discrete quantum mechanically allowed energy levels in the box that are reminiscent of atomic spectra. For these reasons, quantum dots are sometimes referred to as artificial atoms, emphasizing their bound and discrete electronic states, like naturally occurring atoms or molecules. It was shown that the electronic wave functions in quantum dots resemble the ones in real atoms. Quantum dots have properties intermediate between bulk semiconductors and discrete atoms or molecules. Their optoelectronic properties change as a function of both size and shape. Larger QDs of 5–6 nm diameter emit longer wavelengths, with colors such as orange, or red. Smaller QDs (2–3 nm) emit shorter wavelengths, yielding colors like blue and green. However, the specific colors vary depending on the exact composition of the QD. Potential applications of quantum dots include single-electron transistors, solar cells, LEDs, lasers, single-photon sources, second-harmonic generation, quantum computing, cell biology research, microscopy, and medical imaging. Their small size allows for some QDs to be suspended in solution, which may lead to their use in inkjet printing, and spin coating. They have been used in Langmuir–Blodgett thin films. These processing techniques result in less expensive and less time-consuming methods of semiconductor fabrication. Core/shell and core/double-shell structures Quantum dots are usually coated with organic capping ligands (typically with long hydrocarbon chains, such as oleic acid) to control growth, prevent aggregation, and to promote dispersion in solution. However, these organic coatings can lead to non-radiative recombination after photogeneration, meaning the generated charge carriers can be dissipated without photon emission (e.g. via phonons or trapping in defect states), which reduces fluorescent quantum yield, or the conversion efficiency of absorbed photons into emitted fluorescence. To combat this, a semiconductor layer can be grown surrounding the quantum dot core. Depending on the bandgaps of the core and shell materials, the fluorescent properties of the nanocrystals can be tuned. Furthermore, adjusting the thicknesses of each of the layers and overall size of the quantum dots can affect the photoluminescent emission wavelength — the quantum confinement effect tends to blueshift the emission spectra as the quantum dot decreases in size. 
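The size dependence of the emission colour described above is often estimated with the Brus (effective-mass, particle-in-a-sphere) approximation. The sketch below is a rough, illustrative calculation using approximate CdSe-like parameters (the bulk band gap, effective masses and dielectric constant are typical literature values assumed here, not figures from this article); the simple model is known to overestimate the blueshift for the smallest dots but reproduces the overall trend of shorter emission wavelengths at smaller sizes.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
Q    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
H    = 6.62607015e-34    # Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s

def brus_gap_ev(radius_nm, e_gap_bulk_ev, m_e_eff, m_h_eff, eps_r):
    """Approximate size-dependent band gap (eV) from the Brus model:
    a confinement term scaling as 1/r^2 minus an electron-hole Coulomb term scaling as 1/r."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) * (1/(m_e_eff*M0) + 1/(m_h_eff*M0))
    coulomb = 1.786 * Q**2 / (4 * math.pi * EPS0 * eps_r * r)
    return e_gap_bulk_ev + (confinement - coulomb) / Q

def emission_wavelength_nm(gap_ev):
    """Photon wavelength corresponding to a transition energy in eV."""
    return H * C / (gap_ev * Q) * 1e9

# Assumed CdSe-like parameters: bulk gap ~1.74 eV, m_e* ~0.13 m0, m_h* ~0.45 m0, eps_r ~10.6
for d_nm in (4.0, 6.0, 8.0):
    gap = brus_gap_ev(d_nm / 2, 1.74, 0.13, 0.45, 10.6)
    print(f"d = {d_nm:.0f} nm -> gap ~ {gap:.2f} eV, emission ~ {emission_wavelength_nm(gap):.0f} nm")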
There are 4 major categories of quantum dot heterostructures: type I, inverse type I, type II, and inverse type II. Type I quantum dots are composed of a semiconductor core encapsulated in a second semiconductor material with a larger bandgap, which can passivate non-radiative recombination sites at the surface of the quantum dots and improve quantum yield. Inverse type I quantum dots have a semiconductor layer with a smaller bandgap which leads to delocalized charge carriers in the shell. For type II and inverse type II dots, either the conduction or valence band of the core is located within the bandgap of the shell, which can lead to spatial separation of charge carriers in the core and shell. For all of these core/shell systems, the deposition of the outer layer can lead to potential lattice mismatch, which can limit the ability to grow a thick shell without reducing photoluminescent performance. One such reason for the decrease in performance can be attributed to the physical strain being put on the lattice. In a case where ZnSe/ZnS (type I) and ZnSe/CdS (type II) quantum dots were being compared, the diameter of the uncoated ZnSe core (obtained using TEM) was compared to the capped core diameter (calculated via effective mass approximation model) [lattice strain source] to better understand the effect of core-shell strain. Type I heterostructures were found to induce compressive strain and “squeeze” the core, while the type II heterostructures had the effect of stretching the core under tensile strain. Because the fluorescent properties of quantum dots are dictated by nanocrystal size, induced changes in core dimensions can lead to shifting of emission wavelength, further proving why an intermediate semiconductor layer is necessary to rectify lattice mismatch and improve quantum yield. One such core/double-shell system is the CdSe/ZnSe/ZnS nanocrystal. In a study comparing CdSe/ZnS and CdSe/ZnSe nanocrystals, the former was found to have PL yield 84% of the latter’s, due to a lattice mismatch. To study the double-shell system, after synthesis of the core CdSe nanocrystals, a layer of ZnSe was coated prior to the ZnS outer shell, leading to an improvement in fluorescent efficiency by 70%. Furthermore, the two additional layers were found to improve resistance of the nanocrystals against photo-oxidation, which can contribute to degradation of the emission spectra. It is also standard for surface passivation techniques to be applied to these core/double-shell systems, as well. As mentioned above, oleic acid is one such organic capping ligand that is used to promote colloidal stability and control nanocrystal growth, and can even be used to initiate a second round of ligand exchange and surface functionalization. However, because of the detrimental effect organic ligands have on PL efficiency, further studies have been conducted to obtain all-inorganic quantum dots. In one such study, intensely luminescent all-inorganic nanocrystals (ILANs) were synthesized via a ligand exchange process which substituted metal salts for the oleic acid ligands, and were found to have comparable photoluminescent quantum yields to that of existing red- and green-emitting quantum dots. Production There are several ways to fabricate quantum dots. Possible methods include colloidal synthesis, self-assembly, and electrical gating. Colloidal synthesis Colloidal semiconductor nanocrystals are synthesized from solutions, much like traditional chemical processes. 
The main difference is the product neither precipitates as a bulk solid nor remains dissolved. Heating the solution at high temperature, the precursors decompose forming monomers which then nucleate and generate nanocrystals. Temperature is a critical factor in determining optimal conditions for the nanocrystal growth. It must be high enough to allow for rearrangement and annealing of atoms during the synthesis process while being low enough to promote crystal growth. The concentration of monomers is another critical factor that has to be stringently controlled during nanocrystal growth. The growth process of nanocrystals can occur in two different regimes: "focusing" and "defocusing". At high monomer concentrations, the critical size (the size where nanocrystals neither grow nor shrink) is relatively small, resulting in growth of nearly all particles. In this regime, smaller particles grow faster than large ones (since larger crystals need more atoms to grow than small crystals) resulting in the size distribution focusing, yielding an improbable distribution of nearly monodispersed particles. The size focusing is optimal when the monomer concentration is kept such that the average nanocrystal size present is always slightly larger than the critical size. Over time, the monomer concentration diminishes, the critical size becomes larger than the average size present, and the distribution defocuses. There are colloidal methods to produce many different semiconductors. Typical dots are made of binary compounds such as lead sulfide, lead selenide, cadmium selenide, cadmium sulfide, cadmium telluride, indium arsenide, and indium phosphide. Dots may also be made from ternary compounds such as cadmium selenide sulfide. Further, recent advances have been made which allow for synthesis of colloidal perovskite quantum dots. These quantum dots can contain as few as 100 to 100,000 atoms within the quantum dot volume, with a diameter of approximately 10 to 50 atom diameters. This corresponds to about 2 to 10 nanometers, and at 10 nm in diameter, nearly 3 million quantum dots could be lined up end to end and fit within the width of a human thumb. Large batches of quantum dots may be synthesized via colloidal synthesis. Due to this scalability and the convenience of benchtop conditions, colloidal synthetic methods are promising for commercial applications. Plasma synthesis Plasma synthesis has evolved to be one of the most popular gas-phase approaches for the production of quantum dots, especially those with covalent bonds. For example, silicon and germanium quantum dots have been synthesized by using nonthermal plasma. The size, shape, surface and composition of quantum dots can all be controlled in nonthermal plasma. Doping that seems quite challenging for quantum dots has also been realized in plasma synthesis. Quantum dots synthesized by plasma are usually in the form of powder, for which surface modification may be carried out. This can lead to excellent dispersion of quantum dots in either organic solvents or water (i. e., colloidal quantum dots). Fabrication The electrostatic potential needed to create a quantum dot can be realized with several methods. These include external electrodes, doping, strain, or impurities. Self-assembled quantum dots are typically between 5 and 50 nm in size. Quantum dots defined by lithographically patterned gate electrodes, or by etching on two-dimensional electron gases in semiconductor heterostructures can have lateral dimensions between 20 and 100 nm. 
Some quantum dots are small regions of one material buried in another with a larger band gap. These can be so-called core–shell structures, for example, with CdSe in the core and ZnS in the shell, or from special forms of silica called ormosil. Sub-monolayer shells can also be effective ways of passivating the quantum dots, such as PbS cores with sub-monolayer CdS shells. Quantum dots sometimes occur spontaneously in quantum well structures due to monolayer fluctuations in the well's thickness. Self-assembled quantum dots nucleate spontaneously under certain conditions during molecular beam epitaxy (MBE) and metalorganic vapour-phase epitaxy (MOVPE), when a material is grown on a substrate to which it is not lattice matched. The resulting strain leads to the formation of islands on top of a two-dimensional wetting layer. This growth mode is known as Stranski–Krastanov growth. The islands can be subsequently buried to form the quantum dot. A widely used type of quantum dots grown with this method are indium gallium arsenide () quantum dots in gallium arsenide (). Such quantum dots have the potential for applications in quantum cryptography (that is, single-photon sources) and quantum computation. The main limitations of this method are the cost of fabrication and the lack of control over positioning of individual dots. Individual quantum dots can be created from two-dimensional electron or hole gases present in remotely doped quantum wells or semiconductor heterostructures called lateral quantum dots. The sample surface is coated with a thin layer of resist and a lateral pattern is then defined in the resist by electron beam lithography. This pattern can then be transferred to the electron or hole gas by etching, or by depositing metal electrodes (lift-off process) that allow the application of external voltages between the electron gas and the electrodes. Such quantum dots are mainly of interest for experiments and applications involving electron or hole transport and they are also used as spin qubits. A strength of this type of quantum dots is that their energy spectrum can be engineered by controlling the geometrical size, shape, and the strength of the confinement potential with gate electrodes. These quantum dots can be easily connected by tunnel barriers to conducting leads, which allows the application of the techniques of tunneling spectroscopy for their investigation. Complementary metal–oxide–semiconductor (CMOS) technology can be employed to fabricate silicon quantum dots. Ultra small (20 nm × 20 nm) CMOS transistors behave as single electron quantum dots when operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The transistor displays Coulomb blockade due to progressive charging of electrons (holes) one by one. The number of electrons (holes) confined in the channel is driven by the gate voltage, starting from an occupation of zero electrons (holes), and it can be set to one or many. Viral assembly Genetically engineered M13 bacteriophage viruses allow preparation of quantum dot biocomposite structures. It had previously been shown that genetically engineered viruses can recognize specific semiconductor surfaces through the method of selection by combinatorial phage display. Additionally, it is known that liquid crystalline structures of wild-type viruses (Fd, M13, and TMV) are adjustable by controlling the solution concentrations, solution ionic strength, and the external magnetic field applied to the solutions. 
Consequently, the specific recognition properties of the virus can be used to organize inorganic nanocrystals, forming ordered arrays over the length scale defined by liquid crystal formation. Using this information, Lee et al. (2000) were able to create self-assembled, highly oriented, self-supporting films from a phage and ZnS precursor solution. This system allowed them to vary both the length of bacteriophage and the type of inorganic material through genetic modification and selection. Electrochemical assembly Highly ordered arrays of quantum dots may also be self-assembled by electrochemical techniques. A template is created by causing an ionic reaction at an electrolyte–metal interface which results in the spontaneous assembly of nanostructures, including quantum dots, onto the metal which is then used as a mask for mesa-etching these nanostructures on a chosen substrate. Bulk manufacture Quantum dot manufacturing relies on a process called high temperature dual injection which has been scaled by multiple companies for commercial applications that require large quantities (hundreds of kilograms to tons) of quantum dots. This reproducible production method can be applied to a wide range of quantum dot sizes and compositions. The bonding in certain cadmium-free quantum dots, such as III–V-based quantum dots, is more covalent than that in II–VI materials, therefore it is more difficult to separate nanoparticle nucleation and growth via a high temperature dual injection synthesis. An alternative method of quantum dot synthesis, the molecular seeding process, provides a reproducible route to the production of high-quality quantum dots in large volumes. The process utilises identical molecules of a molecular cluster compound as the nucleation sites for nanoparticle growth, thus avoiding the need for a high temperature injection step. Particle growth is maintained by the periodic addition of precursors at moderate temperatures until the desired particle size is reached. The molecular seeding process is not limited to the production of cadmium-free quantum dots; for example, the process can be used to synthesise kilogram batches of high-quality II–VI quantum dots in just a few hours. Another approach for the mass production of colloidal quantum dots can be seen in the transfer of the well-known hot-injection methodology for the synthesis to a technical continuous flow system. The batch-to-batch variations arising from the needs during the mentioned methodology can be overcome by utilizing technical components for mixing and growth as well as transport and temperature adjustments. For the production of CdSe based semiconductor nanoparticles this method has been investigated and tuned to production amounts of kilograms per month. Since the use of technical components allows for easy interchange in regards of maximum throughput and size, it can be further enhanced to tens or even hundreds of kilograms. In 2011 a consortium of U.S. and Dutch companies reported a milestone in high-volume quantum dot manufacturing by applying the traditional high temperature dual injection method to a flow system. 
On 23 January 2013 Dow entered into an exclusive licensing agreement with UK-based Nanoco for the use of their low-temperature molecular seeding method for bulk manufacture of cadmium-free quantum dots for electronic displays, and on 24 September 2014 Dow commenced work on the production facility in South Korea capable of producing sufficient quantum dots for "millions of cadmium-free televisions and other devices, such as tablets". Mass production was due to commence in mid-2015. On 24 March 2015, Dow announced a partnership deal with LG Electronics to develop the use of cadmium free quantum dots in displays. Heavy-metal-free quantum dots In many regions of the world there is now a restriction or ban on the use of toxic heavy metals in many household goods, which means that most cadmium-based quantum dots are unusable for consumer-goods applications. For commercial viability, a range of restricted, heavy-metal-free quantum dots has been developed showing bright emissions in the visible and near-infrared region of the spectrum and have similar optical properties to those of CdSe quantum dots. Among these materials are InP/ZnS, CuInS/ZnS, CuInZnSe/ZnS Si, Ge, and C. Peptides are being researched as potential quantum dot material. Health and safety Some quantum dots pose risks to human health and the environment under certain conditions. Notably, the studies on quantum dot toxicity have focused on particles containing cadmium and have yet to be demonstrated in animal models after physiologically relevant dosing. In vitro studies, based on cell cultures, on quantum dots (QD) toxicity suggest that their toxicity may derive from multiple factors including their physicochemical characteristics (size, shape, composition, surface functional groups, and surface charges) and their environment. Assessing their potential toxicity is complex as these factors include properties such as QD size, charge, concentration, chemical composition, capping ligands, and also on their oxidative, mechanical, and photolytic stability. Many studies have focused on the mechanism of QD cytotoxicity using model cell cultures. It has been demonstrated that after exposure to ultraviolet radiation or oxidation by air, CdSe QDs release free cadmium ions causing cell death. Group II–VI QDs also have been reported to induce the formation of reactive oxygen species after exposure to light, which in turn can damage cellular components such as proteins, lipids, and DNA. Some studies have also demonstrated that addition of a ZnS shell inhibits the process of reactive oxygen species in CdSe QDs. Another aspect of QD toxicity is that there are, in vivo, size-dependent intracellular pathways that concentrate these particles in cellular organelles that are inaccessible by metal ions, which may result in unique patterns of cytotoxicity compared to their constituent metal ions. The reports of QD localization in the cell nucleus present additional modes of toxicity because they may induce DNA mutation, which in turn will propagate through future generation of cells, causing diseases. Although concentration of QDs in certain organelles have been reported in in vivo studies using animal models, no alterations in animal behavior, weight, hematological markers, or organ damage has been found through either histological or biochemical analysis. These findings have led scientists to believe that intracellular dose is the most important determining factor for QD toxicity. 
Therefore, the factors that govern QD endocytosis, and hence the effective intracellular concentration, such as QD size, shape, and surface chemistry, largely determine their toxicity. Excretion of QDs through urine in animal models has also been demonstrated by injecting radio-labeled ZnS-capped CdSe QDs in which the ligand shell was labeled with 99mTc. Though multiple other studies have concluded that QDs are retained at the cellular level, exocytosis of QDs is still poorly studied in the literature. While significant research efforts have broadened the understanding of toxicity of QDs, there are large discrepancies in the literature, and questions still remain to be answered. The diversity of this class of material as compared to normal chemical substances makes the assessment of their toxicity very challenging. As their toxicity may also be dynamic depending on environmental factors such as pH level, light exposure, and cell type, traditional methods of assessing the toxicity of chemicals, such as LD50, are not applicable for QDs. Therefore, researchers are focusing on introducing novel approaches and adapting existing methods to include this unique class of materials. Furthermore, novel strategies to engineer safer QDs are still under exploration by the scientific community. A recent novelty in the field is the discovery of carbon quantum dots, a new generation of optically active nanoparticles potentially capable of replacing semiconductor QDs, but with the advantage of much lower toxicity.
Optical properties
Quantum dots have been gaining interest from the scientific community because of their interesting optical properties, the main one being band gap tunability. When an electron is excited to the conduction band, it leaves behind a vacancy in the valence band called a hole. These two opposite charges are bound by Coulombic interactions in what is called an exciton, and their spatial separation is defined by the exciton Bohr radius. In a nanostructure of comparable size to the exciton Bohr radius, the exciton is physically confined within the semiconductor, resulting in an increase of the band gap of the material. This dependence can be predicted using the Brus model. As the confinement energy depends on the quantum dot's size, both absorption onset and fluorescence emission can be tuned by changing the size of the quantum dot during its synthesis. The larger the dot, the redder (lower-energy) its absorption onset and fluorescence spectrum. Conversely, smaller dots absorb and emit bluer (higher-energy) light. Recent articles suggest that the shape of the quantum dot may be a factor in the coloration as well, but as yet not enough information is available. Furthermore, it was shown that the lifetime of fluorescence is determined by the size of the quantum dot. Larger dots have more closely spaced energy levels in which the electron–hole pair can be trapped. Therefore, electron–hole pairs in larger dots live longer, causing larger dots to show a longer lifetime. To improve fluorescence quantum yield, quantum dots can be made with shells of a larger-bandgap semiconductor material around them. The improvement is suggested to be due to the reduced access of electron and hole to non-radiative surface recombination pathways in some cases, but also due to reduced Auger recombination in others.
Applications
Quantum dots are particularly promising for optical applications due to their high extinction coefficient and ultrafast optical nonlinearities with potential applications for developing all-optical systems.
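As a rough illustration of the size-tunable emission of quantum dots described above, the Brus model can be evaluated numerically. The following sketch uses CdSe-like parameter values (bulk gap, effective masses, relative permittivity) that are stated purely for illustration; they are approximate textbook-style numbers, not measurements reported here, and the 1.8 prefactor of the Coulomb term is one common convention.

# Rough Brus-model estimate of how emission shifts with quantum dot size.
# CdSe-like parameters below are illustrative assumptions, not data from this article.
import math

HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # kg, free electron mass
E_CH = 1.602176634e-19   # C (and J per eV)
EPS0 = 8.8541878128e-12  # F/m

E_GAP_BULK = 1.74        # eV, bulk CdSe band gap (approximate)
M_E, M_H   = 0.13, 0.45  # effective masses in units of m0 (approximate)
EPS_R      = 10.0        # relative permittivity (approximate)

def brus_gap_ev(radius_m):
    """Size-dependent gap: bulk gap + particle-in-a-sphere confinement - Coulomb term."""
    confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)) * (1/(M_E*M0) + 1/(M_H*M0))
    coulomb = 1.8 * E_CH**2 / (4 * math.pi * EPS0 * EPS_R * radius_m)
    return E_GAP_BULK + confinement / E_CH - coulomb / E_CH

for r_nm in (1.5, 2.0, 3.0, 5.0):
    gap = brus_gap_ev(r_nm * 1e-9)
    print(f"radius {r_nm} nm: gap ~ {gap:.2f} eV, emission ~ {1239.84 / gap:.0f} nm")

Consistent with the discussion above, the estimate gives a larger gap (bluer emission) for smaller dots and a smaller gap (redder emission) for larger dots.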
They operate like a single-electron transistor and show the Coulomb blockade effect. Quantum dots have also been suggested as implementations of qubits for quantum information processing, and as active elements for thermoelectrics. Tuning the size of quantum dots is attractive for many potential applications. For instance, larger quantum dots have a greater spectrum shift toward red compared to smaller dots and exhibit less pronounced quantum properties. Conversely, the smaller particles allow one to take advantage of more subtle quantum effects. Being zero-dimensional, quantum dots have a sharper density of states than higher-dimensional structures. As a result, they have superior transport and optical properties. They have potential uses in diode lasers, amplifiers, and biological sensors. Quantum dots may be excited within a locally enhanced electromagnetic field produced by gold nanoparticles, which then can be observed from the surface plasmon resonance in the photoluminescent excitation spectrum of (CdSe)ZnS nanocrystals. High-quality quantum dots are well suited for optical encoding and multiplexing applications due to their broad excitation profiles and narrow/symmetric emission spectra. The new generations of quantum dots have far-reaching potential for the study of intracellular processes at the single-molecule level, high-resolution cellular imaging, long-term in vivo observation of cell trafficking, tumor targeting, and diagnostics. CdSe nanocrystals are efficient triplet photosensitizers. Laser excitation of small CdSe nanoparticles enables the extraction of the excited state energy from the quantum dots into bulk solution, thus opening the door to a wide range of potential applications such as photodynamic therapy, photovoltaic devices, molecular electronics, and catalysis. Biology In modern biological analysis, various kinds of organic dyes are used. However, as technology advances, greater flexibility in these dyes is sought. To this end, quantum dots have quickly filled in the role, being found to be superior to traditional organic dyes on several counts, one of the most immediately obvious being brightness (owing to the high extinction coefficient combined with a comparable quantum yield to fluorescent dyes) as well as their stability (allowing much less photobleaching). It has been estimated that quantum dots are 20 times brighter and 100 times more stable than traditional fluorescent reporters. For single-particle tracking, the irregular blinking of quantum dots is a minor drawback. However, there have been groups which have developed quantum dots which are essentially nonblinking and demonstrated their utility in single-molecule tracking experiments. The use of quantum dots for highly sensitive cellular imaging has seen major advances. The improved photostability of quantum dots, for example, allows the acquisition of many consecutive focal-plane images that can be reconstructed into a high-resolution three-dimensional image. Another application that takes advantage of the extraordinary photostability of quantum dot probes is the real-time tracking of molecules and cells over extended periods of time. Antibodies, streptavidin, peptides, DNA, nucleic acid aptamers, or small-molecule ligands can be used to target quantum dots to specific proteins on cells. Researchers were able to observe quantum dots in lymph nodes of mice for more than 4 months. Quantum dots can have antibacterial properties similar to nanoparticles and can kill bacteria in a dose-dependent manner. 
One mechanism by which quantum dots can kill bacteria is through impairing the functions of the antioxidative system in the cells and downregulating the antioxidative genes. In addition, quantum dots can directly damage the cell wall. Quantum dots have been shown to be effective against both gram-positive and gram-negative bacteria. Semiconductor quantum dots have also been employed for in vitro imaging of pre-labeled cells. The ability to image single-cell migration in real time is expected to be important to several research areas such as embryogenesis, cancer metastasis, stem cell therapeutics, and lymphocyte immunology. One application of quantum dots in biology is as donor fluorophores in Förster resonance energy transfer, where the large extinction coefficient and spectral purity of these fluorophores make them superior to molecular fluorophores. It is also worth noting that the broad absorbance of QDs allows selective excitation of the QD donor and minimal excitation of a dye acceptor in FRET-based studies. The applicability of the FRET model, which assumes that the quantum dot can be approximated as a point dipole, has recently been demonstrated. The use of quantum dots for tumor targeting under in vivo conditions employs two targeting schemes: active targeting and passive targeting. In the case of active targeting, quantum dots are functionalized with tumor-specific binding sites to selectively bind to tumor cells. Passive targeting uses the enhanced permeation and retention of tumor cells for the delivery of quantum dot probes. Fast-growing tumor cells typically have more permeable membranes than healthy cells, allowing the leakage of small nanoparticles into the cell body. Moreover, tumor cells lack an effective lymphatic drainage system, which leads to subsequent nanoparticle accumulation. Quantum dot probes exhibit in vivo toxicity. For example, CdSe nanocrystals are highly toxic to cultured cells under UV illumination, because the particles dissolve, in a process known as photolysis, to release toxic cadmium ions into the culture medium. In the absence of UV irradiation, however, quantum dots with a stable polymer coating have been found to be essentially nontoxic. Hydrogel encapsulation of quantum dots allows quantum dots to be introduced into a stable aqueous solution, reducing the possibility of cadmium leakage. However, little is known about the excretion process of quantum dots from living organisms. In another potential application, quantum dots are being investigated as the inorganic fluorophore for intra-operative detection of tumors using fluorescence spectroscopy. Delivery of undamaged quantum dots to the cell cytoplasm has been a challenge with existing techniques. Vector-based methods have resulted in aggregation and endosomal sequestration of quantum dots, while electroporation can damage the semiconducting particles and aggregate delivered dots in the cytosol. Via cell squeezing, quantum dots can be efficiently delivered without inducing aggregation, trapping material in endosomes, or significant loss of cell viability. Moreover, it has been shown that individual quantum dots delivered by this approach are detectable in the cell cytosol, thus illustrating the potential of this technique for single-molecule tracking studies.
Photovoltaic devices
The tunable absorption spectrum and high extinction coefficients of quantum dots make them attractive for light-harvesting technologies such as photovoltaics.
Quantum dots may be able to increase the efficiency and reduce the cost of today's typical silicon photovoltaic cells. According to an experimental report from 2004, quantum dots of lead selenide (PbSe) can produce more than one exciton from one high-energy photon via the process of carrier multiplication or multiple exciton generation (MEG). This compares favorably to today's photovoltaic cells, which can only manage one exciton per high-energy photon, with high-kinetic-energy carriers losing their energy as heat. On the other hand, the quantum-confined ground states of colloidal quantum dots (such as lead sulfide, PbS) incorporated in wider-bandgap host semiconductors (such as perovskite) can allow the generation of photocurrent from photons with energy below the host bandgap, via a two-photon absorption process, offering another approach (termed intermediate band, IB) to exploit a broader range of the solar spectrum and thereby achieve higher photovoltaic efficiency. Colloidal quantum dot photovoltaics would theoretically be cheaper to manufacture, as they can be made using simple chemical reactions.
Quantum dot only solar cells
Aromatic self-assembled monolayers (SAMs) (such as 4-nitrobenzoic acid) can be used to improve the band alignment at electrodes for better efficiencies. This technique has provided a record power conversion efficiency (PCE) of 10.7%. The SAM is positioned at the junction between the ZnO and PbS colloidal quantum dot (CQD) films to modify the band alignment via the dipole moment of the constituent SAM molecule, and the band tuning may be modified via the density, dipole, and orientation of the SAM molecule.
Quantum dot in hybrid solar cells
Colloidal quantum dots are also used in inorganic–organic hybrid solar cells. These solar cells are attractive because of the potential for low-cost fabrication and relatively high efficiency. Incorporation of metal oxides, such as ZnO, TiO2, and Nb2O5 nanomaterials, into organic photovoltaics has been commercialized using full roll-to-roll processing. A 13.2% power conversion efficiency is claimed in Si nanowire/PEDOT:PSS hybrid solar cells.
Quantum dot with nanowire in solar cells
Another potential use involves single-crystal ZnO nanowires capped with CdSe quantum dots, immersed in mercaptopropionic acid as hole transport medium, in order to obtain a QD-sensitized solar cell. The morphology of the nanowires allowed the electrons to have a direct pathway to the photoanode. This form of solar cell exhibits 50–60% internal quantum efficiencies. Quantum dot coatings have also been combined with silicon nanowires (SiNWs), in this case using carbon quantum dots. The use of SiNWs instead of planar silicon enhances the antireflection properties of Si, since the SiNWs exhibit a light-trapping effect. This use of SiNWs in conjunction with carbon quantum dots resulted in a solar cell that reached 9.10% PCE. Graphene quantum dots have also been blended with organic electronic materials to improve efficiency and lower cost in photovoltaic devices and organic light-emitting diodes (OLEDs) compared to graphene sheets. These graphene quantum dots were functionalized with organic ligands that exhibit photoluminescence from UV–visible absorption.
Light-emitting diodes
Several methods are proposed for using quantum dots to improve existing light-emitting diode (LED) design, including quantum dot light-emitting diode (QD-LED or QLED) displays and quantum dot white-light-emitting diode (QD-WLED) displays.
Because quantum dots naturally produce monochromatic light, they can be more efficient than light sources which must be color filtered. QD-LEDs can be fabricated on a silicon substrate, which allows them to be integrated onto standard silicon-based integrated circuits or microelectromechanical systems.
Quantum dot displays
Quantum dots are valued for displays because they emit light in very specific Gaussian distributions. This can result in a display with visibly more accurate colors. A conventional color liquid crystal display (LCD) is usually backlit by fluorescent lamps (CCFLs) or conventional white LEDs that are color filtered to produce red, green, and blue pixels. Quantum dot displays use blue-emitting LEDs rather than white LEDs as the light sources. Part of the emitted blue light is converted into pure green and red light by the corresponding color quantum dots placed in front of the blue LED, or by using a quantum-dot-infused diffuser sheet in the backlight optical stack. Blank pixels are also used to allow the blue LED light to still generate blue hues. This type of white light as the backlight of an LCD panel allows for the best color gamut at lower cost than an RGB LED combination using three LEDs. Another method by which quantum dot displays can be achieved is the electroluminescent (EL) or electro-emissive method. This involves embedding quantum dots in each individual pixel. These are then activated and controlled via an electric current application. Since the pixel itself emits the light in this method, the achievable colors may be limited. Electro-emissive QD-LED TVs exist in laboratories only. The ability of QDs to precisely convert and tune a spectrum makes them attractive for LCD displays. Earlier LCD displays can waste energy converting white light that is poor in red and green and rich in blue and yellow into more balanced lighting. By using QDs, only the necessary colors for ideal images are contained in the screen. The result is a screen that is brighter, clearer, and more energy-efficient. The first commercial application of quantum dots was the Sony XBR X900A series of flat panel televisions released in 2013. In June 2006, QD Vision announced technical success in making a proof-of-concept quantum dot display that showed bright emission in the visible and near-infrared region of the spectrum. A QD-LED integrated at a scanning microscopy tip was used to demonstrate fluorescence near-field scanning optical microscopy (NSOM) imaging.
Photodetector devices
Quantum dot photodetectors (QDPs) can be fabricated either via solution processing or from conventional single-crystalline semiconductors. Conventional single-crystalline semiconductor QDPs are precluded from integration with flexible organic electronics due to the incompatibility of their growth conditions with the process windows required by organic semiconductors. On the other hand, solution-processed QDPs can be readily integrated with an almost infinite variety of substrates and also postprocessed atop other integrated circuits. Such colloidal QDPs have potential applications in visible- and infrared-light cameras, machine vision, industrial inspection, spectroscopy, and fluorescent biomedical imaging.
Photocatalysts
Quantum dots also function as photocatalysts for the light-driven chemical conversion of water into hydrogen as a pathway to solar fuel. In photocatalysis, electron–hole pairs formed in the dot under band gap excitation drive redox reactions in the surrounding liquid.
Generally, the photocatalytic activity of the dots is related to the particle size and its degree of quantum confinement. This is because the band gap determines the chemical energy that is stored in the dot in the excited state. An obstacle for the use of quantum dots in photocatalysis is the presence of surfactants on the surface of the dots. These surfactants (or ligands) interfere with the chemical reactivity of the dots by slowing down mass transfer and electron transfer processes. Also, quantum dots made of metal chalcogenides are chemically unstable under oxidizing conditions and undergo photocorrosion reactions.
Fundamental materials science
Quantum dots can also be used to study fundamental effects in materials science. By coupling two or more such quantum dots, an artificial molecule can be made, exhibiting hybridization even at room temperature. Precise assembly of quantum dots can form superlattices that act as artificial solid-state materials that exhibit unique optical and electronic properties.
Theory
Quantum dots are theoretically described as point-like, or zero-dimensional (0D), entities. Most of their properties depend on the dimensions, shape, and materials of which QDs are made. Generally, QDs present different thermodynamic properties from their bulk materials. One of these effects is melting-point depression. Optical properties of spherical metallic QDs are well described by Mie scattering theory.
Quantum confinement in semiconductors
The energy levels of a single particle in a quantum dot can be predicted using the particle-in-a-box model, in which the energies of states depend on the length of the box. For an exciton inside a quantum dot, there is also the Coulomb interaction between the negatively charged electron and the positively charged hole. By comparing the quantum dot's size to the exciton Bohr radius, three regimes can be defined. In the 'strong confinement regime', the quantum dot's radius is much smaller than the exciton Bohr radius, and the confinement energy dominates over the Coulomb interaction. In the 'weak confinement' regime, the quantum dot is larger than the exciton Bohr radius, and the confinement energy is smaller than the Coulomb interactions between electron and hole. The regime where the exciton Bohr radius and confinement potential are comparable is called the 'intermediate confinement regime'.
Band gap energy
The band gap can become smaller in the strong confinement regime as the energy levels split up. The exciton Bohr radius can be expressed as a*B = εr (m/μ) aB, where aB = 0.053 nm is the Bohr radius, m is the electron mass, μ is the reduced mass, and εr is the size-dependent dielectric constant (relative permittivity). This results in the increase in the total emission energy (the sum of the energy levels in the smaller band gaps in the strong confinement regime is larger than the energy levels in the band gaps of the original levels in the weak confinement regime) and the emission at various wavelengths. If the size distribution of QDs is not sufficiently narrow, the convolution of multiple emission wavelengths is observed as a continuous spectrum.
Confinement energy
The exciton can be modeled using the particle-in-a-box model. The electron and the hole can be seen as hydrogen in the Bohr model, with the hydrogen nucleus replaced by the hole of positive charge and negative electron mass.
Then the energy levels of the exciton can be represented as the solution to the particle in a box at the ground level (n = 1) with the mass replaced by the reduced mass. Thus by varying the size of the quantum dot, the confinement energy of the exciton can be controlled. Bound exciton energy There is Coulomb attraction between the negatively charged electron and the positively charged hole. The negative energy involved in the attraction is proportional to Rydberg's energy and inversely proportional to square of the size-dependent dielectric constant of the semiconductor. When the size of the semiconductor crystal is smaller than the exciton Bohr radius, the Coulomb interaction must be modified to fit the situation. Therefore, the sum of these energies can be represented by Brus equation: where μ is the reduced mass, a is the radius of the quantum dot, me is the free electron mass, mh is the hole mass, and εr is the size-dependent dielectric constant. Although the above equations were derived using simplifying assumptions, they imply that the electronic transitions of the quantum dots will depend on their size. These quantum confinement effects are apparent only below the critical size. Larger particles do not exhibit this effect. This effect of quantum confinement on the quantum dots has been repeatedly verified experimentally and is a key feature of many emerging electronic structures. The Coulomb interaction between confined carriers can also be studied by numerical means when results unconstrained by asymptotic approximations are pursued. Besides confinement in all three dimensions (that is, a quantum dot), other quantum confined semiconductors include: Quantum wires, which confine electrons or holes in two spatial dimensions and allow free propagation in the third. Quantum wells, which confine electrons or holes in one dimension and allow free propagation in two dimensions. Models A variety of theoretical frameworks exist to model optical, electronic, and structural properties of quantum dots. These may be broadly divided into quantum mechanical, semiclassical, and classical. Quantum mechanics Quantum mechanical models and simulations of quantum dots often involve the interaction of electrons with a pseudopotential or random matrix. Semiclassical Semiclassical models of quantum dots frequently incorporate a chemical potential. For example, the thermodynamic chemical potential of an N-particle system is given by whose energy terms may be obtained as solutions of the Schrödinger equation. The definition of capacitance, with the potential difference may be applied to a quantum dot with the addition or removal of individual electrons, Then is the quantum capacitance of a quantum dot, where we denoted by the ionization potential and by the electron affinity of the N-particle system. Classical mechanics Classical models of electrostatic properties of electrons in quantum dots are similar in nature to the Thomson problem of optimally distributing electrons on a unit sphere. The classical electrostatic treatment of electrons confined to spherical quantum dots is similar to their treatment in the Thomson, or plum pudding model, of the atom. The classical treatment of both two-dimensional and three-dimensional quantum dots exhibit electron shell-filling behavior. A "periodic table of classical artificial atoms" has been described for two-dimensional quantum dots. 
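For a spherical dot of radius a, the quantities discussed in the preceding subsections are commonly summarized by the following standard textbook forms, written here in LaTeX notation as a sketch; prefactor and sign conventions vary between sources, so these should be read as representative expressions rather than the precise ones intended by the original authors.

\[
a_B^{*} = \varepsilon_r \frac{m}{\mu}\, a_B, \qquad
E_{\mathrm{confinement}} = \frac{\hbar^2 \pi^2}{2 a^2}\left(\frac{1}{m_e} + \frac{1}{m_h}\right) = \frac{\hbar^2 \pi^2}{2 \mu a^2}, \qquad
E_{\mathrm{exciton}} = -\frac{\mu}{m_e}\,\frac{1}{\varepsilon_r^{2}}\, R_y ,
\]
so that a Brus-type expression for the size-dependent gap reads
\[
E(a) \simeq E_{\mathrm{gap}} + \frac{\hbar^2 \pi^2}{2 a^2}\left(\frac{1}{m_e} + \frac{1}{m_h}\right) - \frac{1.8\, e^2}{4 \pi \varepsilon_0 \varepsilon_r a}.
\]
For the semiclassical picture, the chemical potential and quantum capacitance of an N-particle dot take the forms
\[
\mu(N) = E(N) - E(N-1), \qquad
C(N) = \frac{e^{2}}{\mu(N+1) - \mu(N)} = \frac{e^{2}}{I(N) - A(N)},
\]
with I(N) the ionization potential and A(N) the electron affinity of the N-particle system.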
Several connections have also been reported between the three-dimensional Thomson problem and electron shell-filling patterns found in naturally occurring atoms throughout the periodic table. This latter work originated in classical electrostatic modeling of electrons in a spherical quantum dot represented by an ideal dielectric sphere.
History
For thousands of years, glassmakers were able to make colored glass by adding different dusts and powdered elements such as silver, gold, and cadmium and then using different temperatures to produce shades of glass. In the 19th century, scientists started to understand how glass color depended on elements and heating-cooling techniques. It was also found that, for the same element and preparation, the color depended on the dust particles' size. Herbert Fröhlich in the 1930s first explored the idea that material properties can depend on the macroscopic dimensions of a small particle due to quantum size effects. The first quantum dots were synthesized in a glass matrix by Alexei A. Onushchenko and Alexey Ekimov in 1981 at the Vavilov State Optical Institute, and independently in colloidal suspension by the Louis E. Brus team at Bell Labs in 1983. They were first theorized by Alexander Efros in 1982. It was quickly identified that the optical changes that appeared for very small particles were due to quantum mechanical effects. The term quantum dot first appeared in a 1986 paper with Mark Reed as first author. According to Brus, the term "quantum dot" was coined while they were working at Bell Labs. In 1993, David J. Norris, Christopher B. Murray, and Moungi Bawendi at the Massachusetts Institute of Technology reported a hot-injection synthesis method for producing reproducible quantum dots with well-defined size and with high optical quality. The method opened the door to the development of large-scale technological applications of quantum dots in a wide range of areas. The Nobel Prize in Chemistry 2023 was awarded to Moungi Bawendi, Louis E. Brus, and Alexey Ekimov "for the discovery and synthesis of quantum dots."
See also
Cadmium-free quantum dot
Carbon quantum dot
Core–shell semiconductor nanocrystal
Langmuir–Blodgett film
Mark Reed (physicist)
Nanocrystal solar cell
Paul Alivisatos
Quantum dot laser
Quantum dot single-photon source
Quantum point contact
Shuming Nie
Superatom
Trojan wave packet
Uri Banin
References
Further reading
Methods to produce quantum-confined semiconductor structures (quantum wires, wells, and dots grown by advanced epitaxial techniques), and nanocrystals by gas-phase, liquid-phase, and solid-phase approaches.
Photoluminescence of a QD vs. particle diameter.
External links
Quantum Dots: Technical Status and Market Prospects
Quantum dots that produce white light could be the light bulb's successor
Single quantum dots optical properties
Quantum dot on arxiv.org
Quantum Dots Research and Technical Data
Simulation and interactive visualization of Quantum Dots wave function
Mesoscopic physics Quantum chemistry Quantum electronics Semiconductor structures
Quantum dot
[ "Physics", "Chemistry", "Materials_science" ]
9,641
[ "Quantum chemistry", "Quantum electronics", "Quantum mechanics", "Theoretical chemistry", " molecular", "Condensed matter physics", "Atomic", "Nanotechnology", "Mesoscopic physics", " and optical physics" ]
1,946,702
https://en.wikipedia.org/wiki/Moyal%20product
In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, , on the functions on , equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the -product of the "algebra of symbols" of a universal enveloping algebra. Historical comments The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation of the Weyl correspondence. Moyal actually appears not to know about the product in his celebrated article and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography. The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture. Definition The product for smooth functions and on takes the form where each is a certain bidifferential operator of order , characterized by the following properties (see below for an explicit formula): Deformation of the pointwise product — implicit in the formula above. Deformation of the Poisson bracket, called Moyal bracket. The 1 of the undeformed algebra is also the identity in the new algebra. The complex conjugate is an antilinear antiautomorphism. Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the in the second condition and eliminates the fourth condition. If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra , and the two offer alternative realizations of the Weyl map of the space of polynomials in variables (or the symmetric algebra of a vector space of dimension ). To provide an explicit formula, consider a constant Poisson bivector on : where is a real number for each . The star product of two functions and can then be defined as the pseudo-differential operator acting on both of them, where is the reduced Planck constant, treated as a formal parameter here. This is a special case of what is known as the Berezin formula on the algebra of symbols and can be given a closed form (which follows from the Baker–Campbell–Hausdorff formula). The closed form can be obtained by using the exponential: where is the multiplication map, , and the exponential is treated as a power series, That is, the formula for is As indicated, often one eliminates all occurrences of above, and the formulas then restrict naturally to real numbers. Note that if the functions and are polynomials, the above infinite sums become finite (reducing to the ordinary Weyl-algebra case). The relationship of the Moyal product to the generalized -product used in the definition of the "algebra of symbols" of a universal enveloping algebra follows from the fact that the Weyl algebra is the universal enveloping algebra of the Heisenberg algebra (modulo that the center equals the unit). On manifolds On any symplectic manifold, one can, at least locally, choose coordinates so as to make the symplectic structure constant, by Darboux's theorem; and, using the associated Poisson bivector, one may consider the above formula. 
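For reference, with a constant Poisson bivector on R^{2n} the construction described above is commonly written in the following standard form (a sketch in conventional notation; sign and factor-of-two conventions differ between sources):

\[
f \star g \;=\; fg \;+\; \sum_{n=1}^{\infty} \left(\frac{i\hbar}{2}\right)^{\!n} C_{n}(f,g),
\]
where each C_n is a bidifferential operator of order n, and, for a constant bivector with components \Pi^{ij},
\[
f \star g \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{i\hbar}{2}\right)^{\!n}
\Pi^{i_1 j_1}\cdots \Pi^{i_n j_n}\,
\big(\partial_{i_1}\cdots\partial_{i_n} f\big)\,\big(\partial_{j_1}\cdots\partial_{j_n} g\big)
\;=\; f\,\exp\!\left(\frac{i\hbar}{2}\,\overleftarrow{\partial}_{i}\,\Pi^{ij}\,\overrightarrow{\partial}_{j}\right) g .
\]
In canonical coordinates (x, p) on two-dimensional phase space this reduces to
\[
f \star g \;=\; f\, \exp\!\left(\frac{i\hbar}{2}\big(\overleftarrow{\partial}_{x}\overrightarrow{\partial}_{p} - \overleftarrow{\partial}_{p}\overrightarrow{\partial}_{x}\big)\right) g,
\]
whose antisymmetric first-order term reproduces the Poisson bracket, as required of a deformation of the pointwise product.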
For it to work globally, as a function on the whole manifold (and not just a local formula), one must equip the symplectic manifold with a torsion-free symplectic connection. This makes it a Fedosov manifold. More general results for arbitrary Poisson manifolds (where the Darboux theorem does not apply) are given by the Kontsevich quantization formula.
Examples
A simple explicit example of the construction and utility of the ★-product (for the simplest case of a two-dimensional euclidean phase space) is given in the article on the Wigner–Weyl transform: two Gaussians compose with this ★-product according to a hyperbolic tangent law. The classical limit as ħ → 0 recovers the ordinary pointwise product, as expected. Every correspondence prescription between phase space and Hilbert space, however, induces its own proper ★-product. Similar results are seen in the Segal–Bargmann space and in the theta representation of the Heisenberg group, where the creation and annihilation operators are understood to act on the complex plane (respectively, the upper half-plane for the Heisenberg group), so that the position and momentum operators are expressed in terms of them. This situation is clearly different from the case where the positions are taken to be real-valued, but it does offer insights into the overall algebraic structure of the Heisenberg algebra and its envelope, the Weyl algebra.
Inside phase-space integrals
Inside a phase-space integral, just one star product of the Moyal type may be dropped, resulting in plain multiplication, as is evident from integration by parts, making the cyclicity of the phase-space trace manifest. This is a unique property of the above specific Moyal product, and does not hold for other correspondence rules' star products, such as Husimi's, etc.
References
Mathematical quantization Mathematical physics
Moyal product
[ "Physics", "Mathematics" ]
1,115
[ "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical quantization", "Mathematical physics" ]
1,947,467
https://en.wikipedia.org/wiki/Radical%20polymerization
In polymer chemistry, radical polymerization (RP) is a method of polymerization by which a polymer forms by the successive addition of a radical to building blocks (repeat units). Radicals can be formed by a number of different mechanisms, usually involving separate initiator molecules. Following its generation, the initiating radical adds (nonradical) monomer units, thereby growing the polymer chain. Radical polymerization is a key synthesis route for obtaining a wide variety of different polymers and materials composites. The relatively non-specific nature of radical chemical interactions makes this one of the most versatile forms of polymerization available and allows facile reactions of polymeric radical chain ends and other chemicals or substrates. In 2001, 40 billion of the 110 billion pounds of polymers produced in the United States were produced by radical polymerization. Radical polymerization is a type of chain polymerization, along with anionic, cationic and coordination polymerization. Initiation Initiation is the first step of the polymerization process. During initiation, an active center is created from which a polymer chain is generated. Not all monomers are susceptible to all types of initiators. Radical initiation works best on the carbon–carbon double bond of vinyl monomers and the carbon–oxygen double bond in aldehydes and ketones. Initiation has two steps. In the first step, one or two radicals are created from the initiating molecules. In the second step, radicals are transferred from the initiator molecules to the monomer units present. Several choices are available for these initiators. Types of initiation and the initiators Thermal decomposition The initiator is heated until a bond is homolytically cleaved, producing two radicals (Figure 1). This method is used most often with organic peroxides or azo compounds. Photolysis Radiation cleaves a bond homolytically, producing two radicals (Figure 2). This method is used most often with metal iodides, metal alkyls, and azo compounds. Photoinitiation can also occur by bi-molecular H abstraction when the radical is in its lowest triplet excited state. An acceptable photoinitiator system should fulfill the following requirements: High absorptivity in the 300–400 nm range. Efficient generation of radicals capable of attacking the alkene double bond of vinyl monomers. Adequate solubility in the binder system (prepolymer + monomer). Should not impart yellowing or unpleasant odors to the cured material. The photoinitiator and any byproducts resulting from its use should be non-toxic. Redox reactions Reduction of hydrogen peroxide or an alkyl hydrogen peroxide by iron (Figure 3). Other reductants such as Cr2+, V2+, Ti3+, Co2+, and Cu+ can be employed in place of ferrous ion in many instances. Persulfates The dissociation of a persulfate in the aqueous phase (Figure 4). This method is useful in emulsion polymerizations, in which the radical diffuses into a hydrophobic monomer-containing droplet. Ionizing radiation α-, β-, γ-, or x-rays cause ejection of an electron from the initiating species, followed by dissociation and electron capture to produce a radical (Figure 5). Electrochemical Electrolysis of a solution containing both monomer and electrolyte. A monomer molecule will receive an electron at the cathode to become a radical anion, and a monomer molecule will give up an electron at the anode to form a radical cation (Figure 6). The radical ions then initiate free radical (and/or ionic) polymerization. 
This type of initiation is especially useful for coating metal surfaces with polymer films. Plasma A gaseous monomer is placed in an electric discharge at low pressures under conditions where a plasma (ionized gaseous molecules) is created. In some cases, the system is heated and/or placed in a radiofrequency field to assist in creating the plasma. Sonication High-intensity ultrasound at frequencies beyond the range of human hearing (16 kHz) can be applied to a monomer. Initiation results from the effects of cavitation (the formation and collapse of cavities in the liquid). The collapse of the cavities generates very high local temperatures and pressures. This results in the formation of excited electronic states, which in turn lead to bond breakage and radical formation. Ternary initiators A ternary initiator is the combination of several types of initiators into one initiating system. The types of initiators are chosen based on the properties they are known to induce in the polymers they produce. For example, poly(methyl methacrylate) has been synthesized by the ternary system benzoyl peroxide and 3,6-bis(o-carboxybenzoyl)-N-isopropylcarbazole and di-η5-indenylzirconium dichloride (Figure 7).This type of initiating system contains a metallocene, an initiator, and a heteroaromatic diketo carboxylic acid. Metallocenes in combination with initiators accelerate polymerization of poly(methyl methacrylate) and produce a polymer with a narrower molecular weight distribution. The example shown here consists of indenylzirconium (a metallocene) and benzoyl peroxide (an initiator). Also, initiating systems containing heteroaromatic diketo carboxylic acids, such as 3,6-bis(o-carboxybenzoyl)-N-isopropylcarbazole in this example, are known to catalyze the decomposition of benzoyl peroxide. Initiating systems with this particular heteroaromatic diket carboxylic acid are also known to have effects on the microstructure of the polymer. The combination of all of these components—a metallocene, an initiator, and a heteroaromatic diketo carboxylic acid—yields a ternary initiating system that was shown to accelerate the polymerization and produce polymers with enhanced heat resistance and regular microstructure. Initiator efficiency Due to side reactions, not all radicals formed by the dissociation of initiator molecules actually add monomers to form polymer chains. The efficiency factor f is defined as the fraction of the original initiator which contributes to the polymerization reaction. The maximal value of f is 1, but typical values range from 0.3 to 0.8. The following types of reactions can decrease the efficiency of the initiator. Primary recombination Two radicals recombine before initiating a chain (Figure 8). This occurs within the solvent cage, meaning that no solvent has yet come between the new radicals. Other recombination pathways Two radical initiators recombine before initiating a chain, but not in the solvent cage (Figure 9). Side reactions One radical is produced instead of the three radicals that could be produced (Figure 10). Propagation During polymerization, a polymer spends most of its time in increasing its chain length, or propagating. After the radical initiator is formed, it attacks a monomer (Figure 11). In an ethene monomer, one electron pair is held securely between the two carbons in a sigma bond. The other is more loosely held in a pi bond. The free radical uses one electron from the pi bond to form a more stable bond with the carbon atom. 
The other electron returns to the second carbon atom, turning the whole molecule into another radical. This begins the polymer chain. Figure 12 shows how the orbitals of an ethylene monomer interact with a radical initiator. Once a chain has been initiated, the chain propagates (Figure 13) until there are no more monomers (living polymerization) or until termination occurs. There may be anywhere from a few to thousands of propagation steps depending on several factors such as radical and chain reactivity, the solvent, and temperature. The mechanism of chain propagation is as follows: Termination Chain termination is inevitable in radical polymerization due to the high reactivity of radicals. Termination can occur by several different mechanisms. If longer chains are desired, the initiator concentration should be kept low; otherwise, many shorter chains will result. Combination of two active chain ends: one or both of the following processes may occur. Combination: two chain ends simply couple together to form one long chain (Figure 14). One can determine if this mode of termination is occurring by monitoring the molecular weight of the propagating species: combination will result in doubling of molecular weight. Also, combination will result in a polymer that is C2 symmetric about the point of the combination. Radical disproportionation: a hydrogen atom from one chain end is abstracted to another, producing a polymer with a terminal unsaturated group and a polymer with a terminal saturated group (Figure 15). Combination of an active chain end with an initiator radical (Figure 16). Interaction with impurities or inhibitors. Oxygen is the common inhibitor. The growing chain will react with molecular oxygen, producing an oxygen radical, which is much less reactive (Figure 17). This significantly slows down the rate of propagation. Nitrobenzene, butylated hydroxyl toluene, and diphenyl picryl hydrazyl (DPPH, Figure 18) are a few other inhibitors. The latter is an especially effective inhibitor because of the resonance stabilization of the radical. Chain transfer Contrary to the other modes of termination, chain transfer results in the destruction of only one radical, but also the creation of another radical. Often, however, this newly created radical is not capable of further propagation. Similar to disproportionation, all chain-transfer mechanisms also involve the abstraction of a hydrogen or other atom. There are several types of chain-transfer mechanisms. To solvent: a hydrogen atom is abstracted from a solvent molecule, resulting in the formation of radical on the solvent molecules, which will not propagate further (Figure 19). The effectiveness of chain transfer involving solvent molecules depends on the amount of solvent present (more solvent leads to greater probability of transfer), the strength of the bond involved in the abstraction step (weaker bond leads to greater probability of transfer), and the stability of the solvent radical that is formed (greater stability leads to greater probability of transfer). Halogens, except fluorine, are easily transferred. To monomer: a hydrogen atom is abstracted from a monomer. While this does create a radical on the affected monomer, resonance stabilization of this radical discourages further propagation (Figure 20). To initiator: a polymer chain reacts with an initiator, which terminates that polymer chain, but creates a new radical initiator (Figure 21). This initiator can then begin new polymer chains. 
Therefore, contrary to the other forms of chain transfer, chain transfer to the initiator does allow for further propagation. Peroxide initiators are especially sensitive to chain transfer. To polymer: the radical of a polymer chain abstracts a hydrogen atom from somewhere on another polymer chain (Figure 22). This terminates the growth of one polymer chain, but allows the other to branch and resume growing. This reaction step changes neither the number of polymer chains nor the number of monomers which have been polymerized, so that the number-average degree of polymerization is unaffected. Effects of chain transfer: The most obvious effect of chain transfer is a decrease in the polymer chain length. If the rate of transfer is much larger than the rate of propagation, then very small polymers are formed with chain lengths of 2-5 repeating units (telomerization). The Mayo equation estimates the influence of chain transfer on chain length (xn): . Where ktr is the rate constant for chain transfer and kp is the rate constant for propagation. The Mayo equation assumes that transfer to solvent is the major termination pathway. Methods There are four industrial methods of radical polymerization: Bulk polymerization: reaction mixture contains only initiator and monomer, no solvent. Solution polymerization: reaction mixture contains solvent, initiator, and monomer. Suspension polymerization: reaction mixture contains an aqueous phase, water-insoluble monomer, and initiator soluble in the monomer droplets (both the monomer and the initiator are hydrophobic). Emulsion polymerization: similar to suspension polymerization except that the initiator is soluble in the aqueous phase rather than in the monomer droplets (the monomer is hydrophobic, and the initiator is hydrophilic). An emulsifying agent is also needed. Other methods of radical polymerization include the following: Template polymerization: In this process, polymer chains are allowed to grow along template macromolecules for the greater part of their lifetime. A well-chosen template can affect the rate of polymerization as well as the molar mass and microstructure of the daughter polymer. The molar mass of a daughter polymer can be up to 70 times greater than those of polymers produced in the absence of the template and can be higher in molar mass than the templates themselves. This is because of retardation of the termination for template-associated radicals and by hopping of a radical to the neighboring template after reaching the end of a template polymer. Plasma polymerization: The polymerization is initiated with plasma. A variety of organic molecules including alkenes, alkynes, and alkanes undergo polymerization to high molecular weight products under these conditions. The propagation mechanisms appear to involve both ionic and radical species. Plasma polymerization offers a potentially unique method of forming thin polymer films for uses such as thin-film capacitors, antireflection coatings, and various types of thin membranes. Sonication: The polymerization is initiated by high-intensity ultrasound. Polymerization to high molecular weight polymer is observed but the conversions are low (<15%). The polymerization is self-limiting because of the high viscosity produced even at low conversion. High viscosity hinders cavitation and radical production. 
Reversible deactivation radical polymerization Also known as living radical polymerization, controlled radical polymerization, reversible deactivation radical polymerization (RDRP) relies on completely pure reactions, preventing termination caused by impurities. Because these polymerizations stop only when there is no more monomer, polymerization can continue upon the addition of more monomer. Block copolymers can be made this way. RDRP allows for control of molecular weight and dispersity. However, this is very difficult to achieve and instead a pseudo-living polymerization occurs with only partial control of molecular weight and dispersity. ATRP and RAFT are the main types of complete radical polymerization. Atom transfer radical polymerization (ATRP): based on the formation of a carbon-carbon bond by atom transfer radical addition. This method, independently discovered in 1995 by Mitsuo Sawamoto and by Jin-Shan Wang and Krzysztof Matyjaszewski, requires reversible activation of a dormant species (such as an alkyl halide) and a transition metal halide catalyst (to activate dormant species). Reversible Addition-Fragmentation Chain-Transfer Polymerization (RAFT): requires a compound that can act as a reversible chain-transfer agent, such as dithio compound. Stable Free Radical Polymerization (SFRP): used to synthesize linear or branched polymers with narrow molecular weight distributions and reactive end groups on each polymer chain. The process has also been used to create block co-polymers with unique properties. Conversion rates are about 100% using this process but require temperatures of about 135 °C. This process is most commonly used with acrylates, styrenes, and dienes. The reaction scheme in Figure 23 illustrates the SFRP process. Because the chain end is functionalized with the TEMPO molecule (Figure 24), premature termination by coupling is reduced. As with all living polymerizations, the polymer chain grows until all of the monomer is consumed. Kinetics In typical chain growth polymerizations, the reaction rates for initiation, propagation and termination can be described as follows: where f is the efficiency of the initiator and kd, kp, and kt are the constants for initiator dissociation, chain propagation and termination, respectively. [I] [M] and [M•] are the concentrations of the initiator, monomer and the active growing chain. Under the steady-state approximation, the concentration of the active growing chains remains constant, i.e. the rates of initiation and of termination are equal. The concentration of active chain can be derived and expressed in terms of the other known species in the system. In this case, the rate of chain propagation can be further described using a function of the initiator and monomer concentrations The kinetic chain length v is a measure of the average number of monomer units reacting with an active center during its lifetime and is related to the molecular weight through the mechanism of the termination. Without chain transfer, the kinetic chain length is only a function of propagation rate and initiation rate. Assuming no chain-transfer effect occurs in the reaction, the number average degree of polymerization Pn can be correlated with the kinetic chain length. 
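A compact sketch of these standard rate expressions, written with the symbols defined above, is collected here for reference; the factor-of-two conventions for initiation and termination differ between textbooks, so the prefactors should be read as representative rather than definitive.

\[
R_i = 2 f k_d [I], \qquad
R_p = k_p [M][M^{\bullet}], \qquad
R_t = 2 k_t [M^{\bullet}]^{2}.
\]
Under the steady-state approximation, R_i = R_t, so
\[
[M^{\bullet}] = \left(\frac{f k_d [I]}{k_t}\right)^{1/2}, \qquad
R_p = k_p [M]\left(\frac{f k_d [I]}{k_t}\right)^{1/2}, \qquad
\nu = \frac{R_p}{R_i} = \frac{k_p [M]}{2\left(f k_d k_t [I]\right)^{1/2}}.
\]
When chain transfer to solvent S, initiator I, polymer P, and an added transfer agent T is taken into account, the number-average degree of polymerization follows a Mayo-type relation,
\[
\frac{1}{P_n} = \frac{1}{P_{n,0}}
+ C_S \frac{[S]}{[M]} + C_I \frac{[I]}{[M]} + C_P \frac{[P]}{[M]} + C_T \frac{[T]}{[M]},
\qquad C_X = \frac{k_{\mathrm{tr},X}}{k_p},
\]
where P_{n,0} is the value in the absence of chain transfer.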
In the case of termination by disproportionation, one polymer molecule is produced per every kinetic chain: Termination by combination leads to one polymer molecule per two kinetic chains: Any mixture of both these mechanisms can be described by using the value , the contribution of disproportionation to the overall termination process: If chain transfer is considered, the kinetic chain length is not affected by the transfer process because the growing free-radical center generated by the initiation step stays alive after any chain-transfer event, although multiple polymer chains are produced. However, the number average degree of polymerization decreases as the chain transfers, since the growing chains are terminated by the chain-transfer events. Taking into account the chain-transfer reaction towards solvent S, initiator I, polymer P, and added chain-transfer agent T. The equation of Pn will be modified as follows: It is usual to define chain-transfer constants C for the different molecules , , , , Thermodynamics In chain growth polymerization, the position of the equilibrium between polymer and monomers can be determined by the thermodynamics of the polymerization. The Gibbs free energy (ΔGp) of the polymerization is commonly used to quantify the tendency of a polymeric reaction. The polymerization will be favored if ΔGp < 0; if ΔGp > 0, the polymer will undergo depolymerization. According to the thermodynamic equation ΔG = ΔH – TΔS, a negative enthalpy and an increasing entropy will shift the equilibrium towards polymerization. In general, the polymerization is an exothermic process, i.e. negative enthalpy change, since addition of a monomer to the growing polymer chain involves the conversion of π bonds into σ bonds, or a ring–opening reaction that releases the ring tension in a cyclic monomer. Meanwhile, during polymerization, a large amount of small molecules are associated, losing rotation and translational degrees of freedom. As a result, the entropy decreases in the system, ΔSp < 0 for nearly all polymerization processes. Since depolymerization is almost always entropically favored, the ΔHp must then be sufficiently negative to compensate for the unfavorable entropic term. Only then will polymerization be thermodynamically favored by the resulting negative ΔGp. In practice, polymerization is favored at low temperatures: TΔSp is small. Depolymerization is favored at high temperatures: TΔSp is large. As the temperature increases, ΔGp become less negative. At a certain temperature, the polymerization reaches equilibrium (rate of polymerization = rate of depolymerization). This temperature is called the ceiling temperature (Tc). ΔGp = 0. Stereochemistry The stereochemistry of polymerization is concerned with the difference in atom connectivity and spatial orientation in polymers that has the same chemical composition. Hermann Staudinger studied the stereoisomerism in chain polymerization of vinyl monomers in the late 1920s, and it took another two decades for people to fully appreciate the idea that each of the propagation steps in the polymer growth could give rise to stereoisomerism. The major milestone in the stereochemistry was established by Ziegler and Natta and their coworkers in 1950s, as they developed metal based catalyst to synthesize stereoregular polymers. 
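Returning to the thermodynamics discussed above, the ceiling-temperature condition ΔG_p = ΔH_p - T·ΔS_p = 0 gives T_c = ΔH_p/ΔS_p. The short sketch below evaluates it for assumed, vinyl-monomer-like values; the numbers are illustrative only and do not describe a specific polymerization.

# Ceiling temperature from Delta_G_p = Delta_H_p - T * Delta_S_p = 0
# Assumed illustrative values (roughly typical magnitudes for vinyl monomers):
dH_p = -70e3    # J/mol, polymerization is exothermic
dS_p = -110.0   # J/(mol K), entropy decreases on polymerization

T_c = dH_p / dS_p
print(f"Ceiling temperature ≈ {T_c:.0f} K ({T_c - 273.15:.0f} °C)")

for T in (300.0, 500.0, 700.0):
    dG = dH_p - T * dS_p
    state = "polymerization favored" if dG < 0 else "depolymerization favored"
    print(f"T = {T:.0f} K: dG_p = {dG / 1000:+.1f} kJ/mol -> {state}")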
The reason why the stereochemistry of the polymer is of particular interest is because the physical behavior of a polymer depends not only on the general chemical composition but also on the more subtle differences in microstructure. Atactic polymers consist of a random arrangement of stereochemistry and are amorphous (noncrystalline), soft materials with lower physical strength. The corresponding isotactic (like substituents all on the same side) and syndiotactic (like substituents of alternate repeating units on the same side) polymers are usually obtained as highly crystalline materials. It is easier for the stereoregular polymers to pack into a crystal lattice since they are more ordered and the resulting crystallinity leads to higher physical strength and increased solvent and chemical resistance as well as differences in other properties that depend on crystallinity. The prime example of the industrial utility of stereoregular polymers is polypropene. Isotactic polypropene is a high-melting (165 °C), strong, crystalline polymer, which is used as both a plastic and fiber. Atactic polypropene is an amorphous material with an oily to waxy soft appearance that finds use in asphalt blends and formulations for lubricants, sealants, and adhesives, but the volumes are minuscule compared to that of isotactic polypropene. When a monomer adds to a radical chain end, there are two factors to consider regarding its stereochemistry: 1) the interaction between the terminal chain carbon and the approaching monomer molecule and 2) the configuration of the penultimate repeating unit in the polymer chain. The terminal carbon atom has sp2 hybridization and is planar. Consider the polymerization of the monomer CH2=CXY. There are two ways that a monomer molecule can approach the terminal carbon: the mirror approach (with like substituents on the same side) or the non-mirror approach (like substituents on opposite sides). If free rotation does not occur before the next monomer adds, the mirror approach will always lead to an isotactic polymer and the non-mirror approach will always lead to a syndiotactic polymer (Figure 25). However, if interactions between the substituents of the penultimate repeating unit and the terminal carbon atom are significant, then conformational factors could cause the monomer to add to the polymer in a way that minimizes steric or electrostatic interaction (Figure 26). Reactivity Traditionally, the reactivity of monomers and radicals are assessed by the means of copolymerization data. The Q–e scheme, the most widely used tool for the semi-quantitative prediction of monomer reactivity ratios, was first proposed by Alfrey and Price in 1947. The scheme takes into account the intrinsic thermodynamic stability and polar effects in the transition state. A given radical and a monomer are considered to have intrinsic reactivities Pi and Qj, respectively. The polar effects in the transition state, the supposed permanent electric charge carried by that entity (radical or molecule), is quantified by the factor e, which is a constant for a given monomer, and has the same value for the radical derived from that specific monomer. 
For addition of monomer 2 to a growing polymer chain whose active end is the radical of monomer 1, the rate constant, k12, is postulated to be related to the four relevant reactivity parameters by The monomer reactivity ratio for the addition of monomers 1 and 2 to this chain is given by For the copolymerization of a given pair of monomers, the two experimental reactivity ratios r1 and r2 permit the evaluation of (Q1/Q2) and (e1 – e2). Values for each monomer can then be assigned relative to a reference monomer, usually chosen as styrene with the arbitrary values Q = 1.0 and e = –0.8. Applications Free radical polymerization has found applications including the manufacture of polystyrene, thermoplastic block copolymer elastomers, cardiovascular stents, chemical surfactants and lubricants. Block copolymers are used for a wide variety of applications including adhesives, footwear and toys. Academic research Free radical polymerization allows the functionalization of carbon nanotubes. CNTs intrinsic electronic properties lead them to form large aggregates in solution, precluding useful applications. Adding small chemical groups to the walls of CNT can eliminate this propensity and tune the response to the surrounding environment. The use of polymers instead of smaller molecules can modify CNT properties (and conversely, nanotubes can modify polymer mechanical and electronic properties). For example, researchers coated carbon nanotubes with polystyrene by first polymerizing polystyrene via chain radical polymerization and subsequently mixing it at 130 °C with carbon nanotubes to generate radicals and graft them onto the walls of carbon nanotubes (Figure 27). Chain growth polymerization ("grafting to") synthesizes a polymer with predetermined properties. Purification of the polymer can be used to obtain a more uniform length distribution before grafting. Conversely, “grafting from”, with radical polymerization techniques such as atom transfer radical polymerization (ATRP) or nitroxide-mediated polymerization (NMP), allows rapid growth of high molecular weight polymers. Radical polymerization also aids synthesis of nanocomposite hydrogels. These gels are made of water-swellable nano-scale clay (especially those classed as smectites) enveloped by a network polymer. Aqueous dispersions of clay are treated with an initiator and a catalyst and the organic monomer, generally an acrylamide. Polymers grow off the initiators that are in turn bound to the clay. Due to recombination and disproportionation reactions, growing polymer chains bind to one another, forming a strong, cross-linked network polymer, with clay particles acting as branching points for multiple polymer chain segments. Free radical polymerization used in this context allows the synthesis of polymers from a wide variety of substrates (the chemistries of suitable clays vary). Termination reactions unique to chain growth polymerization produce a material with flexibility, mechanical strength and biocompatibility. 
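As a worked illustration of the Q–e scheme described above, the snippet below uses the standard Alfrey–Price forms r1 = (Q1/Q2)·exp(-e1·(e1 - e2)) and r2 = (Q2/Q1)·exp(-e2·(e2 - e1)). Styrene is the reference monomer with Q = 1.0 and e = -0.8 as stated above; the values assumed for the second monomer are rounded, methyl-methacrylate-like figures used only for illustration, not authoritative data.

from math import exp

# Alfrey-Price Q-e scheme:
#   r1 = (Q1/Q2) * exp(-e1*(e1 - e2)),  r2 = (Q2/Q1) * exp(-e2*(e2 - e1))
Q1, e1 = 1.00, -0.80   # monomer 1: styrene, the reference monomer (from the text)
Q2, e2 = 0.74,  0.40   # monomer 2: assumed, methyl-methacrylate-like values

r1 = (Q1 / Q2) * exp(-e1 * (e1 - e2))
r2 = (Q2 / Q1) * exp(-e2 * (e2 - e1))
print(f"r1 ≈ {r1:.2f}, r2 ≈ {r2:.2f}")   # both below 1: a tendency toward alternation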
See also Anionic addition polymerization Chain-growth polymerisation Chain transfer Cobalt-mediated radical polymerization Living polymerization Nitroxide mediated radical polymerization Polymer Polymerization Reversible-deactivation radical polymerization Step-growth polymerization References External links Addition Polymerization Free Radical Polymerization (video animation) Free Radical Polymerization - Chain Transfer Free Radical Vinyl Polymerization The Polymerization of Alkenes Polymer Synthesis Radical Reaction Chemistry Stable Free Radical Polymerization Reaction mechanisms Polymerization reactions
Radical polymerization
[ "Chemistry", "Materials_science" ]
5,728
[ "Reaction mechanisms", "Polymer chemistry", "Physical organic chemistry", "Chemical kinetics", "Polymerization reactions" ]
1,948,002
https://en.wikipedia.org/wiki/Plasma%20acceleration
Plasma acceleration is a technique for accelerating charged particles, such as electrons or ions, using the electric field associated with electron plasma wave or other high-gradient plasma structures. These plasma acceleration structures are created using either ultra-short laser pulses or energetic particle beams that are matched to the plasma parameters. The technique offers a way to build affordable and compact particle accelerators. Once fully developed, the technology can replace many of the traditional accelerators with applications ranging from high energy physics to medical and industrial applications. Medical applications include betatron and free-electron light sources for diagnostics or radiation therapy and proton sources for hadron therapy. History The basic concepts of plasma acceleration and its possibilities were originally conceived by Toshiki Tajima and John M. Dawson of UCLA in 1979. The initial experimental designs for a "wakefield" accelerator were conceived at UCLA by Chandrashekhar J. Joshi et al. The Texas Petawatt laser facility at the University of Texas at Austin accelerated electrons to 2 GeV over about 2 cm (1.6×1021 gn). This record was broken (by more than twice) in 2014 by the scientists at the BELLA Center at the Lawrence Berkeley National Laboratory, when they produced electron beams up to 4.25 GeV. In late 2014, researchers from SLAC National Accelerator Laboratory using the Facility for Advanced Accelerator Experimental Tests (FACET) published proof of the viability of plasma acceleration technology. It was shown to be able to achieve 400 to 500 times higher energy transfer compared to a general linear accelerator design. A proof-of-principle plasma wakefield accelerator experiment using a 400 GeV proton beam from the Super Proton Synchrotron is currently operating at CERN. The experiment, named AWAKE, started experiments at the end of 2016. In August 2020 scientists reported the achievement of a milestone in the development of laser-plasma accelerators and demonstrate their longest stable operation of 30 hours. Concept Wakefield acceleration A plasma consists of a fluid of positive and negative charged particles, generally created by heating or photo-ionizing (direct / tunneling / multi-photon / barrier-suppression) a dilute gas. Under normal conditions the plasma will be macroscopically neutral (or quasi-neutral), an equal mix of electrons and ions in equilibrium. However, if a strong enough external electric or electromagnetic field is applied, the plasma electrons, which are very light in comparison to the background ions (by a factor of 1836), will separate spatially from the massive ions creating a charge imbalance in the perturbed region. A particle injected into such a plasma would be accelerated by the charge separation field, but since the magnitude of this separation is generally similar to that of the external field, apparently nothing is gained in comparison to a conventional system that simply applies the field directly to the particle. But, the plasma medium acts as the most efficient transformer (currently known) of the transverse field of an electromagnetic wave into longitudinal fields of a plasma wave. In existing accelerator technology various appropriately designed materials are used to convert from transverse propagating extremely intense fields into longitudinal fields that the particles can get a kick from. 
This process is achieved using two approaches: standing-wave structures (such as resonant cavities) or traveling-wave structures such as disc-loaded waveguides etc. But, the limitation of materials interacting with higher and higher fields is that they eventually get destroyed through ionization and breakdown. Here the plasma accelerator science provides the breakthrough to generate, sustain, and exploit the highest fields ever produced in the laboratory. The acceleration gradient produced by a plasma wake is in the order of the wave breaking field, which is In this equation, is the electric field, is the speed of light in vacuum, is the mass of the electron, is the plasma electron density (in particles per unit volume), and is the permittivity of free space. What makes the system useful is the possibility of introducing waves of very high charge separation that propagate through the plasma similar to the traveling-wave concept in the conventional accelerator. The accelerator thereby phase-locks a particle bunch on a wave and this loaded space-charge wave accelerates them to higher velocities while retaining the bunch properties. Currently, plasma wakes are excited by appropriately shaped laser pulses or electron bunches. Plasma electrons are driven out and away from the center of wake by the ponderomotive force or the electrostatic fields from the exciting fields (electron or laser). Plasma ions are too massive to move significantly and are assumed to be stationary at the time-scales of plasma electron response to the exciting fields. As the exciting fields pass through the plasma, the plasma electrons experience a massive attractive force back to the center of the wake by the positive plasma ions chamber, bubble or column that have remained positioned there, as they were originally in the unexcited plasma. This forms a full wake of an extremely high longitudinal (accelerating) and transverse (focusing) electric field. The positive charge from ions in the charge-separation region then creates a huge gradient between the back of the wake, where there are many electrons, and the middle of the wake, where there are mostly ions. Any electrons in between these two areas will be accelerated (in self-injection mechanism). In the external bunch injection schemes the electrons are strategically injected to arrive at the evacuated region during maximum excursion or expulsion of the plasma electrons. A beam-driven wake can be created by sending a relativistic proton or electron bunch into an appropriate plasma or gas. In some cases, the gas can be ionized by the electron bunch, so that the electron bunch both creates the plasma and the wake. This requires an electron bunch with relatively high charge and thus strong fields. The high fields of the electron bunch then push the plasma electrons out from the center, creating the wake. Similar to a beam-driven wake, a laser pulse can be used to excite the plasma wake. As the pulse travels through the plasma, the electric field of the light separates the electrons and nucleons in the same way that an external field would. If the fields are strong enough, all of the ionized plasma electrons can be removed from the center of the wake: this is known as the "blowout regime". Although the particles are not moving very quickly during this period, macroscopically it appears that a "bubble" of charge is moving through the plasma at close to the speed of light. 
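The wave-breaking estimate mentioned above can be evaluated numerically. The sketch below uses the cold, non-relativistic form E_wb = c·sqrt(m_e·n_e/ε_0), equivalent to m_e·c·ω_p/e with ω_p the electron plasma frequency; the densities looped over are typical laser-plasma values chosen only for illustration.

from math import sqrt

# Physical constants (SI)
c     = 2.998e8     # speed of light, m/s
m_e   = 9.109e-31   # electron mass, kg
eps_0 = 8.854e-12   # vacuum permittivity, F/m

def wave_breaking_field(n_e_per_cm3):
    """Cold wave-breaking field E = c * sqrt(m_e * n_e / eps_0); n_e given in cm^-3."""
    n_e = n_e_per_cm3 * 1e6   # convert cm^-3 to m^-3
    return c * sqrt(m_e * n_e / eps_0)

for n in (1e17, 1e18, 1e19):  # representative plasma densities, cm^-3 (assumed)
    print(f"n_e = {n:.0e} cm^-3 -> E_wb ≈ {wave_breaking_field(n) / 1e9:.0f} GV/m")

For a density of 10^18 cm^-3 this gives roughly 100 GV/m, several orders of magnitude above conventional RF gradients, which is the point made in the comparison below.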
The bubble is the region cleared of electrons that is thus positively charged, followed by the region where the electrons fall back into the center and is thus negatively charged. This leads to a small area of very strong potential gradient following the laser pulse. In the linear regime, plasma electrons aren't completely removed from the center of the wake. In this case, the linear plasma wave equation can be applied. However, the wake appears very similar to the blowout regime, and the physics of acceleration is the same. It is this "wakefield" that is used for particle acceleration. A particle injected into the plasma near the high-density area will experience an acceleration toward (or away) from it, an acceleration that continues as the wakefield travels through the column, until the particle eventually reaches the speed of the wakefield. Even higher energies can be reached by injecting the particle to travel across the face of the wakefield, much like a surfer can travel at speeds much higher than the wave they surf on by traveling across it. Accelerators designed to take advantage of this technique have been referred to colloquially as "surfatrons". The wakefield acceleration can be categorized into several types according to how the electron plasma wave is formed: plasma wakefield acceleration (PWFA): The electron plasma wave is formed by an electron or proton bunch. laser wakefield acceleration (LWFA): A laser pulse is introduced to form an electron plasma wave. laser beat-wave acceleration (LBWA): The electron plasma wave arises based on different frequency generation of two laser pulses. The "Surfatron" is an improvement on this technique. self-modulated laser wakefield acceleration (SMLWFA): The formation of an electron plasma wave is achieved by a laser pulse modulated by stimulated Raman forward scattering instability. Some experiments are: Target normal sheath acceleration Laser–solid-target-based ion acceleration has become an active area of research, especially since the discovery of the target normal sheath acceleration (TNSA). This new scheme offers further improvements in hadrontherapy, fusion fast ignition and sources for fundamental research. Nonetheless, the maximum energies achieved so far with this scheme are in the order of 100 MeV energies. The main laser-solid acceleration scheme is Target Normal Sheath Acceleration, TNSA as it is usually referred as. TNSA like other laser based acceleration techniques is not capable of directly accelerating the ions. Instead it is a multi-step process consisting of several stages each with its associated difficulty to model mathematically. For this reason, so far there exists no perfect theoretical model capable of producing quantitative predictions for the TNSA mechanism. Particle-in-Cell simulations are usually employed to efficiently achieve predictions. The scheme employs a solid target that interacts firstly with the laser prepulse, this ionises the target turning it into a plasma and causing a pre-expansion of the target front. Which produces an underdense plasma region at the front of the target, the so-called preplasma. Once the main laser pulse arrives at the target front it will then propagate through this underdense region and be reflected from the front surface of the target propagating back through the preplasma. Throughout this process the laser has heated up the electrons in the underdense region and accelerated them via stochastic heating. 
This heating process is incredibly important, producing a high temperature electron populations is key for the next steps of the process. The importance of the preplasma in the electron heating process has recently been studied both theoretically and experimentally showing how longer preplasmas lead to stronger electron heating and an enhancement in TNSA. The hot electrons propagate through the solid target and exit it through the rear end. In doing so, the electrons produce an incredibly strong electric field, in the order of TV/m, through charge separation. This electric field, also referred to as the sheath field due to its resemblance with the shape of a sheath from a sword, is responsible for the acceleration of the ions. On the rear face of the target there is a small layer of contaminants (usually light hydrocarbons and water vapor). These contaminants are ionised by the strong electric field generated by the hot electrons and then accelerated. Which leads to an energetic ion beam and completes the acceleration process. Responsible for the spiky, fast ion front of the expanding plasma is an ion wave breaking process that takes place in the initial phase of the evolution and is described by the Sack-Schamel equation. Comparison with RF acceleration The advantage of plasma acceleration is that its acceleration field can be much stronger than that of conventional radio-frequency (RF) accelerators. In RF accelerators, the field has an upper limit determined by the threshold for dielectric breakdown of the acceleration tube. This limits the amount of acceleration over any given length, requiring very long accelerators to reach high energies. In contrast, the maximum field in a plasma is defined by mechanical qualities and turbulence, but is generally several orders of magnitude stronger than with RF accelerators. It is hoped that a compact particle accelerator can be created based on plasma acceleration techniques or accelerators for much higher energy can be built, if long accelerators are realizable with an accelerating field of 10 GV/m. Current experimental devices show accelerating gradients several orders of magnitude better than current particle accelerators over very short distances, and about one order of magnitude better (1 GeV/m vs 0.1 GeV/m for an RF accelerator) at the one meter scale. For example, an experimental laser plasma accelerator at Lawrence Berkeley National Laboratory accelerates electrons to 1 GeV over about 3.3 cm (5.4×1020 gn), and one conventional accelerator (highest electron energy accelerator) at SLAC requires 64 m to reach the same energy. Similarly, using plasmas an energy gain of more than 40 GeV was achieved using the SLAC SLC beam (42 GeV) in just 85 cm using a plasma wakefield accelerator (8.9×1020 gn). Application In the framework of HORIZON 2020 which is the Framework Programmes for Research and Technological Development the Conceptual Design Report of EuPRAXIA project “(European Plasma Research Accelerator with eXcellence In Applications”) was worked out by 74 scientific institutes. To find out which will be the most suitable technology, laserdriven (laser wakefield acceleration, LWFA), electron beam-driven (plasma wakefield acceleration, PWFA) as well as hybrid (combining LWFA and PWFA) acceleration approaches are under consideration. The beam-driven plasma wakefield acceleration facility will be built in the INFN National Laboratory of Frascati (LNF) near Rome in Italy. 
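Returning to the comparison with RF acceleration above, a back-of-the-envelope length estimate makes the difference concrete. The gradients below are the rough figures quoted in the text, and treating them as uniform over the whole accelerator length is a deliberate simplification.

# Rough accelerator length needed to reach a target electron energy at a given gradient.
# Assumes a uniform accelerating gradient over the full length (a simplification).

def length_needed(energy_GeV, gradient_GeV_per_m):
    return energy_GeV / gradient_GeV_per_m

target = 42.0   # GeV, the SLAC beam energy quoted above
for label, gradient in [("RF accelerator (~0.1 GeV/m)",              0.1),
                        ("metre-scale plasma stage (~1 GeV/m)",      1.0),
                        ("SLAC plasma-wakefield result (~50 GeV/m)", 50.0)]:
    print(f"{label:42s}: ~{length_needed(target, gradient):8.2f} m")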
The second site, for the laser-driven (laser wakefield acceleration, LWFA) facility, is however still undecided; a decision is expected in mid-2025. See also Dielectric wall accelerator References External links Plasma Wakefield Acceleration - A Guide Riding the Plasma Wave of the Future acceleration Accelerator physics
Plasma acceleration
[ "Physics" ]
2,725
[ "Applied and interdisciplinary physics", "Plasma physics", "Plasma technology and applications", "Experimental physics", "Accelerator physics" ]
1,949,009
https://en.wikipedia.org/wiki/Heat%20capacity%20ratio
In thermal physics and thermodynamics, the heat capacity ratio, also known as the adiabatic index, the ratio of specific heats, or Laplace's coefficient, is the ratio of the heat capacity at constant pressure () to heat capacity at constant volume (). It is sometimes also known as the isentropic expansion factor and is denoted by (gamma) for an ideal gas or (kappa), the isentropic exponent for a real gas. The symbol is used by aerospace and chemical engineers. where is the heat capacity, the molar heat capacity (heat capacity per mole), and the specific heat capacity (heat capacity per unit mass) of a gas. The suffixes and refer to constant-pressure and constant-volume conditions respectively. The heat capacity ratio is important for its applications in thermodynamical reversible processes, especially involving ideal gases; the speed of sound depends on this factor. Thought experiment To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped. The amount of energy added equals , with representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked piston is proportional to , whereas the total amount of heat added is proportional to . Therefore, the heat capacity ratio in this example is 1.4. Another way of understanding the difference between and is that applies if work is done to the system, which causes a change in volume (such as by moving a piston so as to compress the contents of a cylinder), or if work is done by the system, which changes its temperature (such as heating the gas in a cylinder to cause a piston to move). applies only if , that is, no work is done. Consider the difference between adding heat to the gas with a locked piston and adding heat with a piston free to move, so that pressure remains constant. In the second case, the gas will both heat and expand, causing the piston to do mechanical work on the atmosphere. The heat that is added to the gas goes only partly into heating the gas, while the rest is transformed into the mechanical work performed by the piston. In the first, constant-volume case (locked piston), there is no external motion, and thus no mechanical work is done on the atmosphere; is used. In the second case, additional work is done as the volume changes, so the amount of heat required to raise the gas temperature (the specific heat capacity) is higher for this constant-pressure case. Ideal-gas relations For an ideal gas, the molar heat capacity is at most a function of temperature, since the internal energy is solely a function of temperature for a closed system, i.e., , where is the amount of substance in moles. 
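The piston thought experiment above can be restated numerically for an ideal diatomic gas: heating with the piston locked requires Q = n·C_V·ΔT, while reaching the same final temperature with the piston free requires Q = n·C_p·ΔT in total, roughly 40% more. The amount of gas and the temperature step below are arbitrary illustrative choices.

R  = 8.314   # gas constant, J/(mol K)
n  = 1.0     # amount of gas, mol (assumed)
dT = 50.0    # temperature rise, K (assumed)

C_v = 5.0 / 2.0 * R   # ideal diatomic gas, vibrations not excited
C_p = C_v + R         # Mayer's relation for an ideal gas

Q_const_volume   = n * C_v * dT
Q_const_pressure = n * C_p * dT

print(f"Q at constant volume   : {Q_const_volume:7.1f} J")
print(f"Q at constant pressure : {Q_const_pressure:7.1f} J")
print(f"ratio C_p/C_v = gamma  : {C_p / C_v:.3f}")   # about 1.4, i.e. ~40% more heat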
In thermodynamic terms, this is a consequence of the fact that the internal pressure of an ideal gas vanishes. Mayer's relation allows us to deduce the value of from the more easily measured (and more commonly tabulated) value of : This relation may be used to show the heat capacities may be expressed in terms of the heat capacity ratio () and the gas constant (): Relation with degrees of freedom The classical equipartition theorem predicts that the heat capacity ratio () for an ideal gas can be related to the thermally accessible degrees of freedom () of a molecule by Thus we observe that for a monatomic gas, with 3 translational degrees of freedom per atom: As an example of this behavior, at 273 K (0 °C) the noble gases He, Ne, and Ar all have nearly the same value of , equal to 1.664. For a diatomic gas, often 5 degrees of freedom are assumed to contribute at room temperature since each molecule has 3 translational and 2 rotational degrees of freedom, and the single vibrational degree of freedom is often not included since vibrations are often not thermally active except at high temperatures, as predicted by quantum statistical mechanics. Thus we have For example, terrestrial air is primarily made up of diatomic gases (around 78% nitrogen, N2, and 21% oxygen, O2), and at standard conditions it can be considered to be an ideal gas. The above value of 1.4 is highly consistent with the measured adiabatic indices for dry air within a temperature range of 0–200 °C, exhibiting a deviation of only 0.2% (see tabulation above). For a linear triatomic molecule such as , there are only 5 degrees of freedom (3 translations and 2 rotations), assuming vibrational modes are not excited. However, as mass increases and the frequency of vibrational modes decreases, vibrational degrees of freedom start to enter into the equation at far lower temperatures than is typically the case for diatomic molecules. For example, it requires a far larger temperature to excite the single vibrational mode for , for which one quantum of vibration is a fairly large amount of energy, than for the bending or stretching vibrations of . For a non-linear triatomic gas, such as water vapor, which has 3 translational and 3 rotational degrees of freedom, this model predicts Real-gas relations As noted above, as temperature increases, higher-energy vibrational states become accessible to molecular gases, thus increasing the number of degrees of freedom and lowering . Conversely, as the temperature is lowered, rotational degrees of freedom may become unequally partitioned as well. As a result, both and increase with increasing temperature. Despite this, if the density is fairly low and intermolecular forces are negligible, the two heat capacities may still continue to differ from each other by a fixed constant (as above, ), which reflects the relatively constant difference in work done during expansion for constant pressure vs. constant volume conditions. Thus, the ratio of the two values, , decreases with increasing temperature. However, when the gas density is sufficiently high and intermolecular forces are important, thermodynamic expressions may sometimes be used to accurately describe the relationship between the two heat capacities, as explained below. Unfortunately the situation can become considerably more complex if the temperature is sufficiently high for molecules to dissociate or carry out other chemical reactions, in which case thermodynamic expressions arising from simple equations of state may not be adequate. 
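The degree-of-freedom relation described above, γ = 1 + 2/f, can be tabulated directly; the sketch below uses the f values discussed in the text, with vibrational modes assumed frozen out.

# gamma = 1 + 2/f for an ideal gas with f thermally active degrees of freedom
cases = {
    "monatomic (He, Ne, Ar): f = 3":               3,
    "diatomic (N2, O2), no vibration: f = 5":      5,
    "linear triatomic (CO2), no vibration: f = 5": 5,
    "non-linear triatomic (H2O vapour): f = 6":    6,
}
for label, f in cases.items():
    print(f"{label:45s} gamma = {1 + 2 / f:.3f}")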
Thermodynamic expressions Values based on approximations (particularly ) are in many cases not sufficiently accurate for practical engineering calculations, such as flow rates through pipes and valves at moderate to high pressures. An experimental value should be used rather than one based on this approximation, where possible. A rigorous value for the ratio can also be calculated by determining from the residual properties expressed as Values for are readily available and recorded, but values for need to be determined via relations such as these. See relations between specific heats for the derivation of the thermodynamic relations between the heat capacities. The above definition is the approach used to develop rigorous expressions from equations of state (such as Peng–Robinson), which match experimental values so closely that there is little need to develop a database of ratios or values. Values can also be determined through finite-difference approximation. Adiabatic process This ratio gives the important relation for an isentropic (quasistatic, reversible, adiabatic process) process of a simple compressible calorically-perfect ideal gas: is constant Using the ideal gas law, : is constant is constant where is the pressure of the gas, is the volume, and is the thermodynamic temperature. In gas dynamics we are interested in the local relations between pressure, density and temperature, rather than considering a fixed quantity of gas. By considering the density as the inverse of the volume for a unit mass, we can take in these relations. Since for constant entropy, , we have , or , it follows that For an imperfect or non-ideal gas, Chandrasekhar defined three different adiabatic indices so that the adiabatic relations can be written in the same form as above; these are used in the theory of stellar structure: All of these are equal to in the case of an ideal gas. See also Relations between heat capacities Heat capacity Specific heat capacity Speed of sound Thermodynamic equations Thermodynamics Volumetric heat capacity Notes References Thermodynamic properties Physical quantities Ratios Thought experiments in physics
Heat capacity ratio
[ "Physics", "Chemistry", "Mathematics" ]
1,878
[ "Physical phenomena", "Thermodynamic properties", "Physical quantities", "Quantity", "Arithmetic", "Thermodynamics", "Physical properties", "Ratios" ]
1,949,447
https://en.wikipedia.org/wiki/Motion%20control
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know if the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, it becomes a closed-loop system. Typically the position or velocity of machines is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries. Motion control encompasses every technology related to the movement of objects. It covers every motion system from micro-sized systems such as silicon-type micro induction actuators to macro-sized systems such as a space platform. These days, however, the focus of motion control is the special control technology of motion systems with electric actuators such as DC/AC servo motors. Control of robotic manipulators is also included in the field of motion control because most robotic manipulators are driven by electric servo motors and the key objective is the control of motion. Overview The basic architecture of a motion control system contains: A motion controller, which calculates and controls the mechanical trajectories (motion profile) an actuator must follow (i.e., motion planning) and, in closed loop systems, employs feedback to make control corrections and thus implement closed-loop control. A drive or amplifier to transform the control signal from the motion controller into energy that is presented to the actuator. Newer "intelligent" drives can close the position and velocity loops internally, resulting in much more accurate control. A prime mover or actuator such as a hydraulic pump, pneumatic cylinder, linear actuator, or electric motor for output motion. In closed loop systems, one or more feedback sensors such as absolute and incremental encoders, resolvers or Hall effect devices to return the position or velocity of the actuator to the motion controller in order to close the position or velocity control loops. Mechanical components to transform the motion of the actuator into the desired motion, including: gears, shafting, ball screws, belts, linkages, and linear and rotational bearings. The interface between the motion controller and the drives it controls is critical when coordinated motion is required, as it must provide tight synchronization. 
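To make the open-loop versus closed-loop distinction above concrete, here is a toy single-axis simulation in which the actuator has a gain error the controller does not know about: the open-loop command undershoots, while a simple proportional feedback loop on the measured position converges to the set-point. The plant model, gains and time step are invented for illustration and do not describe any real drive.

# Toy 1-axis example: open-loop command vs. closed-loop proportional control.
# The "plant" has a 20% gain error that the open-loop controller does not know about.

setpoint   = 100.0      # desired position, mm (assumed)
plant_gain = 0.8        # actual motion produced per unit commanded (unknown to controller)
dt, steps  = 0.01, 500  # integration step (s) and number of steps (assumed)

# Open loop: send the set-point as a position command once and hope for the best.
open_loop_position = plant_gain * setpoint

# Closed loop: measure position, feed the error back through a proportional gain.
kp, position = 5.0, 0.0
for _ in range(steps):
    error    = setpoint - position      # feedback from the position sensor
    velocity = kp * error * plant_gain  # actuator response to the velocity command
    position += velocity * dt           # integrate the resulting motion

print(f"open-loop final position  : {open_loop_position:6.1f} mm")
print(f"closed-loop final position: {position:6.1f} mm")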
Historically the only open interface was an analog signal, until open interfaces were developed that satisfied the requirements of coordinated motion control, the first being SERCOS in 1991, which has since been enhanced to SERCOS III. Later interfaces capable of motion control include Ethernet/IP, Profinet IRT, Ethernet Powerlink, and EtherCAT. Common control functions include: Velocity control. Position (point-to-point) control: There are several methods for computing a motion trajectory. These are often based on the velocity profiles of a move such as a triangular profile, trapezoidal profile, or an S-curve profile. Pressure or force control. Impedance control: This type of control is suitable for environment interaction and object manipulation, such as in robotics. Electronic gearing (or cam profiling): The position of a slave axis is mathematically linked to the position of a master axis. A good example of this would be in a system where two rotating drums turn at a given ratio to each other. A more advanced case of electronic gearing is electronic camming. With electronic camming, a slave axis follows a profile that is a function of the master position. This profile need not be linear, but it must be a well-defined function of the master position. See also Match moving, for motion tracking in computer-generated imagery Mechatronics, the science of computer-controlled smart motion devices Control system PID controller, proportional-integral-derivative controller Slewing Pneumatics Ethernet/IP High performance positioning system for controlling high precision at high speed External links What is a Motion Controller? Technical Summary for Motion Engineers Further reading Tan K. K., T. H. Lee and S. Huang, Precision motion control: Design and implementation, 2nd ed., London, Springer, 2008. Ellis, George, Control System Design Guide, Fourth Edition: Using Your Computer to Understand and Diagnose Feedback Controllers References Control theory Articles containing video clips
Motion control
[ "Physics", "Mathematics", "Engineering" ]
1,070
[ "Physical phenomena", "Applied mathematics", "Control theory", "Automation", "Motion (physics)", "Motion control", "Dynamical systems" ]
1,949,853
https://en.wikipedia.org/wiki/Aquificaceae
The Aquificaceae family are bacteria that live in harsh environmental settings such as hot springs, sulfur pools, and hydrothermal vents. Although they are true bacteria as opposed to the other inhabitants of extreme environments, the Archaea, Aquificaceae genera are an early phylogenetic branch. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) Unassigned species: "Aquifex aeolicus" Huber and Stetter 2001 See also List of bacterial orders List of bacteria genera References Reysenbach A-L, Phylum BI (2001) Aquificae phy. nov. In: Boone DR, Castenholz RW (eds) Bergey's Manual of Systematic Bacteriology. Springer-Verlag, Berlin, 2nd edn., pp. 359–367 Aquificota
Aquificaceae
[ "Biology" ]
202
[ "Bacteria stubs", "Bacteria" ]
1,949,856
https://en.wikipedia.org/wiki/Hydrogenothermaceae
The Hydrogenothermaceae family are bacteria that live in harsh environmental settings. They have been found in hot springs, sulfur pools, and thermal ocean vents. They are true bacteria as opposed to the other inhabitants of extreme environments, the Archaea. An example occurrence of certain extremophiles in this family are organisms of the genus Sulfurihydrogenibium that are capable of surviving in extremely hot environments such as Hverigerdi, Iceland. Obtaining energy Hydrogenothermaceae families consist of aerobic or microaerophilic bacteria, which generally obtain energy by oxidation of hydrogen or reduced sulfur compounds by molecular oxygen. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LSPN) and the National Center for Biotechnology Information (NCBI). See also List of bacterial orders List of bacteria genera References Hedlund, Brian P., et al. “Isolation of Diverse Members of the Aquificales from Geothermal Springs in Tengchong, China.” Frontiers in Microbiology, vol. 6, 2015, . External links Aquificota
Hydrogenothermaceae
[ "Biology" ]
237
[ "Bacteria stubs", "Bacteria" ]
1,950,766
https://en.wikipedia.org/wiki/Graph%20isomorphism%20problem
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. The problem is not known to be solvable in polynomial time nor to be NP-complete, and therefore may be in the computational complexity class NP-intermediate. It is known that the graph isomorphism problem is in the low hierarchy of class NP, which implies that it is not NP-complete unless the polynomial time hierarchy collapses to its second level. At the same time, isomorphism for many special classes of graphs can be solved in polynomial time, and in practice graph isomorphism can often be solved efficiently. This problem is a special case of the subgraph isomorphism problem, which asks whether a given graph G contains a subgraph that is isomorphic to another given graph H; this problem is known to be NP-complete. It is also known to be a special case of the non-abelian hidden subgroup problem over the symmetric group. In the area of image recognition it is known as exact graph matching. State of the art In November 2015, László Babai announced a quasi-polynomial time algorithm for all graphs, that is, one with running time 2^(O((log n)^c)) for some fixed c > 0. On January 4, 2017, Babai retracted the quasi-polynomial claim and stated a sub-exponential time bound instead after Harald Helfgott discovered a flaw in the proof. On January 9, 2017, Babai announced a correction (published in full on January 19) and restored the quasi-polynomial claim, with Helfgott confirming the fix. Helfgott further claims that one can take c = 3, so the running time is 2^(O((log n)^3)). Prior to this, the best accepted theoretical algorithm was based on earlier work combined with a subfactorial algorithm of V. N. Zemlyachenko. That algorithm has run time 2^(O(√(n log n))) for graphs with n vertices and relies on the classification of finite simple groups. Without this classification theorem, a slightly weaker bound was obtained first for strongly regular graphs and then extended to general graphs; the exponent for strongly regular graphs was later improved. For hypergraphs of bounded rank, a subexponential upper bound matching the case of graphs has also been obtained. There are several competing practical algorithms for graph isomorphism. While they seem to perform well on random graphs, a major drawback of these algorithms is their exponential time performance in the worst case. The graph isomorphism problem is computationally equivalent to the problem of computing the automorphism group of a graph, and is weaker than the permutation group isomorphism problem and the permutation group intersection problem; for the latter two problems, complexity bounds similar to that for graph isomorphism have been obtained. Solved special cases A number of important special cases of the graph isomorphism problem have efficient, polynomial-time solutions: Trees Planar graphs (In fact, planar graph isomorphism is in log space, a class contained in P) Interval graphs Permutation graphs Circulant graphs Bounded-parameter graphs Graphs of bounded treewidth Graphs of bounded genus (Planar graphs are graphs of genus 0.) Graphs of bounded degree Graphs with bounded eigenvalue multiplicity k-Contractible graphs (a generalization of bounded degree and bounded genus) Color-preserving isomorphism of colored graphs with bounded color multiplicity (i.e., at most k vertices have the same color for a fixed k) is in class NC, which is a subclass of P. 
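For very small graphs, isomorphism can be decided by brute force over all vertex relabellings, which is a useful baseline even though it becomes hopeless beyond a handful of vertices; practical solvers rely on far more sophisticated pruning and canonization. The example graphs below are made up for illustration.

from itertools import permutations

def are_isomorphic(edges_g, edges_h, n):
    """Brute-force isomorphism test for two simple graphs on vertices 0..n-1."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):
        return False
    for perm in permutations(range(n)):   # try every relabelling of G
        mapped = {frozenset((perm[a], perm[b])) for a, b in g}
        if mapped == h:
            return True
    return False

# A 4-cycle and the same cycle with its vertices relabelled (assumed test case)
c4        = [(0, 1), (1, 2), (2, 3), (3, 0)]
relabeled = [(2, 0), (0, 3), (3, 1), (1, 2)]
print(are_isomorphic(c4, relabeled, 4))                          # True
print(are_isomorphic(c4, [(0, 1), (1, 2), (2, 3), (1, 3)], 4))   # False: triangle with a pendant edge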
Complexity class GI Since the graph isomorphism problem is neither known to be NP-complete nor known to be tractable, researchers have sought to gain insight into the problem by defining a new class GI, the set of problems with a polynomial-time Turing reduction to the graph isomorphism problem. If in fact the graph isomorphism problem is solvable in polynomial time, GI would equal P. On the other hand, if the problem is NP-complete, GI would equal NP and all problems in NP would be solvable in quasi-polynomial time. As is common for complexity classes within the polynomial time hierarchy, a problem is called GI-hard if there is a polynomial-time Turing reduction from any problem in GI to that problem, i.e., a polynomial-time solution to a GI-hard problem would yield a polynomial-time solution to the graph isomorphism problem (and so all problems in GI). A problem is called complete for GI, or GI-complete, if it is both GI-hard and a polynomial-time solution to the GI problem would yield a polynomial-time solution to . The graph isomorphism problem is contained in both NP and co-AM. GI is contained in and low for Parity P, as well as contained in the potentially much smaller class SPP. That it lies in Parity P means that the graph isomorphism problem is no harder than determining whether a polynomial-time nondeterministic Turing machine has an even or odd number of accepting paths. GI is also contained in and low for ZPPNP. This essentially means that an efficient Las Vegas algorithm with access to an NP oracle can solve graph isomorphism so easily that it gains no power from being given the ability to do so in constant time. GI-complete and GI-hard problems Isomorphism of other objects There are a number of classes of mathematical objects for which the problem of isomorphism is a GI-complete problem. A number of them are graphs endowed with additional properties or restrictions: digraphs labelled graphs, with the proviso that an isomorphism is not required to preserve the labels, but only the equivalence relation consisting of pairs of vertices with the same label "polarized graphs" (made of a complete graph Km and an empty graph Kn plus some edges connecting the two; their isomorphism must preserve the partition) 2-colored graphs explicitly given finite structures multigraphs hypergraphs finite automata Markov Decision Processes commutative class 3 nilpotent (i.e., xyz = 0 for every elements x, y, z) semigroups finite rank associative algebras over a fixed algebraically closed field with zero squared radical and commutative factor over the radical. context-free grammars normal-form games balanced incomplete block designs Recognizing combinatorial isomorphism of convex polytopes represented by vertex-facet incidences. GI-complete classes of graphs A class of graphs is called GI-complete if recognition of isomorphism for graphs from this subclass is a GI-complete problem. The following classes are GI-complete: connected graphs graphs of diameter 2 and radius 1 directed acyclic graphs regular graphs bipartite graphs without non-trivial strongly regular subgraphs bipartite Eulerian graphs bipartite regular graphs line graphs split graphs chordal graphs regular self-complementary graphs polytopal graphs of general, simple, and simplicial convex polytopes in arbitrary dimensions. Many classes of digraphs are also GI-complete. Other GI-complete problems There are other nontrivial GI-complete problems in addition to isomorphism problems. Finding a graph's automorphism group. 
Counting automorphisms of a graph. The recognition of self-complementarity of a graph or digraph. A clique problem for a class of so-called M-graphs. It is shown that finding an isomorphism for n-vertex graphs is equivalent to finding an n-clique in an M-graph of size n2. This fact is interesting because the problem of finding a clique of order (1 − ε)n in a M-graph of size n2 is NP-complete for arbitrarily small positive ε. The problem of homeomorphism of 2-complexes. The definability problem for first-order logic. The input of this problem is a relational database instance I and a relation R, and the question to answer is whether there exists a first-order query Q (without constants) such that Q evaluated on I gives R as the answer. GI-hard problems The problem of counting the number of isomorphisms between two graphs is polynomial-time equivalent to the problem of telling whether even one exists. The problem of deciding whether two convex polytopes given by either the V-description or H-description are projectively or affinely isomorphic. The latter means existence of a projective or affine map between the spaces that contain the two polytopes (not necessarily of the same dimension) which induces a bijection between the polytopes. Program checking have shown a probabilistic checker for programs for graph isomorphism. Suppose P is a claimed polynomial-time procedure that checks if two graphs are isomorphic, but it is not trusted. To check if graphs G and H are isomorphic: Ask P whether G and H are isomorphic. If the answer is "yes": Attempt to construct an isomorphism using P as subroutine. Mark a vertex u in G and v in H, and modify the graphs to make them distinctive (with a small local change). Ask P if the modified graphs are isomorphic. If no, change v to a different vertex. Continue searching. Either the isomorphism will be found (and can be verified), or P will contradict itself. If the answer is "no": Perform the following 100 times. Choose randomly G or H, and randomly permute its vertices. Ask P if the graph is isomorphic to G and H. (As in AM protocol for graph nonisomorphism). If any of the tests are failed, judge P as invalid program. Otherwise, answer "no". This procedure is polynomial-time and gives the correct answer if P is a correct program for graph isomorphism. If P is not a correct program, but answers correctly on G and H, the checker will either give the correct answer, or detect invalid behaviour of P. If P is not a correct program, and answers incorrectly on G and H, the checker will detect invalid behaviour of P with high probability, or answer wrong with probability 2−100. Notably, P is used only as a blackbox. Applications Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching, i.e., identification of similarities between graphs, is an important tools in these areas. In these areas graph isomorphism problem is known as the exact graph matching. In cheminformatics and in mathematical chemistry, graph isomorphism testing is used to identify a chemical compound within a chemical database. Also, in organic mathematical chemistry graph isomorphism testing is useful for generation of molecular graphs and for computer synthesis. Chemical database search is an example of graphical data mining, where the graph canonization approach is often used. 
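Returning to the checking procedure above, the sketch below implements only its "no" branch: after an untrusted solver P has claimed that G and H are not isomorphic, it is repeatedly shown a randomly relabelled copy of G or H and must identify the copy's source consistently. P is modelled here as an arbitrary callable; the "yes" branch, which reconstructs an explicit isomorphism, is omitted, so this is an illustration of the idea rather than the full checker.

import random

def shuffled_copy(edges, n):
    """Return an isomorphic copy of the graph on vertices 0..n-1 under a random relabelling."""
    perm = list(range(n))
    random.shuffle(perm)
    return [(perm[a], perm[b]) for a, b in edges]

def check_no_answer(P, G, H, n, rounds=100):
    """Spot-check an untrusted solver P after it has answered 'not isomorphic' for (G, H).

    P(g1, g2) is any callable returning True/False. Each round shows P a shuffled copy
    of G or H; a P that truly distinguishes G from H must say the copy is isomorphic to
    its source and, by its own 'no' claim, not to the other graph.
    """
    for _ in range(rounds):
        from_G = random.choice([True, False])
        copy = shuffled_copy(G if from_G else H, n)
        if P(copy, G) != from_G or P(copy, H) != (not from_G):
            return "inconsistent answers: reject the program P"
    return "no inconsistency found: accept the 'no' answer"

A correct solver passes every round, while a program that answers arbitrarily (for instance because G and H are in fact isomorphic) is caught with high probability, in line with the small error probability quoted above.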
In particular, a number of identifiers for chemical substances, such as SMILES and InChI, designed to provide a standard and human-readable way to encode molecular information and to facilitate the search for such information in databases and on the web, use a canonization step in their computation, which is essentially the canonization of the graph which represents the molecule. In electronic design automation graph isomorphism is the basis of the Layout Versus Schematic (LVS) circuit design step, which verifies whether the electric circuits represented by a circuit schematic and an integrated circuit layout are the same. See also Graph automorphism problem Graph canonization Notes References English translation in Journal of Mathematical Sciences 22 (3): 1285–1289, 1983. Full paper in Information and Control 56 (1–2): 1–20, 1983. Also Journal of Computer and System Sciences 37: 312–323, 1988. Surveys and monographs (Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR (Records of Seminars of the Leningrad Department of Steklov Institute of Mathematics of the USSR Academy of Sciences), Vol. 118, pp. 83–158, 1982.) (A brief survey of open questions related to the isomorphism problem for graphs, rings and groups.) (From the book cover: The book focuses on the issue of the computational complexity of the problem and presents several recent results that provide a better understanding of the relative position of the problem in the class NP as well as in other complexity classes.) (This 24th edition of the Column discusses the state of the art for the open problems from the book Computers and Intractability and previous columns, in particular, for Graph Isomorphism.) Software Graph Isomorphism, review of implementations, The Stony Brook Algorithm Repository. Graph algorithms Morphisms Computational problems in graph theory Unsolved problems in computer science Computational complexity theory Quasi-polynomial time algorithms
Graph isomorphism problem
[ "Mathematics" ]
2,675
[ "Computational problems in graph theory", "Functions and mappings", "Mathematical structures", "Unsolved problems in mathematics", "Unsolved problems in computer science", "Mathematical objects", "Graph theory", "Computational mathematics", "Computational problems", "Mathematical relations", "Ca...
1,950,852
https://en.wikipedia.org/wiki/Glossary%20of%20chemical%20formulae
This is a list of common chemical compounds with chemical formulae and CAS numbers, indexed by formula. This complements alternative listing at list of inorganic compounds. There is no complete list of chemical compounds since by nature the list would be infinite. Note: There are elements for which spellings may differ, such as aluminum/aluminium, sulfur/sulphur, and caesium/cesium. A B C C C2 C3 C4 C5 H C7 C8 C9 C10 C15 C20 Ca–Cu D E F G H I K L M N O P R S T U V W X Y Z External links Webelements Landolt Börnstein Organic Index 2004 Chemical compounds Chemical formulas Chemical formulas Lists of chemical compounds Chemical formulae Wikipedia glossaries using tables
Glossary of chemical formulae
[ "Physics", "Chemistry" ]
161
[ "Chemical compounds", "Molecules", "Chemical structures", "nan", "Chemical formulas", "Lists of chemical compounds", "Matter" ]
1,950,953
https://en.wikipedia.org/wiki/Crowbar
A crowbar, also called a wrecking bar, pry bar or prybar, pinch-bar, or occasionally a prise bar or prisebar, colloquially gooseneck, or pig bar, or in Australia a jemmy, is a lever consisting of a metal bar with a single curved end and flattened points, used to force two objects apart or gain mechanical advantage in lifting; often the curved end has a notch for removing nails. The design can be used as any of the three lever classes. The curved end is usually used as a first-class lever, and the flat end as a second-class lever. Designs made from thick flat steel bar are often referred to as utility bars. Materials and construction A common hand tool, the crow bar is typically made of medium-carbon steel, possibly hardened on its ends. Commonly crowbars are forged from long steel stock, either hexagonal or sometimes cylindrical. Alternative designs may be forged with a rounded I-shaped cross-section shaft. Versions using relatively wide flat steel bar are often referred to as "utility" or "flat bars". Etymology and usage The accepted etymology identifies the first component of the word crowbar with the bird-name "crow", perhaps due to the crowbar's resemblance to the feet or beak of a crow. The first use of the term is dated back to . It was also called simply a crow, or iron crow; William Shakespeare used the latter, as in Romeo and Juliet, Act 5, Scene 2: "Get me an iron crow and bring it straight unto my cell." In Daniel Defoe's 1719 novel Robinson Crusoe, the protagonist lacks a pickaxe so uses a crowbar instead: "As for the pickaxe, I made use of the iron crows, which were proper enough, though heavy." Types Types of crowbar include: Alignment pry bar, also referred to as Sleeve bar Cat’s claw pry bar, more simply known as a cat's paw Digging pry bar Flat pry bar Gooseneck pry bar Heavy-duty pry bar Molding pry bar Rolling head pry bar See also Halligan bar Tire iron References Hand tools Woodworking hand tools Mechanical hand tools
Crowbar
[ "Physics", "Engineering" ]
447
[ "Human–machine interaction", "Mechanics", "Mechanical hand tools", "Hand tools" ]
1,951,419
https://en.wikipedia.org/wiki/Critical%20point%20%28thermodynamics%29
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At temperatures above the critical temperature, the gas cannot be liquefied by pressure alone and instead passes continuously into a supercritical phase. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field. Liquid–vapor critical point Overview For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one. The figure shows the schematic P-T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point. The critical point of water occurs at about 647 K (374 °C) and 22.064 MPa (218 atm). In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with the two phases becoming ever more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor dielectric, a bad solvent for electrolytes, and mixes more readily with nonpolar gases and organic molecules. At the critical point, only one phase exists. The heat of vaporization is zero. There is a stationary inflection point in the constant-temperature line (critical isotherm) on a PV diagram. This means that at the critical point (∂p/∂V)T = 0 and (∂²p/∂V²)T = 0. Above the critical point there exists a state of matter that is continuously connected with (can be transformed without phase transition into) both the liquid and the gaseous state. It is called supercritical fluid. The common textbook knowledge that all distinction between liquid and vapor disappears beyond the critical point has been challenged by Fisher and Widom, who identified a p–T line that separates states with different asymptotic statistical properties (Fisher–Widom line). Sometimes the critical point does not manifest in most thermodynamic or mechanical properties, but is "hidden" and reveals itself in the onset of inhomogeneities in elastic moduli, marked changes in the appearance and local properties of non-affine droplets, and a sudden enhancement in defect pair concentration. History The existence of a critical point was first discovered by Charles Cagniard de la Tour in 1822 and named by Dmitri Mendeleev in 1860 and Thomas Andrews in 1869. Cagniard showed that CO2 could be liquefied at 31 °C at a pressure of 73 atm, but not at a slightly higher temperature, even under pressures as high as 3000 atm. 
Theory Solving the above condition for the van der Waals equation, one can compute the critical point as Vc = 3b, pc = a/(27b²) and Tc = 8a/(27Rb), in terms of the molar van der Waals constants a and b. However, the van der Waals equation, based on a mean-field theory, does not hold near the critical point. In particular, it predicts wrong scaling laws. To analyse properties of fluids near the critical point, reduced state variables are sometimes defined relative to the critical properties: Tr = T/Tc, pr = p/pc and Vr = V/Vc. The principle of corresponding states indicates that substances at equal reduced pressures and temperatures have equal reduced volumes. This relationship is approximately true for many substances, but becomes increasingly inaccurate for large values of pr. For some gases, there is an additional correction factor, called Newton's correction, added to the critical temperature and critical pressure calculated in this manner. These are empirically derived values and vary with the pressure range of interest. Table of liquid–vapor critical temperature and pressure for selected substances Mixtures: liquid–liquid critical point The liquid–liquid critical point of a solution, which occurs at the critical solution temperature, occurs at the limit of the two-phase region of the phase diagram. In other words, it is the point at which an infinitesimal change in some thermodynamic variable (such as temperature or pressure) leads to separation of the mixture into two distinct liquid phases, as shown in the polymer–solvent phase diagram to the right. Two types of liquid–liquid critical points are the upper critical solution temperature (UCST), which is the hottest point at which cooling induces phase separation, and the lower critical solution temperature (LCST), which is the coldest point at which heating induces phase separation. Mathematical definition From a theoretical standpoint, the liquid–liquid critical point represents the temperature–concentration extremum of the spinodal curve (as can be seen in the figure to the right). Thus, the liquid–liquid critical point in a two-component system must satisfy two conditions: the condition of the spinodal curve (the second derivative of the free energy with respect to concentration must equal zero), and the extremum condition (the third derivative of the free energy with respect to concentration must also equal zero or the derivative of the spinodal temperature with respect to concentration must equal zero). See also Conformal field theory Critical exponent Critical phenomena (more advanced article) Critical points of the elements (data page) Curie point Joback method, Klincewicz method, Lydersen method (estimation of critical temperature, pressure, and volume from molecular structure) Liquid–liquid critical point Lower critical solution temperature Néel point Percolation thresholds Phase transition Rushbrooke inequality Scale invariance Self-organized criticality Supercritical fluid, Supercritical drying, Supercritical water oxidation, Supercritical fluid extraction Tricritical point Triple point Upper critical solution temperature Widom scaling References Further reading Conformal field theory Critical phenomena Phase transitions Renormalization group Threshold temperatures Gases
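A minimal numerical sketch of the van der Waals critical point and the reduced state variables discussed in the Theory section above; the CO2 coefficients a and b are approximate textbook values, and the function and variable names are purely illustrative:

#include <cstdio>

// Van der Waals critical constants from the molar coefficients a and b:
//   Vc = 3b,   pc = a / (27 b^2),   Tc = 8a / (27 R b)
struct Critical { double Tc, pc, Vc; };

Critical vdwCriticalPoint(double a, double b) {   // a in Pa m^6/mol^2, b in m^3/mol
    const double R = 8.314;                       // gas constant, J/(mol K)
    return { 8.0 * a / (27.0 * R * b), a / (27.0 * b * b), 3.0 * b };
}

int main() {
    Critical c = vdwCriticalPoint(0.364, 4.27e-5);   // approximate values for CO2
    std::printf("Tc = %.0f K, pc = %.1f bar, Vc = %.0f cm^3/mol\n",
                c.Tc, c.pc / 1e5, c.Vc * 1e6);
    // Reduced state variables relative to the critical properties
    double T = 310.0, p = 80.0e5;
    std::printf("Tr = %.2f, pr = %.2f\n", T / c.Tc, p / c.pc);
    return 0;
}

With these coefficients the program returns roughly 304 K and 74 bar, consistent with the 31 °C and 73 atm figures for CO2 quoted in the History section.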
Critical point (thermodynamics)
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,311
[ "Matter", "Phase transitions", "Physical phenomena", "Phases of matter", "Critical phenomena", "Renormalization group", "Threshold temperatures", "Condensed matter physics", "Statistical mechanics", "Gases", "Dynamical systems" ]
1,951,644
https://en.wikipedia.org/wiki/Conversion%20between%20quaternions%20and%20Euler%20angles
Spatial rotations in three dimensions can be parametrized using both Euler angles and unit quaternions. This article explains how to convert between the two representations. Actually this simple use of "quaternions" was first presented by Euler some seventy years earlier than Hamilton to solve the problem of magic squares. For this reason the dynamics community commonly refers to quaternions in this application as "Euler parameters". Definition There are two representations of quaternions. This article uses the more popular Hamilton. A quaternion has 4 real values: (the real part or the scalar part) and (the imaginary part). Defining the norm of the quaternion as follows: A unit quaternion satisfies: We can associate a quaternion with a rotation around an axis by the following expression where α is a simple rotation angle (the value in radians of the angle of rotation) and cos(βx), cos(βy) and cos(βz) are the "direction cosines" of the angles between the three coordinate axes and the axis of rotation. (Euler's Rotation Theorem). Intuition To better understand how "direction cosines" work with quaternions: If the axis of rotation is the x-axis: If the axis of rotation is the y-axis: If the axis of rotation is the z-axis: If the axis of rotation is a vector located 45° ( radians) between the x and y axes: Therefore, the x and y axes "share" influence over the new axis of rotation. Tait–Bryan angles Similarly for Euler angles, we use the Tait Bryan angles (in terms of flight dynamics): Heading – : rotation about the Z-axis Pitch – : rotation about the new Y-axis Bank – : rotation about the new X-axis where the X-axis points forward, Y-axis to the right and Z-axis downward. In the conversion example above the rotation occurs in the order heading, pitch, bank. Rotation matrices The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed (looking along positive axis to origin) rotation by the unit quaternion is given by the inhomogeneous expression: or equivalently, by the homogeneous expression: If is not a unit quaternion then the homogeneous form is still a scalar multiple of a rotation matrix, while the inhomogeneous form is in general no longer an orthogonal matrix. This is why in numerical work the homogeneous form is to be preferred if distortion is to be avoided. The direction cosine matrix (from the rotated Body XYZ coordinates to the original Lab xyz coordinates for a clockwise/lefthand rotation) corresponding to a post-multiply Body 3-2-1 sequence with Euler angles (ψ, θ, φ) is given by: Euler angles (in 3-2-1 sequence) to quaternion conversion By combining the quaternion representations of the Euler rotations we get for the Body 3-2-1 sequence, where the airplane first does yaw (Body-Z) turn during taxiing onto the runway, then pitches (Body-Y) during take-off, and finally rolls (Body-X) in the air. The resulting orientation of Body 3-2-1 sequence (around the capitalized axis in the illustration of Tait–Bryan angles) is equivalent to that of lab 1-2-3 sequence (around the lower-cased axis), where the airplane is rolled first (lab-x axis), and then nosed up around the horizontal lab-y axis, and finally rotated around the vertical lab-z axis (lB = lab2Body): Other rotation sequences use different conventions. Source code Below code in C++ illustrates above conversion: struct Quaternion { double w, x, y, z; }; // This is not in game format, it is in mathematical format. 
#include <cmath>  // for std::sin and std::cos

Quaternion ToQuaternion(double roll, double pitch, double yaw) // roll (x), pitch (y), yaw (z), angles are in radians
{
    // Abbreviations for the various angular functions
    double cr = std::cos(roll * 0.5);
    double sr = std::sin(roll * 0.5);
    double cp = std::cos(pitch * 0.5);
    double sp = std::sin(pitch * 0.5);
    double cy = std::cos(yaw * 0.5);
    double sy = std::sin(yaw * 0.5);

    Quaternion q;
    q.w = cr * cp * cy + sr * sp * sy;
    q.x = sr * cp * cy - cr * sp * sy;
    q.y = cr * sp * cy + sr * cp * sy;
    q.z = cr * cp * sy - sr * sp * cy;

    return q;
}

Quaternion to Euler angles (in 3-2-1 sequence) conversion A direct formula for the conversion from a quaternion to Euler angles in any of the 12 possible sequences exists. For the rest of this section, the formula for the sequence Body 3-2-1 will be shown. If the quaternion is properly normalized, the Euler angles can be obtained from the quaternions via the relations: Note that the arctan functions implemented in computer languages only produce results between −π/2 and π/2, which is why atan2 is used to generate all the correct orientations. Moreover, typical implementations of arctan also might have some numerical disadvantages near zero and one. Some implementations use the equivalent expression: Source code The following C++ program illustrates the conversion above:

#define _USE_MATH_DEFINES
#include <cmath>

struct Quaternion {
    double w, x, y, z;
};

struct EulerAngles {
    double roll, pitch, yaw;
};

// this implementation assumes normalized quaternion
// converts to Euler angles in 3-2-1 sequence
EulerAngles ToEulerAngles(Quaternion q) {
    EulerAngles angles;

    // roll (x-axis rotation)
    double sinr_cosp = 2 * (q.w * q.x + q.y * q.z);
    double cosr_cosp = 1 - 2 * (q.x * q.x + q.y * q.y);
    angles.roll = std::atan2(sinr_cosp, cosr_cosp);

    // pitch (y-axis rotation)
    double sinp = std::sqrt(1 + 2 * (q.w * q.y - q.x * q.z));
    double cosp = std::sqrt(1 - 2 * (q.w * q.y - q.x * q.z));
    angles.pitch = 2 * std::atan2(sinp, cosp) - M_PI / 2;

    // yaw (z-axis rotation)
    double siny_cosp = 2 * (q.w * q.z + q.x * q.y);
    double cosy_cosp = 1 - 2 * (q.y * q.y + q.z * q.z);
    angles.yaw = std::atan2(siny_cosp, cosy_cosp);

    return angles;
}

Singularities One must be aware of singularities in the Euler angle parametrization when the pitch approaches ±90° (north/south pole). These cases must be handled specially. The common name for this situation is gimbal lock. Code to handle the singularities is derived on this site: www.euclideanspace.com Vector rotation Let us define scalar and vector such that quaternion . Note that the canonical way to rotate a three-dimensional vector by a quaternion defining an Euler rotation is via the formula where is a quaternion containing the embedded vector , is a conjugate quaternion, and is the rotated vector . In computational implementations this requires two quaternion multiplications. An alternative approach is to apply the pair of relations where indicates a three-dimensional vector cross product. This involves fewer multiplications and is therefore computationally faster. Numerical tests indicate this latter approach may be up to 30% faster than the original for vector rotation. Proof The general rule for quaternion multiplication involving scalar and vector parts is given by Using this relation one finds for that and upon substitution for the triple product where anti-commutativity of the cross product and has been applied. 
By next exploiting the property that is a unit quaternion so that , along with the standard vector identity one obtains which upon defining can be written in terms of scalar and vector parts as See also Rotation operator (vector space) Quaternions and spatial rotation Euler Angles Rotation matrix Rotation formalisms in three dimensions References External links Q60. How do I convert Euler rotation angles to a quaternion? and related questions at The Matrix and Quaternions FAQ Rotation in three dimensions Euclidean symmetries 3D computer graphics Quaternions
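As an illustrative sketch of the cross-product form of vector rotation described above (this listing is not from the article: it assumes q is a unit quaternion, repeats the article's Quaternion struct for self-containment, and the Vec3, cross and rotate names are purely illustrative):

#include <array>

struct Quaternion { double w, x, y, z; };   // as defined in the article's listings

using Vec3 = std::array<double, 3>;

// Cross product of two 3-vectors
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}

// Rotate v by the unit quaternion q = (w, x, y, z) without forming q v q*:
//   t = 2 (r x v),  v' = v + w t + (r x t),  with r = (x, y, z)
Vec3 rotate(const Quaternion& q, const Vec3& v) {
    Vec3 r = { q.x, q.y, q.z };
    Vec3 t = cross(r, v);
    for (double& c : t) c *= 2.0;       // t = 2 (r x v)
    Vec3 u = cross(r, t);               // r x t
    return { v[0] + q.w * t[0] + u[0],
             v[1] + q.w * t[1] + u[1],
             v[2] + q.w * t[2] + u[2] };
}

This is the two-cross-product shortcut the text describes as avoiding the two full quaternion multiplications of the canonical formula.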
Conversion between quaternions and Euler angles
[ "Physics", "Mathematics" ]
1,972
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Mathematical relations", "Symmetry" ]
21,081,997
https://en.wikipedia.org/wiki/European%20Flight%20Test%20Safety%20Award
The European Flight Test Safety Award was created after the fatal accident of test pilot Gérard Guillaumaud by his fiancée Heidi Biermeier. The regulations of the award state that recipients must be individuals who made significant contributions in the area of safety within flight testing. Award Ceremony The award was first granted in October 2007 in London at the award dinner concluding the 1st European Flight Test Safety Workshops. The workshop is hosted by the Flight Test Safety Committee of Society of Experimental Test Pilots (SETP) and of Society of Flight Test Engineers (SFTE). The recipient is nominated by a jury, consisting of two flight test experts and the founder of the award, Ms. Heidi Biermeier. Recipients 2007 Dipl.-Ing. Dr.-Ing. Dieter W. Reisinger, MSc (Austrian Airlines) 2008 Gérard Temme of CertiFlyer B.V. 2009 Patrick L. Svatek, (Flight test engineer at NAVAIR, now USNTPS Patuxent River) 2010 Billie Flynn, (Experimental Test Pilot) Lockheed Martin, 2011 General (retired.) Desmond Barker (CSIR), author of the book Zero Error Margin - Airshow Display Flying Analyzed 2012 Capt. David C. Carbaugh, chief pilot, Boeing Flight Operations Safety 2013 Maurice "Moe" Girard, senior engineering test pilot, Bombardier 2014 Gulfstream 2015 Daniel Schwenzel (Airbus Helicopters) Workshop Locations and Theme 2007 London 2008 Amsterdam 2009 Vienna, "First Flight" 2010 London 2011 Salzburg, "Displaying Prototype Aircraft - Risks and Preparation" 2012 Salzburg, "Loss of Control - Tackling Aviation's #1 Killer" 2013 Amsterdam, "Human Machine Interface and Flight Deck Design" 2014 Manching, "Safety Management Systems in Flight Test Organizations" 2015 Aix-en-Provence, "Finding the Black Swan" There is also an annual North American Flight Test Safety Workshop. See also List of aviation awards References External links Website Flight Test Safety Committee Flight Test Safety Workshop 2009 in Vienna Flight Test Safety Workshop, May 2010 in San Jose, California Flight Test Safety Workshop 2011 in Salzburg Society of Experimental Test Pilots Aerospace engineering Aviation safety Aviation awards
European Flight Test Safety Award
[ "Engineering" ]
434
[ "Aerospace engineering" ]
21,085,092
https://en.wikipedia.org/wiki/Synthetic%20Metals
Synthetic Metals is a peer-reviewed scientific journal covering electronic polymers and electronic molecular materials. Abstracting and indexing Synthetic Metals is abstracted and indexed in the following services: According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.266. It has published several highly cited papers (1 with ~1000 citations; 5 with >600 citations; 30 with >200 citations, according to Web of Science); most of them are devoted to conductive polymers (especially polyaniline) and one to optical properties of carbon nanotubes (see Kataura plot). References External links Elsevier academic journals English-language journals Semi-monthly journals Academic journals established in 1985 Materials science journals
Synthetic Metals
[ "Materials_science", "Engineering" ]
143
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
21,085,711
https://en.wikipedia.org/wiki/Mixed%20flowing%20gas%20testing
Mixed flowing gas (MFG) is a type of laboratory environmental testing for products, particularly electronics, to evaluate resistance to corrosion due to gases in the atmosphere. The mixed flowing gas (MFG) test is a laboratory test in which the temperature (°C), relative humidity (%RH), concentration of gaseous pollutants (at parts per billion, ppb, or parts per million, ppm, levels), and other critical variables (such as volume exchange rate and airflow rate) are carefully defined, monitored and controlled. The purpose of this test is to simulate corrosion phenomena due to atmospheric exposure. The electronic product is exposed to gases such as chlorine, hydrogen sulfide, nitrogen dioxide, and sulfur dioxide at levels in the parts per billion range, in a controlled environmental chamber. Test samples that have been exposed to MFG testing have ranged from bare metal surfaces, to electrical connectors, and to complete assemblies. With regard to noble metal plated connector applications, MFG testing has been widely accepted as a qualification test method to evaluate the performance of these connectors. MFG testing was primarily developed by William H. Abbott at Battelle in the 1980s. Much of the work was described in a series of "… Progress Report[s] on Studies of Natural and Laboratory Environmental Reactions on Materials and Components," by Abbott, issued in 1981, '83, '84 and '86. Abbott published two papers on MFG testing in IEEE Transactions in 1988 and 1990. Other research has evaluated MFG testing. While standard practice MFG testing requires careful definition, monitoring and control of temperature, humidity, gaseous pollutant concentrations, volume exchange rate and airflow rate, there is considerable potential for variations in mass flow, environmental mixing and gradients in the chambers used. The only realistic benchmark for MFG testing is the use of metal reference coupons. Copper is the most commonly used material. Silver has also been used. Copper weight-gain rates are typically four times those observed for silver. Coupons are typically hung in the test chamber in proximity to the materials under test. Metal coupons should ideally have large surface area and small edge thickness. Coupons are prepared per ASTM B810-01a. Coupons are weighed before and after exposure. The surface deposits are assumed to be copper (I) sulfide, Cu2S, in the case of copper coupons and silver sulfide, Ag2S, for silver. The weight change for both metals is assumed to be due strictly to the addition of sulfur. The deposit thickness is determined by multiplying the coupon weight change by the formula weight of the metal sulfide and dividing by the product of the density of the metal sulfide, the standard relative atomic weight of sulfur and the total surface area of the two faces of the coupon (minus any drill hole for hanging): thickness = (Δm × F.W.) / (ρ × Ar × A), where Δm = weight change, F.W. = formula weight of the sulfide, ρ = its density, Ar = the standard relative atomic weight of sulfur and A = the coupon surface area. Thicknesses are typically converted from centimeters to Angstrom units. Common practice is to report the calculated copper and silver corrosion levels per ISA 71.04 [see Industry specifications, below] reactive environment exposure severity levels. The levels are "G1" (mild), "G2" (moderate) and "G3" (harsh), reported as equivalent months or years. For equivalent months, for copper, the thickness of the deposits in Angstrom units is divided by 300 for G1, 1000 for G2 and 2000 for G3. For silver, the thickness in Angstrom units is divided by 200, 1000 and 2000, respectively.  
For equivalent years, the exposures in months are further divided by 12. Industry specifications ASTM B827-05(2014) [Replaces ASTM B827-97]—Standard Practice for Conducting Mixed Flowing Gas Environmental Tests ASTM B845-97(2018) [Replaces ASTM B845-97]—Standard Guide for Mixed Flowing Gas Tests for Electrical Contacts ASTM B810-01a(2017) [Replaces ASTM B810-01a]—Standard Method for Calibration of Atmospheric Corrosion Test Chambers by Change in Mass of Copper Coupons ASTM B825-97(WITHDRAWN, NO REPLACEMENT)—Standard Test Method for Coulometric Reduction of Surface Films on Metallic Test Samples ASTM B826-09(2015) [Replaces ASTM B826-97]—Standard Test Method for Monitoring Corrosion Tests by Electrical Resistance Probes ASTM B808-10(2015) [Replaces ASTM B808-97]—Standard Test Method for Monitoring of Atmospheric Corrosion Chambers by Quartz Crystal Microbalances EIA 364, Test Procedure 65A IEC 60068-2-60:2015 RLV IEC 512-11-7 ISA 71.04-2013—Environmental Conditions for Process Measurement & Control Systems: Airborne Contaminants References External links Battelle Memorial Institute, Columbus, OH, MFG Testing (http:/www.battelle.org) Center For Advanced Life Cycle Engineering (CALCE), University of Maryland, MFG Testing Description of test methods and environments http://www.contechresearch.com/mfg.html https://web.archive.org/web/20110719142237/http://www.connectorsupplier.com/tech_updates_BM_QA_Acceleration_9-16-08.htm Hardware testing Environmental testing
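A minimal sketch of the coupon calculation described above, assuming a copper coupon; the Cu2S formula weight and density are approximate handbook values, and the function names and sample numbers are illustrative only:

#include <cstdio>

// Film thickness in angstroms from coupon weight gain, per the relation above:
//   thickness = (dm * FW) / (rho * Ar_S * area)
double filmThicknessAngstrom(double weightGain_g, double formulaWeight,
                             double density_g_cm3, double area_cm2) {
    const double Ar_S = 32.06;                    // relative atomic weight of sulfur
    double thickness_cm = (weightGain_g * formulaWeight) /
                          (density_g_cm3 * Ar_S * area_cm2);
    return thickness_cm * 1.0e8;                  // 1 cm = 1e8 angstrom
}

int main() {
    // Hypothetical copper coupon: 2 cm x 4 cm, two faces = 16 cm^2, 50 µg weight gain
    double t = filmThicknessAngstrom(50.0e-6, 159.2 /* Cu2S, approx. */,
                                     5.6 /* g/cm^3, approx. */, 16.0);
    // ISA 71.04 equivalent months for copper: divide by 300 (G1), 1000 (G2), 2000 (G3)
    std::printf("thickness = %.0f angstrom; G1 = %.1f, G2 = %.1f, G3 = %.1f months\n",
                t, t / 300.0, t / 1000.0, t / 2000.0);
    return 0;
}

The equivalent-month figures follow directly from the copper divisors quoted in the text; for silver coupons the Ag2S values and the 200, 1000 and 2000 divisors would be used instead.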
Mixed flowing gas testing
[ "Engineering" ]
1,134
[ "Environmental testing", "Reliability engineering" ]
21,093,002
https://en.wikipedia.org/wiki/Thyroid%20blocker
Potassium iodide (KI) and potassium iodate (KIO3) are called thyroid blockers when used in radiation protection. If a person consumes a dose of one of these chemical compounds, his or her thyroid may saturate with stable iodine, preventing accumulation of radioactive iodine found after a nuclear meltdown or explosion. References Radiobiology Thyroid Potassium compounds Iodine compounds Radiation protection Medical treatments
Thyroid blocker
[ "Chemistry", "Biology" ]
84
[ "Radiobiology", "Radioactivity" ]
13,126,459
https://en.wikipedia.org/wiki/Unified%20communications
Unified communications (UC) is a business and marketing concept describing the integration of enterprise communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web & video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC is not necessarily a single product, but a set of products that provides a consistent unified user interface and user experience across multiple devices and media types. In its broadest sense, the UC can encompass all forms of communications that are exchanged via a network to include other forms of communications such as Internet Protocol television (IPTV) and digital signage as they become an integrated part of the network communications deployment and may be directed as one-to-one communications or broadcast communications from one to many. UC allows an individual to send a message on one medium and receive the same communication on another medium. For example, one can receive a voicemail message and choose to access it through e-mail or a cell phone. If the sender is online according to the presence information and currently accepts calls, the response can be sent immediately through text chat or a video call. Otherwise, it may be sent as a non-real-time message that can be accessed through a variety of media. Definition There are varying definitions for unified communications. A basic definition is "communications integrated to optimize business processes and increase user productivity", but such integration can take many forms, such as: users simply adjusting their habits, manual integration as defined by procedures and training, integration of communications into off-the-shelf tools such as Thunderbird, Outlook, Lotus Notes, BlackBerry, Salesforce.com, etc., or purpose-specific integration into customized applications in specific operating departments or in vertical markets such as healthcare. Unified communications is an evolving set of technologies that automates and unifies human and device communications in a common context and experience. It optimizes business processes and enhances human communications by reducing latency, managing flows, and eliminating device and media dependencies. A UC system may include features such as messaging, voice and video calls, meetings, team collaboration, file sharing, and integrated apps. History The history of unified communications is tied to the evolution of the supporting technology. Originally, business telephone systems were a private branch exchange (PBX) or key telephone system provided and managed by the local phone company. These systems used the phone company's analog or digital circuits to deliver phone calls from a central office (CO) to the customer. The system —PBX or key telephone system— accepted the call and routed the call to the appropriate extension or line appearance on the phones at the customer's office. In the 1980s, voice mail systems with IVR-like features were recognized as an access mechanism to corporate information for mobile employees, before the explosion of cell phones and the proliferation of PCs. 
E-mail also began to grow in popularity, and as early as 1985, e-mail reading features were made available for certain voicemail. The term unified communications arose in the mid-1990s, when messaging and real-time communications began to combine. In 1993, ThinkRite (VoiceRite) developed the unified messaging system, POET, for IBM's internal use. It was installed in 55 IBM US Branch Offices for 54,000 employees and integrated with IBM OfficeVision/VM (PROFS) and provided IBMers with one phone number for voicemail, fax, alphanumeric paging and follow-me. POET was in use until 2000. In the late 1990s, a New Zealand-based organization called IPFX developed a commercially available presence product, which let users see the location of colleagues, make decisions on how to contact them, and define how their messages were handled based on their own presence. The first full-featured converged telephony/UC offering was the Nortel Succession MX (Multimedia eXchange) product, which later became known as Nortel Multimedia Communications Server (MCS 5100). The major drawback to this service was the reliance on the phone company or vendor partner to manage (in most cases) the PBX or key telephone system. This resulted in a residual, recurring cost to customers. Over time, the PBX became more privatized, and internal staff members were hired to manage these systems. This was typically done by companies that could afford to bring this skill in-house and thereby reduce the requirement to notify the phone company or their local PBX vendor each time a change was required in the system. This increasing privatization triggered the development of more powerful software that increased the usability and manageability of the system. As companies began to deploy IP networks in their environment, companies began to use these networks to transmit voice instead of relying on traditional telephone network circuits. Some vendors such as Avaya and Nortel created circuit packs or cards for their PBX systems that could interconnect their communications systems to the IP network. Other vendors such as Cisco created equipment that could be placed in routers to transport voice calls across a company network from site to site. The termination of PBX circuits to be transported across a network and delivered to another phone system is traditionally referred to as Voice over IP (Voice over Internet Protocol or VoIP). This design required special hardware on both ends of the network equipment to provide the termination and delivery at each site. As time went by, Siemens, Alcatel-Lucent, Cisco, Nortel, Avaya, Wildix and Mitel realized the potential for eliminating the traditional PBX or key system and replacing it with a solution based on IP. This IP solution is software driven only, and thereby does away with the need for "switching" equipment at a customer site (save the equipment necessary to connect to the outside world). This created a new technology, now called IP telephony. A system that uses IP-based telephony services only, rather than a legacy PBX or key system, is called an IP telephony solution. With the advent of IP telephony the handset was no longer a digital device hanging off a copper loop from a PBX. Instead, the handset lived on the network as another computer device. 
The transport of audio was therefore no longer a variation in voltages or modulation of frequency such as with the handsets from before, but rather encoding the conversation using a codec (G.711 originally) and transporting it with a protocol such as the Real-time Transport Protocol (RTP). When the handset is just another computer connected to the network, advanced features can be provided by letting computer applications communicate with server computers elsewhere in any number of ways; applications can even be upgraded or freshly installed on the handset. When considering the efforts of Unified Communications solutions providers, the overall goal is to no longer focus strictly on the telephony portion of daily communications. The unification of all communication devices inside a single platform provides the mobility, presence, and contact capabilities that extend beyond the phone to all devices a person may use or have at their disposal. Given the wide scope of unified communications, there has been a lack of community definition as most solutions are from proprietary vendors. Since March 2008, there are several open source projects with a UC focus such as Druid and Elastix, which are based on Asterisk, a leading open source telephony project. The aim of these open source UC projects is to allow the open source community of developers and users to have a say in unified communications and what it means. IBM entered the unified communications marketplace with several products, beginning in 2006 with the updated release of a unified communications middleware platform, IBM Lotus Sametime 7.5, as well as related products and services such as IBM WebSphere Unified Messaging, IBM Global Technology Services - Converged Communications Services, and more. In October 2007, Microsoft entered the UC market with the launch of Office Communications Server, a software-based application running on Windows. In March 2008, Unison Technologies launched Unison, a software-based unified communications solution that runs on Linux and Windows. In May 2010, the Unified Communications Interoperability Forum (UCIF) was announced. UCIF is an independent, non-profit alliance between technology companies that creates and tests interoperability profiles, implementation guidelines, and best practices for interoperability between UC products and existing communications and business applications. The original founding members were HP, Juniper Networks, Logitech / LifeSize, Microsoft, and Polycom. There is some debate about whether unified communications hosted on an enterprise's premises is the same thing as unified communications solutions that are hosted by a service provider, or UCaaS (UC as a Service). While both offer their respective advantages, all of these approaches can be grouped under the single umbrella category of unified communications. Technology Contrasting unified messaging Unified communications is sometimes confused with unified messaging, but it is distinct. Unified communications refers to both real-time and non-real-time delivery of communications based on the preferred method and location of the recipient; unified messaging culls messages from several sources (such as e-mail, voice mail and faxes), but holds those messages only for retrieval at a later time. Unified communications allows for an individual to check and retrieve an e-mail or voice mail from any communication device at any time. It expands beyond voice mail services to data communications and video services. 
Components With unified communications, multiple modes of business communications are integrated. Unified communications is not a single product but a collection of elements that includes: Call control and multimodal communications Presence Instant messaging Unified messaging Speech access and personal assistant Conferencing (audio, Web and video) Collaboration tools Mobility Business process integration (BPI) Software to enable business process integration Presence—knowing where intended recipients are, and if they are available, in real time—is a key component of unified communications. Unified communications integrates all systems a user might already use, and helps those systems work together in real time. For example, unified communications technology could allow a user to seamlessly collaborate with another person on a project, even if the two users are in separate locations. The user could quickly locate the necessary person by accessing an interactive directory, engage in a text messaging session, and then escalate the session to a voice call, or even a video call. In another example, an employee receives a call from a customer who wants answers. Unified communications enables that employee to call an expert colleague from a real-time list. This way, the employee can answer the customer faster by eliminating rounds of back-and-forth e-mails and phone-tag. The examples in the previous paragraph primarily describe "personal productivity" enhancements that tend to benefit the individual user. While such benefits can be important, enterprises are finding that they can achieve even greater impact by using unified communications capabilities to transform business processes. This is achieved by integrating UC functionality directly into the business applications using development tools provided by many of the suppliers. Instead of the individual user invoking the UC functionality to, say, find an appropriate resource, the workflow or process application automatically identifies the resource at the point in the business activity where one is needed. When used in this manner, the concept of presence often changes. Most people associate presence with instant messaging (IM "buddy lists"), where the status of individuals is identified. But, in many business process applications, what is important is finding someone with a certain skill. In these environments, presence identifies available skills or capabilities. This "business process" approach to integrating UC functionality can result in bottom line benefits that are an order of magnitude greater than those achievable by personal productivity methods alone. 
Unified communications provisioning is the act of entering and configuring the settings for users of phone systems, instant messaging, telepresence, and other collaboration channels. Provisioners refer to this process as making moves, adds, changes, and deletes or MAC-Ds. See also Intelligent network service Unified communications management Unified communications as a service (UCaaS) Mobile collaboration Telepresence Unified messaging References Teleconferencing Videotelephony Telephone service enhanced features Sociology of technology
Unified communications
[ "Technology" ]
2,673
[ "nan" ]
13,127,033
https://en.wikipedia.org/wiki/Flood%20risk%20assessment
A flood risk assessment (FRA) is an assessment of the risk of flooding from all flooding mechanisms and the identification of flood mitigation measures; it should also provide advice on actions to be taken before and during a flood. The sources of water which produce floods include: groundwater, surface water (rivers, streams or watercourses), artificial water (burst water mains, canals or reservoirs), sewers and drains, seawater. For each of the sources of water, different hydraulic intensities occur. Floods can occur because of a combination of sources of flooding, such as high groundwater and an inadequate surface water drainage system. The topography, hydrogeology and physical attributes of the existing or proposed development need to be considered. A flood risk assessment should evaluate the flood risk together with its consequences, impact and vulnerability. In the UK, the writing of professional flood risk assessments is undertaken by civil engineering consultants. They will have membership of the Institution of Civil Engineers and are bound by their rules of professional conduct. A key requirement is to ensure such professional flood risk assessments are independent of all parties by carrying out their professional duties with complete objectivity and impartiality. Their professional advice should be supported by professional indemnity insurance for such specific professional advice ultimately held with a Lloyd's of London underwriter. Professional flood risk assessments can cover single buildings or whole regions. They can be part of a due-diligence process for existing householders or businesses, or can be required in England and Wales to provide independent evidence to a planning application on the flood risk. England and Wales In England and Wales, the Environment Agency requires a professional Flood Risk Assessment (FRA) to be submitted alongside planning applications in areas that are known to be at risk of flooding (within flood zones 2 or 3) and/or are greater than 1 ha in area; planning permission is not usually granted until the FRA has been accepted by the Environment Agency. PPS 25 – England only Flood Risk Assessments are required to be completed according to the National Planning Policy Framework, which replaces Planning Policy Statement PPS 25: Development and Flood Risk. The initial legislation (PPG25) was introduced in 2001 and subsequently revised. PPS 25 was designed to "strengthen and clarify the key role of the planning system in managing flood risk and contributing to adapting to the impacts of climate change" and sets out policies for local authorities to ensure flood risk is taken into account during the planning process to prevent inappropriate development in high risk areas and to direct development away from areas at highest risk. In its introduction, PPS25 states "flooding threatens life and causes substantial damage to property [and that] although [it] cannot be wholly prevented, its impacts can be avoided and reduced through good planning and management". Composition of an FRA For a flood risk assessment to be written, information is needed concerning the existing and proposed developments, the Environment Agency's modelled flood levels and topographic levels on site. At its most simple (and cheapest) level an FRA can provide an indication of whether a development will be allowed to take place at a site. An initial idea of the risk of fluvial flooding to a local area can be found on the Environment Agency flood map website. 
FRAs consist of a detailed analysis of available data to inform the Environment Agency of flood risk at an individual site and also recommend to the developer any mitigation measures. More costly analysis of flood risk can be achieved through detailed flood modelling to challenge the agency's modelled levels and corresponding flood zones. The FRA takes into account the risk and impact of flooding on the site, and takes into consideration how the development may affect flooding in the local area. It also provides recommendations as to how the risk of flooding to the development can be mitigated. FRAs should also consider flooding from all sources including fluvial, groundwater, surface water runoff and sewer flooding. For sites located within areas at risk of flooding a sequential test may be required. The aim of the sequential test is to direct development to locations at the lowest risk of flooding. The National Planning Policy Framework (NPPF) was amended in 2020 to require sequential tests for sites that are at risk of any form of flooding. Northern Ireland In 2006, the Planning Service, part of The Department of the Environment, published Planning Policy Statement 15 (PPS15): Planning and flood risk. The guidelines are precautionary and advise against development in flood plains and areas subject to historical flooding. In exceptional cases an FRA can be completed to justify development in flood risk areas. Advice on flood risk assessment is provided to the Planning Service by the Rivers Agency, which is the statutory drainage and flood defence authority for Northern Ireland. Republic of Ireland In 2009, the Department of the Environment, Heritage and Local Government and Office of Public Works published planning guidelines requiring local authorities to apply a sequential approach to flood risk management. The guidelines require that proposed development in flood risk areas must undergo a justification test, consisting of a flood risk assessment. See also Flood warning Floods directive Flood Modeller Pro, software used to undertake flood risk assessments References Flood control Environmental policy in the United Kingdom Extreme value data
Flood risk assessment
[ "Chemistry", "Engineering" ]
1,036
[ "Flood control", "Environmental engineering" ]
13,127,410
https://en.wikipedia.org/wiki/Oberth%20effect
In astronautics, a powered flyby, or Oberth maneuver, is a maneuver in which a spacecraft falls into a gravitational well and then uses its engines to further accelerate as it is falling, thereby achieving additional speed. The resulting maneuver is a more efficient way to gain kinetic energy than applying the same impulse outside of a gravitational well. The gain in efficiency is explained by the Oberth effect, wherein the use of a reaction engine at higher speeds generates a greater change in mechanical energy than its use at lower speeds. In practical terms, this means that the most energy-efficient method for a spacecraft to burn its fuel is at the lowest possible orbital periapsis, when its orbital velocity (and so, its kinetic energy) is greatest. In some cases, it is even worth spending fuel on slowing the spacecraft into a gravity well to take advantage of the efficiencies of the Oberth effect. The maneuver and effect are named after the person who first described them in 1927, Hermann Oberth, a Transylvanian Saxon physicist and a founder of modern rocketry. Because the vehicle remains near periapsis only for a short time, for the Oberth maneuver to be most effective the vehicle must be able to generate as much impulse as possible in the shortest possible time. As a result the Oberth maneuver is much more useful for high-thrust rocket engines like liquid-propellant rockets, and less useful for low-thrust reaction engines such as ion drives, which take a long time to gain speed. Low thrust rockets can use the Oberth effect by splitting a long departure burn into several short burns near the periapsis. The Oberth effect also can be used to understand the behavior of multi-stage rockets: the upper stage can generate much more usable kinetic energy than the total chemical energy of the propellants it carries. In terms of the energies involved, the Oberth effect is more effective at higher speeds because at high speed the propellant has significant kinetic energy in addition to its chemical potential energy. At higher speed the vehicle is able to employ the greater change (reduction) in kinetic energy of the propellant (as it is exhausted backward and hence at reduced speed and hence reduced kinetic energy) to generate a greater increase in kinetic energy of the vehicle. Explanation in terms of work and kinetic energy Because kinetic energy equals mv²/2, this change in velocity imparts a greater increase in kinetic energy at a high velocity than it would at a low velocity. For example, considering a 2 kg rocket: at 1 m/s, the rocket starts with 1² = 1 J of kinetic energy. Adding 1 m/s increases the kinetic energy to 2² = 4 J, for a gain of 3 J; at 10 m/s, the rocket starts with 10² = 100 J of kinetic energy. Adding 1 m/s increases the kinetic energy to 11² = 121 J, for a gain of 21 J. This greater change in kinetic energy can then carry the rocket higher in the gravity well than if the propellant were burned at a lower speed. Description in terms of work The thrust produced by a rocket engine is independent of the rocket’s velocity relative to the surrounding atmosphere. A rocket acting on a fixed object, as in a static firing, does no useful work on the rocket; the rocket's chemical energy is progressively converted to kinetic energy of the exhaust, plus heat. But when the rocket moves, its thrust acts through the distance it moves. Force multiplied by displacement is the definition of mechanical work. 
The greater the velocity of the rocket and payload during the burn the greater is the displacement and the work done, and the greater the increase in kinetic energy of the rocket and its payload. As the velocity of the rocket increases, progressively more of the available kinetic energy goes to the rocket and its payload, and less to the exhaust. This is shown as follows. The mechanical work done on the rocket is defined as the dot product of the force of the engine's thrust and the displacement it travels during the burn: W = F · s. If the burn is made in the prograde direction, W = F s. The work results in a change in kinetic energy: ΔEk = W. Differentiating with respect to time, we obtain dEk/dt = F · v, or P = F v, where v is the velocity. Dividing by the instantaneous mass m to express this in terms of specific energy e, we get de/dt = a · v, where a is the acceleration vector. Thus it can be readily seen that the rate of gain of specific energy of every part of the rocket is proportional to speed and, given this, the equation can be integrated (numerically or otherwise) to calculate the overall increase in specific energy of the rocket. Impulsive burn Integrating the above energy equation is often unnecessary if the burn duration is short. Short burns of chemical rocket engines close to periapsis or elsewhere are usually mathematically modeled as impulsive burns, where the force of the engine dominates any other forces that might change the vehicle's energy over the burn. For example, as a vehicle falls toward periapsis in any orbit (closed or escape orbits) the velocity relative to the central body increases. Briefly burning the engine (an "impulsive burn") prograde at periapsis increases the velocity by the same increment as at any other time (Δv). However, since the vehicle's kinetic energy is related to the square of its velocity, this increase in velocity has a non-linear effect on the vehicle's kinetic energy, leaving it with higher energy than if the burn were achieved at any other time. Oberth calculation for a parabolic orbit If an impulsive burn of Δv is performed at periapsis in a parabolic orbit, then the velocity at periapsis before the burn is equal to the escape velocity (Vesc), and the specific kinetic energy after the burn is εk = ½ V² = ½ (Vesc + Δv)², where V = Vesc + Δv is the speed at periapsis just after the burn. When the vehicle leaves the gravity field, the loss of specific kinetic energy is ½ Vesc², so it retains the energy ½ Δv² + Vesc Δv, which is larger than the energy from a burn outside the gravitational field (½ Δv²) by Vesc Δv. When the vehicle has left the gravity well, it is traveling at a speed V∞ = Δv √(1 + 2 Vesc/Δv). For the case where the added impulse Δv is small compared to escape velocity, the 1 can be ignored, and the effective Δv of the impulsive burn can be seen to be multiplied by a factor of simply √(2 Vesc/Δv), and one gets V∞ ≈ √(2 Vesc Δv). Similar effects happen in closed and hyperbolic orbits. Parabolic example If the vehicle travels at velocity v at the start of a burn that changes the velocity by Δv, then the change in specific orbital energy (SOE) due to the new orbit is ε = ½ (v + Δv)² − ½ v² = v Δv + ½ Δv². Once the spacecraft is far from the planet again, the SOE is entirely kinetic, since gravitational potential energy approaches zero. Therefore, the larger the v at the time of the burn, the greater the final kinetic energy, and the higher the final velocity. The effect becomes more pronounced the closer to the central body, or more generally, the deeper in the gravitational field potential in which the burn occurs, since the velocity is higher there. 
So if a spacecraft is on a parabolic flyby of Jupiter with a periapsis velocity of 50 km/s and performs a 5 km/s burn, it turns out that the final velocity change at great distance is 22.9 km/s, giving a multiplication of the burn by 4.58 times. Paradox It may seem that the rocket is getting energy for free, which would violate conservation of energy. However, any gain to the rocket's kinetic energy is balanced by a relative decrease in the kinetic energy the exhaust is left with (the kinetic energy of the exhaust may still increase, but it does not increase as much). Contrast this to the situation of static firing, where the speed of the engine is fixed at zero. This means that its kinetic energy does not increase at all, and all the chemical energy released by the fuel is converted to the exhaust's kinetic energy (and heat). At very high speeds the mechanical power imparted to the rocket can exceed the total power liberated in the combustion of the propellant; this may also seem to violate conservation of energy. But the propellants in a fast-moving rocket carry energy not only chemically, but also in their own kinetic energy, which at speeds above a few kilometres per second exceed the chemical component. When these propellants are burned, some of this kinetic energy is transferred to the rocket along with the chemical energy released by burning. The Oberth effect can therefore partly make up for what is extremely low efficiency early in the rocket's flight when it is moving only slowly. Most of the work done by a rocket early in flight is "invested" in the kinetic energy of the propellant not yet burned, part of which they will release later when they are burned. See also Bi-elliptic transfer Gravity assist Propulsive efficiency References External links Oberth effect Explanation of the effect by Geoffrey Landis. Rocket propulsion, classical relativity, and the Oberth effect Animation (MP4) of the Oberth effect in orbit from the Blanco and Mungan paper cited above. Aerospace engineering Rocketry Astrodynamics
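A small numerical sketch of the parabolic-orbit calculation above (the function and variable names are illustrative, and speeds are in km/s): from the energy bookkeeping described in the text, the hyperbolic excess speed after an impulsive periapsis burn on a parabolic orbit is sqrt(Δv² + 2 Vesc Δv).

#include <cmath>
#include <cstdio>

// Hyperbolic excess speed after an impulsive prograde burn of dv at the periapsis
// of a parabolic orbit, where the pre-burn periapsis speed equals the local escape
// speed v_esc:  v_inf = sqrt(dv^2 + 2 * v_esc * dv)
double excessSpeedAfterPeriapsisBurn(double v_esc, double dv) {
    return std::sqrt(dv * dv + 2.0 * v_esc * dv);
}

int main() {
    // The Jupiter example from the text: 50 km/s periapsis speed, 5 km/s burn
    double v_inf = excessSpeedAfterPeriapsisBurn(50.0, 5.0);
    std::printf("v_inf = %.1f km/s (burn multiplied by a factor of %.2f)\n",
                v_inf, v_inf / 5.0);
    return 0;
}

This reproduces the 22.9 km/s final velocity change and the 4.58 multiplication factor quoted in the example.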
Oberth effect
[ "Engineering" ]
1,819
[ "Rocketry", "Astrodynamics", "Aerospace engineering" ]
6,513,914
https://en.wikipedia.org/wiki/Komar%20mass
The Komar mass (named after Arthur Komar) of a system is one of several formal concepts of mass that are used in general relativity. The Komar mass can be defined in any stationary spacetime, which is a spacetime in which all the metric components can be written so that they are independent of time. Alternatively, a stationary spacetime can be defined as a spacetime which possesses a timelike Killing vector field. The following discussion is an expanded and simplified version of the motivational treatment in (Wald, 1984, pg 288). Motivation Consider the Schwarzschild metric. Using the Schwarzschild basis, a frame field for the Schwarzschild metric, one can find that the radial acceleration required to hold a test mass stationary at a Schwarzschild coordinate of r is: Because the metric is static, there is a well-defined meaning to "holding a particle stationary". Interpreting this acceleration as being due to a "gravitational force", we can then compute the integral of normal acceleration multiplied by area to get a "Gauss law" integral of: While this approaches a constant as r approaches infinity, it is not a constant independent of r. We are therefore motivated to introduce a correction factor to make the above integral independent of the radius r of the enclosing shell. For the Schwarzschild metric, this correction factor is just , the "red-shift" or "time dilation" factor at distance r. One may also view this factor as "correcting" the local force to the "force at infinity", the force that an observer at infinity would need to apply through a string to hold the particle stationary. (Wald, 1984). To proceed further, we will write down a line element for a static metric. where and the quadratic form are functions only of the spatial coordinates x, y, z and are not functions of time. In spite of our choices of variable names, it should not be assumed that our coordinate system is Cartesian. The fact that none of the metric coefficients are functions of time makes the metric stationary: the additional fact that there are no "cross terms" involving both time and space components (such as ) make it static. Because of the simplifying assumption that some of the metric coefficients are zero, some of our results in this motivational treatment will not be as general as they could be. In flat space-time, the proper acceleration required to hold station is , where u is the 4-velocity of our hovering particle and is the proper time. In curved space-time, we must take the covariant derivative. Thus we compute the acceleration vector as: where is a unit time-like vector such that The component of the acceleration vector normal to the surface is where Nb is a unit vector normal to the surface. In a Schwarzschild coordinate system, for example, we find that as expected - we have simply re-derived the previous results presented in a frame-field in a coordinate basis. We define so that in our Schwarzschild example: We can, if we desire, derive the accelerations and the adjusted "acceleration at infinity" from a scalar potential Z, though there is not necessarily any particular advantage in doing so. (Wald 1984, pg 158, problem 4) We will demonstrate that integrating the normal component of the "acceleration at infinity" over a bounding surface will give us a quantity that does not depend on the shape of the enclosing sphere, so that we can calculate the mass enclosed by a sphere by the integral To make this demonstration, we need to express this surface integral as a volume integral. 
In flat space-time, we would use Stokes theorem and integrate over the volume. In curved space-time, this approach needs to be modified slightly. Using the formulas for electromagnetism in curved space-time as a guide, we write instead. where F plays a role similar to the "Faraday tensor", in that We can then find the value of "gravitational charge", i.e. mass, by evaluating and integrating it over the volume of our sphere. An alternate approach would be to use differential forms, but the approach above is computationally more convenient as well as not requiring the reader to understand differential forms. A lengthy, but straightforward (with computer algebra) calculation from our assumed line element shows us that Thus we can write In any vacuum region of space-time, all components of the Ricci tensor must be zero. This demonstrates that enclosing any amount of vacuum will not change our volume integral. It also means that our volume integral will be constant for any enclosing surface, as long as we enclose all of the gravitating mass inside our surface. Because Stokes theorem guarantees that our surface integral is equal to the above volume integral, our surface integral will also be independent of the enclosing surface as long as the surface encloses all of the gravitating mass. By using Einstein's Field Equations letting u=v and summing, we can show that This allows us to rewrite our mass formula as a volume integral of the stress–energy tensor. where V is the volume being integrated over; Tab is the Stress–energy tensor; ua is a unit time-like vector such that ua ua = -1. Komar mass as volume integral - general stationary metric To make the formula for Komar mass work for a general stationary metric, regardless of the choice of coordinates, it must be modified slightly. We will present the applicable result from (Wald, 1984 eq 11.2.10) without a formal proof. where V is the volume being integrated over Tab is the Stress–energy tensor; ua is a unit time-like vector such that ua ua = -1; is a Killing vector, which expresses the time-translation symmetry of any stationary metric. The Killing vector is normalized so that it has a unit length at infinity, i.e. so that at infinity. Note that replaces in our motivational result. If none of the metric coefficients are functions of time, While it is not necessary to choose coordinates for a stationary space-time such that the metric coefficients are independent of time, it is often convenient. When we chose such coordinates, the time-like Killing vector for our system becomes a scalar multiple of a unit coordinate-time vector i.e. When this is the case, we can rewrite our formula as Because is by definition a unit vector, K is just the length of , i.e. K = . Evaluating the "red-shift" factor K based on our knowledge of the components of , we can see that K = . If we chose our spatial coordinates so that we have a locally Minkowskian metric we know that With these coordinate choices, we can write our Komar integral as While we can't choose a coordinate system to make a curved space-time globally Minkowskian, the above formula provides some insight into the meaning of the Komar mass formula. Essentially, both energy and pressure contribute to the Komar mass. Furthermore, the contribution of local energy and mass to the system mass is multiplied by the local "red shift" factor Komar mass as surface integral - general stationary metric We also wish to give the general result for expressing the Komar mass as a surface integral. 
The formula for the Komar mass in terms of the metric and its Killing vector is (Wald, 1984, pg 289, formula 11.2.9) where are the Levi-Civita symbols and is the Killing vector of our stationary metric, normalized so that at infinity. The surface integral above is interpreted as the "natural" integral of a two-form over a manifold. As mentioned previously, if none of the metric coefficients are functions of time, See also Komar superpotential Mass in general relativity Notes References General relativity Mass
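The surface-integral formula referred to above (Wald, 1984, eq. 11.2.9) reads, in standard notation, as follows; S is a topological 2-sphere enclosing the source, and the expression below is the textbook form rather than a quotation of this article.

```latex
% Komar mass as a surface integral, with epsilon_{abcd} the Levi-Civita tensor
% (volume element) and xi^d the Killing vector normalized at infinity:
M = -\frac{1}{8\pi} \oint_{S} \epsilon_{abcd} \, \nabla^{c} \xi^{d}
```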
Komar mass
[ "Physics", "Mathematics" ]
1,609
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "General relativity", "Size", "Theory of relativity", "Wikipedia categories named after physical quantities", "Matter" ]
6,513,985
https://en.wikipedia.org/wiki/Mass%20in%20general%20relativity
The concept of mass in general relativity (GR) is more subtle to define than the concept of mass in special relativity. In fact, general relativity does not offer a single definition of the term mass, but offers several different definitions that are applicable under different circumstances. Under some circumstances, the mass of a system in general relativity may not even be defined. The reason for this subtlety is that the energy and momentum in the gravitational field cannot be unambiguously localized. (See Chapter 20 of .) So, rigorous definitions of the mass in general relativity are not local, as in classical mechanics or special relativity, but make reference to the asymptotic nature of the spacetime. A well-defined notion of the mass exists for asymptotically flat spacetimes and for asymptotically Anti-de Sitter space. However, these definitions must be used with care in other settings. Defining mass in general relativity: concepts and obstacles In special relativity, the rest mass of a particle can be defined unambiguously in terms of its energy and momentum as described in the article on mass in special relativity. Generalizing the notion of the energy and momentum to general relativity, however, is subtle. The main reason for this is that the gravitational field itself contributes to the energy and momentum. However, the "gravitational field energy" is not a part of the energy–momentum tensor; instead, what might be identified as the contribution of the gravitational field to a total energy is part of the Einstein tensor on the other side of Einstein's equation (and, as such, a consequence of these equations' non-linearity). While in certain situations it is possible to rewrite the equations so that part of the "gravitational energy" now stands alongside the other source terms in the form of the stress–energy–momentum pseudotensor, this separation is not true for all observers, and there is no general definition for obtaining it. How, then, does one define a concept such as a system's total mass, which is easily defined in classical mechanics? As it turns out, at least for spacetimes which are asymptotically flat (roughly speaking, which represent some isolated gravitating system in otherwise empty and gravity-free infinite space), the ADM 3+1 split leads to a solution: as in the usual Hamiltonian formalism, the time direction used in that split has an associated energy, which can be integrated up to yield a global quantity known as the ADM mass (or, equivalently, ADM energy). Alternatively, it is possible to define a mass for a spacetime that is stationary, in other words, one that has a time-like Killing vector field (which, as a generating field for time, is canonically conjugate to energy); the result is the so-called Komar mass. Although defined in a totally different way, it can be shown to be equivalent to the ADM mass for stationary spacetimes. The Komar integral definition can also be generalized to non-stationary fields for which there is at least an asymptotic time translation symmetry; imposing a certain gauge condition, one can define the Bondi energy at null infinity. In a way, the ADM energy measures all of the energy contained in spacetime, while the Bondi energy excludes those parts carried off by gravitational waves to infinity. 
Great effort has been expended on proving positivity theorems for the masses just defined, not least because positivity, or at least the existence of a lower limit, has a bearing on the more fundamental question of boundedness from below: if there were no lower limit to the energy, then no isolated system would be absolutely stable; there would always be the possibility of a decay to a state of even lower total energy. Several kinds of proofs that both the ADM mass and the Bondi mass are indeed positive exist; in particular, this means that Minkowski space (for which both are zero) is indeed stable. While the focus here has been on energy, analogue definitions for global momentum exist; given a field of angular Killing vectors and following the Komar technique, one can also define global angular momentum. Quasi-local quantities The disadvantage of all the definitions mentioned so far is that they are defined only at (null or spatial) infinity; since the 1970s, physicists and mathematicians have worked on the more ambitious endeavor of defining suitable quasi-local quantities, such as the mass of an isolated system defined using only quantities defined within a finite region of space containing that system. However, while there is a variety of proposed definitions such as the Hawking energy, the Geroch energy or Penrose's quasi-local energy–momentum based on twistor methods, the field is still in flux. Eventually, the hope is to use a suitable defined quasi-local mass to give a more precise formulation of the hoop conjecture, prove the so-called Penrose inequality for black holes (relating the black hole's mass to the horizon area) and find a quasi-local version of the laws of black hole mechanics. Types of mass in general relativity Komar mass in stationary spacetimes A non-technical definition of a stationary spacetime is a spacetime where none of the metric coefficients are functions of time. The Schwarzschild metric of a black hole and the Kerr metric of a rotating black hole are common examples of stationary spacetimes. By definition, a stationary spacetime exhibits time translation symmetry. This is technically called a time-like Killing vector. Because the system has a time translation symmetry, Noether's theorem guarantees that it has a conserved energy. Because a stationary system also has a well defined rest frame in which its momentum can be considered to be zero, defining the energy of the system also defines its mass. In general relativity, this mass is called the Komar mass of the system. Komar mass can only be defined for stationary systems. Komar mass can also be defined by a flux integral. This is similar to the way that Gauss's law defines the charge enclosed by a surface as the normal electric force multiplied by the area. The flux integral used to define Komar mass is slightly different from that used to define the electric field, however the normal force is not the actual force, but the "force at infinity". See the main article for more detail. Of the two definitions, the description of Komar mass in terms of a time translation symmetry provides the deepest insight. ADM and Bondi masses in asymptotically flat space-times If a system containing gravitational sources is surrounded by an infinite vacuum region, the geometry of the space-time will tend to approach the flat Minkowski geometry of special relativity at infinity. Such space-times are known as "asymptotically flat" space-times. 
For systems in which space-time is asymptotically flat, the ADM and Bondi energy, momentum, and mass can be defined. In terms of Noether's theorem, the ADM energy, momentum, and mass are defined by the asymptotic symmetries at spatial infinity, and the Bondi energy, momentum, and mass are defined by the asymptotic symmetries at null infinity. Note that mass is computed as the length of the energy–momentum four-vector, which can be thought of as the energy and momentum of the system "at infinity". The ADM energy is defined through the following flux integral at infinity. If a spacetime is asymptotically flat this means that near "infinity" the metric tends to that of flat space. The asymptotic deviations of the metric away from flat space can be parametrized by where is the flat space metric. The ADM energy is then given by an integral over a surface, at infinity where is the outward-pointing normal to . The Einstein summation convention is assumed for repeated indices but the sum over k and j only runs over the spatial directions. The use of ordinary derivatives instead of covariant derivatives in the formula above is justified because of the assumption that the asymptotic geometry is flat. Some intuition for the formula above can be obtained as follows. Imagine that we take the surface, S, to be a spherical surface so that the normal points radially outwards. At large distances from the source of the energy, r, the tensor is expected to fall off as 1/r, and the derivative with respect to r converts this into 1/r2. The area of the sphere at large radius also grows precisely as r2, and therefore one obtains a finite value for the energy. It is also possible to obtain expressions for the momentum in asymptotically flat spacetime. To obtain such an expression one defines where Then the momentum is obtained by a flux integral in the asymptotically flat region Note that the expression for obtained from the formula above coincides with the expression for the ADM energy given above as can easily be checked using the explicit expression for H. The Newtonian limit for nearly flat space-times In the Newtonian limit, for quasi-static systems in nearly flat space-times, one can approximate the total energy of the system by adding together the non-gravitational components of the energy of the system and then subtracting the Newtonian gravitational binding energy. Translating the above statement into the language of general relativity, we say that a system in nearly flat space-time has a total non-gravitational energy E and momentum P given by: When the components of the momentum vector of the system are zero, i.e. Pi = 0, the approximate mass of the system is just (E+Ebinding)/c2, Ebinding being a negative number representing the Newtonian gravitational self-binding energy. Hence when one assumes that the system is quasi-static, one assumes that there is no significant energy present in the form of "gravitational waves". When one assumes that the system is in "nearly-flat" space-time, one assumes that the metric coefficients are essentially Minkowskian within acceptable experimental error. The formulas for the total energy and momentum can be seen to arise naturally in this limit as follows. In the linearized limit, the equations of general relativity can be written in the form In this limit, the total energy-momentum of the system is simply given by integrating the stress-tensor on a spacelike slice. 
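In the usual asymptotically Cartesian coordinates (and geometrized units), the flux integral described above is the standard ADM expression. The deviation field h and the outward normal n below are the quantities the text refers to; the formula given here is the textbook one rather than a quotation of this article.

```latex
% Deviation of the spatial metric from flat space near infinity:
g_{ij} = \delta_{ij} + h_{ij}, \qquad h_{ij} \sim \mathcal{O}(1/r)

% ADM energy as a surface integral over a large sphere S_r as r -> infinity:
E_{\mathrm{ADM}} = \frac{1}{16\pi} \lim_{r \to \infty}
  \oint_{S_r} \left( \partial_{j} h_{ij} - \partial_{i} h_{jj} \right) n^{i} \, dS

% The integrand falls off as 1/r^2 while the area grows as r^2, so the limit is finite.
```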
But using the equations of motion, one can also write this as where the sum over j runs only over the spatial directions and the second equality uses the fact that is anti-symmetric in and . Finally, one uses the Gauss law to convert the integral of a divergence over the spatial slice into an integral over a Gaussian sphere which coincides precisely with the formula for the total momentum given above. History In 1918, David Hilbert wrote about the difficulty in assigning an energy to a "field" and "the failure of the energy theorem" in a correspondence with Klein. In this letter, Hilbert conjectured that this failure is a characteristic feature of the general theory, and that instead of "proper energy theorems" one had 'improper energy theorems'. This conjecture was soon proved to be correct by one of Hilbert's close associates, Emmy Noether. Noether's theorem applies to any system which can be described by an action principle. Noether's theorem associates conserved energies with time-translation symmetries. When the time-translation symmetry is a finite parameter continuous group, such as the Poincaré group, Noether's theorem defines a scalar conserved energy for the system in question. However, when the symmetry is an infinite parameter continuous group, the existence of a conserved energy is not guaranteed. In a similar manner, Noether's theorem associates conserved momenta with space-translations, when the symmetry group of the translations is finite-dimensional. Because General Relativity is a diffeomorphism invariant theory, it has an infinite continuous group of symmetries rather than a finite-parameter group of symmetries, and hence has the wrong group structure to guarantee a conserved energy. Noether's theorem has been influential in inspiring and unifying various ideas of mass, system energy, and system momentum in General Relativity. As an example of the application of Noether's theorem is the example of stationary space-times and their associated Komar mass.(Komar 1959). While general space-times lack a finite-parameter time-translation symmetry, stationary space-times have such a symmetry, known as a Killing vector. Noether's theorem proves that such stationary space-times must have an associated conserved energy. This conserved energy defines a conserved mass, the Komar mass. ADM mass was introduced (Arnowitt et al., 1960) from an initial-value formulation of general relativity. It was later reformulated in terms of the group of asymptotic symmetries at spatial infinity, the SPI group, by various authors. (Held, 1980). This reformulation did much to clarify the theory, including explaining why ADM momentum and ADM energy transforms as a 4-vector (Held, 1980). Note that the SPI group is actually infinite-dimensional. The existence of conserved quantities is because the SPI group of "super-translations" has a preferred 4-parameter subgroup of "pure" translations, which, by Noether's theorem, generates a conserved 4-parameter energy–momentum. The norm of this 4-parameter energy–momentum is the ADM mass. The Bondi mass was introduced (Bondi, 1962) in a paper that studied the loss of mass of physical systems via gravitational radiation. The Bondi mass is also associated with a group of asymptotic symmetries, the BMS group at null infinity. Like the SPI group at spatial infinity, the BMS group at null infinity is infinite-dimensional, and it also has a preferred 4-parameter subgroup of "pure" translations. 
Another approach to the problem of energy in General Relativity is the use of pseudotensors such as the Landau–Lifshitz pseudotensor (Landau and Lifshitz, 1962). Pseudotensors are not gauge invariant; because of this, they only give consistent gauge-independent answers for the total energy when additional constraints (such as asymptotic flatness) are met. The gauge dependence of pseudotensors also prevents any gauge-independent definition of the local energy density, as every different gauge choice results in a different local energy density. See also Mass in special relativity General relativity Conservation of energy Komar mass Hawking energy ADM mass Positive mass theorem Notes References "If you go too fast, do you become a black hole?" Updated by Don Koks 2008. Original by Philip Gibbs 1996. The Original Usenet Physics FAQ External links "Is energy conserved in General Relativity?" General relativity Mass Unsolved problems in physics Unsolved problems in astronomy
Mass in general relativity
[ "Physics", "Astronomy", "Mathematics" ]
3,065
[ "Scalar physical quantities", "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Quantity", "Mass", "Unsolved problems in physics", "General relativity", "Size", "Astronomical controversies", "Theory of relativity", "Wikipedia categories named after physical qua...
6,517,456
https://en.wikipedia.org/wiki/Plane%20stress
In continuum mechanics, a material is said to be under plane stress if the stress vector is zero across a particular plane. When that situation occurs over an entire element of a structure, as is often the case for thin plates, the stress analysis is considerably simplified, as the stress state can be represented by a tensor of dimension 2 (representable as a 2×2 matrix rather than 3×3). A related notion, plane strain, is often applicable to very thick members. Plane stress typically occurs in thin flat plates that are acted upon only by load forces that are parallel to them. In certain situations, a gently curved thin plate may also be assumed to have plane stress for the purpose of stress analysis. This is the case, for example, of a thin-walled cylinder filled with a fluid under pressure. In such cases, stress components perpendicular to the plate are negligible compared to those parallel to it. In other situations, however, the bending stress of a thin plate cannot be neglected. One can still simplify the analysis by using a two-dimensional domain, but the plane stress tensor at each point must be complemented with bending terms. Mathematical definition Mathematically, the stress at some point in the material is a plane stress if one of the three principal stresses (the eigenvalues of the Cauchy stress tensor) is zero. That is, there is a Cartesian coordinate system in which the stress tensor has the form For example, consider a rectangular block of material measuring 10, 40 and 5 cm along the x, y, and z directions, that is being stretched in the x direction and compressed in the y direction, by pairs of opposite forces with magnitudes 10 N and 20 N, respectively, uniformly distributed over the corresponding faces. The stress tensor inside the block will be More generally, if one chooses the first two coordinate axes arbitrarily but perpendicular to the direction of zero stress, the stress tensor will have the form and can therefore be represented by a 2 × 2 matrix, Constitutive equations Plane stress in curved surfaces In certain cases, the plane stress model can be used in the analysis of gently curved surfaces. For example, consider a thin-walled cylinder subjected to an axial compressive load uniformly distributed along its rim, and filled with a pressurized fluid. The internal pressure will generate a reactive hoop stress on the wall, a normal tensile stress directed perpendicular to the cylinder axis and tangential to its surface. The cylinder can be conceptually unrolled and analyzed as a flat thin rectangular plate subjected to tensile load in one direction and compressive load in the other direction, both parallel to the plate. Plane strain (strain matrix) If one dimension is very large compared to the others, the principal strain in the direction of the longest dimension is constrained and can be assumed to be constant, meaning that there will be effectively zero strain along it, hence yielding a plane strain condition (Figure 7.2). In this case, though all principal stresses are non-zero, the principal stress in the direction of the longest dimension can be disregarded for calculations. This allows a two-dimensional analysis of stresses, e.g. a dam analyzed at a cross section loaded by the reservoir. The corresponding strain tensor is: and the corresponding stress tensor is: in which the non-zero term arises from Poisson's effect. However, this term can be temporarily removed from the stress analysis to leave only the in-plane terms, effectively reducing the analysis to two dimensions. 
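As a numerical sanity check of the rectangular-block example above, the short sketch below recovers the stress components as force divided by face area. It assumes the 10 N tension acts across the face normal to the x axis (40 cm × 5 cm) and the 20 N compression across the face normal to the y axis (10 cm × 5 cm); the function and variable names are illustrative, not taken from the article.

```python
import numpy as np

def uniaxial_stress(force_n, face_area_m2):
    """Normal stress (Pa) from a force spread uniformly over a face."""
    return force_n / face_area_m2

# Block dimensions along x, y, z (metres).
lx, ly, lz = 0.10, 0.40, 0.05

# 10 N tension across the face normal to x (area = ly * lz),
# 20 N compression across the face normal to y (area = lx * lz).
sigma_xx = uniaxial_stress(+10.0, ly * lz)   # +500 Pa (tension)
sigma_yy = uniaxial_stress(-20.0, lx * lz)   # -4000 Pa (compression)

# Full 3x3 Cauchy stress tensor: one principal stress (sigma_zz) is zero,
# so the state is a plane stress and can be carried as a 2x2 matrix.
sigma_3x3 = np.diag([sigma_xx, sigma_yy, 0.0])
sigma_2x2 = sigma_3x3[:2, :2]

print(sigma_3x3)
print(sigma_2x2)
```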
Stress transformation in plane stress and plane strain Consider a point in a continuum under a state of plane stress, or plane strain, with stress components and all other stress components equal to zero (Figure 8.1). From static equilibrium of an infinitesimal material element at (Figure 8.2), the normal stress and the shear stress on any plane perpendicular to the - plane passing through with a unit vector making an angle of with the horizontal, i.e. is the direction cosine in the direction, is given by: These equations indicate that in a plane stress or plane strain condition, one can determine the stress components at a point on all directions, i.e. as a function of , if one knows the stress components on any two perpendicular directions at that point. It is important to remember that we are considering a unit area of the infinitesimal element in the direction parallel to the - plane. The principal directions (Figure 8.3), i.e., orientation of the planes where the shear stress components are zero, can be obtained by making the previous equation for the shear stress equal to zero. Thus we have: and we obtain This equation defines two values which are apart (Figure 8.3). The same result can be obtained by finding the angle which makes the normal stress a maximum, i.e. The principal stresses and , or minimum and maximum normal stresses and , respectively, can then be obtained by replacing both values of into the previous equation for . This can be achieved by rearranging the equations for and , first transposing the first term in the first equation and squaring both sides of each of the equations then adding them. Thus we have where which is the equation of a circle of radius centered at a point with coordinates , called Mohr's circle. But knowing that for the principal stresses the shear stress , then we obtain from this equation: When the infinitesimal element is oriented in the direction of the principal planes, thus the stresses acting on the rectangular element are principal stresses: and . Then the normal stress and shear stress as a function of the principal stresses can be determined by making . Thus we have Then the maximum shear stress occurs when , i.e. (Figure 8.3): Then the minimum shear stress occurs when , i.e. (Figure 8.3): See also Plane strain References Metallurgy Mechanical engineering
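The transformation and principal-stress relations above can be exercised with a short script. The formulas are the standard plane-stress expressions restated in code; the sample stress values are made up purely for illustration.

```python
import numpy as np

def plane_stress_on_plane(sx, sy, txy, theta):
    """Normal and shear stress on a plane whose normal makes angle theta
    (radians) with the x axis, for a plane-stress state (sx, sy, txy)."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    sigma_n = 0.5 * (sx + sy) + 0.5 * (sx - sy) * c2 + txy * s2
    tau_n = -0.5 * (sx - sy) * s2 + txy * c2
    return sigma_n, tau_n

def principal_stresses(sx, sy, txy):
    """Principal stresses, maximum in-plane shear and principal angle,
    i.e. the centre and radius of Mohr's circle."""
    centre = 0.5 * (sx + sy)
    radius = np.hypot(0.5 * (sx - sy), txy)       # tau_max
    theta_p = 0.5 * np.arctan2(2 * txy, sx - sy)  # principal direction
    return centre + radius, centre - radius, radius, theta_p

# Illustrative state (MPa): sx = 80, sy = -20, txy = 40.
s1, s2, tau_max, theta_p = principal_stresses(80.0, -20.0, 40.0)

# Shear vanishes on the principal planes, as the article states:
_, tau_on_principal = plane_stress_on_plane(80.0, -20.0, 40.0, theta_p)
print(s1, s2, tau_max, np.degrees(theta_p), round(tau_on_principal, 9))
```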
Plane stress
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,192
[ "Applied and interdisciplinary physics", "Metallurgy", "Materials science", "nan", "Mechanical engineering" ]
26,961,524
https://en.wikipedia.org/wiki/Heyting%20field
A Heyting field is one of the inequivalent ways in constructive mathematics to capture the classical notion of a field. It is essentially a field with an apartness relation. Definition A commutative ring is a Heyting field if it is a field in the sense that Each non-invertible element is zero and if it is moreover local: Not only does the non-invertible not equal the invertible , but the following disjunctions are granted more generally Either or is invertible for every The third axiom may also be formulated as the statement that the algebraic "" transfers invertibility to one of its inputs: If is invertible, then either or is as well. Relation to classical logic The structure defined without the third axiom may be called a weak Heyting field. Every such structure with decidable equality being a Heyting field is equivalent to excluded middle. Indeed, classically all fields are already local. Discussion An apartness relation is defined by writing if is invertible. This relation is often now written as with the warning that it is not equivalent to . The assumption is then generally not sufficient to construct the inverse of . However, is sufficient. Example The prototypical Heyting field is the real numbers. See also Constructive analysis Pseudo-order Markov's principle References Mines, Richman, Ruitenberg. A Course in Constructive Algebra. Springer, 1987. Constructivism (mathematics)
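Written out symbolically, the axioms and the derived apartness relation described above read roughly as follows. This is an informal transcription for orientation, not a quotation from the cited reference.

```latex
% Field axiom: every non-invertible element is zero
\forall x.\; \bigl(\neg\, \exists y.\; x y = 1\bigr) \rightarrow x = 0

% Locality (third axiom): invertibility of a sum is inherited by a summand
\forall x, y.\; \bigl(x + y \text{ invertible}\bigr) \rightarrow
  \bigl(x \text{ invertible}\bigr) \vee \bigl(y \text{ invertible}\bigr)

% Apartness relation defined from invertibility
x \mathrel{\#} y \;:\Longleftrightarrow\; (x - y) \text{ invertible}
```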
Heyting field
[ "Mathematics" ]
297
[ "Mathematical logic", "Algebra stubs", "Constructivism (mathematics)", "Algebra" ]
26,964,158
https://en.wikipedia.org/wiki/Trion%20%28physics%29
A trion is a bound state of three charged particles. A negatively charged trion in crystals consists of two electrons and one hole, while a positively charged trion consists of two holes and one electron. The binding energy of a trion is largely determined by the exchange interaction between the two electrons (holes). The ground state of a negatively charged trion is a singlet (total spin of two electrons S=0). The triplet state (total spin of two electrons S=1) is unbound in the absence of an additional potential or sufficiently strong magnetic field. Like excitons, trions can be created by optical excitation. An incident photon creates an exciton, and this exciton binds to an additional electron (hole), creating a trion. The binding time of the exciton to the extra electron is of the same order as the time of exciton formation. This is why trions are observed not only in the emission spectra, but also in the absorption and reflection spectra. Trion states were predicted theoretically in 1958; they were first observed experimentally in 1993 in CdTe/Cd1−xZnxTe quantum wells by Ronald Cox and co-authors, and later in various other semiconductor structures. In recent years, trion states in quantum dots have been actively studied. There is experimental evidence of their existence in nanotubes, supported by theoretical studies. Particularly interesting is the study of trions in atomically thin two-dimensional (2D) layers of transition metal dichalcogenides. In such materials, the interaction between the charge carriers is enhanced many times over due to the weakening of the screening. An important property of a trion is that its ground state is a singlet. As a result, in a sufficiently large magnetic field, when all the electrons are spin-polarised, trions are created by light of only one circular polarization. In this polarization, excitons with the appropriate angular momentum form singlet trion states. Light with the opposite circular polarization can only form triplet states of the trion. In addition to the formation of bound states, the interaction of excitons with electrons can lead to the scattering of excitons by electrons. In a magnetic field, the electron spectrum becomes discrete, and the exciton states scattered by electrons manifest as the phenomenon of "exciton cyclotron resonance" (ExCR). In ExCR, an incident photon creates an exciton, which forces an additional electron to transfer between Landau levels. The reverse process is called "shake-up". In this case, the recombination of the trion is accompanied by the transition of an additional electron between Landau levels. Since the energies of an exciton and a trion are close, they can form a coherent bound state in which a trion can "lose" an electron to become an exciton and an exciton can "capture" an electron to become a trion. If there is no time between the loss and capture of the electron for it to dissipate, a mixed state similar to an exciton-polariton is formed. Such states have been reliably observed in quantum wells and monolayers of dichalcogenides. The exciton-electron interaction in the presence of a dense electron gas can lead to the formation of the so-called "Suris tetron". This is a state of four particles: an exciton, an electron and a hole in the Fermi sea. References Spintronics Quasiparticles Quantum electronics
Trion (physics)
[ "Physics", "Materials_science" ]
744
[ "Quantum electronics", "Spintronics", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Nanotechnology", "Quasiparticles", "Matter" ]
26,964,637
https://en.wikipedia.org/wiki/SDD-AGE
In biochemistry and molecular biology, SDD-AGE is short for Semi-Denaturating Detergent Agarose Gel Electrophoresis. This is a method for detecting and characterizing large protein polymers which are stable in 2% SDS at room temperature, unlike most large protein complexes. This method is very useful for studying prions and amyloids, which are characterized by the formation of proteinaceous polymers. Agarose is used for the gel since the SDS-resistant polymers are large (in the 200-4000+ kDa range) and cannot enter a conventional polyacrylamide gel, which has small pores. Agarose on the other hand has large pores, which allows for the separation of polymers. Use of this method allowed researchers to understand that at least some types of prion aggregates existed in a two-level structure - protein molecules grouped into polymers, which are very stable and withstand treatment with 2% SDS at room temperature, and aggregates, which are bundles of polymers, that dissociate under these conditions. Differences in the size of polymers can indicate the efficiency of polymer fragmentation in vivo. History The method was created in the Molecular Genetics laboratory of the Russian Cardiology Research Institute and was published in 2003 by Kryndushkin et al. The original method used a TAE buffering system and incorporated a modified vacuum blotting system for the transfer of proteins onto a membrane (originally PVDF). The modified vacuum blotting system is actually a vacuum-assisted capillary transfer, since the vacuum only helps fluid that has already gone through the gel and membrane to leave the system. Variations Other modifications have also been used, such as the one described in Bagriantsev et al., using traditional wet transfer and a TGB buffering system, and others using semi-dry transfer or capillary transfer. DD-AGE, a further variation of the method that uses fully denaturing conditions - including reducing agents such as dithiothreitol (DTT) and heat denaturation at 95°C - is suitable for the analysis of heat-stable inclusion bodies of polyglutamine proteins. References Protein methods Molecular biology
SDD-AGE
[ "Chemistry", "Biology" ]
450
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Molecular biology", "Biochemistry" ]
26,967,898
https://en.wikipedia.org/wiki/Macroketone
Macroketones are macrocyclic compounds that contain a ketone functional group. Macroketones form the central rings systems of some synthetic polyketide antibiotics. References Ketones Macrocycles
Macroketone
[ "Chemistry" ]
42
[ "Ketones", "Organic compounds", "Functional groups", "Macrocycles" ]
26,968,071
https://en.wikipedia.org/wiki/Glembatumumab%20vedotin
Glembatumumab vedotin (also known as CDX-011 and CR011-vcMMAE) is an antibody-drug conjugate (ADC) that targets cancer cells expressing transmembrane glycoprotein NMB (GPNMB). In May 2010, the U.S FDA granted Fast Track designation to CDX-011 for the treatment of advanced, refractory, or resistant GPNMB-expressing breast cancer. Structure and mechanism The fully human IgG2 monoclonal antibody glembatumumab (CR011) is linked to monomethyl auristatin E (MMAE). It uses a valine-citrulline enzyme-cleavable linker. The linkage is stable in the bloodstream. The antibody binds to GPNMB on the cancer cells, the ADC is internalised, the linkage is broken and MMAE is released to kill the cell. In preclinical studies glembatumumab vedotin was capable of killing GPNMB expressing melanoma and breast cancer cells in vitro and inducing partial or complete regression of GPNMB-expressing tumors in mouse models. Development Glembatumumab vedotin was in development through April 2018 by Celldex Therapeutics, who acquired CuraGen in 2009. It was originally developed through a partnership between CuraGen and Amgen, using Xenomouse technology licensed from Abgenix and ADC technology licensed from Seattle Genetics. In 2015, Celldex announced that it had formed a cooperative research and development agreement with NCI to sponsor two clinical trials for uveal melanoma and pediatric osteosarcoma. These were both phase II clinical trials. Clinical trials In September 2010 a Phase 2b clinical study started of glembatumumab vedotin in 120 patients with GPNMB-expressing breast cancer including those with triple negative breast cancer. , Phase I/II clinical trials of glembatumumab vedotin for the treatment of advanced melanoma and breast cancer have been completed but no official study result was posted. Preliminary results from these trials have shown that glembatumumab vedotin has some clinical activity (promotes tumor shrinkage) in both cancer types. Patients whose tumors express GPNMB respond better to glembatumumab and have longer progression-free survival than those whose tumors do not express GPNMB; in melanoma, and breast cancer. An accelerated approval Phase II clinical trial (METRIC) investigating glembatumumab vedotin versus capecitabine (2:1 with crossover allowed) has begun in November 2013, expected to enroll 300 patients with GPNMB-expressing metastatic triple negative breast cancer. Patients who have progressed after receiving anthracyclines and taxanes are eligible. Development of the ADC was discontinued in April 2018 after missing the primary endpoint of its study and failed to help women with tough-to-treat metastatic triple-negative breast cancers (TNBC) stay both alive and progression-free for longer than Roche Holding AG's Xeloda (capecitabine). References Antibody-drug conjugates Monoclonal antibodies for tumors Experimental cancer drugs
Glembatumumab vedotin
[ "Biology" ]
677
[ "Antibody-drug conjugates" ]
142,100
https://en.wikipedia.org/wiki/Hydride
In chemistry, a hydride is formally the anion of hydrogen (H−), a hydrogen ion with two electrons. In modern usage, this is typically only used for ionic bonds, but it is sometimes (and more frequently in the past) been applied to all compounds containing covalently bound H atoms. In this broad and potentially archaic sense, water (H2O) is a hydride of oxygen, ammonia is a hydride of nitrogen, etc. In covalent compounds, it implies hydrogen is attached to a less electronegative element. In such cases, the H centre has nucleophilic character, which contrasts with the protic character of acids. The hydride anion is very rarely observed. Almost all of the elements form binary compounds with hydrogen, the exceptions being He, Ne, Ar, Kr, Pm, Os, Ir, Rn, Fr, and Ra. Exotic molecules such as positronium hydride have also been made. Bonds Bonds between hydrogen and the other elements range from being highly ionic to somewhat covalent. Some hydrides, e.g. boron hydrides, do not conform to classical electron counting rules and the bonding is described in terms of multi-centered bonds, whereas the interstitial hydrides often involve metallic bonding. Hydrides can be discrete molecules, oligomers or polymers, ionic solids, chemisorbed monolayers, bulk metals (interstitial), or other materials. While hydrides traditionally react as Lewis bases or reducing agents, some metal hydrides behave as hydrogen-atom donors and act as acids. Applications Hydrides such as sodium borohydride, lithium aluminium hydride, diisobutylaluminium hydride (DIBAL) and super hydride, are commonly used as reducing agents in chemical synthesis. The hydride adds to an electrophilic center, typically unsaturated carbon. Hydrides such as sodium hydride and potassium hydride are used as strong bases in organic synthesis. The hydride reacts with the weak Bronsted acid releasing H2. Hydrides such as calcium hydride are used as desiccants, i.e. drying agents, to remove trace water from organic solvents. The hydride reacts with water forming hydrogen and hydroxide salt. The dry solvent can then be distilled or vacuum transferred from the "solvent pot". Hydrides are important in storage battery technologies such as nickel-metal hydride battery. Various metal hydrides have been examined for use as a means of hydrogen storage for fuel cell-powered electric cars and other purposed aspects of a hydrogen economy. Hydride complexes are catalysts and catalytic intermediates in a variety of homogeneous and heterogeneous catalytic cycles. Important examples include hydrogenation, hydroformylation, hydrosilylation, hydrodesulfurization catalysts. Even certain enzymes, the hydrogenase, operate via hydride intermediates. The energy carrier nicotinamide adenine dinucleotide reacts as a hydride donor or hydride equivalent. Hydride ion Free hydride anions exist only under extreme conditions and are not invoked for homogeneous solution. Instead, many compounds have hydrogen centres with hydridic character. Aside from electride, the hydride ion is the simplest possible anion, consisting of two electrons and a proton. Hydrogen has a relatively low electron affinity, 72.77 kJ/mol and reacts exothermically with protons as a powerful Lewis base. H- + H+ -> H2 The low electron affinity of hydrogen and the strength of the H–H bond () means that the hydride ion would also be a strong reducing agent H2 + 2e- <=> 2H- Types of hydrides According to the general definition, every element of the periodic table (except some noble gases) forms one or more hydrides. 
These substances have been classified into three main types according to the nature of their bonding: Ionic hydrides, which have significant ionic bonding character. Covalent hydrides, which include the hydrocarbons and many other compounds which covalently bond to hydrogen atoms. Interstitial hydrides, which may be described as having metallic bonding. While these divisions have not been used universally, they are still useful to understand differences in hydrides. Ionic hydrides These are stoichiometric compounds of hydrogen. Ionic or saline hydrides are composed of hydride bound to an electropositive metal, generally an alkali metal or alkaline earth metal. The divalent lanthanides such as europium and ytterbium form compounds similar to those of heavier alkaline earth metals. In these materials the hydride is viewed as a pseudohalide. Saline hydrides are insoluble in conventional solvents, reflecting their non-molecular structures. Ionic hydrides are used as bases and, occasionally, as reducing reagents in organic synthesis. C6H5C(O)CH3 + KH → C6H5C(O)CH2K + H2 Typical solvents for such reactions are ethers. Water and other protic solvents cannot serve as a medium for ionic hydrides because the hydride ion is a stronger base than hydroxide and most hydroxyl anions. Hydrogen gas is liberated in a typical acid-base reaction. NaH + H2O -> H2_{(g)}{} + NaOH ΔH = −83.6 kJ/mol, ΔG = −109.0 kJ/mol Often alkali metal hydrides react with metal halides. Lithium aluminium hydride (often abbreviated as LAH) arises from reactions of lithium hydride with aluminium chloride. 4 LiH + AlCl3 → LiAlH4 + 3 LiCl Covalent hydrides According to some definitions, covalent hydrides cover all other compounds containing hydrogen. Some definitions limit hydrides to hydrogen centres that formally react as hydrides, i.e. are nucleophilic, and hydrogen atoms bound to metal centers. These hydrides are formed by all the true non-metals (except zero group elements) and the elements like Al, Ga, Sn, Pb, Bi, Po, etc., which are normally metallic in nature, i.e., this class includes the hydrides of p-block elements. In these substances the hydride bond is formally a covalent bond much like the bond made by a proton in a weak acid. This category includes hydrides that exist as discrete molecules, polymers or oligomers, and hydrogen that has been chem-adsorbed to a surface. A particularly important segment of covalent hydrides are complex metal hydrides, powerful soluble hydrides commonly used in synthetic procedures. Molecular hydrides often involve additional ligands; for example, diisobutylaluminium hydride (DIBAL) consists of two aluminum centers bridged by hydride ligands. Hydrides that are soluble in common solvents are widely used in organic synthesis. Particularly common are sodium borohydride () and lithium aluminium hydride and hindered reagents such as DIBAL. Interstitial hydrides or metallic hydrides Interstitial hydrides most commonly exist within metals or alloys. They are traditionally termed "compounds" even though they do not strictly conform to the definition of a compound, more closely resembling common alloys such as steel. In such hydrides, hydrogen can exist as either atomic or diatomic entities. Mechanical or thermal processing, such as bending, striking, or annealing, may cause the hydrogen to precipitate out of solution by degassing. Their bonding is generally considered metallic. Such bulk transition metals form interstitial binary hydrides when exposed to hydrogen. 
These systems are usually non-stoichiometric, with variable amounts of hydrogen atoms in the lattice. In materials engineering, the phenomenon of hydrogen embrittlement results from the formation of interstitial hydrides. Hydrides of this type form according to either one of two main mechanisms. The first mechanism involves the adsorption of dihydrogen, succeeded by the cleaving of the H-H bond, the delocalisation of the hydrogen's electrons, and finally the diffusion of the protons into the metal lattice. The other main mechanism involves the electrolytic reduction of ionised hydrogen on the surface of the metal lattice, also followed by the diffusion of the protons into the lattice. The second mechanism is responsible for the observed temporary volume expansion of certain electrodes used in electrolytic experiments. Palladium absorbs up to 900 times its own volume of hydrogen at room temperatures, forming palladium hydride. This material has been discussed as a means to carry hydrogen for vehicular fuel cells. Interstitial hydrides show certain promise as a way for safe hydrogen storage. Neutron diffraction studies have shown that hydrogen atoms randomly occupy the octahedral interstices in the metal lattice (in an fcc lattice there is one octahedral hole per metal atom). The limit of absorption at normal pressures is PdH0.7, indicating that approximately 70% of the octahedral holes are occupied. Many interstitial hydrides have been developed that readily absorb and discharge hydrogen at room temperature and atmospheric pressure. They are usually based on intermetallic compounds and solid-solution alloys. However, their application is still limited, as they are capable of storing only about 2 weight percent of hydrogen, insufficient for automotive applications. Transition metal hydride complexes Transition metal hydrides include compounds that can be classified as covalent hydrides. Some are even classified as interstitial hydrides and other bridging hydrides. Classical transition metal hydride feature a single bond between the hydrogen centre and the transition metal. Some transition metal hydrides are acidic, e.g., and . The anions potassium nonahydridorhenate and are examples from the growing collection of known molecular homoleptic metal hydrides. As pseudohalides, hydride ligands are capable of bonding with positively polarized hydrogen centres. This interaction, called dihydrogen bonding, is similar to hydrogen bonding, which exists between positively polarized protons and electronegative atoms with open lone pairs. Protides Hydrides containing protium are known as protides. Deuterides Hydrides containing deuterium are known as deuterides. Some deuterides, such as LiD, are important fusion fuels in thermonuclear weapons and useful moderators in nuclear reactors. Tritides Hydrides containing tritium are known as tritides. Mixed anion compounds Mixed anion compounds exist that contain hydride with other anions. These include boride hydrides, carbohydrides, hydridonitrides, oxyhydrides and others. Appendix on nomenclature Protide, deuteride and tritide are used to describe ions or compounds that contain enriched hydrogen-1, deuterium or tritium, respectively. In the classic meaning, hydride refers to any compound hydrogen forms with other elements, ranging over groups 1–16 (the binary compounds of hydrogen). 
The following is a list of the nomenclature for the hydride derivatives of main group compounds according to this definition: alkali and alkaline earth metals: metal hydride boron: borane, BH3 aluminium: alumane, AlH3 gallium: gallane, GaH3 indium: indigane, InH3 thallium: thallane, TlH3 carbon: alkanes, alkenes, alkynes, and all hydrocarbons silicon: silane germanium: germane tin: stannane lead: plumbane nitrogen: ammonia ("azane" when substituted), hydrazine phosphorus: phosphine (note "phosphane" is the IUPAC recommended name) arsenic: arsine (note "arsane" is the IUPAC recommended name) antimony: stibine (note "stibane" is the IUPAC recommended name) bismuth: bismuthine (note "bismuthane" is the IUPAC recommended name) helium: helium hydride (only exists as an ion) According to the convention above, the following are "hydrogen compounds" and not "hydrides": oxygen: water ("oxidane" when substituted; synonym: hydrogen oxide), hydrogen peroxide sulfur: hydrogen sulfide ("sulfane" when substituted) selenium: hydrogen selenide ("selane" when substituted) tellurium: hydrogen telluride ("tellane" when substituted) polonium: hydrogen polonide ("polane" when substituted) halogens: hydrogen halides Examples: nickel hydride: used in NiMH batteries palladium hydride: electrodes in cold fusion experiments lithium aluminium hydride: a powerful reducing agent used in organic chemistry sodium borohydride: selective specialty reducing agent, hydrogen storage in fuel cells sodium hydride: a powerful base used in organic chemistry diborane: reducing agent, rocket fuel, semiconductor dopant, catalyst, used in organic synthesis; also borane, pentaborane and decaborane arsine: used for doping semiconductors stibine: used in semiconductor industry phosphine: used for fumigation silane: many industrial uses, e.g. manufacture of composite materials and water repellents ammonia: coolant, fuel, fertilizer, many other industrial uses hydrogen sulfide: component of natural gas, important source of sulfur Chemically, even water and hydrocarbons could be considered hydrides. All metalloid hydrides are highly flammable. All solid non-metallic hydrides except ice are highly flammable. But when hydrogen combines with halogens it produces acids rather than hydrides, and they are not flammable. Precedence convention According to IUPAC convention, by precedence (stylized electronegativity), hydrogen falls between group 15 and group 16 elements. Therefore, we have NH3, "nitrogen hydride" (ammonia), versus H2O, "hydrogen oxide" (water). This convention is sometimes broken for polonium, which on the grounds of polonium's metallicity is often referred to as "polonium hydride" instead of the expected "hydrogen polonide". See also Parent hydride Hydron (hydrogen cation) Hydronium Proton Hydrogen ion Hydride compressor Superhydrides References Bibliography W. M. Mueller, J. P. Blackledge, G. G. Libowitz, Metal Hydrides, Academic Press, N.Y. and London, (1968) External links Anions Hydrogen storage Functional groups
Hydride
[ "Physics", "Chemistry" ]
3,215
[ "Ions", "Functional groups", "Matter", "Anions" ]
142,257
https://en.wikipedia.org/wiki/Varistor
A varistor (a.k.a. voltage-dependent resistor (VDR)) is a surge protecting electronic component with an electrical resistance that varies with the applied voltage. It has a nonlinear, non-ohmic current–voltage characteristic that is similar to that of a diode. Unlike a diode however, it has the same characteristic for both directions of traversing current. Traditionally, varistors were constructed by connecting two rectifiers, such as the copper-oxide or germanium-oxide rectifier in antiparallel configuration. At low voltage the varistor has a high electrical resistance which decreases as the voltage is raised. Modern varistors are primarily based on sintered ceramic metal-oxide materials which exhibit directional behavior only on a microscopic scale. This type is commonly known as the metal-oxide varistor (MOV). Varistors are used as control or compensation elements in circuits either to provide optimal operating conditions or to protect against excessive transient voltages. When used as protection devices, they shunt the current created by the excessive voltage away from sensitive components when triggered. The name varistor is a portmanteau of varying resistor. The term is only used for non-ohmic varying resistors. Variable resistors, such as the potentiometer and the rheostat, have ohmic characteristics. History The development of the varistor, in form of a new type of rectifier based on a cuprous oxide (Cu2O) layer on copper, originated in the work by L.O. Grondahl and P.H. Geiger in 1927. The copper-oxide varistor exhibited a varying resistance in dependence on the polarity and magnitude of applied voltage. It was constructed from a small copper disk, on one side of which, a layer of cuprous oxide was formed. This arrangement provides low resistance to current flowing from the semiconducting oxide to the copper side, but a high resistance to current in the opposite direction, with the instantaneous resistance varying continuously with the voltage applied. In the 1930s, small multiple-varistor assemblies of a maximum dimension of less than one inch and apparently indefinite useful lifetime found application in replacing bulky electron tube circuits as modulators and demodulators in carrier current systems for telephonic transmission. Other applications for varistors in the telephone plant included protection of circuits from voltage spikes and noise, as well as click suppression on receiver (ear-piece) elements to protect users' ears from popping noises when switching circuits. These varistors were constructed by layering an even number of rectifier disks in a stack and connecting the terminal ends and the center in an anti-parallel configuration, as shown in the photo of a Western Electric Type 3B varistor of June 1952 (below). The Western Electric type 500 telephone set of 1949 introduced a dynamic loop equalization circuit using varistors that shunted relatively high levels of loop current on short central office loops to adjust the transmission and receiving signal levels automatically. On long loops, the varistors maintained a relatively high resistance and did not alter the signals significantly. Another type of varistor was made from silicon carbide (SiC) by R. O. Grisdale in the early 1930s. It was used to guard telephone lines from lightning. 
In the early 1970s, Japanese researchers recognized the semiconducting electronic properties of zinc oxide (ZnO) as being useful as a new varistor type in a ceramic sintering process, which exhibited a voltage-current function similar to that of a pair of back-to-back Zener diodes. This type of device became the preferred method for protecting circuits from power surges and other destructive electric disturbances, and became known generally as the metal-oxide varistor (MOV). The randomness of orientation of ZnO grains in the bulk of this material provided the same voltage-current characteristics for both directions of current flow. Composition, properties, and operation of the metal-oxide varistor The most common modern type of varistor is the metal-oxide varistor (MOV). This type contains a ceramic mass of zinc oxide (ZnO) grains, in a matrix of other metal oxides, such as small amounts of bismuth, cobalt, manganese oxides, sandwiched between two metal plates, which constitute the electrodes of the device. The boundary between each grain and a neighbor forms a diode junction, which allows current to flow in only one direction. The accumulation of randomly oriented grains is electrically equivalent to a network of back-to-back diode pairs, each pair in parallel with many other pairs. When a small voltage is applied across the electrodes, only a tiny current flows, caused by reverse leakage through the diode junctions. When a large voltage is applied, the diode junction breaks down due to a combination of thermionic emission and electron tunneling, resulting in a large current flow. The result of this behavior is a nonlinear current-voltage characteristic, in which the MOV has a high resistance at low voltages and a low resistance at high voltages. Electrical characteristics A varistor remains non-conductive as a shunt-mode device during normal operation when the voltage across it remains well below its "clamping voltage", thus varistors are typically used for suppressing line voltage surges. Varistors can fail for either of two reasons. A catastrophic failure occurs from not successfully limiting a very large surge from an event like a lightning strike, where the energy involved is many orders of magnitude greater than the varistor can handle. Follow-through current resulting from a strike may melt, burn, or even vaporize the varistor. This thermal runaway is due to a lack of conformity in individual grain-boundary junctions, which leads to the failure of dominant current paths under thermal stress when the energy in a transient pulse (normally measured in joules) is too high (i.e. significantly exceeds the manufacture's "Absolute Maximum Ratings"). The probability of catastrophic failure can be reduced by increasing the rating, or using specially selected MOVs in parallel. Cumulative degradation occurs as more surges happen. For historical reasons, many MOVs have been incorrectly specified allowing frequent swells to also degrade capacity. In this condition the varistor is not visibly damaged and outwardly appears functional (no catastrophic failure), but it no longer offers protection. Eventually, it proceeds into a shorted circuit condition as the energy discharges create a conductive channel through the oxides. The main parameter affecting varistor life expectancy is its energy (Joule) rating. 
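The strongly nonlinear current–voltage behaviour described above is often modelled, within the conduction region, by an empirical power law I = Iref·(V/Vref)^α with a large exponent α. The sketch below uses that model with made-up parameter values purely for illustration; real device parameters come from a manufacturer's datasheet.

```python
import numpy as np

def varistor_current(voltage, v_ref=300.0, i_ref=1.0, alpha=30.0):
    """Empirical power-law model of MOV conduction:
    I = i_ref * (V / v_ref) ** alpha.
    v_ref, i_ref and alpha are illustrative placeholders, not datasheet values."""
    return i_ref * (voltage / v_ref) ** alpha

voltages = np.array([100.0, 200.0, 300.0, 400.0])
currents = varistor_current(voltages)
resistance = voltages / currents   # effective resistance V / I

for v, i, r in zip(voltages, currents, resistance):
    # Effective resistance collapses by many orders of magnitude as V rises.
    print(f"V = {v:6.1f} V   I = {i:10.3e} A   R_eff = {r:10.3e} ohm")
```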
Increasing the energy rating raises the number of (defined maximum size) transient pulses that it can accommodate exponentially as well as the cumulative sum of energy from clamping lesser pulses. As these pulses occur, the "clamping voltage" it provides during each event decreases, and a varistor is typically deemed to be functionally degraded when its "clamping voltage" has changed by 10%. Manufacturer's life-expectancy charts relate current, severity, and number of transients to make failure predictions based on the total energy dissipated over the life of the part. In consumer electronics, particularly surge protectors, the MOV varistor size employed is small enough that eventually failure is expected. Other applications, such as power transmission, use VDRs of different construction in multiple configurations engineered for long life span. Voltage rating MOVs are specified according to the voltage range that they can tolerate without damage. Other important parameters are the varistor's energy rating in joules, operating voltage, response time, maximum current, and breakdown (clamping) voltage. Energy rating is often defined using standardized transients such as 8/20 microseconds or 10/1000 microseconds, where 8 microseconds is the transient's front time and 20 microseconds is the time to half value. Capacitance Typical capacitance for consumer-sized (7–20 mm diameter) varistors are in the range of 100–2,500 pF. Smaller, lower-capacitance varistors are available with capacitance of ~1 pF for microelectronic protection, such as in cellular phones. These low-capacitance varistors are, however, unable to withstand large surge currents simply due to their compact PCB-mount size. Response time The response time of the MOV is not standardized. The sub-nanosecond MOV response claim is based on the material's intrinsic response time, but will be slowed down by other factors such as the inductance of component leads and the mounting method. That response time is also qualified as insignificant when compared to a transient having an 8 µs rise-time, thereby allowing ample time for the device to slowly turn-on. When subjected to a very fast, <1 ns rise-time transient, response times for the MOV are in the 40–60 ns range. Applications A typical surge protector power strip is built using MOVs. Low-cost versions may use only one varistor, from the hot (live, active) to the neutral conductor. A better protector contains at least three varistors; one across each of the three pairs of conductors. Some standards mandate a triple varistor scheme so that catastrophic MOV failure does not create a fire hazard. Hazards While a MOV is designed to conduct significant power for very short durations (about 8 to 20 microseconds), such as caused by lightning strikes, it typically does not have the capacity to conduct sustained energy. Under normal utility voltage conditions, this is not a problem. However, certain types of faults on the utility power grid can result in sustained over-voltage conditions. Examples include a loss of a neutral conductor or shorted lines on the high voltage system. Application of sustained over-voltage to a MOV can cause high dissipation, potentially resulting in the MOV device catching fire. The National Fire Protection Association (NFPA) has documented many cases of catastrophic fires that have been caused by MOV devices in surge suppressors, and has issued bulletins on the issue. A series connected thermal fuse is one solution to catastrophic MOV failure. 
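A rough way to compare a varistor's joule rating with an expected surge is to integrate v·i over the transient. The sketch below approximates an 8/20 µs current impulse with an assumed double-exponential shape and treats the clamping voltage as constant while the device conducts; both simplifications are assumptions made for illustration, not part of the standardized waveform definition.

```python
import numpy as np

def surge_energy_joules(i_peak_a, v_clamp_v, duration_s=100e-6, n=20000):
    """Approximate energy absorbed by a MOV during one current impulse,
    assuming a constant clamping voltage while the device conducts.
    The impulse is an assumed double-exponential stand-in for an 8/20 us
    surge, scaled to the requested peak current."""
    t = np.linspace(0.0, duration_s, n)
    shape = np.exp(-t / 20e-6) - np.exp(-t / 4e-6)   # assumed rise/decay constants
    i = i_peak_a * shape / shape.max()               # scale to the peak current
    dt = t[1] - t[0]
    return float(np.sum(v_clamp_v * i) * dt)         # E ~= integral of v * i dt

# Example: a 1000 A peak surge clamped at 400 V absorbs on the order of 10 J,
# which can then be compared against the device's published joule rating.
print(f"absorbed energy ~= {surge_energy_joules(1000.0, 400.0):.1f} J")
```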
Varistors with internal thermal protection are also available. There are several issues to be noted regarding behavior of transient voltage surge suppressors (TVSS) incorporating MOVs under over-voltage conditions. Depending on the level of conducted current, dissipated heat may be insufficient to cause failure, but may degrade the MOV device and reduce its life expectancy. If excessive current is conducted by a MOV, it may fail catastrophically to an open circuit condition, keeping the load connected but now without any surge protection. A user may have no indication that the surge suppressor has failed. Under the right conditions of over-voltage and line impedance, it may be possible to cause the MOV to burst into flames, the root cause of many fires which is the main reason for NFPA's concern resulting in UL1449 in 1986 and subsequent revisions in 1998 and 2009. Properly designed TVSS devices must not fail catastrophically, instead resulting in the opening of a thermal fuse or something equivalent that only disconnects MOV devices. Limitations A MOV inside a transient voltage surge suppressor (TVSS) does not provide complete protection for electrical equipment. In particular, it provides no protection from sustained over-voltages that may result in damage to that equipment as well as to the protector device. Other sustained and harmful over-voltages may be lower and therefore ignored by a MOV device. A varistor provides no equipment protection from inrush current surges (during equipment startup), from overcurrent (created by a short circuit), or from voltage sags (brownouts); it neither senses nor affects such events. Susceptibility of electronic equipment to these other electric power disturbances is defined by other aspects of the system design, either inside the equipment itself or externally by means such as a UPS, a voltage regulator or a surge protector with built-in overvoltage protection (which typically consists of a voltage-sensing circuit and a relay for disconnecting the AC input when the voltage reaches a danger threshold). Comparison to other transient suppressors Another method for suppressing voltage spikes is the transient-voltage-suppression diode (TVS). Although diodes do not have as much capacity to conduct large surges as MOVs, diodes are not degraded by smaller surges and can be implemented with a lower "clamping voltage". MOVs degrade from repeated exposure to surges and generally have a higher "clamping voltage" so that leakage does not degrade the MOV. Both types are available over a wide range of voltages. MOVs tend to be more suitable for higher voltages, because they can conduct the higher associated energies at less cost. Another type of transient suppressor is the gas-tube suppressor. This is a type of spark gap that may use air or an inert gas mixture and often, a small amount of radioactive material such as Ni-63, to provide a more consistent breakdown voltage and reduce response time. Unfortunately, these devices may have higher breakdown voltages and longer response times than varistors. However, they can handle significantly higher fault currents and withstand multiple high-voltage hits (for example, from lightning) without significant degradation. Multi-layer varistor Multi-layer varistor (MLV) devices provide electrostatic discharge protection to electronic circuits from low to medium energy transients in sensitive equipment operating at 0–120 volts dc. 
They have peak current ratings from about 20 to 500 amperes, and peak energy ratings from 0.05 to 2.5 joules. See also Resettable fuse, a current-sensitive device Trisil References External links The ABCs of MOVs — application notes from Littelfuse company Varistor testing from Littelfuse company Electrical components Resistive components
Varistor
[ "Physics", "Technology", "Engineering" ]
2,909
[ "Electrical components", "Physical quantities", "Resistive components", "Electrical engineering", "Components", "Electrical resistance and conductance" ]
142,340
https://en.wikipedia.org/wiki/Stained%20glass
Stained glass is colored glass as a material or works created from it. Although, it is traditionally made in flat panels and used as windows, the creations of modern stained glass artists also include three-dimensional structures and sculpture. Modern vernacular usage has often extended the term "stained glass" to include domestic lead light and objets d'art created from foil glasswork exemplified in the famous lamps of Louis Comfort Tiffany. As a material stained glass is glass that has been colored by adding metallic salts during its manufacture, and usually then further decorating it in various ways. The colored glass is crafted into stained glass windows in which small pieces of glass are arranged to form patterns or pictures, held together (traditionally) by strips of lead, called cames or calms, and supported by a rigid frame. Painted details and yellow stain are often used to enhance the design. The term stained glass is also applied to windows in enamelled glass in which the colors have been painted onto the glass and then fused to the glass in a kiln; very often this technique is only applied to parts of a window. Stained glass, as an art and a craft, requires the artistic skill to conceive an appropriate and workable design, and the engineering skills to assemble the piece. A window must fit snugly into the space for which it is made, must resist wind and rain, and also, especially in the larger windows, must support its own weight. Many large windows have withstood the test of time and remained substantially intact since the Late Middle Ages. In Western Europe, together with illuminated manuscripts, they constitute the major form of medieval pictorial art to have survived. In this context, the purpose of a stained glass window is not to allow those within a building to see the world outside or even primarily to admit light but rather to control it. For this reason stained glass windows have been described as "illuminated wall decorations". The design of a window may be abstract or figurative; may incorporate narratives drawn from the Bible, history, or literature; may represent saints or patrons, or use symbolic motifs, in particular armorial. Windows within a building may be thematic, for example: within a church – episodes from the life of Christ; within a parliament building – shields of the constituencies; within a college hall – figures representing the arts and sciences; or within a home – flora, fauna, or landscape. Glass production During the late medieval period, glass factories were set up where there was a ready supply of silica, the essential material for glass manufacture. Silica requires a very high temperature to melt, something not all glass factories were able to achieve. Such materials as potash, soda, and lead can be added to lower the melting temperature. Other substances, such as lime, are added to make the glass more stable. Glass is coloured by adding metallic oxide powders or finely divided metals while it is in a molten state. Copper oxides produce green or bluish green, cobalt makes deep blue, and gold produces wine red and violet glass. Much of modern red glass is produced using copper, which is less expensive than gold and gives a brighter, more vermilion shade of red. Glass coloured while in the clay pot in the furnace is known as pot metal glass, as opposed to flashed glass. Cylinder or mouth-blown ('muff') glass Using a blow-pipe, a glass maker will gather a glob of molten glass that was taken from the pot heating in the furnace. 
The 'gather' is formed to the correct shape and a bubble of air blown into it. Using metal tools, molds of wood that have been soaking in water, and gravity, the gather is manipulated to form a long, cylindrical shape. As it cools, it is reheated so that the manipulation can continue. During the process, the bottom of the cylinder is removed. Once brought to the desired size it is left to cool. One side of the cylinder is opened, and the cylinder is then put into another oven to quickly heat and flatten it, and then placed in an annealer to cool at a controlled rate, making the material more stable. "Hand-blown" or "mouth-blown" cylinder (also called muff glass) and crown glass were the types used in the traditional fabrication of stained-glass windows. Crown glass Crown glass is hand-blown glass created by blowing a bubble of air into a gather of molten glass and then spinning it, either by hand or on a table that revolves rapidly like a potter's wheel. The centrifugal force causes the molten bubble to open up and flatten. It can then be cut into small sheets. Glass formed this way can be either coloured and used for stained-glass windows, or uncoloured as seen in small paned windows in 16th- and 17th-century houses. Concentric, curving waves are characteristic of the process. The centre of each piece of glass, known as the "bull's-eye", is subject to less acceleration during spinning, so it remains thicker than the rest of the sheet. It also has the pontil mark, a distinctive lump of glass left by the "pontil" rod, which holds the glass as it is spun out. This lumpy, refractive quality means the bulls-eyes are less transparent, but they have still been used for windows, both domestic and ecclesiastical. Crown glass is still made today, but not on a large scale. Rolled glass Rolled glass (sometimes called "table glass") is produced by pouring molten glass onto a metal or graphite table and immediately rolling it into a sheet using a large metal cylinder, similar to rolling out a pie crust. The rolling can be done by hand or by machine. Glass can be "double rolled", which means it is passed through two cylinders at once (similar to the clothes wringers on older washing machines) to yield glass of a specified thickness (typically about 1/8" or 3mm). The glass is then annealed. Rolled glass was first commercially produced around the mid-1830s and is widely used today. It is often called cathedral glass, but this has nothing to do with medieval cathedrals, where the glass used was hand-blown. Flashed glass Architectural glass must be at least of an inch (3 mm) thick to survive the push and pull of typical wind loads. However, in the creation of red glass, the colouring ingredients must be of a certain concentration, or the colour will not develop. This results in a colour so intense that at the thickness of inch (3 mm), the red glass transmits little light and appears black. The method employed to create red stained glass is to laminate a thin layer of red glass to a thicker body of glass that is clear or lightly tinted, forming "flashed glass". A lightly coloured molten gather is dipped into a pot of molten red glass, which is then blown into a sheet of laminated glass using either the cylinder (muff) or the crown technique described above. Once this method was found for making red glass, other colours were made this way as well. A great advantage is that the double-layered glass can be engraved or abraded to reveal the clear or tinted glass below. 
The method allows rich detailing and patterns to be achieved without needing to add more lead-lines, giving artists greater freedom in their designs. A number of artists have embraced the possibilities flashed glass gives them. For instance, 16th-century heraldic windows relied heavily on a variety of flashed colours for their intricate crests and creatures. In the medieval period the glass was abraded; later, hydrofluoric acid was used to remove the flash in a chemical reaction (a very dangerous technique), and in the 19th century sandblasting started to be used for this purpose. Modern production of traditional glass There are a number of glass factories, notably in Germany, the United States, England, France, Poland and Russia, which produce high-quality glass, both hand-blown (cylinder, muff, crown) and rolled (cathedral and opalescent). Modern stained-glass artists have a number of resources to use and the work of centuries of other artists from which to learn as they continue the tradition in new ways. In the late 19th and 20th centuries there have been many innovations in techniques and in the types of glass used. Many new types of glass have been developed for use in stained glass windows, in particular Tiffany glass and dalle de verre. Techniques "Pot metal" and flashed glass The primary method of including colour in stained glass is to use glass, originally colourless, that has been given colouring by mixing with metal oxides in its melted state (in a crucible or "pot"), producing glass sheets that are coloured all the way through; these are known as "pot metal" glass. A second method, sometimes used in some areas of windows, is flashed glass, a thin coating of coloured glass fused to colourless glass (or coloured glass, to produce a different colour). In medieval glass flashing was especially used for reds, as glass made with gold compounds was very expensive and tended to be too deep in colour to use at full thickness. Glass paint Another group of techniques give additional colouring, including lines and shading, by treating the surfaces of the coloured sheets, and often fixing these effects by a light firing in a furnace or kiln. These methods may be used over broad areas, especially with silver stain, which gave better yellows than other methods in the Middle Ages. Alternatively they may be used for painting linear effects, or polychrome areas of detail. The most common method of adding the black linear painting necessary to define stained glass images is the use of what is variously called "glass paint", "vitreous paint", or "grisaille paint". This was applied as a mixture of powdered glass, iron or rust filings to give a black colour, clay, and oil, vinegar or water for a brushable texture, with a binder such as gum arabic. This was painted on the pieces of coloured glass, and then fired to burn away the ingredients giving texture, leaving a layer of the glass and colouring, fused to the main glass piece. Silver stain "Silver stain", introduced soon after 1300, produced a wide range of yellow to orange colours; this is the "stain" in the term "stained glass". Silver compounds (notably silver nitrate) are mixed with binding substances, applied to the surface of glass, and then fired in a furnace or kiln. They can produce a range of colours from orange-red to yellow. Used on blue glass they produce greens. The way the glass is heated and cooled can significantly affect the colours produced by these compounds. The chemistry involved is complex and not well understood. 
The chemicals actually penetrate the glass they are added to a little way, and the technique therefore gives extremely stable results. By the 15th century it had become cheaper than using pot metal glass and was often used with glass paint as the only colour on transparent glass. Silver stain was applied to the opposite face of the glass to silver paint, as the two techniques did not work well one on top of the other. The stain was usually on the exterior face, where it appears to have given the glass some protection against weathering, although this can also be true for paint. They were also probably fired separately, the stain needing a lower heat than the paint. "Sanguine" or "Cousin's rose" "Sanguine", "carnation", "Rouge Jean Cousin" or "Cousin's rose", after its supposed inventor, is an iron-based fired paint producing red colours, mainly used to highlight small areas, often on flesh. It was introduced around 1500. Copper stain, similar to silver stain but using copper compounds, also produced reds, and was mainly used in the 18th and 19th centuries. Cold painting "Cold paint" is various types of paint that were applied without firing. Contrary to the optimistic claims of the 12th century writer Theophilus Presbyter, cold paint is not very durable, and very little medieval paint has survived. Scratching techniques As well as painting, scratched sgraffito techniques were often used. This involved painting a colour over pot metal glass of another colour, and then before firing selectively scratching the glass paint away to make the design, or the lettering of an inscription. This was the most common method of making inscriptions in early medieval glass, giving white or light letters on a black background, with later inscriptions more often using black painted letters on a transparent glass background. "Pot glass" colours These are the colours in which the glass itself is made, as opposed to colours applied to the glass. Transparent glass Ordinary soda-lime glass appears colourless to the naked eye when it is thin, although iron oxide impurities produce a green tint which becomes evident in thick pieces or with the aid of scientific instruments. A number of additives are used to reduce the green tint, particularly if the glass is to be used for plain window glass, rather than stained glass windows. These additives include manganese dioxide which produces sodium permanganate, and may result in a slightly mauve tint, characteristic of the glass in older houses in New England. Selenium has been used for the same purpose. Green glass While very pale green is the typical colour of transparent glass, deeper greens can be achieved by the addition of Iron(II) oxide which results in a bluish-green glass. Together with chromium it gives glass of a richer green colour, typical of the glass used to make wine bottles. The addition of chromium yields dark green glass, suitable for flashed glass. Together with tin oxide and arsenic it yields emerald green glass. Blue glass In medieval times, blue glass was made by adding cobalt blue, which at a concentration of 0.025% to 0.1% in soda-lime glass achieves the brilliant blue characteristic of Chartres Cathedral. The addition of sulphur to boron-rich borosilicate glasses imparts a blue colour. The addition of copper oxide at 2–3% produces a turquoise colour. The addition of nickel, at different concentrations, produces blue, violet, or black glass. 
Red glass Metallic gold, in very low concentrations (around 0.001%), produces a rich ruby-coloured glass ("ruby gold"); in even lower concentrations it produces a less intense red, often marketed as "cranberry glass". The colour is caused by the size and dispersion of gold particles. Ruby gold glass is usually made of lead glass with tin added. Pure metallic copper produces a very dark red, opaque glass. Glass created in this manner is generally "flashed" (laminated glass). It was used extensively in the late 19th and early 20th centuries and exploited for the decorative effects that could be achieved by sanding and engraving. Selenium is an important agent to make pink and red glass. When used together with cadmium sulphide, it yields a brilliant red colour known as "Selenium Ruby". Yellow glass This was very often achieved by "silver stain" applied externally to the sheets of glass (see above). The addition of sulphur, together with carbon and iron salts, is used to form iron polysulphides and produce amber glass ranging from yellowish to almost black. With calcium it yields a deep yellow colour. Adding titanium produces yellowish-brown glass. Titanium is rarely used on its own and is more often employed to intensify and brighten other additives. Cadmium together with sulphur results in deep yellow colour, often used in glazes. However, cadmium is toxic. Uranium (0.1% to 2%) can be added to give glass a fluorescent yellow or green colour. Uranium glass is typically not radioactive enough to be dangerous, but if ground into a powder, such as by polishing with sandpaper, and inhaled, it can be carcinogenic. When used with lead glass with a very high proportion of lead, it produces a deep red colour. Purple glass The addition of manganese gives an amethyst colour. Manganese is one of the oldest glass additives, and purple manganese glass has been used since early Egyptian history. Nickel, depending on the concentration, produces blue, or violet, or even black glass. Lead crystal with added nickel acquires a purplish colour. White glass Tin dioxide with antimony and arsenic oxides produce an opaque white glass, first used in Venice to produce an imitation porcelain. White glass was used extensively by Louis Comfort Tiffany to create a range of opalescent, mottled and streaky glasses. Creating stained-glass windows Design The first stage in the production of a window is to make, or acquire from the architect or owners of the building, an accurate template of the window opening that the glass is to fit. The subject matter of the window is determined to suit the location, a particular theme, or the wishes of the patron. A small design called a Vidimus (from Latin "we have seen") is prepared which can be shown to the patron. A scaled model maquette may also be provided. The designer must take into account the design, the structure of the window, the nature and size of the glass available and his or her own preferred technique. A traditional narrative window has panels which relate a story. A figurative window could have rows of saints or dignitaries. Scriptural texts or mottoes are sometimes included and perhaps the names of the patrons or the person to whose memory the window is dedicated. In a window of a traditional type, it is usually left to the discretion of the designer to fill the surrounding areas with borders, floral motifs and canopies. A full-sized cartoon is drawn for every "light" (opening) of the window. 
A small church window might typically have two lights, with some simple tracery lights above. A large window might have four or five lights. The east or west window of a large cathedral might have seven lights in three tiers, with elaborate tracery. In medieval times the cartoon was drawn directly on the surface of a whitewashed table, which was then used as a pattern for cutting, painting and assembling the window. The cartoon is then divided into a patchwork, providing a template for each small glass piece. The exact position of the lead which holds the glass in place is also noted, as it is part of the calculated visual effect. Selecting and painting the glass Each piece of glass is selected for the desired colour and cut to match a section of the template. An exact fit is ensured by "grozing" the edges with a tool which can nibble off small pieces. Details of faces, hair and hands can be painted onto the inner surface of the glass using a special glass paint which contains finely ground lead or copper filings, ground glass, gum arabic and a medium such as wine, vinegar or (traditionally) urine. The art of painting details became increasingly elaborate and reached its height in the early 20th century. From 1300 onwards, artists started using "silver stain" which was made with silver nitrate. It gave a yellow effect ranging from pale lemon to deep orange. It was usually painted onto the outside of a piece of glass, then fired to make it permanent. This yellow was particularly useful for enhancing borders, canopies and haloes, and turning blue glass into green glass. By about 1450, a stain known as "Cousin's rose" was used to enhance flesh tones. In the 16th century, a range of glass stains were introduced, most of them coloured by ground glass particles. They were a form of enamelled glass. Painting on glass with these stains was initially used for small heraldic designs and other details. By the 17th century a style of stained glass had evolved that was no longer dependent upon the skilful cutting of coloured glass into sections. Scenes were painted onto glass panels of square format, like tiles. The colours were then annealed to the glass before the pieces were assembled. A method used for embellishment and gilding is the decoration of one side of each of two pieces of thin glass, which are then placed back to back within the lead came. This allows for the use of techniques such as Angel gilding and Eglomise to produce an effect visible from both sides but not exposing the decorated surface to the atmosphere or mechanical damage. Assembly and mounting Once the glass is cut and painted, the pieces are assembled by slotting them into H-sectioned lead cames. All the joints are then soldered together and the glass pieces are prevented from rattling and the window made weatherproof by forcing a soft oily cement or mastic between the glass and the cames. In modern windows, copper foil is now sometimes used instead of lead. For further technical details, see Came glasswork. Traditionally, when a window was inserted into the window space, iron rods were put across it at various points to support its weight. The window was tied to these rods with lead strips or, more recently, with copper wires. Some very large early Gothic windows are divided into sections by heavy metal frames called ferramenta. This method of support was also favoured for large, usually painted, windows of the Baroque period. History Origins Coloured glass has been produced since ancient times. 
Both the Egyptians and the Romans excelled at the manufacture of small colored glass objects. Phoenicia was important in glass manufacture with its chief centres Sidon, Tyre and Antioch. The British Museum holds two of the finest Roman pieces, the Lycurgus Cup, which is a murky mustard color but glows purple-red to transmitted light, and the cameo glass Portland vase which is midnight blue, with a carved white overlay. In early Christian churches of the 4th and 5th centuries, there are many remaining windows which are filled with ornate patterns of thinly-sliced alabaster set into wooden frames, giving a stained-glass like effect. Evidence of stained-glass windows in churches and monasteries in Britain can be found as early as the 7th century. The earliest known reference dates from 675 AD when Benedict Biscop imported workmen from France to glaze the windows of the monastery of St Peter which he was building at Monkwearmouth. Hundreds of pieces of coloured glass and lead, dating back to the late 7th century, have been discovered here and at Jarrow. In the Middle East, the glass industry of Syria continued during the Islamic period with major centres of manufacture at Raqqa, Aleppo and Damascus and the most important products being highly transparent colourless glass and gilded glass, rather than coloured glass. In Southwest Asia The creation of stained glass in Southwest Asia began in ancient times. One of the region's earliest surviving formulations for the production of colored glass comes from the Assyrian city of Nineveh, dating to the 7th-century BC. The Kitab al-Durra al-Maknuna, attributed to the 8th century alchemist Jābir ibn Hayyān, discusses the production of colored glass in ancient Babylon and Egypt. The Kitab al-Durra al-Maknuna also describes how to create colored glass and artificial gemstones made from high-quality stained glass. The tradition of stained glass manufacture has continued, with mosques, palaces, and public spaces being decorated with stained glass throughout the Islamic world. The stained glass of Islam is generally non-pictorial and of purely geometric design, but may contain both floral motifs and text. Stained glass creation had flourished in Persia (now Iran) during the Safavid dynasty (1501–1736 A.D.), and Zand dynasty (1751–1794 A.D.). In Persia stained glass sash windows are called Orosi windows (or transliterated as Arasi, and Orsi), and were once used for decoration, as well as controlling the incoming sunlight in the hot and semi-arid climate. Medieval glass in Europe Stained glass, as an art form, reached its height in the Middle Ages when it became a major pictorial form used to illustrate the narratives of the Bible to a largely illiterate populace. In the Romanesque and Early Gothic period, from about 950 to 1240, the untraceried windows demanded large expanses of glass which of necessity were supported by robust iron frames, such as may be seen at Chartres Cathedral and at the eastern end of Canterbury Cathedral. As Gothic architecture developed into a more ornate form, windows grew larger, affording greater illumination to the interiors, but were divided into sections by vertical shafts and tracery of stone. This elaboration of form reached its height of complexity in the Flamboyant style in Europe, and windows grew still larger with the development of the Perpendicular style in England and Rayonnant style in France. Integrated with the lofty verticals of Gothic cathedrals and parish churches, glass designs became more daring. 
The circular form, or rose window, developed in France from relatively simple windows with openings pierced through slabs of thin stone to wheel windows, as exemplified by the west front of Chartres Cathedral, and ultimately to designs of enormous complexity, the tracery being drafted from hundreds of different points, such as those at Sainte-Chapelle, Paris and the "Bishop's Eye" at Lincoln Cathedral. While stained glass was widely manufactured, Chartres was the greatest centre of stained glass manufacture, producing glass of unrivalled quality. Renaissance, Reformation and Classical windows Probably the earliest scheme of stained glass windows that was created during the Renaissance was that for Florence Cathedral, devised by Lorenzo Ghiberti. The scheme includes three ocular windows for the dome and three for the facade which were designed from 1405 to 1445 by several of the most renowned artists of this period: Ghiberti, Donatello, Uccello and Andrea del Castagno. Each major ocular window contains a single picture drawn from the Life of Christ or the Life of the Virgin Mary, surrounded by a wide floral border, with two smaller facade windows by Ghiberti showing the martyred deacons, St Stephen and St Lawrence. One of the cupola windows has since been lost, and that by Donatello has lost nearly all of its painted details. In Europe, stained glass continued to be produced; the style evolved from the Gothic to the Classical, which is well represented in Germany, Belgium and the Netherlands, despite the rise of Protestantism. In France, much glass of this period was produced at the Limoges factory, and in Italy at Murano, where stained glass and faceted lead crystal are often coupled together in the same window. The French Revolution brought about the neglect or destruction of many windows in France. Nonetheless, the country still holds the largest set of Renaissance stained glass in its churches, particularly in the regions of Normandy and Champagne where there were vivid ateliers in many cities until the early 17th century with the stained glass painter Linard Gonthier being active in Troyes until 1642. There are 1042 preserved 16th-century windows in the Aube department alone. At the Reformation in England, large numbers of medieval and Renaissance windows were smashed and replaced with plain glass. The Dissolution of the Monasteries under Henry VIII and the injunctions of Thomas Cromwell against "abused images" (the object of veneration) resulted in the loss of thousands of windows. Few remain undamaged; of these the windows in the private chapel at Hengrave Hall in Suffolk are among the finest. With the latter wave of destruction the traditional methods of working with stained glass died, and were not rediscovered in England until the early 19th century. See Stained glass – British glass, 1811–1918 for more details. In the Netherlands a rare scheme of glass has remained intact at Grote Sint-Jan Church, Gouda. The windows, some of which are 18 metres (59 feet) high, date from 1555 to the early 1600s; the earliest is the work of Dirck Crabeth and his brother Wouter. Many of the original cartoons still exist. In Latin America Stained glass was first imported to Latin America during the 17th–18th centuries by Portuguese and Spanish settlers. By the 20th century, many European artists had begun to establish their own studios within Latin America and had started up local production. With these new local studios came inventive techniques and less traditional imagery. 
Examples of these more modern works of art are the Basílica Nuestra Señora de Lourde and the Templo Vótivo de Maipú both located in Chile. Revival in Great Britain and Ireland The Catholic revival in England, gaining force in the early 19th century with its renewed interest in the medieval church, brought a revival of church building in the Gothic style, claimed by John Ruskin to be "the true Catholic style". The architectural movement was led by Augustus Welby Pugin. Many new churches were planted in large towns and many old churches were restored. This brought about a great demand for the revival of the art of stained glass window making. Among the earliest 19th-century English manufacturers and designers were William Warrington and John Hardman of Birmingham, whose nephew, John Hardman Powell, had a commercial eye and exhibited works at the Philadelphia Exhibition of 1876, influencing stained glass in the United States of America. Other manufacturers included William Wailes, Ward and Hughes, Clayton and Bell, Heaton, Butler and Bayne and Charles Eamer Kempe. A Scottish designer, Daniel Cottier, opened firms in Australia and the US. Revival in France In France there was a greater continuity of stained glass production than in England. In the early 19th century most stained glass was made of large panes that were extensively painted and fired, the designs often being copied directly from oil paintings by famous artists. In 1824 the Sèvres porcelain factory began producing stained glass to supply the increasing demand. In France many churches and cathedrals suffered despoliation during the French Revolution. During the 19th century a great number of churches were restored by Viollet-le-Duc. Many of France's finest ancient windows were restored at that time. From 1839 onwards much stained glass was produced that very closely imitated medieval glass, both in the artwork and in the nature of the glass itself. The pioneers were Henri Gèrente and André Lusson. Other glass was designed in a more Classical manner, and characterised by the brilliant cerulean colour of the blue backgrounds (as against the purple-blue of the glass of Chartres) and the use of pink and mauve glass. Revival in Germany, Austria and beyond During the mid- to late 19th century, many of Germany's ancient buildings were restored, and some, such as Cologne Cathedral, were completed in the medieval style. There was a great demand for stained glass. The designs for many windows were based directly on the work of famous engravers such as Albrecht Dürer. Original designs often imitate this style. Much 19th-century German glass has large sections of painted detail rather than outlines and details dependent on the lead. The Royal Bavarian Glass Painting Studio was founded by Ludwig I in 1827. A major firm was Mayer of Munich, which commenced glass production in 1860, and is still operating as Franz Mayer of Munich, Inc.. German stained glass found a market across Europe, in America and Australia. Stained glass studios were also founded in Italy and Belgium at this time. In the Austrian Empire and later Austria-Hungary, one of the leading stained glass artists was Carl Geyling, who founded his studio in 1841. His son would continue the tradition as Carl Geyling's Erben, which still exists today. Carl Geyling's Erben completed numerous stained glass windows for major churches in Vienna and elsewhere, and received an imperial and royal warrant of appointment from emperor Franz Joseph I of Austria. 
Innovations in Britain and Europe Among the most innovative English designers were the Pre-Raphaelites, William Morris (1834–1898) and Edward Burne-Jones (1833–1898), whose work heralds the influential Arts and Crafts Movement, which regenerated stained glass throughout the English-speaking world. Amongst its most important exponents in England was Christopher Whall (1849–1924), author of the classic craft manual 'Stained Glass Work' (published London and New York, 1905), who advocated the direct involvement of designers in the making of their windows. His masterpiece is the series of windows (1898–1910) in the Lady Chapel at Gloucester Cathedral. Whall taught at London's Royal College of Art and Central School of Arts and Crafts: his many pupils and followers included Karl Parsons, Mary Lowndes, Henry Payne, Caroline Townshend, Veronica Whall (his daughter) and Paul Woodroffe. The Scottish artist Douglas Strachan (1875–1950), who was much influenced by Whall's example, developed the Arts & Crafts idiom in an expressionist manner, in which powerful imagery and meticulous technique are masterfully combined. In Ireland, a generation of young artists taught by Whall's pupil Alfred Child at Dublin's Metropolitan School of Art created a distinctive national school of stained glass: its leading representatives were Wilhelmina Geddes, Michael Healy and Harry Clarke. Art Nouveau or Belle Epoque stained glass design flourished in France, and Eastern Europe, where it can be identified by the use of curving, sinuous lines in the lead, and swirling motifs. In France it is seen in the work of Francis Chigot of Limoges. In Britain it appears in the refined and formal leadlight designs of Charles Rennie Mackintosh. Innovations in the United States J&R Lamb Studios, established in 1857 in New York City, was the first major decorative arts studio in the United States and for many years a major producer of ecclesiastical stained glass. Notable American practitioners include John La Farge (1835–1910), who invented opalescent glass and for which he received a U.S. patent on 24 February 1880, and Louis Comfort Tiffany (1848–1933), who received several patents for variations of the same opalescent process in November of the same year and he used the copper foil method as an alternative to lead in some windows, lamps and other decorations. Sanford Bray of Boston patented the use of copper foil in stained glass in 1886, However, a reaction against the aesthetics and technique of opalescent windows - led initially by architects such as Ralph Adams Cram - led to a rediscovery of traditional stained glass in the early 1900s. Charles J. Connick (1875–1945), who founded his Boston studio in 1913, was profoundly influenced by his study of medieval stained glass in Europe and by the Arts & Crafts philosophy of Englishman Christopher Whall. Connick created hundreds of windows throughout the US, including major glazing schemes at Princeton University Chapel (1927-9) and at Pittsburgh's Heinz Memorial Chapel (1937-8). Other American artist-makers who espoused a medieval-inspired idiom included Nicola D'Ascenzo of Philadelphia, Wilbur Burnham and Reynolds, Francis & Rohnstock of Boston and Henry Wynd Young and J. Gordon Guthrie of New York. 20th and 21st centuries Many 19th-century firms failed early in the 20th century as the Gothic movement was superseded by newer styles. At the same time there were also some interesting developments where stained glass artists took studios in shared facilities. 
Examples include the Glass House in London set up by Mary Lowndes and Alfred J. Drury and An Túr Gloine in Dublin, which was run by Sarah Purser and included artists such as Harry Clarke. A revival occurred in the middle of the century because of a desire to restore thousands of church windows throughout Europe destroyed as a result of World War II bombing. German artists led the way. Much work of the period is mundane and often was not made by its designers, but industrially produced. Other artists sought to transform an ancient art form into a contemporary one, sometimes using traditional techniques while exploiting the medium of glass in innovative ways and in combination with different materials. The use of slab glass, a technique known as dalle de verre, where the glass is set in concrete or epoxy resin, was a 20th-century innovation credited to Jean Gaudin and brought to the UK by Pierre Fourmaintraux. One of the most prolific glass artists using this technique was the Benedictine monk Dom Charles Norris OSB of Buckfast Abbey. Gemmail, a technique developed by the French artist Jean Crotti in 1936 and perfected in the 1950s, is a type of stained glass where adjacent pieces of glass are overlapped without using lead cames to join the pieces, allowing for greater diversity and subtlety of colour. Many famous works by late 19th- and early 20th-century painters, notably Picasso, have been reproduced in gemmail. A major exponent of this technique is the German artist Walter Womacka. Among the early well-known 20th-century artists who experimented with stained glass as an Abstract art form were Theo van Doesburg and Piet Mondrian. In the 1960s and 1970s the Expressionist painter Marc Chagall produced designs for many stained glass windows that are intensely coloured and crammed with symbolic details. Important 20th-century stained glass artists include John Hayward, Douglas Strachan, Ervin Bossanyi, Louis Davis, Wilhelmina Geddes, Karl Parsons, John Piper, Patrick Reyntiens, Johannes Schreiter, Brian Clarke, Paul Woodroffe, Jean René Bazaine at Saint Séverin, Sergio de Castro at Couvrechef- La Folie (Caen), Hamburg-Dulsberg and Romont (Switzerland), and the Loire Studio of Gabriel Loire at Chartres. The west windows of England's Manchester Cathedral, by Tony Hollaway, are some of the most notable examples of symbolic work. In Germany, stained glass development continued with the inter-war work of Johan Thorn Prikker and Josef Albers, and the post-war achievements of Joachim Klos, Johannes Schreiter and Ludwig Shaffrath. This group of artists, who advanced the medium through the abandonment of figurative designs and painting on glass in favour of a mix of biomorphic and rigorously geometric abstraction, and the calligraphic non-functional use of leads, are described as having produced "the first authentic school of stained glass since the Middle Ages". The works of Ludwig Schaffrath demonstrate the late 20th-century trends in the use of stained glass for architectural purposes, filling entire walls with coloured and textured glass. In the 1970s young British stained-glass artists such as Brian Clarke were influenced by the large scale and abstraction in German twentieth-century glass. In the UK, the professional organisation for stained glass artists has been the British Society of Master Glass Painters, founded in 1921. Since 1924 the BSMGP has published an annual journal, The Journal of Stained Glass. 
It continues to be Britain's only organisation devoted exclusively to the art and craft of stained glass. From the outset, its chief objectives have been to promote and encourage high standards in stained glass painting and staining, to act as a locus for the exchange of information and ideas within the stained glass craft and to preserve the invaluable stained glass heritage of Britain. See www.bsmgp.org.uk for a range of stained glass lectures, conferences, tours, portfolios of recent stained glass commissions by members, and information on courses and the conservation of stained glass. Back issues of The Journal of Stained Glass are listed and there is a searchable index for stained glass articles, an invaluable resource for stained glass researchers. After the First World War, stained glass window memorials were a popular choice among wealthier families, examples can be found in churches across the UK. In the United States, there is a 100-year-old trade organization, The Stained Glass Association of America, whose purpose is to function as a publicly recognized organization to assure survival of the craft by offering guidelines, instruction and training to craftspersons. The SGAA also sees its role as defending and protecting its craft against regulations that might restrict its freedom as an architectural art form. The current president is Kathy Bernard. Today there are academic establishments that teach the traditional skills. One of these is Florida State University's Master Craftsman Program, which recently completed a high stained-glass windows, designed by Robert Bischoff, the program's director, and Jo Ann, his wife and installed to overlook Bobby Bowden Field at Doak Campbell Stadium. The Roots of Knowledge installation at Utah Valley University in Orem, Utah is long and has been compared to those in several European cathedrals, including the Cologne Cathedral in Germany, Sainte-Chapelle in France, and York Minster in England. There are also contemporary stained glass artists in the US who are creating stained glass windows based on grids, rather than recognizable images. Combining ancient and modern traditions Buildings incorporating stained glass windows Churches Stained glass windows were commonly used in churches for decorative and informative purposes. Many windows are donated to churches by members of the congregation as memorials of loved ones. For more on the use of stained glass to depict religious subjects, see Poor Man's Bible. Important examples Cathedral of Chartres, in France, 11th to 13th-century glass Canterbury Cathedral, in England, 12th to 15th century plus 19th- and 20th-century glass York Minster, in England, 11th to 15th-century glass Sainte-Chapelle, in Paris, 13th and 14th-century glass Bourges Cathedral in France, 13th to 16th-century glass Florence Cathedral, Italy, 15th-century glass designed by Uccello, Donatello and Ghiberti Janskerk (Gouda), The Netherlands, date from 1555 to the early 1600s; the earliest is the work of Dirck Crabeth and his brother Wouter. St. Andrew's Cathedral, Sydney, Australia, early complete cycle of 19th-century glass, Hardman of Birmingham. 
Fribourg Cathedral, Switzerland, complete cycle of glass 1896–1936, by Józef Mehoffer Coventry Cathedral, England, mid-20th-century glass by various designers, the large baptistry window being by John Piper Brown Memorial Presbyterian Church, extensive collection of windows by Louis Comfort Tiffany Synagogues In addition to Christian churches, stained glass windows have been incorporated into Jewish temple architecture for centuries. Jewish communities in the United States saw this emergence in the mid-19th century, with such notable examples as the sanctuary depiction of the Ten Commandments in New York's Congregation Anshi Chesed. From the mid-20th century to the present, stained glass windows have been a ubiquitous feature of American synagogue architecture. Styles and themes for synagogue stained glass artwork are as diverse as their church counterparts. As with churches, synagogue stained glass windows are often dedicated by member families in exchange for major financial contributions to the institution. Places of worship Mausolea Mausolea, whether for general community use or for private family use, may employ stained glass as a comforting entry for natural light, for memorialization, or for display of religious imagery. Houses Stained glass windows in houses were particularly popular in the Victorian era and many domestic examples survive. In their simplest form they typically depict birds and flowers in small panels, often surrounded with machine-made cathedral glass which, despite what the name suggests, is pale-coloured and textured. Some large homes have splendid examples of secular pictorial glass. Many small houses of the 19th and early 20th centuries have leadlight windows. Prairie style homes The houses of Frank Lloyd Wright Public and commercial buildings Stained glass has often been used as a decorative element in public buildings, initially in places of learning, government or justice but increasingly in other public and commercial places such as banks, retailers and railway stations. Public houses in some countries make extensive use of stained glass and leaded lights to create a comfortable atmosphere and retain privacy. Sculpture See also Architectural glass Architecture of cathedrals and great churches Art Nouveau glass Autonomous stained glass Beveled glass British and Irish stained glass (1811–1918) English Gothic stained glass windows French Gothic stained glass windows Float glass Glass beadmaking List of stained glass windows in the Janskerk, Gouda Sagrada (board game) Stained glass conservation Studio glass Suncatcher Venetian glass Window References "Historic England" = Practical Building Conservation: Glass and glazing, by Historic England, 2011, Ashgate Publishing, Ltd., , 9780754645573, google books Further reading Theophilus (ca 1100). On Divers Arts, translated from Latin by John G. Hawthorne and Cyril Stanley Smith, Dover, Martin Harrison, Victorian Stained Glass, Barrie & Jenkins, 1980 The Journal of Stained Glass, Burne-Jones Special Issue, Vol. XXXV, 2011 The Journal of Stained Glass, Scotland Issue, Vol. XXX, 2006 The Journal of Stained Glass, Special Issue, The Stained Glass Collection of Sir John Soane's Museum, Vol. XXVII, 2003 The Journal of Stained Glass, America Issue, Vol. XXVIII, 2004 Brian Clarke (editor) Architectural Stained Glass (1979). Johannes Schreiter, Martin Harrison, Ludwig Schaffrath, John Piper, and Patrick Reyntiens. Architectural Record Books. 
London: McGraw-Hill Education, 1979 Peter Cormack, 'Arts & Crafts Stained Glass', Yale University Press, 2015 Caroline Swash, 'The 100 Best Stained Glass Sites in London', Malvern Arts Press, 2015 Nicola Gordon Bowe, 'Wilhelmina Geddes, Life and Work', Four Courts Press Lucy Costigan and Michael Cullen (2010). Strangest Genius: The Stained Glass of Harry Clarke, The History Press, Dublin, Elizabeth Morris (1993). Stained and Decorative Glass, Tiger Books, Sarah Brown (1994). Stained Glass- an Illustrated History, Bracken Books, Painton Cowen (1985). A Guide to Stained Glass in Britain, Michael Joseph, Husband, TB (2000). The Luminous Image: Painted Glass Roundels in the Lowlands, 1480-1560, Metropolitan Museum of Art Lawrence Lee, George Seddon, and Francis Stephens (1976). Stained Glass, Mitchell Beazley, Simon Jenkins (2000). England's Thousand Best Churches, Penguin, Robert Eberhard. Database: Church Stained Glass Windows . Cliff and Monica Robinson. Database: Buckinghamshire Stained Glass . Stained Glass Association of America. History of Stained Glass . Robert Kehlmann (1992). 20th Century Stained Glass: A New Definition, Kyoto Shoin Co., Ltd., Kyoto, Kisky, Hans (1959). 100 Jahre Rheinische Glasmalerei, Neuss : Verl. Gesellschaft für Buchdruckerei OCLC 632380232 Robert Sowers (1954). The Lost Art, George Wittenborn Inc., New York, OCLC 1269795 Robert Sowers (1965). Stained Glass: An Architectural Art, Universe Books, Inc., New York, OCLC 21650951 Robert Sowers (1981). The Language of Stained Glass, Timber Press, Forest Grove, Oregon, Conrad Rudolph (2011). 'Inventing the Exegetical Stained-Glass Window: Suger, Hugh, and a New Elite Art', Art Bulletin, 93, 399–422 Conrad Rudolph (2015). 'The Parabolic Discourse Window and the Canterbury Roll: Social Change and the Assertion of Elite Status at Canterbury Cathedral', Oxford Art Journal, 38, 1–19 External links BSMGP | The home of British Stained Glass SGAA Sourcebook Find a Studio – The Stained Glass Association of America Preservation of Stained Glass Church Stained Glass Window Database recorded by Robert Eberhard , covering ≈ 2800 churches in the southeast of England Institute for Stained Glass in Canada , over 10,000 photos; a multi-year photographic survey of Canada's stained glass from many countries; 1856 to present The Stained Glass Museum (Ely, England) Vitromusée Romont (Romont (FR), Switzerland) Stained glass workshops (UK) Stained glass guide (UK) Gloine – Stained glass in the Church of Ireland Research carried out by David Lawrence on behalf of the Representative Church Body of the Church of Ireland, partially funded by the Heritage Council Stained-glass windows by Sergio de Castro in France, Germany and Switzerland Glass architecture Glass production History of glass Windows Decorative arts
Stained glass
[ "Materials_science", "Engineering" ]
9,878
[ "Glass architecture", "Glass engineering and science", "Glass production" ]
143,021
https://en.wikipedia.org/wiki/Centers%20of%20gravity%20in%20non-uniform%20fields
In physics, a center of gravity of a material body is a point that may be used for a summary description of gravitational interactions. In a uniform gravitational field, the center of mass serves as the center of gravity. This is a very good approximation for smaller bodies near the surface of Earth, so there is no practical need to distinguish "center of gravity" from "center of mass" in most applications, such as engineering and medicine. In a non-uniform field, gravitational effects such as potential energy, force, and torque can no longer be calculated using the center of mass alone. In particular, a non-uniform gravitational field can produce a torque on an object, even about an axis through the center of mass. The concept of a center of gravity seeks to account for this effect. Formally, a center of gravity is an application point of the resultant gravitational force on the body. Such a point may not exist, and if it exists, it is not unique. One can further define a unique center of gravity by approximating the field as either parallel or spherically symmetric. The concept of a center of gravity as distinct from the center of mass is rarely used in applications, even in celestial mechanics, where non-uniform fields are important. Since the center of gravity depends on the external field, its motion is harder to determine than the motion of the center of mass. The common method to deal with gravitational torques is a field theory. Center of mass One way to define the center of gravity of a body is as the unique point in the body, if it exists, that satisfies the following requirement: there is no torque about the point for any positioning of the body in the field of force in which it is placed. This center of gravity exists only when the force is uniform, in which case it coincides with the center of mass. This approach dates back to Archimedes. Centers of gravity in a field When a body is affected by a non-uniform external gravitational field, one can sometimes define a center of gravity relative to that field that will act as a point where the gravitational force is applied. Textbooks such as The Feynman Lectures on Physics characterize the center of gravity as a point about which there is no torque. In other words, the center of gravity is a point of application for the resultant force. Under this formulation, the center of gravity r_cg is defined as a point that satisfies the equation r_cg × F = τ, where F and τ are the total force and torque on the body due to gravity. One complication concerning r_cg is that its defining equation is not generally solvable. If F and τ are not orthogonal, then there is no solution; the force of gravity does not have a resultant and cannot be replaced by a single force at any point. There are some important special cases where F and τ are guaranteed to be orthogonal, such as if all forces lie in a single plane or are aligned with a single point. If the equation is solvable, there is another complication: its solutions are not unique. Instead, there are infinitely many solutions; the set of all solutions is known as the line of action of the force. This line is parallel to the weight F. In general, there is no way to choose a particular point as the unique center of gravity. A single point may still be chosen in some special cases, such as if the gravitational field is parallel or spherically symmetric. These cases are considered below. Parallel fields Some of the inhomogeneity in a gravitational field may be modeled by a variable but parallel field: G(r) = g(r) n, where n is some constant unit vector. 
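Before the parallel-field approximation is developed in the next passage, a short numerical sketch may help make the defining equation and the line of action tangible. The masses, positions and field profile below are arbitrary assumed values, not taken from the text: the script sums the per-particle forces and torques about the origin, checks the orthogonality condition that makes r_cg × F = τ solvable, and then reports one particular solution (the full solution set is that point plus any multiple of F).

```python
# Numerical sketch (assumed values): the defining equation r_cg x F = tau.
# Three point masses in a parallel but spatially varying field G(r) = g(z) * n,
# with n pointing in -z.  All numbers are illustrative, not from the article.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s*x for x in a)

# (mass in kg, position in m) of each particle of an arbitrary example body
particles = [(2.0, (0.0, 0.0, 0.0)),
             (1.0, (1.0, 0.0, 0.5)),
             (3.0, (0.5, 1.0, 1.5))]

def g_strength(r):
    """Field magnitude that weakens with height z (assumed toy profile)."""
    return 9.81 / (1.0 + 0.1 * r[2])

n_hat = (0.0, 0.0, -1.0)                      # constant field direction

F = (0.0, 0.0, 0.0)                           # total force
tau = (0.0, 0.0, 0.0)                         # total torque about the origin
for m, r in particles:
    f_i = scale(m * g_strength(r), n_hat)     # gravitational force on this particle
    F = add(F, f_i)
    tau = add(tau, cross(r, f_i))

print("F       =", F)
print("tau     =", tau)
print("F . tau =", dot(F, tau))               # ~0 here, so a resultant exists

# One solution of r x F = tau; the whole line of action is r0 + s*F.
r0 = scale(1.0 / dot(F, F), cross(F, tau))
print("one center of gravity:", r0)
print("residual r0 x F - tau:", add(cross(r0, F), scale(-1.0, tau)))
```

Because every per-particle force in this example is parallel to the same unit vector, F and τ come out orthogonal automatically, which is one of the special cases noted above; in a field that is genuinely non-parallel the printed dot product would generally be nonzero and no single application point would exist.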
Although a non-uniform gravitational field cannot be exactly parallel, this approximation can be valid if the body is sufficiently small. The center of gravity may then be defined as a certain weighted average of the locations of the particles composing the body. Whereas the center of mass averages over the mass of each particle, the center of gravity averages over the weight of each particle: r_cg = (1/W) Σ w_i r_i, where w_i is the (scalar) weight of the i-th particle and W is the (scalar) total weight of all the particles. This equation always has a unique solution, and in the parallel-field approximation, it is compatible with the torque requirement. A common illustration concerns the Moon in the field of the Earth. Using the weighted-average definition, the Moon has a center of gravity that is lower (closer to the Earth) than its center of mass, because its lower portion is more strongly influenced by the Earth's gravity. This eventually led to the Moon always showing the same face, a phenomenon known as tidal locking. Spherically symmetric fields If the external gravitational field is spherically symmetric, then it is equivalent to the field of a point mass M at the center of symmetry c. In this case, the center of gravity r_cg can be defined as the point at which the total force on the body is given by Newton's Law: F = -G m M (r_cg - c) / |r_cg - c|^3, where G is the gravitational constant and m is the mass of the body. As long as the total force is nonzero, this equation has a unique solution, and it satisfies the torque requirement. A convenient feature of this definition is that if the body is itself spherically symmetric, then r_cg lies at its center of mass. In general, as the distance between c and the body increases, the center of gravity approaches the center of mass. Another way to view this definition is to consider the gravitational field of the body; then r_cg is the apparent source of gravitational attraction for an observer located at c. For this reason, r_cg is sometimes referred to as the center of gravity of the body relative to the point c. Usage The centers of gravity defined above are not fixed points on the body; rather, they change as the position and orientation of the body changes. This characteristic makes the center of gravity difficult to work with, so the concept has little practical use. When it is necessary to consider a gravitational torque, it is easier to represent gravity as a force acting at the center of mass, plus an orientation-dependent couple. The latter is best approached by treating the gravitational potential as a field. Notes References Classical mechanics Gravity
Centers of gravity in non-uniform fields
[ "Physics", "Mathematics" ]
1,210
[ "Point (geometry)", "Geometric centers", "Classical mechanics", "Mechanics", "Symmetry" ]
143,133
https://en.wikipedia.org/wiki/Center%20of%20pressure%20%28fluid%20mechanics%29
In fluid mechanics, the center of pressure is the point on a body where a single force acting at that point can represent the total effect of the pressure field acting on the body. The total force vector acting at the center of pressure is the surface integral of the pressure vector field across the surface of the body. The resultant force and center of pressure location produce an equivalent force and moment on the body as the original pressure field. Pressure fields occur in both static and dynamic fluid mechanics. Specification of the center of pressure, the reference point from which the center of pressure is referenced, and the associated force vector allows the moment generated about any point to be computed by a translation from the reference point to the desired new point. It is common for the center of pressure to be located on the body, but in fluid flows it is possible for the pressure field to exert a moment on the body of such magnitude that the center of pressure is located outside the body. Hydrostatic example (dam) Since the forces of water on a dam are hydrostatic forces, they vary linearly with depth. The total force on the dam is then the integral of the pressure multiplied by the width of the dam as a function of the depth. The center of pressure is located at the centroid of the triangular shaped pressure field from the top of the water line. The hydrostatic force and tipping moment on the dam about some point can be computed from the total force and center of pressure location relative to the point of interest. Historical usage for sailboats Center of pressure is used in sailboat design to represent the position on a sail where the aerodynamic force is concentrated. The relationship of the aerodynamic center of pressure on the sails to the hydrodynamic center of pressure (referred to as the center of lateral resistance) on the hull determines the behavior of the boat in the wind. This behavior is known as the "helm" and is either a weather helm or lee helm. A slight amount of weather helm is thought by some sailors to be a desirable situation, both from the standpoint of the "feel" of the helm, and the tendency of the boat to head slightly to windward in stronger gusts, to some extent self-feathering the sails. Other sailors disagree and prefer a neutral helm. The fundamental cause of "helm", be it weather or lee, is the relationship of the center of pressure of the sail plan to the center of lateral resistance of the hull. If the center of pressure is astern of the center of lateral resistance, a weather helm, the tendency of the vessel is to want to turn into the wind. If the situation is reversed, with the center of pressure forward of the center of lateral resistance of the hull, a "lee" helm will result, which is generally considered undesirable, if not dangerous. Too much of either helm is not good, since it forces the helmsman to hold the rudder deflected to counter it, thus inducing extra drag beyond what a vessel with neutral or minimal helm would experience. Aircraft aerodynamics A stable configuration is desirable not only in sailing, but in aircraft design as well. Aircraft design therefore borrowed the term center of pressure. And like a sail, a rigid non-symmetrical airfoil not only produces lift, but a moment. The center of pressure of an aircraft is the point where all of the aerodynamic pressure field may be represented by a single force vector with no moment. 
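Returning to the hydrostatic dam example above: because gauge pressure grows linearly with depth, the resultant force on a vertical rectangular face of height H and width w is ρgH²w/2 and acts at a depth of 2H/3, the centroid of the triangular pressure distribution. The sketch below checks those closed-form results against a direct numerical integration; the depth, width and density are assumed example figures, not values given in the text.

```python
# Hydrostatic force and center of pressure on a vertical rectangular dam face.
# Gauge pressure at depth h: p(h) = rho * g * h (atmospheric pressure ignored).
# Assumed example numbers; the points of interest are the closed-form results
# F = 1/2 * rho * g * H**2 * w and a center of pressure at depth 2H/3.

RHO = 1000.0   # kg/m^3, fresh water
G = 9.81       # m/s^2
H = 20.0       # m, water depth against the face (assumed)
W = 50.0       # m, width of the face (assumed)

# Closed-form results for the triangular pressure distribution.
force_exact = 0.5 * RHO * G * H**2 * W          # N
depth_cp_exact = 2.0 * H / 3.0                  # m below the free surface

# Numerical check: integrate pressure over horizontal strips of thickness dh.
N = 100_000
dh = H / N
force = 0.0
moment = 0.0                                    # moment about the free surface
for k in range(N):
    h = (k + 0.5) * dh                          # depth of the strip's midpoint
    dF = RHO * G * h * W * dh                   # force on the strip
    force += dF
    moment += h * dF
depth_cp = moment / force                       # centroid of the pressure field

print(f"total force             : {force_exact:.3e} N (exact)  {force:.3e} N (numeric)")
print(f"center of pressure depth: {depth_cp_exact:.3f} m (exact)  {depth_cp:.3f} m (numeric)")
```

The tipping moment about any point of interest, such as the toe of the dam, then follows by translating the resultant from the center of pressure to that point, as described above.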
A similar idea is the aerodynamic center which is the point on an airfoil where the pitching moment produced by the aerodynamic forces is constant with angle of attack. The aerodynamic center plays an important role in analysis of the longitudinal static stability of all flying machines. It is desirable that when the pitch angle and angle of attack of an aircraft are disturbed (by, for example wind shear/vertical gust) that the aircraft returns to its original trimmed pitch angle and angle of attack without a pilot or autopilot changing the control surface deflection. For an aircraft to return towards its trimmed attitude, without input from a pilot or autopilot, it must have positive longitudinal static stability. Missile aerodynamics Missiles typically do not have a preferred plane or direction of maneuver and thus have symmetric airfoils. Since the center of pressure for symmetric airfoils is relatively constant for small angle of attack, missile engineers typically speak of the complete center of pressure of the entire vehicle for stability and control analysis. In missile analysis, the center of pressure is typically defined as the center of the additional pressure field due to a change in the angle of attack off of the trim angle of attack. For unguided rockets the trim position is typically zero angle of attack and the center of pressure is defined to be the center of pressure of the resultant flow field on the entire vehicle resulting from a very small angle of attack (that is, the center of pressure is the limit as angle of attack goes to zero). For positive stability in missiles, the total vehicle center of pressure defined as given above must be further from the nose of the vehicle than the center of gravity. In missiles at lower angles of attack, the contributions to the center of pressure are dominated by the nose, wings, and fins. The normalized normal force coefficient derivative with respect to the angle of attack of each component multiplied by the location of the center of pressure can be used to compute a centroid representing the total center of pressure. The center of pressure of the added flow field is behind the center of gravity and the additional force "points" in the direction of the added angle of attack; this produces a moment that pushes the vehicle back to the trim position. In guided missiles where the fins can be moved to trim the vehicles in different angles of attack, the center of pressure is the center of pressure of the flow field at that angle of attack for the undeflected fin position. This is the center of pressure of any small change in the angle of attack (as defined above). Once again for positive static stability, this definition of center of pressure requires that the center of pressure be further from the nose than the center of gravity. This ensures that any increased forces resulting from increased angle of attack results in increased restoring moment to drive the missile back to the trimmed position. In missile analysis, positive static margin implies that the complete vehicle makes a restoring moment for any angle of attack from the trim position. Movement of center of pressure for aerodynamic fields The center of pressure on a symmetric airfoil typically lies close to 25% of the chord length behind the leading edge of the airfoil. (This is called the "quarter-chord point".) For a symmetric airfoil, as angle of attack and lift coefficient change, the center of pressure does not move. 
It remains around the quarter-chord point for angles of attack below the stalling angle of attack. The role of center of pressure in the control characterization of aircraft takes a different form than in missiles. On a cambered airfoil the center of pressure does not occupy a fixed location. For a conventionally cambered airfoil, the center of pressure lies a little behind the quarter-chord point at maximum lift coefficient (large angle of attack), but as lift coefficient reduces (angle of attack reduces) the center of pressure moves toward the rear. When the lift coefficient is zero an airfoil is generating no lift but a conventionally cambered airfoil generates a nose-down pitching moment, so the location of the center of pressure is an infinite distance behind the airfoil. For a reflex-cambered airfoil, the center of pressure lies a little ahead of the quarter-chord point at maximum lift coefficient (large angle of attack), but as lift coefficient reduces (angle of attack reduces) the center of pressure moves forward. When the lift coefficient is zero an airfoil is generating no lift but a reflex-cambered airfoil generates a nose-up pitching moment, so the location of the center of pressure is an infinite distance ahead of the airfoil. This direction of movement of the center of pressure on a reflex-cambered airfoil has a stabilising effect. The way the center of pressure moves as lift coefficient changes makes it difficult to use the center of pressure in the mathematical analysis of longitudinal static stability of an aircraft. For this reason, it is much simpler to use the aerodynamic center when carrying out a mathematical analysis. The aerodynamic center occupies a fixed location on an airfoil, typically close to the quarter-chord point. The aerodynamic center is the conceptual starting point for longitudinal stability. The horizontal stabilizer contributes extra stability and this allows the center of gravity to be a small distance aft of the aerodynamic center without the aircraft reaching neutral stability. The position of the center of gravity at which the aircraft has neutral stability is called the neutral point. See also Aerodynamic center Aerodynamic force Aeroprediction Center of lateral resistance Longitudinal static stability Zero moment point Notes References Anderson, John D. (1999), Aircraft Performance and Design, McGraw-Hill. Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London. Aircraft aerodynamics Fluid dynamics Pressure Vehicle dynamics
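The hydrostatic dam example discussed above reduces to a short calculation: pressure grows linearly with depth, the resultant force is the area of the triangular pressure distribution times the dam width, and the center of pressure sits at the centroid of that triangle, two-thirds of the water depth below the surface. The sketch below is illustrative only; the water depth, dam width and fluid density are assumed values, not figures from the article.

```python
# Hydrostatic force and center of pressure on a vertical dam face.
# p(h) = rho * g * h, so the resultant force is the area of the triangular
# pressure distribution times the dam width, and the center of pressure
# lies at the centroid of that triangle, 2/3 of the depth below the surface.

rho = 1000.0   # water density, kg/m^3 (assumed)
g = 9.81       # gravitational acceleration, m/s^2
depth = 20.0   # water depth against the dam, m (assumed)
width = 50.0   # dam width, m (assumed)

total_force = 0.5 * rho * g * depth**2 * width   # N
center_of_pressure = 2.0 / 3.0 * depth           # m below the water line

# Tipping moment about the base of the dam: force times its lever arm
# (the height of the center of pressure above the base).
moment_about_base = total_force * (depth - center_of_pressure)

print(f"Total hydrostatic force: {total_force / 1e6:.1f} MN")
print(f"Center of pressure: {center_of_pressure:.1f} m below the water line")
print(f"Tipping moment about the base: {moment_about_base / 1e6:.0f} MN*m")
```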
Center of pressure (fluid mechanics)
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,839
[ "Point (geometry)", "Chemical engineering", "Geometric centers", "Piping", "Symmetry", "Fluid dynamics" ]
27,327,833
https://en.wikipedia.org/wiki/Nanocellulose
Nanocellulose is a term referring to a family of cellulosic materials that have at least one of their dimensions at the nanoscale. Examples of nanocellulosic materials are microfibrillated cellulose, cellulose nanofibers and cellulose nanocrystals. Nanocellulose may be obtained from natural cellulose fibers through a variety of production processes. This family of materials possesses interesting properties suitable for a wide range of potential applications. Terminology Microfibrillated cellulose Microfibrillated cellulose (MFC) is a type of nanocellulose that is more heterogeneous than cellulose nanofibers or nanocrystals, as it contains a mixture of nano- and micron-scale particles. The term is sometimes misused to refer to cellulose nanofibers instead. Cellulose nanofibers Cellulose nanofibers (CNF), also called nanofibrillated cellulose (NFC), are nanosized cellulose fibrils with a high aspect ratio (length to width ratio). Typical fibril widths are 5–20 nanometers, with a wide range of lengths, typically several micrometers. The fibrils can be isolated from natural cellulose, generally wood pulp, through high-pressure, high-temperature and high-velocity impact homogenization, grinding or microfluidization (see manufacture below). Cellulose nanocrystals Cellulose nanocrystals (CNCs), or nanocrystalline cellulose (NCC), are highly crystalline, rod-like nanoparticles. They are usually covered by negatively charged groups that render them colloidally stable in water. They are typically shorter than CNFs, with a typical length of 100 to 1000 nanometers. Bacterial nanocellulose Some cellulose-producing bacteria have also been used to produce nanocellulosic materials, which are then referred to as bacterial nanocellulose. The most common examples are Medusomyces gisevii (the bacterium involved in the making of kombucha) and Komagataeibacter xylinus (involved in the fabrication of nata de coco); see bacterial cellulose for more details. This naming distinction might arise from the very peculiar morphology of these materials compared to the more traditional ones made of wood or cotton cellulose. In practice, bacterial nanocellulosic materials are often larger than their wood or cotton counterparts. History The discovery of nanocellulosic materials can be traced back to late 1940s studies on the hydrolysis of cellulose fibers. Eventually it was noticed that cellulose hydrolysis seemed to occur preferentially at some disordered intercrystalline portions of the fibers. This led to the isolation of colloidally stable, highly crystalline nanorod particles. These particles were first referred to as micelles, before being given multiple names including cellulose nanocrystals (CNCs), nanocrystalline cellulose (NCC), or cellulose (nano)whiskers, though this last term is less used today. Later studies by O. A. Battista showed that in milder hydrolysis conditions, the crystalline nanorods stay aggregated as micron-sized objects. This material was later referred to as microcrystalline cellulose (MCC) and commercialised under the name Avicel by FMC Corporation. Microfibrillated cellulose (MFC) was discovered later, in the 1980s, by Turbak, Snyder and Sandberg at the ITT Rayonier labs in Shelton, Washington. This terminology was used to describe a gel-like material prepared by passing wood pulp through a Gaulin-type milk homogenizer at high temperatures and high pressures followed by ejection impact against a hard surface. In later work, F. W. 
Herrick at ITT Rayonier Eastern Research Division (ERD) Lab in Whippany also published work on making a dry powder form of the gel. Rayonier, as a company, never pursued scale-up and gave free license to whoever wanted to pursue this new use for cellulose. Instead, Turbak et al. pursued 1) finding new uses for MFC, including its use as a thickener and binder in foods, cosmetics, paper formation, textiles, nonwovens, etc., and 2) evaluating swelling and other techniques for lowering the energy requirements of MFC production. The first MFC pilot production plant was established in 2010 by Innventia AB (Sweden). Manufacture Cellulose sources Nanocellulose materials can be prepared from any natural cellulose source including wood, cotton, agricultural or household wastes, algae, bacteria or tunicates. Wood, in the form of wood pulp, is currently the most commonly used starting material for the industrial production of nanocellulosic materials. Nanocellulose fibrils Nanocellulose fibrils (MFC and CNFs) may be isolated from the cellulose fibers using mechanical methods that expose the fibers to high shear forces, delaminating them into nano-fibers. For this purpose, high-pressure homogenizers, grinders or microfluidizers can be used. This process consumes very large amounts of energy, and values over 30 MWh/tonne are not uncommon. To address this problem, enzymatic/mechanical pre-treatments and the introduction of charged groups (for example through carboxymethylation or TEMPO-mediated oxidation) are sometimes used. These pre-treatments can decrease energy consumption below 1 MWh/tonne. "Nitro-oxidation" has been developed to prepare carboxycellulose nanofibers directly from raw plant biomass. Owing to fewer processing steps to extract nanocellulose, the nitro-oxidation method has been found to be a cost-effective, less chemically oriented and efficient method to extract carboxycellulose nanofibers. Functionalized nanofibers obtained using nitro-oxidation have been found to be an excellent substrate to remove heavy metal ion impurities such as lead, cadmium, and uranium. A chemo-mechanical process for production of nanocellulose from cotton linters has been demonstrated with a capacity of 10 kg per day. Cellulose nanocrystals Cellulose nanocrystals (CNCs) are formed by the acid hydrolysis of native cellulose fibers, most commonly using sulfuric or hydrochloric acid. Disordered sections of native cellulose are hydrolysed, and after careful timing, the remaining crystalline sections can be retrieved from the acid solution by centrifugation and dialysis against water. Their final dimensions depend on the cellulose source, its history, the hydrolysis conditions and the purification procedures. CNCs are commercialised by various companies that use different sources and processes, leading to a range of available products. Other cellulose-based nanoparticles Spherical carboxycellulose nanoparticles prepared by nitric acid–phosphoric acid treatment are stable in dispersion in their non-ionic form. Structure and properties Dimensions and crystallinity The ultrastructure of nanocellulose derived from various sources has been extensively studied. 
Techniques such as transmission electron microscopy (TEM), scanning electron microscopy (SEM), atomic force microscopy (AFM), wide-angle X-ray scattering (WAXS), small-incidence-angle X-ray diffraction and solid-state 13C cross-polarization magic angle spinning (CP/MAS) nuclear magnetic resonance (NMR) spectroscopy have been used to characterize the morphology of (typically dried) nanocellulose. A combination of microscopic techniques with image analysis can provide information on fibril widths, but it is more difficult to determine fibril lengths because of entanglements and difficulties in identifying both ends of individual nanofibrils. Also, nanocellulose suspensions may not be homogeneous and can consist of various structural components, including cellulose nanofibrils and nanofibril bundles. In a study of enzymatically pre-treated nanocellulose fibrils in suspension, the size and size distribution were established using cryo-TEM. The fibrils were found to be rather monodisperse, mostly with a diameter of ca. 5 nm, although occasionally thicker fibril bundles were present. By combining ultrasonication with an "oxidation pretreatment", cellulose microfibrils with a lateral dimension below 1 nm have been observed by AFM. The lower end of the thickness dimension is around 0.4 nm, which is related to the thickness of a cellulose monolayer sheet. Aggregate widths can be determined by CP/MAS NMR developed by Innventia AB, Sweden, which has also been demonstrated to work for nanocellulose (enzymatic pre-treatment). An average width of 17 nm has been measured with the NMR method, which corresponds well with SEM and TEM. Using TEM, values of 15 nm have been reported for nanocellulose from carboxymethylated pulp. However, thinner fibrils can also be detected. Wågberg et al. reported fibril widths of 5–15 nm for a nanocellulose with a charge density of about 0.5 meq./g. The group of Isogai reported fibril widths of 3–5 nm for TEMPO-oxidized cellulose having a charge density of 1.5 meq./g. Pulp chemistry has a significant influence on nanocellulose microstructure. Carboxymethylation increases the number of charged groups on the fibril surfaces, making the fibrils easier to liberate, and results in smaller and more uniform fibril widths (5–15 nm) compared to enzymatically pre-treated nanocellulose, where the fibril widths were 10–30 nm. The degree of crystallinity and crystal structure of nanocellulose have also been examined: nanocellulose exhibits cellulose crystal I organization, and the degree of crystallinity is unchanged by the preparation of the nanocellulose. Typical values for the degree of crystallinity were around 63%. Viscosity The rheology of nanocellulose dispersions has been investigated and revealed that the storage and loss moduli were independent of the angular frequency at all nanocellulose concentrations between 0.125% and 5.9%. The storage modulus values are particularly high (10⁴ Pa at 3% concentration) compared to results for CNCs (10² Pa at 3% concentration). There is also a strong concentration dependence, as the storage modulus increases by 5 orders of magnitude if the concentration is increased from 0.125% to 5.9%. Nanocellulose gels are also highly shear thinning (the viscosity is lost upon application of shear forces). The shear-thinning behaviour is particularly useful in a range of different coating applications. It is pseudo-plastic and exhibits thixotropy, the property of certain gels or fluids that are thick (viscous) under normal conditions, but become less viscous when shaken or agitated. 
When the shearing forces are removed the gel regains much of its original state. Mechanical properties Crystalline cellulose has a stiffness of about 140–220 GPa, comparable with that of Kevlar and better than that of glass fiber, both of which are used commercially to reinforce plastics. Films made from nanocellulose have high strength (over 200 MPa) and high stiffness (around 20 GPa) but lack high strain (12%). Its strength/weight ratio is 8 times that of stainless steel. Fibers made from nanocellulose have high strength (up to 1.57 GPa) and stiffness (up to 86 GPa). Barrier properties In semi-crystalline polymers, the crystalline regions are considered to be gas impermeable. Due to relatively high crystallinity, in combination with the ability of the nanofibers to form a dense network held together by strong inter-fibrillar bonds (high cohesive energy density), it has been suggested that nanocellulose might act as a barrier material. Although the number of reported oxygen permeability values is limited, reports attribute high oxygen barrier properties to nanocellulose films. One study reported an oxygen permeability of 0.0006 (cm³ μm)/(m² day kPa) for a ca. 5 μm thick nanocellulose film at 23 °C and 0% RH. In a related study, a more than 700-fold decrease in the oxygen permeability of a polylactide (PLA) film was reported when a nanocellulose layer was added to the PLA surface. The influence of nanocellulose film density and porosity on film oxygen permeability has been explored. Some authors have reported significant porosity in nanocellulose films, which seems to be in contradiction with high oxygen barrier properties, whereas Aulin et al. measured a nanocellulose film density close to the density of crystalline cellulose (cellulose Iβ crystal structure, 1.63 g/cm³), indicating a very dense film with a porosity close to zero. Changing the surface functionality of the cellulose nanoparticle can also affect the permeability of nanocellulose films. Films composed of negatively charged CNCs could effectively reduce permeation of negatively charged ions, while leaving neutral ions virtually unaffected. Positively charged ions were found to accumulate in the membrane. Multi-parametric surface plasmon resonance is one of the methods to study the barrier properties of natural, modified or coated nanocellulose. The quality of different antifouling, moisture, solvent and antimicrobial barrier formulations can be measured on the nanoscale. The adsorption kinetics as well as the degree of swelling can be measured in real time and label-free. Liquid crystals, colloidal glasses, and hydrogels Owing to their anisotropic shape and surface charge, nanocelluloses (mostly rigid CNCs) have a high excluded volume and self-assemble into cholesteric liquid crystals beyond a critical volume fraction. Nanocellulose liquid crystals are left-handed due to the right-handed twist at the particle level. Nanocellulose phase behavior is susceptible to ionic charge screening. An increase in ionic strength induces the arrest of nanocellulose dispersions into attractive glasses. With further increases in ionic strength, nanocelluloses aggregate into hydrogels. The interactions within nanocelluloses are weak and reversible, so nanocellulose suspensions and hydrogels are self-healing and may be applied as injectable materials or 3D printing inks. Bulk foams and aerogels Nanocellulose can also be used to make aerogels/foams, either homogeneously or in composite formulations. 
Nanocellulose-based foams are being studied for packaging applications in order to replace polystyrene-based foams. Svagan et al. showed that nanocellulose has the ability to reinforce starch foams by using a freeze-drying technique. The advantage of using nanocellulose instead of wood-based pulp fibers is that the nanofibrils can reinforce the thin cells in the starch foam. Moreover, it is possible to prepare pure nanocellulose aerogels by applying various freeze-drying and supercritical drying techniques. Aerogels and foams can be used as porous templates. Tough ultra-high-porosity foams prepared from cellulose I nanofibril suspensions were studied by Sehaqui et al.; a wide range of mechanical properties, including compression, was obtained by controlling density and nanofibril interaction in the foams. CNCs could also be made to gel in water under low-power sonication, giving rise to aerogels with the highest reported surface area (>600 m²/g) and lowest shrinkage during drying (6.5%) of cellulose aerogels. In another study by Aulin et al., the formation of structured porous aerogels of nanocellulose by freeze-drying was demonstrated. The density and surface texture of the aerogels were tuned by selecting the concentration of the nanocellulose dispersions before freeze-drying. Chemical vapour deposition of a fluorinated silane was used to uniformly coat the aerogels to tune their wetting properties towards non-polar liquids/oils. The authors demonstrated that it is possible to switch the wettability behaviour of the cellulose surfaces between super-wetting and super-repellent, using different scales of roughness and porosity created by the freeze-drying technique and change of concentration of the nanocellulose dispersion. Structured porous cellulose foams can, however, also be obtained by utilizing the freeze-drying technique on cellulose generated by Gluconobacter strains of bacteria, which bio-synthesize open porous networks of cellulose fibers with relatively large amounts of nanofibrils dispersed inside. Olsson et al. demonstrated that these networks can be further impregnated with metal hydroxide/oxide precursors, which can readily be transformed into grafted magnetic nanoparticles along the cellulose nanofibers. The magnetic cellulose foam may allow for a number of novel applications of nanocellulose, and the first remotely actuated magnetic super sponges absorbing 1 gram of water within a 60 mg cellulose aerogel foam were reported. Notably, these highly porous foams (>98% air) can be compressed into strong magnetic nanopapers, which may find use as functional membranes in various applications. Pickering emulsions and foams Nanocelluloses can stabilize emulsions and foams by a Pickering mechanism, i.e. they adsorb at the oil–water or air–water interface and prevent their energetically unfavorable contact. Nanocelluloses form oil-in-water emulsions with a droplet size in the range of 4–10 μm that are stable for months and can resist high temperatures and changes in pH. Nanocelluloses decrease the oil–water interfacial tension, and their surface charge induces electrostatic repulsion within emulsion droplets. Upon salt-induced charge screening, the droplets aggregate but do not undergo coalescence, indicating strong steric stabilization. The emulsion droplets even remain stable in the human stomach and resist gastric lipolysis, thereby delaying lipid absorption and satiation. 
In contrast to emulsions, native nanocelluloses are generally not suitable for the Pickering stabilization of foams, which is attributed to their primarily hydrophilic surface properties, which result in an unfavorable contact angle below 90° (they are preferentially wetted by the aqueous phase). Using hydrophobic surface modifications or polymer grafting, the surface hydrophobicity and contact angle of nanocelluloses can be increased, also allowing the Pickering stabilization of foams. By further increasing the surface hydrophobicity, inverse water-in-oil emulsions can be obtained, corresponding to a contact angle higher than 90°. It was further demonstrated that nanocelluloses can stabilize water-in-water emulsions in the presence of two incompatible water-soluble polymers. Cellulose nanofiber plate A bottom-up approach can be used to create a high-performance bulk material with low density, high strength and toughness, and great thermal dimensional stability: the cellulose nanofiber plate (CNFP). Cellulose nanofiber hydrogel is created by biosynthesis. The hydrogels can then be treated with a polymer solution or by surface modification and are then hot-pressed at 80 °C. The result is a bulk material with excellent machinability. "The ultrafine nanofiber network structure in CNFP results in more extensive hydrogen bonding, the high in-plane orientation, and 'three way branching points' of the microfibril networks". This structure gives CNFP its high strength by distributing stress and adding barriers to crack formation and propagation. The weak link in this structure is the bond between the pressed layers, which can lead to delamination. To reduce delamination, the hydrogel can be treated with silicic acid, which creates strong covalent cross-links between layers during hot pressing. Surface modification The surface modification of nanocellulose is currently receiving a large amount of attention. Nanocellulose displays a high concentration of hydroxyl groups at the surface, which can be reacted. However, hydrogen bonding strongly affects the reactivity of the surface hydroxyl groups. In addition, impurities at the surface of nanocellulose, such as glucosidic and lignin fragments, need to be removed before surface modification to obtain acceptable reproducibility between different batches. Safety aspects Processing of nanocellulose does not cause significant exposure to fine particles during friction grinding or spray drying. No evidence of inflammatory effects or cytotoxicity on mouse or human macrophages can be observed after exposure to nanocellulose. The results of toxicity studies suggest that nanocellulose is not cytotoxic and does not cause any effects on the inflammatory system in macrophages. In addition, nanocellulose is not acutely toxic to Vibrio fischeri at environmentally relevant concentrations. Despite intensified research on oral food or pharmaceutical formulations containing nanocelluloses, they are not generally recognized as safe. Nanocelluloses were demonstrated to exhibit limited toxicity and oxidative stress in in vitro intestinal epithelium or animal models. Potential applications The properties of nanocellulose (e.g. mechanical properties, film-forming properties, viscosity etc.) make it an interesting material for many applications. Paper and paperboard In the area of paper and paperboard manufacture, nanocelluloses are expected to enhance the fiber–fiber bond strength and, hence, have a strong reinforcement effect on paper materials. 
Nanocellulose may be useful as a barrier in grease-proof papers and as a wet-end additive to enhance retention and dry and wet strength in commodity paper and board products. It has been shown that applying CNF as a coating material on the surface of paper and paperboard improves the barrier properties, especially air resistance and grease/oil resistance. It also enhances the structural properties of paperboards (smoother surface). The very high viscosity of MFC/CNF suspensions at low solids content limits the types of coating techniques that can be utilized to apply these suspensions onto paper/paperboard. Some of the coating methods utilized for MFC surface application onto paper/paperboard have been rod coating, size press, spray coating, foam coating and slot-die coating. Wet-end surface application of mineral pigment and MFC mixtures to improve the barrier, mechanical and printing properties of paperboard is also being explored. Nanocellulose can be used to prepare flexible and optically transparent paper. Such paper is an attractive substrate for electronic devices because it is recyclable, compatible with biological objects, and easily biodegrades. Composite As described above, the properties of nanocellulose make it an interesting material for reinforcing plastics. Nanocellulose can be spun into filaments that are stronger and stiffer than spider silk. Nanocellulose has been reported to improve the mechanical properties of thermosetting resins, starch-based matrices, soy protein, rubber latex and poly(lactide). Hybrid cellulose nanofibril–clay mineral composites present interesting mechanical, gas barrier and fire retardancy properties. The composite applications may be for use as coatings and films, paints, foams and packaging. Food Nanocellulose can be used as a low-calorie replacement for carbohydrate additives used as thickeners, flavour carriers, and suspension stabilizers in a wide variety of food products. It is useful for producing fillings, crushes, chips, wafers, soups, gravies, puddings, etc. The food applications arise from the rheological behaviour of the nanocellulose gel. Hygiene and absorbent products Applications in this field include: super-water-absorbent materials (e.g. for incontinence pads), nanocellulose used together with superabsorbent polymers, nanocellulose in tissue, non-woven products or absorbent structures, and antimicrobial films. Emulsion and dispersion Nanocellulose has potential applications in the general area of emulsion and dispersion applications in other fields. Medical, cosmetic and pharmaceutical The use of nanocellulose in cosmetics and pharmaceuticals has been suggested: Freeze-dried nanocellulose aerogels used in sanitary napkins, tampons, diapers or as wound dressings The use of nanocellulose as a composite coating agent in cosmetics e.g. 
for hair, eyelashes, eyebrows or nails A dry solid nanocellulose composition in the form of tablets for treating intestinal disorders Nanocellulose films for screening of biological compounds and nucleic acids encoding a biological compound Filter medium partly based on nanocellulose for leukocyte-free blood transfusion A buccodental formulation, comprising nanocellulose and a polyhydroxylated organic compound Powdered nanocellulose has also been suggested as an excipient in pharmaceutical compositions Nanocellulose in compositions of a photoreactive noxious substance purging agent Elastic cryo-structured gels for potential biomedical and biotechnological applications Matrix for 3D cell culture Bio-based electronics and energy storage Nanocellulose can pave the way for a new type of "bio-based electronics" where interactive materials are mixed with nanocellulose to enable the creation of new interactive fibers, films, aerogels, hydrogels and papers. For example, nanocellulose mixed with conducting polymers such as PEDOT:PSS shows synergistic effects resulting in extraordinary mixed electronic and ionic conductivity, which is important for energy storage applications. Filaments spun from a mix of nanocellulose and carbon nanotubes show good conductivity and mechanical properties. Nanocellulose aerogels decorated with carbon nanotubes can be constructed into robust compressible 3D supercapacitor devices. Structures from nanocellulose can be turned into bio-based triboelectric generators and sensors. In April 2013, breakthroughs in nanocellulose production by algae were announced at the First International Symposium on Nanocellulose, part of an American Chemical Society meeting, by R. Malcolm Brown, Jr., Ph.D., who has pioneered research in the field for more than 40 years. Genes from the family of bacteria that produce vinegar, kombucha tea and nata de coco have become central to a project, which scientists said has reached an advanced stage, that would turn algae into solar-powered factories for producing the "wonder material" nanocellulose. Bio-based coloured materials Cellulose nanocrystals have shown the ability to self-organize into chiral nematic structures with angle-dependent iridescent colours. It is thus possible to manufacture totally bio-based pigments and glitters, and films including sequins with a metallic glare and a small footprint compared to fossil-based alternatives. Other potential applications As a highly scattering material for ultra-white coatings Activation of the dissolution of cellulose in different solvents Regenerated cellulose products, such as fibers, films and cellulose derivatives Tobacco filter additive Organometallic-modified nanocellulose in battery separators Reinforcement of conductive materials Loud-speaker membranes High-flux membranes Computer components Capacitors Lightweight body armour and ballistic glass Corrosion inhibitors Radio lenses Related materials Nanochitin is similar in its nanostructure to cellulose nanocrystals but extracted from chitin. See also Cellulose Cellulose fiber Microcrystalline cellulose Composite material References Polymers Cellulose Nanoparticles by composition Wood products Biomaterials
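As a rough illustration of the barrier figures quoted earlier in this article, the reported oxygen permeability of a thin nanocellulose film can be converted into an approximate oxygen transmission rate. The sketch below takes the permeability value from the text; the film thickness and the oxygen partial pressure are assumptions made for the sake of the example, not measured values from the cited study.

```python
# Convert the reported oxygen permeability of a nanocellulose film into an
# approximate oxygen transmission rate (OTR): permeance = permeability / thickness,
# OTR = permeance * driving partial pressure.

permeability = 0.0006   # (cm^3 * um) / (m^2 * day * kPa), value quoted in the text
thickness_um = 5.0      # film thickness in micrometres (assumed, matching "ca. 5 um")
p_oxygen_kpa = 21.0     # oxygen partial pressure of air at ~1 atm (assumed)

permeance = permeability / thickness_um   # cm^3 / (m^2 * day * kPa)
otr = permeance * p_oxygen_kpa            # cm^3 / (m^2 * day)

print(f"Permeance: {permeance:.2e} cm^3/(m^2*day*kPa)")
print(f"Approximate OTR against air: {otr:.4f} cm^3/(m^2*day)")
```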
Nanocellulose
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
5,983
[ "Biomaterials", "Materials", "Polymer chemistry", "Polymers", "Matter", "Medical technology" ]
27,332,750
https://en.wikipedia.org/wiki/Primary%20and%20secondary%20antibodies
Primary and secondary antibodies are two groups of antibodies that are classified based on whether they bind to antigens or proteins directly or target another (primary) antibody that, in turn, is bound to an antigen or protein. Primary A primary antibody can be very useful for the detection of biomarkers for diseases such as cancer, diabetes, Parkinson's and Alzheimer's disease, and primary antibodies are used for the study of absorption, distribution, metabolism, and excretion (ADME) and multi-drug resistance (MDR) of therapeutic agents. Secondary Secondary antibodies provide signal detection and amplification along with extending the utility of an antibody through conjugation to proteins. Secondary antibodies are especially efficient in immunolabeling. Secondary antibodies bind to primary antibodies, which are directly bound to the target antigen(s). In immunolabeling, the primary antibody's Fab domain binds to an antigen and exposes its Fc domain to the secondary antibody. The secondary antibody's Fab domain then binds to the primary antibody's Fc domain. Since the Fc domain is constant within the same animal class, only one type of secondary antibody is required to bind to many types of primary antibodies. This reduces cost, since only one type of secondary antibody needs to be labeled, rather than labeling various types of primary antibodies. Secondary antibodies help increase sensitivity and signal amplification due to multiple secondary antibodies binding to a single primary antibody. Whole immunoglobulin molecule secondary antibodies are the most commonly used format, but these can be enzymatically processed to enable assay refinement. F(ab')2 fragments are generated by pepsin digestion, which removes most of the Fc fragment; this avoids recognition by Fc receptors on live cells and binding to Protein A or Protein G. Papain digestion generates Fab fragments by removing the entire Fc fragment, including the hinge region, yielding two monovalent Fab moieties. These fragments can be used to block endogenous immunoglobulins on cells, tissues or other surfaces, and to block the exposed immunoglobulins in multiple labeling experiments using primary antibodies from the same species. Applications Secondary antibodies can be conjugated to enzymes such as horseradish peroxidase (HRP) or alkaline phosphatase (AP); to fluorescent dyes such as fluorescein isothiocyanate (FITC), rhodamine derivatives or Alexa Fluor dyes; or to other molecules to be used in various applications. Secondary antibodies are used in many biochemical assays including: ELISA, including many HIV tests Western blot Immunostaining Immunohistochemistry Immunocytochemistry References Glycoproteins Immune system Biochemistry methods
Primary and secondary antibodies
[ "Chemistry", "Biology" ]
565
[ "Biochemistry methods", "Biochemistry", "Immune system", "Organ systems", "Glycoproteins", "Glycobiology" ]
27,333,066
https://en.wikipedia.org/wiki/Divinylcyclopropane-cycloheptadiene%20rearrangement
The divinylcyclopropane-cycloheptadiene rearrangement is an organic chemical transformation that involves the isomerization of a 1,2-divinylcyclopropane into a cycloheptadiene or -triene. It is conceptually related to the Cope rearrangement, but has the advantage of a strong thermodynamic driving force due to the release of ring strain. This thermodynamic driving force has recently been considered as an alternative energy source. Introduction In 1960, Vogel discovered that 1,2-divinylcyclopropane rearranges to cyclohepta-1,4-diene. After his discovery, a series of intense mechanistic investigations of the reaction followed in the 1960s, as researchers realized it bore resemblance (both structural and mechanistic) to the related rearrangement of vinylcyclopropane to cyclopentene. By the 1970s, the rearrangement had achieved synthetic utility, and to this day it continues to be a useful method for the formation of seven-membered rings. Variations incorporating heteroatoms have been reported (see below). (1) Advantages: Being a rearrangement, the process exhibits ideal atom economy. It often proceeds spontaneously without the need for a catalyst. Competitive pathways are minimal for the all-carbon rearrangement. Disadvantages: The configuration of the starting materials needs to be controlled in many cases—trans-divinylcyclopropanes often require heating to facilitate isomerization before rearrangement will occur. Rearrangements involving heteroatoms can exhibit reduced yields due to the formation of side products. Mechanism and stereochemistry Prevailing mechanism The primary debate concerning the mechanism of the rearrangement centers on whether it is a concerted (sigmatropic) or stepwise (diradical) process. Mechanistic experiments have shown that trans-divinylcyclopropanes epimerize to the corresponding cis isomers and undergo the rearrangement via what is most likely a concerted pathway. A boat-like transition state has been proposed and helps explain the observed stereospecificity of the process. Whether the initial epimerization of trans substrates occurs via a one- or two-center process is unclear in most cases. (2) Transition-metal-catalyzed versions of the rearrangement are known, and mechanisms vary. In one example employing rhodium bis(ethylene) hexafluoroacetylacetonate, coordination and formation of a bis-π-allyl complex precede electrocyclic ring closure and catalyst release. (3) Stereoselective variants Reactions of divinylcyclopropanes containing substituted double bonds are stereospecific with respect to the configurations at the double bonds—cis,cis isomers give cis products, while cis,trans isomers give trans products. Thus, chiral, non-racemic starting materials give rise to chiral products without loss of enantiomeric purity. In the example below, only the isomers depicted were observed in each case. (4) Scope and limitations A wide variety of divinylcyclopropanes undergo the titular reaction. These precursors have been generated by a variety of methods, including the addition of cyclopropyl nucleophiles (lithium or copper salts) to activated double or triple bonds, elimination of bis(2-haloethyl)cyclopropanes, and cyclopropanation. In the example below, cuprate addition-elimination generates the transient enone 1, which rearranges to spirocycle 2. (5) Organolithiums can be employed in a similar role, but add in a direct fashion to carbonyls. Products with fused topology result. 
(6) Rearrangement after elimination of ditosylates has been observed; the chlorinated cycloheptadiene thus produced isomerizes to conjugated heptadiene 3 during the reaction. (7) Cyclopropanation with conjugated diazo compounds produces divinylcyclopropanes that then undergo rearrangement. When cyclic starting materials are used, bridged products result. (8) Substrates containing three-membered heterocyclic rings can also undergo the reaction. cis-Divinylepoxides give oxepines at elevated temperatures (100 °C). trans Isomers undergo an interesting competitive rearrangement to dihydrofurans through the intermediacy of a carbonyl ylide and the same ylide intermediate has been proposed as the direct precursor to the oxepine product 4. Conjugated dienyl epoxides form similar products, lending support to the existence of an ylide intermediate. (9) Divinyl aziridines undergo a similar suite of reactions providing azepines or vinyl pyrrolines depending on the relative configuration of the aziridine starting material. Divinyl thiiranes can provide thiepines or dihydrothiophenes, although these reactions are slower than those of the corresponding nitrogen- and oxygen-containing compounds. Synthetic applications The earliest observation of a cycloheptadiene via the title rearrangement was made by Baeyer in his synthesis of eucarvone from carvone hydrobromide. Mechanistic studies revealed that the rearrangement did indeed proceed via a concerted, Cope-type mechanism. (10) In the Eschenmoser synthesis of colchicine, the rearrangement is used to form the seven-membered ring of the target. (11) A racemic synthesis of sirenin employs a Wittig reaction to form the key divinylcyclopropane. Hydrogenation of the rearrangement product afforded the target. (12) Experimental conditions and procedure Typical conditions Typically, the rearrangement is carried out just after the formation of the divinylcyclopropane, in the same pot. Heating is sometimes necessary, particularly for trans substrates, which must undergo epimerization prior to rearrangement. With enough energy to surmount activation barriers, however, the isomerization is usually very efficient. Example procedure (13) To a cold (–78°) stirred solution of lithium diisopropylamide (1.4–1.5 mmol/mmol of ketone) in dry THF (4 mL/mmol of base) under an atmosphere of argon was added slowly a solution of n-butyl-trans-2-vinylcyclopropyl ketone (1.19 mmol) in dry THF (1 mL/mmol of ketone), and the resulting solution was stirred at –78° for 45 minutes. A solution of freshly sublimed tert-butyldimethylsilyl chloride (1.6 mmol/mmol of ketone) in dry THF (1 mL/mmol of chloride) was added, followed by dry HMPA (0.5 mL/mmol of ketone). The solution was stirred at –78° for 15 minutes and at room temperature for 2–3 hours, and then it was partitioned between saturated aqueous sodium bicarbonate and pentane (10 mL and 20 mL/mmol of ketone, respectively). The aqueous phase was washed twice with pentane. The combined extract was washed four times with saturated aqueous sodium bicarbonate and twice with brine, and then dried (MgSO4). Removal of the solvent, followed by bulb-to-bulb distillation of the remaining oil, gave the corresponding silyl enol ether as a colorless oil that exhibited no IR carbonyl stretching absorption. Thermolysis of the silyl enol ether was accomplished by heating (neat, argon atmosphere) at 230° (air-bath temperature) for 30–60 minutes. 
Direct distillation (140–150°/12 torr) of the resultant materials provided the cycloheptadiene in 85% yield: IR (film) 1660, 1260, 840 cm–1; 1H NMR (CDCl3) δ 0.09 (s, 6H), 0.88 (s, 9H), 0.7–2.75 (m, 14H), 4.8 (t, 1H, J = 5.5 Hz), 5.5–5.9 (m, 2H). References Organic reactions Cyclopropanes Rearrangement reactions Ring expansion reactions
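Because the procedure above expresses most quantities as ratios per millimole of ketone, scaling it to a different batch size is simple arithmetic. The sketch below merely restates those ratios for an arbitrary batch; the 5.0 mmol scale is an assumption, and the midpoint of the quoted 1.4–1.5 equivalent range for the base is used.

```python
# Scale the reagent quantities of the example procedure to a chosen batch size.
# All ratios are taken from the text; the batch size is an illustrative assumption.

ketone_mmol = 5.0                         # assumed batch size (mmol of vinylcyclopropyl ketone)

lda_mmol      = 1.45 * ketone_mmol        # 1.4-1.5 mmol LDA per mmol of ketone (midpoint)
thf_lda_ml    = 4.0 * lda_mmol            # 4 mL dry THF per mmol of base
thf_ketone_ml = 1.0 * ketone_mmol         # 1 mL dry THF per mmol of ketone
tbscl_mmol    = 1.6 * ketone_mmol         # 1.6 mmol TBSCl per mmol of ketone
thf_tbscl_ml  = 1.0 * tbscl_mmol          # 1 mL dry THF per mmol of chloride
hmpa_ml       = 0.5 * ketone_mmol         # 0.5 mL HMPA per mmol of ketone
nahco3_ml     = 10.0 * ketone_mmol        # workup: 10 mL sat. aq. NaHCO3 per mmol of ketone
pentane_ml    = 20.0 * ketone_mmol        # workup: 20 mL pentane per mmol of ketone

quantities = [
    ("LDA", lda_mmol, "mmol"),
    ("dry THF (LDA solution)", thf_lda_ml, "mL"),
    ("dry THF (ketone solution)", thf_ketone_ml, "mL"),
    ("TBSCl", tbscl_mmol, "mmol"),
    ("dry THF (TBSCl solution)", thf_tbscl_ml, "mL"),
    ("HMPA", hmpa_ml, "mL"),
    ("sat. aq. NaHCO3 (workup)", nahco3_ml, "mL"),
    ("pentane (workup)", pentane_ml, "mL"),
]

print(f"Quantities for {ketone_mmol:.1f} mmol of ketone:")
for name, amount, unit in quantities:
    print(f"  {name}: {amount:.1f} {unit}")
```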
Divinylcyclopropane-cycloheptadiene rearrangement
[ "Chemistry" ]
1,801
[ "Ring expansion reactions", "Rearrangement reactions", "Organic reactions" ]
27,338,603
https://en.wikipedia.org/wiki/CHAdeMO
CHAdeMO is a fast-charging system for battery electric vehicles, developed in 2010 by the CHAdeMO Association, formed by the Tokyo Electric Power Company and five major Japanese automakers. The name is an abbreviation of "CHArge de MOve" (which the organization translates as "charge for moving") and is derived from the Japanese phrase "" (), translating to English as "How about a cup of tea?", referring to the time it would take to charge a car. It competes with the Combined Charging System (CCS), which since 2014 has been required on public charging infrastructure installed in the European Union, Tesla's North American Charging System (NACS) used by its Supercharger network outside of Europe, and China's GB/T charging standard. , CHAdeMO remains popular in Japan, but is being equipped on very few new cars sold in North America or Europe. First-generation CHAdeMO connectors deliver up to 62.5 kW by 500 V, 125 A direct current through a proprietary electrical connector, adding about of range in a half an hour. It has been included in several international vehicle charging standards. The second-generation specification allows for up to 400 kW by 1 kV, 400 A direct current. The CHAdeMO Association is currently co-developing with China Electricity Council (CEC) the third-generation standard with the working name of “ChaoJi” that aims to deliver 900 kW. The charging system is now considered outdated in the U.S, with the Nissan Leaf and the Mitsubishi Outlander PHEV being the only models to use it in the country. History CHAdeMO originated out of a charging system design from the Tokyo Electric Power Company (TEPCO). TEPCO had been participating on numerous EV infrastructure trial projects between 2006 and 2009 in collaboration with Nissan, Mitsubishi, Fuji Heavy Industries (now Subaru), and other manufacturers. These trials resulted in TEPCO developing patented technology and a specification, which would form the basis for the CHAdeMO. The first commercial CHAdeMO charging infrastructure was commissioned in 2009 alongside the launch of the Mitsubishi i-MiEV. In March 2010, TEPCO formed the CHAdeMO Association with Toyota, Nissan, Mitsubishi, and Subaru. They were later joined by Hitachi, Honda and Panasonic. CHAdeMO would be the first organization to propose a standardized DC fast charge system to be shared across diverse EVs, regardless of their brands and models. CHAdeMO became a published international standard in 2014 when the International Electrotechnical Commission (IEC) adopted IEC 61851-23 for the charging system, IEC 61851-24 for communication, and IEC 62196-3 configuration AA for the connector. Later that year, the European Committee for Electrotechnical Standardization (EN) added CHAdeMO as a published standard along with CCS Combo 2, followed by the Institute of Electrical and Electronics Engineers (IEEE) in 2016. A major blow to the international adoption of CHAdeMO came in 2013 when European Commission designated the Combined Charging System (CCS) Combo 2 as the mandated plug for DC high-power charging infrastructure in Europe. While the European Parliament had contemplated transitioning out CHAdeMO infrastructure by January 2019, the final mandate only required that all publicly accessible chargers in the EU be equipped 'at least' with CCS Combo 2, allowing stations to offer multiple connector types. 
While CHAdeMO was the first fast-charging standard to see widespread deployment and remains widely equipped on vehicles sold in Japan, it has been losing market share in other countries. Honda was the first of the CHAdeMO Association members to stop equipping the connector on vehicles sold outside of Japan, starting with the Clarity Electric in 2016. Nissan decided not to use CHAdeMO on its Ariya SUVs introduced in 2021 outside of Japan. Toyota and Subaru have also equipped their jointly developed bZ4X/Solterra with CCS connectors outside of Japan. , the Mitsubishi Outlander PHEV and Nissan Leaf are the only plug-in vehicles equipped with CHAdeMO for sale in North America. As demand increased for EV charging services for Tesla vehicles after 2019, and prior to the opening of the competing North American Charging System (NACS) in late 2022, several electric vehicle charging network operators added Tesla charging connector adapters to CHAdeMO-standard charging stations. These included the ONroute rest stop network in Ontario, Canada, where a Tesla adaptor was permanently attached to a CHAdeMO connector at some 60 charge stations, and REVEL, which operated a charging station in Brooklyn for a while after it was denied a license to operate a Tesla ride-hailing fleet in New York City. Also, EVgo added a few optional Tesla adaptors to CHAdeMO connectors as early as 2019. Connector design DC fast charge Most electric vehicles (EVs) have an on-board charger that uses a full bridge rectifier to transform alternating current (AC) from the electrical grid to direct current (DC) suitable for recharging the EV's battery pack. Most EVs are designed with limited AC input power, typically based on the available power of consumer outlets: for example, 240 V, 30 A in the United States and Japan; 240 V, 40 A in Canada; and 230 V, 15 A or 3φ, 400 V, 32 A in Europe and Australia. AC chargers with higher limits have been specified: for example, SAE J1772-2009 has an option for 240 V, 80 A, and VDE-AR-E 2623-2-2 specifies 3φ, 400 V, 63 A. However, these charger types have rarely been deployed. Cost and thermal issues limit how much power the rectifier can handle, so beyond approximately 240 V AC and 75 A it is better for an external charging station to deliver DC directly to the battery. For faster charging, dedicated DC chargers can be built in permanent locations and provided with high-current connections to the grid. Such high-voltage, high-current charging is called DC fast charging (DCFC) or DC quick charging (DCQC). Connector protocols and history While the notion of shared off-board DC charging infrastructure, together with the charging system design for CHAdeMO, came out of TEPCO's trials starting in 2006, the connector itself had been designed in 1993, and was specified by the 1993 Japan Electric Vehicle Standard (JEVS) G105-1993 from JARI. In addition to carrying power, the connector also makes a data connection using the CAN bus protocol. This performs functions such as a safety interlock to avoid energizing the connector before it is safe (similar to SAE J1772), and transmitting battery parameters to the charging station, including when to stop charging (top battery percentage, usually 80%), target voltage, total battery capacity, and how the station should vary its output current while charging. The first protocol issued was CHAdeMO 0.9, which offered a maximum charging power of 62.5 kW (125 A × 500 V DC). Version 1.0 followed in 2012, enhancing vehicle protection, compatibility, and reliability. 
Version 1.1 (2015) allowed the current to dynamically change during charging; Version 1.2 (2017) increased maximum power to 200 kW (400 A × 500 V DC). CHAdeMO published its protocol for 400 kW (400 A × 1 kV) 'ultra-fast' charging in May 2018 as CHAdeMO 2.0. CHAdeMO 2.0 allowed the standard to better compete with the CCS 'ultra-fast' stations being built around the world as part of new networks such as IONITY charging consortium. Vehicle-to-grid (V2G) In 2014, CHAdeMO published its protocol for vehicle-to-grid (V2G) integration, which also includes applications for vehicle to load (V2L) or vehicle to home-off grid (V2H), collectively denoted V2X. The technology enables EV owners to use the car as an energy storage device, potentially lowering costs by optimising energy usage for the current time of use pricing and providing electricity to the grid. Since 2012, multiple V2X demo projects using the CHAdeMO protocol have been demonstrated worldwide. Some of the recent projects include UCSD INVENT in the United States, as well as Sciurus and e4Future in the United Kingdom that are supported by Innovate UK. CHAdeMO 3.0: ChaoJi Deployment CHAdeMO-type fast charging stations were initially installed in great numbers by TEPCO in Japan, which required the creation of an additional power distribution network to supply these stations. Since then, CHAdeMO charger installation has expanded its geographical reach and in May 2023, the CHAdeMO Association stated that there were 57,800 CHAdeMO chargers installed in 99 countries. These included 9,600 charging stations in Japan, 31,600 in Europe, 9,400 in North America, and 7,000 elsewhere. As of January 2022, a total of 260 certified CHAdeMO charger models have been produced by 50 companies. Gallery See also CCS Combo SAE J1772 (Type 1) ChaoJi GB/T charging standard References External links International Electrotechnical Commission DC power connectors Plug-in hybrid vehicle industry Charging stations Automotive standards 2010 establishments in Japan
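The protocol generations described above differ mainly in their voltage and current ceilings, so the headline power figures follow directly from P = V × I. The sketch below reproduces the ceilings quoted in the text and estimates how long each generation would need to deliver a given amount of energy; the 40 kWh pack size and the assumption of sustained peak power are illustrative simplifications, since real charging tapers as the battery fills.

```python
# Peak power for each CHAdeMO protocol generation (P = V * I), using the
# voltage/current ceilings quoted in the text, plus a rough charge-time estimate.
# Battery size and constant-power charging are simplifying assumptions.

protocols = {
    # name: (max voltage in V, max current in A)
    "CHAdeMO 0.9": (500, 125),
    "CHAdeMO 1.2": (500, 400),
    "CHAdeMO 2.0": (1000, 400),
}

battery_kwh = 40.0   # assumed pack size

for name, (volts, amps) in protocols.items():
    power_kw = volts * amps / 1000.0
    minutes = battery_kwh / power_kw * 60.0   # idealized: constant peak power, no taper
    print(f"{name}: {power_kw:.1f} kW peak -> ~{minutes:.0f} min for {battery_kwh:.0f} kWh")
```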
CHAdeMO
[ "Engineering" ]
1,929
[ "Electrical engineering organizations", "International Electrotechnical Commission" ]
27,339,211
https://en.wikipedia.org/wiki/Spectral%20radiance
In radiometry, spectral radiance or specific intensity is the radiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of spectral radiance in frequency is the watt per steradian per square metre per hertz (W·sr⁻¹·m⁻²·Hz⁻¹) and that of spectral radiance in wavelength is the watt per steradian per square metre per metre (W·sr⁻¹·m⁻³)—commonly the watt per steradian per square metre per nanometre (W·sr⁻¹·m⁻²·nm⁻¹). The microflick is also used to measure spectral radiance in some fields. Spectral radiance gives a full radiometric description of the field of classical electromagnetic radiation of any kind, including thermal radiation and light. It is conceptually distinct from the descriptions in explicit terms of Maxwellian electromagnetic fields or of photon distribution. It refers to material physics as distinct from psychophysics. For the concept of specific intensity, the line of propagation of radiation lies in a semi-transparent medium which varies continuously in its optical properties. The concept refers to an area, projected from the element of source area into a plane at right angles to the line of propagation, and to an element of solid angle subtended by the detector at the element of source area. The term brightness is also sometimes used for this concept. The SI system states that the word brightness should not be so used, but should instead refer only to psychophysics. Definition The specific (radiative) intensity is a quantity that describes the rate of radiative transfer of energy at P₁, a point of space with coordinates x, at time t. It is a scalar-valued function of four variables, customarily written as I_ν(x, t; n₁, ν), where x denotes the point of space, t the time, n₁ the unit vector in the direction of propagation, and ν the frequency. I_ν(x, t; n₁, ν) is defined to be such that a virtual source area, dA₁, containing the point P₁, is an apparent emitter of a small but finite amount of energy dE transported by radiation of frequencies (ν, ν + dν) in a small time duration dt, where dE = I_ν(x, t; n₁, ν) cos θ₁ dA₁ dΩ₁ dν dt, and where θ₁ is the angle between the line of propagation r and the normal to dA₁; the effective destination of dE is a finite small area dA₂, containing the point P₂, that defines a finite small solid angle dΩ₁ about P₁ in the direction of r. The cosine accounts for the projection of the source area dA₁ into a plane at right angles to the line of propagation indicated by r. The use of the differential notation for areas indicates they are very small compared to r², the square of the magnitude of the vector r, and thus the solid angles are also small. There is no radiation that is attributed to P₁ itself as its source, because P₁ is a geometrical point with no magnitude. A finite area is needed to emit a finite amount of light. Invariance For propagation of light in a vacuum, the definition of specific (radiative) intensity implicitly allows for the inverse square law of radiative propagation. The concept of specific (radiative) intensity of a source at the point P₁ presumes that the destination detector at the point P₂ has optical devices (telescopic lenses and so forth) that can resolve the details of the source area dA₁. Then the specific radiative intensity of the source is independent of the distance from source to detector; it is a property of the source alone. This is because it is defined per unit solid angle, the definition of which refers to the area dA₂ of the detecting surface. This may be understood by looking at the diagram. The factor cos θ₁ has the effect of converting the effective emitting area dA₁ into a virtual projected area cos θ₁ dA₁ at right angles to the vector r from source to detector. 
The solid angle dΩ₁ also has the effect of converting the detecting area dA₂ into a virtual projected area cos θ₂ dA₂ at right angles to the vector r, so that dΩ₁ = cos θ₂ dA₂ / r². Substituting this for dΩ₁ in the above expression for the collected energy dE, one finds dE = I_ν(x, t; n₁, ν) cos θ₁ dA₁ cos θ₂ dA₂ dν dt / r²: when the emitting and detecting areas and angles dA₁ and dA₂, θ₁ and θ₂, are held constant, the collected energy dE is inversely proportional to the square of the distance r between them, with invariant I_ν(x, t; n₁, ν). This may be expressed also by the statement that I_ν(x, t; n₁, ν) is invariant with respect to the length of r; that is to say, provided the optical devices have adequate resolution, and that the transmitting medium is perfectly transparent, as for example a vacuum, then the specific intensity of the source is unaffected by the length of the ray r. For the propagation of light in a transparent medium with a non-unit non-uniform refractive index, the invariant quantity along a ray is the specific intensity divided by the square of the absolute refractive index. Reciprocity For the propagation of light in a semi-transparent medium, specific intensity is not invariant along a ray, because of absorption and emission. Nevertheless, the Stokes-Helmholtz reversion-reciprocity principle applies, because absorption and emission are the same for both senses of a given direction at a point in a stationary medium. Étendue and reciprocity The term étendue is used to focus attention specifically on the geometrical aspects. The reciprocal character of étendue is indicated in the article about it. Étendue is defined as a second differential. In the notation of the present article, the second differential of the étendue, d²G, of the pencil of light which "connects" the two surface elements dA₁ and dA₂ is defined as d²G = dA₁ cos θ₁ dΩ₁ = (dA₁ cos θ₁)(dA₂ cos θ₂) / r². This can help in understanding the geometrical aspects of the Stokes-Helmholtz reversion-reciprocity principle. Collimated beam For the present purposes, the light from a star can be treated as a practically collimated beam, but apart from this, a collimated beam is rarely if ever found in nature, though artificially produced beams can be very nearly collimated. For some purposes the rays of the sun can be considered as practically collimated, because the sun subtends an angle of only 32′ of arc. The specific (radiative) intensity is suitable for the description of an uncollimated radiative field. The integrals of specific (radiative) intensity with respect to solid angle, used for the definition of spectral flux density, are singular for exactly collimated beams, or may be viewed as Dirac delta functions. Therefore, the specific (radiative) intensity is unsuitable for the description of a collimated beam, while spectral flux density is suitable for that purpose. Rays Specific (radiative) intensity is built on the idea of a pencil of rays of light. In an optically isotropic medium, the rays are normals to the wavefronts, but in an optically anisotropic crystalline medium, they are in general at angles to those normals. That is to say, in an optically anisotropic crystal, the energy does not in general propagate at right angles to the wavefronts. Alternative approaches The specific (radiative) intensity is a radiometric concept. Related to it is the intensity in terms of the photon distribution function, which uses the metaphor of a particle of light that traces the path of a ray. The idea common to the photon and the radiometric concepts is that the energy travels along rays. Another way to describe the radiative field is in terms of the Maxwell electromagnetic field, which includes the concept of the wavefront. 
The rays of the radiometric and photon concepts are along the time-averaged Poynting vector of the Maxwell field. In an anisotropic medium, the rays are not in general perpendicular to the wavefront. References Radiometry
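As a rough numerical illustration of the invariance argument above (not part of the original article; the radiance value and the geometry are made-up numbers), the sketch below computes the power collected from a small source patch at several distances and shows that the collected power falls off as the inverse square of the distance while the specific intensity recovered from the measurement stays constant.

import math

def collected_power(radiance, dA_source, dA_detector, r, d_nu, theta=0.0):
    """Power collected by a detector patch dA_detector at distance r from a
    source patch dA_source of given spectral radiance, over bandwidth d_nu.
    Solid angle subtended by the detector at the source: dA_detector / r**2."""
    d_omega = dA_detector / r**2
    return radiance * math.cos(theta) * dA_source * d_omega * d_nu

radiance = 1.0e-12   # W / (m^2 sr Hz), hypothetical value
dA_source, dA_detector, d_nu = 1.0e-6, 1.0e-4, 1.0e9

for r in (1.0, 2.0, 10.0):
    p = collected_power(radiance, dA_source, dA_detector, r, d_nu)
    # Invert the defining relation to recover the radiance from the measurement:
    recovered = p * r**2 / (dA_source * dA_detector * d_nu)
    print(f"r = {r:5.1f} m   power = {p:.3e} W   recovered radiance = {recovered:.3e}")

The collected power drops by a factor of 100 between r = 1 and r = 10, but the recovered radiance is identical; it is a property of the source alone, as stated above.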
Spectral radiance
[ "Engineering" ]
1,486
[ "Telecommunications engineering", "Radiometry" ]
24,079,760
https://en.wikipedia.org/wiki/Virtual%20state
In quantum physics, a virtual state is a very short-lived, unobservable quantum state. In many quantum processes a virtual state is an intermediate state, sometimes described as "imaginary", in a multi-step process that mediates otherwise forbidden transitions. Since virtual states are not eigenfunctions of any operator, normal parameters such as occupation, energy and lifetime need to be qualified. No measurement of a system will show one to be occupied, but virtual states still have lifetimes derived from uncertainty relations. While each virtual state has an associated energy, no direct measurement of its energy is possible; various approaches have nonetheless been used to make some measurements (see, for example, work on virtual state spectroscopy) or to extract other parameters using measurement techniques that depend upon the virtual state's lifetime. The concept is quite general and can be used to predict and describe experimental results in many areas, including Raman spectroscopy, non-linear optics generally, various types of photochemistry, and nuclear processes. See also Two-photon absorption Virtual particle Feshbach resonance Shape resonance References Quantum mechanics

Virtual state
[ "Physics" ]
218
[ "Theoretical physics", "Quantum mechanics" ]
24,081,827
https://en.wikipedia.org/wiki/Electromeric%20effect
In chemistry, the electromeric effect is a molecular polarization occurring by an intramolecular electron displacement, characterized by the substitution of one electron pair for another within the same atomic octet of electrons. It is sometimes called the conjugative mechanism and, previously, the tautomeric mechanism. The electromeric effect is often considered along with the inductive effect as a type of electron displacement. Although some authors describe it as an effect produced by the presence of a reagent such as an electrophile or a nucleophile, IUPAC does not define it as such. The term electromeric effect is no longer used in standard texts and is considered obsolete; the concepts implied by the terms electromeric effect and mesomeric effect are absorbed in the term resonance effect. The effect is conventionally represented using curved arrows, which symbolize the electron shift. Types of electromeric effects The effect can be classified into two types, the +E effect and the -E effect, according to the direction of electron pair transfer. When the attacking reagent is an electrophile, the +E effect is generally observed and the π-electrons are transferred to the positively charged atom. When the attacking reagent is a nucleophile, the -E effect is generally observed, and the π-electrons are transferred to the atom to which the attacking reagent will not bind. References Physical organic chemistry Chemical bonding
Electromeric effect
[ "Physics", "Chemistry", "Materials_science" ]
296
[ "nan", "Chemical bonding", "Condensed matter physics", "Physical organic chemistry" ]
24,082,423
https://en.wikipedia.org/wiki/Mass%E2%80%93luminosity%20relation
In astrophysics, the mass–luminosity relation is an equation giving the relationship between a star's mass and its luminosity, first noted by Jakob Karl Ernst Halm. The relationship is represented by the equation: where and are the luminosity and mass of the Sun and . The value is commonly used for main-sequence stars. This equation and the usual value of only applies to main-sequence stars with masses and does not apply to red giants or white dwarfs. As a star approaches the Eddington luminosity then . In summary, the relations for stars with different ranges of mass are, to a good approximation, as the following: For stars with masses less than 0.43M⊙, convection is the sole energy transport process, so the relation changes significantly. For stars with masses M > 55M⊙ the relationship flattens out and becomes L ∝ M but in fact those stars don't last because they are unstable and quickly lose matter by intense solar winds. It can be shown this change is due to an increase in radiation pressure in massive stars. These equations are determined empirically by determining the mass of stars in binary systems to which the distance is known via standard parallax measurements or other techniques. After enough stars are plotted, stars will form a line on a logarithmic plot and slope of the line gives the proper value of a. Another form, valid for K-type main-sequence stars, that avoids the discontinuity in the exponent has been given by Cuntz & Wang; it reads: with (M in M⊙). This relation is based on data by Mann and collaborators, who used moderate-resolution spectra of nearby late-K and M dwarfs with known parallaxes and interferometrically determined radii to refine their effective temperatures and luminosities. Those stars have also been used as a calibration sample for Kepler candidate objects. Besides avoiding the discontinuity in the exponent at M = 0.43M⊙, the relation also recovers a = 4.0 for M ≃ 0.85M⊙. The mass/luminosity relation is important because it can be used to find the distance to binary systems which are too far for normal parallax measurements, using a technique called "dynamical parallax". In this technique, the masses of the two stars in a binary system are estimated, usually in terms of the mass of the Sun. Then, using Kepler's laws of celestial mechanics, the distance between the stars is calculated. Once this distance is found, the distance away can be found via the arc subtended in the sky, giving a preliminary distance measurement. From this measurement and the apparent magnitudes of both stars, the luminosities can be found, and by using the mass–luminosity relationship, the masses of each star. These masses are used to re-calculate the separation distance, and the process is repeated. The process is iterated many times, and accuracies as high as 5% can be achieved. The mass/luminosity relationship can also be used to determine the lifetime of stars by noting that lifetime is approximately proportional to M/L although one finds that more massive stars have shorter lifetimes than that which the M/L relationship predicts. A more sophisticated calculation factors in a star's loss of mass over time. Derivation Deriving a theoretically exact mass/luminosity relation requires finding the energy generation equation and building a thermodynamic model of the inside of a star. However, the basic relation L ∝ M3 can be derived using some basic physics and simplifying assumptions. The first such derivation was performed by astrophysicist Arthur Eddington in 1924. 
The derivation showed that stars can be approximately modelled as ideal gases, which was a new, somewhat radical idea at the time. What follows is a somewhat more modern approach based on the same principles. An important factor controlling the luminosity of a star (energy emitted per unit time) is the rate of energy dissipation through its bulk. Where there is no heat convection, this dissipation happens mainly by photons diffusing. By integrating Fick's first law over the surface of some radius r in the radiation zone (where there is negligible convection), we get the total outgoing energy flux which is equal to the luminosity by conservation of energy: where D is the photons diffusion coefficient, and u is the energy density. Note that this assumes that the star is not fully convective, and that all heat creating processes (nucleosynthesis) happen in the core, below the radiation zone. These two assumptions are not correct in red giants, which do not obey the usual mass-luminosity relation. Stars of low mass are also fully convective, hence do not obey the law. Approximating the star by a black body, the energy density is related to the temperature by the Stefan–Boltzmann law: where is the Stefan–Boltzmann constant, c is the speed of light, kB is Boltzmann constant and is the reduced Planck constant. As in the theory of diffusion coefficient in gases, the diffusion coefficient D approximately satisfies: where λ is the photon mean free path. Since matter is fully ionized in the star core (as well as where the temperature is of the same order of magnitude as inside the core), photons collide mainly with electrons, and so λ satisfies Here is the electron density and: is the cross section for electron-photon scattering, equal to Thomson cross-section. α is the fine-structure constant and me the electron mass. The average stellar electron density is related to the star mass M and radius R Finally, by the virial theorem, the total kinetic energy is equal to half the gravitational potential energy EG, so if the average nuclei mass is mn, then the average kinetic energy per nucleus satisfies: where the temperature T is averaged over the star and C is a factor of order one related to the stellar structure and can be estimated from the star approximate polytropic index. Note that this does not hold for large enough stars, where the radiation pressure is larger than the gas pressure in the radiation zone, hence the relation between temperature, mass and radius is different, as elaborated below. Wrapping up everything, we also take r to be equal to R up to a factor, and ne at r is replaced by its stellar average up to a factor. The combined factor is approximately 1/15 for the sun, and we get: The added factor is actually dependent on M, therefore the law has an approximate dependence. Distinguishing between small and large stellar masses One may distinguish between the cases of small and large stellar masses by deriving the above results using radiation pressure. In this case, it is easier to use the optical opacity and to consider the internal temperature TI directly; more precisely, one can consider the average temperature in the radiation zone. The consideration begins by noting the relation between the radiation pressure Prad and luminosity. The gradient of radiation pressure is equal to the momentum transfer absorbed from the radiation, giving: where c is the velocity of light. Here, ; the photon mean free path. 
The radiation pressure is related to the temperature by , therefore from which it follows directly that In the radiation zone gravity is balanced by the pressure on the gas coming from both itself (approximated by ideal gas pressure) and from the radiation. For a small enough stellar mass the latter is negligible and one arrives at as before. More precisely, since integration was done from 0 to R so on the left side, but the surface temperature TE can be neglected with respect to the internal temperature TI. From this it follows directly that For a large enough stellar mass, the radiation pressure is larger than the gas pressure in the radiation zone. Plugging in the radiation pressure, instead of the ideal gas pressure used above, yields hence Core and surface temperatures To the first approximation, stars are black body radiators with a surface area of . Thus, from the Stefan–Boltzmann law, the luminosity is related to the surface temperature TS, and through it to the color of the star, by where σB is Stefan–Boltzmann constant, The luminosity is equal to the total energy produced by the star per unit time. Since this energy is produced by nucleosynthesis, usually in the star core (this is not true for red giants), the core temperature is related to the luminosity by the nucleosynthesis rate per unit volume: Here, ε is the total energy emitted in the chain reaction or reaction cycle. is the Gamow peak energy, dependent on EG, the Gamow factor. Additionally, S(E)/E is the reaction cross section, n is number density, is the reduced mass for the particle collision, and A,B are the two species participating in the limiting reaction (e.g. both stand for a proton in the proton-proton chain reaction, or A a proton and B an nucleus for the CNO cycle). Since the radius R is itself a function of the temperature and the mass, one may solve this equation to get the core temperature. References Stellar astronomy Stellar evolution
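The article's own piecewise equations did not survive extraction, so as an illustrative aside the sketch below uses one commonly quoted set of approximate main-sequence forms (the coefficients 0.23, 1.4 and 32000 and the break points at 0.43, 2 and 55 solar masses are assumptions here, not taken from this text) to show how luminosity, and the rough lifetime estimate proportional to M/L mentioned above, scale with mass.

def luminosity(m):
    """Approximate main-sequence luminosity in solar units for a star of mass m
    (in solar masses), using a commonly quoted piecewise mass-luminosity relation."""
    if m < 0.43:
        return 0.23 * m**2.3      # fully convective low-mass stars
    elif m < 2.0:
        return m**4.0
    elif m < 55.0:
        return 1.4 * m**3.5
    else:
        return 32000.0 * m        # relation flattens for very massive stars

for m in (0.2, 1.0, 5.0, 20.0, 60.0):
    L = luminosity(m)
    lifetime = m / L              # in units of the Sun's nuclear timescale
    print(f"M = {m:5.1f} Msun   L = {L:10.1f} Lsun   t/tsun ~ {lifetime:.2e}")

The output illustrates the point made above: more massive stars are disproportionately more luminous and therefore much shorter-lived.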
Mass–luminosity relation
[ "Physics", "Astronomy" ]
1,894
[ "Stellar astronomy", "Astronomical sub-disciplines", "Astrophysics", "Stellar evolution" ]
24,084,383
https://en.wikipedia.org/wiki/Hemipolyhedron
In geometry, a hemipolyhedron is a uniform star polyhedron some of whose faces pass through its center. These "hemi" faces lie parallel to the faces of some other symmetrical polyhedron, and their count is half the number of faces of that other polyhedron – hence the "hemi" prefix. The prefix "hemi" is also used to refer to certain projective polyhedra, such as the hemi-cube, which are the image of a 2 to 1 map of a spherical polyhedron with central symmetry. Wythoff symbol and vertex figure Their Wythoff symbols are of the form p/(p − q) p/q | r; their vertex figures are crossed quadrilaterals. They are thus related to the cantellated polyhedra, which have similar Wythoff symbols. The vertex configuration is p/q.2r.p/(p − q).2r. The 2r-gon faces pass through the center of the model: if represented as faces of spherical polyhedra, they cover an entire hemisphere and their edges and vertices lie along a great circle. The p/(p − q) notation implies a {p/q} face turning backwards around the vertex figure. The nine forms, listed with their Wythoff symbols and vertex configurations are: Note that Wythoff's kaleidoscopic construction generates the nonorientable hemipolyhedra (all except the octahemioctahedron) as double covers (two coincident hemipolyhedra). In the Euclidean plane, the sequence of hemipolyhedra continues with the following four star tilings, where apeirogons appear as the aforementioned equatorial polygons: Of these four tilings, only 6/5 6 ∞ is generated as a double cover by Wythoff's construction. Orientability Only the octahemioctahedron represents an orientable surface; the remaining hemipolyhedra have non-orientable or single-sided surfaces. This is because proceeding around an equatorial 2r-gon, the p/q-gonal faces alternately point "up" and "down", so any two consecutive ones have opposite senses. This is equivalent to demanding that the p/q-gons in the corresponding quasiregular polyhedra below can be alternatively given positive and negative orientations. But that is only possible for the triangles of the cuboctahedron (corresponding to the triangles of the octahedron, the only regular polyhedron with an even number of faces meeting at a vertex), which are precisely the non-hemi faces of the octahemioctahedron. Duals of the hemipolyhedra Since the hemipolyhedra have faces passing through the center, the dual figures have corresponding vertices at infinity; properly, on the real projective plane at infinity. In Magnus Wenninger's Dual Models, they are represented with intersecting prisms, each extending in both directions to the same vertex at infinity, in order to maintain symmetry. In practice the model prisms are cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation figures, called stellation to infinity. However, he also suggested that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions. There are 9 such duals, sharing only 5 distinct outward forms, four of them existing in outwardly identical pairs. The members of a given visually identical pair differ in their arrangements of true and false vertices (a false vertex is where two edges cross each other but do not join). The outward forms are: Relationship with the quasiregular polyhedra The hemipolyhedra occur in pairs as facetings of the quasiregular polyhedra with four faces at a vertex. 
These quasiregular polyhedra have vertex configuration m.n.m.n and their edges, in addition to forming the m- and n-gonal faces, also form hemi-faces of the hemipolyhedra. Thus, the hemipolyhedra can be derived from the quasiregular polyhedra by discarding either the m-gons or n-gons (to maintain two faces at an edge) and then inserting the hemi faces. Since either m-gons or n-gons may be discarded, either of two hemipolyhedra may be derived from each quasiregular polyhedron, except for the octahedron as a tetratetrahedron, where m = n = 3 and the two facetings are congruent. (This construction does not work for the quasiregular polyhedra with six faces at a vertex, also known as the ditrigonal polyhedra, as their edges do not form any regular hemi-faces.) Since the hemipolyhedra, like the quasiregular polyhedra, also have two types of faces alternating around each vertex, they are sometimes also considered to be quasiregular. Here m and n correspond to p/q above, and h corresponds to 2r above. References (Wenninger models: 67, 68, 78, 89, 91, 100, 102, 106, 107) Har'El, Z. Uniform Solution for Uniform Polyhedra., Geometriae Dedicata 47, 57-110, 1993. Zvi Har’El (Page 10, 5.2. Hemi polyhedra p p'|r.) External links Stella Polyhedral Glossary Versi-Regular Polyhedra in Visual Polyhedra Uniform polyhedra
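As a small consistency check of the orientability discussion above (the vertex, edge and face counts below are standard published values rather than values taken from this text, so treat them as assumptions), one can compute Euler characteristics: the octahemioctahedron has χ = 0, consistent with an orientable surface of genus 1, while the tetrahemihexahedron has χ = 1, which is only possible for a non-orientable surface.

# (name, vertices, edges, faces) -- counts quoted from standard tables
hemipolyhedra = [
    ("tetrahemihexahedron", 6, 12, 7),    # 4 triangles + 3 square hemi faces
    ("octahemioctahedron", 12, 24, 12),   # 8 triangles + 4 hexagonal hemi faces
    ("cubohemioctahedron", 12, 24, 10),   # 6 squares + 4 hexagonal hemi faces
]

for name, v, e, f in hemipolyhedra:
    chi = v - e + f
    print(f"{name:22s} V - E + F = {chi:3d}")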
Hemipolyhedron
[ "Physics" ]
1,196
[ "Uniform polytopes", "Uniform polyhedra", "Symmetry" ]
24,085,985
https://en.wikipedia.org/wiki/XZ%20Tauri
XZ Tauri is a binary system approximately away in the constellation Taurus. The system consists of two T Tauri stars orbiting each other about 6 billion kilometers apart (roughly the same distance as Pluto is from the Sun). The system made news in 2000 when a superflare was observed in the system. A third star, component C, has been observed at a separation of , but subsequent observations failed to find it. The T Tauri star HL Tauri, away, is also sometimes listed as a companion. Gallery Notes References External links XZ Tauri at Constellation Guide Binary stars T Tauri stars Taurus (constellation) Tauri, XZ
XZ Tauri
[ "Astronomy" ]
136
[ "Taurus (constellation)", "Constellations" ]
24,086,376
https://en.wikipedia.org/wiki/Sirtuin-activating%20compound
Sirtuin-activating compounds (STACs) are chemical compounds having an effect on sirtuins, a group of enzymes that use NAD+ to remove acetyl groups from proteins. They are caloric restriction mimetic compounds that may be helpful in treating various aging-related diseases. Context Leonard P. Guarente is recognized as the leading proponent of the hypothesis that caloric restriction slows aging by activation of sirtuins. STACs were discovered by Konrad Howitz of Biomol Inc and biologist David Sinclair. In September 2003, Howitz, Sinclair and colleagues published a highly cited paper reporting that polyphenols such as resveratrol activate human SIRT1 and extend the lifespan of budding yeast (Howitz et al., Nature, 2003). Other examples of such compounds are butein, piceatannol, isoliquiritigenin, fisetin, and quercetin. Sirtuins depend on the crucial cellular molecule nicotinamide adenine dinucleotide (NAD+) for their function. Falling NAD+ levels during aging may adversely affect sirtuin maintenance of DNA integrity and the ability to combat oxidative stress-induced cell damage. Increasing cellular NAD+ levels with supplements such as nicotinamide mononucleotide (NMN) during aging may slow or reverse certain aging processes by enhancing sirtuin function. Some STACs can cause artificial effects in the assay initially used for their identification, but it has been shown that STACs also activate SIRT1 against regular polypeptide substrates, with an influence of the substrate sequence. Sirtris Pharmaceuticals, Sinclair's company, was purchased by GlaxoSmithKline (GSK) in 2008 and subsequently shut down as a separate entity within GSK. See also Gerontology SRT1460 SRT1720 References External links http://pubs.acs.org/cen/coverstory/8234/8234aging.html Sirtris Pharmaceuticals' US patent for STAC Patent title: Novel Sirtuin Activating Compounds and Methods for Making the Same Anti-aging substances Biogerontology
Sirtuin-activating compound
[ "Chemistry", "Biology" ]
450
[ "Senescence", "Anti-aging substances" ]
24,087,239
https://en.wikipedia.org/wiki/Noncommutative%20unique%20factorization%20domain
In mathematics, a noncommutative unique factorization domain is a noncommutative ring with the unique factorization property. Examples The ring of Hurwitz quaternions, also known as integral quaternions. A quaternion a = a0 + a1i + a2j + a3k is integral if either all the coefficients ai are integers or all of them are half-integers. All free associative algebras. References P.M. Cohn, "Noncommutative unique factorization domains", Transactions of the American Mathematical Society 109(2):313–331 (1963). R. Sivaramakrishnan, Certain number-theoretic episodes in algebra, CRC Press, 2006. Notes Ring theory Number theory
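A minimal sketch of the Hurwitz integrality condition quoted above (illustrative only, not part of the original article): a quaternion is a Hurwitz integer when its four coefficients are either all integers or all halves of odd integers.

from fractions import Fraction

def is_hurwitz_integer(a0, a1, a2, a3):
    """True if a0 + a1*i + a2*j + a3*k is a Hurwitz quaternion: all four
    coefficients are integers, or all four are halves of odd integers."""
    coeffs = [Fraction(c) for c in (a0, a1, a2, a3)]
    all_integers = all(c.denominator == 1 for c in coeffs)
    all_half_odd = all(c.denominator == 2 for c in coeffs)
    return all_integers or all_half_odd

print(is_hurwitz_integer(1, 2, -3, 0))                      # True: all integers
print(is_hurwitz_integer(Fraction(1, 2), Fraction(1, 2),
                         Fraction(-3, 2), Fraction(5, 2)))  # True: all half-odd
print(is_hurwitz_integer(1, Fraction(1, 2), 0, 0))          # False: mixed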
Noncommutative unique factorization domain
[ "Mathematics" ]
163
[ "Discrete mathematics", "Number theory stubs", "Ring theory", "Fields of abstract algebra", "Number theory" ]
8,472,042
https://en.wikipedia.org/wiki/Flux%20method
The flux method is a crystal growth method where starting materials are dissolved in a solvent (flux) and are precipitated out to form crystals of a desired compound. The flux lowers the melting point of the desired compound, analogous to a wet chemistry recrystallization. The flux is melted in a highly stable crucible that does not react with the flux. Metal crucibles, such as platinum, titanium, and niobium, are used for the growth of oxide crystals. Ceramic crucibles, such as alumina, zirconia, and boron nitride, are used for the growth of metallic crystals. For air-sensitive growths, the contents are sealed in ampoules or placed in atmosphere-controlled furnaces. Choice of flux Oxide fluxes are often combined to reduce volatility, viscosity, and reactivity towards the crucibles. Metallic fluxes are not typically combined, as they do not suffer from the same volatility, viscosity, and reactivity issues. An ideal flux should have the following properties: Good solubility for the desired compound at growth temperatures. Low melting point. Large gap between melting and boiling point. Easily removed from crystals. Unreactive with the crucible and starting materials at growth temperatures. Furnace procedure The growth (starting materials, flux, and crucible) is heated to form a complete liquid solution. The growth is cooled to a temperature where the solution is fully saturated. Further cooling causes crystals to precipitate from the solution, lowering the concentration of starting materials in solution and lowering the temperature at which the solution is fully saturated. The process is repeated, decreasing the temperature and precipitating more crystals. The process is then stopped at a desired temperature, and the growth is removed from the furnace. Practically, the flux method is done by placing the growth into a programmable furnace: Ramp - The furnace is heated from an initial temperature to a maximum temperature, where the growth forms a complete liquid solution. Dwell - The furnace is maintained at the maximum temperature to homogenize the solution. Cool - The furnace is cooled to a desired temperature over a specified rate or time. Removal - The growth is removed from the furnace. The growth can be quenched, centrifuged, or simply removed if already at room temperature. Additional steps may be added to this basic temperature profile, such as additional dwells or different cooling rates over different parts of the cool. Crystallization can occur through spontaneous nucleation, encouragement with a seed, or through mechanical stress. Flux separation After crystallization, some solidified flux often remains on the surface of or inside the desired crystal. This flux may cause defects in the crystal due to the different thermal expansivities of the flux and crystal. A solvent (typically an acid or a base) can dissolve the flux, but it is difficult to find a solvent that does not also dissolve the crystal. The flux can be removed mechanically using a blade or drill. If the crystal and flux have significantly different boiling points, the flux may be removed by evaporation. Flux can also be removed through recrystallization, using a seed in the liquid phase and leaving the flux behind as the crystals accumulate. The removal of excess flux is important for assessing a crystal's properties, as the flux can affect measurements. For example, tin and lead superconduct at low temperatures, so if a sample carries residual tin or lead flux, superconductivity can be observed even if the desired crystal is not a superconductor.
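As an illustration of the furnace procedure described above (the temperatures, durations and segment boundaries below are hypothetical, not taken from any real growth recipe), a temperature profile can be represented as a list of ramp, dwell and cool segments and evaluated at any time:

# Hypothetical profile: ramp to 1100 C in 6 h, dwell 12 h, slow-cool to 800 C
# over 100 h, then remove the growth (e.g. centrifuge off the remaining flux).
segments = [
    ("ramp",  6.0, 25.0, 1100.0),
    ("dwell", 12.0, 1100.0, 1100.0),
    ("cool",  100.0, 1100.0, 800.0),
]

def temperature(t_hours):
    """Linearly interpolate the furnace temperature at time t_hours."""
    elapsed = 0.0
    for _, duration, t_start, t_end in segments:
        if t_hours <= elapsed + duration:
            frac = (t_hours - elapsed) / duration
            return t_start + frac * (t_end - t_start)
        elapsed += duration
    return segments[-1][3]  # profile finished; hold at the final temperature

for t in (0, 3, 6, 12, 68, 118, 130):
    print(f"t = {t:5.1f} h   T = {temperature(t):7.1f} C")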
See also Chemical vapor deposition Crystal growth Crystallography Czochralski process Epitaxy Hydrothermal synthesis Micro-pulling-down Verneuil process External links Flux Method for Preparing Crystals Growth of single crystals from metallic fluxes Flux Technique References Crystallography Methods of crystal growth
Flux method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
796
[ "Crystallography", "Methods of crystal growth", "Condensed matter physics", "Materials science" ]
296,775
https://en.wikipedia.org/wiki/Contract%20theory
From a legal point of view, a contract is an institutional arrangement for the way in which resources flow, which defines the various relationships between the parties to a transaction or limits the rights and obligations of the parties. From an economic perspective, contract theory studies how economic actors can and do construct contractual arrangements, generally in the presence of information asymmetry. Because of its connections with both agency and incentives, contract theory is often categorized within a field known as law and economics. One prominent application of it is the design of optimal schemes of managerial compensation. In the field of economics, the first formal treatment of this topic was given by Kenneth Arrow in the 1960s. In 2016, Oliver Hart and Bengt R. Holmström both received the Nobel Memorial Prize in Economic Sciences for their work on contract theory, covering many topics from CEO pay to privatizations. Holmström focused more on the connection between incentives and risk, while Hart on the unpredictability of the future that creates holes in contracts. A standard practice in the microeconomics of contract theory is to represent the behaviour of a decision maker under certain numerical utility structures, and then apply an optimization algorithm to identify optimal decisions. Such a procedure has been used in the contract theory framework to several typical situations, labeled moral hazard, adverse selection and signalling. The spirit of these models lies in finding theoretical ways to motivate agents to take appropriate actions, even under an insurance contract. The main results achieved through this family of models involve: mathematical properties of the utility structure of the principal and the agent, relaxation of assumptions, and variations of the time structure of the contract relationship, among others. It is customary to model people as maximizers of some von Neumann–Morgenstern utility functions, as stated by expected utility theory. Development and origin Contract theory in economics began with 1991 Nobel Laureate Ronald H. Coase's 1937 article "The Nature of the Firm". Coase notes that "the longer the duration of a contract regarding the supply of goods or services due to the difficulty of forecasting, then the less likely and less appropriate it is for the buyer to specify what the other party should do." That suggests two points, the first is that Coase already understands transactional behaviour in terms of contracts, and the second is that Coase implies that if contracts are less complete then firms are more likely to substitute for markets. The contract theory has since evolved in two directions. One is the complete contract theory and the other is the incomplete contract theory. Complete contract theory Complete contract theory states that there is no essential difference between a firm and a market; they are both contracts. Principals and agents are able to foresee all future scenarios and develop optimal risk sharing and revenue transfer mechanisms to achieve sub-optimal efficiency under constraints. It is equivalent to principal-agent theory. Armen Albert Alchian and Harold Demsetz disagree with Coase's view that the nature of the firm is a substitute for the market, but argue that both the firm and the market are contracts and that there is no fundamental difference between the two. 
They believe that the essence of the firm is a team production, and that the central issue in team production is the measurement of agent effort, namely the moral hazard of single agents and multiple agents. Michael C. Jensen and William Meckling believe that the nature of a business is a contractual relationship. They defined a business as an organisation. Such an organisation, like the majority of other organisations, as a legal fiction whose function is to act as a connecting point for a set of contractual relationships between individuals. James Mirrlees and Bengt Holmström et al. developed a basic framework for single-agent and multi-agent moral hazard models in a principal-agent framework with the help of the favourable labour tool of game theory. Eugene F. Fama et al. extend static contract theory to dynamic contract theory, thus introducing the issue of principal commitment and the agent's reputation effect into long-term contracts. Eric Brousseau and Jean-Michel Glachant believe that contract theory should include incentive theory,incomplete contract theory and the new institutional transaction costs theory. Main models of agency problems Moral hazard The moral hazard problem refers to the extent to which an employee's behaviour is concealed from the employer: whether they work, how hard they work and how carefully they do so. In moral hazard models, the information asymmetry is the principal's inability to observe and/or verify the agent's action. Performance-based contracts that depend on observable and verifiable output can often be employed to create incentives for the agent to act in the principal's interest. When agents are risk-averse, however, such contracts are generally only second-best because incentivization precludes full insurance. The typical moral hazard model is formulated as follows. The principal solves: subject to the agent's "individual rationality (IR)" constraint, and the agent's "incentive compatibility (IC)" constraint, , where is the wage for the agent as a function of output , which in turn is a function of effort:. represents the cost of effort, and reservation utility is given by . is the "utility function", which is concave for the risk-averse agent, is convex for the risk-prone agent, and is linear for the risk-neutral agent. If the agent is risk-neutral and there are no bounds on transfer payments, the fact that the agent's effort is unobservable (i.e., it is a "hidden action") does not pose a problem. In this case, the same outcome can be achieved that would be attained with verifiable effort: The agent chooses the so-called "first-best" effort level that maximizes the expected total surplus of the two parties. Specifically, the principal can give the realized output to the agent, but let the agent make a fixed up-front payment. The agent is then a "residual claimant" and will maximize the expected total surplus minus the fixed payment. Hence, the first-best effort level maximizes the agent's payoff, and the fixed payment can be chosen such that in equilibrium the agent's expected payoff equals his or her reservation utility (which is what the agent would get if no contract was written). Yet, if the agent is risk-averse, there is a trade-off between incentives and insurance. Moreover, if the agent is risk-neutral but wealth-constrained, the agent cannot make the fixed up-front payment to the principal, so the principal must leave a "limited liability rent" to the agent (i.e., the agent earns more than his or her reservation utility). 
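A minimal numerical sketch of the risk-neutral, wealth-constrained case just described (all payoff numbers are invented for illustration): with two effort levels and two output levels, the cheapest incentive-compatible contract pays nothing after low output and a bonus of cost/(p_high - p_low) after high output, so the agent keeps a limited-liability rent.

# Binary-effort, binary-output moral hazard with a risk-neutral agent who
# cannot be paid a negative wage (limited liability). Numbers are hypothetical.
y_high, y_low = 100.0, 20.0   # output levels
p_high, p_low = 0.8, 0.4      # Prob(high output | high effort / low effort)
cost = 10.0                   # agent's cost of high effort; low effort costs 0

# Cheapest wage pair (w_low, w_high) that makes high effort incentive-compatible:
w_low = 0.0
w_high = cost / (p_high - p_low)

agent_rent = p_high * w_high + (1 - p_high) * w_low - cost
principal_profit = p_high * (y_high - w_high) + (1 - p_high) * (y_low - w_low)

# Check the incentive-compatibility constraint explicitly:
u_high_effort = p_high * w_high + (1 - p_high) * w_low - cost
u_low_effort = p_low * w_high + (1 - p_low) * w_low
assert u_high_effort >= u_low_effort

print(f"bonus wage w_high = {w_high:.2f}, limited-liability rent = {agent_rent:.2f}")
print(f"principal's expected profit = {principal_profit:.2f}")

With these numbers the bonus is 25, the agent's rent is 10, and the principal's expected profit is 64; the rent is exactly the amount the principal must give up because the fixed up-front payment of the verifiable-effort benchmark is unavailable.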
The moral hazard model with risk aversion was pioneered by Steven Shavell, Sanford J. Grossman, Oliver D. Hart, and others in the 1970s and 1980s. It has been extended to the case of repeated moral hazard by William P. Rogerson and to the case of multiple tasks by Bengt Holmström and Paul Milgrom. The moral hazard model with risk-neutral but wealth-constrained agents has also been extended to settings with repeated interaction and multiple tasks. While it is difficult to test models with hidden action empirically (since there is no field data on unobservable variables), the premise of contract theory that incentives matter has been successfully tested in the field. Moreover, contract-theoretic models with hidden actions have been directly tested in laboratory experiments. Example of possible solution to moral hazard A study on the solution to moral hazard concludes that adding moral sensitivity to the principal–agent model increases its descriptiveness, prescriptiveness, and pedagogical usefulness because it induces employees to work at the appropriate effort for which they receive a wage. The theory suggests that as employee work efforts increase, so proportional premium wage should increases also to encourage productivity. Adverse selection In adverse selection models, the principal is not informed about a certain characteristic of the agent at the time the contract is written. The characteristic is called the agent's "type". For example, health insurance is more likely to be purchased by people who are more likely to get sick. In this case, the agent's type is his or her health status, which is privately known by the agent. Another prominent example is public procurement contracting: The government agency (the principal) does not know the private firm's cost. In this case, the private firm is the agent and the agent's type is the cost level. In adverse selection models, there is typically too little trade (i.e., there is a so-called "downward distortion" of the trade level compared to a "first-best" benchmark situation with complete information), except when the agent is of the best possible type (which is known as the "no distortion at the top" property). The principal offers a menu of contracts to the agent; the menu is called "incentive-compatible" if the agent picks the contract that was designed for his or her type. In order to make the agent reveal the true type, the principal has to leave an information rent to the agent (i.e., the agent earns more than his or her reservation utility, which is what the agent would get if no contract was written). Adverse selection theory has been pioneered by Roger Myerson, Eric Maskin, and others in the 1980s. More recently, adverse selection theory has been tested in laboratory experiments and in the field. Adverse selection theory has been expanded in several directions, e.g. by endogenizing the information structure (so the agent can decide whether or not to gather private information) and by taking into consideration social preferences and bounded rationality. Signalling In signalling models, one party chooses how and whether or not to present information about itself to another party to reduce the information asymmetry between them. In signaling models, the signaling party agent and the receiving party principal have access to different information. The challenge for the receiving party is to decipher the credibility of the signaling party so as to assess their capabilities. 
The formulation of this theory began in 1973 by Michael Spence through his job-market signaling model. In his model, job applicants are tasked with signalling their skills and capabilities to employers to reduce the probabilities for the employer to choose a lesser qualified applicant over a qualified applicant. This is because potential employers lack the knowledge to discern the skills and capabilities of potential employees. Incomplete contracts Contract theory also utilizes the notion of a complete contract, which is thought of as a contract that specifies the legal consequences of every possible state of the world. More recent developments known as the theory of incomplete contracts, pioneered by Oliver Hart and his coauthors, study the incentive effects of parties' inability to write complete contingent contracts. In fact, it may be the case that the parties to a transaction are unable to write a complete contract at the contract stage because it is either difficult to reach an agreement to get it done or it is too expensive to do so, e.g. concerning relationship-specific investments. A leading application of the incomplete contracting paradigm is the Grossman-Hart-Moore property rights approach to the theory of the firm (see Hart, 1995). Because it would be impossibly complex and costly for the parties to an agreement to make their contract complete, the law provides default rules which fill in the gaps in the actual agreement of the parties. During the last 20 years, much effort has gone into the analysis of dynamic contracts. Important early contributors to this literature include, among others, Edward J. Green, Stephen Spear, and Sanjay Srivastava. Expected utility theory Much of contract theory can be explained through expected utility theory. This theory indicates that individuals will measure their choices based on the risks and benefits associated with a decision. A study analyzed that agents' anticipatory feelings are affected by uncertainty. Hence why principals need to form contracts with agents in the presence of information asymmetry to more clearly understand each party's motives and benefits. Examples of contract theory George Akerlof described adverse selection in the market for used cars. In certain models, such as Michael Spence's job-market model, the agent can signal his type to the principal which may help to resolve the problem. Leland and Pyle's (1977) IPO theory for agents (companies) to reduce adverse selection in the market by always sending clear signals before going public. Incentive Design In the contract theory, the goal is to motivate employees by giving them rewards. Trading on service level/quality, results, performance or goals. It can be seen that reward determines whether the incentive mechanism can fully motivate employees. In view of the large number of contract theoretical models, the design of compensation under different contract conditions is different. Rewards on Absolute Performance and Relative Performance Source: Absolute performance-related reward: The reward is in direct proportion to the absolute performance of employees. Relative performance-related reward: The rewards are arranged according to the performance of the employees, from the highest to the lowest. Absolute performance-related reward is an incentive mechanism widely recognized in economics in the real society, because it provides employees with the basic option of necessary and effective incentives. But, absolute performance-related rewards have two drawbacks. 
There will be people who cheat. Vulnerable to recessions or sudden growth. Design contracts for multiple employees Absolute performance-related compensation is a popular way for employers to design contracts for more than one employee at a time, and one of the most widely accepted methods in practical economics. There are also other forms of absolute rewards linked to employees' performance, for example dividing employees into groups and rewarding the whole group based on the overall performance of each group. One drawback of this method is that some people will free-ride while others are working hard, yet still be rewarded together with the rest of the group. An alternative is to structure the reward mechanism as a competition, in which higher rewards are obtained through better relative performance. Information elicitation A particular kind of principal-agent problem arises when the agent can compute the value of an item that belongs to the principal (e.g. an assessor can compute the value of the principal's car), and the principal wants to incentivize the agent to compute and report the true value. See also Agency cost Allocative efficiency Efficient contract theory Clawback Complete contract Contract Contract awarding Default rule First-order approach Incomplete contracts Mechanism design New institutional economics Perverse incentive References External links Bolton, Patrick and Dewatripont, Mathias, 2005. Contract Theory. MIT Press. Laffont, Jean-Jacques, and David Martimort, 2002. The Theory of Incentives: The Principal-Agent Model. Princeton University Press. Martimort, David, 2008. "contract theory," The New Palgrave Dictionary of Economics, 2nd Edition. Salanié, Bernard, 1997. The Economics of Contracts: A Primer. MIT Press (2nd ed., 2005). Asymmetric information Game theory Law and economics Mathematical economics Microeconomic theories
Contract theory
[ "Physics", "Mathematics" ]
3,110
[ "Asymmetric information", "Applied mathematics", "Game theory", "Asymmetry", "Mathematical economics", "Symmetry" ]
296,942
https://en.wikipedia.org/wiki/Double%20counting%20%28proof%20technique%29
In combinatorics, double counting, also called counting in two ways, is a combinatorial proof technique for showing that two expressions are equal by demonstrating that they are two ways of counting the size of one set. In this technique, which call "one of the most important tools in combinatorics", one describes a finite set from two perspectives leading to two distinct expressions for the size of the set. Since both expressions equal the size of the same set, they equal each other. Examples Multiplication (of natural numbers) commutes This is a simple example of double counting, often used when teaching multiplication to young children. In this context, multiplication of natural numbers is introduced as repeated addition, and is then shown to be commutative by counting, in two different ways, a number of items arranged in a rectangular grid. Suppose the grid has rows and columns. We first count the items by summing rows of items each, then a second time by summing columns of items each, thus showing that, for these particular values of and , . Forming committees One example of the double counting method counts the number of ways in which a committee can be formed from people, allowing any number of the people (even zero of them) to be part of the committee. That is, one counts the number of subsets that an -element set may have. One method for forming a committee is to ask each person to choose whether or not to join it. Each person has two choices – yes or no – and these choices are independent of those of the other people. Therefore there are possibilities. Alternatively, one may observe that the size of the committee must be some number between 0 and . For each possible size , the number of ways in which a committee of people can be formed from people is the binomial coefficient Therefore the total number of possible committees is the sum of binomial coefficients over . Equating the two expressions gives the identity a special case of the binomial theorem. A similar double counting method can be used to prove the more general identity Handshaking lemma Another theorem that is commonly proven with a double counting argument states that every undirected graph contains an even number of vertices of odd degree. That is, the number of vertices that have an odd number of incident edges must be even. In more colloquial terms, in a party of people some of whom shake hands, an even number of people must have shaken an odd number of other people's hands; for this reason, the result is known as the handshaking lemma. To prove this by double counting, let be the degree of vertex . The number of vertex-edge incidences in the graph may be counted in two different ways: by summing the degrees of the vertices, or by counting two incidences for every edge. Therefore where is the number of edges. The sum of the degrees of the vertices is therefore an even number, which could not happen if an odd number of the vertices had odd degree. This fact, with this proof, appears in the 1736 paper of Leonhard Euler on the Seven Bridges of Königsberg that first began the study of graph theory. Counting trees What is the number of different trees that can be formed from a set of distinct vertices? Cayley's formula gives the answer . list four proofs of this fact; they write of the fourth, a double counting proof due to Jim Pitman, that it is "the most beautiful of them all." 
Pitman's proof counts in two different ways the number of different sequences of directed edges that can be added to an empty graph on vertices to form from it a rooted tree. The directed edges point away from the root. One way to form such a sequence is to start with one of the possible unrooted trees, choose one of its vertices as root, and choose one of the possible sequences in which to add its (directed) edges. Therefore, the total number of sequences that can be formed in this way is . Another way to count these edge sequences is to consider adding the edges one by one to an empty graph, and to count the number of choices available at each step. If one has added a collection of edges already, so that the graph formed by these edges is a rooted forest with trees, there are choices for the next edge to add: its starting vertex can be any one of the vertices of the graph, and its ending vertex can be any one of the roots other than the root of the tree containing the starting vertex. Therefore, if one multiplies together the number of choices from the first step, the second step, etc., the total number of choices is Equating these two formulas for the number of edge sequences results in Cayley's formula: and As Aigner and Ziegler describe, the formula and the proof can be generalized to count the number of rooted forests with trees, for any See also Additional examples Vandermonde's identity, another identity on sums of binomial coefficients that can be proven by double counting. Square pyramidal number. The equality between the sum of the first square numbers and a cubic polynomial can be shown by double counting the triples of numbers , , and where is larger than either of the other two numbers. Lubell–Yamamoto–Meshalkin inequality. Lubell's proof of this result on set families is a double counting argument on permutations, used to prove an inequality rather than an equality. Erdős–Ko–Rado theorem, an upper bound on intersecting families of sets, proven by Gyula O. H. Katona using a double counting inequality. Proofs of Fermat's little theorem. A divisibility proof by double counting: for any prime and natural number , there are length- words over an -symbol alphabet having two or more distinct symbols. These may be grouped into sets of words that can be transformed into each other by circular shifts; these sets are called necklaces. Therefore, (number of necklaces) and is divisible by . Proofs of quadratic reciprocity. A proof by Eisenstein derives another important number-theoretic fact by double counting lattice points in a triangle. Related topics Bijective proof. Where double counting involves counting one set in two ways, bijective proofs involve counting two sets in one way, by showing that their elements correspond one-for-one. The inclusion–exclusion principle, a formula for the size of a union of sets that may, together with another formula for the same union, be used as part of a double counting argument. Notes References . Double counting is described as a general principle on page 126; Pitman's double counting proof of Cayley's formula is on pp. 145–146; Katona's double counting inequality for the Erdős–Ko–Rado theorem is pp. 214–215. . Reprinted and translated in . . . . Enumerative combinatorics Articles containing proofs Mathematical proofs
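The identities proved above are easy to spot-check by brute force for small n; the short script below (illustrative only, not part of the original article) verifies the committee identity, the handshaking lemma on an arbitrary small graph, and Cayley's formula for labelled trees.

from itertools import combinations
from math import comb

# Committee identity: sum over k of C(n, k) equals 2**n.
n = 10
assert sum(comb(n, k) for k in range(n + 1)) == 2**n

# Handshaking lemma: the number of odd-degree vertices is even.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # an arbitrary small graph
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
assert sum(degree.values()) == 2 * len(edges)
assert sum(1 for d in degree.values() if d % 2 == 1) % 2 == 0

# Cayley's formula: n**(n-2) labelled trees on n vertices (brute force, small n).
def count_labelled_trees(n):
    vertices = range(n)
    possible_edges = list(combinations(vertices, 2))
    count = 0
    for edge_set in combinations(possible_edges, n - 1):   # a tree has n-1 edges
        # A graph on n vertices with n-1 edges is a tree iff it is connected;
        # check connectivity with a tiny union-find.
        parent = list(vertices)
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edge_set:
            parent[find(u)] = find(v)
        if len({find(v) for v in vertices}) == 1:
            count += 1
    return count

for n in (3, 4, 5):
    assert count_labelled_trees(n) == n**(n - 2)
print("all identities verified")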
Double counting (proof technique)
[ "Mathematics" ]
1,436
[ "Articles containing proofs", "Enumerative combinatorics", "nan", "Combinatorics" ]
297,004
https://en.wikipedia.org/wiki/Coxeter%20group
In mathematics, a Coxeter group, named after H. S. M. Coxeter, is an abstract group that admits a formal description in terms of reflections (or kaleidoscopic mirrors). Indeed, the finite Coxeter groups are precisely the finite Euclidean reflection groups; for example, the symmetry group of each regular polyhedron is a finite Coxeter group. However, not all Coxeter groups are finite, and not all can be described in terms of symmetries and Euclidean reflections. Coxeter groups were introduced in 1934 as abstractions of reflection groups, and finite Coxeter groups were classified in 1935. Coxeter groups find applications in many areas of mathematics. Examples of finite Coxeter groups include the symmetry groups of regular polytopes, and the Weyl groups of simple Lie algebras. Examples of infinite Coxeter groups include the triangle groups corresponding to regular tessellations of the Euclidean plane and the hyperbolic plane, and the Weyl groups of infinite-dimensional Kac–Moody algebras. Definition Formally, a Coxeter group can be defined as a group with the presentation where and is either an integer or for . Here, the condition means that no relation of the form for any integer should be imposed. The pair where is a Coxeter group with generators is called a Coxeter system. Note that in general is not uniquely determined by . For example, the Coxeter groups of type and are isomorphic but the Coxeter systems are not equivalent, since the former has 3 generators and the latter has 1 + 3 = 4 generators (see below for an explanation of this notation). A number of conclusions can be drawn immediately from the above definition. The relation means that for all  ; as such the generators are involutions. If , then the generators and commute. This follows by observing that , together with implies that . Alternatively, since the generators are involutions, , so . That is to say, the commutator of and is equal to 1, or equivalently that and commute. The reason that for is stipulated in the definition is that , together with already implies that . An alternative proof of this implication is the observation that and are conjugates: indeed . Coxeter matrix and Schläfli matrix The Coxeter matrix is the symmetric matrix with entries . Indeed, every symmetric matrix with diagonal entries exclusively 1 and nondiagonal entries in the set is a Coxeter matrix. The Coxeter matrix can be conveniently encoded by a Coxeter diagram, as per the following rules. The vertices of the graph are labelled by generator subscripts. Vertices and are adjacent if and only if . An edge is labelled with the value of whenever the value is or greater. In particular, two generators commute if and only if they are not joined by an edge. Furthermore, if a Coxeter graph has two or more connected components, the associated group is the direct product of the groups associated to the individual components. Thus the disjoint union of Coxeter graphs yields a direct product of Coxeter groups. The Coxeter matrix, , is related to the Schläfli matrix with entries , but the elements are modified, being proportional to the dot product of the pairwise generators. The Schläfli matrix is useful because its eigenvalues determine whether the Coxeter group is of finite type (all positive), affine type (all non-negative, at least one zero), or indefinite type (otherwise). The indefinite type is sometimes further subdivided, e.g. into hyperbolic and other Coxeter groups. However, there are multiple non-equivalent definitions for hyperbolic Coxeter groups. 
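To make the Coxeter and Schläfli matrices above concrete (a minimal sketch, not part of the original article; it uses the common convention that the Schläfli matrix has entries -2 cos(pi/m_ij), with 2 on the diagonal), the example below builds both matrices for type A3 and checks that the Schläfli matrix is positive definite, i.e. that the group is of finite type.

import numpy as np

# Coxeter matrix of type A3 (three generators in a row, adjacent pairs braided):
M = np.array([[1, 3, 2],
              [3, 1, 3],
              [2, 3, 1]])

# Schläfli matrix under the stated convention: entries -2*cos(pi/m_ij).
schlafli = -2.0 * np.cos(np.pi / M)

eigenvalues = np.linalg.eigvalsh(schlafli)
print("Schläfli matrix:\n", np.round(schlafli, 3))
print("eigenvalues:", np.round(eigenvalues, 3))
print("finite type (all eigenvalues positive):", bool(np.all(eigenvalues > 0)))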
An example The graph in which vertices through are placed in a row with each vertex joined by an unlabelled edge to its immediate neighbors is the Coxeter diagram of the symmetric group ; the generators correspond to the transpositions . Any two non-consecutive transpositions commute, while multiplying two consecutive transpositions gives a 3-cycle : . Therefore is a quotient of the Coxeter group having Coxeter diagram . Further arguments show that this quotient map is an isomorphism. Abstraction of reflection groups Coxeter groups are an abstraction of reflection groups. Coxeter groups are abstract groups, in the sense of being given via a presentation. On the other hand, reflection groups are concrete, in the sense that each of its elements is the composite of finitely many geometric reflections about linear hyperplanes in some euclidean space. Technically, a reflection group is a subgroup of a linear group (or various generalizations) generated by orthogonal matrices of determinant -1. Each generator of a Coxeter group has order 2, which abstracts the geometric fact that performing a reflection twice is the identity. Each relation of the form , corresponding to the geometric fact that, given two hyperplanes meeting at an angle of , the composite of the two reflections about these hyperplanes is a rotation by , which has order k. In this way, every reflection group may be presented as a Coxeter group. The converse is partially true: every finite Coxeter group admits a faithful representation as a finite reflection group of some Euclidean space. However, not every infinite Coxeter group admits a representation as a reflection group. Finite Coxeter groups have been classified. Finite Coxeter groups Classification Finite Coxeter groups are classified in terms of their Coxeter diagrams. The finite Coxeter groups with connected Coxeter diagrams consist of three one-parameter families of increasing dimension ( for , for , and for ), a one-parameter family of dimension two ( for ), and six exceptional groups ( and ). Every finite Coxeter group is the direct product of finitely many of these irreducible groups. Weyl groups Many, but not all of these, are Weyl groups, and every Weyl group can be realized as a Coxeter group. The Weyl groups are the families and and the exceptions and denoted in Weyl group notation as The non-Weyl ones are the exceptions and and those members of the family that are not exceptionally isomorphic to a Weyl group (namely and ). This can be proven by comparing the restrictions on (undirected) Dynkin diagrams with the restrictions on Coxeter diagrams of finite groups: formally, the Coxeter graph can be obtained from the Dynkin diagram by discarding the direction of the edges, and replacing every double edge with an edge labelled 4 and every triple edge by an edge labelled 6. Also note that every finitely generated Coxeter group is an automatic group. Dynkin diagrams have the additional restriction that the only permitted edge labels are 2, 3, 4, and 6, which yields the above. Geometrically, this corresponds to the crystallographic restriction theorem, and the fact that excluded polytopes do not fill space or tile the plane – for the dodecahedron (dually, icosahedron) does not fill space; for the 120-cell (dually, 600-cell) does not fill space; for a p-gon does not tile the plane except for or (the triangular, square, and hexagonal tilings, respectively). 
Note further that the (directed) Dynkin diagrams Bn and Cn give rise to the same Weyl group (hence Coxeter group), because they differ as directed graphs, but agree as undirected graphs – direction matters for root systems but not for the Weyl group; this corresponds to the hypercube and cross-polytope being different regular polytopes but having the same symmetry group. Properties Some properties of the finite irreducible Coxeter groups are given in the following table. The order of a reducible group can be computed by the product of its irreducible subgroup orders. Symmetry groups of regular polytopes The symmetry group of every regular polytope is a finite Coxeter group. Note that dual polytopes have the same symmetry group. There are three series of regular polytopes in all dimensions. The symmetry group of a regular n-simplex is the symmetric group Sn+1, also known as the Coxeter group of type An. The symmetry group of the n-cube and its dual, the n-cross-polytope, is Bn, and is known as the hyperoctahedral group. The exceptional regular polytopes in dimensions two, three, and four, correspond to other Coxeter groups. In two dimensions, the dihedral groups, which are the symmetry groups of regular polygons, form the series I2(p), for p ≥ 3. In three dimensions, the symmetry group of the regular dodecahedron and its dual, the regular icosahedron, is H3, known as the full icosahedral group. In four dimensions, there are three exceptional regular polytopes, the 24-cell, the 120-cell, and the 600-cell. The first has symmetry group F4, while the other two are dual and have symmetry group H4. The Coxeter groups of type Dn, E6, E7, and E8 are the symmetry groups of certain semiregular polytopes. Affine Coxeter groups The affine Coxeter groups form a second important series of Coxeter groups. These are not finite themselves, but each contains a normal abelian subgroup such that the corresponding quotient group is finite. In each case, the quotient group is itself a Coxeter group, and the Coxeter graph of the affine Coxeter group is obtained from the Coxeter graph of the quotient group by adding another vertex and one or two additional edges. For example, for n ≥ 2, the graph consisting of n+1 vertices in a circle is obtained from An in this way, and the corresponding Coxeter group is the affine Weyl group of An (the affine symmetric group). For n = 2, this can be pictured as a subgroup of the symmetry group of the standard tiling of the plane by equilateral triangles. In general, given a root system, one can construct the associated Stiefel diagram, consisting of the hyperplanes orthogonal to the roots along with certain translates of these hyperplanes. The affine Coxeter group (or affine Weyl group) is then the group generated by the (affine) reflections about all the hyperplanes in the diagram. The Stiefel diagram divides the plane into infinitely many connected components called alcoves, and the affine Coxeter group acts freely and transitively on the alcoves, just as the ordinary Weyl group acts freely and transitively on the Weyl chambers. The figure at right illustrates the Stiefel diagram for the root system. Suppose is an irreducible root system of rank and let be a collection of simple roots. Let, also, denote the highest root. Then the affine Coxeter group is generated by the ordinary (linear) reflections about the hyperplanes perpendicular to , together with an affine reflection about a translate of the hyperplane perpendicular to . 
The Coxeter graph for the affine Weyl group is the Coxeter–Dynkin diagram for , together with one additional node associated to . In this case, one alcove of the Stiefel diagram may be obtained by taking the fundamental Weyl chamber and cutting it by a translate of the hyperplane perpendicular to . A list of the affine Coxeter groups follows: The group symbol subscript is one less than the number of nodes in each case, since each of these groups was obtained by adding a node to a finite group's graph. Hyperbolic Coxeter groups There are infinitely many hyperbolic Coxeter groups describing reflection groups in hyperbolic space, notably including the hyperbolic triangle groups. Irreducible Coxeter groups A Coxeter group is said to be irreducible if its Coxeter–Dynkin diagram is connected. Every Coxeter group is the direct product of the irreducible groups that correspond to the components of its Coxeter–Dynkin diagram. Partial orders A choice of reflection generators gives rise to a length function ℓ on a Coxeter group, namely the minimum number of uses of generators required to express a group element; this is precisely the length in the word metric in the Cayley graph. An expression for v using ℓ(v) generators is a reduced word. For example, the permutation (13) in S3 has two reduced words, (12)(23)(12) and (23)(12)(23). The function defines a map generalizing the sign map for the symmetric group. Using reduced words one may define three partial orders on the Coxeter group, the (right) weak order, the absolute order and the Bruhat order (named for François Bruhat). An element v exceeds an element u in the Bruhat order if some (or equivalently, any) reduced word for v contains a reduced word for u as a substring, where some letters (in any position) are dropped. In the weak order, v ≥ u if some reduced word for v contains a reduced word for u as an initial segment. Indeed, the word length makes this into a graded poset. The Hasse diagrams corresponding to these orders are objects of study, and are related to the Cayley graph determined by the generators. The absolute order is defined analogously to the weak order, but with generating set/alphabet consisting of all conjugates of the Coxeter generators. For example, the permutation (1 2 3) in S3 has only one reduced word, (12)(23), so covers (12) and (23) in the Bruhat order but only covers (12) in the weak order. Homology Since a Coxeter group is generated by finitely many elements of order 2, its abelianization is an elementary abelian 2-group, i.e., it is isomorphic to the direct sum of several copies of the cyclic group . This may be restated in terms of the first homology group of . The Schur multiplier , equal to the second homology group of , was computed in for finite reflection groups and in for affine reflection groups, with a more unified account given in . In all cases, the Schur multiplier is also an elementary abelian 2-group. For each infinite family of finite or affine Weyl groups, the rank of stabilizes as goes to infinity. See also Artin–Tits group Chevalley–Shephard–Todd theorem Complex reflection group Coxeter element Iwahori–Hecke algebra, a quantum deformation of the group algebra Kazhdan–Lusztig polynomial Longest element of a Coxeter group Parabolic subgroup of a reflection group Supersolvable arrangement Notes References Bibliography Further reading External links
Coxeter group
[ "Physics" ]
3,041
[ "Euclidean symmetries", "Reflection groups", "Symmetry" ]
297,069
https://en.wikipedia.org/wiki/Perturbation%20theory%20%28quantum%20mechanics%29
In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one. In effect, it is describing a complicated unsolved system using a simple, solvable system. Approximate Hamiltonians Perturbation theory is an important tool for describing real quantum systems, as it turns out to be very difficult to find exact solutions to the Schrödinger equation for Hamiltonians of even moderate complexity. The Hamiltonians to which we know exact solutions, such as the hydrogen atom, the quantum harmonic oscillator and the particle in a box, are too idealized to adequately describe most systems. Using perturbation theory, we can use the known solutions of these simple Hamiltonians to generate solutions for a range of more complicated systems. Applying perturbation theory Perturbation theory is applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a "small" term to the mathematical description of the exactly solvable problem. For example, by adding a perturbative electric potential to the quantum mechanical model of the hydrogen atom, tiny shifts in the spectral lines of hydrogen caused by the presence of an electric field (the Stark effect) can be calculated. This is only approximate because the sum of a Coulomb potential with a linear potential is unstable (has no true bound states) although the tunneling time (decay rate) is very long. This instability shows up as a broadening of the energy spectrum lines, which perturbation theory fails to reproduce entirely. The expressions produced by perturbation theory are not exact, but they can lead to accurate results as long as the expansion parameter, say , is very small. Typically, the results are expressed in terms of finite power series in that seem to converge to the exact values when summed to higher order. After a certain order however, the results become increasingly worse since the series are usually divergent (being asymptotic series). There exist ways to convert them into convergent series, which can be evaluated for large-expansion parameters, most efficiently by the variational method. In practice, convergent perturbation expansions often converge slowly while divergent perturbation expansions sometimes give good results, c.f. the exact solution, at lower order. In the theory of quantum electrodynamics (QED), in which the electron–photon interaction is treated perturbatively, the calculation of the electron's magnetic moment has been found to agree with experiment to eleven decimal places. In QED and other quantum field theories, special calculation techniques known as Feynman diagrams are used to systematically sum the power series terms. Limitations Large perturbations Under some circumstances, perturbation theory is an invalid approach to take. 
This happens when the system we wish to describe cannot be described by a small perturbation imposed on some simple system. In quantum chromodynamics, for instance, the interaction of quarks with the gluon field cannot be treated perturbatively at low energies because the coupling constant (the expansion parameter) becomes too large, violating the requirement that corrections must be small. Non-adiabatic states Perturbation theory also fails to describe states that are not generated adiabatically from the "free model", including bound states and various collective phenomena such as solitons. Imagine, for example, that we have a system of free (i.e. non-interacting) particles, to which an attractive interaction is introduced. Depending on the form of the interaction, this may create an entirely new set of eigenstates corresponding to groups of particles bound to one another. An example of this phenomenon may be found in conventional superconductivity, in which the phonon-mediated attraction between conduction electrons leads to the formation of correlated electron pairs known as Cooper pairs. When faced with such systems, one usually turns to other approximation schemes, such as the variational method and the WKB approximation. This is because there is no analogue of a bound particle in the unperturbed model and the energy of a soliton typically goes as the inverse of the expansion parameter. However, if we "integrate" over the solitonic phenomena, the nonperturbative corrections in this case will be tiny; of the order of or in the perturbation parameter . Perturbation theory can only detect solutions "close" to the unperturbed solution, even if there are other solutions for which the perturbative expansion is not valid. Difficult computations The problem of non-perturbative systems has been somewhat alleviated by the advent of modern computers. It has become practical to obtain numerical non-perturbative solutions for certain problems, using methods such as density functional theory. These advances have been of particular benefit to the field of quantum chemistry. Computers have also been used to carry out perturbation theory calculations to extraordinarily high levels of precision, which has proven important in particle physics for generating theoretical results that can be compared with experiment. Time-independent perturbation theory Time-independent perturbation theory is one of two categories of perturbation theory, the other being time-dependent perturbation (see next section). In time-independent perturbation theory, the perturbation Hamiltonian is static (i.e., possesses no time dependence). Time-independent perturbation theory was presented by Erwin Schrödinger in a 1926 paper, shortly after he produced his theories in wave mechanics. In this paper Schrödinger referred to earlier work of Lord Rayleigh, who investigated harmonic vibrations of a string perturbed by small inhomogeneities. This is why this perturbation theory is often referred to as Rayleigh–Schrödinger perturbation theory. First order corrections The process begins with an unperturbed Hamiltonian , which is assumed to have no time dependence. It has known energy levels and eigenstates, arising from the time-independent Schrödinger equation: For simplicity, it is assumed that the energies are discrete. The superscripts denote that these quantities are associated with the unperturbed system. Note the use of bra–ket notation. A perturbation is then introduced to the Hamiltonian. 
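The displayed equation referred to just above did not survive extraction; as a reconstruction in standard notation (with the superscript (0) marking unperturbed quantities, as in the surrounding text), the unperturbed eigenvalue problem is
\[ H_0 \, |n^{(0)}\rangle = E_n^{(0)} \, |n^{(0)}\rangle, \qquad n = 1, 2, 3, \ldots \]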
Let be a Hamiltonian representing a weak physical disturbance, such as a potential energy produced by an external field. Thus, is formally a Hermitian operator. Let be a dimensionless parameter that can take on values ranging continuously from 0 (no perturbation) to 1 (the full perturbation). The perturbed Hamiltonian is: The energy levels and eigenstates of the perturbed Hamiltonian are again given by the time-independent Schrödinger equation, The objective is to express and in terms of the energy levels and eigenstates of the old Hamiltonian. If the perturbation is sufficiently weak, they can be written as a (Maclaurin) power series in , where When , these reduce to the unperturbed values, which are the first term in each series. Since the perturbation is weak, the energy levels and eigenstates should not deviate too much from their unperturbed values, and the terms should rapidly become smaller as the order is increased. Substituting the power series expansion into the Schrödinger equation produces: Expanding this equation and comparing coefficients of each power of results in an infinite series of simultaneous equations. The zeroth-order equation is simply the Schrödinger equation for the unperturbed system, The first-order equation is Operating through by , the first term on the left-hand side cancels the first term on the right-hand side. (Recall, the unperturbed Hamiltonian is Hermitian). This leads to the first-order energy shift, This is simply the expectation value of the perturbation Hamiltonian while the system is in the unperturbed eigenstate. This result can be interpreted in the following way: supposing that the perturbation is applied, but the system is kept in the quantum state , which is a valid quantum state though no longer an energy eigenstate. The perturbation causes the average energy of this state to increase by . However, the true energy shift is slightly different, because the perturbed eigenstate is not exactly the same as . These further shifts are given by the second and higher order corrections to the energy. Before corrections to the energy eigenstate are computed, the issue of normalization must be addressed. Supposing that but perturbation theory also assumes that . Then at first order in , the following must be true: Since the overall phase is not determined in quantum mechanics, without loss of generality, in time-independent theory it can be assumed that is purely real. Therefore, leading to To obtain the first-order correction to the energy eigenstate, the expression for the first-order energy correction is inserted back into the result shown above, equating the first-order coefficients of . Then by using the resolution of the identity: where the are in the orthogonal complement of , i.e., the other eigenvectors. The first-order equation may thus be expressed as Suppose that the zeroth-order energy level is not degenerate, i.e. that there is no eigenstate of in the orthogonal complement of with the energy . After renaming the summation dummy index above as , any can be chosen and multiplying the first-order equation through by gives The above also gives us the component of the first-order correction along . Thus, in total, the result is, The first-order change in the -th energy eigenket has a contribution from each of the energy eigenstates . 
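The formulas summarizing this derivation were lost in extraction; in the standard Rayleigh–Schrödinger notation (a reconstruction, with V denoting the perturbing Hamiltonian in H = H_0 + \lambda V), the first-order results read
\[ E_n^{(1)} = \langle n^{(0)} | V | n^{(0)} \rangle, \qquad |n^{(1)}\rangle = \sum_{k \ne n} \frac{\langle k^{(0)} | V | n^{(0)} \rangle}{E_n^{(0)} - E_k^{(0)}} \, |k^{(0)}\rangle, \]
and, quoted here for completeness because the following passage refers to it, the second-order energy shift is
\[ E_n^{(2)} = \sum_{k \ne n} \frac{\left| \langle k^{(0)} | V | n^{(0)} \rangle \right|^2}{E_n^{(0)} - E_k^{(0)}}. \]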
Each term is proportional to the matrix element , which is a measure of how much the perturbation mixes eigenstate with eigenstate ; it is also inversely proportional to the energy difference between eigenstates and , which means that the perturbation deforms the eigenstate to a greater extent if there are more eigenstates at nearby energies. The expression is singular if any of these states have the same energy as state , which is why it was assumed that there is no degeneracy. The above formula for the perturbed eigenstates also implies that the perturbation theory can be legitimately used only when the absolute magnitude of the matrix elements of the perturbation is small compared with the corresponding differences in the unperturbed energy levels, i.e., Second-order and higher-order corrections We can find the higher-order deviations by a similar procedure, though the calculations become quite tedious with our current formulation. Our normalization prescription gives that Up to second order, the expressions for the energies and (normalized) eigenstates are: If an intermediate normalization is taken (it means, if we require that ), we obtain the same expression for the second-order correction to the wave function, except for the last term. Extending the process further, the third-order energy correction can be shown to be It is possible to relate the k-th order correction to the energy to the -point connected correlation function of the perturbation in the state . For , one has to consider the inverse Laplace transform of the two-point correlator: where is the perturbing operator in the interaction picture, evolving in Euclidean time. Then Similar formulas exist to all orders in perturbation theory, allowing one to express in terms of the inverse Laplace transform of the connected correlation function To be precise, if we write then the -th order energy shift is given by Effects of degeneracy Suppose that two or more energy eigenstates of the unperturbed Hamiltonian are degenerate. The first-order energy shift is not well defined, since there is no unique way to choose a basis of eigenstates for the unperturbed system. The various eigenstates for a given energy will perturb with different energies, or may well possess no continuous family of perturbations at all. This is manifested in the calculation of the perturbed eigenstate via the fact that the operator does not have a well-defined inverse. Let denote the subspace spanned by these degenerate eigenstates. No matter how small the perturbation is, in the degenerate subspace the energy differences between the eigenstates of are non-zero, so complete mixing of at least some of these states is assured. Typically, the eigenvalues will split, and the eigenspaces will become simple (one-dimensional), or at least of smaller dimension than D. The successful perturbations will not be "small" relative to a poorly chosen basis of D. Instead, we consider the perturbation "small" if the new eigenstate is close to the subspace . The new Hamiltonian must be diagonalized in , or a slight variation of D, so to speak. These perturbed eigenstates in are now the basis for the perturbation expansion, For the first-order perturbation, we need solve the perturbed Hamiltonian restricted to the degenerate subspace , simultaneously for all the degenerate eigenstates, where are first-order corrections to the degenerate energy levels, and "small" is a vector of orthogonal to D. 
This amounts to diagonalizing the matrix This procedure is approximate, since we neglected states outside the subspace ("small"). The splitting of degenerate energies is generally observed. Although the splitting may be small, , compared to the range of energies found in the system, it is crucial in understanding certain details, such as spectral lines in Electron Spin Resonance experiments. Higher-order corrections due to other eigenstates outside can be found in the same way as for the non-degenerate case, The operator on the left-hand side is not singular when applied to eigenstates outside , so we can write but the effect on the degenerate states is of . Near-degenerate states should also be treated similarly, when the original Hamiltonian splits aren't larger than the perturbation in the near-degenerate subspace. An application is found in the nearly free electron model, where near-degeneracy, treated properly, gives rise to an energy gap even for small perturbations. Other eigenstates will only shift the absolute energy of all near-degenerate states simultaneously. Degeneracy lifted to first order Let us consider degenerate energy eigenstates and a perturbation that completely lifts the degeneracy to first order of correction. The perturbed Hamiltonian is denoted as where is the unperturbed Hamiltonian, is the perturbation operator, and is the parameter of the perturbation. Let us focus on the degeneracy of the -th unperturbed energy . We will denote the unperturbed states in this degenerate subspace as and the other unperturbed states as , where is the index of the unperturbed state in the degenerate subspace and represents all other energy eigenstates with energies different from . The eventual degeneracy among the other states with does not change our arguments. All states with various values of share the same energy when there is no perturbation, i.e., when . The energies of the other states with are all different from , but not necessarily unique, i.e. not necessarily always different among themselves. By and , we denote the matrix elements of the perturbation operator in the basis of the unperturbed eigenstates. We assume that the basis vectors in the degenerate subspace are chosen such that the matrix elements are diagonal. Assuming also that the degeneracy is completely lifted to the first order, i.e. that if , we have the following formulae for the energy correction to the second order in and for the state correction to the first order in Notice that here the first order correction to the state is orthogonal to the unperturbed state, Generalization to multi-parameter case The generalization of time-independent perturbation theory to the case where there are multiple small parameters in place of λ can be formulated more systematically using the language of differential geometry, which basically defines the derivatives of the quantum states and calculates the perturbative corrections by taking derivatives iteratively at the unperturbed point. Hamiltonian and force operator From the differential geometric point of view, a parameterized Hamiltonian is considered as a function defined on the parameter manifold that maps each particular set of parameters to an Hermitian operator that acts on the Hilbert space. The parameters here can be external field, interaction strength, or driving parameters in the quantum phase transition. Let and be the -th eigenenergy and eigenstate of respectively. 
In the language of differential geometry, the states form a vector bundle over the parameter manifold, on which derivatives of these states can be defined. The task of perturbation theory is to answer the following question: given and at an unperturbed reference point , how can and be estimated at close to that reference point? Without loss of generality, the coordinate system can be shifted, such that the reference point is set to be the origin. The following linearly parameterized Hamiltonian is frequently used If the parameters are considered as generalized coordinates, then should be identified as the generalized force operators related to those coordinates. Different indices label the different forces along different directions in the parameter manifold. For example, if denotes the external magnetic field in the -direction, then should be the magnetization in the same direction. Perturbation theory as power series expansion The validity of perturbation theory rests on the adiabatic assumption, which assumes the eigenenergies and eigenstates of the Hamiltonian are smooth functions of the parameters, such that their values in the vicinity of the reference point can be calculated as power series (like a Taylor expansion) in the parameters: Here denotes the derivative with respect to . When applied to the state , it should be understood as the covariant derivative if the vector bundle is equipped with a non-vanishing connection. All the terms on the right-hand side of the series are evaluated at , e.g. and . This convention, that all functions without explicitly stated parameter dependence are evaluated at the origin, will be adopted throughout this subsection. The power series may converge slowly or even fail to converge when the energy levels are close to each other. The adiabatic assumption breaks down when there is energy level degeneracy, and hence the perturbation theory is not applicable in that case. Hellmann–Feynman theorems The above power series expansion can be readily evaluated if there is a systematic approach to calculate the derivatives to any order. Using the chain rule, the derivatives can be broken down into single derivatives of either the energy or the state. The Hellmann–Feynman theorems are used to calculate these single derivatives. The first Hellmann–Feynman theorem gives the derivative of the energy, The second Hellmann–Feynman theorem gives the derivative of the state (resolved by the complete basis with ), For the linearly parameterized Hamiltonian, simply stands for the generalized force operator . The theorems can be simply derived by applying the differential operator to both sides of the Schrödinger equation which reads Then overlap with the state from the left and make use of the Schrödinger equation again, Given that the eigenstates of the Hamiltonian always form an orthonormal basis , the cases of and can be discussed separately. The first case will lead to the first theorem and the second case to the second theorem, which can be shown immediately by rearranging the terms. With the differential rules given by the Hellmann–Feynman theorems, the perturbative correction to the energies and states can be calculated systematically. Correction of energy and state To the second order, the energy correction reads where denotes the real part function. The first order derivative is given by the first Hellmann–Feynman theorem directly. 
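The two theorems themselves were lost in extraction; in a conventional reconstruction (with \partial_\mu denoting the derivative with respect to the parameter x^\mu and all quantities evaluated at the reference point), they read
\[ \partial_\mu E_n = \langle n | \partial_\mu H | n \rangle \qquad \text{(first Hellmann–Feynman theorem)}, \]
\[ \langle m | \partial_\mu n \rangle = \frac{\langle m | \partial_\mu H | n \rangle}{E_n - E_m} \quad \text{for } m \ne n \qquad \text{(second Hellmann–Feynman theorem)}. \]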
To obtain the second order derivative , simply apply the differential operator to the result of the first order derivative , which reads Note that for a linearly parameterized Hamiltonian, there is no second derivative on the operator level. Resolve the derivative of the state by inserting the complete set of basis states; then all parts can be calculated using the Hellmann–Feynman theorems. In terms of Lie derivatives, according to the definition of the connection for the vector bundle. Therefore, the case can be excluded from the summation, which avoids the singularity of the energy denominator. The same procedure can be carried out for higher order derivatives, from which higher order corrections are obtained. The same computational scheme is applicable for the correction of states. The result to the second order is as follows Both energy derivatives and state derivatives will be involved in the deduction. Whenever a state derivative is encountered, resolve it by inserting the complete set of basis states, and the Hellmann–Feynman theorems become applicable. Because the differentiation can be carried out systematically, the series expansion approach to the perturbative corrections can be coded on computers with symbolic processing software like Mathematica. Effective Hamiltonian Let be the Hamiltonian completely restricted either in the low-energy subspace or in the high-energy subspace , such that there is no matrix element in connecting the low- and the high-energy subspaces, i.e. if . Let be the coupling terms connecting the subspaces. Then, when the high-energy degrees of freedom are integrated out, the effective Hamiltonian in the low-energy subspace reads Here are restricted to the low-energy subspace. The above result can be derived by power series expansion of . In a formal way it is possible to define an effective Hamiltonian that gives exactly the low-lying energy states and wavefunctions. In practice, some kind of approximation (perturbation theory) is generally required. Time-dependent perturbation theory Method of variation of constants Time-dependent perturbation theory, initiated by Paul Dirac and further developed by John Archibald Wheeler, Richard Feynman, and Freeman Dyson, studies the effect of a time-dependent perturbation applied to a time-independent Hamiltonian . It is an extremely valuable tool for calculating the properties of any physical system. It is used for the quantitative description of phenomena as diverse as proton-proton scattering, photo-ionization of materials, scattering of electrons off lattice defects in a conductor, scattering of neutrons off nuclei, electric susceptibilities of materials, neutron absorption cross sections in a nuclear reactor, and much more. Since the perturbed Hamiltonian is time-dependent, so are its energy levels and eigenstates. Thus, the goals of time-dependent perturbation theory are slightly different from those of time-independent perturbation theory. One is interested in the following quantities: The time-dependent expectation value of some observable , for a given initial state. The time-dependent expansion coefficients (w.r.t. a given time-dependent state) of those basis states that are energy eigenkets (eigenvectors) in the unperturbed system. The first quantity is important because it gives rise to the classical result of a measurement performed on a macroscopic number of copies of the perturbed system. 
For example, we could take to be the displacement in the -direction of the electron in a hydrogen atom, in which case the expected value, when multiplied by an appropriate coefficient, gives the time-dependent dielectric polarization of a hydrogen gas. With an appropriate choice of perturbation (i.e. an oscillating electric potential), this allows one to calculate the AC permittivity of the gas. The second quantity looks at the time-dependent probability of occupation for each eigenstate. This is particularly useful in laser physics, where one is interested in the populations of different atomic states in a gas when a time-dependent electric field is applied. These probabilities are also useful for calculating the "quantum broadening" of spectral lines (see line broadening) and particle decay in particle physics and nuclear physics. We will briefly examine the method behind Dirac's formulation of time-dependent perturbation theory. Choose an energy basis for the unperturbed system. (We drop the (0) superscripts for the eigenstates, because it is not useful to speak of energy levels and eigenstates for the perturbed system.) If the unperturbed system is an eigenstate (of the Hamiltonian) at time = 0, its state at subsequent times varies only by a phase (in the Schrödinger picture, where state vectors evolve in time and operators are constant), Now, introduce a time-dependent perturbing Hamiltonian . The Hamiltonian of the perturbed system is Let denote the quantum state of the perturbed system at time . It obeys the time-dependent Schrödinger equation, The quantum state at each instant can be expressed as a linear combination of the complete eigenbasis of : where the s are to be determined complex functions of which we will refer to as amplitudes (strictly speaking, they are the amplitudes in the Dirac picture). We have explicitly extracted the exponential phase factors on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state and no perturbation is present, the amplitudes have the convenient property that, for all , = 1 and = 0 if . The square of the absolute amplitude is the probability that the system is in state at time , since Plugging into the Schrödinger equation and using the fact that ∂/∂t acts by a product rule, one obtains By resolving the identity in front of and multiplying through by the bra on the left, this can be reduced to a set of coupled differential equations for the amplitudes, where we have used equation () to evaluate the sum on in the second term, then used the fact that . The matrix elements of play a similar role as in time-independent perturbation theory, being proportional to the rate at which amplitudes are shifted between states. Note, however, that the direction of the shift is modified by the exponential phase factor. Over times much longer than the energy difference , the phase winds around 0 several times. If the time-dependence of is sufficiently slow, this may cause the state amplitudes to oscillate. (For example, such oscillations are useful for managing radiative transitions in a laser.) Up to this point, we have made no approximations, so this set of differential equations is exact. By supplying appropriate initial values , we could in principle find an exact (i.e., non-perturbative) solution. 
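The coupled equations described above did not survive extraction; a standard reconstruction (with c_n(t) the Dirac-picture amplitudes and E_n the unperturbed energies) is
\[ \frac{d c_k(t)}{dt} = -\frac{i}{\hbar} \sum_n \langle k | V(t) | n \rangle \, e^{-i (E_n - E_k) t / \hbar} \, c_n(t). \]
Supplying the initial amplitudes c_n(0) and integrating these equations exactly is what is meant above by a non-perturbative solution.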
This is easily done when there are only two energy levels ( = 1, 2), and this solution is useful for modelling systems like the ammonia molecule. However, exact solutions are difficult to find when there are many energy levels, and one instead looks for perturbative solutions. These may be obtained by expressing the equations in an integral form, Repeatedly substituting this expression for back into right hand side, yields an iterative solution, where, for example, the first-order term is To the same approximation, the summation in the above expression can be removed since in the unperturbed state so that we have Several further results follow from this, such as Fermi's golden rule, which relates the rate of transitions between quantum states to the density of states at particular energies; or the Dyson series, obtained by applying the iterative method to the time evolution operator, which is one of the starting points for the method of Feynman diagrams. Method of Dyson series Time-dependent perturbations can be reorganized through the technique of the Dyson series. The Schrödinger equation has the formal solution where is the time ordering operator, Thus, the exponential represents the following Dyson series, Note that in the second term, the 1/2! factor exactly cancels the double contribution due to the time-ordering operator, etc. Consider the following perturbation problem assuming that the parameter is small and that the problem has been solved. Perform the following unitary transformation to the interaction picture (or Dirac picture), Consequently, the Schrödinger equation simplifies to so it is solved through the above Dyson series, as a perturbation series with small . Using the solution of the unperturbed problem and (for the sake of simplicity assume a pure discrete spectrum), yields, to first order, Thus, the system, initially in the unperturbed state , by dint of the perturbation can go into the state . The corresponding transition probability amplitude to first order is as detailed in the previous section——while the corresponding transition probability to a continuum is furnished by Fermi's golden rule. As an aside, note that time-independent perturbation theory is also organized inside this time-dependent perturbation theory Dyson series. To see this, write the unitary evolution operator, obtained from the above Dyson series, as and take the perturbation to be time-independent. Using the identity resolution with for a pure discrete spectrum, write It is evident that, at second order, one must sum on all the intermediate states. Assume and the asymptotic limit of larger times. This means that, at each contribution of the perturbation series, one has to add a multiplicative factor in the integrands for arbitrarily small. Thus the limit gives back the final state of the system by eliminating all oscillating terms, but keeping the secular ones. The integrals are thus computable, and, separating the diagonal terms from the others yields where the time secular series yields the eigenvalues of the perturbed problem specified above, recursively; whereas the remaining time-constant part yields the corrections to the stationary eigenfunctions also given above (.) The unitary evolution operator is applicable to arbitrary eigenstates of the unperturbed problem and, in this case, yields a secular series that holds at small times. Strong perturbation theory In a similar way as for small perturbations, it is possible to develop a strong perturbation theory. 
Consider as usual the Schrödinger equation and we consider the question if a dual Dyson series exists that applies in the limit of a perturbation increasingly large. This question can be answered in an affirmative way and the series is the well-known adiabatic series. This approach is quite general and can be shown in the following way. Consider the perturbation problem being . Our aim is to find a solution in the form but a direct substitution into the above equation fails to produce useful results. This situation can be adjusted making a rescaling of the time variable as producing the following meaningful equations that can be solved once we know the solution of the leading order equation. But we know that in this case we can use the adiabatic approximation. When does not depend on time one gets the Wigner-Kirkwood series that is often used in statistical mechanics. Indeed, in this case we introduce the unitary transformation that defines a free picture as we are trying to eliminate the interaction term. Now, in dual way with respect to the small perturbations, we have to solve the Schrödinger equation and we see that the expansion parameter appears only into the exponential and so, the corresponding Dyson series, a dual Dyson series, is meaningful at large s and is After the rescaling in time we can see that this is indeed a series in justifying in this way the name of dual Dyson series. The reason is that we have obtained this series simply interchanging and and we can go from one to another applying this exchange. This is called duality principle in perturbation theory. The choice yields, as already said, a Wigner-Kirkwood series that is a gradient expansion. The Wigner-Kirkwood series is a semiclassical series with eigenvalues given exactly as for WKB approximation. Examples Example of first-order perturbation theory – ground-state energy of the quartic oscillator Consider the quantum harmonic oscillator with the quartic potential perturbation and the Hamiltonian The ground state of the harmonic oscillator is (), and the energy of unperturbed ground state is Using the first-order correction formula, we get or Example of first- and second-order perturbation theory – quantum pendulum Consider the quantum-mathematical pendulum with the Hamiltonian with the potential energy taken as the perturbation i.e. The unperturbed normalized quantum wave functions are those of the rigid rotor and are given by and the energies The first-order energy correction to the rotor due to the potential energy is Using the formula for the second-order correction, one gets or or Potential energy as a perturbation When the unperturbed state is a free motion of a particle with kinetic energy , the solution of the Schrödinger equation corresponds to plane waves with wavenumber . If there is a weak potential energy present in the space, in the first approximation, the perturbed state is described by the equation whose particular integral is where . In the two-dimensional case, the solution is where and is the Hankel function of the first kind. In the one-dimensional case, the solution is where . Applications Rabi cycle Fermi's golden rule Muon spin spectroscopy Perturbed angular correlation References External links (lecture by Barton Zwiebach) Quantum Physics Online - perturbation theory Mathematical physics
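The quartic-oscillator example above can be checked numerically. The following is a minimal sketch (not part of the original article), assuming units \hbar = m = \omega = 1 and a quartic perturbation \lambda x^4 with small \lambda; it compares the first-order estimate E_0 \approx 1/2 + 3\lambda/4 against direct diagonalization in a truncated harmonic-oscillator basis.

import numpy as np

N = 80          # size of the truncated number basis (assumed large enough to converge)
lam = 0.01      # perturbation strength (assumed small)

n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)      # annihilation operator in the number basis
x = (a + a.T) / np.sqrt(2.0)          # position operator for hbar = m = omega = 1
H0 = np.diag(n + 0.5)                 # unperturbed harmonic oscillator
V = x @ x @ x @ x                     # quartic perturbation x^4

E0_first_order = 0.5 + lam * V[0, 0]              # first-order result: 1/2 + 3*lam/4
E0_numeric = np.linalg.eigvalsh(H0 + lam * V)[0]  # lowest eigenvalue of H0 + lam*x^4

print("first order:", E0_first_order)
print("numerical:  ", E0_numeric)

For small \lambda the two values agree up to the (negative) second-order correction.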
Perturbation theory (quantum mechanics)
[ "Physics", "Mathematics" ]
7,196
[ "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics", "Perturbation theory" ]
297,135
https://en.wikipedia.org/wiki/GMD%20M%C3%BCller
GMD Müller Lifts AG, known as GMD Müller, was a ropeway manufacturing company based in Dietlikon, Switzerland. GMD stands for Gerhard Müller Dietlikon. Founded in 1947 by engineer Gerhard Müller, who is credited with the invention of the modern detachable chairlift in the late 1940s, it was one of the most prolific and respected aerial lift manufacturers in skiing history. The company was bought out by its management in 1985 after Müller's death. Overview In the late 1920s, Gerhard Müller, a mechanical engineering student, was a newcomer to the growing sport of skiing. At the time, there were no user-friendly ski lifts in Switzerland. At a resort hotel outside Zürich, Müller created his first usable ski lift consisting of a 1-inch hemp rope and some old motorcycle parts. Since it was a rope tow, guests regularly complained about sore hands and torn clothes resulting from using the lift. The Sami people use reindeer to tow themselves around on skis, but they rest their hands by looping the reins around their hips. Inspired by this practice, Müller solved the problems of his rope tow by creating the first modern T-bar lift. GMD Müller licensed the T-bar lift to the Polish company Mostostal Zabrze in 1959, and some of these lifts still exist. During the 1960s and 1970s, GMD Müller installed more than one hundred fixed-grip chairlifts in North America. Many of these lifts are still in service today. In Europe, 4-seater gondola lifts with Müller's patented detachable cable grips and T-bars were more popular. Some resorts, such as Whistler, were at the time exclusively equipped with Müller lifts. GMD Müller is also noted for inventing the Aerobus, a self-propelled bus-like vehicle riding on a suspended overhead cable. Rowema AG legally succeeded GMD Müller Lifts AG in 1985, and continues to service and supply spare parts to existing GMD Müller systems. See also List of aerial lift manufacturers References External links Chairlift.org page on Müller Official Aerobus web page Aerial lift manufacturers Manufacturing companies of Switzerland Skiing organizations Vertical transport devices
GMD Müller
[ "Technology" ]
433
[ "Vertical transport devices", "Transport systems" ]
297,374
https://en.wikipedia.org/wiki/Schwinger%20model
In physics, the Schwinger model, named after Julian Schwinger, is the model describing 1+1D (1 spatial dimension + time) Lorentzian quantum electrodynamics, which includes electrons coupled to photons. The model defines the usual QED Lagrangian, \mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \bar{\psi} ( i \gamma^{\mu} D_{\mu} - m ) \psi, over a spacetime with one spatial dimension and one temporal dimension, where F_{\mu\nu} is the photon field strength, D_{\mu} is the gauge covariant derivative, \psi is the fermion spinor, m is the fermion mass, and the \gamma^{\mu} form the two-dimensional representation of the Clifford algebra. This model exhibits confinement of the fermions and, as such, is a toy model for QCD. A heuristic argument for why this is so is that in two dimensions, classically, the potential between two charged particles grows linearly with their separation, instead of falling off as the inverse of the distance as it does in 4 dimensions (3 spatial, 1 time). This model also exhibits a spontaneous breaking of the U(1) symmetry due to a chiral condensate generated by a pool of instantons. The photon in this model becomes a massive particle at low temperatures; for massless fermions the photon mass is exactly e/\sqrt{\pi}, where e is the gauge coupling. This model can be solved exactly and is used as a toy model for other more complex theories. References Quantum field theory Quantum electrodynamics Exactly solvable models Quantum chromodynamics
Schwinger model
[ "Physics" ]
262
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
297,466
https://en.wikipedia.org/wiki/Spontaneous%20symmetry%20breaking
Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum even though the entire Lagrangian retains that symmetry. Overview Spontaneous symmetry breaking cannot happen in quantum mechanics describing finite-dimensional systems, due to the Stone–von Neumann theorem (which states the uniqueness of the Heisenberg commutation relations in finite dimensions). Spontaneous symmetry breaking can therefore be observed only in infinite-dimensional theories, such as quantum field theories. By definition, spontaneous symmetry breaking requires the existence of physical laws which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. For example, if measurements of an observable at any two different positions have the same probability distribution, the observable has translational symmetry. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Conversely, in explicit symmetry breaking, the probability distributions of a pair of outcomes can be different. For example, in an electric field, the forces on a charged particle are different in different directions, so the rotational symmetry is explicitly broken by the electric field, which does not have this symmetry. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. For example, the density, compressibility, coefficient of thermal expansion, and specific heat will be expected to change when a liquid becomes a solid. Examples Sombrero potential Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the center axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterward, the ball has come to rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not. In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian of a scalar field , which essentially dictates how a system behaves, can be split up into kinetic and potential terms. It is in this potential term that the symmetry breaking is triggered. An example of such a potential, due to Jeffrey Goldstone, is illustrated in the graph at the left. This potential has an infinite number of possible minima (vacuum states) given by for any real θ between 0 and 2π. The system also has an unstable vacuum state corresponding to . This state has a U(1) symmetry. 
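The Lagrangian and potential referred to above were lost in extraction; a conventional reconstruction (with \phi a complex scalar field and v the radius of the circle of minima, symbols chosen here rather than taken from the original) is
\[ \mathcal{L} = \partial^\mu \phi^* \, \partial_\mu \phi - V(\phi), \qquad V(\phi) = \lambda \left( |\phi|^2 - v^2 \right)^2, \]
whose minima form the circle \phi = v e^{i\theta} for any real θ between 0 and 2π, while the unstable symmetric state is \phi = 0.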
However, once the system falls into a specific stable vacuum state (amounting to a choice of θ), this symmetry will appear to be lost, or "spontaneously broken". In fact, any other choice of θ would have exactly the same energy, and the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks the symmetry, implying the existence of a massless Nambu–Goldstone boson, the mode running around the circle at the minimum of this potential, and indicating there is some memory of the original symmetry in the Lagrangian. Other examples For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant, and there is no symmetry breaking. Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value, which points in a certain direction (in the idealized situation where we have full equilibrium; otherwise, translational symmetry gets broken as well). The residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations which do not and are thus spontaneously broken. The laws describing a solid are invariant under the full Euclidean group, but the solid itself spontaneously breaks this group down to a space group. The displacement and the orientation are the order parameters. General relativity has a Lorentz symmetry, but in FRW cosmological models, the mean 4-velocity field defined by averaging over the velocities of the galaxies (the galaxies act like gas particles at cosmological scales) acts as an order parameter breaking this symmetry. Similar comments can be made about the cosmic microwave background. For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification. In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetry. Take a thin cylindrical plastic rod and push both ends together. Before buckling, the system is symmetric under rotation, and so visibly cylindrically symmetric. But after buckling, it looks different, and asymmetric. Nevertheless, features of the cylindrical symmetry are still there: ignoring friction, it would take no force to freely spin the rod around, displacing the ground state in time, and amounting to an oscillation of vanishing frequency, unlike the radial oscillations in the direction of the buckle. This spinning mode is effectively the requisite Nambu–Goldstone boson. Consider a uniform layer of fluid over an infinite horizontal plane. This system has all the symmetries of the Euclidean plane. But now heat the bottom surface uniformly so that it becomes much hotter than the upper surface. When the temperature gradient becomes large enough, convection cells will form, breaking the Euclidean symmetry. Consider a bead on a circular hoop that is rotated about a vertical diameter. 
As the rotational velocity is increased gradually from rest, the bead will initially stay at its initial equilibrium point at the bottom of the hoop (intuitively stable, lowest gravitational potential). At a certain critical rotational velocity, this point will become unstable and the bead will jump to one of two other newly created equilibria, equidistant from the center. Initially, the system is symmetric with respect to the diameter, yet after passing the critical velocity, the bead ends up in one of the two new equilibrium points, thus breaking the symmetry. The two-balloon experiment is an example of spontaneous symmetry breaking when both balloons are initially inflated to the local maximum pressure. When some air flows from one balloon into the other, the pressure in both balloons will drop, making the system more stable in the asymmetric state. In particle physics In particle physics, the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. "Hidden" is a better term than "broken", because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking (SSB) because nothing (that we know of) breaks the symmetry in the equations. By the nature of spontaneous symmetry breaking, different portions of the early Universe would break symmetry in different directions, leading to topological defects, such as two-dimensional domain walls, one-dimensional cosmic strings, zero-dimensional monopoles, and/or textures, depending on the relevant homotopy group and the dynamics of the theory. For example, Higgs symmetry breaking may have created primordial cosmic strings as a byproduct. Hypothetical GUT symmetry-breaking generically produces monopoles, creating difficulties for GUT unless monopoles (along with any GUT domain walls) are expelled from our observable Universe through cosmic inflation. Chiral symmetry Chiral symmetry breaking is an example of spontaneous symmetry breaking affecting the chiral symmetry of the strong interactions in particle physics. It is a property of quantum chromodynamics, the quantum field theory describing these interactions, and is responsible for the bulk of the mass (over 99%) of the nucleons, and thus of all common matter, as it converts very light bound quarks into 100 times heavier constituents of baryons. The approximate Nambu–Goldstone bosons in this spontaneous symmetry breaking process are the pions, whose mass is an order of magnitude lighter than the mass of the nucleons. It served as the prototype and significant ingredient of the Higgs mechanism underlying the electroweak symmetry breaking. Higgs mechanism The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries, which is a redundancy in the description of the symmetry. 
The Higgs mechanism, the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics. The term "spontaneous symmetry breaking" is a misnomer here, as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. Rather, after gauge fixing, the global symmetry (or redundancy) can be broken in a manner formally resembling spontaneous symmetry breaking. One important consequence of the distinction between true symmetries and gauge symmetries is that the massless Nambu–Goldstone modes resulting from the spontaneous breaking of a gauge symmetry are absorbed in the description of the gauge vector field, providing massive vector field modes, like the plasma mode in a superconductor, or the Higgs mode observed in particle physics. In the standard model of particle physics, spontaneous symmetry breaking of the gauge symmetry associated with the electroweak force generates masses for several particles, and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV, all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge. In addition, fermions develop mass consistently. Without spontaneous symmetry breaking, the Standard Model of elementary particle interactions requires the existence of a number of particles. However, some particles (the W and Z bosons) would then be predicted to be massless, when, in reality, they are observed to have mass. To overcome this, spontaneous symmetry breaking is augmented by the Higgs mechanism to give these particles mass. It also suggests the presence of a new particle, the Higgs boson, detected in 2012. Superconductivity of metals is a condensed-matter analog of the Higgs phenomenon, in which a condensate of Cooper pairs of electrons spontaneously breaks the U(1) gauge symmetry associated with light and electromagnetism. Dynamical symmetry breaking Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., Lagrangian). Dynamical breaking of a global symmetry is a spontaneous symmetry breaking which happens not at the (classical) tree level (i.e., at the level of the bare action), but due to quantum corrections (i.e., at the level of the effective action). Dynamical breaking of a gauge symmetry is subtler. In conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (e.g., the electroweak interactions). In dynamical gauge symmetry breaking, however, no unstable Higgs particle operates in the theory, but the bound states of the system itself provide the unstable fields that drive the phase transition. For example, Bardeen, Hill, and Lindner published a paper that attempts to replace the conventional Higgs mechanism in the standard model by a DSB that is driven by a bound state of top-antitop quarks. (Such models, in which a composite particle plays the role of the Higgs boson, are often referred to as "Composite Higgs models".) 
Dynamical breaking of gauge symmetries is often due to creation of a fermionic condensate — e.g., the quark condensate, which is connected to the dynamical breaking of chiral symmetry in quantum chromodynamics. Conventional superconductivity is the paradigmatic example from the condensed matter side, where phonon-mediated attractions lead electrons to become bound in pairs and then condense, thereby breaking the electromagnetic gauge symmetry. In condensed matter physics Most phases of matter can be understood through the lens of spontaneous symmetry breaking. For example, crystals are periodic arrays of atoms that are not invariant under all translations (only under a small subset of translations by a lattice vector). Magnets have north and south poles that are oriented in a specific direction, breaking rotational symmetry. In addition to these examples, there are a whole host of other symmetry-breaking phases of matter — including nematic phases of liquid crystals, charge- and spin-density waves, superfluids, and many others. There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including: topologically ordered phases of matter, such as fractional quantum Hall liquids, and spin-liquids. These states do not break any symmetry, but are distinct phases of matter. Unlike the case of spontaneous symmetry breaking, there is not a general framework for describing such states. Continuous symmetry The ferromagnet is the canonical system that spontaneously breaks the continuous symmetry of the spins below the Curie temperature and at , where h is the external magnetic field. Below the Curie temperature, the energy of the system is invariant under inversion of the magnetization m(x) such that . The symmetry is spontaneously broken as when the Hamiltonian becomes invariant under the inversion transformation, but the expectation value is not invariant. Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization. Spontaneous breaking of a continuous symmetry is inevitably accompanied by gapless (meaning that these modes do not cost any energy to excite) Nambu–Goldstone modes associated with slow, long-wavelength fluctuations of the order parameter. For example, vibrational modes in a crystal, known as phonons, are associated with slow density fluctuations of the crystal's atoms. The associated Goldstone mode for magnets are oscillating waves of spin known as spin-waves. For symmetry-breaking states, whose order parameter is not a conserved quantity, Nambu–Goldstone modes are typically massless and propagate at a constant velocity. An important theorem, due to Mermin and Wagner, states that, at finite temperature, thermally activated fluctuations of Nambu–Goldstone modes destroy the long-range order, and prevent spontaneous symmetry breaking in one- and two-dimensional systems. Similarly, quantum fluctuations of the order parameter prevent most types of continuous symmetry breaking in one-dimensional systems even at zero temperature. (An important exception is ferromagnets, whose order parameter, magnetization, is an exactly conserved quantity and does not have any quantum fluctuations.) Other long-range interacting systems, such as cylindrical curved surfaces interacting via the Coulomb potential or Yukawa potential, have been shown to break translational and rotational symmetries. 
It was shown, in the presence of a symmetric Hamiltonian, and in the limit of infinite volume, the system spontaneously adopts a chiral configuration — i.e., breaks mirror plane symmetry. Generalisation and technical usage For spontaneous symmetry breaking to occur, there must be a system in which there are several equally likely outcomes. The system as a whole is therefore symmetric with respect to these outcomes. However, if the system is sampled (i.e. if the system is actually used or interacted with in any way), a specific outcome must occur. Though the system as a whole is symmetric, it is never encountered with this symmetry, but only in one specific asymmetric state. Hence, the symmetry is said to be spontaneously broken in that theory. Nevertheless, the fact that each outcome is equally likely is a reflection of the underlying symmetry, which is thus often dubbed "hidden symmetry", and has crucial formal consequences. (See the article on the Goldstone boson.) When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. The theory must not dictate which member is distinct, only that one is. From this point on, the theory can be treated as if this element actually is distinct, with the proviso that any results found in this way must be resymmetrized, by taking the average of each of the elements of the group being the distinct one. The crucial concept in physics theories is the order parameter. If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value) which is not invariant under the symmetry in question, we say that the system is in the ordered phase, and the symmetry is spontaneously broken. This is because other subsystems interact with the order parameter, which specifies a "frame of reference" to be measured against. In that case, the vacuum state does not obey the initial symmetry (which would keep it invariant, in the linearly realized Wigner mode in which it would be a singlet), and, instead changes under the (hidden) symmetry, now implemented in the (nonlinear) Nambu–Goldstone mode. Normally, in the absence of the Higgs mechanism, massless Goldstone bosons arise. The symmetry group can be discrete, such as the space group of a crystal, or continuous (e.g., a Lie group), such as the rotational symmetry of space. However, if the system contains only a single spatial dimension, then only discrete symmetries may be broken in a vacuum state of the full quantum theory, although a classical solution may break a continuous symmetry. Nobel Prize On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a "just so" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon. 
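The order-parameter picture of the ferromagnet described under "In condensed matter physics" above can be illustrated with a short numerical sketch. The following Python snippet is an illustrative example only (the coupling strength, the chosen temperatures, and the function name are arbitrary choices, not taken from this article or its references): it iterates the standard mean-field self-consistency equation m = tanh((Jzm + h)/kT) and shows that the magnetization m remains finite as the external field h is taken to zero below the mean-field critical temperature, while it vanishes above it; this is the hallmark of spontaneous breaking of the spin-inversion symmetry.

```python
import math

def mean_field_magnetization(T, h, Jz=1.0, tol=1e-12, max_iter=10000):
    """Solve m = tanh((Jz*m + h)/T) by fixed-point iteration (units with k_B = 1)."""
    m = 0.5  # small symmetry-breaking seed; its sign selects which broken state is reached
    for _ in range(max_iter):
        m_new = math.tanh((Jz * m + h) / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# The mean-field critical (Curie) temperature is T_c = Jz in these units.
for T in (0.5, 0.9, 1.1, 1.5):
    # Take the external field to zero from above; below T_c the limit stays nonzero.
    m_limit = mean_field_magnetization(T, h=1e-9)
    print(f"T = {T:>4}: m(h -> 0+) ≈ {m_limit:.4f}")
```

In this sketch the tiny residual field and the positive seed play the role of the infinitesimal perturbation that selects one of the two equivalent broken-symmetry states; repeating the calculation with a negative seed would converge to the opposite magnetization.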
See also Autocatalytic reactions and order creation Catastrophe theory Chiral symmetry breaking CP-violation Fermi ball Gauge gravitation theory Goldstone boson Grand unified theory Higgs mechanism Higgs boson Higgs field (classical) Irreversibility Magnetic catalysis of chiral symmetry breaking Mermin–Wagner theorem Norton's dome Second-order phase transition Spontaneous absolute asymmetric synthesis in chemistry Symmetry breaking Tachyon condensation 1964 PRL symmetry breaking papers Notes Note that (as in fundamental Higgs driven spontaneous gauge symmetry breaking) the term "symmetry breaking" is a misnomer when applied to gauge symmetries. References External links For a pedagogic introduction to electroweak symmetry breaking with step by step derivations, not found in texts, of many key relations, see http://www.quantumfieldtheory.info/Electroweak_Sym_breaking.pdf Spontaneous symmetry breaking Physical Review Letters – 50th Anniversary Milestone Papers In CERN Courier, Steven Weinberg reflects on spontaneous symmetry breaking Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia History of Englert–Brout–Higgs–Guralnik–Hagen–Kibble Mechanism on Scholarpedia The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles International Journal of Modern Physics A: The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles Guralnik, G S; Hagen, C R and Kibble, T W B (1967). Broken Symmetries and the Goldstone Theorem. Advances in Physics, vol. 2 Interscience Publishers, New York. pp. 567–708 Spontaneous Symmetry Breaking in Gauge Theories: a Historical Survey The Royal Society Publishing: Spontaneous symmetry breaking in gauge theories University of Cambridge, David Tong: Lectures on Quantum Field Theory for masters level students. Theoretical physics Quantum field theory Standard Model Quantum chromodynamics Symmetry Quantum phases
Spontaneous symmetry breaking
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,533
[ "Quantum phases", "Quantum field theory", "Standard Model", "Symmetry", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Particle physics", "Condensed matter physics", "Geometry", "Matter" ]
298,448
https://en.wikipedia.org/wiki/Cosmochemistry
Cosmochemistry or chemical cosmology is the study of the chemical composition of matter in the universe and the processes that led to those compositions. This is done primarily through the study of the chemical composition of meteorites and other physical samples. Given that the asteroid parent bodies of meteorites were some of the first solid material to condense from the early solar nebula, cosmochemists are generally, but not exclusively, concerned with the objects contained within the Solar System. History In 1938, Swiss mineralogist Victor Goldschmidt and his colleagues compiled a list of what they called "cosmic abundances" based on their analysis of several terrestrial and meteorite samples. Goldschmidt justified the inclusion of meteorite composition data into his table by claiming that terrestrial rocks were subjected to a significant amount of chemical change due to the inherent processes of the Earth and the atmosphere. This meant that studying terrestrial rocks exclusively would not yield an accurate overall picture of the chemical composition of the cosmos. Therefore, Goldschmidt concluded that extraterrestrial material must also be included to produce more accurate and robust data. This research is considered to be the foundation of modern cosmochemistry. During the 1950s and 1960s, cosmochemistry became more accepted as a science. Harold Urey, widely considered to be one of the fathers of cosmochemistry, engaged in research that eventually led to an understanding of the origin of the elements and the chemical abundance of stars. In 1956, Urey and his colleague, German scientist Hans Suess, published the first table of cosmic abundances to include isotopes, based on meteorite analysis. The continued refinement of analytical instrumentation throughout the 1960s, especially that of mass spectrometry, allowed cosmochemists to perform detailed analyses of the isotopic abundances of elements within meteorites. In 1960, John Reynolds determined, through the analysis of short-lived nuclides within meteorites, that the elements of the Solar System were formed before the Solar System itself, which began to establish a timeline of the processes of the early Solar System. Meteorites Meteorites are one of the most important tools that cosmochemists have for studying the chemical nature of the Solar System. Many meteorites come from material that is as old as the Solar System itself, and thus provide scientists with a record from the early solar nebula. Carbonaceous chondrites are especially primitive; that is, they have retained many of their chemical properties since their formation 4.56 billion years ago, and are therefore a major focus of cosmochemical investigations. The most primitive meteorites also contain a small amount of material (< 0.1%) which is now recognized to be presolar grains that are older than the Solar System itself, and which are derived directly from the remnants of the individual supernovae that supplied the dust from which the Solar System formed. These grains are recognizable from their exotic chemistry, which is alien to the Solar System (such as matrices of graphite, diamond, or silicon carbide). They also often have isotope ratios which are not those of the rest of the Solar System (in particular, the Sun), and which differ from each other, indicating sources in a number of different explosive supernova events. 
Meteorites also may contain interstellar dust grains, which have collected from non-gaseous elements in the interstellar medium, as one type of composite cosmic dust ("stardust"). Recent findings by NASA, based on studies of meteorites found on Earth, suggests DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. Comets On 30 July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67/P surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate and propionaldehyde. Research In 2004, scientists reported detecting the spectral signatures of anthracene and pyrene in the ultraviolet light emitted by the Red Rectangle nebula (no other such complex molecules had ever been found before in outer space). This discovery was considered a confirmation of a hypothesis that as nebulae of the same type as the Red Rectangle approach the ends of their lives, convection currents cause carbon and hydrogen in the nebulae's core to get caught in stellar winds, and radiate outward. As they cool, the atoms supposedly bond to each other in various ways and eventually form particles of a million or more atoms. The scientists inferred that since they discovered polycyclic aromatic hydrocarbons (PAHs)—which may have been vital in the formation of early life on Earth—in a nebula, by necessity they must originate in nebulae. In August 2009, NASA scientists identified one of the fundamental chemical building-blocks of life (the amino acid glycine) in a comet for the first time. In 2010, fullerenes (or "buckyballs") were detected in nebulae. Fullerenes have been implicated in the origin of life; according to astronomer Letizia Stanghellini, "It's possible that buckyballs from outer space provided seeds for life on Earth." In August 2011, findings by NASA, based on studies of meteorites found on Earth, suggests DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. On August 29, 2012, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". 
Further, as a result of these transformations, the PAHs lose their spectroscopic signature, which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." In 2013, the Atacama Large Millimeter Array (ALMA Project) confirmed that researchers had discovered an important pair of prebiotic molecules in icy particles in the interstellar medium (ISM). The chemicals were found in a giant cloud of gas about 25,000 light-years from Earth in the ISM; one may be a precursor to a key component of DNA, and the other may have a role in the formation of an important amino acid. Researchers found a molecule called cyanomethanimine, which produces adenine, one of the four nucleobases that form the "rungs" in the ladder-like structure of DNA. The other molecule, called ethanamine, is thought to play a role in forming alanine, one of the twenty amino acids in the genetic code. Previously, scientists thought such processes took place in the very tenuous gas between the stars. The new discoveries, however, suggest that the chemical formation sequences for these molecules occurred not in gas, but on the surfaces of ice grains in interstellar space. NASA ALMA scientist Anthony Remijan stated that finding these molecules in an interstellar gas cloud means that important building blocks for DNA and amino acids can 'seed' newly formed planets with the chemical precursors for life. In January 2014, NASA reported that current studies on the planet Mars by the Curiosity and Opportunity rovers will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective. In February 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. See also Abundance of the chemical elements Astrochemistry Extraterrestrial materials Geochemistry List of interstellar and circumstellar molecules Molecules in stars Nucleocosmochronology Stellar chemistry References External links Planetary Science Research Discoveries Educational journal with articles about cosmochemistry, meteorites, and planetary science Astrochemistry Astrophysics Planetary science
Cosmochemistry
[ "Physics", "Chemistry", "Astronomy" ]
1,980
[ "Astrophysics", "Astrochemistry", "Planetary science", "Astronomical sub-disciplines" ]
298,585
https://en.wikipedia.org/wiki/Beetle%20bank
In agriculture and horticulture, a beetle bank is a form of biological pest control. It is a strip, preferably raised, planted with grasses (bunch grasses) and/or perennial plants, within a crop field or a garden, that fosters and provides habitat for beneficial insects, birds, and other fauna that prey on pests. Usage Beetle banks are typically made up from plants such as sunflowers, Vicia faba, Centaurea cyanus, coriander, borage, Muhlenbergia, Stipa, and buckwheats (Eriogonum spp.). Beetle banks are used to reduce or replace the use of insecticides, and can also serve as habitat for birds and beneficial rodents. For example, insects such as Chrysoperla carnea and the Ichneumon fly can prey on pests. The concept was developed by the Game & Wildlife Conservation Trust in collaboration with the University of Southampton. Other important benefits can be providing habitat for pollinators and endangered species. If using local native plants, endemic and indigenous flora and fauna restoration ecology is supported. History of the term According to a March 2005 draft entry for the Oxford English Dictionary, the term first came into use in the early 1990s, with published examples including the August 22, 1992 issue of the New Scientist and an October 12, 1994 reference in The Guardian society section: ‘Beetle banks’, a recent initiative by the Game Conservancy Trust, would also help encourage ground-nesting birds while creating cover for aphid-eating bugs with more pay-off in savings on aphicides. See also Beneficial insects Buffer strip Insect hotel List of companion plants List of pest-repelling plants References External links Beetle bank, from the Game & Wildlife Conservation Trust website Create and maintain beetle banks, from the DEFRA website, 2021 Biological pest control beetles Sustainable agriculture Ecological restoration 1990s neologisms
Beetle bank
[ "Chemistry", "Engineering" ]
388
[ "Ecological restoration", "Environmental engineering" ]
298,828
https://en.wikipedia.org/wiki/Affine%20group
In mathematics, the affine group or general affine group of any affine space is the group of all invertible affine transformations from the space into itself. In the case of a Euclidean space (where the associated field of scalars is the real numbers), the affine group consists of those functions from the space to itself such that the image of every line is a line. Over any field, the affine group may be viewed as a matrix group in a natural way. If the associated field of scalars is the real or complex field, then the affine group is a Lie group. Relation to general linear group Construction from general linear group Concretely, given a vector space , it has an underlying affine space obtained by "forgetting" the origin, with acting by translations, and the affine group of can be described concretely as the semidirect product of by , the general linear group of : The action of on is the natural one (linear transformations are automorphisms), so this defines a semidirect product. In terms of matrices, one writes: where here the natural action of on is matrix multiplication of a vector. Stabilizer of a point Given the affine group of an affine space , the stabilizer of a point is isomorphic to the general linear group of the same dimension (so the stabilizer of a point in is isomorphic to ); formally, it is the general linear group of the vector space : recall that if one fixes a point, an affine space becomes a vector space. All these subgroups are conjugate, where conjugation is given by translation from to (which is uniquely defined), however, no particular subgroup is a natural choice, since no point is special – this corresponds to the multiple choices of transverse subgroup, or splitting of the short exact sequence In the case that the affine group was constructed by starting with a vector space, the subgroup that stabilizes the origin (of the vector space) is the original . Matrix representation Representing the affine group as a semidirect product of by , then by construction of the semidirect product, the elements are pairs , where is a vector in and is a linear transform in , and multiplication is given by This can be represented as the block matrix where is an matrix over , an column vector, 0 is a row of zeros, and 1 is the identity block matrix. Formally, is naturally isomorphic to a subgroup of , with embedded as the affine plane , namely the stabilizer of this affine plane; the above matrix formulation is the (transpose of the) realization of this, with the and blocks corresponding to the direct sum decomposition . A similar representation is any matrix in which the entries in each column sum to 1. The similarity for passing from the above kind to this kind is the identity matrix with the bottom row replaced by a row of all ones. Each of these two classes of matrices is closed under matrix multiplication. The simplest paradigm may well be the case , that is, the upper triangular matrices representing the affine group in one dimension. It is a two-parameter non-Abelian Lie group, so with merely two generators (Lie algebra elements), and , such that , where so that Character table of has order . Since we know has conjugacy classes, namely Then we know that has irreducible representations. By above paragraph (), there exist one-dimensional representations, decided by the homomorphism for , where and , , is a generator of the group . Then compare with the order of , we have hence is the dimension of the last irreducible representation. 
Finally using the orthogonality of irreducible representations, we can complete the character table of : Planar affine group over the reals The elements of can take a simple form on a well-chosen affine coordinate system. More precisely, given an affine transformation of an affine plane over the reals, an affine coordinate system exists on which it has one of the following forms, where , , and are real numbers (the given conditions insure that transformations are invertible, but not for making the classes distinct; for example, the identity belongs to all the classes). Case 1 corresponds to translations. Case 2 corresponds to scalings that may differ in two different directions. When working with a Euclidean plane these directions need not be perpendicular, since the coordinate axes need not be perpendicular. Case 3 corresponds to a scaling in one direction and a translation in another one. Case 4 corresponds to a shear mapping combined with a dilation. Case 5 corresponds to a shear mapping combined with a dilation. Case 6 corresponds to similarities when the coordinate axes are perpendicular. The affine transformations without any fixed point belong to cases 1, 3, and 5. The transformations that do not preserve the orientation of the plane belong to cases 2 (with ) or 3 (with ). The proof may be done by first remarking that if an affine transformation has no fixed point, then the matrix of the associated linear map has an eigenvalue equal to one, and then using the Jordan normal form theorem for real matrices. Other affine groups and subgroups General case Given any subgroup of the general linear group, one can produce an affine group, sometimes denoted , analogously as . More generally and abstractly, given any group and a representation of on a vector space , one gets an associated affine group : one can say that the affine group obtained is "a group extension by a vector representation", and, as above, one has the short exact sequence Special affine group The subset of all invertible affine transformations that preserve a fixed volume form up to sign is called the special affine group. (The transformations themselves are sometimes called equiaffinities.) This group is the affine analogue of the special linear group. In terms of the semi-direct product, the special affine group consists of all pairs with , that is, the affine transformations where is a linear transformation of whose determinant has absolute value 1 and is any fixed translation vector. The subgroup of the special affine group consisting of those transformations whose linear part has determinant 1 is the group of orientation- and volume-preserving maps. Algebraically, this group is a semidirect product of the special linear group of with the translations. It is generated by the shear mappings. Projective subgroup Presuming knowledge of projectivity and the projective group of projective geometry, the affine group can be easily specified. For example, Günter Ewald wrote: The set of all projective collineations of is a group which we may call the projective group of . If we proceed from to the affine space by declaring a hyperplane to be a hyperplane at infinity, we obtain the affine group of as the subgroup of consisting of all elements of that leave fixed. Isometries of Euclidean space When the affine space is a Euclidean space (over the field of real numbers), the group of distance-preserving maps (isometries) of is a subgroup of the affine group. 
Algebraically, this group is a semidirect product of the orthogonal group of with the translations. Geometrically, it is the subgroup of the affine group generated by the orthogonal reflections. Poincaré group The Poincaré group is the affine group of the Lorentz group : This example is very important in relativity. See also Affine Coxeter group – certain discrete subgroups of the affine group on a Euclidean space that preserve a lattice Holomorph Notes References Affine geometry Group theory Lie groups
Affine group
[ "Mathematics" ]
1,560
[ "Lie groups", "Mathematical structures", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
298,834
https://en.wikipedia.org/wiki/Affine%20space
In mathematics, an affine space is a geometric structure that generalizes some of the properties of Euclidean spaces in such a way that these are independent of the concepts of distance and measure of angles, keeping only the properties related to parallelism and ratio of lengths for parallel line segments. Affine space is the setting for affine geometry. As in Euclidean space, the fundamental objects in an affine space are called points, which can be thought of as locations in the space without any size or shape: zero-dimensional. Through any pair of points an infinite straight line can be drawn, a one-dimensional set of points; through any three points that are not collinear, a two-dimensional plane can be drawn; and, in general, through points in general position, a -dimensional flat or affine subspace can be drawn. Affine space is characterized by a notion of pairs of parallel lines that lie within the same plane but never meet each-other (non-parallel lines within the same plane intersect in a point). Given any line, a line parallel to it can be drawn through any point in the space, and the equivalence class of parallel lines are said to share a direction. Unlike for vectors in a vector space, in an affine space there is no distinguished point that serves as an origin. There is no predefined concept of adding or multiplying points together, or multiplying a point by a scalar number. However, for any affine space, an associated vector space can be constructed from the differences between start and end points, which are called free vectors, displacement vectors, translation vectors or simply translations. Likewise, it makes sense to add a displacement vector to a point of an affine space, resulting in a new point translated from the starting point by that vector. While points cannot be arbitrarily added together, it is meaningful to take affine combinations of points: weighted sums with numerical coefficients summing to 1, resulting in another point. These coefficients define a barycentric coordinate system for the flat through the points. Any vector space may be viewed as an affine space; this amounts to "forgetting" the special role played by the zero vector. In this case, elements of the vector space may be viewed either as points of the affine space or as displacement vectors or translations. When considered as a point, the zero vector is called the origin. Adding a fixed vector to the elements of a linear subspace (vector subspace) of a vector space produces an affine subspace of the vector space. One commonly says that this affine subspace has been obtained by translating (away from the origin) the linear subspace by the translation vector (the vector added to all the elements of the linear space). In finite dimensions, such an affine subspace is the solution set of an inhomogeneous linear system. The displacement vectors for that affine space are the solutions of the corresponding homogeneous linear system, which is a linear subspace. Linear subspaces, in contrast, always contain the origin of the vector space. The dimension of an affine space is defined as the dimension of the vector space of its translations. An affine space of dimension one is an affine line. An affine space of dimension 2 is an affine plane. An affine subspace of dimension in an affine space or a vector space of dimension is an affine hyperplane. 
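The special role played by weights that sum to 1 can be checked directly with coordinates. The following Python sketch is a minimal illustration (the sample points, the weights, and the helper name combine are invented for the example): it describes three points of the plane relative to two different choices of origin and verifies that an affine combination (weights summing to 1) designates the same point for both observers, whereas a generic linear combination of coordinates does not.

```python
import numpy as np

def combine(coords, weights):
    """Weighted sum of coordinate rows (a combination taken on raw coordinates)."""
    return np.einsum("i,ij->j", np.asarray(weights, float), np.asarray(coords, float))

# Three points of the plane, given by their coordinates relative to an origin O.
points_from_O = np.array([[1.0, 2.0], [4.0, 0.0], [0.0, 3.0]])

# The same three points, described by an observer using O' = O + (5, -7) as origin.
shift = np.array([5.0, -7.0])
points_from_Oprime = points_from_O - shift

affine_weights = [0.2, 0.3, 0.5]   # sums to 1 -> an affine combination (a barycenter)
linear_weights = [0.2, 0.3, 0.4]   # does not sum to 1 -> origin-dependent

# Affine combination: both observers point at the same location in space.
p  = combine(points_from_O, affine_weights)
p2 = combine(points_from_Oprime, affine_weights) + shift   # convert back to O's frame
assert np.allclose(p, p2)

# Generic linear combination of coordinates: the two observers disagree.
q  = combine(points_from_O, linear_weights)
q2 = combine(points_from_Oprime, linear_weights) + shift
assert not np.allclose(q, q2)
print("affine combination agrees for both origins:", p, p2)
```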
Informal description The following characterization may be easier to understand than the usual formal definition: an affine space is what is left of a vector space after one has forgotten which point is the origin (or, in the words of the French mathematician Marcel Berger, "An affine space is nothing more than a vector space whose origin we try to forget about, by adding translations to the linear maps"). Imagine that Alice knows that a certain point is the actual origin, but Bob believes that another point—call it —is the origin. Two vectors, and , are to be added. Bob draws an arrow from point to point and another arrow from point to point , and completes the parallelogram to find what Bob thinks is , but Alice knows that he has actually computed . Similarly, Alice and Bob may evaluate any linear combination of and , or of any finite set of vectors, and will generally get different answers. However, if the sum of the coefficients in a linear combination is 1, then Alice and Bob will arrive at the same answer. If Alice travels to then Bob can similarly travel to . Under this condition, for all coefficients , Alice and Bob describe the same point with the same linear combination, despite using different origins. While only Alice knows the "linear structure", both Alice and Bob know the "affine structure"—i.e. the values of affine combinations, defined as linear combinations in which the sum of the coefficients is 1. A set with an affine structure is an affine space. Definition While affine space can be defined axiomatically (see below), analogously to the definition of Euclidean space implied by Euclid's Elements, for convenience most modern sources define affine spaces in terms of the well developed vector space theory. An affine space is a set together with a vector space , and a transitive and free action of the additive group of on the set . The elements of the affine space are called points. The vector space is said to be associated to the affine space, and its elements are called vectors, translations, or sometimes free vectors. Explicitly, the definition above means that the action is a mapping, generally denoted as an addition, that has the following properties. Right identity: , where is the zero vector in Associativity: (here the last is the addition in ) Free and transitive action: For every , the mapping is a bijection. The first two properties are simply defining properties of a (right) group action. The third property characterizes free and transitive actions, the onto character coming from transitivity, and then the injective character follows from the action being free. There is a fourth property that follows from 1, 2 above: Existence of one-to-one translations For all , the mapping is a bijection. Property 3 is often used in the following equivalent form (the 5th property). Subtraction: For every in , there exists a unique , denoted , such that . Another way to express the definition is that an affine space is a principal homogeneous space for the action of the additive group of a vector space. Homogeneous spaces are, by definition, endowed with a transitive group action, and for a principal homogeneous space, such a transitive action is, by definition, free. Subtraction and Weyl's axioms The properties of the group action allows for the definition of subtraction for any given ordered pair of points in , producing a vector of . 
This vector, denoted or , is defined to be the unique vector in such that Existence follows from the transitivity of the action, and uniqueness follows because the action is free. This subtraction has the two following properties, called Weyl's axioms: , there is a unique point such that The parallelogram property is satisfied in affine spaces, where it is expressed as: given four points the equalities and are equivalent. This results from the second Weyl's axiom, since Affine spaces can be equivalently defined as a point set , together with a vector space , and a subtraction satisfying Weyl's axioms. In this case, the addition of a vector to a point is defined from the first of Weyl's axioms. Affine subspaces and parallelism An affine subspace (also called, in some contexts, a linear variety, a flat, or, over the real numbers, a linear manifold) of an affine space is a subset of such that, given a point , the set of vectors is a linear subspace of . This property, which does not depend on the choice of , implies that is an affine space, which has as its associated vector space. The affine subspaces of are the subsets of of the form where is a point of , and a linear subspace of . The linear subspace associated with an affine subspace is often called its , and two subspaces that share the same direction are said to be parallel. This implies the following generalization of Playfair's axiom: Given a direction , for any point of there is one and only one affine subspace of direction , which passes through , namely the subspace . Every translation maps any affine subspace to a parallel subspace. The term parallel is also used for two affine subspaces such that the direction of one is included in the direction of the other. Affine map Given two affine spaces and whose associated vector spaces are and , an affine map or affine homomorphism from to is a map such that is a well defined linear map. By being well defined is meant that implies . This implies that, for a point and a vector , one has Therefore, since for any given in , for a unique , is completely defined by its value on a single point and the associated linear map . Endomorphisms An affine transformation or endomorphism of an affine space is an affine map from that space to itself. One important family of examples is the translations: given a vector , the translation map that sends for every in is an affine map. Another important family of examples are the linear maps centred at an origin: given a point and a linear map , one may define an affine map by for every in . After making a choice of origin , any affine map may be written uniquely as a combination of a translation and a linear map centred at . Vector spaces as affine spaces Every vector space may be considered as an affine space over itself. This means that every element of may be considered either as a point or as a vector. This affine space is sometimes denoted for emphasizing the double role of the elements of . When considered as a point, the zero vector is commonly denoted (or , when upper-case letters are used for points) and called the origin. If is another affine space over the same vector space (that is ) the choice of any point in defines a unique affine isomorphism, which is the identity of and maps to . In other words, the choice of an origin in allows us to identify and up to a canonical isomorphism. The counterpart of this property is that the affine space may be identified with the vector space in which "the place of the origin has been forgotten". 
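A small numerical check can make the preceding definitions concrete. The sketch below is an illustration only (the use of NumPy, the random data, and the chosen names are assumptions of the example, not part of the article): taking the affine space to be R³ acted on by its own vector space, it verifies the action axioms and Weyl's subtraction, checks that an affine map f(x) = Ax + b acts on differences of points through its linear part A, and shows the familiar device of packing such a map into an (n + 1) × (n + 1) matrix acting on homogeneous coordinates, a form discussed further below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
p = rng.standard_normal(n)          # a point, identified by its coordinates in some frame
v, w = rng.standard_normal((2, n))  # two translation vectors

# Group-action axioms of an affine space (here R^3 viewed as an affine space over itself).
assert np.allclose(p + np.zeros(n), p)         # right identity
assert np.allclose((p + v) + w, p + (v + w))   # compatibility with vector addition
assert np.allclose((p + v) - p, v)             # Weyl subtraction recovers the vector

# An affine map f(x) = A x + b: a linear part A followed by a translation b.
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def f(x):
    """The affine map x -> A x + b."""
    return A @ x + b

# The associated linear map acts on differences of points: f(p + v) - f(p) = A v.
assert np.allclose(f(p + v) - f(p), A @ v)

# Homogeneous-coordinate form: an (n+1) x (n+1) matrix whose last row is (0, ..., 0, 1).
M = np.block([[A, b[:, None]], [np.zeros((1, n)), np.ones((1, 1))]])
ph = np.append(p, 1.0)              # the point in homogeneous coordinates
assert np.allclose((M @ ph)[:n], f(p))
print("affine-map identities verified")
```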
Relation to Euclidean spaces Definition of Euclidean spaces Euclidean spaces (including the one-dimensional line, two-dimensional plane, and three-dimensional space commonly studied in elementary geometry, as well as higher-dimensional analogues) are affine spaces. Indeed, in most modern definitions, a Euclidean space is defined to be an affine space, such that the associated vector space is a real inner product space of finite dimension, that is a vector space over the reals with a positive-definite quadratic form . The inner product of two vectors and is the value of the symmetric bilinear form The usual Euclidean distance between two points and is In older definition of Euclidean spaces through synthetic geometry, vectors are defined as equivalence classes of ordered pairs of points under equipollence (the pairs and are equipollent if the points (in this order) form a parallelogram). It is straightforward to verify that the vectors form a vector space, the square of the Euclidean distance is a quadratic form on the space of vectors, and the two definitions of Euclidean spaces are equivalent. Affine properties In Euclidean geometry, the common phrase "affine property" refers to a property that can be proved in affine spaces, that is, it can be proved without using the quadratic form and its associated inner product. In other words, an affine property is a property that does not involve lengths and angles. Typical examples are parallelism, and the definition of a tangent. A non-example is the definition of a normal. Equivalently, an affine property is a property that is invariant under affine transformations of the Euclidean space. Affine combinations and barycenter Let be a collection of points in an affine space, and be elements of the ground field. Suppose that . For any two points and one has Thus, this sum is independent of the choice of the origin, and the resulting vector may be denoted When , one retrieves the definition of the subtraction of points. Now suppose instead that the field elements satisfy . For some choice of an origin , denote by the unique point such that One can show that is independent from the choice of . Therefore, if one may write The point is called the barycenter of the for the weights . One says also that is an affine combination of the with coefficients . Examples When children find the answers to sums such as or by counting right or left on a number line, they are treating the number line as a one-dimensional affine space. Time can be modelled as a one-dimensional affine space. Specific points in time (such as a date on the calendar) are points in the affine space, while durations (such as a number of days) are displacements. The space of energies is an affine space for , since it is often not meaningful to talk about absolute energy, but it is meaningful to talk about energy differences. The vacuum energy when it is defined picks out a canonical origin. Physical space is often modelled as an affine space for in non-relativistic settings and in the relativistic setting. To distinguish them from the vector space these are sometimes called Euclidean spaces and . Any coset of a subspace of a vector space is an affine space over that subspace. 
In particular, a line in the plane that doesn't pass through the origin is an affine space that is not a vector space relative to the operations it inherits from , although it can be given a canonical vector space structure by picking the point closest to the origin as the zero vector; likewise in higher dimensions and for any normed vector space If is a matrix and lies in its column space, the set of solutions of the equation is an affine space over the subspace of solutions of . The solutions of an inhomogeneous linear differential equation form an affine space over the solutions of the corresponding homogeneous linear equation. Generalizing all of the above, if is a linear map and lies in its image, the set of solutions to the equation is a coset of the kernel of , and is therefore an affine space over . The space of (linear) complementary subspaces of a vector subspace in a vector space is an affine space, over . That is, if is a short exact sequence of vector spaces, then the space of all splittings of the exact sequence naturally carries the structure of an affine space over . The space of connections (viewed from the vector bundle , where is a smooth manifold) is an affine space for the vector space of valued 1-forms. The space of connections (viewed from the principal bundle ) is an affine space for the vector space of -valued 1-forms, where is the associated adjoint bundle. Affine span and bases For any non-empty subset of an affine space , there is a smallest affine subspace that contains it, called the affine span of . It is the intersection of all affine subspaces containing , and its direction is the intersection of the directions of the affine subspaces that contain . The affine span of is the set of all (finite) affine combinations of points of , and its direction is the linear span of the for and in . If one chooses a particular point , the direction of the affine span of is also the linear span of the for in . One says also that the affine span of is generated by and that is a generating set of its affine span. A set of points of an affine space is said to be or, simply, independent, if the affine span of any strict subset of is a strict subset of the affine span of . An or barycentric frame (see , below) of an affine space is a generating set that is also independent (that is a minimal generating set). Recall that the dimension of an affine space is the dimension of its associated vector space. The bases of an affine space of finite dimension are the independent subsets of elements, or, equivalently, the generating subsets of elements. Equivalently, } is an affine basis of an affine space if and only if } is a linear basis of the associated vector space. Coordinates There are two strongly related kinds of coordinate systems that may be defined on affine spaces. Barycentric coordinates Let be an affine space of dimension over a field , and be an affine basis of . The properties of an affine basis imply that for every in there is a unique -tuple of elements of such that and The are called the barycentric coordinates of over the affine basis . If the are viewed as bodies that have weights (or masses) , the point is thus the barycenter of the , and this explains the origin of the term barycentric coordinates. The barycentric coordinates define an affine isomorphism between the affine space and the affine subspace of defined by the equation . For affine spaces of infinite dimension, the same definition applies, using only finite sums. 
This means that for each point, only a finite number of coordinates are non-zero. Affine coordinates An affine frame is a coordinate frame of an affine space, consisting of a point, called the origin, and a linear basis of the associated vector space. More precisely, for an affine space with associated vector space , the origin belongs to , and the linear basis is a basis of (for simplicity of the notation, we consider only the case of finite dimension, the general case is similar). For each point of , there is a unique sequence of elements of the ground field such that or equivalently The are called the affine coordinates of over the affine frame . Example: In Euclidean geometry, Cartesian coordinates are affine coordinates relative to an orthonormal frame, that is an affine frame such that is an orthonormal basis. Relationship between barycentric and affine coordinates Barycentric coordinates and affine coordinates are strongly related, and may be considered as equivalent. In fact, given a barycentric frame one deduces immediately the affine frame and, if are the barycentric coordinates of a point over the barycentric frame, then the affine coordinates of the same point over the affine frame are Conversely, if is an affine frame, then is a barycentric frame. If are the affine coordinates of a point over the affine frame, then its barycentric coordinates over the barycentric frame are Therefore, barycentric and affine coordinates are almost equivalent. In most applications, affine coordinates are preferred, as involving less coordinates that are independent. However, in the situations where the important points of the studied problem are affinely independent, barycentric coordinates may lead to simpler computation, as in the following example. Example of the triangle The vertices of a non-flat triangle form an affine basis of the Euclidean plane. The barycentric coordinates allows easy characterization of the elements of the triangle that do not involve angles or distances: The vertices are the points of barycentric coordinates , and . The lines supporting the edges are the points that have a zero coordinate. The edges themselves are the points that have one zero coordinate and two nonnegative coordinates. The interior of the triangle are the points whose coordinates are all positive. The medians are the points that have two equal coordinates, and the centroid is the point of coordinates . Change of coordinates Case of barycentric coordinates Barycentric coordinates are readily changed from one basis to another. Let and be affine bases of . For every in there is some tuple for which Similarly, for every from the first basis, we now have in the second basis for some tuple . Now we can rewrite our expression in the first basis as one in the second with giving us coordinates in the second basis as the tuple . Case of affine coordinates Affine coordinates are also readily changed from one basis to another. Let , and , be affine frames of . For each point of , there is a unique sequence of elements of the ground field such that and similarly, for every from the first basis, we now have in the second basis for tuple and tuples . Now we can rewrite our expression in the first basis as one in the second with giving us coordinates in the second basis as the tuple . 
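The correspondence between barycentric and affine coordinates, and the characterization of the centroid, can be verified with a short computation. The following Python sketch is illustrative only (the triangle, the test point, and the function name barycentric are invented for the example): it obtains the barycentric coordinates of a point with respect to a triangle by solving the linear system that writes the point as a weighted sum of the vertices with weights summing to 1, checks that the last two barycentric coordinates agree with the affine coordinates in the frame whose origin is the first vertex and whose basis consists of the edge vectors from it, and confirms that the centroid has barycentric coordinates (1/3, 1/3, 1/3).

```python
import numpy as np

def barycentric(p, v0, v1, v2):
    """Barycentric coordinates (l0, l1, l2) of p with respect to the triangle (v0, v1, v2)."""
    # Solve l0*v0 + l1*v1 + l2*v2 = p together with l0 + l1 + l2 = 1.
    A = np.array([[v0[0], v1[0], v2[0]],
                  [v0[1], v1[1], v2[1]],
                  [1.0,   1.0,   1.0 ]])
    return np.linalg.solve(A, np.array([p[0], p[1], 1.0]))

v0, v1, v2 = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])
p = np.array([1.0, 1.0])

l = barycentric(p, v0, v1, v2)
assert np.allclose(l.sum(), 1.0)
assert np.allclose(l[0] * v0 + l[1] * v1 + l[2] * v2, p)

# Affine coordinates of p in the frame with origin v0 and basis (v1 - v0, v2 - v0):
# they coincide with the last two barycentric coordinates (l1, l2).
affine = np.linalg.solve(np.column_stack([v1 - v0, v2 - v0]), p - v0)
assert np.allclose(affine, l[1:])

# The centroid is the point of barycentric coordinates (1/3, 1/3, 1/3).
centroid = (v0 + v1 + v2) / 3.0
assert np.allclose(barycentric(centroid, v0, v1, v2), [1/3, 1/3, 1/3])
print("barycentric coordinates of p:", l)
```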
Properties of affine homomorphisms Matrix representation An affine transformation is executed on a projective space of , by a 4 by 4 matrix with a special fourth column: The transformation is affine instead of linear due to the inclusion of point , the transformed output of which reveals the affine shift. Image and fibers Let be an affine homomorphism, with its associated linear map. The image of is the affine subspace of , which has as associated vector space. As an affine space does not have a zero element, an affine homomorphism does not have a kernel. However, the linear map does, and if we denote by its kernel, then for any point of , the inverse image of is an affine subspace of whose direction is . This affine subspace is called the fiber of . Projection An important example is the projection parallel to some direction onto an affine subspace. The importance of this example lies in the fact that Euclidean spaces are affine spaces, and that these kinds of projections are fundamental in Euclidean geometry. More precisely, given an affine space with associated vector space , let be an affine subspace of direction , and be a complementary subspace of in (this means that every vector of may be decomposed in a unique way as the sum of an element of and an element of ). For every point of , its projection to parallel to is the unique point in such that This is an affine homomorphism whose associated linear map is defined by for and in . The image of this projection is , and its fibers are the subspaces of direction . Quotient space Although kernels are not defined for affine spaces, quotient spaces are defined. This results from the fact that "belonging to the same fiber of an affine homomorphism" is an equivalence relation. Let be an affine space, and be a linear subspace of the associated vector space . The quotient of by is the quotient of by the equivalence relation such that and are equivalent if This quotient is an affine space, which has as associated vector space. For every affine homomorphism , the image is isomorphic to the quotient of by the kernel of the associated linear map. This is the first isomorphism theorem for affine spaces. Axioms Affine spaces are usually studied by analytic geometry using coordinates, or equivalently vector spaces. They can also be studied as synthetic geometry by writing down axioms, though this approach is much less common. There are several different systems of axioms for affine space. axiomatizes the special case of affine geometry over the reals as ordered geometry together with an affine form of Desargues's theorem and an axiom stating that in a plane there is at most one line through a given point not meeting a given line. Affine planes satisfy the following axioms : (in which two lines are called parallel if they are equal or disjoint): Any two distinct points lie on a unique line. Given a point and line there is a unique line that contains the point and is parallel to the line There exist three non-collinear points. As well as affine planes over fields (or division rings), there are also many non-Desarguesian planes satisfying these axioms. gives axioms for higher-dimensional affine spaces. Purely axiomatic affine geometry is more general than affine spaces and is treated in a separate article. Relation to projective spaces Affine spaces are contained in projective spaces. 
For example, an affine plane can be obtained from any projective plane by removing one line and all the points on it, and conversely any affine plane can be used to construct a projective plane as a closure by adding a line at infinity whose points correspond to equivalence classes of parallel lines. Similar constructions hold in higher dimensions. Further, transformations of projective space that preserve affine space (equivalently, that leave the hyperplane at infinity invariant as a set) yield transformations of affine space. Conversely, any affine linear transformation extends uniquely to a projective linear transformation, so the affine group is a subgroup of the projective group. For instance, Möbius transformations (transformations of the complex projective line, or Riemann sphere) are affine (transformations of the complex plane) if and only if they fix the point at infinity. Affine algebraic geometry In algebraic geometry, an affine variety (or, more generally, an affine algebraic set) is defined as the subset of an affine space that is the set of the common zeros of a set of so-called polynomial functions over the affine space. For defining a polynomial function over the affine space, one has to choose an affine frame. Then, a polynomial function is a function such that the image of any point is the value of some multivariate polynomial function of the coordinates of the point. As a change of affine coordinates may be expressed by linear functions (more precisely affine functions) of the coordinates, this definition is independent of a particular choice of coordinates. The choice of a system of affine coordinates for an affine space of dimension over a field induces an affine isomorphism between and the affine coordinate space . This explains why, for simplification, many textbooks write , and introduce affine algebraic varieties as the common zeros of polynomial functions over . As the whole affine space is the set of the common zeros of the zero polynomial, affine spaces are affine algebraic varieties. Ring of polynomial functions By the definition above, the choice of an affine frame of an affine space allows one to identify the polynomial functions on with polynomials in variables, the ith variable representing the function that maps a point to its th coordinate. It follows that the set of polynomial functions over is a -algebra, denoted , which is isomorphic to the polynomial ring . When one changes coordinates, the isomorphism between and changes accordingly, and this induces an automorphism of , which maps each indeterminate to a polynomial of degree one. It follows that the total degree defines a filtration of , which is independent from the choice of coordinates. The total degree defines also a graduation, but it depends on the choice of coordinates, as a change of affine coordinates may map indeterminates on non-homogeneous polynomials. Zariski topology Affine spaces over topological fields, such as the real or the complex numbers, have a natural topology. The Zariski topology, which is defined for affine spaces over any field, allows use of topological methods in any case. Zariski topology is the unique topology on an affine space whose closed sets are affine algebraic sets (that is sets of the common zeros of polynomial functions over the affine set). As, over a topological field, polynomial functions are continuous, every Zariski closed set is closed for the usual topology, if any. 
In other words, over a topological field, Zariski topology is coarser than the natural topology. There is a natural injective function from an affine space into the set of prime ideals (that is the spectrum) of its ring of polynomial functions. When affine coordinates have been chosen, this function maps the point of coordinates to the maximal ideal . This function is a homeomorphism (for the Zariski topology of the affine space and of the spectrum of the ring of polynomial functions) of the affine space onto the image of the function. The case of an algebraically closed ground field is especially important in algebraic geometry, because, in this case, the homeomorphism above is a map between the affine space and the set of all maximal ideals of the ring of functions (this is Hilbert's Nullstellensatz). This is the starting idea of scheme theory of Grothendieck, which consists, for studying algebraic varieties, of considering as "points", not only the points of the affine space, but also all the prime ideals of the spectrum. This allows gluing together algebraic varieties in a similar way as, for manifolds, charts are glued together for building a manifold. Cohomology Like all affine varieties, local data on an affine space can always be patched together globally: the cohomology of affine space is trivial. More precisely, for all coherent sheaves F, and integers . This property is also enjoyed by all other affine varieties. But also all of the étale cohomology groups on affine space are trivial. In particular, every line bundle is trivial. More generally, the Quillen–Suslin theorem implies that every algebraic vector bundle over an affine space is trivial. See also Notes References Affine geometry Linear algebra Space (mathematics)
Affine space
[ "Mathematics" ]
6,120
[ "Mathematical structures", "Mathematical objects", "Space (mathematics)", "Linear algebra", "Algebra" ]
298,975
https://en.wikipedia.org/wiki/Audion
The Audion was an electronic detecting or amplifying vacuum tube invented by American electrical engineer Lee de Forest as a diode in 1906. Improved, it was patented as the first triode in 1908, consisting of an evacuated glass tube containing three electrodes: a heated filament (the cathode, made out of tantalum), a grid, and a plate (the anode). It is important in the history of technology because it was the first widely used electronic device which could amplify. A low power signal at the grid could control much more power in the plate circuit. Audions had more residual gas than later vacuum tubes; the residual gas limited the dynamic range and gave the Audion non-linear characteristics and erratic performance. Originally developed as a radio receiver detector by adding a grid electrode to the Fleming valve, it found little use until its amplifying ability was recognized around 1912 by several researchers, who used it to build the first amplifying radio receivers and electronic oscillators. The many practical applications for amplification motivated its rapid development, and the original Audion was superseded within a few years by improved versions with a higher vacuum. History It had been known since the middle of the 19th century that gas flames were electrically conductive, and early wireless experimenters had noticed that this conductivity was affected by the presence of radio waves. De Forest found that gas in a partial vacuum heated by a conventional lamp filament behaved much the same way, and that if a wire were wrapped around the glass housing, the device could serve as a detector of radio signals. In his original design, a small metal plate was sealed into the lamp housing, and this was connected to the positive terminal of a 22–volt battery via a pair of headphones, the negative terminal being connected to one side of the lamp filament. When wireless signals were applied to the wire wrapped around the outside of the glass, they caused disturbances in the current which produced sounds in the headphones. This was a significant development as existing commercial wireless systems were heavily protected by patents; a new type of detector would allow de Forest to market his own system. He eventually discovered that connecting the antenna circuit to a third electrode placed directly in the space current path greatly improved the sensitivity; in his earliest versions, this was simply a piece of wire bent into the shape of a gridiron (hence grid). The Audion provided power gain; with other detectors, all of the power to operate the headphones had to come from the antenna circuit itself. Consequently, weak transmitters could be heard at greater distances. Patents and disputes De Forest and everybody else at the time greatly underestimated the potential of his grid Audion, imagining it to be limited to mostly military applications. It is significant that de Forest apparently did not see its potential as a telephone repeater amplifier at the time he filed the patent claiming it, even though he had previously patented amplification devices and crude electromechanical note magnifiers had been the bane of the telephone industry for at least two decades. (Ironically, in the years of patent disputes leading up to World War I, it was only this "loophole" that allowed vacuum triodes to be manufactured at all since de Forest's grid Audion patent did not mention this application). 
De Forest was granted a patent for his early two-electrode diode version of the Audion on November 13, 1906, and the "triode" (three-electrode) version was patented in 1908. De Forest continued to claim that he developed the Audion independently from John Ambrose Fleming's earlier research on the thermionic valve (for which Fleming received Great Britain patent 24850 and the American Fleming valve patent), and de Forest became embroiled in many radio-related patent disputes. De Forest was famous for saying that he "didn't know why it worked, it just did". He always referred to the vacuum triodes developed by other researchers as "Oscillaudions", although there is no evidence that he had any significant input to their development. It is true that after the invention of the true vacuum triode in 1913 (see below), de Forest continued to manufacture various types of radio transmitting and receiving apparatus (examples of which are illustrated on this page). However, although he routinely described these devices as using "Audions", they actually used high-vacuum triodes, using circuitry very similar to that developed by other experimenters. In 1914, Columbia University student Edwin Howard Armstrong worked with professor John Harold Morecroft to document the electrical principles of the Audion. Armstrong published his explanation of the Audion in Electrical World in December 1914, complete with circuit diagrams and oscilloscope graphs. In March and April 1915, Armstrong spoke to the Institute of Radio Engineers in New York and Boston, respectively, presenting his paper "Some Recent Developments in the Audion Receiver", which was published in September. A combination of the two papers was reprinted in other journals such as the Annals of the New York Academy of Sciences. When Armstrong and de Forest later faced each other in a dispute over the regeneration patent, Armstrong was able to demonstrate conclusively that de Forest still had no idea how it worked. The problem was that (possibly to distance his invention from the Fleming valve) de Forest's original patents specified that low-pressure gas inside the Audion was essential to its operation (Audion being a contraction of "Audio-Ion"), and in fact early Audions had severe reliability problems due to this gas being adsorbed by the metal electrodes. The Audions sometimes worked extremely well; at other times they would barely work at all. As well as de Forest himself, numerous researchers had tried to find ways to improve the reliability of the device by stabilizing the partial vacuum. Much of the research that led to the development of true vacuum tubes was carried out by Irving Langmuir in the General Electric (GE) research laboratories. Kenotron and Pliotron Langmuir had long suspected that certain assumed limitations on the performance of various low-pressure and vacuum electrical devices might not be fundamental physical limitations at all, but simply due to contamination and impurities in the manufacturing process. His first success was in demonstrating that, contrary to what Edison and others had long asserted, incandescent lamps could function more efficiently and with longer life if the glass envelope was filled with low-pressure inert gas rather than a complete vacuum. However, this only worked if the gas used was meticulously "scrubbed" of all traces of oxygen and water vapor. He then applied the same approach to producing a rectifier for the newly developed "Coolidge" X-ray tubes.
Again contrary to what had been widely believed to be possible, by virtue of meticulous cleanliness and attention to detail, he was able to produce versions of the Fleming Diode that could rectify hundreds of thousands of volts. His rectifiers were called "Kenotrons" from the Greek keno (empty, contains nothing, as in a vacuum) and tron (device, instrument). He then turned his attention to the Audion tube, again suspecting that its notoriously unpredictable behaviour might be tamed with more care in the manufacturing process. However, he took a somewhat unorthodox approach. Instead of trying to stabilize the partial vacuum, he wondered if it was possible to make the Audion function with the total vacuum of a Kenotron, since that was somewhat easier to stabilize. He soon realized that his "vacuum" Audion had markedly different characteristics from the de Forest version, and was really a quite different device, capable of linear amplification and of operating at much higher frequencies. To distinguish his device from the Audion he named it the "Pliotron", from the Greek plio (more or extra, in this sense meaning gain, more signal coming out than went in). Essentially, he referred to all his vacuum tube designs as Kenotrons, the Pliotron basically being a specialized type of Kenotron. However, because Pliotron and Kenotron were registered trademarks, technical writers tended to use the more generic term "vacuum tube". By the mid-1920s, the term "Kenotron" had come to exclusively refer to vacuum tube rectifiers, while the term "Pliotron" had fallen into disuse. Ironically, in popular usage, the sound-alike brands "Radiotron" and "Ken-Rad" outlasted the original names. Applications and use De Forest continued to manufacture and supply Audions to the US Navy up until the early 1920s, for maintenance of existing equipment, but elsewhere they were regarded as well and truly obsolete by then. It was the vacuum triode that made practical radio broadcasts a reality. Prior to the introduction of the Audion, radio receivers had used a variety of detectors including coherers, barretters, and crystal detectors. The most popular crystal detector consisted of a small piece of galena crystal probed by a fine wire commonly referred to as a "cat's-whisker detector". These detectors were very unreliable, requiring frequent adjustment of the cat's whisker, and offered no amplification. Such systems usually required the user to listen to the signal through headphones, sometimes at very low volume, as the only energy available to operate the headphones was that picked up by the antenna. For long distance communication huge antennas were normally required, and enormous amounts of electrical power had to be fed into the transmitter. The Audion was a considerable improvement on this, but the original devices could not provide any amplification beyond what was produced in the signal detection process. The later vacuum triodes allowed the signal to be amplified to any desired level, typically by feeding the amplified output of one triode into the grid of the next, eventually providing more than enough power to drive a full-sized speaker. Apart from this, they were able to amplify the incoming radio signals prior to the detection process, making detection much more efficient. Vacuum tubes could also be used to make superior radio transmitters. The combination of much more efficient transmitters and much more sensitive receivers revolutionized radio communication during World War I.
By the late 1920s such "tube radios" began to become a fixture of most Western world households, and remained so until long after the introduction of transistor radios in the mid-1950s. In modern electronics, the vacuum tube has been largely superseded by solid state devices such as the transistor, invented in 1947 and implemented in integrated circuits in 1959, although vacuum tubes remain to this day in such applications as high-powered transmitters, guitar amplifiers and some high fidelity audio equipment. Application images References Further reading Where Good Ideas Come From, Chapter V, Steven Johnson, Riverhead Books, (2011). External links 1906 photograph of the original Audion tube, from New York Public Library Digital Gallery http://www.britannica.com/EBchecked/topic/1262240/radio-technology/25131/The-Fleming-diode-and-De-Forest-Audion. Reprint. (Includes comments from de Forest.) The Audion: A new Receiver for Wireless Telegraphy, Lee de Forest, Scientific American Supplement No. 1665, November 30, 1907, pages 348-350, Scientific American Supplement No. 1666, December 7, 1907, pages 354–356. Lee de Forest's Audion Piano on '120 years Of Electronic Music' https://books.google.com/books?id=YEASAAAAIAAJ&pg=PA166 de Forest and Armstrong debate Also page 43 stating, Regular Audion Detector Bulbs are not adapted for the reception of continuous waves, because the vacuum is not correct for the purpose and because the filaments must be operated at such a high intensity that they give very short service, making them unnecessarily expensive. Also page 44 stating, BLUE DISCHARGE OF GLOW: This appears in some Audion Bulbs and not in others. If allowed to persist, the vacuum automatically increases. For this reason the glow should not be allowed to appear and certainly not to continue, as the vacuum may rise to a very high value, requiring very high voltage in the “B” battery. Audiovisual introductions in 1906 Vacuum tubes American inventions sv:Elektronrör#Trioden
Audion
[ "Physics" ]
2,547
[ "Vacuum tubes", "Vacuum", "Matter" ]
299,120
https://en.wikipedia.org/wiki/Chairlift
An elevated passenger ropeway, or chairlift, is a type of aerial lift, which consists of a continuously circulating steel wire rope loop strung between two end terminals and usually over intermediate towers. They are the primary on-hill transport at most ski areas (in such cases referred to as 'ski lifts'), but are also found at amusement parks and various tourist attractions. Depending on carrier size and loading efficiency, a passenger ropeway can move up to 4,000 people per hour, and the fastest lifts achieve operating speeds of up to or . The two-person double chair, which for many years was the workhorse of the ski industry, can move roughly 1,200 people per hour at rope speeds of up to . The four-person detachable chairlift ("high-speed quad") can transport 2,400 people per hour with an average rope speed of . Some bi- and tri-cable elevated ropeways and reversible tramways achieve much greater operating speeds. Design and function A chairlift consists of numerous components to provide safe, efficient transport. Terminology At American ski areas, chairlifts are referred to with a ski industry vernacular. A one-person lift is a "single", a two-person lift is a "double", a three-person lift a "triple", four-person lifts are "quads", and a six-person lift is a "six pack". If the lift is a detachable chairlift, it is typically referred to as a "high-speed" or "express" lift, which results in an "express quad" or "high-speed six pack".
rope speed – the speed in meters per second or feet per minute/second at which the rope moves
[load] interval – the spacing between carriers, measured either by distance or time
capacity – the number of passengers the lift transports per hour
efficiency – the ratio of fully loaded carriers during peak operation, usually expressed as a percentage of capacity. Because fixed grip lifts move faster than detachables at load and unload, misloads (and missed unloads) are more frequent on fixed grips, and can reduce the efficiency as low as 80%.
fixed grip – each carrier is fastened to a fixed point on the rope
detachable grip – each carrier's grip opens and closes during regular operation, allowing detachment from the rope and slow travel for load and unload. Detachable grips allow a greater rope speed to be used, usually twice that of a fixed grip chair, while simultaneously having slower loading and unloading sections. See detachable chairlift.
The capacity of a lift is constrained by the motive power (prime mover), the rope speed, the carrier spacing, the vertical displacement, and the number of carriers on the rope (a function of the rope length); a worked sketch of this relationship appears below. Human passengers can load only so quickly until loading efficiency decreases; usually an interval of at least five seconds is needed. Rope The rope is the defining characteristic of an elevated passenger ropeway. The rope stretches and contracts as the tension exerted upon it increases and decreases, and it bends and flexes as it passes over sheaves and around the bullwheels. The fibre core contains a lubricant which protects the rope from corrosion and also allows for smooth flexing operation. The rope must be regularly lubricated to ensure safe operation and long life. Various techniques are used for constructing the rope. Dozens of wires are wound into a strand. Several strands are wound around a textile core, their twist oriented in the same or opposite direction as the individual wires; this is referred to as Lang lay and regular lay respectively.
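The capacity relationship described under Terminology above can be illustrated with a short calculation. This is a minimal sketch only; the rope speed, carrier spacing and efficiency figures are assumed for illustration and are not taken from any particular lift.

# Sketch: hourly capacity from rope speed, carrier spacing, seats per carrier and loading efficiency.
# capacity = (carriers passing the load point per hour) * seats per carrier * loading efficiency
def hourly_capacity(rope_speed_m_s, carrier_spacing_m, seats, efficiency=1.0):
    carriers_per_hour = rope_speed_m_s * 3600.0 / carrier_spacing_m
    return carriers_per_hour * seats * efficiency

# Assumed example: a detachable quad at 5 m/s with a chair every 30 m (a 6-second interval)
# and 90% loading efficiency.
print(round(hourly_capacity(5.0, 30.0, 4, 0.90)))  # about 2160 passengers per hour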
Rope is constructed in a linear fashion, and must be spliced together before carriers are affixed. Splicing involves unwinding long sections of either end of the rope, and then winding each strand from opposing ends around the core. Sections of rope must be removed, as the strands overlap during the splicing process. Terminals and towers Every lift involves at least two terminals and may also have intermediate supporting towers. A bullwheel in each terminal redirects the rope, while sheaves (pulley assemblies) on the towers support the rope well above the ground. The number of towers is engineered based on the length and strength of the rope, worst case environmental conditions, and the type of terrain traversed. The bullwheel with the prime mover is called the drive bullwheel; the other is the return bullwheel. Chairlifts are usually electrically powered, often with Diesel or gasoline engine backup, and sometimes a hand crank tertiary backup. Drive terminals can be located either at the top or the bottom of an installation; though the top-drive configuration is more efficient, practicalities of electric service might dictate bottom-drive. Braking systems The drive terminal is also the location of a lift's primary braking system. The service brake is located on the drive shaft beside the main drive, before the gearbox. The emergency brake acts directly on the bullwheel. While not technically a brake, an anti-rollback device (usually a cam) also acts on the bullwheel. This prevents the potentially disastrous situation of runaway reverse operation. Tensioning system The rope must be tensioned to compensate for sag caused by wind load and passenger weight, to accommodate variations in rope length due to temperature, and to maintain friction between the rope and the drive bullwheel. Tension is provided either by a counterweight system or by hydraulic or pneumatic rams, which adjust the position of the bullwheel carriage to maintain design tension. For most chairlifts, the tension is measured in tons. Prime mover and gearbox Either Diesel engines or electric motors can function as prime movers. The power can range from under 7.5 kW (10 hp) for the smallest of lifts, to more than 750 kW (1000 hp) for a long, swift, detachable eight-seat lift up a steep slope. DC electric motors and DC drives are the most common, though AC motors and AC drives are becoming economically competitive for certain smaller chairlift installations. DC drives are less expensive than AC variable-frequency drives and were used almost exclusively until the 21st century when costs of AC variable-frequency drive technology dropped. DC motors produce more starting torque than AC motors, so the application of AC motors on chairlifts is largely limited to smaller chairlift installations; otherwise the AC motor would need to be significantly oversized relative to the equivalent horsepower DC motor. The driveshaft turns at high RPM, but with low torque. The gearbox transforms high RPM/low torque rotation into a low RPM/high torque drive at the bullwheel. More power allows the lift to pull heavier loads or sustain a higher rope speed (the power of a force is the rate at which it does work, and is given by the product of the driving force and the cable velocity); a short sketch of this relation appears below. Secondary and auxiliary movers In most localities, the prime mover is required to have a backup drive; this is usually provided by a Diesel engine that can operate during power outages.
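The power relation quoted above (power equals driving force times cable velocity) can be sketched the same way; the force and speed figures below are assumed for illustration, not taken from any particular installation.

# Sketch: drive power needed at the rope, P = F * v (watts = newtons * metres per second).
def drive_power_kw(driving_force_n, rope_speed_m_s):
    return driving_force_n * rope_speed_m_s / 1000.0

# Assumed example: a 60 kN net driving force at 5 m/s requires about 300 kW at the rope;
# the motor must supply more once gearbox and sheave losses are included.
print(drive_power_kw(60_000, 5.0))  # 300.0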
The purpose of the backup is to permit clearing the rope to ensure the safety of passengers; it usually is much less powerful and is not used for normal operation. The secondary drive connects with the drive shaft before the gear box, usually with a chain coupling. Some chairlifts are also equipped with an auxiliary drive, to be used to continue regular operation in the event of a problem with the prime mover. Some lifts even have a hydrostatic coupling so the driveshaft of a snowcat can drive the chairlift. Carriers and grips Carriers are designed to seat 1, 2, 3, 4, 6, or 8 passengers. Each is connected to the cable with a steel cable grip that is either clamped onto or woven into the cable. Clamping systems use either a bolt system or coiled spring or magnets to provide clamping force. For maintenance or servicing, the carriers may be removed from or relocated along the rope by loosening the grip. Restraining bar Also called a retention bar or safety bar, this device may help hold passengers in the chair in the same way as a safety bar in an amusement park ride. If equipped, each chair has a retractable bar, sometimes with attached foot rests. In most configurations, a passenger may reach up and behind their head, grab the bar or a handle, and pull the restraint forward and down. Once the bar has swung sufficiently, gravity assists positioning the bar to its down limit. Before disembarking, the bar must be swung up, out of the way. The physics of a passenger sitting properly in a chairlift do not require use of a restraining bar. If the chairlift stops suddenly (as from use of the system emergency brake), the carrier's arm connecting to the grip pivots smoothly forward—driven by the chair's inertia—and maintains friction (and seating angle) between the seat and passenger. The restraining bar is useful for children—who do not fit comfortably into adult-sized chairs—as well as apprehensive passengers, and for those who are disinclined or unable to sit still. In addition, restraining bars with footrests reduce muscle fatigue from supporting the weight of a snowboard or skis, especially during long lift rides. The restraining bar is also useful in very strong wind and when the chair is coated by ice. Some ski areas mandate the use of safety bars on dangerous or windy lifts, with forfeiture of the lift ticket as a penalty. Vermont and Massachusetts state law also require the use of safety bars, as do most of Ontario and Quebec in Canada. Restraining bars (often with foot rests) are more common on chairlifts in Europe and are routinely used there by passengers of all ages. Some chairlifts have restraining bars that open and close automatically. Canopy Some lifts also have individual canopies which can be lowered to protect against inclement weather. The canopy, or bubble, is usually constructed of transparent acrylic glass or fiberglass. In most designs, passenger legs are unprotected; however in rain or strong wind this is considerably more comfortable than no canopy. Among the more notable bubble lifts are the Ramcharger 8 at Big Sky Resort, North America's first high-speed eight pack, and the American Flyer at Copper Mountain, a high-speed six pack that is the longest bubble lift in the world. Control system To maintain safe operation, the chairlift's control system monitors sensors and controls system parameters. Expected variances are compensated for; out-of-limit and dangerous conditions cause system shutdown.
In the unusual instance of system shutdown, inspection by technicians, repair or evacuation might be needed. Both fixed and detachable lifts have sensors to monitor rope speed and hold it within established limits for each defined system operating speed. Minimum and maximum rope tension and speed feedback redundancy are also monitored. Many—if not most—installations have numerous safety sensors which detect rare but potentially hazardous situations, such as the rope coming out of an individual sheave. Detachable chairlift control systems measure carrier grip tension during each detach and attach cycle, verify proper carrier spacing, and verify correct movement of the detached carriers through the terminals. Safety systems Aerial lifts have a variety of mechanisms to ensure safe operation over a lifetime often measured in decades. In June 1990, Winter Park Resort performed planned destructive safety testing on Eskimo, a 1963 Riblet Tramway Company two-chair, center-pole fixed grip lift, as it was slated for removal and replacement with a high-speed quad Poma lift. The destructive testing attempted to mimic potential real-life operating scenarios, including tests for braking, rollback, oily rope, tree on line, fire, and tower pull. The data gleaned from this destructive safety testing helped improve the safety and construction of both existing chairlifts and the next generation of designs. Braking As mentioned above, there are multiple redundant braking systems. When a Normal Stop is activated from the control panel, the lift will be slowed and stopped using regenerative braking through the electric motor and the service brake located on the high-speed shaft between the gearbox and electric motor. When an Emergency Stop is activated, all power is cut to the motor and the emergency brake or bull-wheel brake is activated. In the case of a rollback, some lifts utilize a ratchet-like system to prevent the bull-wheel from spinning backwards, while newer installations utilize sensors which activate one or more bull-wheel brakes. All braking systems are fail-safe in that a loss of power or hydraulic pressure will activate the brake. Older chairlifts, for example 1960s-era Riblet Tramway Company lifts, have a hydraulic release emergency brake with pressure maintained by a hydraulic solenoid. If the emergency brake/stop button is depressed at any control panel, the lift cannot be restarted until the hydraulic brake is hand-pumped to proper operating pressure. Brittle bars Some installations use brittle bars to detect several hazardous situations. Brittle bars alongside the sheaves detect the rope coming out of the track. They may also be placed to detect counterweight or hydraulic ram movement beyond safe parameters (sometimes called a brittle fork in this usage) and to detect detached carriers leaving the terminal's track. If a brittle bar breaks, it interrupts a circuit which causes the system controller to immediately stop the system. Cable catcher These are small hooks sometimes installed next to sheaves to catch the rope and prevent it from falling if it should come out of the track. They are designed to allow passage of chair grips while the lift is stopping and for evacuation. It is extremely rare for the rope to leave the sheaves. In May 2006, a cable escaped the sheaves on the Arthurs Seat, Victoria chairlift in Australia, causing four chairs to crash into one another. No one was injured, though 13 passengers were stranded for four hours.
The operator blamed mandated changes in the height of some towers to improve clearance over a road. Collision Passenger loading and unloading is supervised by lift operators. Their primary purpose is to ensure passenger safety by checking that passengers are suitably outfitted for the elements and not wearing or transporting items which could become entangled with chairs, towers, trees, etc. If a misload or missed unload occurs—or is imminent—they slow or stop the lift to prevent carriers from colliding with or dragging any person. Also, if the exit area becomes congested, they will slow or stop the chair until safe conditions are established. Communication The lift operators at the terminals of a chairlift communicate with each other to verify that all terminals are safe and ready when restarting the system. Communication is also used to warn of an arriving carrier with a passenger missing a ski, or otherwise unable to efficiently unload, such as patients being transported in a rescue toboggan. These uses are the chief purpose for a visible identification number on each carrier. Evacuation Aerial ropeways always have several backup systems in the event of failure of the prime mover. An additional electric motor, diesel or gasoline engine—even a hand crank—allows movement of the rope to eventually unload passengers. In the event of a failure which prevents rope movement, ski patrol may conduct emergency evacuation using a simple rope harness looped over the aerial ropeway to lower passengers to the ground one by one. Grounding A steel line strung alongside a mountain is likely to attract lightning strikes. To protect against that and electrostatic buildup, all components of the system are electrically bonded together and connected to one or more grounding systems connecting the lift system to earth ground. In areas subject to frequent lightning strikes, a protective aerial line is fixed above the aerial ropeway. A red sheave may indicate it is a grounding sheave. Load testing In most jurisdictions, chairlifts must be inspected and load tested periodically. The typical test consists of loading the uphill chairs with bags of water (secured in boxes) weighing more than the worst case passenger loading scenario. The system's ability to start, stop, and forestall reverse operation is carefully evaluated against the system's design parameters. Load testing a new lift is shown in a short video. Rope testing Frequent visual inspection of the rope is required in most jurisdictions, as well as periodic non-destructive testing. Electromagnetic induction testing detects and quantifies hidden adverse conditions within the strands such as a broken wire, pitting caused by corrosion or wear, variations in cross sectional area, and tightening or loosening of wire lay or strand lay. Safety gate If passengers fail to unload, their legs will contact a lightweight bar or line, or pass through a light beam, which stops the lift. The lift operator will then help them disembark, reset the safety gate, and initiate the lift restart procedure. While possibly annoying to other passengers on the chairlift, it is preferable to strike the safety gate—that is, it should not be avoided—and stop the lift than to become an unexpected downhill passenger. Many lifts are limited in their download capacity; others can transport passengers at 100 percent capacity in either direction. Moving walkways The boarding area of a detachable chairlift can be fitted with a moving walkway which takes the passengers from the entrance gate to the boarding area.
This ensures the correct, safe and quick boarding of all passengers. For fixed grip lifts, a walkway can be designed so that it moves at a slightly slower speed than the chairs: passengers stand on the moving walkway while their chair approaches, hence easing the boarding process, since the speed of the chair relative to the passenger is lower. History Aerial passenger ropeways were known in Asia well before the 17th century for crossing chasms in mountainous regions. Men would traverse a woven fiber line hand over hand. Evolutionary refinement added a harness or basket to also transport cargo. The first recorded mechanical ropeway was designed by the Venetian Fausto Veranzio, who described a bicable passenger ropeway in 1616. The industry generally considers Dutchman Adam Wybe to have built the first operational system in 1644. The technology, which was further developed by the people living in the Alpine regions of Europe, progressed rapidly and expanded due to the advent of wire rope and electric drive. World War I motivated extensive use of military tramways for warfare between Italy and Austria. First chairlifts The world's first three ski chairlifts were created in 1936 and 1937 for the ski resort in Sun Valley, Idaho, then owned by the Union Pacific Railroad. The first chairlift, since removed, was installed on Proctor Mountain, two miles (3 km) east of the more famous Bald Mountain, the primary ski mountain of Sun Valley resort since 1939. One of the chairlifts still remains on Ruud Mountain, named for Thomas Ruud, a famous Norwegian ski racer. The chairlift has been preserved with its ski jump and original single chairs as it was during WWII. The chairlift was developed by James Curran of Union Pacific's engineering department in Omaha during the summer of 1936. Prior to working for Union Pacific, Curran worked for Paxton and Vierling Steel, also in Omaha, which engineered banana conveyor systems to load cargo ships in the tropics. (PVS manufactured these chairs in their Omaha, NE facility.) Curran re-engineered the banana hooks with chairs and created a machine with greater capacity than the up-ski toboggan (cable car) and better comfort than the J-bar, the two most common skier transports at the time—apart from mountain climbing. His basic design is still used for chairlifts today. The patent for the original ski lift was issued to Mr. Curran along with Gordon H. Bannerman and Glen H. Trout (Chief Engineer of the Union Pacific RR) in March 1939. The patent was titled "Aerial Ski Tramway". W. Averell Harriman, Sun Valley's creator and later governor of New York State, financed the project. Mont Tremblant, Quebec, opened in February 1938 with the first Canadian chairlift, built by Joseph Ryan. The ski lift had 4,200 feet of cable and carried 250 skiers per hour. The first chairlift in Europe was built in 1938 in Czechoslovakia (present-day Czech Republic), from Ráztoka to Pustevny, in the Moravian-Silesian Beskids mountain range. Modern chairlifts New chairlifts built since the 1990s are infrequently fixed-grip. Existing fixed-grip lifts are being replaced with detachable chairlifts at most major ski areas. However, the relative simplicity of the fixed-grip design results in lower installation, maintenance and, often, operation costs. For these reasons, they are likely to remain at low-volume and community hills, and for short distances, such as beginner terrain.
See also Snowsport transport Heliskiing Riblet tramway Ski industry related List of aerial lift manufacturers Skiing and Skiing Topics Other lifts Aerial lift Aerial tramway Cable car (railway) Elevator Funifor Funitel Gondola lift Hallidie ropeway List of transport topics Paternoster lift Ski lift References External links Skilifts.org An online community dedicated to documenting all types of Ski Lifts, founded by Bill Wolfe. Chairlift.org preservation society Liftblog.com A blog dedicated to taking photos of aerial lifts T-bars, and platters. Colorado Chairlift Locations Aerial lifts Chairlifts Amusement rides Ski lift types Vertical transport devices Articles containing video clips pt:Teleférico#Tipos de teleférico
Chairlift
[ "Physics", "Technology" ]
4,363
[ "Machines", "Transport systems", "Amusement rides", "Physical systems", "Vertical transport devices" ]
299,329
https://en.wikipedia.org/wiki/Probabilistic%20context-free%20grammar
In theoretical linguistics and computational linguistics, probabilistic context-free grammars (PCFGs) extend context-free grammars, similar to how hidden Markov models extend regular grammars. Each production is assigned a probability. The probability of a derivation (parse) is the product of the probabilities of the productions used in that derivation. These probabilities can be viewed as parameters of the model, and for large problems it is convenient to learn these parameters via machine learning. A probabilistic grammar's validity is constrained by the context of its training dataset. PCFGs originated from grammar theory, and have applications in areas as diverse as natural language processing, the study of the structure of RNA molecules, and the design of programming languages. Designing efficient PCFGs requires weighing factors of scalability and generality. Issues such as grammar ambiguity must be resolved. The grammar design affects the accuracy of results. Grammar parsing algorithms have various time and memory requirements. Definitions Derivation: The process of recursive generation of strings from a grammar. Parsing: Finding a valid derivation using an automaton. Parse Tree: The alignment of the grammar to a sequence. An example of a parser for PCFG grammars is the pushdown automaton. The algorithm parses grammar nonterminals from left to right in a stack-like manner. This brute-force approach is not very efficient. In RNA secondary structure prediction, variants of the Cocke–Younger–Kasami (CYK) algorithm provide more efficient alternatives to grammar parsing than pushdown automata. Another example of a PCFG parser is the Stanford Statistical Parser which has been trained using Treebank. Formal definition Similar to a CFG, a probabilistic context-free grammar can be defined by a quintuple G = (M, T, R, S, P), where M is the set of non-terminal symbols, T is the set of terminal symbols, R is the set of production rules, S is the start symbol, and P is the set of probabilities on production rules. Relation with hidden Markov models PCFG models extend context-free grammars the same way as hidden Markov models extend regular grammars. The Inside-Outside algorithm is an analogue of the Forward-Backward algorithm. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. This is equivalent to the probability of the PCFG generating the sequence, and is intuitively a measure of how consistent the sequence is with the given grammar. The Inside-Outside algorithm is used in model parametrization to estimate prior frequencies observed from training sequences in the case of RNAs. Dynamic programming variants of the CYK algorithm find the Viterbi parse of an RNA sequence for a PCFG model. This parse is the most likely derivation of the sequence by the given PCFG. Grammar construction Context-free grammars are represented as a set of rules inspired by attempts to model natural languages. The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal and non-terminal symbols and a blank may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal whereas the right side can be any string of terminals or nonterminals. In PCFGs nulls are excluded.
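As a concrete illustration of the quintuple and of the rule that a derivation's probability is the product of the probabilities of the productions used, here is a minimal sketch; the grammar, its rules and its probabilities are invented for illustration and do not come from any cited parser.

# A toy PCFG: rules map (left-hand side, right-hand side) to a probability.
# The probabilities of the alternatives for each left-hand side sum to 1.
rules = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("she",)): 0.6,
    ("NP", ("fish",)): 0.4,
    ("VP", ("eats", "NP")): 1.0,
}

def derivation_probability(rules_used):
    # The probability of a parse is the product of the probabilities of the rules it uses.
    p = 1.0
    for rule in rules_used:
        p *= rules[rule]
    return p

# Derivation of "she eats fish": S -> NP VP, NP -> she, VP -> eats NP, NP -> fish
print(derivation_probability([("S", ("NP", "VP")), ("NP", ("she",)),
                              ("VP", ("eats", "NP")), ("NP", ("fish",))]))
# about 0.24 (= 1.0 * 0.6 * 1.0 * 0.4)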
An example of a grammar: This grammar can be shortened using the '|' ('or') character into: Terminals in a grammar are words, and through the grammar rules a non-terminal symbol is transformed into a string of terminals and/or non-terminals. The above grammar is read as "beginning from a non-terminal the emission can generate either or or ". Its derivation is: An ambiguous grammar may result in ambiguous parsing if applied to homographs since the same word sequence can have more than one interpretation. Pun sentences such as the newspaper headline "Iraqi Head Seeks Arms" are an example of ambiguous parses. One strategy of dealing with ambiguous parses (originating with grammarians as early as Pāṇini) is to add yet more rules, or prioritize them so that one rule takes precedence over others. This, however, has the drawback of proliferating the rules, often to the point where they become difficult to manage. Another difficulty is overgeneration, where unlicensed structures are also generated. Probabilistic grammars circumvent these problems by ranking various productions on frequency weights, resulting in a "most likely" (winner-take-all) interpretation. As usage patterns are altered in diachronic shifts, these probabilistic rules can be re-learned, thus updating the grammar. Assigning probabilities to production rules makes a PCFG. These probabilities are informed by observing distributions on a training set of similar composition to the language to be modeled. On most samples of broad language, probabilistic grammars where probabilities are estimated from data typically outperform hand-crafted grammars. CFGs, when contrasted with PCFGs, are not applicable to RNA structure prediction because, while they incorporate the sequence-structure relationship, they lack the scoring metrics that reveal a sequence's structural potential. Weighted context-free grammar A weighted context-free grammar (WCFG) is a more general category of context-free grammar, where each production has a numeric weight associated with it. The weight of a specific parse tree in a WCFG is the product (or sum) of all rule weights in the tree. Each rule weight is included as often as the rule is used in the tree. A special case of WCFGs are PCFGs, where the weights are (logarithms of) probabilities. An extended version of the CYK algorithm can be used to find the "lightest" (least-weight) derivation of a string given some WCFG. When the tree weight is the product of the rule weights, WCFGs and PCFGs can express the same set of probability distributions. Applications RNA structure prediction Since the 1990s, PCFGs have been applied to model RNA structures. Energy minimization and PCFGs provide ways of predicting RNA secondary structure with comparable performance. However, structure prediction by PCFGs is scored probabilistically rather than by minimum free energy calculation. PCFG model parameters are directly derived from frequencies of different features observed in databases of RNA structures rather than by experimental determination, as is the case with energy minimization methods. The types of structure that can be modeled by a PCFG include long-range interactions, pairwise structure and other nested structures. However, pseudoknots cannot be modeled. PCFGs extend CFGs by assigning probabilities to each production rule. A maximum probability parse tree from the grammar implies a maximum probability structure.
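The "most likely" (winner-take-all) interpretation mentioned above is typically computed with a probabilistic (Viterbi) variant of the CYK algorithm. The following is a minimal sketch for a grammar in Chomsky normal form; the toy grammar and probabilities are assumptions made for illustration and do not reproduce any particular parser.

from collections import defaultdict

# best[i][j][A] = probability of the most likely parse of words i..j-1 rooted at nonterminal A.
def viterbi_cyk(words, lexical, binary, start="S"):
    n = len(words)
    best = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for lhs, p in lexical.get(w, []):
            best[i][i + 1][lhs] = max(best[i][i + 1][lhs], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, b, c), p in binary.items():
                    score = p * best[i][k][b] * best[k][j][c]
                    if score > best[i][j][lhs]:
                        best[i][j][lhs] = score
    return best[0][n][start]

# Toy grammar in Chomsky normal form with invented probabilities.
binary = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexical = {"she": [("NP", 0.6)], "fish": [("NP", 0.4)], "eats": [("V", 1.0)]}
print(viterbi_cyk(["she", "eats", "fish"], lexical, binary))  # about 0.24, the best parse's probability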
Since RNAs preserve their structures over their primary sequence, RNA structure prediction can be guided by combining evolutionary information from comparative sequence analysis with biophysical knowledge about structure plausibility based on such probabilities. Also, search results for structural homologs using PCFG rules are scored according to PCFG derivation probabilities. Therefore, building a grammar to model the behavior of base-pairs and single-stranded regions starts with exploring features of a structural multiple sequence alignment of related RNAs. The above grammar generates a string in an outside-in fashion, that is, the basepair at the furthest extremes of the terminal string is derived first. So a string such as is derived by first generating the distal 's on both sides before moving inwards; a toy sketch of such a grammar appears below. A PCFG model's extendibility allows constraining structure prediction by incorporating expectations about different features of an RNA. Such expectations may reflect, for example, the propensity of an RNA to assume a certain structure. However, incorporation of too much information may increase PCFG space and memory complexity, and it is desirable that a PCFG-based model be as simple as possible. Every possible string a grammar generates is assigned a probability weight given the PCFG model. It follows that the sum of all probabilities for all possible grammar productions is 1. The scores for each paired and unpaired residue explain the likelihood of secondary structure formation. Production rules also allow scoring loop lengths as well as the order of base pair stacking; hence it is possible to explore the range of all possible generations, including suboptimal structures, from the grammar and accept or reject structures based on score thresholds. Implementations RNA secondary structure implementations based on PCFG approaches can be utilized in: finding a consensus structure by optimizing structure joint probabilities over an MSA; modeling base-pair covariation to detect homology in database searches; and pairwise simultaneous folding and alignment. Different implementations of these approaches exist. For example, Pfold is used in secondary structure prediction from a group of related RNA sequences, covariance models are used in searching databases for homologous sequences and in RNA annotation and classification, and RNApromo, CMFinder and TEISER are used in finding stable structural motifs in RNAs. Design considerations PCFG design impacts secondary structure prediction accuracy. Any useful probabilistic structure prediction model based on a PCFG has to maintain simplicity without much compromise to prediction accuracy. A model that is too complex may perform excellently on a single sequence but may not scale. A grammar-based model should be able to: Find the optimal alignment between a sequence and the PCFG. Score the probability of the structures for the sequence and subsequences. Parameterize the model by training on sequences/structures. Find the optimal grammar parse tree (CYK algorithm). Check for ambiguous grammar (Conditional Inside algorithm). The occurrence of multiple parse trees for the same sequence denotes grammar ambiguity. This may be useful in revealing all possible base-pair structures for a grammar. However, an optimal structure is one where there is a one-to-one correspondence between the parse tree and the secondary structure. Two types of ambiguity can be distinguished: parse tree ambiguity and structural ambiguity.
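The outside-in behaviour described above can be made concrete with a toy grammar of the form S -> aSu | uSa | gSc | cSg | loop. The sketch below scores a perfectly nested hairpin under invented probabilities; it is only an illustration of how paired and unpaired residues accumulate scores, not the grammar or parameters of any published tool.

# Invented rule probabilities; the five alternatives for S (four pairings plus ending the stem) sum to 1.
pair_prob = {("a", "u"): 0.25, ("u", "a"): 0.25, ("g", "c"): 0.2, ("c", "g"): 0.2}
end_stem_prob = 0.1        # probability of ending the stem and emitting the loop
loop_base_prob = 0.25      # unpaired loop bases, assumed uniform over a, c, g, u for simplicity

def hairpin_probability(seq):
    # Pair the outermost bases first, then move inwards; leftover bases form the loop.
    p, i, j = 1.0, 0, len(seq) - 1
    while i < j and (seq[i], seq[j]) in pair_prob:
        p *= pair_prob[(seq[i], seq[j])]
        i, j = i + 1, j - 1
    return p * end_stem_prob * loop_base_prob ** (j - i + 1)

print(hairpin_probability("gcgaaacgc"))  # three stacked pairs (g-c, c-g, g-c) closing a 3-base loop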
Structural ambiguity does not affect thermodynamic approaches, as optimal structure selection is always on the basis of the lowest free energy score. Parse tree ambiguity concerns the existence of multiple parse trees per sequence. Such an ambiguity can reveal all possible base-paired structures for the sequence by generating all possible parse trees and then finding the optimal one. In the case of structural ambiguity, multiple parse trees describe the same secondary structure. This obscures the CYK algorithm's decision on finding an optimal structure, as the correspondence between the parse tree and the structure is not unique. Grammar ambiguity can be checked for by the conditional-inside algorithm. Building a PCFG model A probabilistic context-free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal produces loops. The rest of the grammar proceeds with a parameter that decides whether a loop is the start of a stem or a single-stranded region, and a parameter that produces paired bases. The formalism of this simple PCFG looks like: The application of PCFGs in predicting structures is a multi-step process. In addition, the PCFG itself can be incorporated into probabilistic models that consider RNA evolutionary history or search for homologous sequences in databases. In an evolutionary history context, inclusion of prior distributions of RNA structures from a structural alignment in the production rules of the PCFG facilitates good prediction accuracy. A summary of general steps for utilizing PCFGs in various scenarios: Generate production rules for the sequences. Check ambiguity. Recursively generate parse trees of the possible structures using the grammar. Rank and score the parse trees for the most plausible sequence. Algorithms Several algorithms dealing with aspects of PCFG-based probabilistic models in RNA structure prediction exist, for instance the inside-outside algorithm and the CYK algorithm. The inside-outside algorithm is a recursive dynamic programming scoring algorithm that can follow expectation-maximization paradigms. It computes the total probability of all derivations that are consistent with a given sequence, based on some PCFG. The inside part scores the subtrees of a parse tree and therefore the probabilities of subsequences given a PCFG. The outside part scores the probability of the complete parse tree for a full sequence. CYK modifies the inside-outside scoring. Note that the term 'CYK algorithm' describes the CYK variant of the inside algorithm that finds an optimal parse tree for a sequence using a PCFG. It extends the actual CYK algorithm used in non-probabilistic CFGs. The inside algorithm calculates, for each nonterminal and subsequence, the probability (conventionally written α) of a parse subtree rooted at that nonterminal for that subsequence. The outside algorithm calculates the probability (conventionally written β) of a complete parse tree for the full sequence from the root, excluding the subtree covering that subsequence. The variables α and β refine the estimation of the probability parameters of a PCFG. It is possible to re-estimate the PCFG parameters by finding the expected number of times a state is used in a derivation, obtained by summing all the products of α and β and dividing by the probability of the sequence given the model. It is also possible to find the expected number of times a production rule is used by an expectation-maximization procedure that utilizes the values of α and β.
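A minimal sketch of the inside recursion described above, for the same style of toy grammar in Chomsky normal form: it uses the same dynamic programme as the Viterbi-CYK sketch earlier, but sums over derivations instead of maximizing, so the result is the total probability of the sequence under the grammar. The grammar and probabilities are again invented for illustration.

from collections import defaultdict

def inside(words, lexical, binary, start="S"):
    n = len(words)
    alpha = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for lhs, p in lexical.get(w, []):
            alpha[i][i + 1][lhs] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, b, c), p in binary.items():
                    alpha[i][j][lhs] += p * alpha[i][k][b] * alpha[k][j][c]
    return alpha[0][n][start]   # total probability of all derivations of the sequence

binary = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexical = {"she": [("NP", 0.6)], "fish": [("NP", 0.4)], "eats": [("V", 1.0)]}
print(inside(["she", "eats", "fish"], lexical, binary))  # about 0.24; only one derivation exists here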
The CYK algorithm finds the most probable parse tree and yields its probability. Memory and time complexity for general PCFG algorithms in RNA structure prediction are quadratic and cubic in the sequence length, respectively, with additional factors for the size of the grammar. Restricting a PCFG may alter this requirement, as is the case with database search methods. PCFG in homology search Covariance models (CMs) are a special type of PCFGs with applications in database searches for homologs, annotation and RNA classification. Through CMs it is possible to build PCFG-based RNA profiles where related RNAs can be represented by a consensus secondary structure. The RNA analysis package Infernal uses such profiles in inference of RNA alignments. The Rfam database also uses CMs in classifying RNAs into families based on their structure and sequence information. CMs are designed from a consensus RNA structure. A CM allows indels of unlimited length in the alignment. Terminals constitute states in the CM, and the transition probabilities between the states are 1 if no indels are considered. Grammars in a CM are as follows: probabilities of pairwise interactions between the 16 possible pairs; probabilities of generating the 4 possible single bases on the left; probabilities of generating the 4 possible single bases on the right; bifurcation with a probability of 1; start with a probability of 1; and end with a probability of 1. The model has 6 possible states, and each state's grammar includes different types of secondary structure probabilities for the non-terminals. The states are connected by transitions. Ideally, current node states connect to all insert states and subsequent node states connect to non-insert states. In order to allow insertion of more than one base, insert states connect to themselves. In order to score a CM model, the inside-outside algorithms are used. CMs use a slightly different implementation of CYK. Log-odds emission scores for the optimum parse tree are calculated from the emitting states. Since these scores are a function of sequence length, a more discriminative measure of the optimum parse tree probability is reached by limiting the maximum length of the sequence to be aligned and calculating the log-odds relative to a null model. The computation time of this step is linear in the database size. Example: Using evolutionary information to guide structure prediction The KH-99 algorithm by Knudsen and Hein lays the basis for the Pfold approach to predicting RNA secondary structure. In this approach, the parameterization requires evolutionary history information derived from an alignment tree in addition to probabilities of columns and mutations. The grammar probabilities are observed from a training dataset. Estimate column probabilities for paired and unpaired bases In a structural alignment, the probabilities of the unpaired-base columns and the paired-base columns are independent of other columns. By counting bases in single-base positions and paired positions, one obtains the frequencies of bases in loops and stems. For a basepair of bases X and Y, an occurrence of XY is also counted as an occurrence of YX. Identical basepairs such as XX are counted twice. Calculate mutation rates for paired and unpaired bases By pairing sequences in all possible ways, overall mutation rates are estimated. In order to recover plausible mutations, a sequence identity threshold should be used so that the comparison is between similar sequences. This approach uses an 85% identity threshold between pairing sequences.
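The column-counting step described above (under "Estimate column probabilities for paired and unpaired bases") can be sketched as follows. The toy alignment and consensus structure are invented; the symmetric counting convention follows the description above, and the simple pairing map is only valid for this perfectly nested toy structure.

from collections import Counter

alignment = ["GCGAAACGC",          # three toy aligned sequences
             "GCGUUUCGC",
             "GGGAAACCC"]
structure = "(((...)))"            # toy nested consensus: column 0 pairs with 8, 1 with 7, 2 with 6

single_counts, pair_counts = Counter(), Counter()
partner = {i: len(structure) - 1 - i for i, c in enumerate(structure) if c == "("}  # toy case only

for seq in alignment:
    for i, c in enumerate(structure):
        if c == ".":
            single_counts[seq[i]] += 1                 # unpaired (loop) columns
        elif c == "(":
            j = partner[i]
            pair_counts[(seq[i], seq[j])] += 1         # an occurrence of XY ...
            pair_counts[(seq[j], seq[i])] += 1         # ... is also counted as YX

print(single_counts)   # base frequencies in loop columns
print(pair_counts)     # basepair frequencies in stem columns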
First, single-base position differences (except for gapped columns) between sequence pairs are counted, such that if the same position in two sequences has different bases, the count of the difference is incremented for each sequence. For unpaired bases, a 4 × 4 mutation rate matrix is used that satisfies the condition that the mutation flow from X to Y is reversible. For basepairs, a 16 × 16 rate distribution matrix is similarly generated. The PCFG is used to predict the prior probability distribution of the structure, whereas posterior probabilities are estimated by the inside-outside algorithm, and the most likely structure is found by the CYK algorithm. Estimate alignment probabilities After calculating the column prior probabilities, the alignment probability is estimated by summing over all possible secondary structures. Any column in a secondary structure for a sequence of a given length can be scored with respect to the alignment tree and the mutational model. The prior distribution over structures is given by the PCFG. The phylogenetic tree can be calculated from the model by maximum likelihood estimation. Note that gaps are treated as unknown bases and the summation can be done through dynamic programming. Assign production probabilities to each rule in the grammar Each structure in the grammar is assigned production probabilities devised from the structures of the training dataset. These prior probabilities contribute to the accuracy of the predictions. The number of times each rule is used depends on the observations from the training dataset for that particular grammar feature. These probabilities are written in parentheses in the grammar formalism, and the alternatives for each rule total 100%. Predict the structure likelihood Given the prior alignment frequencies of the data, the most likely structure from the ensemble predicted by the grammar can then be computed by maximization through the CYK algorithm. The structure with the highest predicted number of correct predictions is reported as the consensus structure. Pfold improvements on the KH-99 algorithm PCFG-based approaches are desired to be scalable and general enough. Compromising speed for accuracy needs to be as minimal as possible. Pfold addresses the limitations of the KH-99 algorithm with respect to scalability, gaps, speed and accuracy. In Pfold, gaps are treated as unknown. In this sense the probability of a gapped column equals that of an ungapped one. In Pfold, the tree is calculated prior to structure prediction through neighbor joining, not by maximum likelihood through the PCFG grammar. Only the branch lengths are adjusted to maximum likelihood estimates. An assumption of Pfold is that all sequences have the same structure. A sequence identity threshold, together with allowing a 1% probability that any nucleotide becomes another, limits the performance deterioration due to alignment errors. Protein sequence analysis Whereas PCFGs have proved powerful tools for predicting RNA secondary structure, their usage in the field of protein sequence analysis has been limited. Indeed, the size of the amino acid alphabet and the variety of interactions seen in proteins make grammar inference much more challenging. As a consequence, most applications of formal language theory to protein analysis have been mainly restricted to the production of grammars of lower expressive power to model simple functional patterns based on local interactions.
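The pairwise difference counting at the start of this step, together with the 85% identity threshold mentioned above, can be sketched as follows; the identity measure and the toy sequences are assumptions made for illustration only.

from itertools import combinations

def identity(a, b):
    # Fraction of ungapped, aligned positions at which two sequences agree.
    same = sum(1 for x, y in zip(a, b) if x == y and "-" not in (x, y))
    compared = sum(1 for x, y in zip(a, b) if "-" not in (x, y))
    return same / compared if compared else 0.0

def count_differences(seqs, min_identity=0.85):
    # Count single-base differences between all sufficiently similar sequence pairs,
    # skipping gapped columns; each difference is credited to both sequences in the pair.
    diff = {i: 0 for i in range(len(seqs))}
    for i, j in combinations(range(len(seqs)), 2):
        if identity(seqs[i], seqs[j]) < min_identity:
            continue
        for x, y in zip(seqs[i], seqs[j]):
            if "-" not in (x, y) and x != y:
                diff[i] += 1
                diff[j] += 1
    return diff

print(count_differences(["GCGAAACGC", "GCGAUACGC", "GCGAAACGU"]))
# {0: 2, 1: 1, 2: 1}: the second and third sequences fall below the threshold, so that pair is skipped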
Since protein structures commonly display higher-order dependencies including nested and crossing relationships, they clearly exceed the capabilities of any CFG. Still, development of PCFGs allows expressing some of those dependencies and providing the ability to model a wider range of protein patterns. See also Statistical parsing Stochastic grammar L-system References External links Rfam Database Infernal The Stanford Parser: A statistical parser pyStatParser Bioinformatics Formal languages Language modeling Natural language parsing Statistical natural language processing Probabilistic models
Probabilistic context-free grammar
[ "Mathematics", "Engineering", "Biology" ]
4,163
[ "Bioinformatics", "Formal languages", "Mathematical logic", "Biological engineering" ]
299,368
https://en.wikipedia.org/wiki/Oil%20sands
Oil sands are a type of unconventional petroleum deposit. They are either loose sands, or partially consolidated sandstone containing a naturally occurring mixture of sand, clay, and water, soaked with bitumen (a dense and extremely viscous form of petroleum). Significant bitumen deposits are reported in Canada, Kazakhstan, Russia, and Venezuela. The estimated worldwide deposits of oil are more than . Proven reserves of bitumen contain approximately 100 billion barrels, and total natural bitumen reserves are estimated at worldwide, of which , or 70.8%, are in Alberta, Canada. Crude bitumen is a thick, sticky form of crude oil, and is so viscous that it will not flow unless heated or diluted with lighter hydrocarbons such as light crude oil or natural-gas condensate. At room temperature, it is much like cold molasses. The Orinoco Belt in Venezuela is sometimes described as oil sands, but these deposits are non-bituminous, falling instead into the category of heavy or extra-heavy oil due to their lower viscosity. Natural bitumen and extra-heavy oil differ in the degree by which they have been degraded from the original conventional oils by bacteria. The 1973 and 1979 oil price increases, and the development of improved extraction technology enabled profitable extraction and processing of the oil sands. Together with other so-called unconventional oil extraction practices, oil sands are implicated in the unburnable carbon debate but also contribute to energy security and counteract the international price cartel OPEC. According to the Oil Climate Index, carbon emissions from oil-sand crude are 31% higher than from conventional oil. In Canada, oil sands production in general, and in-situ extraction, in particular, are the largest contributors to the increase in the nation's greenhouse gas emissions from 2005 to 2017, according to Natural Resources Canada (NRCan). History The use of bituminous deposits and seeps dates back to Paleolithic times. The earliest known use of bitumen was by Neanderthals, some 40,000 years ago. Bitumen has been found adhering to stone tools used by Neanderthals at sites in Syria. After the arrival of Homo sapiens, humans used bitumen for construction of buildings and waterproofing of reed boats, among other uses. In ancient Egypt, the use of bitumen was important in preparing mummies. In ancient times, bitumen was primarily a Mesopotamian commodity used by the Sumerians and Babylonians, although it was also found in the Levant and Persia. The area along the Tigris and Euphrates rivers was littered with hundreds of pure bitumen seepages. The Mesopotamians used the bitumen for waterproofing boats and buildings. In Europe, they were extensively mined near the French city of Pechelbronn, where the vapour separation process was in use in 1742. In Canada, the First Nation peoples had used bitumen from seeps along the Athabasca and Clearwater Rivers to waterproof their birch bark canoes from early prehistoric times. The Canadian oil sands first became known to Europeans in 1719 when a Cree person named Wa-Pa-Su brought a sample to Hudson's Bay Company fur trader Henry Kelsey, who commented on it in his journals. Fur trader Peter Pond paddled down the Clearwater River to Athabasca in 1778, saw the deposits and wrote of "springs of bitumen that flow along the ground". 
In 1787, fur trader and explorer Alexander MacKenzie, on his way to the Arctic Ocean, saw the Athabasca oil sands and commented, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance." Cost of oil sands petroleum-mining operations In its May 2019 "cost of supply curve update", the Norway-based Rystad Energy—an "independent energy research and consultancy"—ranked the "world's total recoverable liquid resources by their breakeven price", and reported that the average breakeven price for oil from the oil sands was US$83 in 2019, making it the most expensive to produce compared to all other "significant oil producing regions" in the world. The International Energy Agency made similar comparisons. Heavier, sour crude oils lacking tidewater access, such as Western Canadian Select (WCS) from the Athabasca oil sands, are priced at a differential to lighter, sweeter oils such as West Texas Intermediate (WTI). The price of a crude oil is based on its grade, determined by factors such as its specific gravity or API and its sulfur content, and on its location, for example its proximity to tidewater and/or refineries. Because the cost of production is so much higher at oil sands petroleum-mining operations, the breakeven point is much higher than for sweeter, lighter oils like those produced by Saudi Arabia, Iran, Iraq, and the United States. Oil sands production expanded and prospered as the global price of oil increased to peak highs because of the Arab oil embargo of 1973, the 1979 Iranian Revolution, the 1990 Persian Gulf crisis and war, the 11 September 2001 attacks, and the 2003 invasion of Iraq. The boom periods were followed by busts, as the global price of oil dropped during the 1980s and again in the 1990s, during a period of global recessions, and again in 2003. Nomenclature The name tar sands was applied to bituminous sands in the late 19th and early 20th century. People who saw the bituminous sands during this period were familiar with the large amounts of tar residue produced in urban areas as a by-product of the manufacture of coal gas for urban heating and lighting. The word "tar" to describe these natural bitumen deposits is really a misnomer, since, chemically speaking, tar is a human-made substance produced by the destructive distillation of organic material, usually coal. Since then, coal gas has almost completely been replaced by natural gas as a fuel, and coal tar as a material for paving roads has been replaced by the petroleum product asphalt. Naturally occurring bitumen is chemically more similar to asphalt than to coal tar, and the term oil sands (or oilsands) is more commonly used by industry in the producing areas than tar sands because synthetic oil is manufactured from the bitumen, and due to the feeling that the terminology of tar sands is less politically acceptable to the public. Oil sands are now an alternative to conventional crude oil.
The sands are saturated with oil, which has prevented them from consolidating into hard sandstone.

Size of resources

The magnitude of the resources in the two countries is on the order of 3.5 to 4 trillion barrels (550 to 650 billion cubic metres) of original oil in place (OOIP). Oil in place is not necessarily oil reserves, and the amount that can be produced depends on technological evolution. Rapid technological developments in Canada in the 1985–2000 period resulted in techniques such as steam-assisted gravity drainage (SAGD) that can recover a much greater percentage of the OOIP than conventional methods. The Alberta government estimates that with current technology, 10% of its bitumen and heavy oil can be recovered, which would give it about 200 billion barrels (32 billion m3) of recoverable oil reserves. Venezuela estimates its recoverable oil at 267 billion barrels (42 billion m3). This places Canada and Venezuela in the same league as Saudi Arabia, the three countries holding the largest oil reserves in the world.

Major deposits

There are numerous deposits of oil sands in the world, but the biggest and most important are in Canada and Venezuela, with lesser deposits in Kazakhstan and Russia. The total volume of non-conventional oil in the oil sands of these countries exceeds the reserves of conventional oil in all other countries combined. Vast deposits of bitumen (over 350 billion cubic metres, or 2.2 trillion barrels, of oil in place) exist in the Canadian provinces of Alberta and Saskatchewan. If 30% of this oil could be extracted, it could supply the entire needs of North America for over 100 years at 2002 consumption levels. These deposits represent plentiful oil, but not cheap oil. They require advanced technology to extract the oil and transport it to oil refineries.

Canada

The oil sands of the Western Canadian Sedimentary Basin (WCSB) are a result of the formation of the Canadian Rocky Mountains by the Pacific Plate overthrusting the North American Plate as it pushed in from the west, carrying the formerly large island chains which now compose most of British Columbia. The collision compressed the Alberta plains and raised the Rockies above the plains, forming mountain ranges. This mountain building process buried the sedimentary rock layers which underlie most of Alberta to a great depth, creating high subsurface temperatures and producing a giant pressure cooker effect that converted the kerogen in the deeply buried organic-rich shales to light oil and natural gas. These source rocks were similar to the American so-called oil shales, except the latter have never been buried deep enough to convert the kerogen in them into liquid oil. This overthrusting also tilted the pre-Cretaceous sedimentary rock formations underlying most of the sub-surface of Alberta, depressing the rock formations in southwest Alberta up to deep near the Rockies, but to zero depth in the northeast, where they pinched out against the igneous rocks of the Canadian Shield, which outcrop on the surface. This tilting is not apparent on the surface because the resulting trench has been filled in by eroded material from the mountains. The light oil migrated up-dip through hydro-dynamic transport from the Rockies in the southwest toward the Canadian Shield in the northeast, following a complex pre-Cretaceous unconformity that exists in the formations under Alberta. The total distance of oil migration southwest to northeast was about .
At the shallow depths of sedimentary formations in the northeast, massive microbial biodegradation as the oil approached the surface caused the oil to become highly viscous and immobile. Almost all of the remaining oil is found in the far north of Alberta, in Middle Cretaceous (115 million-year old) sand-silt-shale deposits overlain by thick shales, although large amounts of heavy oil lighter than bitumen are found in the Heavy Oil Belt along the Alberta-Saskatchewan border, extending into Saskatchewan and approaching the Montana border. Note that, although adjacent to Alberta, Saskatchewan has no massive deposits of bitumen, only large reservoirs of heavy oil >10°API. Most of the Canadian oil sands are in three major deposits in northern Alberta. They are the Athabasca-Wabiskaw oil sands of north northeastern Alberta, the Cold Lake deposits of east northeastern Alberta, and the Peace River deposits of northwestern Alberta. Between them, they cover over —an area larger than England—and contain approximately of crude bitumen in them. About 10% of the oil in place, or , is estimated by the government of Alberta to be recoverable at current prices, using current technology, which amounts to 97% of Canadian oil reserves and 75% of total North American petroleum reserves. Although the Athabasca deposit is the only one in the world which has areas shallow enough to mine from the surface, all three Alberta areas are suitable for production using in-situ methods, such as cyclic steam stimulation (CSS) and steam-assisted gravity drainage (SAGD). The largest Canadian oil sands deposit, the Athabasca oil sands is in the McMurray Formation, centered on the city of Fort McMurray, Alberta. It outcrops on the surface (zero burial depth) about north of Fort McMurray, where enormous oil sands mines have been established, but is deep southeast of Fort McMurray. Only 3% of the oil sands area containing about 20% of the recoverable oil can be produced by surface mining, so the remaining 80% will have to be produced using in-situ wells. The other Canadian deposits are between deep and will require in-situ production. Athabasca Cold Lake The Cold Lake oil sands are northeast of Alberta's capital, Edmonton, near the border with Saskatchewan. A small portion of the Cold Lake deposit lies in Saskatchewan. Although smaller than the Athabasca oil sands, the Cold Lake oil sands are important because some of the oil is fluid enough to be extracted by conventional methods. The Cold Lake bitumen contains more alkanes and less asphaltenes than the other major Alberta oil sands and the oil is more fluid. As a result, cyclic steam stimulation (CSS) is commonly used for production. The Cold Lake oil sands are of a roughly circular shape, centered around Bonnyville, Alberta. They probably contain over 60 billion cubic metres (370 billion barrels) of extra-heavy oil-in-place. The oil is highly viscous, but considerably less so than the Athabasca oil sands, and is somewhat less sulfurous. The depth of the deposits is and they are from thick. They are too deep to surface mine. Much of the oil sands are on Canadian Forces Base Cold Lake. CFB Cold Lake's CF-18 Hornet jet fighters defend the western half of Canadian air space and cover Canada's Arctic territory. Cold Lake Air Weapons Range (CLAWR) is one of the largest live-drop bombing ranges in the world, including testing of cruise missiles. 
As oil sands production continues to grow, various sectors vie for access to airspace, land, and resources, and this complicates oil well drilling and production significantly.

Peace River

Venezuela

The Eastern Venezuelan Basin has a structure similar to the WCSB, but on a smaller scale. The distance the oil has migrated up-dip from the Sierra Oriental mountain front to the Orinoco oil sands, where it pinches out against the igneous rocks of the Guyana Shield, is only about . The hydrodynamic conditions of oil transport were similar: source rocks buried deep by the rise of the mountains of the Sierra Oriental produced light oil that moved up-dip toward the south until it was gradually immobilized by the viscosity increase caused by biodegradation near the surface. The Orinoco deposits are early Tertiary (50 to 60 million years old) sand-silt-shale sequences overlain by continuous thick shales, much like the Canadian deposits. In Venezuela, the Orinoco Belt oil sands range from deep and no surface outcrops exist. The deposit is about long east-to-west and wide north-to-south, much less than the combined area covered by the Canadian deposits. In general, the Canadian deposits are found over a much wider area, have a broader range of properties, and have a broader range of reservoir types than the Venezuelan ones, but the geological structures and mechanisms involved are similar. The main difference is that the oil in the sands in Venezuela is less viscous than in Canada, allowing some of it to be produced by conventional drilling techniques, but none of it approaches the surface as in Canada, meaning none of it can be produced using surface mining. The Canadian deposits will almost all have to be produced by mining or using new non-conventional techniques.

Orinoco

The Orinoco Belt is a territory in the southern strip of the eastern Orinoco River Basin in Venezuela which overlies one of the world's largest deposits of petroleum. The Orinoco Belt follows the line of the river. It is approximately from east to west, and from north to south, with an area about . The oil sands consist of large deposits of extra-heavy crude. Venezuela's heavy oil deposits of about of oil in place are estimated to approximately equal the world's reserves of lighter oil. In 2009, the US Geological Survey (USGS) increased its estimates of the reserves to of oil which is "technically recoverable (producible using currently available technology and industry practices)." No estimate of how much of the oil is economically recoverable was made.

Other deposits

In addition to the three major Canadian oil sands in Alberta, there is a fourth major oil sands deposit in Canada, the Melville Island oil sands in the Canadian Arctic islands, which are too remote to expect commercial production in the foreseeable future. Apart from the megagiant oil sands deposits in Canada and Venezuela, numerous other countries hold smaller oil sands deposits. In the United States, there are supergiant oil sands resources primarily concentrated in eastern Utah, with a total of of oil (known and potential) in eight major deposits in Carbon, Garfield, Grand, Uintah, and Wayne counties. In addition to being much smaller than the Canadian oil sands deposits, the US oil sands are hydrocarbon-wet, whereas the Canadian oil sands are water-wet. This requires somewhat different extraction techniques for the Utah oil sands from those used for the Alberta oil sands. Russia holds oil sands in two main regions.
Large resources are present in the Tunguska Basin, East Siberia, with the largest deposits being Olenyok and Siligir. Other deposits are located in the Timan-Pechora Basin and in the Volga-Urals Basin (in and around Tatarstan), an important but very mature province in terms of conventional oil that holds large amounts of oil sands in a shallow Permian formation. In Kazakhstan, large bitumen deposits are located in the North Caspian Basin. In Madagascar, Tsimiroro and Bemolanga are two heavy oil sands deposits, with a pilot well at Tsimiroro already producing small amounts of oil and larger-scale exploitation in the early planning phase. In the Republic of the Congo, reserves are estimated between .

Production

Bituminous sands are a major source of unconventional oil, although only Canada has a large-scale commercial oil sands industry. In 2006, bitumen production in Canada averaged through 81 oil sands projects. 44% of Canadian oil production in 2007 was from oil sands. This proportion was (as of 2008) expected to increase in coming decades as bitumen production grows while conventional oil production declines, although due to the 2008 economic downturn work on new projects has been deferred. Petroleum is not produced from oil sands at a significant level in other countries.

Canada

The Alberta oil sands have been in commercial production since the original Great Canadian Oil Sands (now Suncor Energy) mine began operation in 1967. Syncrude's mine, the second to open, began operation in 1978 and is the biggest mine of any type in the world. The third mine in the Athabasca Oil Sands, operated by the Albian Sands consortium of Shell Canada, Chevron Corporation, and Western Oil Sands Inc. (purchased by Marathon Oil Corporation in 2007), began operation in 2003. Petro-Canada was also developing a $33 billion Fort Hills Project, in partnership with UTS Energy Corporation and Teck Cominco, which lost momentum after the 2009 merger of Petro-Canada into Suncor. By 2013 there were nine oil sands mining projects in the Athabasca oil sands deposit: Suncor Energy Inc. (Suncor), Syncrude Canada Limited (Syncrude)'s Mildred Lake and Aurora North, Shell Canada Limited (Shell)'s Muskeg River and Jackpine, Canadian Natural Resources Limited (CNRL)'s Horizon, Imperial Oil Resources Ventures Limited (Imperial)'s Kearl Oil Sands Project (KOSP), Total E&P Canada Ltd.'s Joslyn North Mine and Fort Hills Energy Corporation (FHEC). In 2011 alone they produced over 52 million cubic metres of bitumen. Canadian oil sands extraction has created extensive environmental damage, and many First Nations peoples, scientists, lawyers, journalists and environmental groups have described Canadian oil sands mining as an ecocide. Since the beginning of 2022, oil sands extraction in Alberta has increased sharply, far surpassing the 2014 level. High oil prices are one of the causes. Production is projected to increase further in 2024, which could make Canada a leader in oil production.

Venezuela

No significant development of Venezuela's extra-heavy oil deposits was undertaken before 2000, except for the BITOR operation, which produced somewhat less than 100,000 barrels of oil per day (16,000 m3/d) of 9°API oil by primary production. This was mostly shipped as an emulsion (Orimulsion) of 70% oil and 30% water with characteristics similar to heavy fuel oil, for burning in thermal power plants. However, when a major strike hit the Venezuelan state oil company PDVSA, most of the engineers were fired as punishment.
Orimulsion had been the pride of the PDVSA engineers, so Orimulsion fell out of favor with the key political leaders. As a result, the government has been trying to "Wind Down" the Orimulsion program. Despite the fact that the Orinoco oil sands contain extra-heavy oil which is easier to produce than Canada's similarly sized reserves of bitumen, Venezuela's oil production has been declining in recent years because of the country's political and economic problems, while Canada's has been increasing. As a result, Canadian heavy oil and bitumen exports have been backing Venezuelan heavy and extra-heavy oil out of the US market, and Canada's total exports of oil to the US have become several times as great as Venezuela's. By 2016, with the economy of Venezuela in a tailspin and the country experiencing widespread shortages of food, rolling power blackouts, rioting, and anti-government protests, it was unclear how much new oil sands production would occur in the near future. Other countries In May 2008, the Italian oil company Eni announced a project to develop a small oil sands deposit in the Republic of the Congo. Production is scheduled to commence in 2014 and is estimated to eventually yield a total of . Methods of extraction Except for a fraction of the extra-heavy oil or bitumen which can be extracted by conventional oil well technology, oil sands must be produced by strip mining or the oil made to flow into wells using sophisticated in-situ techniques. These methods usually use more water and require larger amounts of energy than conventional oil extraction. While much of Canada's oil sands are being produced using open-pit mining, approximately 90% of Canadian oil sands and all of Venezuela's oil sands are too far below the surface to use surface mining. Primary production Conventional crude oil is normally extracted from the ground by drilling oil wells into a petroleum reservoir, allowing oil to flow into them under natural reservoir pressures, although artificial lift and techniques such as horizontal drilling, water flooding and gas injection are often required to maintain production. When primary production is used in the Venezuelan oil sands, where the extra-heavy oil is about 50 degrees Celsius, the typical oil recovery rates are about 8–12%. Canadian oil sands are much colder and more biodegraded, so bitumen recovery rates are usually only about 5–6%. Historically, primary recovery was used in the more fluid areas of Canadian oil sands. However, it recovered only a small fraction of the oil in place, so it is not often used today. Surface mining The Athabasca oil sands are the only major oil sands deposits which are shallow enough to surface mine. In the Athabasca sands there are very large amounts of bitumen covered by little overburden, making surface mining the most efficient method of extracting it. The overburden consists of water-laden muskeg (peat bog) overtop of clay and barren sand. The oil sands themselves are typically thick deposits of crude bitumen embedded in unconsolidated sandstone, sitting on top of flat limestone rock. Since Great Canadian Oil Sands (now Suncor Energy) started operation of the first large-scale oil sands mine in 1967, bitumen has been extracted on a commercial scale and the volume has grown at a steady rate ever since. A large number of oil sands mines are currently in operation and more are in the stages of approval or development. 
The Syncrude Canada mine was the second to open, in 1978; Shell Canada opened its Muskeg River mine (Albian Sands) in 2003; and Canadian Natural Resources Ltd (CNRL) opened its Horizon Oil Sands project in 2009. Newer mines include Shell Canada's Jackpine mine, Imperial Oil's Kearl Oil Sands Project, the Synenco Energy (now owned by TotalEnergies) Northern Lights mine, and Suncor's Fort Hills mine.

Oil sands tailings ponds

Oil sands tailings ponds are engineered dam and dyke systems that contain salts, suspended solids and other dissolvable chemical compounds such as naphthenic acids, benzene, hydrocarbons, residual bitumen, fine silts (mature fine tails, or MFT), and water. Large volumes of tailings are a byproduct of surface mining of the oil sands, and managing these tailings is one of the most damaging aspects of tar sands mining. The Government of Alberta reported in 2013 that tailings ponds in the Alberta oil sands covered an area of about . The Syncrude Tailings Dam or Mildred Lake Settling Basin (MLSB) is an embankment dam that was, by volume of construction material, the largest earth structure in the world as of 2001.

Cold Heavy Oil Production with Sand (CHOPS)

Some years ago Canadian oil companies discovered that if they removed the sand filters from heavy oil wells and produced as much sand as possible with the oil, production rates improved significantly. This technique became known as Cold Heavy Oil Production with Sand (CHOPS). Further research disclosed that pumping out sand opened "wormholes" in the sand formation which allowed more oil to reach the wellbore. The advantages of this method are better production rates and recovery (around 10% versus 5–6% with sand filters in place); the disadvantage is that disposing of the produced sand is a problem. A novel way to dispose of it was spreading it on rural roads, which rural governments liked because the oily sand reduced dust and the oil companies did their road maintenance for them. However, governments have become concerned about the large volume and composition of oil spread on roads, so in recent years disposing of oily sand in underground salt caverns has become more common.

Cyclic Steam Stimulation (CSS)

Steam injection to recover heavy oil has been in use in the oil fields of California since the 1950s. The cyclic steam stimulation (CSS) "huff-and-puff" method is now widely used in heavy oil production worldwide due to its quick early production rates; however, recovery factors are relatively low (10–40% of oil in place) compared to SAGD (60–70% of OIP). CSS has been in use by Imperial Oil at Cold Lake since 1985 and is also used by Canadian Natural Resources at Primrose and Wolf Lake and by Shell Canada at Peace River. In this method, the well is put through cycles of steam injection, soak, and oil production. First, steam is injected into a well at a temperature of 300 to 340 degrees Celsius for a period of weeks to months; then, the well is allowed to sit for days to weeks to allow heat to soak into the formation; and, later, the hot oil is pumped out of the well for a period of weeks or months. Once the production rate falls off, the well is put through another cycle of injection, soak and production. This process is repeated until the cost of injecting steam becomes higher than the money made from producing oil.
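The economics of the CSS cycle just described can be sketched with a simple model: cycles continue for as long as a cycle's oil revenue exceeds the cost of the steam injected for it. The following Python sketch is purely illustrative; the steam cost, initial per-cycle recovery, decline rate, and oil price are assumed round numbers, not field data.

# Illustrative sketch of the CSS stopping rule described above: cycles of
# injection, soak, and production continue until the cost of injecting steam
# exceeds the revenue from the oil produced in that cycle. All figures assumed.

def run_css_cycles(steam_cost_per_cycle, oil_per_cycle, decline_rate, oil_price):
    """Return the number of profitable cycles and total gross revenue."""
    cycles, total_revenue = 0, 0.0
    while oil_per_cycle * oil_price > steam_cost_per_cycle:
        cycles += 1
        total_revenue += oil_per_cycle * oil_price
        oil_per_cycle *= (1 - decline_rate)   # each cycle recovers less oil than the last
    return cycles, total_revenue

# Assumed example: $400,000 of steam per cycle, 10,000 bbl recovered in the
# first cycle, 15% less oil each subsequent cycle, oil sold at $60/bbl.
cycles, revenue = run_css_cycles(400_000, 10_000, 0.15, 60.0)
print(f"Profitable cycles: {cycles}, gross revenue: ${revenue:,.0f}")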
Steam-assisted gravity drainage (SAGD)

Steam-assisted gravity drainage was developed in the 1980s by the Alberta Oil Sands Technology and Research Authority and fortuitously coincided with improvements in directional drilling technology that made it quick and inexpensive to do by the mid-1990s. In SAGD, two horizontal wells are drilled in the oil sands, one at the bottom of the formation and another about 5 metres above it. These wells are typically drilled in groups off central pads and can extend for miles in all directions. In each well pair, steam is injected into the upper well; the heat melts the bitumen, which allows it to flow into the lower well, where it is pumped to the surface. SAGD has proved to be a major breakthrough in production technology since it is cheaper than CSS, allows very high oil production rates, and recovers up to 60% of the oil in place. Because of its economic feasibility and applicability to a vast area of oil sands, this method alone quadrupled North American oil reserves and allowed Canada to move to second place in world oil reserves after Saudi Arabia. Most major Canadian oil companies now have SAGD projects in production or under construction in Alberta's oil sands areas and in Wyoming. Examples include Japan Canada Oil Sands Ltd's (JACOS) project, Suncor's Firebag project, Nexen's Long Lake project, Suncor's (formerly Petro-Canada's) MacKay River project, Husky Energy's Tucker Lake and Sunrise projects, Shell Canada's Peace River project, Cenovus Energy's Foster Creek and Christina Lake developments, ConocoPhillips' Surmont project, Devon Canada's Jackfish project, and Derek Oil & Gas's LAK Ranch project. Alberta's OSUM Corp has combined proven underground mining technology with SAGD to enable higher recovery rates by running wells underground from within the oil sands deposit, thus also reducing energy requirements compared to traditional SAGD. This particular technology application is in its testing phase.

Vapor Extraction (VAPEX)

Several methods use solvents, instead of steam, to separate bitumen from sand. Some solvent extraction methods may work better for in-situ production and others for mining. Solvents can be beneficial if they produce more oil while requiring less energy than generating steam. The Vapor Extraction Process (VAPEX) is an in-situ technology similar to SAGD. Instead of steam, hydrocarbon solvents are injected into an upper well to dilute the bitumen and enable it to flow into a lower well. It has the advantage of much better energy efficiency than steam injection, and it does some partial upgrading of bitumen to oil right in the formation. The process has attracted attention from oil companies, who are experimenting with it. The above methods are not mutually exclusive. It is becoming common for wells to be put through one CSS injection-soak-production cycle to condition the formation prior to going to SAGD production, and companies are experimenting with combining VAPEX with SAGD to improve recovery rates and lower energy costs.

Toe to Heel Air Injection (THAI)

This is a very new and experimental method that combines a vertical air injection well with a horizontal production well. The process ignites oil in the reservoir and creates a vertical wall of fire moving from the "toe" of the horizontal well toward the "heel", which burns the heavier oil components and upgrades some of the heavy bitumen into lighter oil right in the formation.
Historically fireflood projects have not worked out well because of difficulty in controlling the flame front and a propensity to set the producing wells on fire. However, some oil companies feel the THAI method will be more controllable and practical, and have the advantage of not requiring energy to create steam. Advocates of this method of extraction state that it uses less freshwater, produces 50% less greenhouse gases, and has a smaller footprint than other production techniques. Petrobank Energy and Resources has reported encouraging results from their test wells in Alberta, with production rates of up to per well, and the oil upgraded from 8 to 12 API degrees. The company hopes to get a further 7-degree upgrade from its CAPRI (controlled atmospheric pressure resin infusion) system, which pulls the oil through a catalyst lining the lower pipe. After several years of production in situ, it has become clear that current THAI methods do not work as planned. Amid steady drops in production from their THAI wells at Kerrobert, Petrobank has written down the value of their THAI patents and the reserves at the facility to zero. They have plans to experiment with a new configuration they call "multi-THAI," involving adding more air injection wells. Combustion Overhead Gravity Drainage (COGD) This is an experimental method that employs a number of vertical air injection wells above a horizontal production well located at the base of the bitumen pay zone. An initial Steam Cycle similar to CSS is used to prepare the bitumen for ignition and mobility. Following that cycle, air is injected into the vertical wells, igniting the upper bitumen and mobilizing (through heating) the lower bitumen to flow into the production well. It is expected that COGD will result in water savings of 80% compared to SAGD. Froth treatment Energy balance Approximately of energy is needed to extract a barrel of bitumen and upgrade it to synthetic crude. As of 2006, most of this is produced by burning natural gas. Since a barrel of oil equivalent is about , its EROEI is 5–6. That means this extracts about 5 or 6 times as much energy as is consumed. Energy efficiency is expected to improve to an average of of natural gas or of energy per barrel by 2015, giving an EROEI of about 6.5. Alternatives to natural gas exist and are available in the oil sands area. Bitumen can itself be used as the fuel, consuming about 30–35% of the raw bitumen per produced unit of synthetic crude. Nexen's Long Lake project will use a proprietary deasphalting technology to upgrade the bitumen, using asphaltene residue fed to a gasifier whose syngas will be used by a cogeneration turbine and a hydrogen producing unit, providing all the energy needs of the project: steam, hydrogen, and electricity. Thus, it will produce syncrude without consuming natural gas, but the capital cost is very high. Shortages of natural gas for project fuel were forecast to be a problem for Canadian oil sands production a few years ago, but recent increases in US shale gas production have eliminated much of the problem for North America. With the increasing use of hydraulic fracturing making US largely self-sufficient in natural gas and exporting more natural gas to Eastern Canada to replace Alberta gas, the Alberta government is using its powers under the NAFTA and the Canadian Constitution to reduce shipments of natural gas to the US and Eastern Canada, and divert the gas to domestic Alberta use, particularly for oil sands fuel. 
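The EROEI figure quoted in the energy balance discussion above follows from dividing the energy content of a barrel of oil equivalent by the input energy consumed per barrel. As a minimal illustration: the input-energy figure below is an assumed value of roughly 1.1 GJ per barrel, chosen only to reproduce the 5–6 range quoted above, and about 6.1 GJ per barrel of oil equivalent is the usual conversion.

# Illustrative EROEI arithmetic for oil sands extraction and upgrading.
# A barrel of oil equivalent contains about 6.1 GJ; the input energy per
# barrel below is an assumed figure consistent with the 5-6 EROEI cited above.

ENERGY_PER_BOE_GJ = 6.1     # energy content of one barrel of oil equivalent
input_energy_gj = 1.1       # assumed (mostly natural gas) input per barrel produced

eroei = ENERGY_PER_BOE_GJ / input_energy_gj
print(f"EROEI = {eroei:.1f}")   # about 5.5, i.e. 5-6 units of energy out per unit in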
The natural gas pipelines to the east and south are being converted to carry increasing oil sands production to the US and Eastern Canada instead of gas. Canada also has huge undeveloped shale gas deposits in addition to those of the US, so natural gas for future oil sands production does not seem to be a serious problem. The low price of natural gas as the result of new production has considerably improved the economics of oil sands production.

Upgrading and blending

The extra-heavy crude oil or crude bitumen extracted from oil sands is a very viscous semisolid form of oil that does not easily flow at normal temperatures, making it difficult to transport to market by pipeline. To flow through oil pipelines, it must either be upgraded to lighter synthetic crude oil (SCO), blended with diluents to form dilbit, or heated to reduce its viscosity.

Canada

In the Canadian oil sands, bitumen produced by surface mining is generally upgraded on-site and delivered as synthetic crude oil. This makes delivery of oil to market through conventional oil pipelines quite easy. On the other hand, bitumen produced by the in-situ projects is generally not upgraded but delivered to market in raw form. If the agent used to upgrade the bitumen to synthetic crude is not produced on site, it must be sourced elsewhere and transported to the site of upgrading. If the upgraded crude is transported from the site by pipeline, an additional pipeline will be required to bring in sufficient upgrading agent. The costs of producing the upgrading agent, building the pipeline to transport it, and operating that pipeline must all be calculated into the production cost of the synthetic crude. Upon reaching a refinery, the synthetic crude is processed and a significant portion of the upgrading agent will be removed during the refining process. It may be used for other fuel fractions, but the end result is that liquid fuel has to be piped to the upgrading facility simply to make the bitumen transportable by pipeline. If all costs are considered, synthetic crude production and transfer using bitumen and an upgrading agent may prove economically unsustainable. When the first oil sands plants were built over 50 years ago, most oil refineries in their market area were designed to handle light or medium crude oil with lower sulfur content than the 4–7% that is typically found in bitumen. The original oil sands upgraders were designed to produce a high-quality synthetic crude oil (SCO) with lower density and lower sulfur content. These are large, expensive plants which are much like heavy oil refineries. Research is currently being done on designing simpler upgraders which do not produce SCO but simply treat the bitumen to reduce its viscosity, allowing it to be transported unblended like conventional heavy oil. Western Canadian Select, launched in 2004 as a new heavy oil stream blended at the Husky Energy terminal in Hardisty, Alberta, is the largest crude oil stream coming from the Canadian oil sands and the benchmark for emerging heavy, high-TAN (acidic) crudes. Western Canadian Select (WCS) is traded at Cushing, Oklahoma, a major oil supply hub connecting oil suppliers to the Gulf Coast, which has become the most significant trading hub for crude oil in North America. While its major component is bitumen, it also contains a combination of sweet synthetic and condensate diluents and 25 existing streams of both conventional and unconventional oil, making it a "syndilbit", both a dilbit and a synbit.
The first step in upgrading is vacuum distillation to separate the lighter fractions. After that, de-asphalting is used to separate the asphalt from the feedstock. Cracking is used to break the heavier hydrocarbon molecules down into simpler ones. Since cracking produces products which are rich in sulfur, desulfurization must be done to get the sulfur content below 0.5% and create sweet, light synthetic crude oil. In 2012, Alberta produced about of crude bitumen from its three major oil sands deposits, of which about was upgraded to lighter products and the rest sold as raw bitumen. The volume of both upgraded and non-upgraded bitumen is increasing yearly. Alberta has five oil sands upgraders producing a variety of products. These include: Suncor Energy can upgrade of bitumen to light sweet and medium sour synthetic crude oil (SCO), plus produce diesel fuel for its oil sands operations at the upgrader. Syncrude can upgrade of bitumen to sweet light SCO. Canadian Natural Resources Limited (CNRL) can upgrade of bitumen to sweet light SCO. Nexen, since 2013 wholly owned by China National Offshore Oil Corporation (CNOOC), can upgrade of bitumen to sweet light SCO. Shell Canada operates its Scotford Upgrader in combination with an oil refinery and chemical plant at Scotford, Alberta, near Edmonton. The complex can upgrade of bitumen to sweet and heavy SCO as well as a range of refinery and chemical products. Modernized and new large refineries such as are found in the Midwestern United States and on the Gulf Coast of the United States, as well as many in China, can handle upgrading heavy oil themselves, so their demand is for non-upgraded bitumen and extra-heavy oil rather than SCO. The main problem is that the feedstock would be too viscous to flow through pipelines, so unless it is delivered by tanker or rail car, it must be blended with diluent to enable it to flow. This requires mixing the crude bitumen with a lighter hydrocarbon diluent such as condensate from gas wells, pentanes and other light products from oil refineries or gas plants, or synthetic crude oil from oil sands upgraders to allow it to flow through pipelines to market. Typically, blended bitumen contains about 30% natural gas condensate or other diluents and 70% bitumen. Alternatively, bitumen can also be delivered to market by specially designed railway tank cars, tank trucks, liquid cargo barges, or ocean-going oil tankers. These do not necessarily require the bitumen be blended with diluent since the tanks can be heated to allow the oil to be pumped out. The demand for condensate for oil sands diluent is expected to be more than by 2020, double 2012 volumes. Since Western Canada only produces about of condensate, the supply was expected to become a major constraint on bitumen transport. However, the recent huge increase in US tight oil production has largely solved this problem, because much of the production is too light for US refinery use but ideal for diluting bitumen. The surplus American condensate and light oil is being exported to Canada and blended with bitumen, and then re-imported to the US as feedstock for refineries. Since the diluent is simply exported and then immediately re-imported, it is not subject to the US ban on exports of crude oil. Once it is back in the US, refineries separate the diluent and re-export it to Canada, which again bypasses US crude oil export laws since it is now a refinery product. 
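The 30%/70% blending ratio mentioned above translates directly into diluent demand and pipeline volume. The sketch below works through that arithmetic; the raw bitumen volume used is an assumed round number for illustration only, not a quoted forecast.

# Illustrative dilbit arithmetic: with a blend of roughly 30% diluent and 70%
# bitumen by volume, every barrel of raw bitumen shipped as dilbit needs about
# 3/7 of a barrel of condensate, and the blend occupies about 1.43 barrels of
# pipeline capacity per barrel of bitumen. The production figure is assumed.

DILUENT_FRACTION = 0.30
BITUMEN_FRACTION = 0.70

bitumen_bbl_per_day = 2_000_000   # assumed raw bitumen volume, for illustration
diluent_needed = bitumen_bbl_per_day * DILUENT_FRACTION / BITUMEN_FRACTION
blend_volume = bitumen_bbl_per_day + diluent_needed

print(f"Diluent required:  {diluent_needed:,.0f} bbl/d")   # about 857,000 bbl/d
print(f"Pipeline volume:   {blend_volume:,.0f} bbl/d")     # about 2,857,000 bbl/d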
To aid in this diluent exchange, Kinder Morgan Energy Partners is reversing its Cochin Pipeline, which used to carry propane from Edmonton to Chicago, to transport of condensate from Chicago to Edmonton by mid-2014; and Enbridge is considering the expansion of its Southern Lights pipeline, which currently ships of diluent from the Chicago area to Edmonton, by adding another .

Venezuela

Although Venezuelan extra-heavy oil is less viscous than Canadian bitumen, much of the difference is due to temperature. Once the oil comes out of the ground and cools, it has the same difficulty in that it is too viscous to flow through pipelines. Venezuela is now producing more extra-heavy crude in the Orinoco oil sands than its four upgraders, which were built by foreign oil companies over a decade ago, can handle. The upgraders have a combined capacity of , which is only half of the country's production of extra-heavy oil. In addition, Venezuela produces insufficient volumes of naphtha to use as diluent to move extra-heavy oil to market. Unlike Canada, Venezuela does not produce much natural gas condensate from its own gas wells, nor does it have easy access to condensate from new US shale gas production. Since Venezuela also has insufficient refinery capacity to supply its domestic market, supplies of naphtha are insufficient to use as pipeline diluent, and it is having to import naphtha to fill the gap. Since Venezuela also has financial problems, as a result of the country's economic crisis, and political disagreements with the US government and oil companies, the situation remains unresolved.

Refining

Heavy crude feedstock needs pre-processing before it is fit for conventional refineries, although heavy oil and bitumen refineries can do the pre-processing themselves. This pre-processing is called "upgrading", the key components of which are as follows:
removal of water, sand, physical waste, and lighter products
catalytic purification by hydrodemetallisation (HDM), hydrodesulfurization (HDS) and hydrodenitrogenation (HDN)
hydrogenation through carbon rejection or catalytic hydrocracking (HCR)
As carbon rejection is very inefficient and wasteful in most cases, catalytic hydrocracking is generally preferred. All these processes take large amounts of energy and water, while emitting more carbon dioxide than conventional oil. Catalytic purification and hydrocracking are together known as hydroprocessing. The big challenge in hydroprocessing is to deal with the impurities found in heavy crude, as they poison the catalysts over time. Many efforts have been made to deal with this and to ensure high activity and long life of a catalyst. Catalyst materials and pore size distributions are key parameters that need to be optimized to deal with this challenge; they vary from place to place, depending on the kind of feedstock present.

Canada

There are four major oil refineries in Alberta which supply most of Western Canada with petroleum products, but as of 2012 these processed less than 1/4 of the approximately of bitumen and SCO produced in Alberta. Some of the large oil sands upgraders also produced diesel fuel as part of their operations. Some of the oil sands bitumen and SCO went to refineries in other provinces, but most of it was exported to the United States. The four major Alberta refineries are:
Suncor Energy operates the Petro-Canada refinery near Edmonton, which can process of all types of oil and bitumen into all types of products.
Imperial Oil operates the Strathcona Refinery near Edmonton, which can process of SCO and conventional oil into all types of products.
Shell Canada operates the Scotford Refinery near Edmonton, which is integrated with the Scotford Upgrader, and which can process of all types of oil and bitumen into all types of products.
Husky Energy operates the Husky Lloydminster Refinery in Lloydminster, which can process of feedstock from the adjacent Husky Upgrader into bitumen and other products.
The $8.5 billion Sturgeon Refinery, a fifth major Alberta refinery, is under construction near Fort Saskatchewan with a completion date of 2017. The Pacific Future Energy project proposed a new refinery in British Columbia that would process bitumen into fuel for Asian and Canadian markets. Pacific Future Energy proposes to transport near-solid bitumen to the refinery using railway tank cars. Most of the Canadian oil refining industry is foreign-owned. Canadian refineries can process only about 25% of the oil produced in Canada. Canadian refineries outside of Alberta and Saskatchewan were originally built for light and medium crude oil. With new oil sands production coming on stream at lower prices than international oil, market price imbalances have ruined the economics of refineries which could not process it.

United States

Prior to 2013, when China surpassed it, the United States was the largest oil importer in the world. Unlike Canada, the US has hundreds of oil refineries, many of which have been modified to process heavy oil as US production of light and medium oil declined. The main market for Canadian bitumen as well as Venezuelan extra-heavy oil was assumed to be the US. The United States has historically been Canada's largest customer for crude oil and products, particularly in recent years. American imports of oil and products from Canada grew from in 1981 to in 2013 as Canada's oil sands produced more and more oil, while in the US, domestic production and imports from other countries declined. However, this relationship is becoming strained due to physical, economic and political influences. Export pipeline capacity is approaching its limits; Canadian oil is selling at a discount to world market prices; US demand for crude oil and product imports has declined because of US economic problems; and US domestic unconventional oil production (shale oil from fracking) is growing rapidly. The US resumed export of crude oil in 2016; as of early 2019, the US produced as much oil as it consumed, with shale oil displacing Canadian imports. For the benefit of oil marketers, in 2004 Western Canadian producers created a new benchmark crude oil called Western Canadian Select (WCS), a bitumen-derived heavy crude oil blend that is similar in its transportation and refining characteristics to California, Mexican Maya, or Venezuelan heavy crude oils. This heavy oil has an API gravity of 19–21 and, despite containing large amounts of bitumen and synthetic crude oil, flows through pipelines well and is classified as "conventional heavy oil" by governments. Several hundred thousand barrels per day of this blend are imported into the US, in addition to larger amounts of crude bitumen and synthetic crude oil (SCO) from the oil sands. The demand from US refineries is increasingly for non-upgraded bitumen rather than SCO. The Canadian National Energy Board (NEB) expects SCO volumes to double to around by 2035, but not keep pace with the total increase in bitumen production.
It projects that the portion of oil sands production that is upgraded to SCO will decline from 49% in 2010 to 37% in 2035. This implies that over of bitumen will have to be blended with diluent for delivery to market.

Asia

Demand for oil in Asia has been growing much faster than in North America or Europe. In 2013, China replaced the United States as the world's largest importer of crude oil, and its demand continues to grow much faster than its production. The main impediment to Canadian exports to Asia is pipeline capacity: the only pipeline capable of delivering oil sands production to Canada's Pacific Coast is the Trans Mountain Pipeline from Edmonton to Vancouver, which is now operating at its capacity of supplying refineries in B.C. and Washington State. However, once complete, the Northern Gateway pipeline and the Trans Mountain expansion currently undergoing government review are expected to deliver an additional to to tankers on the Pacific coast, from where it could be delivered anywhere in the world. There is sufficient heavy oil refinery capacity in China and India to refine the additional Canadian volume, possibly with some modifications to the refineries. In recent years, Chinese oil companies such as China Petrochemical Corporation (Sinopec), China National Offshore Oil Corporation (CNOOC), and PetroChina have bought over $30 billion in assets in Canadian oil sands projects, so they would probably like to export some of their newly acquired oil to China.

Economics

The world's largest deposits of bitumen are in Canada, although Venezuela's deposits of extra-heavy crude oil are even bigger. Canada has vast energy resources of all types, and its oil and natural gas resource base would be large enough to meet Canadian needs for generations if demand were sustained. Abundant hydroelectric resources account for the majority of Canada's electricity production, and very little electricity is produced from oil. The National Energy Board (NEB) reported in 2013 that, if oil prices were above $100, Canada would have more than enough energy to meet its growing needs. The excess oil production from the oil sands could be exported. The major importing country would probably continue to be the United States, although before the developments in 2014, there was increasing demand for oil, particularly heavy oil, from Asian countries such as China and India. Canada has abundant resources of bitumen and crude oil, with an estimated remaining ultimate resource potential of 54 billion cubic metres (340 billion barrels). Of this, oil sands bitumen accounts for 90 per cent. Alberta currently accounts for all of Canada's bitumen resources. "Resources" become "reserves" only after it is proven that economic recovery can be achieved. At 2013 prices using current technology, Canada had remaining oil reserves of 27 billion m3 (170 billion bbls), with 98% of this attributed to oil sands bitumen. This put its reserves in third place in the world behind Venezuela and Saudi Arabia. At the much lower prices of 2015, the reserves are much smaller.

Costs

The costs of production and transportation of saleable petroleum from oil sands are typically significantly higher than from conventional global sources. Hence the economic viability of oil sands production is more vulnerable to the price of oil. The price of benchmark West Texas Intermediate (WTI) oil at Cushing, Oklahoma above US$100/bbl that prevailed until late 2014 was sufficient to promote active growth in oil sands production.
Major Canadian oil companies had announced expansion plans and foreign companies were investing significant amounts of capital, in many cases forming partnerships with Canadian companies. Investment had been shifting towards in-situ steam-assisted gravity drainage (SAGD) projects and away from mining and upgrading projects, as oil sands operators foresee better opportunities from selling bitumen and heavy oil directly to refineries than from upgrading it to synthetic crude oil. Cost estimates for Canada include the effects of the mining when the mines are returned to the environment in "as good as or better than original condition". Cleanup of the end products of consumption are the responsibility of the consuming jurisdictions, which are mostly in provinces or countries other than the producing one. The Alberta government estimated that in 2012, the supply cost of oil sands new mining operations was $70 to $85 per barrel, whereas the cost of new SAGD projects was $50 to $80 per barrel. These costs included capital and operating costs, royalties and taxes, plus a reasonable profit to the investors. Since the price of WTI rose to $100/bbl beginning in 2011, production from oil sands was then expected to be highly profitable assuming the product could be delivered to markets. The main market was the huge refinery complexes on the US Gulf Coast, which are generally capable of processing Canadian bitumen and Venezuelan extra-heavy oil without upgrading. The Canadian Energy Research Institute (CERI) performed an analysis, estimating that in 2012 the average plant gate costs (including 10% profit margin, but excluding blending and transport) of primary recovery was $30.32/bbl, of SAGD was $47.57/bbl, of mining and upgrading was $99.02/bbl, and of mining without upgrading was $68.30/bbl. Thus, all types of oil sands projects except new mining projects with integrated upgraders were expected to be consistently profitable from 2011 onward, provided that global oil prices remained favourable. Since the larger and more sophisticated refineries preferred to buy raw bitumen and heavy oil rather than synthetic crude oil, new oil sands projects avoided the costs of building new upgraders. Although primary recovery such as is done in Venezuela is cheaper than SAGD, it only recovers about 10% of the oil in place versus 60% or more for SAGD and over 99% for mining. Canadian oil companies were in a more competitive market and had access to more capital than in Venezuela, and preferred to spend that extra money on SAGD or mining to recover more oil. Then in late 2014 the dramatic rise in U.S. production from shale formations, combined with a global economic malaise that reduced demand, caused the price of WTI to drop below $50, where it remained as of late 2015. In 2015, the Canadian Energy Research Institute (CERI) re-estimated the average plant gate costs (again including 10% profit margin) of SAGD to be $58.65/bbl, and 70.18/bbl for mining without upgrading. Including costs of blending and transportation, the WTI equivalent supply costs for delivery to Cushing become US$80.06/bbl for SAGD projects, and $89.71/bbl for a standalone mine. In this economic environment, plans for further development of production from oil sands have been slowed or deferred, or even abandoned during construction. Production of synthetic crude from mining operations may continue at a loss because of the costs of shutdown and restart, as well as commitments to supply contracts. 
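The price sensitivity described in this section can be made concrete by comparing the 2015 CERI WTI-equivalent supply costs quoted above with a range of benchmark prices. The sketch below does only that comparison; the WTI price levels tested are illustrative.

# Margin check using the 2015 CERI WTI-equivalent supply costs quoted above
# (US$/bbl delivered to Cushing, including a 10% profit margin).
SUPPLY_COST = {
    "SAGD project": 80.06,
    "Stand-alone mine, no upgrader": 89.71,
}

def margin_per_bbl(wti_price):
    """Per-barrel surplus (positive) or shortfall (negative) at a given WTI price."""
    return {name: round(wti_price - cost, 2) for name, cost in SUPPLY_COST.items()}

for wti in (100, 80, 50):   # illustrative price levels
    print(f"WTI ${wti}/bbl -> {margin_per_bbl(wti)}")
# At about $100 WTI both project types clear their supply costs; at $80 new SAGD
# projects roughly break even, and at $50 both fall well short, consistent with
# the project deferrals described above.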
During the 2020 Russia–Saudi Arabia oil price war, the price of Canadian heavy crude dipped below $5 per barrel. Production forecasts Oil sands production forecasts released by the Canadian Association of Petroleum Producers (CAPP), the Alberta Energy Regulator (AER), and the Canadian Energy Research Institute (CERI) are comparable to National Energy Board (NEB) projections, in terms of total bitumen production. None of these forecasts take into account probable international constraints to be imposed on combustion of all hydrocarbons in order to limit global temperature rise, giving rise to a situation denoted by the term "carbon bubble". Ignoring such constraints, and also assuming that the price of oil recovers from its collapse in late 2014, the list of currently proposed projects, many of which are in the early planning stages, would suggest that by 2035 Canadian bitumen production could potentially reach as much as 1.3 million m3/d (8.3 million barrels per day) if most were to go ahead. Under the same assumptions, a more likely scenario is that by 2035, Canadian oil sands bitumen production would reach 800,000 m3/d (5.0 million barrels/day), 2.6 times the production for 2012. The majority of the growth would likely occur in the in-situ category, as in-situ projects usually have better economics than mining projects. Also, 80% of Canada's oil sands reserves are well-suited to in-situ extraction, versus 20% for mining methods. An additional assumption is that there would be sufficient pipeline infrastructure to deliver increased Canadian oil production to export markets. If this were a limiting factor, there could be impacts on Canadian crude oil prices, constraining future production growth. Another assumption is that US markets will continue to absorb increased Canadian exports. Rapid growth of tight oil production in the US, Canada's primary oil export market, has greatly reduced US reliance on imported crude. The potential for Canadian oil exports to alternative markets such as Asia is also uncertain. There are increasing political obstacles to building any new pipelines to deliver oil in Canada and the US. In November 2015, U.S. President Barack Obama rejected the proposal to build the Keystone XL pipeline from Alberta to Steele City, Nebraska. In the absence of new pipeline capacity, companies are increasingly shipping bitumen to US markets by railway, river barge, tanker, and other transportation methods. Other than ocean tankers, these alternatives are all more expensive than pipelines. A shortage of skilled workers in the Canadian oil sands developed during periods of rapid development of new projects. In the absence of other constraints on further development, the oil and gas industry would need to fill tens of thousands of job openings in the next few years as a result of industry activity levels as well as age-related attrition. In the longer term, under a scenario of higher oil and gas prices, the labor shortages would continue to get worse. A potential labor shortage can increase construction costs and slow the pace of oil sands development. The skilled worker shortage was much more severe in Venezuela because the government controlled oil company PDVSA fired most of its heavy oil experts after the Venezuelan general strike of 2002–03, and wound down the production of Orimulsion, which was the primary product from its oil sands. Following that, the government re-nationalized the Venezuelan oil industry and increased taxes on it. 
The result was that foreign companies left Venezuela, as did most of its elite heavy oil technical experts. In recent years, Venezuela's heavy oil production has been falling, and it has consistently been failing to meet its production targets. As of late 2015, development of new oil sand projects were deterred by the price of WTI below US$50, which is barely enough to support production from existing operations. Demand recovery was suppressed by economic problems that may continue indefinitely to bedevil both the European Community and China. Low-cost production by OPEC continued at maximum capacity, efficiency of production from U.S. shales continued to improve, and Russian exports were mandated even below cost of production, as their only source of hard currency. There is also the possibility that there will emerge an international agreement to introduce measures to constrain the combustion of hydrocarbons in an effort to limit global temperature rise to the nominal 2 °C that is consensually predicted to limit environmental harm to tolerable levels. Rapid technological progress is being made to reduce the cost of competing renewable sources of energy. Hence there is no consensus about when, if ever, oil prices paid to producers may substantially recover. A detailed academic study of the consequences for the producers of the various hydrocarbon fuels concluded in early 2015 that a third of global oil reserves, half of gas reserves and over 80% of current coal reserves should remain underground from 2010 to 2050 in order to meet the target of 2 °C. Hence continued exploration or development of reserves would be extraneous to needs. To meet the 2 °C target, strong measures would be needed to suppress demand, such as a substantial carbon tax leaving a lower price for the producers from a smaller market. The impact on producers in Canada would be far larger than in the U.S. Open-pit mining of natural bitumen in Canada would soon drop to negligible levels after 2020 in all scenarios considered because it is considerably less economic than other methods of production. Environmental issues In their 2011 commissioned report entitled "Prudent Development: Realizing the Potential of North America's Abundant Natural Gas and Oil Resources," the National Petroleum Council, an advisory committee to the U.S. Secretary of Energy, acknowledged health and safety concerns regarding the oil sands which include "volumes of water needed to generate issues of water sourcing; removal of overburden for surface mining can fragment wildlife habitat and increase the risk of soil erosion or surface run-off events to nearby water systems; GHG and other air emissions from production." Oil sands extraction can affect the land when the bitumen is initially mined, water resources by its requirement for large quantities of water during separation of the oil and sand, and the air due to the release of carbon dioxide and other emissions. Heavy metals such as vanadium, nickel, lead, cobalt, mercury, chromium, cadmium, arsenic, selenium, copper, manganese, iron and zinc are naturally present in oil sands and may be concentrated by the extraction process. The environmental impact caused by oil sand extraction is frequently criticized by environmental groups such as Greenpeace, Climate Reality Project, Pembina Institute, 350.org, MoveOn.org, League of Conservation Voters, Patagonia, Sierra Club, and Energy Action Coalition. In particular, mercury contamination has been found around oil sands production in Alberta, Canada. 
The European Union has indicated that it may vote to label oil sands oil as "highly polluting". Although oil sands exports to Europe are minimal, the issue has caused friction between the EU and Canada. According to the California-based Jacobs Consultancy, the European Union used inaccurate and incomplete data in assigning a high greenhouse gas rating to gasoline derived from Alberta's oilsands. Also, Iran, Saudi Arabia, Nigeria and Russia do not provide data on how much natural gas is released via flaring or venting in the oil extraction process. The Jacobs report pointed out that extra carbon emissions from oil-sand crude are 12 percent higher than from regular crude, although it was assigned a GHG rating 22% above the conventional benchmark by the EU. In 2014, results of a study published in the Proceedings of the National Academy of Sciences showed that official reports on emissions were not high enough. The report authors noted that "emissions of organic substances with potential toxicity to humans and the environment are a major concern surrounding the rapid industrial development in the Athabasca oil sands region (AOSR)." This study found that tailings ponds were an indirect pathway transporting uncontrolled releases of evaporative emissions of three representative polycyclic aromatic hydrocarbons (PAHs), namely phenanthrene, pyrene, and benzo(a)pyrene, and that these emissions had been previously unreported.

Air pollution management

The Alberta government computes an Air Quality Health Index (AQHI) from sensors in five communities in the oil sands region, operated by a "partner" called the Wood Buffalo Environmental Association (WBEA). Each of its 17 continuous monitoring stations measures 3 to 10 air quality parameters from among carbon monoxide (CO), hydrogen sulfide, total reduced sulfur (TRS), ammonia, nitric oxide (NO), nitrogen dioxide, nitrogen oxides (NOx), ozone, particulate matter (PM2.5), sulfur dioxide, total hydrocarbons (THC), and methane and non-methane hydrocarbons (NMHC). These AQHI values are said to indicate "low risk" air quality more than 95% of the time. Prior to 2012, air monitoring showed significant increases in exceedances of hydrogen sulfide both in the Fort McMurray area and near the oil sands upgraders. In 2007, the Alberta government issued an environmental protection order to Suncor in response to numerous occasions when ground-level concentrations of hydrogen sulfide exceeded standards. The Alberta Ambient Air Data Management System (AAADMS) of the Clean Air Strategic Alliance (aka the CASA Data Warehouse) records that, during the year ending on 1 November 2015, there were 6 hourly reports of values exceeding the 10 ppb limit for hydrogen sulfide, compared with 4 in 2013, 11 in 2014, and 73 in 2012. In September 2015, the Pembina Institute published a brief report about "a recent surge of odour and air quality concerns in northern Alberta associated with the expansion of oilsands development", contrasting the responses to these concerns in Peace River and Fort McKay. In Fort McKay, air quality is actively addressed by stakeholders represented in the WBEA, whereas the Peace River community must rely on the response of the Alberta Energy Regulator. In an effort to identify the sources of the noxious odours in the Fort McKay community, a Fort McKay Air Quality Index was established, extending the provincial Air Quality Health Index to include possible contributors to the problem: , TRS, and THC.
Despite these advantages, more progress was made in remediating the odour problems in the Peace River community, although only after some families had already abandoned their homes. The odour concerns in Fort McKay were reported to remain unresolved. Land use and waste management A large part of oil sands mining operations involves clearing trees and brush from a site and removing the overburden—topsoil, muskeg, sand, clay and gravel—that sits atop the oil sands deposit. Approximately 2.5 tons of oil sands are needed to produce one barrel of oil (roughly of a ton). As a condition of licensing, projects are required to implement a reclamation plan. The mining industry asserts that the boreal forest will eventually colonize the reclaimed lands, but their operations are massive and work on long-term timeframes. As of 2013, about of land in the oil sands region have been disturbed, and of that land is under reclamation. In March 2008, Alberta issued the first-ever oil sands land reclamation certificate to Syncrude for the parcel of land known as Gateway Hill approximately north of Fort McMurray. Several reclamation certificate applications for oil sands projects are expected within the next 10 years. Water management Between 2 and 4.5 volume units of water are used to produce each volume unit of synthetic crude oil in an ex-situ mining operation. According to Greenpeace, the Canadian oil sands operations use of water, twice the amount of water used by the city of Calgary. However, in SAGD operations, 90–95% of the water is recycled and only about 0.2 volume units of water is used per volume unit of bitumen produced. For the Athabasca oil sand operations water is supplied from the Athabasca River, the ninth longest river in Canada. The average flow just downstream of Fort McMurray is with its highest daily average measuring . Oil sands industries water license allocations totals about 1.8% of the Athabasca river flow. Actual use in 2006 was about 0.4%. In addition, according to the Water Management Framework for the Lower Athabasca River, during periods of low river flow water consumption from the Athabasca River is limited to 1.3% of annual average flow. In December 2010, the Oil Sands Advisory Panel, commissioned by former environment minister Jim Prentice, found that the system in place for monitoring water quality in the region, including work by the Regional Aquatic Monitoring Program, the Alberta Water Research Institute, the Cumulative Environmental Management Association and others, was piecemeal and should become more comprehensive and coordinated. Greenhouse gas emissions The production of bitumen and synthetic crude oil emits more greenhouse gases than the production of conventional crude oil. A 2009 study by the consulting firm IHS CERA estimated that production from Canada's oil sands emits "about 5% to 15% more carbon dioxide, over the "well-to-wheels" (WTW) lifetime analysis of the fuel, than average crude oil." Author and investigative journalist David Strahan that same year stated that IEA figures show that carbon dioxide emissions from the oil sands are 20% higher than average emissions from the petroleum production. A Stanford University study commissioned by the EU in 2011 found that oil sands crude was as much as 22% more carbon-intensive than other fuels. According to the "Carnegie Endowment for International Peace" analysis, oil sands emit 31% more GHG that the average North American crude oil. 
In 2023 a federal study found that the real emissions from oil sands are 65% higher than reported by the industry. Greenpeace says the oil sands industry has been identified as the largest contributor to greenhouse gas emissions growth in Canada, as it accounts for 40 million tons of emissions per year. According to the Canadian Association of Petroleum Producers and Environment Canada the industrial activity undertaken to produce oil sands make up about 5% of Canada's greenhouse gas emissions, or 0.1% of global greenhouse gas emissions. It predicts the oil sands will grow to make up 8% of Canada's greenhouse gas emissions by 2015. While the production industrial activity emissions per barrel of bitumen produced decreased 26% over the decade 1992–2002, total emissions from production activity were expected to increase due to higher production levels. As of 2006, to produce one barrel of oil from the oil sands released almost of greenhouse gases with total emissions estimated to be per year by 2015. A study by IHS CERA found that fuels made from Canadian oil sands resulted in significantly lower greenhouse gas emissions than many commonly cited estimates. A 2012 study by Swart and Weaver estimated that if only the economically viable reserve of oil sands was burnt, the global mean temperature would increase by 0.02 to 0.05 °C. If the entire oil-in-place of 1.8 trillion barrels were to be burnt, the predicted global mean temperature increase is 0.24 to 0.50 °C. Bergerson et al. found that while the WTW emissions can be higher than crude oil, the lower emitting oil sands cases can outperform higher emitting conventional crude cases. To offset greenhouse gas emissions from the oil sands and elsewhere in Alberta, sequestering carbon dioxide emissions inside depleted oil and gas reservoirs has been proposed. This technology is inherited from enhanced oil recovery methods. In July 2008, the Alberta government announced a C$2 billion fund to support sequestration projects in Alberta power plants and oil sands extraction and upgrading facilities. In November 2014, Fatih Birol, the chief economist of the International Energy Agency, described additional greenhouse gas emissions from Canada's oil sands as "extremely low". The IEA forecasts that in the next 25 years oil sands production in Canada will increase by more than , but Dr. Birol said "the emissions of this additional production is equal to only 23 hours of emissions of China — not even one day." The IEA is charged with responsibility for battling climate change, but Dr. Birol said he spends little time worrying about carbon emissions from oil sands. "There is a lot of discussion on oil sands projects in Canada and the United States and other parts of the world, but to be frank, the additional CO2 emissions coming from the oil sands is extremely low." Dr. Birol acknowledged that there is tremendous difference of opinion on the course of action regarding climate change, but added, "I hope all these reactions are based on scientific facts and sound analysis." In 2014, the U.S. Congressional Research Service published a report in preparation for the decision about permitting construction of the Keystone XL pipeline. The report states in part: "Canadian oil sands crudes are generally more GHG emission-intensive than other crudes they may displace in U.S. refineries, and emit an estimated 17% more GHGs on a life-cycle basis than the average barrel of crude oil refined in the United States". 
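As a rough consistency check of the Swart and Weaver figures quoted above, the short Python sketch below scales the full oil-in-place estimate linearly down to a smaller burnt volume. The 0.17-trillion-barrel reserve figure used here is an illustrative assumption, not a number taken from the study or from this article.

```python
# Back-of-envelope check of the Swart & Weaver (2012) figures quoted above,
# assuming warming scales roughly linearly with the amount of bitumen burnt.
OIL_IN_PLACE_TBBL = 1.8          # trillion barrels of oil in place (from the text)
WARMING_FULL = (0.24, 0.50)      # deg C if the entire oil in place were burnt (from the text)

# Implied warming per trillion barrels under a linear-scaling assumption
per_tbbl = tuple(w / OIL_IN_PLACE_TBBL for w in WARMING_FULL)
print(f"~{per_tbbl[0]:.2f} to {per_tbbl[1]:.2f} deg C per trillion barrels")

# Hypothetical "economically viable" reserve of ~0.17 trillion barrels
# (an assumed round number used here for illustration only).
viable_tbbl = 0.17
low, high = (viable_tbbl * p for p in per_tbbl)
print(f"Linear scaling gives ~{low:.3f} to {high:.3f} deg C for a {viable_tbbl} Tbbl reserve")
# which is broadly consistent with the 0.02 to 0.05 deg C range quoted above.
```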
According to Natural Resources Canada (NRCan), by 2017, the 23 percent increase in GHG emissions in Canada from 2005 to 2017, was "largely from increased oil sands production, particularly in-situ extraction". Aquatic life deformities There is conflicting research on the effects of the oil sands development on aquatic life. In 2007, Environment Canada completed a study that shows high deformity rates in fish embryos exposed to the oil sands. David W. Schindler, a limnologist from the University of Alberta, co-authored a study on Alberta's oil sands' contribution of aromatic polycyclic compounds, some of which are known carcinogens, to the Athabasca River and its tributaries. Scientists, local doctors, and residents supported a letter sent to the Prime Minister in September 2010 calling for an independent study of Lake Athabasca (which is downstream of the oil sands) to be initiated due to the rise of deformities and tumors found in fish caught there. The bulk of the research that defends the oil sands development is done by the Regional Aquatics Monitoring Program (RAMP), whose steering committee is composed largely of oil and gas companies. RAMP studies show that deformity rates are normal compared to historical data and the deformity rates in rivers upstream of the oil sands. Public health impacts In 2007, it was suggested that wildlife has been negatively affected by the oil sands; for instance, moose were found in a 2006 study to have as high as 453 times the acceptable levels of arsenic in their systems, though later studies lowered this to 17 to 33 times the acceptable level (although below international thresholds for consumption). Concerns have been raised concerning the negative impacts that the oil sands have on public health, including higher than normal rates of cancer among residents of Fort Chipewyan. However, John O'Connor, the doctor who initially reported the higher cancer rates and linked them to the oil sands development, was subsequently investigated by the Alberta College of Physicians and Surgeons. The College later reported that O'Connor's statements consisted of "mistruths, inaccuracies and unconfirmed information". In 2010, the Royal Society of Canada released a report stating that "there is currently no credible evidence of environmental contaminant exposures from oil sands reaching Fort Chipewyan at levels expected to cause elevated human cancer rates." In August 2011, the Alberta government initiated a provincial health study to examine whether a link exists between the higher rates of cancer and the oil sands emissions. In a report released in 2014, Alberta's Chief Medical Officer of Health, Dr. James Talbot, stated that "There isn't strong evidence for an association between any of these cancers and environmental exposure [to oil sands]." Rather, Talbot suggested that the cancer rates at Fort Chipewyan, which were slightly higher compared with the provincial average, were likely due to a combination of factors such as high rates of smoking, obesity, diabetes, and alcoholism as well as poor levels of vaccination. 
See also Athabasca oil sands Beaver River sandstone Cold Lake oil sands History of the petroleum industry in Canada (oil sands and heavy oil) Melville Island oil sands Oil megaprojects Oil shale Organic-rich sedimentary rocks Orinoco Belt Peace River oil sands Petroleum industry Project Oilsand Pyrobitumen RAVEN (Respecting Aboriginal Values & Environmental Needs) Shale gas Steam injection (oil industry) Stranded asset Thermal depolymerization Utah oil sands Wabasca oil field World energy consumption Asphalt concrete Notes References Further reading External links Oil Sands Discovery Centre, Fort McMurray, Alberta, Canada Edward Burtynsky, An aerial look at the Alberta Tar Sands G.R. Gray, R. Luhning: Bitumen The Canadian Encyclopedia Jiri Rezac, Alberta Oilsands photo story and aerials Exploring the Alberta tar sands, Citizenshift, National Film Board of Canada Indigenous Groups Lead Struggle Against Canada's Tar Sands – video report by Democracy Now! Extraction of vanadium from oil sands Canadian Oil Sands: Life-Cycle Assessments of Greenhouse Gas Emissions Congressional Research Service Alberta Government Oil Sands Information Portal Interactive Map and Data Library Petroleum geology Petroleum industry Articles containing video clips
Oil sands
[ "Chemistry" ]
15,787
[ "Bituminous sands", "Petroleum industry", "Petroleum", "Asphalt", "Chemical process engineering", "Petroleum geology" ]
299,790
https://en.wikipedia.org/wiki/Structural%20engineer
Structural engineers analyze, design, plan, and research structural components and structural systems to achieve design goals and ensure the safety and comfort of users or occupants. Their work takes account mainly of safety, technical, economic, and environmental concerns, but they may also consider aesthetic and social factors. Structural engineering is usually considered a specialty discipline within civil engineering, but it can also be studied in its own right. In the United States, most practicing structural engineers are currently licensed as civil engineers, but the situation varies from state to state. Some states have a separate license for structural engineers who are required to design special or high-risk structures such as schools, hospitals, or skyscrapers. In the United Kingdom, most structural engineers in the building industry are members of the Institution of Structural Engineers or the Institution of Civil Engineers. Typical structures designed by a structural engineer include buildings, towers, stadiums, and bridges. Other structures such as oil rigs, space satellites, aircraft, and ships may also be designed by a structural engineer. Most structural engineers are employed in the construction industry, however, there are also structural engineers in the aerospace, automobile, and shipbuilding industries. In the construction industry, they work closely with architects, civil engineers, mechanical engineers, electrical engineers, quantity surveyors, and construction managers. Structural engineers ensure that buildings and bridges are built to be strong enough and stable enough to resist all appropriate structural loads (e.g., gravity, wind, snow, rain, seismic (earthquake), earth pressure, temperature, and traffic) to prevent or reduce the loss of life or injury. They also design structures to be stiff enough to not deflect or vibrate beyond acceptable limits. Human comfort is an issue that is regularly considered limited. Fatigue is also an important consideration for bridges and aircraft design or for other structures that experience many stress cycles over their lifetimes. Consideration is also given to the durability of materials against possible deterioration which may impair performance over the design lifetime. Education The education of structural engineers is usually through a civil engineering bachelor's degree, and often a master's degree specializing in structural engineering. The fundamental core subjects for structural engineering are strength of materials or solid mechanics, structural analysis (static and dynamic), material science and numerical analysis. Reinforced concrete, composite structure, timber, masonry and structural steel designs are the general structural design courses that will be introduced in the next level of the education of structural engineering. The structural analysis courses which include structural mechanics, structural dynamics and structural failure analysis are designed to build up the fundamental analysis skills and theories for structural engineering students. At the senior year level or in graduate programs, prestressed concrete design, space frame design for building and aircraft, bridge engineering, civil and aerospace structure rehabilitation and other advanced structural engineering specializations are usually introduced. Recently in the United States, there have been discussions in the structural engineering community about the knowledge base of structural engineering graduates. 
Some have called for a master's degree to be the minimum standard for professional licensing as a civil engineer. There are separate structural engineering undergraduate degrees at the University of California, San Diego and the University of Architecture, Civil Engineering, and Geodesy, Sofia, Bulgaria. Many students who later become structural engineers major in civil, mechanical, or aerospace engineering degree programs, with an emphasis on structural engineering. Architectural engineering programs do offer structural emphases and are often in combined academic departments with civil engineering. Licensing or chartered status In many countries, structural engineering is a profession subject to licensure. Licensed engineers may receive the title of Professional Engineer, Chartered Engineer, Structural Engineer, or other title depending on the jurisdiction. The process to attain licensure to work as a structural engineer varies by location, but typically specifies university education, work experience, examination, and continuing education to maintain mastery of the subject. Professional Engineers bear legal responsibility for their work to ensure the safety and performance of their structures and only practice within the scope of their expertise. In the United States, persons practicing structural engineering must be licensed in each state in which they practice. Licensure to practice as a structural engineer can usually be obtained with the same qualifications as for a Civil Engineer, but some states require licensure specifically for structural engineering, with experience that is specific to structural work and not concurrent with experience claimed for another engineering profession. The qualifications for licensure typically include a specified minimum level of practicing experience, as well as the successful completion of a nationally-administered 16-hour exam, and possibly an additional state-specific exam. For instance, California requires that candidates pass a national exam, written by the National Council of Examiners for Engineering and Surveying (NCEES), as well as a state-specific exam which includes a seismic portion and a surveying portion. In most states, applying for the licensing exam requires four years of work experience after the candidate has graduated from an ABET-accredited university and passed the Fundamentals of Engineering exam, three years after receiving a master's degree, or two years after receiving a Ph.D. degree. Most US states do not have a separate structural engineering license. In 10 US states, including Alaska, California, Hawaii, Illinois, Nevada, Oregon, Utah, Washington, and others, there is an additional license or authority for Structural Engineering, obtained after the engineer has obtained a Civil Engineering license and practiced an additional amount of time with the Civil Engineering license. The scope of what structures must be designed by a Structural Engineer, not by a Civil Engineer without the S.E. license, is limited in Alaska, California, Nevada, Oregon, Utah, and Washington to certain high-importance structures such as stadiums, bridges, hospitals, and schools. The practice of structural engineering is reserved entirely to S.E. licensees in Hawaii and Illinois. The United Kingdom has one of the oldest professional institutions for structural engineers, the Institution of Structural Engineers. Founded as the Concrete Institute in 1908, it was renamed the Institution of Structural Engineers (IStructE) in 1922. It now has 22,000 members with branches in 32 countries.
The IStructE is one of several UK professional bodies empowered to grant the title of Chartered Engineer; its members are granted the title of Chartered Structural Engineer. The overall process to become chartered begins after graduation from a UK MEng degree, or a BEng with an MSc degree. To qualify as a chartered structural engineer, a graduate needs to go through four years of Initial Professional Development followed by a professional review interview. After passing the interview, the candidate sits an eight-hour professional review examination. The election to chartered membership (MIStructE) depends on the examination result. The candidate can register at the Engineering Council UK as a Chartered Structural Engineer once he or she has been elected as a Chartered Member. Legally it is not necessary to be a member of the IStructE when working on structures in the UK, however, industry practice, insurance, and liabilities dictate that an appropriately qualified engineer be responsible for such work. Career and Remuneration A 2010 survey of professionals occupying jobs in the construction industry showed that structural engineers in the UK earn an average wage of £35,009. The salary of structural engineers varies from sector to sector within the construction and built environment industry worldwide, depending on the project. For example, structural engineers working in public sector projects earn on average £37,083 per annum compared to the £43,947 average earned by those in commercial projects. Certain regions also represent higher average salaries, with structural engineers in the Middle East in all sectors, and of every level of experience, earning £45,083, compared to UK and EU countries where the average is £35,164. See also Architects Architectural engineering Building officials Civil engineering Earthquake engineering List of structural engineers List of structural engineering companies Structural engineering Structural failure References Nabih Youssef Associates Structural Engineers (www.nyase.com) National Council of Structural Engineers Associations (www.ncsea.com) External links A day in the life of a structural engineer IABSE (International Association for Bridge and Structural Engineering) Building engineering Engineering occupations
Structural engineer
[ "Engineering" ]
1,633
[ "Structural engineering", "Building engineering", "Civil engineering", "Structural engineers", "Architecture" ]
299,801
https://en.wikipedia.org/wiki/RC%20circuit
A resistor–capacitor circuit (RC circuit), or RC filter or RC network, is an electric circuit composed of resistors and capacitors. It may be driven by a voltage or current source and these will produce different responses. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit. RC circuits can be used to filter a signal by blocking certain frequencies and passing others. The two most common RC filters are the high-pass filters and low-pass filters; band-pass filters and band-stop filters usually require RLC filters, though crude ones can be made with RC filters. Natural response The simplest RC circuit consists of a resistor and a charged capacitor connected to one another in a single loop, without an external voltage source. Once the circuit is closed, the capacitor begins to discharge its stored energy through the resistor. The voltage across the capacitor, which is time-dependent, can be found by using Kirchhoff's current law. The current through the resistor must be equal in magnitude (but opposite in sign) to the time derivative of the accumulated charge on the capacitor. This results in the linear differential equation where is the capacitance of the capacitor. Solving this equation for yields the formula for exponential decay: where is the capacitor voltage at time . The time required for the voltage to fall to is called the RC time constant and is given by, In this formula, is measured in seconds, in ohms and in farads. Complex impedance The complex impedance, (in ohms) of a capacitor with capacitance (in farads) is The complex frequency is, in general, a complex number, where represents the imaginary unit: , is the exponential decay constant (in nepers per second), and is the sinusoidal angular frequency (in radians per second). Sinusoidal steady state Sinusoidal steady state is a special case in which the input voltage consists of a pure sinusoid (with no exponential decay). As a result, and the impedance becomes Series circuit By viewing the circuit as a voltage divider, the voltage across the capacitor is: and the voltage across the resistor is: Transfer functions The transfer function from the input voltage to the voltage across the capacitor is Similarly, the transfer function from the input to the voltage across the resistor is Poles and zeros Both transfer functions have a single pole located at In addition, the transfer function for the voltage across the resistor has a zero located at the origin. Gain and phase The magnitude of the gains across the two components are and and the phase angles are and These expressions together may be substituted into the usual expression for the phasor representing the output: Current The current in the circuit is the same everywhere since the circuit is in series: Impulse response The impulse response for each voltage is the inverse Laplace transform of the corresponding transfer function. It represents the response of the circuit to an input voltage consisting of an impulse or Dirac delta function. The impulse response for the capacitor voltage is where is the Heaviside step function and is the time constant. Similarly, the impulse response for the resistor voltage is where is the Dirac delta function Frequency-domain considerations These are frequency domain expressions. Analysis of them will show which frequencies the circuits (or filters) pass and reject. This analysis rests on a consideration of what happens to these gains as the frequency becomes very large and very small. 
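As a minimal numerical sketch of the standard first-order relations discussed in this article (the time constant RC, the half-power cutoff frequency 1/(2πRC), the low-pass and high-pass gain magnitudes, and the 63.2%/99.3% step-response figures), the following Python fragment uses assumed, illustrative component values.

```python
# Minimal sketch of the standard first-order series RC relations described
# above. Component values are illustrative assumptions, not from the article.
import math

R = 10_000.0   # resistance in ohms (assumed example value)
C = 100e-9     # capacitance in farads (assumed example value)

tau = R * C                        # RC time constant, in seconds
f_c = 1.0 / (2.0 * math.pi * tau)  # cutoff (half-power) frequency, in hertz

def gain_capacitor(f):
    """|Vc/Vin| when the output is taken across the capacitor (low-pass)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

def gain_resistor(f):
    """|Vr/Vin| when the output is taken across the resistor (high-pass)."""
    w_tau = 2.0 * math.pi * f * tau
    return w_tau / math.sqrt(1.0 + w_tau ** 2)

print(f"tau = {tau * 1e3:.2f} ms, cutoff = {f_c:.1f} Hz")
for f in (0.01 * f_c, f_c, 100 * f_c):
    print(f"f = {f:9.1f} Hz  low-pass gain {gain_capacitor(f):.3f}  "
          f"high-pass gain {gain_resistor(f):.3f}")

# Step response: fraction of the final value reached after n time constants,
# matching the usual 63.2 % (n = 1) and 99.3 % (n = 5) figures.
for n in (1, 5):
    print(f"after {n} tau the capacitor is {100 * (1 - math.exp(-n)):.1f} % charged")
```

At the cutoff frequency both outputs sit at a gain of about 0.707, i.e. half power, which is the behaviour that the frequency-domain analysis below makes precise.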
As : As : This shows that, if the output is taken across the capacitor, high frequencies are attenuated (shorted to ground) and low frequencies are passed. Thus, the circuit behaves as a low-pass filter. If, though, the output is taken across the resistor, high frequencies are passed and low frequencies are attenuated (since the capacitor blocks the signal as its frequency approaches 0). In this configuration, the circuit behaves as a high-pass filter. The range of frequencies that the filter passes is called its bandwidth. The point at which the filter attenuates the signal to half its unfiltered power is termed its cutoff frequency. This requires that the gain of the circuit be reduced to . Solving the above equation yields which is the frequency that the filter will attenuate to half its original power. Clearly, the phases also depend on frequency, although this effect is less interesting generally than the gain variations. As : As : So at DC (0 Hz), the capacitor voltage is in phase with the signal voltage while the resistor voltage leads it by 90°. As frequency increases, the capacitor voltage comes to have a 90° lag relative to the signal and the resistor voltage comes to be in-phase with the signal. Time-domain considerations This section relies on knowledge of , the natural logarithmic constant. The most straightforward way to derive the time domain behaviour is to use the Laplace transforms of the expressions for and given above. This effectively transforms . Assuming a step input (i.e. before and then afterwards): Partial fractions expansions and the inverse Laplace transform yield: These equations are for calculating the voltage across the capacitor and resistor respectively while the capacitor is charging; for discharging, the equations are vice versa. These equations can be rewritten in terms of charge and current using the relationships and (see Ohm's law). Thus, the voltage across the capacitor tends towards as time passes, while the voltage across the resistor tends towards 0, as shown in the figures. This is in keeping with the intuitive point that the capacitor will be charging from the supply voltage as time passes, and will eventually be fully charged. These equations show that a series RC circuit has a time constant, usually denoted being the time it takes the voltage across the component to either rise (across the capacitor) or fall (across the resistor) to within of its final value. That is, is the time it takes to reach and to reach . The rate of change is a fractional per . Thus, in going from to , the voltage will have moved about 63.2% of the way from its level at toward its final value. So the capacitor will be charged to about 63.2% after , and essentially fully charged (99.3%) after about . When the voltage source is replaced with a short circuit, with the capacitor fully charged, the voltage across the capacitor drops exponentially with from towards 0. The capacitor will be discharged to about 36.8% after , and essentially fully discharged (0.7%) after about . Note that the current, , in the circuit behaves as the voltage across the resistor does, via Ohm's Law. These results may also be derived by solving the differential equations describing the circuit: The first equation is solved by using an integrating factor and the second follows easily; the solutions are exactly the same as those obtained via Laplace transforms. Integrator Consider the output across the capacitor at high frequency, i.e. 
This means that the capacitor has insufficient time to charge up and so its voltage is very small. Thus the input voltage approximately equals the voltage across the resistor. To see this, consider the expression for given above: but note that the frequency condition described means that so which is just Ohm's Law. Now, so which is an integrator across the capacitor. Differentiator Consider the output across the resistor at low frequency i.e., This means that the capacitor has time to charge up until its voltage is almost equal to the source's voltage. Considering the expression for again, when so Now, which is a differentiator across the resistor. Integration and differentiation can also be achieved by placing resistors and capacitors as appropriate on the input and feedback loop of operational amplifiers (see operational amplifier integrator and operational amplifier differentiator). Parallel circuit The parallel RC circuit is generally of less interest than the series circuit. This is largely because the output voltage is equal to the input voltage — as a result, this circuit does not act as a filter on the input signal unless fed by a current source. With complex impedances: This shows that the capacitor current is 90° out of phase with the resistor (and source) current. Alternatively, the governing differential equations may be used: When fed by a current source, the transfer function of a parallel RC circuit is: Synthesis It is sometimes required to synthesise an RC circuit from a given rational function in s. For synthesis to be possible in passive elements, the function must be a positive-real function. To synthesise as an RC circuit, all the critical frequencies (poles and zeroes) must be on the negative real axis and alternate between poles and zeroes with an equal number of each. Further, the critical frequency nearest the origin must be a pole, assuming the rational function represents an impedance rather than an admittance. The synthesis can be achieved with a modification of the Foster synthesis or Cauer synthesis used to synthesise LC circuits. In the case of Cauer synthesis, a ladder network of resistors and capacitors will result. See also RC time constant RL circuit LC circuit RLC circuit Electrical network List of electronics topics Step response References Bibliography Bakshi, U.A.; Bakshi, A.V., Circuit Analysis - II, Technical Publications, 2009 . Horowitz, Paul; Hill, Winfield, The Art of Electronics (3rd edition), Cambridge University Press, 2015 . Analog circuits Electronic filter topology
RC circuit
[ "Engineering" ]
2,008
[ "Analog circuits", "Electronic engineering" ]
299,813
https://en.wikipedia.org/wiki/Acoustical%20engineering
Acoustical engineering (also known as acoustic engineering) is the branch of engineering dealing with sound and vibration. It includes the application of acoustics, the science of sound and vibration, in technology. Acoustical engineers are typically concerned with the design, analysis and control of sound. One goal of acoustical engineering can be the reduction of unwanted noise, which is referred to as noise control. Unwanted noise can have significant impacts on animal and human health and well-being, reduce attainment by students in schools, and cause hearing loss. Noise control principles are implemented into technology and design in a variety of ways, including control by redesigning sound sources, the design of noise barriers, sound absorbers, suppressors, and buffer zones, and the use of hearing protection (earmuffs or earplugs). Besides noise control, acoustical engineering also covers positive uses of sound, such as the use of ultrasound in medicine, programming digital synthesizers, designing concert halls to enhance the sound of orchestras and specifying railway station sound systems so that announcements are intelligible. Acoustic engineer (professional) Acoustic engineers usually possess a bachelor's degree or higher qualification in acoustics, physics or another engineering discipline. Practicing as an acoustic engineer usually requires a bachelor's degree with significant scientific and mathematical content. Acoustic engineers might work in acoustic consultancy, specializing in particular fields, such as architectural acoustics, environmental noise or vibration control. In other industries, acoustic engineers might: design automobile sound systems; investigate human response to sounds, such as urban soundscapes and domestic appliances; develop audio signal processing software for mixing desks, and design loudspeakers and microphones for mobile phones. Acousticians are also involved in researching and understanding sound scientifically. Some positions, such as faculty require a Doctor of Philosophy. In most countries, a degree in acoustics can represent the first step towards professional certification and the degree program may be certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements before being certified. Once certified, the engineer is designated the title of Chartered Engineer (in most Commonwealth countries). Subdisciplines The listed subdisciplines are loosely based on the PACS (Physics and Astronomy Classification Scheme) coding used by the Acoustical Society of America. Aeroacoustics Aeroacoustics is concerned with how noise is generated by the movement of air, for instance via turbulence, and how sound propagates through the fluid air. Aeroacoustics plays an important role in understanding how noise is generated by aircraft and wind turbines, as well as exploring how wind instruments work. Audio signal processing Audio signal processing is the electronic manipulation of audio signals using analog and digital signal processing. It is done for a variety of reasons, including: to enhance a sound, e.g. by applying an audio effect such as reverberation; to remove unwanted noises from a signal, e.g. echo cancellation in internet voice calls; to compress an audio signal to allow efficient transmission, e.g. perceptual coding in MP3 and Opus to understand the content of the signal, e.g. identification of music tracks via music information retrieval. 
Audio engineers develop and use audio signal processing algorithms. Architectural acoustics Architectural acoustics (also known as building acoustics) is the science and engineering of achieving a good sound within a building. Architectural acoustics can be about achieving good speech intelligibility in a theatre, restaurant or railway station, enhancing the quality of music in a concert hall or recording studio, or suppressing noise to make offices and homes more productive and pleasant places to work and live. Architectural acoustic design is usually done by acoustic consultants. Bioacoustics Bioacoustics concerns the scientific study of sound production and hearing in animals. It can include: acoustic communication and associated animal behavior and evolution of species; how sound is produced by animals; the auditory mechanisms and neurophysiology of animals; the use of sound to monitor animal populations, and the effect of man-made noise on animals. Electroacoustics This branch of acoustic engineering deals with the design of headphones, microphones, loudspeakers, sound systems, sound reproduction, and recording. There has been a rapid increase in the use of portable electronic devices which can reproduce sound and rely on electroacoustic engineering, e.g. mobile phones, portable media players, and tablet computers. The term "electroacoustics" is also used to describe a set of electrokinetic effects that occur in heterogeneous liquids under influence of ultrasound. Environmental noise Environmental acoustics is concerned with the control of noise and vibrations caused by traffic, aircraft, industrial equipment, recreational activities and anything else that might be considered a nuisance. Acoustical engineers concerned with environmental acoustics face the challenge of measuring or predicting likely noise levels, determining an acceptable level for that noise, and determining how the noise can be controlled. Environmental acoustics work is usually done by acoustic consultants or those working in environmental health. Recent research work has put a strong emphasis on soundscapes, the positive use of sound (e.g. fountains, bird song), and the preservation of tranquility. Musical acoustics Musical acoustics is concerned with researching and describing the physics of music and its perception – how sounds employed as music work. This includes: the function and design of musical instruments including electronic synthesizers; the human voice (the physics and neurophysiology of singing); computer analysis of music and composition; the clinical use of music in music therapy, and the perception and cognition of music. Noise control Noise control is a set of strategies to reduce noise pollution by reducing noise at its source, by inhibiting sound propagation using noise barriers or similar, or by the use of ear protection (earmuffs or earplugs). Control at the source is the most cost-effective way of providing noise control. Noise control engineering applied to cars and trucks is known as noise, vibration, and harshness (NVH). Other techniques to reduce product noise include vibration isolation, application of acoustic absorbent and acoustic enclosures. Acoustical engineering can go beyond noise control to look at what is the best sound for a product, for instance, manipulating the sound of door closures on automobiles. Psychoacoustics Psychoacoustics tries to explain how humans respond to what they hear, whether that is an annoying noise or beautiful music. 
In many branches of acoustic engineering, a human listener is a final arbitrator as to whether a design is successful, for instance, whether sound localisation works in a surround sound system. "Psychoacoustics seeks to reconcile acoustical stimuli and all the scientific, objective, and physical properties that surround them, with the physiological and psychological responses evoked by them." Speech Speech is a major area of study for acoustical engineering, including the production, processing and perception of speech. This can include physics, physiology, psychology, audio signal processing and linguistics. Speech recognition and speech synthesis are two important aspects of the machine processing of speech. Ensuring speech is transmitted intelligibly, efficiently and with high quality; in rooms, through public address systems and through telephone systems are other important areas of study. Ultrasonics Ultrasonics deals with sound waves in solids, liquids and gases at frequencies too high to be heard by the average person. Specialist areas include medical ultrasonics (including medical ultrasonography), sonochemistry, nondestructive testing, material characterisation and underwater acoustics (sonar). Underwater acoustics Underwater acoustics is the scientific study of sound in water. It is concerned with both natural and man-made sound and its generation underwater; how it propagates, and the perception of the sound by animals. Applications include sonar to locate submerged objects such as submarines, underwater communication by animals, observation of sea temperatures for climate change monitoring, and marine biology. Vibration and dynamics Acoustic engineers working on vibration study the motions and interactions of mechanical systems with their environments, including measurement, analysis and control. This might include: ground vibrations from railways and construction; vibration isolation to reduce noise getting into recording studios; studying the effects of vibration on humans (vibration white finger); vibration control to protect a bridge from earthquakes, or modelling the propagation of structure-borne sound through buildings. Fundamental science Although the way in which sound interacts with its surroundings is often extremely complex, there are a few ideal sound wave behaviours that are fundamental to understanding acoustical design. Complex sound wave behaviors include absorption, reverberation, diffraction, and refraction. Absorption is the loss of energy that occurs when a sound wave reflects off of a surface, and refers to both the sound energy transmitted through and dissipated by the surface material. Reverberation is the persistence of sound caused by repeated boundary reflections after the source of the sound stops. This principle is particularly important in enclosed spaces. Diffraction is the bending of sound waves around surfaces in the path of the wave. Refraction is the bending of sound waves caused by changes in the medium through which the wave is passing. For example, temperature gradients can cause sound wave refraction. Acoustical engineers apply these fundamental concepts, along with mathematical analysis, to control sound for a variety of applications. 
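As one concrete example of the mathematical analysis mentioned above, the sketch below applies Sabine's classical reverberation-time estimate to a hypothetical room. The formula and all room values are standard illustrative assumptions and are not drawn from this article.

```python
# Illustrative sketch of one standard architectural-acoustics calculation:
# Sabine's reverberation-time estimate RT60 = 0.161 * V / A, where V is the
# room volume (m^3) and A the total absorption (m^2 sabins). The room below
# is hypothetical and used only for illustration.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

room = [
    (200.0, 0.03),   # plaster walls and ceiling, fairly reflective
    (50.0, 0.30),    # carpeted floor, more absorptive
    (30.0, 0.80),    # acoustic panels
]
print(f"Estimated RT60: {rt60_sabine(300.0, room):.2f} s")
```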
Associations Acoustical Society of America Technical Committee on Engineering Acoustics Audio Engineering Society Australian Acoustical Society Canadian Acoustical Association Institute of Acoustics, Chinese Academy of Sciences Institute of Acoustics (United Kingdom) Danish Sound Cluster (Denmark) See also Audio Engineering :Category:Acoustical engineers :Category:Audio engineers References Barron, R. (2003). Industrial noise control and acoustics. New York: Marcel Dekker Inc. Retrieved from CRCnetBase Hemond, C. (1983). In Ingerman S. ( Ed.), Engineering acoustics and noise control. New Jersey: Prentice-Hall. Highway traffic noise barriers at a glance. Retrieved February 1, 2010, from http://www.fhwa.dot.gov/environment/keepdown.htm Kinsler, L., Frey, A., Coppens, A., & Sanders, J. (Eds.). (2000). Fundamentals of acoustics (4th ed.). New York: John Wiley and Sons. Kleppe, J. (1989). Engineering applications of acoustics. Sparks, Nevada: Artech House. Moser, M. (2009). Engineering acoustics (S. Zimmerman, R. Ellis Trans.). (2nd ed.). Berlin: Springer-Verlag. Acoustics Noise reduction Engineering disciplines Sound Noise control
Acoustical engineering
[ "Physics", "Engineering" ]
2,168
[ "nan", "Classical mechanics", "Acoustics" ]
2,700,377
https://en.wikipedia.org/wiki/Thermosiphon
A thermosiphon (or thermosyphon) is a device that employs a method of passive heat exchange based on natural convection, which circulates a fluid without the necessity of a mechanical pump. Thermosiphoning is used for circulation of liquids and volatile gases in heating and cooling applications such as heat pumps, water heaters, boilers and furnaces. Thermosiphoning also occurs across air temperature gradients such as those occurring in a wood-fire chimney or solar chimney. This circulation can either be open-loop, as when the substance in a holding tank is passed in one direction via a heated transfer tube mounted at the bottom of the tank to a distribution point — even one mounted above the originating tank — or it can be a vertical closed-loop circuit with return to the original container. Its purpose is to simplify the transfer of liquid or gas while avoiding the cost and complexity of a conventional pump. Simple thermosiphon Natural convection of the liquid starts when heat transfer to the liquid gives rise to a temperature difference from one side of the loop to the other. The phenomenon of thermal expansion means that a temperature difference will have a corresponding difference in density across the loop. The warmer fluid on one side of the loop is less dense and thus more buoyant than the cooler fluid on the other side. The warmer fluid will "float" above the cooler fluid, and the cooler fluid will "sink" below the warmer fluid. This phenomenon of natural convection is known by the saying "heat rises". Convection moves the heated liquid upwards in the system as it is simultaneously replaced by cooler liquid returning by gravity. A good thermosiphon has very little hydraulic resistance so that liquid can flow easily under the relatively low pressure produced by natural convection. Heat pipes In some situations the flow of liquid may be reduced further, or stopped, perhaps because the loop is not entirely full of liquid. In this case, the system no longer convects, so it is not a usual "thermosiphon". Heat can still be transferred in this system by the evaporation and condensation of vapor; however, the system is properly classified as a heat pipe thermosyphon. If the system also contains other fluids, such as air, then the heat flux density will be less than in a real heat pipe, which contains only a single substance. The thermosiphon has been sometimes incorrectly described as a 'gravity return heat pipe'. Heat pipes usually have a wick to return the condensate to the evaporator via capillary action. A wick is not needed in a thermosiphon because gravity moves the liquid. The wick allows heat pipes to transfer heat when there is no gravity, which is useful in space. A thermosiphon is "simpler" than a heat pipe. (Single-phase) thermosiphons can only transfer heat "upward", or away from the acceleration vector. Thus, orientation is much more important for thermosiphons than for heatpipes. Also, thermosiphons can fail because of a bubble in the loop, and require a circulating loop of pipes. Reboilers and calandria If the piping of a thermosiphon resists flow, or excessive heat is applied, the liquid may boil. Since the gas is more buoyant than the liquid, the convective pressure is greater. This is a well known invention called a reboiler. A group of reboilers attached to a pair of plena is called a calandria. 
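To put an approximate number on the relatively low pressure produced by natural convection, the sketch below estimates the buoyancy head of a single-phase water loop using the usual small-density-change approximation. The loop height, temperature difference and fluid properties are assumed values for illustration only.

```python
# Rough estimate of the buoyancy pressure that drives a water thermosiphon,
# using the small-density-change (Boussinesq) approximation
#   delta_p ~= rho * beta * delta_T * g * h.
# All numbers are assumed, illustrative values, not figures from this article.

RHO = 998.0      # kg/m^3, water near room temperature
BETA = 2.1e-4    # 1/K, thermal expansion coefficient of water (~20 C)
G = 9.81         # m/s^2

def driving_pressure(delta_T_kelvin, loop_height_m):
    """Approximate pressure difference across a single-phase loop."""
    return RHO * BETA * delta_T_kelvin * G * loop_height_m

# e.g. a 2 m tall loop with a 30 K temperature difference
dp = driving_pressure(30.0, 2.0)
print(f"~{dp:.0f} Pa of driving pressure")   # on the order of 100 Pa
```

A driving pressure on the order of only 100 Pa is why a good thermosiphon must present very little hydraulic resistance to the circulating fluid.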
In some circumstances, for example the cooling system for an older (pre 1950s) car, the boiling of the fluid will cause the system to stop working, as the volume of steam created displaces too much of the water and circulation stops. The term "phase change thermosiphon" is a misnomer and should be avoided. When phase change occurs in a thermosiphon, it means that the system either does not have enough fluid, or it is too small to transfer all of the heat by convection alone. To improve the performance, either more fluid is needed (possibly in a larger thermosiphon), or all other fluids (including air) should be pumped out of the loop. Solar energy Thermosiphons are used in some liquid-based solar heating systems to heat a liquid such as water. The water is heated passively by solar energy and relies on heat energy being transferred from the sun to a solar collector. The heat from the collector can be transferred to water in two ways: directly where water circulates through the collector, or indirectly where an anti-freeze solution carries the heat from the collector and transfers it to water in the tank via a heat exchanger. Convection allows for the movement of the heated liquid out of the solar collector to be replaced by colder liquid which is in turn heated. Due to this principle, it is necessary for the water to be stored in a tank above the collector. Architecture In locations historically dominated by permafrost conditions, thermosiphons may be used to counter adverse geologic forces on the foundations of buildings, pipelines and other structures caused by the thawing of the permafrost. A study published in 2006 by oil giant ConocoPhillips reports that Alaska's permafrost, upon which much of the state's infrastructure is built, has degraded since 1982 amid record warm temperatures. According to the Alaska Climate Research Center at the University of Alaska Fairbanks, between 1949 and 2018 the average annual temperature in Alaska rose 4.0 degrees Fahrenheit, with an increase of 7.2 degrees Fahrenheit over the winter. Computing Thermosiphons are used for watercooling internal computer components, most commonly the processor. While any suitable liquid can be used, water is the easiest liquid to use in thermosiphon systems. Unlike traditional watercooling systems, thermosiphon systems do not rely on a pump but on convection for the movement of heated water (which may become vapour) from the components upwards to a heat exchanger. There the water is cooled and is ready to be recirculated. The most commonly used heat exchanger is a radiator, where fans actively blow air across an increased surface area to condense the vapour to a liquid. The denser liquid falls, thus recirculating through the system and repeating the process. No pump is required. The cycle of evaporation and condensation is driven by the difference in temperature and gravity. Uses Without proper cooling, a modern processor chip can rapidly reach temperatures that cause it to malfunction. Even with a common heat sink and fan attached, typical processor operating temperatures may still reach up to 70 °C (160 °F). A thermosiphon can efficiently transfer heat over a much wider temperature range and can typically maintain the processor temperature 10–20 °C cooler than a traditional heat sink and fan. In some cases, it is also possible that a thermosiphon may cover multiple heat sources and, design-wise, be more compact than an appropriately sized conventional heat sink and fan. 
Drawbacks Thermosiphons must be mounted such that vapor rises up and liquid flows down to the boiler, with no bends in the tubing for liquid to pool. Also, the thermosiphon's fan that cools the gas needs cool air to operate. The system has to be completely airtight; if not, the process of thermosiphon will not take effect and cause the water to only evaporate over a small period of time. Engine cooling Some early cars, motor vehicles, and engine-powered farm and industrial equipment used thermosiphon circulation to move cooling water between their cylinder block and radiator. This method of water circulation depends on keeping enough cool air moving past the radiator to provide a sufficient temperature differential; the air movement was accomplished by the forward motion of the vehicle and by the use of fans. As engine power increased, increased flow of water was required, so engine-driven pumps were added to assist circulation. More compact engines began to use smaller radiators and require more convoluted flow patterns, so the water circulation became entirely dependent on the pump and might even be reversed against its natural direction. An engine that circulates its cooling water only by thermosiphon is susceptible to overheating during prolonged periods of idling or very slow travel since the lack of forward motion provides too little airflow past the radiator, unless one or more fans are able to move enough air by themselves. Thermosiphon systems are also very sensitive to low coolant level, i.e. losing only a small amount of coolant stops the circulation; a pump-driven system is much more robust and can typically handle a lower coolant level. Espresso machines Many espresso machine designs use a thermosiphon in order to maintain a stable temperature. The E-61 espresso machine has a group head with a thermosiphon. This group head is common on many espresso machines today. Some lever espresso machines have a double wall around the piston in their group that is used for a thermosiphon. A modern example would be the machines from Londinium. See also and References External links HP Labs report on thermosiphons for computer cooling (PDF) Computer hardware cooling Heating, ventilation, and air conditioning Convection
Thermosiphon
[ "Physics", "Chemistry" ]
1,947
[ "Transport phenomena", "Physical phenomena", "Convection", "Thermodynamics" ]
2,701,007
https://en.wikipedia.org/wiki/Ruthenium%28III%29%20chloride
Ruthenium(III) chloride is the chemical compound with the formula RuCl3. "Ruthenium(III) chloride" more commonly refers to the hydrate RuCl3·xH2O. Both the anhydrous and hydrated species are dark brown or black solids. The hydrate, with a varying proportion of water of crystallization, often approximating to a trihydrate, is a commonly used starting material in ruthenium chemistry. Preparation and properties Anhydrous ruthenium(III) chloride is usually prepared by heating powdered ruthenium metal with chlorine. In the original synthesis, the chlorination was conducted in the presence of carbon monoxide, the product being carried by the gas stream and crystallising upon cooling. Two polymorphs of RuCl3 are known. The black α-form adopts the CrCl3-type structure with long Ru-Ru contacts of 346 pm. This polymorph has honeycomb layers of Ru3+ which are surrounded with an octahedral cage of Cl− anions. The ruthenium cations are magnetic residing in a low-spin J~1/2 ground state with net angular momentum L=1. Layers of α-RuCl3 are stacked on top of each other with weak Van der Waals forces. These can be cleaved to form mono-layers using scotch tape. The dark brown metastable β-form crystallizes in a hexagonal cell; this form consists of infinite chains of face-sharing octahedra with Ru-Ru contacts of 283 pm, similar to the structure of zirconium trichloride. The β-form is irreversibly converted to the α-form at 450–600 °C. The β-form is diamagnetic, whereas α-RuCl3 is paramagnetic at room temperature. RuCl3 vapour decomposes into the elements at high temperatures ; the enthalpy change at 750 °C (1020 K), ΔdissH1020 has been estimated as +240 kJ/mol. Solid state physics α-RuCl3 was proposed as a candidate for a Kitaev quantum spin liquid state when neutron scattering revealed an unusual magnetic spectrum, and thermal transport revealed chiral Majorana Fermions when subject to a magnetic field. Coordination chemistry of hydrated ruthenium trichloride As the most commonly available ruthenium compound, RuCl3·xH2O is the precursor to many hundreds of chemical compounds. The noteworthy property of ruthenium complexes, chlorides and otherwise, is the existence of more than one oxidation state, several of which are kinetically inert. All second and third-row transition metals form exclusively low spin complexes, whereas ruthenium is special in the stability of adjacent oxidation states, especially Ru(II), Ru(III) (as in the parent RuCl3·xH2O) and Ru(IV). Illustrative complexes derived from "ruthenium trichloride" RuCl2(PPh3)3, a chocolate-colored, benzene-soluble species, which in turn is also a versatile starting material. It arises approximately as follows: 2RuCl3·xH2O + 7PPh3 → 2RuCl2(PPh3)3 + OPPh3 + 5H2O + 2HCl Diruthenium tetraacetate chloride, a mixed valence polymer, is obtained by reduction of ruthenium trichloride in acetic acid. [RuCl2(C6H6)]2 arises from 1,3-cyclohexadiene or 1,4-cyclohexadiene as follows: 2RuCl3·xH2O + 2C6H8 → [RuCl2(C6H6)]2 + 6H2O + 2HCl + H2 Ru(bipy)3Cl2, an intensely luminescent salt with a long-lived excited state, arising as follows: 2 RuCl3·xH2O + 6 bipy + CH3CH2OH → 2 [Ru(bipy)3]Cl2 + 6 H2O + CH3CHO + 2 HCl This reaction proceeds via the intermediate cis-Ru(bipy)2Cl2. [RuCl2(C5Me5)]2, arising as follows: 2RuCl3·xH2O + 2C5Me5H → [RuCl2(C5Me5)]2 + 6H2O + 2 HCl [RuCl2(C5Me5)]2 can be further reduced to [RuCl(C5Me5)]4. Ru(C5H7O2)3 arises as follows: RuCl3·xH2O + 3C5H8O2 → Ru(C5H7O2)3 + 3H2O + 3HCl RuO4, is produced by oxidation. 
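As a quick arithmetic check, the sketch below verifies the atom balance of the benzene-complex preparation listed above, taking the hydrate to be the trihydrate (x = 3) mentioned earlier in the article.

```python
# Atom-balance check of one preparation quoted above,
#   2 RuCl3.xH2O + 2 C6H8 -> [RuCl2(C6H6)]2 + 6 H2O + 2 HCl + H2,
# assuming x = 3 waters of crystallization (an assumption based on the
# trihydrate approximation noted at the top of the article).
from collections import Counter

X = 3  # assumed waters of crystallization

def atoms(counter, coeff=1):
    return Counter({el: n * coeff for el, n in counter.items()})

# element counts per formula unit
RuCl3_xH2O = Counter({"Ru": 1, "Cl": 3, "H": 2 * X, "O": X})
C6H8       = Counter({"C": 6, "H": 8})
dimer      = Counter({"Ru": 2, "Cl": 4, "C": 12, "H": 12})  # [RuCl2(C6H6)]2
H2O        = Counter({"H": 2, "O": 1})
HCl        = Counter({"H": 1, "Cl": 1})
H2         = Counter({"H": 2})

lhs = atoms(RuCl3_xH2O, 2) + atoms(C6H8, 2)
rhs = atoms(dimer, 1) + atoms(H2O, 6) + atoms(HCl, 2) + atoms(H2, 1)
print("balanced" if lhs == rhs else f"unbalanced: {lhs} vs {rhs}")
```

As printed, the equation only closes for x = 3, which is consistent with the trihydrate approximation given earlier.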
Some of these compounds were utilized in the research related to two Nobel Prizes. Ryōji Noyori was awarded the Nobel Prize in Chemistry in 2001 for the development of practical asymmetric hydrogenation catalysts based on ruthenium. Robert H. Grubbs was awarded the Nobel Prize in Chemistry in 2005 for the development of practical alkene metathesis catalysts based on ruthenium alkylidene derivatives. Carbon monoxide derivatives RuCl3(H2O)x reacts with carbon monoxide under mild conditions. In contrast, iron chlorides do not react with CO. CO reduces the red-brown trichloride to yellowish Ru(II) species. Specifically, exposure of an ethanol solution of RuCl3(H2O)x to 1 atm of CO gives, depending on the specific conditions, [Ru2Cl4(CO)4], [Ru2Cl4(CO)4]2−, and [RuCl3(CO)3]−. Addition of ligands (L) to such solutions gives Ru-Cl-CO-L compounds (L = PR3). Reduction of these carbonylated solutions with Zn affords the orange triangular cluster Ru3(CO)12. 3RuCl3·xH2O + 4.5Zn + 12CO (high pressure) → Ru3(CO)12 + 3xH2O + 4.5ZnCl2 Sources References Further reading Ruthenium(III) compounds Chlorides Platinum group halides Coordination complexes
Ruthenium(III) chloride
[ "Chemistry" ]
1,369
[ "Chlorides", "Inorganic compounds", "Coordination complexes", "Coordination chemistry", "Salts" ]
2,702,039
https://en.wikipedia.org/wiki/Composition%20operator
In mathematics, the composition operator with symbol is a linear operator defined by the rule where denotes function composition. The study of composition operators is covered by AMS category 47B33. In physics In physics, and especially the area of dynamical systems, the composition operator is usually referred to as the Koopman operator (and its wild surge in popularity is sometimes jokingly called "Koopmania"), named after Bernard Koopman. It is the left-adjoint of the transfer operator of Frobenius–Perron. In Borel functional calculus Using the language of category theory, the composition operator is a pull-back on the space of measurable functions; it is adjoint to the transfer operator in the same way that the pull-back is adjoint to the push-forward; the composition operator is the inverse image functor. Since the domain considered here is that of Borel functions, the above describes the Koopman operator as it appears in Borel functional calculus. In holomorphic functional calculus The domain of a composition operator can be taken more narrowly, as some Banach space, often consisting of holomorphic functions: for example, some Hardy space or Bergman space. In this case, the composition operator lies in the realm of some functional calculus, such as the holomorphic functional calculus. Interesting questions posed in the study of composition operators often relate to how the spectral properties of the operator depend on the function space. Other questions include whether is compact or trace-class; answers typically depend on how the function behaves on the boundary of some domain. When the transfer operator is a left-shift operator, the Koopman operator, as its adjoint, can be taken to be the right-shift operator. An appropriate basis, explicitly manifesting the shift, can often be found in the orthogonal polynomials. When these are orthogonal on the real number line, the shift is given by the Jacobi operator. When the polynomials are orthogonal on some region of the complex plane (viz, in Bergman space), the Jacobi operator is replaced by a Hessenberg operator. Applications In mathematics, composition operators commonly occur in the study of shift operators, for example, in the Beurling–Lax theorem and the Wold decomposition. Shift operators can be studied as one-dimensional spin lattices. Composition operators appear in the theory of Aleksandrov–Clark measures. The eigenvalue equation of the composition operator is Schröder's equation, and the principal eigenfunction is often called Schröder's function or Koenigs function. The composition operator has been used in data-driven techniques for dynamical systems in the context of dynamic mode decomposition algorithms, which approximate the modes and eigenvalues of the composition operator. See also Carleman linearization Dynamic mode decomposition References C. C. Cowen and B. D. MacCluer, Composition operators on spaces of analytic functions. Studies in Advanced Mathematics. CRC Press, Boca Raton, Florida, 1995. xii+388 pp. . J. H. Shapiro, Composition operators and classical function theory. Universitext: Tracts in Mathematics. Springer-Verlag, New York, 1993. xvi+223 pp. . Dynamical systems Functional analysis Operator theory Topological vector spaces
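For reference, the defining rule and the Schröder eigenvalue equation described in the article can be written out as follows; the notation is the conventional one rather than anything specific to this text.

```latex
% Standard definition of the composition operator with symbol \varphi:
\[
  (C_\varphi f)(x) \;=\; (f \circ \varphi)(x) \;=\; f\bigl(\varphi(x)\bigr).
\]
% Its eigenvalue equation is Schröder's equation; the principal eigenfunction
% \psi (the Koenigs function) satisfies, for some eigenvalue \lambda,
\[
  (C_\varphi \psi)(x) \;=\; \psi\bigl(\varphi(x)\bigr) \;=\; \lambda\,\psi(x).
\]
```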
Composition operator
[ "Physics", "Mathematics" ]
671
[ "Functions and mappings", "Vector spaces", "Mathematical objects", "Linear operators", "Topological vector spaces", "Space (mathematics)", "Mechanics", "Mathematical relations", "Dynamical systems" ]
2,702,319
https://en.wikipedia.org/wiki/Nuclear%20operators%20between%20Banach%20spaces
In mathematics, nuclear operators between Banach spaces are a linear operators between Banach spaces in infinite dimensions that share some of the properties of their counter-part in finite dimension. In Hilbert spaces such operators are usually called trace class operators and one can define such things as the trace. In Banach spaces this is no longer possible for general nuclear operators, it is however possible for -nuclear operator via the Grothendieck trace theorem. The general definition for Banach spaces was given by Grothendieck. This article presents both cases but concentrates on the general case of nuclear operators on Banach spaces. Nuclear operators on Hilbert spaces An operator on a Hilbert space is compact if it can be written in the form where and and are (not necessarily complete) orthonormal sets. Here is a set of real numbers, the set of singular values of the operator, obeying if The bracket is the scalar product on the Hilbert space; the sum on the right hand side must converge in norm. An operator that is compact as defined above is said to be or if Properties A nuclear operator on a Hilbert space has the important property that a trace operation may be defined. Given an orthonormal basis for the Hilbert space, the trace is defined as Obviously, the sum converges absolutely, and it can be proven that the result is independent of the basis. It can be shown that this trace is identical to the sum of the eigenvalues of (counted with multiplicity). Nuclear operators on Banach spaces The definition of trace-class operator was extended to Banach spaces by Alexander Grothendieck in 1955. Let and be Banach spaces, and be the dual of that is, the set of all continuous or (equivalently) bounded linear functionals on with the usual norm. There is a canonical evaluation map (from the projective tensor product of and to the Banach space of continuous linear maps from to ). It is determined by sending and to the linear map An operator is called if it is in the image of this evaluation map. -nuclear operators An operator is said to be if there exist sequences of vectors with functionals with and complex numbers with such that the operator may be written as with the sum converging in the operator norm. Operators that are nuclear of order 1 are called : these are the ones for which the series is absolutely convergent. Nuclear operators of order 2 are called Hilbert–Schmidt operators. Relation to trace-class operators With additional steps, a trace may be defined for such operators when Properties The trace and determinant can no longer be defined in general in Banach spaces. However they can be defined for the so-called -nuclear operators via Grothendieck trace theorem. Generalizations More generally, an operator from a locally convex topological vector space to a Banach space is called if it satisfies the condition above with all bounded by 1 on some fixed neighborhood of 0. An extension of the concept of nuclear maps to arbitrary monoidal categories is given by . A monoidal category can be thought of as a category equipped with a suitable notion of a tensor product. An example of a monoidal category is the category of Banach spaces or alternatively the category of locally convex, complete, Hausdorff spaces; both equipped with the projective tensor product. A map in a monoidal category is called if it can be written as a composition for an appropriate object and maps where is the monoidal unit. 
In the monoidal category of Banach spaces, equipped with the projective tensor product, a map is thick if and only if it is nuclear. Examples Suppose that and are Hilbert-Schmidt operators between Hilbert spaces. Then the composition is a nuclear operator. See also References A. Grothendieck (1955), Produits tensoriels topologiques et espace nucléaires,Mem. Am. Math.Soc. 16. A. Grothendieck (1956), La theorie de Fredholm, Bull. Soc. Math. France, 84:319–384. A. Hinrichs and A. Pietsch (2010), p-nuclear operators in the sense of Grothendieck, Mathematische Nachrichen 283: 232–261. . Operator theory Topological tensor products Linear operators
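The displayed formulas in the entry above did not survive the plain-text extraction. A hedged restoration of the standard statements it refers to (notation ours): a compact operator T on a Hilbert space can be written as

    T = \sum_n \rho_n \langle f_n, \cdot \rangle\, g_n, \qquad \rho_n \ge 0,

with (not necessarily complete) orthonormal sets (f_n), (g_n) and singular values \rho_n; it is nuclear (trace class) when \sum_n \rho_n < \infty, in which case the trace with respect to an orthonormal basis (e_n) is

    \operatorname{Tr} T = \sum_n \langle e_n, T e_n \rangle.

For Banach spaces, an operator T : X \to Y is nuclear of order q when it admits a representation

    T(x) = \sum_n \rho_n\, f_n^*(x)\, y_n, \qquad \|f_n^*\| \le 1,\ \|y_n\| \le 1,\ \sum_n |\rho_n|^q < \infty,

with the series converging in the operator norm; order 1 gives the nuclear operators and order 2 the Hilbert–Schmidt operators.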
Nuclear operators between Banach spaces
[ "Mathematics", "Engineering" ]
884
[ "Functions and mappings", "Tensors", "Mathematical objects", "Linear operators", "Mathematical relations", "Topological tensor products" ]
2,702,780
https://en.wikipedia.org/wiki/Fredholm%20kernel
In mathematics, a Fredholm kernel is a certain type of a kernel on a Banach space, associated with nuclear operators on the Banach space. They are an abstraction of the idea of the Fredholm integral equation and the Fredholm operator, and are one of the objects of study in Fredholm theory. Fredholm kernels are named in honour of Erik Ivar Fredholm. Much of the abstract theory of Fredholm kernels was developed by Alexander Grothendieck and published in 1955. Definition Let B be an arbitrary Banach space, and let B* be its dual, that is, the space of bounded linear functionals on B. The tensor product has a completion under the norm where the infimum is taken over all finite representations The completion, under this norm, is often denoted as and is called the projective topological tensor product. The elements of this space are called Fredholm kernels. Properties Every Fredholm kernel has a representation in the form with and such that and Associated with each such kernel is a linear operator which has the canonical representation Associated with every Fredholm kernel is a trace, defined as p-summable kernels A Fredholm kernel is said to be p-summable if A Fredholm kernel is said to be of order q if q is the infimum of all for all p for which it is p-summable. Nuclear operators on Banach spaces An operator : is said to be a nuclear operator if there exists an ∈ such that = . Such an operator is said to be -summable and of order if is. In general, there may be more than one associated with such a nuclear operator, and so the trace is not uniquely defined. However, if the order ≤ 2/3, then there is a unique trace, as given by a theorem of Grothendieck. Grothendieck's theorem If is an operator of order then a trace may be defined, with where are the eigenvalues of . Furthermore, the Fredholm determinant is an entire function of z. The formula holds as well. Finally, if is parameterized by some complex-valued parameter w, that is, , and the parameterization is holomorphic on some domain, then is holomorphic on the same domain. Examples An important example is the Banach space of holomorphic functions over a domain . In this space, every nuclear operator is of order zero, and is thus of trace-class. Nuclear spaces The idea of a nuclear operator can be adapted to Fréchet spaces. A nuclear space is a Fréchet space where every bounded map of the space to an arbitrary Banach space is nuclear. References Fredholm theory Banach spaces Topology of function spaces Topological tensor products Linear operators
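As above, the displayed mathematics was stripped in extraction. A hedged restoration of the standard formulas (notation ours): the projective norm on the tensor product B^* \otimes B is

    \|A\| = \inf \sum_i \|x_i^*\| \, \|y_i\|,

the infimum running over all finite representations A = \sum_i x_i^* \otimes y_i. A Fredholm kernel A \in B^* \hat{\otimes} B admits a representation

    A = \sum_n \lambda_n\, x_n^* \otimes y_n, \qquad \|x_n^*\| \le 1,\ \|y_n\| \le 1,\ \sum_n |\lambda_n| < \infty,

is p-summable when \sum_n |\lambda_n|^p < \infty, and has associated operator and trace

    \mathcal{L}_A(x) = \sum_n \lambda_n\, x_n^*(x)\, y_n, \qquad \operatorname{tr} A = \sum_n \lambda_n\, x_n^*(y_n).

For a nuclear operator of order q \le 2/3, Grothendieck's theorem gives \operatorname{tr} \mathcal{L}_A = \sum_i \mu_i and \det(1 - z \mathcal{L}_A) = \prod_i (1 - z \mu_i), where the \mu_i are the eigenvalues counted with multiplicity.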
Fredholm kernel
[ "Mathematics", "Engineering" ]
565
[ "Functions and mappings", "Tensors", "Mathematical objects", "Linear operators", "Mathematical relations", "Topological tensor products" ]
2,703,676
https://en.wikipedia.org/wiki/Loop%20integral
In quantum field theory and statistical mechanics, loop integrals are the integrals which appear when evaluating the Feynman diagrams with one or more loops by integrating over the internal momenta. These integrals are used to determine counterterms, which in turn allow evaluation of the beta function, which encodes the dependence of coupling for an interaction on an energy scale . One-loop integral Generic formula A generic one-loop integral, for example those appearing in one-loop renormalization of QED or QCD may be written as a linear combination of terms in the form where the are 4-momenta which are linear combinations of the external momenta, and the are masses of interacting particles. This expression uses Euclidean signature. In Lorentzian signature the denominator would instead be a product of expressions of the form . Using Feynman parametrization, this can be rewritten as a linear combination of integrals of the form where the 4-vector and are functions of the and the Feynman parameters. This integral is also integrated over the domain of the Feynman parameters. The integral is an isotropic tensor and so can be written as an isotropic tensor without dependence (but possibly dependent on the dimension ), multiplied by the integral Note that if were odd, then the integral vanishes, so we can define . Regularizing the integral Cutoff regularization In Wilsonian renormalization, the integral is made finite by specifying a cutoff scale . The integral to be evaluated is then where is shorthand for integration over the domain . The expression is finite, but in general as , the expression diverges. Dimensional regularization The integral without a momentum cutoff may be evaluated as where is the Beta function. For calculations in the renormalization of QED or QCD, takes values and . For loop integrals in QFT, actually has a pole for relevant values of and . For example in scalar theory in 4 dimensions, the loop integral in the calculation of one-loop renormalization of the interaction vertex has . We use the 'trick' of dimensional regularization, analytically continuing to with a small parameter. For calculation of counterterms, the loop integral should be expressed as a Laurent series in . To do this, it is necessary to use the Laurent expansion of the Gamma function, where is the Euler–Mascheroni constant. In practice the loop integral generally diverges as . For full evaluation of the Feynman diagram, there may be algebraic factors which must be evaluated. For example in QED, the tensor indices of the integral may be contracted with Gamma matrices, and identities involving these are needed to evaluate the integral. In QCD, there may be additional Lie algebra factors, such as the quadratic Casimir of the adjoint representation as well as of any representations that matter (scalar or spinor fields) in the theory transform under. Examples Scalar field theory φ4 theory The starting point is the action for theory in is Where . The domain is purposefully left ambiguous, as it varies depending on regularisation scheme. The Euclidean signature propagator in momentum space is The one-loop contribution to the two-point correlator (or rather, to the momentum space two-point correlator or Fourier transform of the two-point correlator) comes from a single Feynman diagram and is This is an example of a loop integral. If and the domain of integration is , this integral diverges. This is typical of the puzzle of divergences which plagued quantum field theory historically. 
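For reference, the generic evaluation quoted in the dimensional-regularization discussion above (the formula involving the Beta function, lost in the plain-text extraction) takes, in Euclidean signature and standard textbook conventions, the form

    \int \frac{d^d k}{(2\pi)^d}\, \frac{(k^2)^a}{(k^2 + \Delta)^b} = \frac{1}{(4\pi)^{d/2}}\, \frac{\Gamma\!\left(a + \tfrac{d}{2}\right)\, \Gamma\!\left(b - a - \tfrac{d}{2}\right)}{\Gamma\!\left(\tfrac{d}{2}\right)\, \Gamma(b)}\, \Delta^{\,a - b + d/2},

a hedged restoration rather than the article's exact wording. The poles of the Gamma function at non-positive integer arguments are what become the 1/\varepsilon poles after the continuation d \to 4 - \varepsilon, via the Laurent expansion \Gamma(\varepsilon) = \tfrac{1}{\varepsilon} - \gamma + O(\varepsilon), with \gamma the Euler–Mascheroni constant.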
To obtain finite results, we choose a regularization scheme. For illustration, we give two schemes. Cutoff regularization: fix . The regularized loop integral is the integral over the domain and it is typical to denote this integral by This integral is finite and in this case can be evaluated. Dimensional regularization: we integrate over all of , but instead of considering to be a positive integer, we analytically continue to , where is small. By the computation above, we showed that the integral can be written in terms of expressions which have a well-defined analytic continuation from integers to functions on : specifically the gamma function has an analytic continuation and taking powers, , is an operation which can be analytically continued. See also Regularization (physics) Renormalization References Further reading Vladimir A. Smirnov: "Evaluating Feynman Integrals", Springer,ISBN 978-3-540239338 (2004). Vladimir A. Smirnov: "Feynman Integral Calculus", Springer, ISBN 978-3-540306108 (2006). Vladimir A. Smirnov: "Analytic Tools for Feynman Integrals", Springer, ISBN 978-3642348853 (2013). Johannes Blümlein and Carsten Schneider (Eds.): "Anti-Differentiation and the Calculation of Feynman Amplitudes", Springer, ISBN 978-3-030-80218-9 (2021). Stefan Weinzierl: "Feynman Integrals: A Comprehensive Treatment for Students and Researchers", Springer, ISBN 978-3-030-99560-7 (2023). Quantum field theory Statistical mechanics Renormalization group
Loop integral
[ "Physics" ]
1,079
[ "Quantum field theory", "Physical phenomena", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Statistical mechanics" ]
2,703,743
https://en.wikipedia.org/wiki/Feynman%20parametrization
Feynman parametrization is a technique for evaluating loop integrals which arise from Feynman diagrams with one or more loops. However, it is sometimes useful in integration in areas of pure mathematics as well. Formulas Richard Feynman observed that: which is valid for any complex numbers A and B as long as 0 is not contained in the line segment connecting A and B. The formula helps to evaluate integrals like: If A(p) and B(p) are linear functions of p, then the last integral can be evaluated using substitution. More generally, using the Dirac delta function : This formula is valid for any complex numbers A1,...,An as long as 0 is not contained in their convex hull. Even more generally, provided that for all : where the Gamma function was used. Derivation By using the substitution , we have , and , from which we get the desired result In more general cases, derivations can be done very efficiently using the Schwinger parametrization. For example, in order to derive the Feynman parametrized form of , we first reexpress all the factors in the denominator in their Schwinger parametrized form: and rewrite, Then we perform the following change of integration variables, to obtain, where denotes integration over the region with . The next step is to perform the integration. where we have defined Substituting this result, we get to the penultimate form, and, after introducing an extra integral, we arrive at the final form of the Feynman parametrization, namely, Similarly, in order to derive the Feynman parametrization form of the most general case, one could begin with the suitable different Schwinger parametrization form of factors in the denominator, namely, and then proceed exactly along the lines of previous case. Alternative form An alternative form of the parametrization that is sometimes useful is This form can be derived using the change of variables . We can use the product rule to show that , then More generally we have where is the gamma function. This form can be useful when combining a linear denominator with a quadratic denominator , such as in heavy quark effective theory (HQET). Symmetric form A symmetric form of the parametrization is occasionally used, where the integral is instead performed on the interval , leading to: References further books Michael E. Peskin and Daniel V. Schroeder , An Introduction To Quantum Field Theory, Addison-Wesley, Reading, 1995. Silvan S. Schweber, Feynman and the visualization of space-time processes, Rev. Mod. Phys, 58, p.449 ,1986 doi:10.1103/RevModPhys.58.449 Vladimir A. Smirnov: Evaluating Feynman Integrals, Springer, ISBN 978-3-54023933-8 (Dec.,2004). Vladimir A. Smirnov: Feynman Integral Calculus, Springer, ISBN 978-3-54030610-8 (Aug.,2006). Vladimir A. Smirnov: Analytic Tools for Feynman Integrals, Springer, ISBN 978-3-64234885-3 (Jan.,2013). Johannes Blümlein and Carsten Schneider (Eds.): Anti-Differentiation and the Calculation of Feynman Amplitudes, Springer, ISBN 978-3-030-80218-9 (2021). Stefan Weinzierl: Feynman Integrals: A Comprehensive Treatment for Students and Researchers, Springer, ISBN 978-3-030-99560-7 (Jun., 2023). Quantum field theory Richard Feynman
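The displayed identities in the entry above were lost in extraction. The standard formulas they describe are (hedged restoration, usual notation):

    \frac{1}{AB} = \int_0^1 \frac{dx}{\left[x A + (1 - x) B\right]^{2}},

for n factors, using the Dirac delta function,

    \frac{1}{A_1 A_2 \cdots A_n} = (n-1)! \int_0^1 dx_1 \cdots dx_n\, \frac{\delta\!\left(1 - \sum_{i=1}^{n} x_i\right)}{\left(\sum_{i=1}^{n} x_i A_i\right)^{n}},

and, for arbitrary powers m_i (with the Gamma function),

    \frac{1}{\prod_{i} A_i^{m_i}} = \frac{\Gamma\!\left(\sum_i m_i\right)}{\prod_i \Gamma(m_i)} \int_0^1 \prod_i dx_i\, x_i^{m_i - 1}\, \frac{\delta\!\left(1 - \sum_i x_i\right)}{\left(\sum_i x_i A_i\right)^{\sum_i m_i}}.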
Feynman parametrization
[ "Physics" ]
769
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
2,703,886
https://en.wikipedia.org/wiki/B%C3%B9i%20Thanh%20Li%C3%AAm
Bùi Thanh Liêm (June 30, 1949 - September 26, 1981) was a Vietnamese cosmonaut. Born in Hanoi, Vietnam, he was a pilot for the Vietnam People's Air Force who rose to the rank of captain and flew many combat missions during the Vietnam War. In 1978, Liêm graduated from Gagarin Military Air Academy at Monino, Moscow Oblast, Russian Soviet Federative Socialist Republic, Soviet Union. Liêm was selected as backup of Phạm Tuân, who was the first Vietnamese and the first Asian in space on the Soyuz 37 mission. In 1981, he was killed in a MiG-21 aeroplane crash during a training flight over the Gulf of Tonkin, off the coast of northern Vietnam. External links Spacefacts biography of Bui Thanh Liem 1949 births 1981 deaths Aviators killed in aviation accidents or incidents North Vietnamese military personnel of the Vietnam War People from Hanoi Space program fatalities Vietnamese aviators Vietnamese astronauts Vietnamese expatriates in the Soviet Union Victims of aviation accidents or incidents in 1981
Bùi Thanh Liêm
[ "Engineering" ]
212
[ "Space program fatalities", "Space programs" ]
2,704,038
https://en.wikipedia.org/wiki/%28%E2%88%921%29F
In a quantum field theory with fermions, (−1)F is a unitary, Hermitian, involutive operator where F is the fermion number operator. For the example of particles in the Standard Model, it is equal to the sum of the lepton number plus the baryon number, F = B + L. The action of this operator is to multiply bosonic states by 1 and fermionic states by −1. This is always a global internal symmetry of any quantum field theory with fermions and corresponds to a rotation by 2π. This splits the Hilbert space into two superselection sectors. Bosonic operators commute with (−1)F whereas fermionic operators anticommute with it. This operator really shows its utility in supersymmetric theories. Its trace is the spectral asymmetry of the fermion spectrum, and can be understood physically as the Casimir effect. See also Parity (physics) Primon gas Möbius function References Further reading Quantum field theory Supersymmetric quantum field theory Fermions
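A compact restatement of the action just described (our notation, not the article's): on a bosonic state |b\rangle and a fermionic state |f\rangle,

    (-1)^F |b\rangle = +|b\rangle, \qquad (-1)^F |f\rangle = -|f\rangle,

while for bosonic and fermionic operators

    (-1)^F \mathcal{O}_B = \mathcal{O}_B\, (-1)^F, \qquad (-1)^F \mathcal{O}_F = -\,\mathcal{O}_F\, (-1)^F,

and \left[(-1)^F\right]^2 = 1 together with \left[(-1)^F\right]^\dagger = (-1)^F expresses the involutive, Hermitian and unitary properties stated above.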
(−1)F
[ "Physics", "Materials_science" ]
229
[ "Symmetry", "Quantum field theory", "Supersymmetric quantum field theory", "Fermions", "Quantum mechanics", "Subatomic particles", "Condensed matter physics", "Supersymmetry", "Matter" ]
2,704,065
https://en.wikipedia.org/wiki/Witten%20index
In quantum field theory and statistical mechanics, the Witten index at the inverse temperature β is defined as a modification of the standard partition function: \Delta = \operatorname{Tr}\left[(-1)^F e^{-\beta H}\right]. Note the (−1)F operator, where F is the fermion number operator; this is what makes it different from the ordinary partition function. It is sometimes referred to as the spectral asymmetry. In a supersymmetric theory, each nonzero energy level contains an equal number of bosonic and fermionic states. Because of this, the Witten index is independent of the temperature and gives the number of zero-energy bosonic vacuum states minus the number of zero-energy fermionic vacuum states, \Delta = n_B^{E=0} - n_F^{E=0}. In particular, if supersymmetry is spontaneously broken then there are no zero-energy ground states, and so the Witten index is equal to zero. The Witten index of the supersymmetric sigma model on a manifold is given by the manifold's Euler characteristic. It is an example of a quasi-topological quantity, which is a quantity that depends only on F-terms and not on D-terms in the Lagrangian. A more refined invariant in 2-dimensional theories, constructed using only the right-moving part of the fermion number operator together with a 2-parameter family of variations, is the elliptic genus. See also Supersymmetric theory of stochastic dynamics References Edward Witten, Constraints on Supersymmetry Breaking, Nucl. Phys. B202 (1982) 253–316 Supersymmetric quantum field theory Quantum field theory Statistical mechanics
Witten index
[ "Physics" ]
321
[ "Statistical mechanics stubs", "Quantum field theory", "Supersymmetric quantum field theory", "Quantum physics stubs", "Quantum mechanics", "Statistical mechanics", "Supersymmetry", "Symmetry" ]
4,960,605
https://en.wikipedia.org/wiki/Coarse%20structure
In the mathematical fields of geometry and topology, a coarse structure on a set X is a collection of subsets of the cartesian product X × X with certain properties which allow the large-scale structure of metric spaces and topological spaces to be defined. The concern of traditional geometry and topology is with the small-scale structure of the space: properties such as the continuity of a function depend on whether the inverse images of small open sets, or neighborhoods, are themselves open. Large-scale properties of a space—such as boundedness, or the degrees of freedom of the space—do not depend on such features. Coarse geometry and coarse topology provide tools for measuring the large-scale properties of a space, and just as a metric or a topology contains information on the small-scale structure of a space, a coarse structure contains information on its large-scale properties. Properly, a coarse structure is not the large-scale analog of a topological structure, but of a uniform structure. Definition A on a set is a collection of subsets of (therefore falling under the more general categorization of binary relations on ) called , and so that possesses the identity relation, is closed under taking subsets, inverses, and finite unions, and is closed under composition of relations. Explicitly: Identity/diagonal: The diagonal is a member of —the identity relation. Closed under taking subsets: If and then Closed under taking inverses: If then the inverse (or transpose) is a member of —the inverse relation. Closed under taking unions: If then their union is a member of Closed under composition: If then their product is a member of —the composition of relations. A set endowed with a coarse structure is a . For a subset of the set is defined as We define the of by to be the set also denoted The symbol denotes the set These are forms of projections. A subset of is said to be a if is a controlled set. Intuition The controlled sets are "small" sets, or "negligible sets": a set such that is controlled is negligible, while a function such that its graph is controlled is "close" to the identity. In the bounded coarse structure, these sets are the bounded sets, and the functions are the ones that are a finite distance from the identity in the uniform metric. Coarse maps Given a set and a coarse structure we say that the maps and are if is a controlled set. For coarse structures and we say that is a if for each bounded set of the set is bounded in and for each controlled set of the set is controlled in and are said to be if there exists coarse maps and such that is close to and is close to Examples The on a metric space is the collection of all subsets of such that is finite. With this structure, the integer lattice is coarsely equivalent to -dimensional Euclidean space. A space where is controlled is called a . Such a space is coarsely equivalent to a point. A metric space with the bounded coarse structure is bounded (as a coarse space) if and only if it is bounded (as a metric space). The trivial coarse structure only consists of the diagonal and its subsets. In this structure, a map is a coarse equivalence if and only if it is a bijection (of sets). The on a metric space is the collection of all subsets of such that for all there is a compact set of such that for all Alternatively, the collection of all subsets of such that is compact. The on a set consists of the diagonal together with subsets of which contain only a finite number of points off the diagonal. 
If is a topological space then the on consists of all subsets of meaning all subsets such that and are relatively compact whenever is relatively compact. See also References John Roe, Lectures in Coarse Geometry, University Lecture Series Vol. 31, American Mathematical Society: Providence, Rhode Island, 2003. Corrections to Lectures in Coarse Geometry General topology Metric geometry Topology
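Restoring, in symbols, the notation that was dropped from the definition above (names ours): for E, E_1, E_2 \subseteq X \times X and K \subseteq X,

    E^{-1} = \{(y, x) : (x, y) \in E\},
    E_1 \circ E_2 = \{(x, z) : \exists\, y \in X \text{ with } (x, y) \in E_1 \text{ and } (y, z) \in E_2\},
    E[K] = \{x \in X : (x, k) \in E \text{ for some } k \in K\},

and the bounded coarse structure on a metric space (X, d) consists of the sets E with \sup_{(x, y) \in E} d(x, y) < \infty, so that a subset B \subseteq X is bounded exactly when B \times B is controlled.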
Coarse structure
[ "Physics", "Mathematics" ]
804
[ "General topology", "Topology", "Space", "Geometry", "Spacetime" ]
4,960,751
https://en.wikipedia.org/wiki/Rance%20Tidal%20Power%20Station
The Rance Tidal Power Station is a tidal power station located on the estuary of the Rance River in Brittany, France. Opened in 1966 as the world's first tidal power station, the 240-megawatt (MW) facility was the largest such power station in the world by installed capacity for 45 years until the 254-MW South Korean Sihwa Lake Tidal Power Station surpassed it in 2011. Characteristics The power station has 24 turbines. These reach total peak output at 240 MW, and produce an annual output of approximately 500 GWh (2023: 506 GWh; 491 GWh in 2009, 523 GWh in 2010); thus the average output is approximately 57 MW, and the capacity factor is approximately 24%. The turbines are "bulb" Kaplan turbines, of nominal power 10 MW; their diameter is 5.35 m, each has 4 blades, their nominal rotation speed is 93.75 rpm and their maximal speed 240 rpm. Half of the turbines were built from martensitic stainless steel, the other half from aluminium bronze. The plant is equipped with cathodic protection against corrosion. It supplies 0.12% of the power demand of France. The power density is of the order of 2.6 kW/m2. The cost of electricity production is estimated at 0.12€/KwH . The barrage is long, from Brebis point in the west to Briantais point in the east. The power plant portion of the dam is long and the tidal basin measures . History An early attempt to build a tidal power plant was made at Aber Wrac'h in the Finistère in 1925, but due to insufficient finance, it was abandoned in 1930. Plans for this plant served as the draft for follow-on work. Use of tidal energy is not an entirely new concept, since tidal mills have long existed in areas exposed to tides, particularly along the Rance. The idea of constructing a tidal power plant on the Rance dates to Gerard Boisnoer in 1921. The site was attractive because of the wide average-range between low and high tide levels, with a maximum perigean spring tide range of . The first studies which envisaged a tidal plant on the Rance were done by the Society for the Study of Utilization of the Tides in 1943. Nevertheless, work did not actually commence until 1961. Albert Caquot, the visionary engineer, was instrumental in the construction of the dam, designing an enclosure in order to protect the construction site from the ocean tides and the strong streams. Construction necessitated draining the area where the plant was to be built, which required construction of two dams which took two years. Construction of the plant commenced on 20 July 1963, while the Rance was entirely blocked by the two dams. Construction took three years and was completed in 1966. Charles de Gaulle, then President of France, inaugurated the plant on 26 November of the same year. Inauguration of the road crossing the plant took place on 1 July 1967, and connection of the plant to the French National Power Grid was carried out on 4 December 1967. In total, the plant cost ₣620 million (approximately €94.5 million). It took almost 20 years for the La Rance to pay for itself. Assessments In spite of the high development cost of the project, the costs have now been recovered, and electricity production costs are lower than that of nuclear power generation (1.8 ¢/kWh versus 2.5 ¢/kWh for nuclear). However, the capacity factor of the plant is 28%, lower than 85–90% for nuclear power. Environmental impact The barrage has caused progressive silting of the Rance ecosystem. Sand-eels and plaice have disappeared, though sea bass and cuttlefish have returned to the river. 
By definition, tides still flow in the estuary and the operator, EDF, endeavours to adjust their level to minimize the biological impact. Tourist attraction A tourist facility at the dam is open to visitors. The facility attracted approximately 40,000 visitors in 2011. A lock for navigation at the west end of the dam allows the passage of 1,600-tonne vessels between the English Channel and the Rance. Departmental road 168 crosses the dam and allows vehicles to travel between Dinard and Saint-Malo. There is a drawbridge where the road crosses the lock which is raised to allow larger vessels to pass. The Rance estuary is the first part of the inland waterway from the English Channel to the Bay of Biscay via the Canal d'Ille-et-Rance and the river Vilaine. See also List of tidal power stations List of largest power stations in the world Renewable energy in France References External links La Houille Blanche, n. 2-3, April 1973 La Houille Blanche, n. 3, April 1997 EDF website Energy infrastructure completed in 1963 Tidal power stations in France Coastal construction Buildings and structures in Ille-et-Vilaine Saint-Malo Tidal barrages Électricité de France Tourist attractions in Ille-et-Vilaine Articles containing video clips 1963 establishments in France 20th-century architecture in France
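As a quick consistency check on the output figures quoted in the Characteristics section above (illustrative back-of-the-envelope arithmetic, not taken from the source): an annual production of roughly 500 GWh spread over the 8,760 hours of a year corresponds to an average power of about 500,000 MWh / 8,760 h ≈ 57 MW, and 57 MW / 240 MW ≈ 0.24, i.e. the stated capacity factor of roughly 24% relative to the 240 MW installed capacity.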
Rance Tidal Power Station
[ "Engineering" ]
1,050
[ "Construction", "Coastal construction" ]
4,961,315
https://en.wikipedia.org/wiki/CentiMark
CentiMark Corporation (est. 1968) is a national roofing contractor company headquartered in Canonsburg, Pennsylvania in the United States. The company also has a flooring division, QuestMark, and a Canadian Roofing subsidiary, CentiMark LTD. History Edward B. Dunlap started D&B Laboratories in 1967 as a part-time industrial cleaning products business in the basement of his home. In 1968, with $1,000 seed money from D&B Laboratories and one associate, Dunlap started Northern Chemical Company. In response to customer needs, Northern Chemical Company became involved in roofing and flooring maintenance. In the 1970s, the oil crisis negatively impacted the built-up roofing market that was dependent on crude oil for asphalt. The quality of asphalt decreased as oil companies were pressed to extract as much oil from crude as possible. The price of asphalt increased, thus resulting in higher roofing prices. Concerned about the quality of bituminous materials used in built-up tar and asphalt roofs, Northern Chemical Company began marketing and installing single-ply rubber (EPDM) roof systems. The newly developed EPDM polymer was both durable and waterproof. It was a cost-effective solution to the increasing costs associated with built-up roofing. In the late 1970s and early 1980s, EPDM was one of the fastest growing roofing products and accounted for almost 40% of new and replacement roofs on commercial and industrial properties. Growth The company grew through geographical expansion, diversification of product lines and an aggressive National Accounts Program. In 1987, the corporate name was officially changed to CentiMark Corporation. "Centi" refers to the 1987 goal of achieving $100 million in revenue. "Mark" recognizes the company's unique contributions to the roofing industry including: a National Account program in roofing and flooring, Single Source warranties on workmanship and materials and nationwide geographical expansion. In January 2003, Timothy M. Dunlap was appointed president and chief operating officer of CentiMark. Edward B. Dunlap, Founder of CentiMark, continues to serve as chairman and chief executive officer. Services According to the company website, CentiMark offers the following systems and services: Roof repairs Green roofing Metal roofing Asset Management Commercial roofing Emergency Response Preventative Maintenance Additionally, they offer Single Source Warranties on workmanship and materials. QuestMark In 2006, QuestMark was established to expand CentiMark's presence in the flooring industry. DiamondQuest, a new technology for concrete grinding and polishing, catered to a fast-growing market segment in the flooring industry. According to the company website, QuestMark operates out of numerous offices nationwide. Locations The CentiMark Corporation headquarters is located in Canonsburg, Pennsylvania, just outside Pittsburgh. CentiMark has over 80 offices and 3,500 associates across North America. They service all major cities. Recognition In 1991, CentiMark became the first and only roofing contractor to be rated 4A1 by Dun & Bradstreet based on a strong credit appraisal and net worth. By 2000, the rating increased to 5A1, the highest level by Dun & Bradstreet. CentiMark continues to be peerless in the commercial roofing industry regarding the 5A1 Dun & Bradstreet rating. In 2014, CentiMark was named the #1 Roofing Contractor in North America with revenue of $484.7 million, according to Roofing Contractor magazine, August 2014. 
Since 1991, in various roofing magazine rankings, CentiMark has consistently been the #1 or #2 roofing contractor in North America. References External links Official Website Roof Replacement Three Brothers Roofing Roofs Companies based in Pittsburgh Construction and civil engineering companies established in 1968 1968 establishments in Pennsylvania
CentiMark
[ "Technology", "Engineering" ]
769
[ "Structural system", "Structural engineering", "Roofs" ]
4,961,951
https://en.wikipedia.org/wiki/Transcytosis
Transcytosis (also known as cytopempsis) is a type of transcellular transport in which various macromolecules are transported across the interior of a cell. Macromolecules are captured in vesicles on one side of the cell, drawn across the cell, and ejected on the other side. Examples of macromolecules transported include IgA, transferrin, and insulin. While transcytosis is most commonly observed in epithelial cells, the process is also present elsewhere. Blood capillaries are a well-known site for transcytosis, though it occurs in other cells, including neurons, osteoclasts and M cells of the intestine. Regulation The regulation of transcytosis varies greatly due to the many different tissues in which this process is observed. Various tissue-specific mechanisms of transcytosis have been identified. Brefeldin A, a commonly used inhibitor of ER-to-Golgi apparatus transport, has been shown to inhibit transcytosis in dog kidney cells, which provided the first clues as to the nature of transcytosis regulation. Transcytosis in dog kidney cells has also been shown be regulated at the apical membrane by Rab17, as well as Rab11a and Rab25. Further work on dog kidney cells has shown that a signaling cascade involving the phosphorylation of EGFR by Yes leading to the activation of Rab11FIP5 by MAPK1 upregulates transcytosis. Transcytosis has been shown to be inhibited by the combination of progesterone and estradiol followed by activation mediated by prolactin in the rabbit mammary gland during pregnancy. In the thyroid, follicular cell transcytosis is regulated positively by TSH . The phosphorylation of caveolin 1 induced by hydrogen peroxide has been shown to be critical to the activation of transcytosis in pulmonary vascular tissue. It can therefore be concluded that the regulation of transcytosis is a complex process that varies between tissues. Role in pathogenesis Due to the function of transcytosis as a process that transports macromolecules across cells, it can be a convenient mechanism by which pathogens can invade a tissue. Transcytosis has been shown to be critical to the entry of Cronobacter sakazakii across the intestinal epithelium as well as the blood–brain barrier. Listeria monocytogenes has been shown to enter the intestinal lumen via transcytosis across goblet cells. Shiga toxin secreted by enterohemorrhagic E. coli has been shown to be transcytosed into the intestinal lumen. From these examples, it can be said that transcytosis is vital to the process of pathogenesis for a variety of infectious agents. Clinical applications Pharmaceutical companies, such as Lundbeck, are currently exploring the use of transcytosis as a mechanism for transporting therapeutic drugs across the human blood–brain barrier (BBB). Exploiting the body's own transport mechanism can help to overcome the high selectivity of the BBB, which typically blocks the uptake of most therapeutic antibodies into the brain and central nervous system (CNS). The pharmaceutical company Genentech, after having synthesized a therapeutic antibody that effectively inhibited BACE1 enzymatic function, experienced problems transferring adequate, efficient levels of the antibody within the brain. BACE1 is the enzyme which processes amyloid precursor proteins into amyloid-β peptides, including the species that aggregate to form amyloid plaques associated with Alzheimer's disease. 
Molecules are transported across an epithelial or endothelial barrier by one of two routes: 1) a transcellular route through the intracellular compartment of the cell, or 2) a paracellular route through the extracellular space between adjacent cells. The transcellular route is also called transcytosis. Transcytosis can be receptor-mediated and consists of three steps: 1) receptor-mediated endocytosis of the molecule on one side of the cell, e.g. the luminal side; 2) movement of the molecule through the intracellular compartment typically within the endosomal system; and 3) exocytosis of the molecule to the extracellular space on the other side of the cell, e.g. the abluminal side. Transcytosis may be either unidirectional or bidirectional. Unidirectional transcytosis may occur selectively in the luminal to abluminal direction, or in the reverse direction, in the abluminal to luminal direction. Transcytosis is prominent in brain microvascular peptide and protein transport, because the brain microvascular endothelium, which forms the blood-brain barrier (BBB) in vivo, expresses unique, epithelial-like, high-resistance tight junctions. The brain endothelial tight junctions virtually eliminate the paracellular pathway of solute transport across the microvascular endothelial wall in brain. In contrast, the endothelial barrier in peripheral organs does not express tight junctions, and solute movement through the paracellular pathway is prominent at the endothelial barrier in organs other than the brain or spinal cord. Receptor-mediated transcytosis, or RMT, across the BBB is a potential pathway for drug delivery to the brain, particularly for biologic drugs such as recombinant proteins. The non-transportable drug, or therapeutic protein, is genetically fused to a transporter protein. The transporter protein may be an endogenous peptide, or peptidomimetic monoclonal antibody, which undergoes RMT across the BBB via transport on brain endothelial receptors such as the insulin receptor or transferrin receptor. The transporter protein acts as a molecular Trojan horse to ferry into brain the therapeutic protein that is genetically fused to the receptor-specific Trojan horse protein. Monoclonal antibody Trojan horses that target the BBB insulin or transferrin receptor have been in drug development for over 10 years at ArmaGen, Inc., a biotechnology company in Los Angeles. ArmaGen has developed genetically engineered antibodies against both the insulin and transferrin receptors, and has fused to these antibodies different therapeutic proteins, including lysosomal enzymes, therapeutic antibodies, decoy receptors, and neurotrophins. These therapeutic proteins alone do not cross the BBB, but following genetic fusion to the Trojan horse antibody, the therapeutic protein penetrates the BBB at a rate comparable to small molecules. In 2015, ArmaGen will be the first to enter human clinical trials with the BBB Trojan horse fusion proteins that delivery protein drugs to the brain via the transcytosis pathway. The human diseases initially targeted by ArmaGen are lysosomal storage diseases that adversely affect the brain. Inherited diseases create a condition where a specific lysosomal enzyme is not produced, leading to serious brain conditions including mental retardation, behavioral problems, and then dementia. Although the missing enzyme can be manufactured by drug companies, the enzyme drug alone does not treat the brain, because the enzyme alone does not cross the BBB. 
ArmaGen has re-engineered the missing lysosomal enzyme as a Trojan horse-enzyme fusion protein that crosses the BBB. The first clinical trials of the new Trojan horse fusion protein technology will treat the brain in lysosomal storage disorders, including one of the mucopolysaccharidosis type I diseases, (MPSIH), also called Hurler syndrome, and MPS Type II, also called Hunter syndrome. Researchers at Genentech proposed the creation of a bispecific antibody that could bind the BBB membrane, induce receptor-mediated transcytosis, and release itself on the other side into the brain and CNS. They utilized a mouse bispecific antibody with two active sites performing different functions. One arm had a low-affinity anti-transferrin receptor binding site that induces transcytosis. A high-affinity binding site would result in the antibody not being able to release from the BBB membrane after transcytosis. This way, the amount of transported antibody is based on the concentration of antibody on either side of the barrier. The other arm had the previously developed high-affinity anti-BACE1 binding site that would inhibit BACE1 function and prevent amyloid plaque formation. Genentech was able to demonstrate in mouse models that the new bispecific antibody was able to reach therapeutic levels in the brain. Genentech's method of disguising and transporting the therapeutic antibody by attaching it to a receptor-mediated transcytosis activator has been referred to as the "Trojan Horse" method. References External links Macromolecules Can Be Transferred Across Epithelial Cell Sheets by Transcytosis Transcytosis of IgA Transcytosis of bacteria Cellular processes
Transcytosis
[ "Biology" ]
1,831
[ "Cellular processes" ]
4,962,816
https://en.wikipedia.org/wiki/Picul
A picul , dan or tam, is a traditional Asian unit of weight, defined as "as much as a man can carry on a shoulder-pole". Historically, it was defined as equivalent to 100 or 120 catties, depending on time and region. The picul is most commonly used in southern China and Maritime Southeast Asia. History The unit originated in China during the Qin dynasty (221–206 BC), where it was known as the shi (石 "stone"). During the Han dynasty, one stone was equal to 120 catties. Government officials were paid in grain, counted in stones, with top ranked ministers being paid 2000 stones. As a unit of measurement, the word shi (石) can also be pronounced dan. To avoid confusion, the character is sometimes changed to 擔 (dàn), meaning "burden" or "load". Likewise, in Cantonese the word is pronounced sek (石) or daam (擔), and in Hakka it is pronounced tam (擔). The word picul appeared as early as the mid 9th century in Javanese. In modern Malay, pikul is also a verb meaning 'to carry on the shoulder'. In the early days of Hong Kong as a British colony, the stone (石, with a Cantonese pronunciation given as shik) was used as a measurement of weight equal to 120 catties or , alongside the picul of 100 catties. It was made obsolete by subsequent overriding legislation in 1885, which included the picul but not the stone, to avoid confusion with European-origin measures that are similarly called stone. Following Spanish, Portuguese, British and most especially the Dutch colonial maritime trade, the term picul was both a convenient unit, and a lingua franca unit that was widely understood and employed by other Austronesians (in modern Malaysia and the Philippines) and their centuries-old trading relations with Indians, Chinese and Arabs. It remained a convenient reference unit for many commercial trade journals in the 19th century. One example is Hunts Merchant Magazine of 1859 giving detailed tables of expected prices of various commodities, such as coffee, e.g. one picul of Javanese coffee could be expected to be bought from 8 to 8.50 Spanish dollars in Batavia and Singapore. Definitions As for any traditional measurement unit, the exact definition of the picul varied historically and regionally. In imperial China and later, the unit was used for a measure equivalent to 100 catties. In 1831, the Dutch East Indies authorities acknowledged local variances in the definition of the pikul. In Hong Kong, one picul was defined in Ordinance No. 22 of 1844 as avoirdupois pounds. The modern definition is exactly 60.478982 kilograms. The measure was and remains used on occasion in Taiwan where it is defined as 60 kg. The last, a measure of rice, was 20 picul, or 1,200 kg. See also Dan (volume) References Chinese units in Hong Kong Units of mass Human-based units of measurement
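A worked conversion consistent with the Hong Kong figures quoted above (illustrative arithmetic; the pound-to-kilogram factor is the exact international definition): 1 picul = 100 catties, and the statutory 60.478982 kg corresponds to 133 1/3 avoirdupois pounds, since

    133 1/3 lb × 0.45359237 kg/lb ≈ 60.4790 kg.

One catty is correspondingly 1 1/3 lb ≈ 0.6048 kg.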
Picul
[ "Physics", "Mathematics" ]
618
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
4,964,853
https://en.wikipedia.org/wiki/Langmuir%20%28journal%29
Langmuir is a peer-reviewed scientific journal that was established in 1985 and is published by the American Chemical Society. It is the leading journal focusing on the science and application of systems and materials in which the interface dominates structure and function. Research areas covered include surface and colloid chemistry. Langmuir publishes original research articles, invited feature articles, perspectives, and editorials. The title honors Irving Langmuir, winner of the 1932 Nobel Prize for Chemistry. The founding editor-in-chief was Arthur W. Adamson. Abstracting and indexing Langmuir is indexed in Chemical Abstracts Service, Scopus, EBSCOhost, British Library, PubMed, Web of Science, and SwetsWise. References External links American Chemical Society academic journals Weekly journals Academic journals established in 1985 English-language journals Surface science 1985 establishments in the United States
Langmuir (journal)
[ "Physics", "Chemistry", "Materials_science" ]
175
[ "Condensed matter physics", "Surface science" ]
22,582,731
https://en.wikipedia.org/wiki/Lewis%20signaling%20game
In game theory, the Lewis signaling game is a type of signaling game that features perfect common interest between players. It is named for the philosopher David Lewis, who was the first to discuss this game in his Ph.D. dissertation and later book, Convention. The game The underlying game has two players, the sender and the receiver. The world can be in any of a number of states, and the sender is aware of that state. The sender has at its disposal a fixed set of signals that it can send to the receiver. The receiver can observe the signal sent, but not the state of the world, and must take some action. For each state there is a unique correct action, and both the sender and receiver prefer that the receiver take the correct action in every state. Because both the sender and receiver prefer the same outcomes as one another, this game is a game of perfect common interest. The simplest version of this game has two states, two signals, and two acts. Equilibria This game has many Nash equilibria. Of particular note are those in which the sender sends a different signal in each state and the receiver takes the appropriate action in every state; Lewis dubbed these signaling systems. But there are also other equilibria. In some, the sender sends the same signal in every state and the receiver takes the action that is best given no additional information about the state of the world (pooling equilibria); the enumeration sketched below makes both kinds explicit for the two-state case. Also, when there are more than two states, signals, and acts, there are partial pooling equilibria in which some information is conveyed but some states are pooled. References Non-cooperative games Asymmetric information
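To make the equilibrium structure concrete, the following is a minimal sketch (not from the source; the uniform prior over states, the 0/1 common payoff and all names are our own modelling assumptions) that enumerates the pure-strategy Nash equilibria of the two-state, two-signal, two-act game and flags the two signaling systems:

from itertools import product

STATES = SIGNALS = ACTS = (0, 1)

# A pure sender strategy maps states to signals; a pure receiver strategy maps
# signals to acts. With two states/signals/acts both are length-2 tuples over {0, 1}.
strategies = list(product((0, 1), repeat=2))

def payoff(sender, receiver):
    # Expected common payoff under a uniform prior: the receiver's act must equal the state.
    return sum(1 for s in STATES if receiver[sender[s]] == s) / len(STATES)

def is_nash(sender, receiver):
    # Neither player can gain by a unilateral deviation to another pure strategy.
    base = payoff(sender, receiver)
    sender_ok = all(payoff(dev, receiver) <= base for dev in strategies)
    receiver_ok = all(payoff(sender, dev) <= base for dev in strategies)
    return sender_ok and receiver_ok

for sender, receiver in product(strategies, strategies):
    if is_nash(sender, receiver):
        kind = "signaling system" if payoff(sender, receiver) == 1.0 else "pooling"
        print(sender, receiver, kind)

Running this prints the two signaling systems, in which each state gets its own signal and the receiver inverts the sender's code, alongside the pooling equilibria in which no information is transmitted.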
Lewis signaling game
[ "Physics", "Mathematics" ]
355
[ "Asymmetric information", "Game theory", "Non-cooperative games", "Asymmetry", "Symmetry" ]
22,582,958
https://en.wikipedia.org/wiki/List%20of%20gamma-ray%20bursts
The following is a list of significant gamma-ray bursts (GRBs) listed in chronological order. GRBs are named after the date on which they were detected: the first two numbers correspond to the year, the second two numbers to the month, and the last two numbers to the day. List Extremes Firsts Most distant GRB Notes Footnotes References Citations See also Lists of astronomical objects External links Jochen Greiner's afterglow table Stephen Holland's afterglow table GRBOX - Gamma-Ray Burst Online Index Official Swift GRB Table Official BATSE GRB Table
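As an illustration of the naming rule just described, a hypothetical helper (not part of the list; the century cut-off is our assumption, chosen because the earliest detections date from the late 1960s) could decode a burst designation as follows:

def grb_date(name):
    # e.g. "970228" (or "970228A") -> (1997, 2, 28): year, month, day from the YYMMDD prefix
    yy, mm, dd = int(name[0:2]), int(name[2:4]), int(name[4:6])
    year = 1900 + yy if yy >= 60 else 2000 + yy  # assumed cut-off; no catalogued GRBs predate 1967
    return year, mm, dd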
List of gamma-ray bursts
[ "Physics", "Astronomy" ]
121
[ "Physical phenomena", "Stellar phenomena", "Astronomical events", "Gamma-ray bursts" ]
22,583,255
https://en.wikipedia.org/wiki/Marcinkiewicz%E2%80%93Zygmund%20inequality
In mathematics, the Marcinkiewicz–Zygmund inequality, named after Józef Marcinkiewicz and Antoni Zygmund, gives relations between moments of a collection of independent random variables. It is a generalization of the rule for the sum of variances of independent random variables to moments of arbitrary order. It is a special case of the Burkholder–Davis–Gundy inequality in the case of discrete-time martingales. Statement of the inequality Theorem If X_i, i = 1, \ldots, n, are independent random variables such that E[X_i] = 0 and E[|X_i|^p] < +\infty, 1 \le p < +\infty, then A_p\, E\!\left[\left(\sum_{i=1}^{n} |X_i|^2\right)^{p/2}\right] \le E\!\left[\left|\sum_{i=1}^{n} X_i\right|^p\right] \le B_p\, E\!\left[\left(\sum_{i=1}^{n} |X_i|^2\right)^{p/2}\right], where A_p and B_p are positive constants, which depend only on p and not on the underlying distribution of the random variables involved. The second-order case In the case p = 2, the inequality holds with A_2 = B_2 = 1, and it reduces to the rule for the sum of variances of independent random variables with zero mean, known from elementary statistics: If E[X_i] = 0 and E[|X_i|^2] < +\infty, then \operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = E\!\left[\left|\sum_{i=1}^{n} X_i\right|^2\right] = \sum_{i=1}^{n} E\!\left[|X_i|^2\right] = \sum_{i=1}^{n} \operatorname{Var}(X_i). See also Several similar moment inequalities are known as the Khintchine inequality and Rosenthal inequalities, and there are also extensions to more general symmetric statistics of independent random variables. Notes Statistical inequalities Probabilistic inequalities Probability theorems Theorems in functional analysis
Marcinkiewicz–Zygmund inequality
[ "Mathematics" ]
240
[ "Theorems in mathematical analysis", "Mathematical theorems", "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Theorems in functional analysis", "Probabilistic inequalities", "Inequalities (mathematics)", "Mathematical problems" ]