In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. In other words, it should act as a lower bound. It is related to the concept of consistent heuristics. While all consistent heuristics are admissible, not all admissible heuristics are consistent.

Search algorithms

An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n is the current node) is:

f(n) = g(n) + h(n)

where
f(n) = the evaluation function,
g(n) = the cost from the start node to the current node,
h(n) = the estimated cost from the current node to the goal.

h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook the optimal solution to a search problem due to an overestimation in f(n).
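As an illustration of how an admissible heuristic guides A*, the following is a minimal Python sketch of best-first expansion with f(n) = g(n) + h(n). The graph, node names and heuristic values are invented for this example:

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search using f(n) = g(n) + h(n).

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: admissible heuristic, dict mapping node -> estimated cost to goal
    Returns the cost of an optimal path from start to goal.
    """
    # Priority queue of (f, g, node); g is the exact cost so far.
    frontier = [(h[start], 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g  # with an admissible h, this g is optimal
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor))
    return None

# Hypothetical graph: the direct edge is expensive, the detour is cheap.
graph = {
    "S": [("A", 10), ("B", 1)],
    "A": [("G", 100)],
    "B": [("C", 1)],
    "C": [("G", 1)],
    "G": [],
}
h = {"S": 0, "A": 0, "B": 2, "C": 1, "G": 0}  # never overestimates
print(a_star(graph, h, "S", "G"))             # optimal cost 3 via S-B-C-G
```

Because h never overestimates, the goal is only popped from the queue once no cheaper candidate path remains.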
Formulation

n is a node
h is a heuristic
h(n) is the cost indicated by h to reach a goal from n
h*(n) is the optimal cost to reach a goal from n

h(n) is admissible if, for all n,

h(n) ≤ h*(n)

Construction

An admissible heuristic can be derived from a relaxed version of the problem, from information in pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.

Examples

Two different examples of admissible heuristics apply to the fifteen puzzle problem:

Hamming distance
Manhattan distance

The Hamming distance is the total number of misplaced tiles. This heuristic is admissible since the total number of moves to order the tiles correctly is at least the number of misplaced tiles (each tile not in place must be moved at least once). The cost (number of moves) to the goal (an ordered puzzle) is at least the Hamming distance of the puzzle.

The Manhattan distance of a puzzle is defined as:

h(n) = Σ_{all tiles} distance(tile, correct position)

Consider the puzzle below in which the player wishes to move each tile such that the numbers are ordered. The Manhattan distance is an admissible heuristic in this case because every tile will have to be moved at least the number of spots in between itself and its correct position. The subscripts show the Manhattan distance for each tile.
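The two heuristics above can be sketched in Python for a 4×4 board. The board layouts used below are hypothetical examples, not the puzzle pictured in the article:

```python
def hamming(board):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for i, tile in enumerate(board)
               if tile != 0 and tile != i + 1)

def manhattan(board):
    """Sum over tiles of |row - goal_row| + |col - goal_col| on a 4x4 grid."""
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        goal = tile - 1  # goal index of this tile
        total += abs(i // 4 - goal // 4) + abs(i % 4 - goal % 4)
    return total

# Solved board: both heuristics are 0.
solved = list(range(1, 16)) + [0]
print(hamming(solved), manhattan(solved))  # 0 0

# Swapping two adjacent tiles misplaces both; each is one move from home.
board = [2, 1] + list(range(3, 16)) + [0]
print(hamming(board), manhattan(board))    # 2 2
```

Since every misplaced tile contributes at least 1 to the Manhattan sum, the Manhattan distance is never smaller than the Hamming distance, so it is the more informative of the two admissible heuristics.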
The total Manhattan distance for the shown puzzle is:

h(n) = 3 + 1 + 0 + 1 + 2 + 3 + 3 + 4 + 3 + 2 + 4 + 4 + 4 + 1 + 1 = 36

Optimality proof

If an admissible heuristic is used in an algorithm that, per iteration, progresses only the path of lowest evaluation (current cost + heuristic) of several candidate paths, terminates the moment its exploration reaches the goal and, crucially, never closes all optimal paths before terminating (something that is possible with the A* search algorithm if special care isn't taken), then this algorithm can only terminate on an optimal path. To see why, consider the following proof by contradiction:

Assume such an algorithm managed to terminate on a path T with a true cost T_true greater than the optimal path S with true cost S_true. This means that before terminating, the evaluated cost of T was less than or equal to the evaluated cost of S (or else S would have been picked). Denote these evaluated costs T_eval and S_eval respectively. The above can be summarized as follows:

S_true < T_true
T_eval ≤ S_eval

If our heuristic is admissible, it follows that at this penultimate step T_eval = T_true, because any increase of the true cost by the heuristic on T would be inadmissible and the heuristic cannot be negative. On the other hand, an admissible heuristic requires that S_eval ≤ S_true, which combined with the above inequalities gives us T_eval < T_true and more specifically T_eval ≠ T_true. As T_eval and T_true cannot be both equal and unequal, our assumption must have been false, and so it must be impossible to terminate on a path more costly than the optimal one.

As an example, let us say we have costs as follows (the cost above or below a node is the heuristic, the cost at an edge is the actual cost):

   0     10     0    100     0
 START ------ O ------ GOAL
   |                    |
  0|                    |100
   |                    |
   O --------- O ------ O
  100    1    100   1   100

So clearly we would start off visiting the top middle node, since the expected total cost, i.e. f(n), is 10 + 0 = 10. Then the goal would be a candidate, with f(n) equal to 10 + 100 + 0 = 110. Then we would clearly pick the bottom nodes one after the other, followed by the updated goal, since they all have f(n) lower than the f(n) of the current goal, i.e. their f(n) values are 100, 101, 102, 102. So even though the goal was a candidate, we could not pick it because there were still better paths out there. This way, an admissible heuristic can ensure optimality.

However, note that although an admissible heuristic can guarantee final optimality, it is not necessarily efficient.

See also

Consistent heuristic
Heuristic function
Search algorithm
Wikipedia
This is a list of contributors to the mathematical background for general relativity. For ease of readability, the contributions (in brackets) are unlinked but can be found in the contributors' articles.

B
Luigi Bianchi (Bianchi identities, Bianchi groups, differential geometry)
C
Élie Cartan (curvature computation, early extensions of GTR, Cartan geometries)
Elwin Bruno Christoffel (connections, tensor calculus, Riemannian geometry)
Clarissa-Marie Claudel (Geometry of photon surfaces)
D
Tevian Dray (The Geometry of General Relativity)
E
Luther P. Eisenhart (semi-Riemannian geometries)
Frank B. Estabrook (Wahlquist–Estabrook approach to solving PDEs; see also parent list)
Leonhard Euler (Euler–Lagrange equation, from which the geodesic equation is obtained)
G
Carl Friedrich Gauss (curvature, theory of surfaces, intrinsic vs. extrinsic)
K
Martin Kruskal (inverse scattering transform; see also parent list)
L
Joseph Louis Lagrange (Lagrangian mechanics, Euler–Lagrange equation)
Tullio Levi-Civita (tensor calculus, Riemannian geometry; see also parent list)
André Lichnerowicz (tensor calculus, transformation groups)
M
Alexander Macfarlane (space analysis and Algebra of Physics)
Jerrold E. Marsden (linear stability)
N
Isaac Newton (Newton's identities for characteristic of Einstein tensor)
R
Gregorio Ricci-Curbastro (Ricci tensor, differential geometry)
Georg Bernhard Riemann (Riemannian geometry, Riemann curvature tensor)
S
Richard Schoen (Yamabe problem; see also parent list)
Corrado Segre (Segre classification)
W
Hugo D. Wahlquist (Wahlquist–Estabrook algorithm; see also parent list)
Hermann Weyl (Weyl tensor, gauge theories; see also parent list)
Eugene P. Wigner (stabilizers in Lorentz group)

See also

Contributors to differential geometry
Contributors to general relativity
Omega Chi Epsilon (or ΩΧΕ, sometimes simplified to OXE) is an international honor society for chemical engineering students.

History

The first chapter of Omega Chi Epsilon was formed at the University of Illinois in 1931 by a group of chemical engineering students. These founders were:

F. C. Howard
A. Garrell Deem
Ethan M. Stifle
John W. Bertetti

Professors D. B. Keyes and Norman Krase supported the students in their efforts. The Beta chapter was formed at Iowa State University in 1932. The society grew slowly at first. Baird's Manual indicates there were six chapters by 1957, of which three were inactive. However, interest was revived in the 1960s, allowing a sustained growth that has continued to the present day. There are approximately eighty active chapters of the society as of 2021. Omega Chi Epsilon amended its constitution to permit women to become members as of 1966. The organization became a member of the Association of College Honor Societies in 1967.

Symbols

The society's name comes from its motto "Ode Chrototos Eggegramai" or "In this Society, professionalism is engraved in our minds". The Greek letters ΩΧΕ were chosen to stand for "Order of Chemical Engineers". The society's official seal is made of two concentric circles, bearing at the top center the words "Omega Chi Epsilon" and at the bottom center the words "Founded, 1931". The letters of the society appear in the center of the seal. The society's colors are black, white, and maroon.

The society's badge is a black Maltese cross background, on which is superimposed a circular maroon crest. The crest bears the letters ΩΧΕ on a white band passing across the horizontal midline. Above the white band are two crossed retorts rendered in gold. Below the white band are a gold integral sign and a lightning bolt. These symbols represent the roles of chemistry, mathematics, and physics in chemical engineering.
Activities

Chapter traditions of service to their chemical engineering departments commonly prevail rather than broader, national traditions.

Membership

Membership is limited to chemical engineering juniors, seniors, and graduate students. Associate membership may be offered to professors or other members of the staff of institutions within the field.

Chapters

Omega Chi Epsilon has chartered 80 chapters at colleges and universities in the United States, Qatar, and the United Arab Emirates.

Governance

The society's annual meeting is held at the same time and place as the annual meeting of the American Institute of Chemical Engineers. Governance is vested in a national president, vice president, executive secretary, and treasurer. Together with the immediate past president, these constitute the Executive Committee. The current national president is Christi Luks of the Missouri University of Science and Technology.

See also

American Institute of Chemical Engineers
Honor society
Honor cord
Professional fraternities and sororities

External links

Omega Chi Epsilon homepage
The distributional learning theory or learning of probability distribution is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant. In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a large variety of different fields like machine learning, approximation algorithms, applied probability and statistics. This article explains the basic definitions, tools and results in this framework from the theory of computation point of view.

Definitions

Let X be the support of the distributions of interest. As in the original work of Kearns et al., if X is finite it can be assumed without loss of generality that X = {0, 1}^n, where n is the number of bits that have to be used in order to represent any y ∈ X. We focus on probability distributions over X.

There are two possible representations of a probability distribution D over X.

probability distribution function (or evaluator): an evaluator E_D for D takes as input any y ∈ X and outputs a real number E_D[y] which denotes the probability of y according to D, i.e. E_D[y] = Pr[Y = y] if Y ∼ D.

generator: a generator G_D for D takes as input a string of truly random bits y and outputs G_D[y] ∈ X distributed according to D. A generator can be interpreted as a routine that simulates sampling from the distribution D given a sequence of fair coin tosses.

A distribution D is said to have a polynomial generator (respectively evaluator) if its generator (respectively evaluator) exists and can be computed in polynomial time.

Let C_X be a class of distributions over X, that is, C_X is a set such that every D ∈ C_X is a probability distribution with support X. C_X can also be written as C for simplicity.

Before defining learnability, it is necessary to define good approximations of a distribution D. There are several ways to measure the distance between two distributions. The three most common possibilities are:

Kullback–Leibler divergence
Total variation distance of probability measures
Kolmogorov distance

The strongest of these distances is the Kullback–Leibler divergence and the weakest is the Kolmogorov distance.
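As an illustration, the three distance measures can be computed for discrete distributions on a small finite support. This is a minimal sketch, not taken from the referenced work:

```python
import math

def tv_distance(p, q):
    """Total variation: half the L1 distance between the mass functions."""
    return 0.5 * sum(abs(p[i] - q[i]) for i in range(len(p)))

def kolmogorov_distance(p, q):
    """Largest gap between the two cumulative distribution functions."""
    cp = cq = 0.0
    worst = 0.0
    for i in range(len(p)):
        cp += p[i]
        cq += q[i]
        worst = max(worst, abs(cp - cq))
    return worst

def kl_divergence(p, q):
    """Kullback-Leibler divergence (assumes q[i] > 0 wherever p[i] > 0)."""
    return sum(p[i] * math.log(p[i] / q[i]) for i in range(len(p)) if p[i] > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
# Kolmogorov distance never exceeds total variation distance.
print(kolmogorov_distance(p, q) <= tv_distance(p, q))  # True
```

The Kolmogorov distance is always bounded by the total variation distance, and by Pinsker's inequality the total variation distance is bounded by sqrt(KL/2), which is the sense in which KL is the strongest of the three.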
This means that closeness in a stronger distance implies closeness in the weaker ones. For any pair of distributions D, D′, the Kolmogorov distance is bounded by the total variation distance, Kolmogorov-distance(D, D′) ≤ TV-distance(D, D′), and by Pinsker's inequality the total variation distance is controlled by the Kullback–Leibler divergence, TV-distance(D, D′) ≤ sqrt(KL-distance(D, D′)/2). Therefore, for example, if D and D′ are close with respect to the Kullback–Leibler divergence then they are also close with respect to all the other distances.

The following definitions hold for all the distances, and therefore the symbol d(D, D′) denotes the distance between the distribution D and the distribution D′ using one of the distances described above. Although learnability of a class of distributions can be defined using any of these distances, applications refer to a specific distance.

The basic input used to learn a distribution is a number of samples drawn from this distribution. From the computational point of view, the assumption is that such a sample is given in a constant amount of time. So it is like having access to an oracle GEN(D) that returns a sample from the distribution D. Sometimes the interest is, apart from measuring the time complexity, to measure the number of samples that have to be used in order to learn a specific distribution D in a class of distributions C. This quantity is called the sample complexity of the learning algorithm.

To make the problem of distribution learning clearer, consider the problem of supervised learning.
In the framework of statistical learning theory, a training set S = {(x_1, y_1), …, (x_n, y_n)} is given, and the goal is to find a target function f : X → Y that minimizes some loss function, e.g. the square loss function. More formally, f = arg min_g ∫ V(y, g(x)) dρ(x, y), where V(·, ·) is the loss function, e.g. V(y, z) = (y − z)², and ρ(x, y) is the probability distribution according to which the elements of the training set are sampled. If the conditional probability distribution ρ_x(y) is known, then the target function has the closed form f(x) = ∫_y y dρ_x(y). So the set S is a set of samples from the probability distribution ρ(x, y). The goal of distributional learning theory is to find ρ given S, which can then be used to find the target function f.
Definition of learnability

A class of distributions C is called efficiently learnable if for every ε > 0 and 0 < δ ≤ 1, given access to GEN(D) for an unknown distribution D ∈ C, there exists a polynomial time algorithm A, called a learning algorithm of C, that outputs a generator or an evaluator of a distribution D′ such that

Pr[d(D, D′) ≤ ε] ≥ 1 − δ

If we know that D′ ∈ C, then A is called a proper learning algorithm; otherwise it is called an improper learning algorithm.

In some settings the class of distributions C is a class of well known distributions which can be described by a set of parameters. For instance, C could be the class of all the Gaussian distributions N(μ, σ²). In this case the algorithm A should be able to estimate the parameters μ, σ, and A is called a parameter learning algorithm. Parameter learning for simple distributions is a very well studied field, called statistical estimation, and there is a very long bibliography on different estimators for different kinds of simple known distributions. But distributional learning theory deals with learning classes of distributions that have a more complicated description.

First results

In their seminal work, Kearns et al.
deal with the case where A is described in terms of a finite polynomial sized circuit, and they proved the following for some specific classes of distributions.

OR gate distributions: for this kind of distributions there is no polynomial-sized evaluator, unless #P ⊆ P/poly. On the other hand, this class is efficiently learnable with a generator.
Parity gate distributions: this class is efficiently learnable with both generator and evaluator.
Mixtures of Hamming balls: this class is efficiently learnable with both generator and evaluator.
Probabilistic finite automata: this class is not efficiently learnable with an evaluator under the Noisy Parity Assumption, which is an impossibility assumption in the PAC learning framework.

ε-Covers

One very common technique for finding a learning algorithm for a class of distributions C is to first find a small ε-cover of C.

Definition: A set C_ε is called an ε-cover of C if for every D ∈ C there is a D′ ∈ C_ε such that d(D, D′) ≤ ε. An ε-cover is small if it has polynomial size with respect to the parameters that describe D.
Once there is an efficient procedure that for every ε > 0 finds a small ε-cover C_ε of C, the only remaining task is to select from C_ε the distribution D′ ∈ C_ε that is closest to the distribution D ∈ C that has to be learned. The problem is that, given D′, D″ ∈ C_ε, it is not trivial to compare d(D, D′) and d(D, D″) in order to decide which one is closest to D, because D is unknown. Therefore, the samples from D have to be used for these comparisons. Obviously the result of any such comparison always has a probability of error, so the task is similar to finding the minimum in a set of elements using noisy comparisons. There are a lot of classical algorithms for achieving this goal. The most recent one, which achieves the best guarantees, was proposed by Daskalakis and Kamath. This algorithm sets up a fast tournament between the elements of C_ε, where the winner D* of this tournament is the element which is ε-close to D (i.e. d(D*, D) ≤ ε) with probability at least 1 − δ.
In order to do so, their algorithm uses O(log N / ε²) samples from D and runs in O(N log N / ε²) time, where N = |C_ε|.

Learning sums of random variables

Learning of simple well known distributions is a well studied field and there are a lot of estimators that can be used. One more complicated class of distributions is the distribution of a sum of variables that follow simple distributions. These learning procedures have a close relation with limit theorems like the central limit theorem, because they tend to examine the same object when the sum tends to an infinite sum. Two recent results described here are the learning of Poisson binomial distributions and the learning of sums of independent integer random variables. All the results below hold using the total variation distance as a distance measure.

Learning Poisson binomial distributions

Consider n independent Bernoulli random variables X_1, …, X_n with probabilities of success p_1, …, p_n. A Poisson binomial distribution of order n is the distribution of the sum X = Σ_i X_i. For learning the class PBD = {D : D is a Poisson binomial distribution}, the first of the following results deals with improper learning of PBD and the second with proper learning of PBD.
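The selection step can be illustrated with a much simplified stand-in: choose the hypothesis in the cover whose total variation distance to the empirical distribution of the samples is smallest. This is not the Daskalakis–Kamath tournament itself, just a sketch of selecting from a cover using samples; the candidate distributions below are invented:

```python
import random

def empirical(samples, support_size):
    """Empirical mass function from a list of integer samples."""
    counts = [0] * support_size
    for s in samples:
        counts[s] += 1
    return [c / len(samples) for c in counts]

def select_from_cover(cover, samples, support_size):
    """Pick the hypothesis in the cover closest (in total variation) to the
    empirical distribution -- a simplified stand-in for a tournament."""
    emp = empirical(samples, support_size)
    def tv(p, q):
        return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
    return min(cover, key=lambda h: tv(h, emp))

random.seed(0)
true_dist = [0.7, 0.2, 0.1]
samples = random.choices(range(3), weights=true_dist, k=5000)
cover = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7], [1/3, 1/3, 1/3]]
print(select_from_cover(cover, samples, 3))  # expect the first hypothesis
```

With enough samples the empirical distribution concentrates around the true one, so the cover element closest to the truth wins with high probability; the actual tournament achieves this with far fewer samples and noisy pairwise comparisons.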
Theorem: Let D ∈ PBD. Then there is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε³) log(1/δ)) and the running time is Õ((1/ε³) log n log²(1/δ)).

Theorem: Let D ∈ PBD. Then there is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ ∈ PBD such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε²) log(1/δ)) and the running time is (1/ε)^{O(log²(1/ε))} · Õ(log n log(1/δ)).

One notable aspect of the above results is that the sample complexity of the learning algorithm doesn't depend on n, although the description of D is linear in n. Also, the second result is almost optimal with respect to the sample complexity, because there is a matching lower bound of Ω(1/ε²).
The proof uses a small ε-cover of PBD that was produced by Daskalakis and Papadimitriou in order to get this algorithm.

Learning sums of independent integer random variables

Consider n independent random variables X_1, …, X_n, each of which follows an arbitrary distribution with support {0, 1, …, k − 1}. A k-sum of independent integer random variables of order n is the distribution of the sum X = Σ_i X_i. For learning the class k-SIIRV = {D : D is a k-sum of independent integer random variables} there is the following result.

Theorem: Let D ∈ k-SIIRV. Then there is an algorithm which, given n, ε > 0 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is poly(k/ε) and the running time is also poly(k/ε).

Again, the sample and time complexity does not depend on n. It is possible to conclude this independence for the previous section by setting k = 2.
Learning mixtures of Gaussians

Let the random variables X ∼ N(μ_1, Σ_1) and Y ∼ N(μ_2, Σ_2). Define the random variable Z which takes the same value as X with probability w_1 and the same value as Y with probability w_2 = 1 − w_1. Then, if F_1 is the density of X and F_2 is the density of Y, the density of Z is F = w_1 F_1 + w_2 F_2. In this case Z is said to follow a mixture of Gaussians.

Pearson was the first to introduce the notion of mixtures of Gaussians, in his attempt to explain the probability distribution from which came some data that he wanted to analyze. After doing a lot of calculations by hand, he finally fitted his data to a mixture of Gaussians. The learning task in this case is to determine the parameters of the mixture w_1, w_2, μ_1, μ_2, Σ_1, Σ_2.

The first attempt to solve this problem was by Dasgupta. In this work Dasgupta assumed that the two means of the Gaussians are far enough from each other, meaning that there is a lower bound on the distance ||μ_1 − μ_2||. Using this assumption Dasgupta, and many scientists after him, were able to learn the parameters of the mixture. The learning procedure starts with clustering the samples into two different clusters, minimizing some metric.
Using the assumption that the means of the Gaussians are far away from each other, with high probability the samples in the first cluster correspond to samples from the first Gaussian and the samples in the second cluster to samples from the second one. Once the samples are partitioned, the μ_i, Σ_i can be computed from simple statistical estimators, and the w_i by comparing the sizes of the clusters. If GM is the set of all the mixtures of two Gaussians, using the above procedure theorems like the following can be proved.

Theorem: Let D ∈ GM with ||μ_1 − μ_2|| ≥ c sqrt(n max(λ_max(Σ_1), λ_max(Σ_2))), where c > 1/2 and λ_max(A) is the largest eigenvalue of A. Then there is an algorithm which, given ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds an approximation w′_i, μ′_i, Σ′_i of the parameters such that Pr[||w_i − w′_i|| ≤ ε] ≥ 1 − δ (and respectively for μ_i and Σ_i). The sample complexity of this algorithm is M = 2^{O(log²(1/(εδ)))} and the running time is O(M²d + Mdn).

The above result can also be generalized to a k-mixture of Gaussians.
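A one-dimensional toy version of this clustering procedure can be sketched as follows. The mixture parameters and the separating threshold are invented for the example, and the cited results concern high-dimensional Gaussians with a principled clustering step rather than a fixed threshold:

```python
import random
import statistics

def learn_mixture_1d(samples, threshold):
    """Cluster 1-D samples by a threshold lying between the two far-apart
    means, then estimate each component's weight, mean and variance."""
    left = [x for x in samples if x < threshold]
    right = [x for x in samples if x >= threshold]
    def estimate(cluster):
        w = len(cluster) / len(samples)  # weight from cluster size
        return w, statistics.fmean(cluster), statistics.pvariance(cluster)
    return estimate(left), estimate(right)

random.seed(1)
# Hypothetical mixture: w1 = 0.3 of N(0, 1) and w2 = 0.7 of N(10, 1).
samples = [random.gauss(0, 1) if random.random() < 0.3 else random.gauss(10, 1)
           for _ in range(20000)]
(w1, m1, v1), (w2, m2, v2) = learn_mixture_1d(samples, threshold=5.0)
print(w1, m1, m2)  # estimates close to the true 0.3, 0, 10
```

Because the means are many standard deviations apart, essentially every sample lands in the correct cluster, which is exactly the separation assumption the theorem formalizes.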
For the case of a mixture of two Gaussians there are also learning results that make no assumption on the distance between the means, like the following one, which uses the total variation distance as a distance measure. Theorem Let F ∈ G M {\displaystyle \textstyle F\in GM} . Then there is an algorithm which, given ϵ > 0 {\displaystyle \textstyle \epsilon >0} , 0 < δ ≤ 1 {\displaystyle \textstyle 0<\delta \leq 1} and access to G E N ( D ) {\displaystyle \textstyle GEN(D)} , finds w i ′ , μ i ′ , Σ i ′ {\displaystyle \textstyle w'_{i},\mu '_{i},\Sigma '_{i}} such that if F ′ = w 1 ′ F 1 ′ + w 2 ′ F 2 ′ {\displaystyle \textstyle F'=w'_{1}F'_{1}+w'_{2}F'_{2}} , where F i ′ = N ( μ i ′ , Σ i ′ ) {\displaystyle \textstyle F'_{i}=N(\mu '_{i},\Sigma '_{i})} , then Pr [ d ( F , F ′ ) ≤ ϵ ] ≥ 1 − δ {\displaystyle \textstyle \Pr[d(F,F')\leq \epsilon ]\geq 1-\delta } . The sample complexity and the running time of this algorithm are poly ( n , 1 / ϵ , 1 / δ , 1 / w 1 , 1 / w 2 , 1 / d ( F 1 , F 2 ) ) {\displaystyle \textstyle {\text{poly}}(n,1/\epsilon ,1/\delta ,1/w_{1},1/w_{2},1/d(F_{1},F_{2}))} . The distance between F 1 {\displaystyle \textstyle F_{1}} and F 2 {\displaystyle \textstyle F_{2}} does not affect the quality of the algorithm's result, only the sample complexity and the running time.
Wikipedia
Chiral magnetic effect (CME) is the generation of electric current along an external magnetic field induced by chirality imbalance. Fermions are said to be chiral if they keep a definite projection of spin quantum number on momentum. The CME is a macroscopic quantum phenomenon present in systems with charged chiral fermions, such as the quark–gluon plasma, or Dirac and Weyl semimetals. The CME is a consequence of chiral anomaly in quantum field theory; unlike conventional superconductivity or superfluidity, it does not require a spontaneous symmetry breaking. The chiral magnetic current is non-dissipative, because it is topologically protected: the imbalance between the densities of left-handed and right-handed chiral fermions is linked to the topology of fields in gauge theory by the Atiyah-Singer index theorem. The experimental observation of CME in a Dirac semimetal, zirconium pentatelluride (ZrTe5), was reported in 2014 by a group from Brookhaven National Laboratory and Stony Brook University. The material showed a conductivity increase in the Lorentz force-free configuration of the parallel magnetic and electric fields. In 2015, the STAR detector at Brookhaven's Relativistic Heavy Ion Collider and ALICE at CERN presented experimental evidence for the existence of CME in the quark–gluon plasma. See also Euler–Heisenberg Lagrangian Chiral anomaly
In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of simple continued fractions. In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy lim n → ∞ q n 1 / n = e β {\displaystyle \lim _{n\to \infty }{q_{n}}^{1/n}=e^{\beta }} Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely e β = e π 2 / ( 12 ln ⁡ 2 ) = 3.275822918721811159787681882 … {\displaystyle e^{\beta }=e^{\pi ^{2}/(12\ln 2)}=3.275822918721811159787681882\ldots } (sequence A086702 in the OEIS) The term "Lévy's constant" is sometimes used to refer to π 2 / ( 12 ln ⁡ 2 ) {\displaystyle \pi ^{2}/(12\ln 2)} (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss-Kuzmin distribution. In particular, the ratio has the asymptotic density function f ( z ) = 1 z ( z + 1 ) ln ⁡ ( 2 ) {\displaystyle f(z)={\frac {1}{z(z+1)\ln(2)}}} for z ≥ 1 {\displaystyle z\geq 1} and zero otherwise. This gives Lévy's constant as β = ∫ 1 ∞ ln ⁡ z z ( z + 1 ) ln ⁡ 2 d z = ∫ 0 1 ln ⁡ z − 1 ( z + 1 ) ln ⁡ 2 d z = π 2 12 ln ⁡ 2 {\displaystyle \beta =\int _{1}^{\infty }{\frac {\ln z}{z(z+1)\ln 2}}dz=\int _{0}^{1}{\frac {\ln z^{-1}}{(z+1)\ln 2}}dz={\frac {\pi ^{2}}{12\ln 2}}} . The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem. Proof The proof assumes basic properties of continued fractions. Let T : x ↦ 1 / x mod 1 {\displaystyle T:x\mapsto 1/x\mod 1} be the Gauss map. 
Lemma | ln ⁡ x − ln ⁡ p n ( x ) / q n ( x ) | ≤ 1 / q n ( x ) ≤ 1 / F n {\displaystyle |\ln x-\ln p_{n}(x)/q_{n}(x)|\leq 1/q_{n}(x)\leq 1/F_{n}} where F n {\textstyle F_{n}} is the n-th Fibonacci number. Proof. Define the function f ( t ) = ln ⁡ p n + p n − 1 t q n + q n − 1 t {\textstyle f(t)=\ln {\frac {p_{n}+p_{n-1}t}{q_{n}+q_{n-1}t}}} . The quantity to estimate is then | f ( T n x ) − f ( 0 ) | {\displaystyle |f(T^{n}x)-f(0)|} . By the mean value theorem, for any t ∈ [ 0 , 1 ] {\textstyle t\in [0,1]} , | f ( t ) − f ( 0 ) | ≤ max t ∈ [ 0 , 1 ] | f ′ ( t ) | = max t ∈ [ 0 , 1 ] 1 ( p n + t p n − 1 ) ( q n + t q n − 1 ) = 1 p n q n ≤ 1 q n {\displaystyle |f(t)-f(0)|\leq \max _{t\in [0,1]}|f'(t)|=\max _{t\in [0,1]}{\frac {1}{(p_{n}+tp_{n-1})(q_{n}+tq_{n-1})}}={\frac {1}{p_{n}q_{n}}}\leq {\frac {1}{q_{n}}}} The denominator sequence q 0 , q 1 , q 2 , … {\displaystyle q_{0},q_{1},q_{2},\dots } satisfies a recurrence relation, and so it is at least as large as the Fibonacci sequence 1 , 1 , 2 , … {\displaystyle 1,1,2,\dots } . Ergodic argument Since p n ( x ) = q n − 1 ( T x ) {\textstyle p_{n}(x)=q_{n-1}(Tx)} , and p 1 = 1 {\textstyle p_{1}=1} , we have − ln ⁡ q n = ln ⁡ p n ( x ) q n ( x ) + ln ⁡ p n − 1 ( T x ) q n − 1 ( T x ) + ⋯ + ln ⁡ p 1 ( T n − 1 x ) q 1 ( T n − 1 x ) {\displaystyle -\ln q_{n}=\ln {\frac {p_{n}(x)}{q_{n}(x)}}+\ln {\frac {p_{n-1}(Tx)}{q_{n-1}(Tx)}}+\dots +\ln {\frac {p_{1}(T^{n-1}x)}{q_{1}(T^{n-1}x)}}} By the lemma, − ln ⁡ q n = ln ⁡ x + ln ⁡ T x + ⋯ + ln ⁡ T n − 1 x + δ {\displaystyle -\ln q_{n}=\ln x+\ln Tx+\dots +\ln T^{n-1}x+\delta } where | δ | ≤ ∑ k = 1 ∞ 1 / F k {\textstyle |\delta |\leq \sum _{k=1}^{\infty }1/F_{k}} is finite; this bounding sum is known as the reciprocal Fibonacci constant. 
By Birkhoff's ergodic theorem, the limit lim n → ∞ ln ⁡ q n n {\textstyle \lim _{n\to \infty }{\frac {\ln q_{n}}{n}}} converges to ∫ 0 1 ( − ln ⁡ t ) ρ ( t ) d t = π 2 12 ln ⁡ 2 {\displaystyle \int _{0}^{1}(-\ln t)\rho (t)dt={\frac {\pi ^{2}}{12\ln 2}}} almost surely, where ρ ( t ) = 1 ( 1 + t ) ln ⁡ 2 {\displaystyle \rho (t)={\frac {1}{(1+t)\ln 2}}} is the Gauss distribution. See also Khinchin's constant References Further reading Khinchin, A. Ya. (14 May 1997). Continued Fractions. Dover. ISBN 0-486-69630-8. External links Weisstein, Eric W. "Lévy Constant". MathWorld. OEIS sequence A086702 (Decimal expansion of Lévy's constant)
The C programming language has a set of functions implementing operations on strings (character strings and byte strings) in its standard library. Various operations, such as copying, concatenation, tokenization and searching are supported. For character strings, the standard library uses the convention that strings are null-terminated: a string of n characters is represented as an array of n + 1 elements, the last of which is a "NUL character" with numeric value 0. The only support for strings in the programming language proper is that the compiler translates quoted string constants into null-terminated strings. Definitions A string is defined as a contiguous sequence of code units terminated by the first zero code unit (often called the NUL code unit). This means a string cannot contain the zero code unit, as the first one seen marks the end of the string. The length of a string is the number of code units before the zero code unit. The memory occupied by a string is always one more code unit than the length, as space is needed to store the zero terminator. Generally, the term string means a string where the code unit is of type char, which is exactly 8 bits on all modern machines. C90 defines wide strings which use a code unit of type wchar_t, which is 16 or 32 bits on modern machines. This was intended for Unicode but it is increasingly common to use UTF-8 in normal strings for Unicode instead. Strings are passed to functions by passing a pointer to the first code unit. Since char * and wchar_t * are different types, the functions that process wide strings are different than the ones processing normal strings and have different names. String literals ("text" in the C source code) are converted to arrays during compilation. The result is an array of code units containing all the characters plus a trailing zero code unit. In C90 L"text" produces a wide string. 
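The null-terminated convention described above can be seen directly: a quoted constant of n characters gives an array whose strlen is n but whose storage is n + 1 bytes. A minimal illustration:

```c
#include <string.h>

/* "hello" has 5 characters but occupies 6 bytes: the compiler
 * appends a terminating zero byte to every quoted string constant. */
size_t hello_length(void) {
    const char s[] = "hello";
    return strlen(s);      /* counts the bytes before the NUL */
}

size_t hello_storage(void) {
    const char s[] = "hello";
    return sizeof s;       /* storage including the NUL terminator */
}
```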
A string literal can contain the zero code unit (one way is to put \0 into the source), but this causes the resulting string to end at that point. The rest of the literal is still placed in memory (with another zero code unit added to the end), but string functions stop at the first zero code unit, so the code units after it are not part of the string. Character encodings Each string ends at the first occurrence of the zero code unit of the appropriate kind (char or wchar_t). Consequently, a byte string (char*) can contain non-NUL characters in ASCII or any ASCII extension, but not characters in encodings such as UTF-16 (even though a 16-bit code unit might be nonzero, its high or low byte might be zero). The encodings that can be stored in wide strings are defined by the width of wchar_t. In most implementations, wchar_t is at least 16 bits, and so all 16-bit encodings, such as UCS-2, can be stored. If wchar_t is 32-bits, then 32-bit encodings, such as UTF-32, can be stored. (The standard requires a "type that holds any wide character", which on Windows no longer holds true since the UCS-2 to UTF-16 shift. This was recognized as a defect in the standard and fixed in C++.) C++11 and C11 add two types with explicit widths char16_t and char32_t. Variable-width encodings can be used in both byte strings and wide strings. String length and offsets are measured in bytes or wchar_t, not in "characters", which can be confusing to beginning programmers. UTF-8 and Shift JIS are often used in C byte strings, while UTF-16 is often used in C wide strings when wchar_t is 16 bits. Truncating strings with variable-width characters using functions like strncpy can produce invalid sequences at the end of the string. This can be unsafe if the truncated parts are interpreted by code that assumes the input is valid. 
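A minimal illustration of an embedded zero code unit: every byte of the literal is stored, but string functions stop at the first zero, while the mem* functions (which take an explicit length) can still observe the rest:

```c
#include <string.h>

/* "abc\0def" stores 8 bytes (including the implicit trailing zero),
 * but as a string it ends at the first zero code unit. */
size_t embedded_nul_strlen(void) {
    const char s[] = "abc\0def";
    return strlen(s);              /* 3, not 7 */
}

/* The bytes after the first zero are still in memory; memcmp takes
 * an explicit length and so can compare past the NUL. */
int bytes_after_nul_intact(void) {
    const char s[] = "abc\0def";   /* sizeof s == 8 */
    return memcmp(s + 4, "def", 4) == 0;
}
```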
Support for Unicode literals such as char foo[512] = "φωωβαρ"; (UTF-8) or wchar_t foo[512] = L"φωωβαρ"; (UTF-16 or UTF-32, depends on wchar_t) is implementation defined, and may require that the source code be in the same encoding, especially for char where compilers might just copy whatever is between the quotes. Some compilers or editors will require entering all non-ASCII characters as \xNN sequences for each byte of UTF-8, and/or \uNNNN for each word of UTF-16. Since C11 (and C++11), a new literal prefix u8 is available that guarantees UTF-8 for a bytestring literal, as in char foo[512] = u8"φωωβαρ";. Since C++20 and C23, a char8_t type was added that is meant to store UTF-8 characters and the types of u8 prefixed character and string literals were changed to char8_t and char8_t[] respectively. Features Terminology In historical documentation the term "character" was often used instead of "byte" for C strings, which leads many to believe that these functions somehow do not work for UTF-8. In fact all lengths are defined as being in bytes and this is true in all implementations, and these functions work as well with UTF-8 as with single-byte encodings. The BSD documentation has been fixed to make this clear, but POSIX, Linux, and Windows documentation still uses "character" in many places where "byte" or "wchar_t" is the correct term. Functions for handling memory buffers can process sequences of bytes that include null-byte as part of the data. Names of these functions typically start with mem, as opposite to the str prefix. Headers Most of the functions that operate on C strings are declared in the string.h header (cstring in C++), while functions that operate on C wide strings are declared in the wchar.h header (cwchar in C++). These headers also contain declarations of functions used for handling memory buffers; the name is thus something of a misnomer. 
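Because lengths are measured in code units rather than characters, a one-character UTF-8 string can have length 2. The sketch below spells out the UTF-8 bytes of φ with \x escapes so the example does not depend on the source file's encoding:

```c
#include <string.h>

/* strlen counts bytes, not characters: the single character phi is
 * the two-byte UTF-8 sequence 0xCF 0x86. */
size_t utf8_phi_bytes(void) {
    const char phi[] = "\xCF\x86";
    return strlen(phi);    /* 2 bytes for 1 character */
}
```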
Functions declared in string.h are extremely popular since, as a part of the C standard library, they are guaranteed to work on any platform which supports C. However, some security issues exist with these functions, such as potential buffer overflows when not used carefully and properly, leading programmers to prefer safer, though possibly less portable, variants; some popular ones are listed below. Some of these functions also violate const-correctness by accepting a const string pointer and returning a non-const pointer within the string. To correct this, some have been separated into two overloaded functions in the C++ version of the standard library. Constants and types Functions Multibyte functions These functions all need a mbstate_t object, originally in static memory (making the functions not thread-safe) and in later additions maintained by the caller. This was originally intended to track shift states in the mb encodings, but modern ones such as UTF-8 do not need this. However these functions were designed on the assumption that the wc encoding is not a variable-width encoding and thus are designed to deal with exactly one wchar_t at a time, passing it by value rather than using a string pointer. As UTF-16 is a variable-width encoding, the mbstate_t has been reused to keep track of surrogate pairs in the wide encoding, though the caller must still detect and call mbtowc twice for a single character. Later additions to the standard admit that the only conversion programmers are interested in is between UTF-8 and UTF-16 and directly provide this. Numeric conversions The C standard library contains several functions for numeric conversions. The functions that deal with byte strings are defined in the stdlib.h header (cstdlib header in C++). The functions that deal with wide strings are defined in the wchar.h header (cwchar header in C++). 
The functions strchr, bsearch, strpbrk, strrchr, strstr, memchr and their wide counterparts are not const-correct, since they accept a const string pointer and return a non-const pointer within the string. This has been fixed in C23. Also, since the Normative Amendment 1 (C95), atoxx functions are considered subsumed by strtoxxx functions, for which reason neither C95 nor any later standard provides wide-character versions of these functions. The argument against atoxx is that they do not differentiate between an error and a 0. Popular extensions Replacements Despite the well-established need to replace strcat and strcpy with functions that do not allow buffer overflows, no accepted standard has arisen. This is partly due to the mistaken belief by many C programmers that strncat and strncpy have the desired behavior; however, neither function was designed for this (they were intended to manipulate null-padded fixed-size string buffers, a data format less commonly used in modern software), and the behavior and arguments are non-intuitive and often written incorrectly even by expert programmers. The most popular replacements are the strlcat and strlcpy functions, which appeared in OpenBSD 2.4 in December, 1998. These functions always write one NUL to the destination buffer, truncating the result if necessary, and return the size of buffer that would be needed, which allows detection of the truncation and provides a size for creating a new buffer that will not truncate. For a long time they have not been included in the GNU C library (used by software on Linux), on the basis of allegedly being inefficient, encouraging the use of C strings (instead of some superior alternative form of string), and hiding other potential errors. 
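The difference between the atoxx and strtoxxx families can be made concrete: strtol reports where parsing stopped, so a caller can distinguish "the input was 0" from "the input was not a number". A small sketch (parse_long is an illustrative helper name, not a standard function):

```c
#include <stdlib.h>

/* atoi cannot distinguish "0" from unparsable input; strtol can,
 * via its end-pointer argument. */
int parse_long(const char *s, long *out) {
    char *end;
    long v = strtol(s, &end, 10);
    if (end == s || *end != '\0')
        return 0;              /* no digits, or trailing garbage */
    *out = v;
    return 1;
}

int atoi_vs_strtol_demo(void) {
    long v = -1;
    return atoi("junk") == 0   /* same result as atoi("0") */
        && parse_long("junk", &v) == 0
        && parse_long("42abc", &v) == 0
        && parse_long("0", &v) == 1 && v == 0;
}
```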
Even while glibc had not added support, strlcat and strlcpy were implemented in a number of other C libraries including ones for OpenBSD, FreeBSD, NetBSD, Solaris, OS X, and QNX, as well as in alternative C libraries for Linux, such as libbsd, introduced in 2008, and musl, introduced in 2011, and the source code was added directly to other projects such as SDL, GLib, ffmpeg, rsync, and even internally in the Linux kernel. This eventually changed: the glibc FAQ notes that as of glibc 2.38 the code has been committed and the functions added. The functions were also standardized as part of POSIX.1-2024; the Austin Group Defect Tracker ID 986 tracked some of the discussion about such plans for POSIX. Sometimes memcpy or memmove are used instead, as they may be more efficient than strcpy because they do not repeatedly check for NUL (this is less true on modern processors). Since they need a buffer length as a parameter, correct setting of this parameter can avoid buffer overflows. As part of its 2004 Security Development Lifecycle, Microsoft introduced a family of "secure" functions including strcpy_s and strcat_s (along with many others). These functions were standardized with some minor changes as part of the optional C11 (Annex K) proposed by ISO/IEC WDTR 24731. These functions perform various checks including whether the string is too long to fit in the buffer. If the checks fail, a user-specified "runtime-constraint handler" function is called, which usually aborts the program. These functions attracted considerable criticism because initially they were implemented only on Windows, and at the same time Microsoft Visual C++ started to produce warning messages suggesting the use of these functions instead of standard ones. This has been speculated by some to be an attempt by Microsoft to lock developers into its platform. 
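The strlcpy contract described above (always terminate when the buffer is non-empty, truncate if needed, and return the full source length so the caller can detect truncation) can be sketched as follows. This is an illustration of the interface, not the OpenBSD implementation:

```c
#include <stddef.h>

/* Sketch of the strlcpy interface: copy at most size-1 bytes,
 * always NUL-terminate when size > 0, and return strlen(src).
 * A return value >= size signals that truncation occurred. */
size_t my_strlcpy(char *dst, const char *src, size_t size) {
    size_t srclen = 0;
    while (src[srclen] != '\0')
        srclen++;                  /* strlen(src) */
    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
        dst[n] = '\0';             /* always terminated */
    }
    return srclen;
}

/* Copying "hello" into a 4-byte buffer truncates to "hel" but still
 * terminates; the return value tells the caller 6 bytes were needed. */
int strlcpy_demo(void) {
    char buf[4];
    size_t need = my_strlcpy(buf, "hello", sizeof buf);
    return need == 5 && need >= sizeof buf     /* truncated */
        && buf[0] == 'h' && buf[1] == 'e'
        && buf[2] == 'l' && buf[3] == '\0';
}
```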
Experience with these functions has shown significant problems with their adoption and errors in usage, so the removal of Annex K was proposed for the next revision of the C standard. Usage of memset_s has been suggested as a way to avoid unwanted compiler optimizations. See also C syntax § Strings – source code syntax, including backslash escape sequences String functions Perl Compatible Regular Expressions (PCRE) Notes References External links Fast memcpy in C, multiple C coding examples to target different types of CPU instruction architectures
In mathematics, an N-topological space is a set equipped with N arbitrary topologies. If τ1, τ2, ..., τN are N topologies defined on a nonempty set X, then the N-topological space is denoted by (X, τ1, τ2, ..., τN). For N = 1, the structure is simply a topological space. For N = 2, the structure becomes a bitopological space, introduced by J. C. Kelly. Example Let X = {x1, x2, ..., xn} be any finite set. Suppose Ar = {x1, x2, ..., xr}. Then the collection τ1 = {φ, A1, A2, ..., An = X} is a topology on X. If τ1, τ2, ..., τm are m such topologies (chain topologies) defined on X, then the structure (X, τ1, τ2, ..., τm) is an m-topological space.
Pieter Adriaan Flach (born 8 April 1961, Sneek) is a Dutch computer scientist and a Professor of Artificial Intelligence in the Department of Computer Science at the University of Bristol. He is the author of the acclaimed Simply Logical: Intelligent Reasoning by Example (John Wiley, 1994) and Machine Learning: The Art and Science of Algorithms that Make Sense of Data (Cambridge University Press, 2012). Education Flach received an MSc in Electrical Engineering from Universiteit Twente in 1987 and a PhD in Computer Science from Tilburg University in 1995. Research Flach's research interests are in data mining and machine learning.
The German Informatics Society (GI) (German: Gesellschaft für Informatik) is a German professional society for computer science, with around 20,000 personal and 250 corporate members. It is the biggest organized representation of its kind in the German-speaking world. History The German Informatics Society was founded in Bonn, Germany, on September 16, 1969. Initially aimed primarily at researchers, it expanded in the mid-1970s to include computer science professionals, and in 1978 it founded its journal Informatik Spektrum to reach this broader audience. The Deutsche Informatik-Akademie in Bonn was founded in 1987 by the German Informatics Society in order to provide seminars and continuing education for computer science professionals. In 1990, the German Informatics Society contributed to the founding of the International Conference and Research Center for Computer Science (renamed since as the Leibniz Center for Informatics) at Dagstuhl; since its founding, Schloss Dagstuhl has become a major center for international academic workshops. In 1983, the German Informatics Society became a member society of the International Federation for Information Processing (IFIP), taking over the role of representing Germany from the Deutsche Arbeitsgemeinschaft für Rechenanlagen. In 1989, it joined the Council of European Professional Informatics Societies. Activities The main activity of the association is to support the professional development of its members in every aspect of the rapidly changing field of informatics. 
In order to realise this aim the German Informatics Society maintains a large number of committees, special interest groups, and working groups in the field of theory of computation, artificial intelligence, bioinformatics, software engineering, human computer interaction, databases, technical informatics, graphics and information visualisation, business informatics, legal aspects of computing, computer science education, social computing, and computer security. At present, the GI runs more than 30 local groups in cooperation with the German chapter of the Association for Computing Machinery. Other important GI activities include raising public awareness of informatics, including its benefits and risks. Lobbying activities have been organised by the office in Berlin since 2013. Additionally, the GI runs programmes designed for young people and women to foster interest in informatics. In addition to the Informatik Spektrum, which is the journal of the society, most of the society's special interest groups maintain their own journals. Overall the society has approximately 40 regular publications, and it sponsors a similar number of conferences and events annually. Many of these conferences have their proceedings published in the GI's book series, Lecture Notes in Informatics, which also publishes Ph.D. thesis abstracts and research monographs. Every two years, the German Informatics Society awards the Konrad Zuse Medal to an outstanding German computer science researcher. It also offers prizes for the best Ph.D. thesis, for computer science education, for practical innovations, and for teams of student competitors. Each year beginning in 2002, the GI has elected a small number of its members as fellows, its highest membership category. Conferences One of the biggest informatics conferences in the German-speaking world is the INFORMATIK. The conference is organised in cooperation with universities, each year in a different location. 
More than 1,000 participants visit workshops and keynotes on current challenges in the field of information technology. In addition, several special interest groups organise large meetings with an international reputation, for example the „Software Engineering (SE)“, the „Multikonferenz Wirtschaftsinformatik (MKWI)“, the „Mensch-Computer-Interaktion (MCI)“ and the „Datenbanksysteme für Business, Technologie und Web (BTW)“. The Detection of Intrusions and Malware, and Vulnerability Assessment event, designed to serve as a general forum for discussing malware and the vulnerability of computing systems to attacks, is another annual project under the auspices of the organization. Its last conference was held from 6 to 7 July in Bonn, Germany, sponsored by entities such as Google, Rohde & Schwarz, and VMRay. Honorary members The following people are honorary members of the German Informatics Society due to their achievements in the field of informatics. Konrad Zuse (since 1985) Friedrich Ludwig Bauer (since 1987) Wilfried Brauer (since 2000) Günter Hotz (since 2002) Joseph Weizenbaum (since 2003) Gerhard Krüger (since 2007) Heinz Schwärtzel (since 2008) Associated societies Swiss Informatics Society Gesellschaft für Informatik in der Land-, Forst- und Ernährungswirtschaft (GIL) German Chapter of the ACM (GChACM) References External links Official website
Secret Invasion is an American television miniseries created by Kyle Bradstreet for the streaming service Disney+, based on the 2008 Marvel Comics storyline of the same name. It is the ninth television series in the Marvel Cinematic Universe (MCU) produced by Marvel Studios, sharing continuity with the films of the franchise. It follows Nick Fury and Talos as they uncover a conspiracy by a group of shapeshifting Skrulls to conquer Earth. Bradstreet serves as the head writer, with Ali Selim directing. Samuel L. Jackson and Ben Mendelsohn reprise their respective roles as Fury and Talos from previous MCU media, with Kingsley Ben-Adir, Killian Scott, Samuel Adewunmi, Dermot Mulroney, Richard Dormer, Emilia Clarke, Olivia Colman, Don Cheadle, Charlayne Woodard, Christopher McDonald, and Katie Finneran also starring. Development on the series began by September 2020, with Bradstreet and Jackson attached. The title and premise of the series, along with Mendelsohn's return, were revealed that December. Additional casting occurred throughout March and April 2021, followed by the hiring of Selim and Thomas Bezucha that May to direct the series. Filming began in London by September 2021 and wrapped in late April 2022, with additional filming around England. During production, much of the series' creative team was replaced, with Brian Tucker taking over as writer from Bradstreet and Bezucha exiting, and extensive reshoots took place from mid-June to late September 2022. Secret Invasion premiered on June 21, 2023, and ran for six episodes until July 26. It is the first series in Phase Five of the MCU. The series received mixed reviews from critics, who praised Jackson's and Mendelsohn's performances but criticized the writing (particularly that of the finale), pacing, and visual effects. 
Premise Nick Fury works with Talos, a shapeshifting alien Skrull, to uncover a conspiracy by a group of renegade Skrulls led by Gravik who plan to gain control of Earth by posing as different humans around the world. Cast and characters Samuel L. Jackson as Nick Fury:The former director of S.H.I.E.L.D. who has been working with the Skrulls in space for years before returning to Earth. Fury has been away from Earth so long in part because he is worn out and uncertain of his place in the world following the events of Avengers: Infinity War (2018) and Avengers: Endgame (2019). Jackson said the series would delve deeper into Fury's past and future, and allowed him to "explore something other than the badassery of who Nick Fury is" including the toll of his job on his personal life. He continued that Secret Invasion allowed him to work out some new elements of the character that his previous appearances in the MCU had not. Executive producer Jonathan Schwartz added that "sins from [Fury's] past start to haunt him once again" given the things he had to do to protect Earth in the past have ramifications. Ben Mendelsohn as Talos: The former leader of the Skrulls and an ally of Fury. Mendelsohn noted how Talos, along with Fury, have "lost their way" and are "up against it" since he was last seen in Captain Marvel (2019). Kingsley Ben-Adir as Gravik:The leader of a group of rebel Skrulls who has broken away from Talos and believe the best way to help their kind is to infiltrate Earth for the resources they need. He sets up his operation in a decommissioned radioactive site in Russia, and has a hatred for most of the Skrulls working for him, believing them to be idiots. Ben-Adir worked to find the proper level of hatred to portray in each scene, since he felt Gravik trusts no one and hates everyone but still needs the other Skrulls to accomplish his goals. 
Director Ali Selim said Gravik was not a terrorist or "just a bad guy with a bomb" and the series would explore the reasons for his actions. Lucas Persaud portrays Gravik as a child. Killian Scott as Pagon: A rebel Skrull and Gravik's second-in-command. Ben-Adir said Gravik sees that Pagon has ambition and wants to be a leader, but "he doesn't have the guts to take it". Scott also portrays the human counterpart whose form Pagon took in the final episode. Samuel Adewunmi as Beto: A rebel Skrull recruit. Dermot Mulroney as Ritson: The president of the United States. Richard Dormer as Prescod: A former S.H.I.E.L.D. agent who uncovered the Skrulls' plan to invade Earth. Emilia Clarke as G'iah:Talos's daughter who works for Gravik. Clarke described G'iah as having "a kind of punk feeling" to her, adding that being a refugee had "hardened her". She resents Fury since he has not been able to deliver on the promises he made in Captain Marvel to find the Skrulls a new home. Clarke worked with Mendelsohn to create G'iah and Talos's backstory to "fill in a lot of the gaps", with Clarke believing G'iah would have had an "upbringing that was regimented with training" since the Skrulls are a warring species, that would have led to a "fierce need for her own independence" while judging some of Talos's choices. G'iah was previously portrayed as a child in Captain Marvel by Auden L. Ophuls and Harriet L. Ophuls. Olivia Colman as Sonya Falsworth:A high-ranking MI6 agent and an old ally of Fury's who looks to protect the United Kingdom's national security interests during the invasion. Described as "a more antagonistic presence" in the series, Schwartz said Falsworth could be working either with or against Fury depending on their desired goals, with Jackson calling the two "frenemies". 
Jackson added that Colman's portrayal of Falsworth changed her dynamic with Fury, since she played the character "cozy and fuzzy" rather than contentious, which allowed for the two to "work together in a harmony that's more satisfying to the story and our backstory than any other way". Don Cheadle as Raava / James "Rhodey" Rhodes:A female Skrull posing as Rhodes (an officer in the U.S. Air Force and an Avenger) who serves as an envoy and advisor to President Ritson. Nisha Aaliya portrays Raava in her Skrull form. Jackson said Rhodes would be a "political animal" in the series rather than using the War Machine armor. Cheadle noted that this made Rhodes more of an adversary than in his previous MCU appearances, with the character caught between being "a military man following the chain of command" and someone who can go "outside the box". Once Fury becomes aware that Rhodes has been replaced by a Skrull, Cheadle felt the two enter "sort of a cat-and-mouse game" with each having compromising info on the other. The real Rhodes is ultimately released from his Skrull containment pod at the end of the series. Charlayne Woodard as Varra / Priscilla Davis: A Skrull who is the wife of Nick Fury and has a history with Gravik. Varra took the likeness of Dr. Priscilla Davis who was suffering from a congenital heart defect. Christopher McDonald as Chris Stearns: A Skrull posing as an FXN news host and member of the Skrull council. The character was based on real-life newscaster Tucker Carlson and the Fox News channel. Katie Finneran as Rosa Dalton: A scientist replaced by a Skrull that is researching various DNA samples for the Harvest project. Reprising their MCU roles are Cobie Smulders as Maria Hill, Martin Freeman as Everett K. Ross, and O-T Fagbenle as Rick Mason. The first episode reveals that Ross had been replaced by a Skrull infiltrator, and also features Hill's death. Smulders had been aware of the character's death during her initial discussions to join the series. 
Tony Curran appears as Derrik Weatherby, the director of MI6 who was replaced by a Skrull. Curran previously portrayed Bor in Thor: The Dark World (2013) and Finn Cooley in the second season of Daredevil (2016). Also appearing are Ben Peel as Brogan, a rebel Skrull who is tortured by Falsworth; Seeta Indrani as Shirley Sagar, Christopher Goh as Jack Hyuk-Bin, Giampiero Judica as NATO Secretary General Sergio Caspani, and Anna Madeley as the UK prime minister Pamela Lawton, all members of the Skrull Council; Juliet Stevenson as Maria Hill's mother Elizabeth; and Charlotte Baker and Kate Braithwaite as Soren, the wife of Talos and mother of G'iah who was killed by Gravik; Baker portrays Soren's human disguise while Braithwaite portrays her Skrull appearance. Soren was previously portrayed by Sharon Blynn in Captain Marvel and Spider-Man: Far From Home (2019). Episodes Production Development In September 2020, Kyle Bradstreet was revealed to be developing a television series for the streaming service Disney+ centered on the Marvel Comics character Nick Fury. The character had previously been one of ten properties announced in September 2005 by Marvel Entertainment chairman and CEO Avi Arad as being developed for film by the newly formed studio Marvel Studios, after Marvel received financing to produce the slate of films to be distributed by Paramount Pictures; Andrew W. Marlowe was hired to write a script for a Nick Fury film in April 2006. In April 2019, after Samuel L. Jackson had portrayed Nick Fury in ten Marvel Cinematic Universe (MCU) films as well as the Marvel Television series Agents of S.H.I.E.L.D., Richard Newby from The Hollywood Reporter felt it was time the character received his own film, calling the character "the MCU's most powerful asset yet to be fully untapped". Jackson was attached to reprise his role in Bradstreet's series, with the latter writing and serving as executive producer. 
In December 2020, Marvel Studios President Kevin Feige officially announced a new series titled Secret Invasion, with Jackson co-starring with Ben Mendelsohn in his MCU role of Talos. The series is based on the 2008–09 comic book storyline of the same name, with Feige describing it as a "crossover event series" that would tie in with future MCU films, a description echoed in the series' official premise. Marvel Studios chose to make a Secret Invasion series instead of a film because it allowed them to do something different than they had done before. Bradstreet had worked on scripts for the series for about a year before he was replaced with Brian Tucker. Directors were being lined up by April 2021. Thomas Bezucha and Ali Selim were attached to direct the series a month later, with each expected to direct three episodes and work on the story. However, Bezucha left the series during production because of scheduling conflicts with reshoots, and Selim ultimately directed all six episodes. The series reportedly went through multiple issues during pre-production, which necessitated Marvel Studios executive Jonathan Schwartz becoming more involved with the series to get it "back on track" as it had fallen behind schedule and risked some actors becoming unavailable due to other commitments. The episodes were described as being an hour long each, with the series ultimately totaling approximately 4.5 hours. Marvel Studios' Feige, Louis D'Esposito, Victoria Alonso, Brad Winderbaum, and Schwartz served as executive producers on the series alongside Jackson, Selim, Bradstreet, and Tucker. The budget for the series was $211.6 million. This was noted as a large budget compared to the content of the series, which did not use large action set pieces or extensive visual effects. Extensive reshoots were believed to be partially responsible for the large budget.
Writing Bradstreet, Tucker, Brant Englestein, Roxanne Paredes, and Michael Bhim served as writers on the series. Tucker received the majority of writing credits on the episodes. Feige said the series would not be looking to match the scope of the Secret Invasion comic book storyline, in terms of the number of characters featured or the impact on the wider universe, considering the comic book featured more characters than the crossover film Avengers: Endgame (2019). Instead, he described Secret Invasion as a showcase for Jackson and Mendelsohn that would explore the political paranoia elements of the Secret Invasion comic series "that was great with the twists and turns that that took". The creatives were also inspired by the Cold War-era espionage novels of John le Carré, the television series Homeland (2011–2020) and The Americans (2013–2018), and the film The Third Man (1949). Selim said the series transitions at times between espionage noir and a Western, highlighting the film The Searchers (1956) as a Western inspiration. Feige said the series would serve as a present-day follow-up to the 1990s story of Captain Marvel (2019), alongside that film's sequel The Marvels (2023), but was tonally different from the films. Jackson said the series would uncover some of the things that happened during the Blip. Cobie Smulders described the series as "a very grounded, on-this-earth drama" that was "dealing with real human issues and dealing with trust". Discussing the Skrulls, shapeshifting green-skinned extraterrestrials who can perfectly simulate any human being at will, Jackson felt their inclusion introduced "a political aspect" in that their ability to shape-shift makes people question who can be trusted and "What happens when people get afraid and don't understand other people? You can't tell who's innocent and who's guilty in this particular instance." The first episode reveals that Everett K. 
Ross had been replaced by a Skrull infiltrator, while the fourth episode reveals that James "Rhodey" Rhodes has been replaced by the Skrull Raava. Feige explained that the creators chose Rhodes to be a Skrull because they were looking for an established MCU character viewers would not be expecting to be a Skrull, and to introduce a new experience for viewers rewatching his past MCU appearances and questioning if he was a Skrull during them. They approached actor Don Cheadle about this during the series' early development, and he liked the opportunity to "play with different sides of Rhodey that we haven't seen before". It is revealed that Rhodes had been replaced by a Skrull "for a long time" and is seen wearing a hospital gown when being released from his containment pod. This was interpreted by some to mean he had been replaced after the events of Captain America: Civil War (2016), a theory which Selim acknowledged, though he would not confirm it specifically, saying "does it have to be definitive, or is it more fun for the audience to go back and revisit every moment" since Civil War to question whether Rhodes was a Skrull or not. Casting Jackson was expected to reprise his role in the series with the reveal of its development in September 2020. When the series was officially announced that December, Feige confirmed Jackson's casting and announced that Mendelsohn would co-star. Kingsley Ben-Adir was cast as the Skrull Gravik, the "lead villain" role, in March 2021, and the following month, Olivia Colman was cast as Sonya Falsworth, along with Emilia Clarke as Talos's daughter G'iah, and Killian Scott as Gravik's second-in-command Pagon. In May 2021, Christopher McDonald joined the cast as newscaster Chris Stearns, a newly created character rather than one from the comics, who had the potential to appear in other MCU series and films.
Carmen Ejogo had joined the cast by November 2021 (although she ultimately did not appear in the series), and the next month, Smulders was set to reprise her MCU role as Maria Hill. In February 2022, set photos revealed that Don Cheadle would appear in his MCU role of James "Rhodey" Rhodes, along with Dermot Mulroney as United States President Ritson. The following month, Jackson confirmed that Martin Freeman and Cheadle would appear in the series, with Freeman reprising his MCU role as Everett K. Ross. In September 2022, it was revealed that Charlayne Woodard was cast in the series as Fury's Skrull wife Priscilla. Samuel Adewunmi and Katie Finneran were revealed as part of the cast in March 2023, with Adewunmi as the Skrull Beto and Finneran as the scientist Rosa Dalton. Richard Dormer appears as Agent Prescod, while O-T Fagbenle reprises his Black Widow (2021) role as Rick Mason. Design Sets and costumes Frank Walsh serves as production designer, while Claire Anderson serves as costume designer. In Secret Invasion, Fury does not wear his signature eyepatch, which Jackson noted was a character choice. He explained, "The patch is part of who the strong Nick Fury was. It's part of his vulnerability now. You can look at it and see he's not this perfectly indestructible person. He doesn't feel like that guy." Title sequence The opening title sequence was created by Method Studios using generative artificial intelligence, which prompted significant backlash online. Some commentators felt this was particularly poor timing given the series was released during the 2023 Writers Guild of America strike for which the use of artificial intelligence over real people was a key issue, with language about protecting writers against the use of AI in the writing process. 
Method Studios issued a statement in response to criticism stating that none of their artists had been replaced with artificial intelligence for the sequence and that the technology, both existing and custom-built for this series, was just one tool that their team used to achieve a specific final look. The statement elaborated that many elements in the sequence were created using traditional tools and techniques, and the artificial intelligence technology was just used to add an "otherworldly and alien look" which the creative team felt "perfectly aligned with the project's overall theme and the desired aesthetic". Storyboard artists and animators on the series expressed disappointment in the opening sequence being generated by AI. Filming Filming had begun by September 1, 2021, in London, under the working title Jambalaya, with Selim directing the series, and Remi Adefarasin serving as cinematographer. Filming was previously expected to begin in mid-August 2021. Jackson began filming his scenes on October 14, after already working on The Marvels which was filming in London at the same time. Filming occurred in West Yorkshire, England, including Leeds on January 22, Huddersfield on January 24, and in Halifax at Piece Hall from January 24 to 31, 2022. Filming occurred at the Liverpool Street station on February 28, 2022. Soundstage work occurred at Pinewood Studios on seven of its stages, as well as Hallmark House, and Versa Studios. Filming wrapped on April 25, 2022. Additional filming occurred in London's Brixton neighborhood, and was also expected to occur across Europe. In mid-2022, factions of the crew and the series' creative leaders experienced disagreements which "debilitated" the production. Jackson revealed in mid-June 2022 that he would return to London in August to work on reshoots for Secret Invasion, after doing the same for The Marvels. 
McDonald was returning to London by the end of July for the reshoots, which he said were to make the series "better" and to go "much deeper than before". He also indicated that a new writer was brought on to the production to work on the additional material. Jackson completed his reshoots by August 12, 2022, while Clarke filmed scenes in London at the end of September. By early September, many crew members on the series had been replaced, while co-executive producer Chris Gary, the Marvel Studios Production and Development executive overseeing the series, was reassigned and expected to leave the studio when his contract expired at the end of 2023. Jonathan Schwartz, a senior Marvel Studios executive and a member of the Marvel Studios Parliament group, was dispatched to oversee the production. Bezucha also left the series during this time due to new scheduling conflicts with the reshoots. Jackson said because Selim became the director of all the series' episodes, it provided consistency for the cast and crew with the ideas and concepts and allowed Selim to make the series his way. Eben Bolter served as the cinematographer during additional photography which lasted for four months. Post-production Pete Beaudreau, Melissa Lawson Cheung, Drew Kilcoin, and James Stanger serve as editors, while Georgina Street serves as the visual effects producer and Aharon Bourland as the visual effects supervisor. Visual effects for the series were provided by Digital Domain, FuseFX, Luma Pictures, MARZ, One of Us, Zoic Studios, and Cantina Creative. Music In February 2023, Kris Bowers was revealed to be composing for the series, and was working on the score at that time. The series' main title track, "Nick Fury (Main Title Theme)", was released digitally as a single by Marvel Music and Hollywood Records on June 20. Marketing The first footage of the series debuted on Disney+ Day on November 12, 2021. More footage was shown in July 2022 at San Diego Comic-Con. Adam B. 
Vary of Variety said the footage had an "overall vibe... of paranoia and foreboding", believing the series would fit with the larger "anti-heroic thread" building in Phase Five of the MCU. The first trailer for the series debuted at the 2022 D23 Expo in September 2022. Polygon's Austen Goslin felt the trailer was "mostly a recap of the series' plot", while Vanity Fair's Anthony Breznican noted how Fury had both eyes and said he "appears to be done relying on others to help save the world". Tamera Jones from Collider felt the trailer was "action-packed with explosions and intrigue, giving off more of a spy vibe than a fun paranoid mystery". The second trailer debuted during Sunday Night Baseball on ESPN on April 2, 2023. Edidiong Mboho of Collider felt the trailer "evokes the thrill and excitement" like the first one and provided the "same sense of urgency and paranoia from the Skrull infiltration". Mboho lauded the trailer for featuring the all-star cast of the series "without giving too much away" of its plot. Dais Johnston of Inverse felt that every shot of the trailer provided a "flashy-but-gritty spy-fi story that swaps out the powers and wisecracks of past works for the ingenuity and strategy Nick Fury is known for". Sam Barsanti of The A.V. Club said the trailer featured "more of the physical and psychological toll that life in general has taken on Fury". In early June 2023, a viral marketing website was created for the series that featured a five-minute clip from the first episode and a new trailer for the series. The locked website was initially revealed through cryptic images tweeted on the series' official Twitter account, which included clues to form the password that allowed access to it. At San Diego Comic-Con in July 2023, a Skrull "invasion" occurred, with fans seeing or becoming Skrulls around the convention. Release A red carpet premiere event for Secret Invasion was held in Los Angeles at the El Capitan Theater on June 13, 2023. 
The series debuted on Disney+ on June 21, 2023, consisting of six episodes, and concluded on July 26, 2023. It was previously expected to be released in early 2023. It is the first series of Phase Five of the MCU. The first three episodes were made available on Hulu from July 21 to August 17, 2023, to promote the finale of the series. Reception Viewership Market research company Parrot Analytics, which looks at consumer engagement in consumer research, streaming, downloads, and on social media, reported that Secret Invasion was the most in-demand new show in the U.S. for the quarter from April 1 to June 30, 2023. It garnered 42.1 times the average series demand in its first 30 days. The series experienced higher initial demand spikes compared to other Marvel series on Disney+. Whip Media, which tracks viewership data for the more than 25 million worldwide users of its TV Time app, calculated that Secret Invasion was the seventh most-watched streaming original television series of 2023. According to the file-sharing news website TorrentFreak, Secret Invasion was the fifth most-watched pirated television series of 2023. Parrot Analytics reported that Secret Invasion was the third most in-demand streaming original of 2023, with 40 times the average demand for shows. Critical response The review aggregator website Rotten Tomatoes reported an approval rating of 52%, with an average score of 6.1/10, based on 197 reviews. The site's critics' consensus states: "A well-deserved showcase for Samuel L. Jackson, Secret Invasion steadies itself after a somewhat slow start by taking the MCU in a darker, more mature direction." Metacritic, which uses a weighted average, assigned the series a score of 63 out of 100 based on 24 critics, indicating "generally favorable reviews".
Richard Newby at Empire gave the series 4 out of 5 stars, feeling that it was "a riveting, tense drama that gifts its actors with weighty material and encourages its audience to look beyond the sheen of superheroism." Newby found the series had taken a "sharp turn" from the sense of comfort of previous MCU projects due to the depiction of mature themes, such as terrorism and torture. Eric Deggans of NPR praised the performance of Samuel L. Jackson and called the series an "antidote to superhero fatigue", writing, "By centering on an aging Nick Fury who is struggling to handle a crisis created by his own broken promises, we get a story focused much more on a flawed hero than some kind of super-person juggling computer-generated cars." Lucy Mangan of The Guardian gave the show a grade of 3 out of 5 stars, stating, "Some moments in Marvel's latest TV series remind you how utterly watchable brilliant actors are – despite this darker, more mature outing needing a tad more thought." Barry Hertz of The Globe and Mail said "The chases are slow, the explosions meh, the entire pace and tempo sluggish... The real folly of Secret Invasion is that it compels the best actors of any Marvel series so far to squirm while delivering soul-deadening expository dialogue." Accolades TVLine placed Secret Invasion third on their list of the 10 Worst Shows of 2023. Documentary special In February 2021, the documentary series Marvel Studios: Assembled was announced. The special on this series, "The Making of Secret Invasion", was released on Disney+ on September 20, 2023. Future In September 2022, Feige stated that Secret Invasion would lead into Armor Wars, with Cheadle set to reprise his role as Rhodes. The series was originally believed to tie in with the film The Marvels, in which Jackson reprises his role as Fury, but that film largely ignores the events of Secret Invasion. 
Matt Webb Mitovich at TVLine speculated that it likely was intended for The Marvels to be set before Secret Invasion, given that film had numerous previous release dates prior to Secret Invasion's premiere, though if so, that assumption "still leaves continuity issues all over the place". Notes References External links Official website at Marvel.com Secret Invasion at IMDb Secret Invasion on Disney+ The Invasion Has Begun viral marketing website (Archived July 21, 2023, at the Wayback Machine)
Wikipedia
The Pile is an 886.03 GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. It is composed of 22 smaller datasets, including 14 new ones. Creation Training LLMs requires sufficiently vast amounts of data that, before the introduction of the Pile, most data used for training LLMs was taken from the Common Crawl. However, LLMs trained on more diverse datasets are better able to handle a wider range of situations after training. The creation of the Pile was motivated by the need for a large enough dataset that contained data from a wide variety of sources and styles of writing. Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it. Contents and filtering Artificial intelligences do not learn all they can from data on the first pass, so it is common practice to train an AI on the same data more than once with each pass through the entire dataset referred to as an "epoch". Each of the 22 sub-datasets that make up the Pile was assigned a different number of epochs according to the perceived quality of the data. The table below shows the relative size of each of the 22 sub-datasets before and after being multiplied by the number of epochs. Numbers have been converted to GB, and asterisks are used to indicate the newly introduced datasets. EleutherAI chose the datasets to try to cover a wide range of topics and styles of writing, including academic writing, which models trained on other datasets were found to struggle with. All data used in the Pile was taken from publicly accessible sources. EleutherAI then filtered the dataset as a whole to remove duplicates. 
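The epoch weighting described above can be sketched in code. The sub-dataset names below are real Pile components, but the sizes and epoch counts are illustrative round numbers chosen for the example, not the actual figures from the Pile's documentation:

```python
# Illustrative sketch of how per-dataset epoch counts change the effective
# mix of a training corpus. Sizes and epoch counts are made up for the example.
subsets = {
    # name: (raw size in GB, epochs)
    "Pile-CC":   (200.0, 1.0),
    "PubMed":    (90.0,  2.0),
    "Books3":    (100.0, 1.5),
    "Wikipedia": (6.0,   3.0),
}

def effective_sizes(subsets):
    """Multiply each sub-dataset's raw size by its epoch count."""
    return {name: gb * epochs for name, (gb, epochs) in subsets.items()}

eff = effective_sizes(subsets)
total = sum(eff.values())
for name, gb in sorted(eff.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {gb:7.1f} GB  ({100 * gb / total:.1f}% of training mix)")
```

Weighting by epochs rather than duplicating files lets a curator boost higher-quality sources without storing extra copies of the data.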
Some sub-datasets were also filtered for quality control. Most notably, the Pile-CC is a modified version of the Common Crawl in which the data was filtered to remove parts that are not text, such as HTML formatting and links. Some potential sub-datasets were excluded for various reasons, such as the US Congressional Record, which was excluded due to its racist content. Within the sub-datasets that were included, individual documents were not filtered to remove non-English, biased, or profane text. It was also not filtered on the basis of consent, meaning that, for example, the Pile-CC has all of the same ethical issues as the Common Crawl itself. However, EleutherAI has documented the amount of bias (on the basis of gender, religion, and race) and profanity as well as the level of consent given for each of the sub-datasets, allowing an ethics-concerned researcher to use only those parts of the Pile that meet their own standards. Use The Pile was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation, Meta AI's Open Pre-trained Transformers, LLaMA, and Galactica, Stanford University's BioMedLM 2.7B, the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL, Yandex's YaLM 100B, and Apple's OpenELM. In addition to being used as a training dataset, the Pile can also be used as a benchmark to test models and score how well they perform on a variety of writing styles. DMCA takedown The Books3 component of the dataset contains copyrighted material compiled from Bibliotik, a pirate website. In July 2023, the Rights Alliance took copies of The Pile down through DMCA notices. Users responded by creating copies of The Pile with the offending content removed. See also List of chatbots
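The whole-dataset duplicate removal mentioned above can be illustrated with a minimal exact-match filter. The snippet hashes whitespace-normalized text and keeps only the first copy of each document; EleutherAI's actual pipeline is not described here, so this is a sketch of the general idea rather than their method:

```python
import hashlib

def dedupe(docs):
    """Drop exact duplicates, keeping the first occurrence of each document.

    Documents are compared by the SHA-256 of their whitespace-normalized
    text, so trivially reformatted copies collapse to one entry.
    """
    seen = set()
    unique = []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hello world.", "Hello   world.", "Something else."]
print(dedupe(docs))  # the reformatted copy of the first document is dropped
```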
NovoGen is a proprietary form of 3D printing technology that allows scientists to assemble living tissue cells into a desired pattern. When combined with an extracellular matrix, the cells can be arranged into complex structures, such as organs. Designed by Organovo, the NovoGen technology has been successfully integrated by Invetech with a production printer that is intended to help develop processes for tissue repair and organ development.
The Confederation of European Environmental Engineering Societies (CEEES) was created as a co-operative international organization for information exchange regarding environmental engineering between the various European societies in this field. The CEEES maintains an online public discussion forum for the interchange of information. The member societies of the CEEES As of 2012, these were the twelve member societies of the CEEES: Italy: Associazione Italia Tecnici Prove Ambientali (AITPA) France: Association pour le Développement des Sciences et Techniques de l'Environnement (ASTE) Belgium: Belgian Society of Mechanical and Environmental Engineering (BSMEE) Germany: Gesellschaft für Umweltsimulation (GUS) Finland: Finnish Society of Environmental Engineering (KOTEL) Czech Republic: National Association of Czech Environmental Engineers (NACEI) Austria: Österreichische Gesellschaft für Umweltsimulation (ÖGUS) Netherlands: PLatform Omgevings Technologie (PLOT) United Kingdom: Society of Environmental Engineers (SEE) Sweden: Swedish Environmental Engineering Society (SEES) Portugal: Sociedade Portuguesa de Simulacao Ambiental e Aveliaca de Riscos (SOPSAR) Switzerland: Swiss Society for Environmental Engineering (SSEE) Each member society successively holds the presidency and the secretariat for a period of two years. Technical Advisory Boards The CEEES has three major Technical Advisory Boards: Mechanical Environments: The aim of this board is to advance methodologies and technologies for quantifying, describing and simulating mechanical environmental conditions experienced by mechanical equipment during its useful life. Climatic and Atmospheric Pollution Effects: The aim of this board is the study of the climatic and atmospheric pollution effects on materials and mechanical equipment. Reliability and Environmental Stress Screening: The aim of this board is to study how the environment affects the reliability of equipment.
Publications These are some of the publications of the CEEES: A Bibliography on Transportation Environment, ISSN 1104-6341, published by the Swedish Packaging Research Institute (Packforsk) in 1994. Synthesis of an ESS-Survey at the European Level, ISSN 1104-6341, published by the Swiss Society for Environmental Engineering (SSEE) in 1998. List of Technical Documents Dedicated or Related to ESS, ISBN 91-974043-0-6, published by the Swiss Society for Environmental Engineering (SSEE) in 1998. Climatic and Air Pollution Effects on Material and Equipment, ISBN 978-3-9806167-2-0, published by Gesellschaft für Umweltsimulation (GUS) in 1999. Natural and Artificial Ageing of Polymers, 1st European Weathering Symposium, Prague. ISBN 3-9808382-5-0, published by Gesellschaft für Umweltsimulation (GUS) in 2004. Natural and Artificial Ageing of Polymers, 2nd European Weathering Symposium, Gothenburg. ISBN 3-9808382-9-3, published by Gesellschaft für Umweltsimulation (GUS) in 2005. Ultrafine Particles – Key in the Issue of Particulate Matter?, 18th European Federation of Clean Air (EFCA) International Symposium, published by the Karlsruhe Research Center (Forschungszentrum Karlsruhe FZK) in 2007. Natural and Artificial Ageing of Polymers, 3rd European Weathering Symposium, Kraków. ISBN 978-3-9810472-3-3, published by GUS in 2005. Reliability - For A Mature Product From The Beginning Of Useful Life. The Different Type Of Tests And Their Impact On Product Reliability. ISSN 1104-6341, published online by CEEES in 2009. See also European Environment Agency Environment Agency Ministry of Housing, Spatial Planning and the Environment (Netherlands) Environmental technology Environmental science Coordination of Information on the Environment External links Official website ASTE website Archived 2021-05-11 at the Wayback Machine BSMEE website CEEES website GUS website KOTEL website ÖGUS website PLOT website SEE website SEES website SOPSAR website SSEE website
In engineering, the mass transfer coefficient is a diffusion rate constant that relates the mass transfer rate, mass transfer area, and concentration change as driving force: k_c = ṅ_A / (A·Δc_A), where k_c is the mass transfer coefficient [mol/(s·m²)/(mol/m³)], or m/s; ṅ_A is the mass transfer rate [mol/s]; A is the effective mass transfer area [m²]; and Δc_A is the driving force concentration difference [mol/m³]. This can be used to quantify the mass transfer between phases, immiscible and partially miscible fluid mixtures (or between a fluid and a porous solid). Quantifying mass transfer allows for design and manufacture of separation process equipment that can meet specified requirements, estimate what will happen in real life situations (chemical spill), etc. Mass transfer coefficients can be estimated from many different theoretical equations, correlations, and analogies that are functions of material properties, intensive properties and flow regime (laminar or turbulent flow). Selection of the most applicable model is dependent on the materials and the system, or environment, being studied. Mass transfer coefficient units (mol/s)/(m²·mol/m³) = m/s Note, the units will vary based upon which units the driving force is expressed in. The driving force shown here as Δc_A is expressed in units of moles per unit of volume, but in some cases the driving force is represented by other measures of concentration with different units. For example, the driving force may be partial pressures when dealing with mass transfer in a gas phase and thus use units of pressure. See also Mass transfer Separation process Sieving coefficient
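The defining relation above can be applied directly in code; the numbers in the example are illustrative values chosen for the sketch, not data for any particular system:

```python
def mass_transfer_coefficient(n_dot_a, area, delta_c):
    """k_c = n_dot_A / (A * delta_c_A)

    n_dot_a : mass transfer rate [mol/s]
    area    : effective mass transfer area [m^2]
    delta_c : driving-force concentration difference [mol/m^3]
    Returns k_c in m/s.
    """
    return n_dot_a / (area * delta_c)

# Illustrative numbers: 0.02 mol/s transferred across 0.5 m^2 with a
# 40 mol/m^3 concentration difference.
k_c = mass_transfer_coefficient(n_dot_a=0.02, area=0.5, delta_c=40.0)
print(f"k_c = {k_c:.1e} m/s")  # prints "k_c = 1.0e-03 m/s"
```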
A mathemagician is a mathematician who is also a magician. The term "mathemagic" is believed to have been introduced by Royal Vale Heath with his 1933 book "Mathemagic". The name "mathemagician" was probably first applied to Martin Gardner, but has since been used to describe many mathematician/magicians, including Arthur T. Benjamin, Persi Diaconis, and Colm Mulcahy. Diaconis has suggested that the reason so many mathematicians are magicians is that "inventing a magic trick and inventing a theorem are very similar activities." Mathemagician is a neologism, specifically a portmanteau, that combines mathematician and magician. A great number of self-working mentalism tricks rely on mathematical principles, such as Gilbreath's principle. Max Maven often utilizes this type of magic in his performance. The Mathemagician is the name of a character in the 1961 children's book The Phantom Tollbooth. He is the ruler of Digitopolis, the kingdom of mathematics. Notable mathemagicians Jin Akiyama Arthur T. Benjamin Persi Diaconis Alex Elmsley Richard Feynman Karl Fulves Martin Gardner Norman Laurence Gilbreath Ronald Graham Vi Hart Royal Vale Heath Colm Mulcahy W. W. Rouse Ball Raymond Smullyan References Further reading Diaconis, Persi & Graham, Ron. Magical Mathematics: The Mathematical Ideas That Animate Great Magic Tricks Princeton University Press, 2012. ISBN 0691169772 Fulves, Karl. Self-working Number Magic, New York London : Dover Constable, 1983. ISBN 0486243915 Gardner, Martin. Mathematics, Magic and Mystery, Dover, 1956. ISBN 0-486-20335-2 Graham, Ron. Juggling Mathematics and Magic University of California, San Diego Teixeira, Ricardo & Park, Jang Woo. Mathemagics: A Magical Journey Through Advanced Mathematics, Connecting More Than 60 Magic Tricks to High-Level Math World Scientific, 2020. ISBN 978-9811215308.
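The Gilbreath principle mentioned above is easy to verify by simulation. The sketch below models the first Gilbreath principle: deal part of an alternating red/black deck into a second pile (dealing reverses that packet), riffle the two packets together in an arbitrary order-preserving way, and every consecutive pair from the top then contains one red and one black card:

```python
import random

def gilbreath_shuffle(deck, rng):
    """Deal some cards off the top into a second pile (reversing them),
    then riffle the two piles together, preserving each pile's order."""
    cut = rng.randrange(1, len(deck))
    dealt = deck[:cut][::-1]   # dealing one card at a time reverses the packet
    rest = deck[cut:]
    merged = []
    while dealt or rest:
        src = dealt if (not rest or (dealt and rng.random() < 0.5)) else rest
        merged.append(src.pop(0))
    return merged

rng = random.Random(0)
deck = ["R", "B"] * 26         # alternating red/black, top to bottom
for _ in range(1000):
    shuffled = gilbreath_shuffle(deck, rng)
    # First Gilbreath principle: every consecutive pair from the top
    # contains exactly one red and one black card.
    assert all({shuffled[i], shuffled[i + 1]} == {"R", "B"}
               for i in range(0, len(shuffled), 2))
print("Gilbreath principle held for 1000 random shuffles")
```

The property holds no matter where the spectator cuts or how the packets interleave, which is why tricks built on it are "self-working".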
Recoil is a rheological phenomenon observed only in non-Newtonian fluids that is characterized by a moving fluid's ability to snap back to a previous position when external forces are removed. Recoil is a result of the fluid's elasticity and memory where the speed and acceleration by which the fluid moves depends on the molecular structure and the location to which it returns depends on the conformational entropy. This effect is observed in numerous non-Newtonian liquids to a small degree, but is prominent in some materials such as molten polymers. Memory The degree to which a fluid will “remember” where it came from depends on the entropy. Viscoelastic properties in fluids cause them to snap back to entropically favorable conformations. Recoil is observed when a favorable conformation is in the fluid's recent past. However, the fluid cannot fully return to its original position due to energy losses stemming from less than perfect elasticity. Recoiling fluids display fading memory meaning the longer a fluid is elongated, the less it will recover. Recoil is related to characteristic time, an estimate of the order of magnitude of reaction for the system. Fluids that are described as recoiling generally have characteristic times on the order of a few seconds. Although recoiling fluids usually recover relatively small distances, some molten polymers can recover back to 1/10 of the total elongation. This property of polymers must be accounted for in polymer processing. Demonstrations of Recoil When a spinning rod is placed in a polymer solution, elastic forces generated by the rotation motion cause fluid to climb up the rod (a phenomenon known as the Weissenberg effect). If the torque being applied is immediately brought to a stop, the fluid recoils down the rod. When a viscoelastic fluid being poured from a beaker is quickly cut with a pair of scissors, the fluid recoils back into the beaker. 
When fluid at rest in a circular tube is subjected to a pressure drop, a parabolic flow distribution is observed that pulls the liquid down the tube. Immediately after the pressure is alleviated, the fluid recoils backward in the tube and forms a more blunt flow profile. When Silly Putty is rapidly stretched and held at an elongated position for a short period of time, it springs back. However, if it is held at an elongated position for a longer period of time, there is very little recovery and no visible recoil.
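The fading-memory behaviour described above (the longer the fluid is held elongated, the less it recovers) can be caricatured with a single-relaxation-time sketch. This is purely illustrative: the exponential form and the numbers are assumptions for the sketch, not material properties from the text.

```python
from math import exp

# Toy fading-memory model: fraction of elongation recovered decays with
# hold time over a characteristic time lam (assumed ~2 s, i.e. "order of
# a few seconds" as the text notes; both value and form are assumptions).
lam = 2.0

def recovered_fraction(t_hold: float) -> float:
    """Fraction of the imposed elongation recovered after holding for t_hold seconds."""
    return exp(-t_hold / lam)

for t in (0.5, 2.0, 10.0):
    print(f"held {t:4.1f} s -> recovers {recovered_fraction(t):.2f} of elongation")
```

A short hold leaves most of the favorable conformation in recent memory (large recovery); a long hold lets the memory fade, matching the Silly Putty observation above.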
Juergen Pirner (born 1956) is the German creator of Jabberwock, a chatterbot that won the 2003 Loebner Prize. Pirner modelled Jabberwock on the monster in Lewis Carroll's poem "Jabberwocky". Initially, Jabberwock would give only rude or fantasy-related answers, but over the years Pirner has programmed better responses into it. As of 2007 he had taught it 2.7 million responses. Pirner lives in Hamburg, Germany. References External links Talk to Jabberwock
A nuclear clock or nuclear optical clock is an atomic clock being developed that will use the energy of a nuclear isomeric transition as its reference frequency, instead of the atomic electron transition energy used by conventional atomic clocks. Such a clock is expected to be more accurate than the best current atomic clocks by a factor of about 10, with an achievable accuracy approaching the 10−19 level. The only nuclear state suitable for the development of a nuclear clock using existing technology is thorium-229m, an isomer of thorium-229 and the lowest-energy nuclear isomer known. With an energy of 8.355733554021(8) eV, this corresponds to a frequency of 2020407384335±2 kHz, or wavelength of 148.382182883 nm, in the vacuum ultraviolet region, making it accessible to laser excitation. Principle of operation Atomic clocks are today's most accurate timekeeping devices. They operate by exploiting the fact that the gap between the energy levels of two bound electron states in an atom is constant across space and time. A bound electron can be excited with electromagnetic radiation precisely when the radiation's photon energy matches the energy of the transition. Via the Planck relation, that transition energy corresponds to a particular frequency. By irradiating an appropriately prepared collection of identical atoms and measuring the number of excitations induced, a light source's frequency can be tuned to maximize this response and therefore closely match the corresponding electron transition energy. The transition energy thus provides a standard of reference which can be used to calibrate such a source reliably. 
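The energy, frequency, and wavelength quoted above can be cross-checked for mutual consistency via the Planck relation E = hf and λ = c/f. A quick sketch, using the exact SI values of the constants:

```python
# Consistency check of the quoted 229mTh clock-transition values.
h = 6.62607015e-34   # Planck constant, J*s (exact in SI)
e = 1.602176634e-19  # elementary charge, C (exact in SI)
c = 299792458.0      # speed of light, m/s (exact in SI)

E_eV = 8.355733554021        # transition energy quoted in the text, eV
f_Hz = E_eV * e / h          # Planck relation: E = h*f
lam_nm = 1e9 * c / f_Hz      # wavelength: lambda = c/f

print(f"f      ~ {f_Hz / 1e3:.0f} kHz")    # ~2020407384335 kHz
print(f"lambda ~ {lam_nm:.9f} nm")         # ~148.382182883 nm
print(f"2 kHz / f ~ {2e3 / f_Hz:.1e}")     # relative uncertainty ~1e-12
```

All three quoted figures agree term by term, as they must, since they are the same measurement expressed in different units.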
Conventional atomic clocks use microwave (high-frequency radio wave) frequencies, but the development of the laser has made it possible to generate very stable light frequencies, and the frequency comb makes it possible to count those oscillations (measured in hundreds of THz, meaning hundreds of trillions of cycles per second) to extraordinarily high accuracy. A device which uses a laser in this way is known as an optical atomic clock. One prominent example of an optical atomic clock is the ytterbium (Yb) lattice clock, where a particular electron transition in the ytterbium-171 isotope is used for laser stabilization. In this case, one second has elapsed after 518295836590863.63±0.1 oscillations of the laser light stabilized to the corresponding electron transition. Other examples of optical atomic clocks of the highest accuracy are the Yb-171 single-ion clock, the strontium-87 (Sr) optical lattice clock, and the aluminum-27 (Al) single-ion clock. The achieved accuracies of these clocks vary around 10−18, corresponding to about 1 second of inaccuracy in 30 billion years, significantly longer than the age of the universe. A nuclear optical clock would use the same principle of operation, with the important difference that a nuclear transition instead of an atomic shell electron transition is used for laser stabilization. The expected advantage of a nuclear clock is that the atomic nucleus is smaller than the atomic shell by up to five orders of magnitude, with correspondingly smaller magnetic dipole and electric quadrupole moments, and is therefore significantly less affected by external magnetic and electric fields. Such external perturbations are the limiting factor for the achieved accuracies of electron-based atomic clocks. Due to this conceptual advantage, a nuclear optical clock is expected to achieve a time accuracy approaching 10−19, a ten-fold improvement over electron-based clocks. 
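The quoted figure of about one second of inaccuracy in 30 billion years follows from simple arithmetic: accumulated error is the fractional inaccuracy times the elapsed time. A sketch:

```python
# Back-of-envelope check of "10^-18 accuracy ~ 1 s in 30 billion years".
year_s = 365.25 * 86400            # Julian year in seconds
drift_s = 1e-18 * (30e9 * year_s)  # fractional error x elapsed time
print(f"{drift_s:.2f} s")          # ~0.95 s, i.e. about one second
```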
Ionization An excited atomic nucleus can shed its excess energy by two alternative paths: radiatively, by direct photon (gamma ray) emission, or by internal conversion, transferring the energy to a shell electron which is ejected from the atom. For most nuclear isomers, the available energy is sufficient to eject any electron, and the inner-shell electrons are the most frequently ejected. In the special case of 229mTh, the energy is sufficient only to eject an outer electron (thorium's first ionization energy is 6.3 eV), and if the atom is already ionized, there is not enough energy to eject a second (thorium's second ionization energy is 11.5 eV). The two decay paths have different half-lives. Neutral 229mTh decays almost exclusively by internal conversion, with a half-life of 7±1 μs. In thorium cations, internal conversion is energetically prohibited, and 229mTh+ is forced to take the slower path, decaying radiatively with a half-life of around half an hour. Thus, in the typical case that the clock is designed to measure radiated photons, it is necessary to hold the thorium in an ionized state. This can be done in an ion trap, or by embedding it in an ionic crystal with a band gap greater than the transition energy. In this case, the atoms are not 100% ionized, and a small amount of internal conversion is possible (reducing the half-life to approximately 10 minutes), but the loss is tolerable. Different nuclear clock concepts Two different concepts for nuclear optical clocks have been discussed in the literature: trap-based nuclear clocks and solid-state nuclear clocks. Trap-based nuclear clocks For a trap-based nuclear clock either a single 229Th3+ ion is trapped in a Paul trap, known as the single-ion nuclear clock, or a chain of multiple ions is trapped, considered as the multiple-ion nuclear clock. Such clocks are expected to achieve the highest time accuracy, as the ions are to a large extent isolated from their environment. 
A multiple-ion nuclear clock could have a significant advantage over the single-ion nuclear clock in terms of stability performance. Solid-state nuclear clocks As the nucleus is largely unaffected by the atomic shell, it is also intriguing to embed many nuclei into a crystal lattice environment. This concept is known as the crystal-lattice nuclear clock. Due to the high density of embedded nuclei of up to 10^18 per cm^3, this concept would allow irradiating a huge number of nuclei in parallel, thereby drastically increasing the achievable signal-to-noise ratio, but at the cost of potentially higher external perturbations. It has also been proposed to irradiate a metallic 229Th surface and to probe the isomer's excitation in the internal conversion channel, which is known as the internal-conversion nuclear clock. Both types of solid-state nuclear clocks were shown to offer the potential for comparable performance. Transition requirements From the principle of operation of a nuclear optical clock, it is evident that direct laser excitation of a nuclear state is a central requirement for the development of such a clock. This is impossible for most nuclear transitions, as the typical energy range of nuclear transitions (keV to MeV) is orders of magnitude above the maximum energy which is accessible with significant intensity by today's narrow-bandwidth laser technology (a few eV). There are only two nuclear excited states known which possess a sufficiently low excitation energy (below 100 eV). These are 229mTh, a metastable nuclear excited state of the isotope thorium-229 with an excitation energy of only about 8 eV, and 235m1U, a metastable excited state of uranium-235 with an energy of 76.7 eV. However, 235m1U has such an extraordinarily long radiative half-life (on the order of 10^22 s, 20,000 times the age of the universe, and far longer than its internal conversion half-life of 26 minutes) that it is not practical to use for a clock. 
This leaves only 229mTh with a realistic chance of direct nuclear laser excitation. Further requirements for the development of a nuclear clock are that the lifetime of the nuclear excited state is relatively long, thereby leading to a resonance of narrow bandwidth (a high quality factor) and the ground-state nucleus is easily available and sufficiently long-lived to allow one to work with moderate quantities of the material. Fortunately, with 229mTh+ having a radiative half-life (time to decay to 229Th+) of around 103 s, and 229Th having a half-life (time to decay to 225Ra) of 7917±48 years, both conditions are fulfilled for 229mTh+, making it an ideal candidate for the development of a nuclear clock. History History of nuclear clocks As early as 1996 it was proposed by Eugene V. Tkalya to use the nuclear excitation as a "highly stable source of light for metrology". With the development (around 2000) of the frequency comb for measuring optical frequencies exactly, a nuclear optical clock based on 229mTh was first proposed in 2003 by Ekkehard Peik and Christian Tamm, who developed an idea of Uwe Sterr. The paper contains both concepts, the single-ion nuclear clock, as well as the solid-state nuclear clock. In their pioneering work, Peik and Tamm proposed to use individual laser-cooled 229Th3+ ions in a Paul trap to perform nuclear laser spectroscopy. Here the 3+ charge state is advantageous, as it possesses a shell structure suitable for direct laser cooling. It was further proposed to excite an electronic shell state, to achieve 'good' quantum numbers of the total system of the shell plus nucleus that will lead to a reduction of the influence induced by external perturbing fields. A central idea is to probe the successful laser excitation of the nuclear state via the hyperfine-structure shift induced into the electronic shell due to the different nuclear spins of ground- and excited state. This method is known as the double-resonance method. 
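The "high quality factor" mentioned above can be roughly estimated from the numbers in this article. This is a sketch under the assumption of a purely lifetime-limited linewidth (Δf = 1/2πτ, with mean lifetime τ = T½/ln 2); real clock linewidths depend on the interrogation scheme:

```python
from math import log, pi

f = 2.020407384335e15         # transition frequency, Hz (from the text)
t_half = 1e3                  # ~radiative half-life of 229mTh+, s (from the text)
tau = t_half / log(2)         # mean lifetime from half-life
delta_f = 1 / (2 * pi * tau)  # natural (lifetime-limited) linewidth, Hz
Q = f / delta_f               # quality factor f / delta_f

print(f"linewidth ~ {delta_f:.1e} Hz, Q ~ {Q:.1e}")  # Q on the order of 1e19
```

A sub-millihertz natural linewidth on a petahertz carrier is what makes the transition such an attractive frequency reference.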
The expected performance of a single-ion nuclear clock was further investigated in 2012 by Corey Campbell et al. with the result that a systematic frequency uncertainty (accuracy) of the clock of 1.5×10−19 could be achieved, about an order of magnitude better than the accuracy achieved by the best optical atomic clocks today. The nuclear clock approach proposed by Campbell et al. slightly differs from the original one proposed by Peik and Tamm. Instead of exciting an electronic shell state in order to obtain the highest insensitivity against external perturbing fields, the nuclear clock proposed by Campbell et al. uses a stretched pair of nuclear hyperfine states in the electronic ground-state configuration, which appears to be advantageous in terms of the achievable quality factor and an improved suppression of the quadratic Zeeman shift. In 2010, Eugene V. Tkalya showed that it was theoretically possible to use 229mTh as a lasing medium to generate an ultraviolet laser. The solid-state nuclear clock approach was further developed in 2010 by W.G. Rellergert et al. with the result of an expected long-term accuracy of about 2×10−16. Although expected to be less accurate than the single-ion nuclear clock approach due to line-broadening effects and temperature shifts in the crystal lattice environment, this approach may have advantages in terms of compactness, robustness and power consumption. The expected stability performance was investigated by G. Kazakov et al. in 2012. In 2020, the development of an internal conversion nuclear clock was proposed. Important steps on the road towards a nuclear clock include the successful direct laser cooling of 229Th3+ ions in a Paul trap achieved in 2011, and a first detection of the isomer-induced hyperfine-structure shift, enabling the double-resonance method to probe a successful nuclear excitation in 2018. 
History of 229mTh Since 1976, the 229Th nucleus has been known to possess a low-energy excited state, whose excitation energy was originally shown to be less than 100 eV, and then shown to be less than 10 eV in 1990. This was, however, too broad an energy range to apply high-resolution spectroscopy techniques; the transition energy had to be narrowed down first. Initial efforts used the fact that, after the alpha decay of 233U, the resultant 229Th nucleus is in an excited state and promptly emits a gamma ray to decay to either the ground state or the metastable state. Measuring the small difference in the gamma-ray energies emitted in these processes allows the metastable-state energy to be found by subtraction. However, nuclear experiments are not capable of finely measuring the difference in frequency between two high gamma-ray energies, so other experiments were needed. Because of the natural radioactive decay of 229Th nuclei, a tightly concentrated laser frequency was required to excite enough nuclei in an experiment to outcompete the background radiation and give a more accurate measurement of the excitation energy. Because it was infeasible to scan the entire 100 eV range, an estimate of the correct frequency was needed. An early misstep was the (incorrect) measurement of the energy value as 3.5±1.0 eV in 1994. This frequency of light is relatively easy to work with, so many direct detection experiments were attempted which had no hope of success, because they were built of materials opaque to photons at the true, higher energy. 
In particular:
thorium oxide is transparent to 3.5 eV photons but opaque at 8.3 eV;
common optical lens and window materials such as fused quartz are opaque at energies above 8 eV;
molecular oxygen (air) is opaque to photons above 6.2 eV, so experiments must be conducted in a nitrogen or argon atmosphere; and
the ionization energy of thorium is 6.3 eV, so the nuclei will decay by internal conversion unless prevented (see § Ionization).
The energy value remained elusive until 2003, when the nuclear clock proposal triggered a multitude of experimental efforts to pin down the excited state's parameters, such as its energy and half-life. The detection of light emitted in the direct decay of 229mTh would significantly help to determine its energy to higher precision, but all efforts to observe this light failed. The energy level was corrected to 7.6±0.5 eV in 2007 (slightly revised to 7.8±0.5 eV in 2009). Subsequent experiments continued to fail to observe any signal of light emitted in the direct decay, leading people to suspect the existence of a strong non-radiative decay channel. The detection of light emitted by the decay of 229mTh was reported in 2012, and again in 2018, but the observed signals were the subject of controversy within the community. A direct detection of electrons emitted in the isomer's internal conversion decay channel was achieved in 2016. This detection laid the foundation for the determination of the 229mTh half-life in neutral, surface-bound atoms in 2017 and a first laser-spectroscopic characterization in 2018. In 2019, the isomer's energy was measured via the detection of internal conversion electrons emitted in its direct ground-state decay to be 8.28±0.17 eV. A first successful excitation of the 29 keV nuclear excited state of 229Th via synchrotron radiation was also reported, enabling a clock transition energy measurement of 8.30±0.92 eV. 
In 2020, an energy of 8.10±0.17 eV was obtained from precision gamma-ray spectroscopy. Finally, precise measurements were achieved in 2023 by unambiguous detection of the emitted photons (8.338(24) eV) and in April 2024 by two reports of excitation with a tunable laser at 8.355733(10) eV and 8.35574(3) eV. The light frequency is now known with sufficient accuracy to enable future construction of a prototype clock and determination of the transition's exact frequency and stability. Precision frequency measurements began immediately, with Jun Ye's laboratory at JILA making a direct comparison to a 87Sr optical atomic clock. Published in September 2024, the frequency was measured as 2020407384335±2 kHz, a relative uncertainty of 10−12. This implies a wavelength of 148.3821828827(15) nm and an energy of 8.355733554021(8) eV. The work also resolved different nuclear quadrupole sublevels and measured the ratio of the ground- and excited-state nuclear quadrupole moments. Improvements will surely follow. Applications When operational, a nuclear optical clock is expected to be applicable in various fields. In addition to the capabilities of today's atomic clocks, such as satellite-based navigation or data transfer, its high precision will allow new applications inaccessible to other atomic clocks, such as relativistic geodesy, the search for topological dark matter, or the determination of time variations of fundamental constants. A nuclear clock has the potential to be particularly sensitive to possible time variations of the fine-structure constant. The central idea is that the low transition energy is due to a fortuitous cancellation between strong nuclear and electromagnetic effects within the nucleus which are individually much stronger. Any variation in the fine-structure constant would affect the electromagnetic half of this balance, resulting in a proportionally very large change in the total transition energy. 
A change of even one part in 10^18 could be detected by comparison with a conventional atomic clock (whose frequency would also be altered, but not nearly as much), so this measurement would be extraordinarily sensitive to any potential variation of the constant. Recent measurements and analysis are consistent with enhancement factors on the order of 10^4. References Further reading "The 229Th isomer: prospects for a nuclear optical clock" (November 2020) European Physical Journal A. External links EU thorium nuclear clock (nuClock) project
Shaft voltage occurs in electric motors and generators due to leakage, induction, or capacitive coupling with the windings of the motor. It can occur in motors powered by variable-frequency drives, as often used in heating, ventilation, air conditioning and refrigeration systems. DC machines may have leakage current from the armature windings that energizes the shaft. Currents due to shaft voltage cause deterioration of motor bearings, but can be prevented with a grounding brush on the shaft, grounding of the motor frame, insulation of the bearing supports, or shielding. Shaft voltage can be induced by non-symmetrical magnetic fields of the motor (or generator) itself. External sources of shaft voltage include other coupled machines, and electrostatic charging due to rubber belts rubbing on drive pulleys. Every rotor has some degree of capacitive coupling to the motor's electrical windings, but the effective inline capacitor acts as a high-pass filter, so the coupling is often weak at 50–60 Hz line frequency. But many variable-frequency drives (VFDs) induce significant voltage onto the shaft of the driven motor, because of the kilohertz switching of the insulated gate bipolar transistors (IGBTs), which produce the pulse-width modulation used to control the motor. The presence of high frequency ground currents can cause sparks, arcing and electrical shocks and can damage bearings. Counter-measures Techniques used to minimise this problem include: insulation, alternate discharge paths, Faraday shield, insulated bearings, ceramic bearings, grounding brush and shaft grounding ring. Faraday shield An electrostatic shielded induction motor (ESIM) is one approach to the shaft-voltage problem, as the insulation reduces voltage levels below the dielectric breakdown. This effectively stops bearing degradation and offers one solution to accelerated bearing wear caused by fluting, induced by pulse-width modulated (PWM) inverters. 
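The claim above that the rotor's capacitive coupling "acts as a high-pass filter" can be illustrated with a back-of-envelope impedance estimate, |Z| = 1/(2πfC). The capacitance value here is a hypothetical placeholder for the sketch; real winding-to-rotor capacitances depend on the machine:

```python
from math import pi

C = 100e-12  # assumed winding-to-rotor coupling capacitance, ~100 pF (hypothetical)

def z_cap(f_hz: float) -> float:
    """Magnitude of a capacitor's impedance at frequency f_hz."""
    return 1.0 / (2 * pi * f_hz * C)

print(f"|Z| at 60 Hz line frequency:  {z_cap(60) / 1e6:.1f} Mohm")   # ~26.5 Mohm
print(f"|Z| at 10 kHz IGBT switching: {z_cap(10e3) / 1e3:.0f} kohm") # ~159 kohm
```

The same capacitance that presents tens of megohms at line frequency presents only hundreds of kilohms at IGBT switching frequencies, which is why VFD-driven motors couple far more voltage onto the shaft.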
Grounding brush Grounding the shaft by installing a grounding brush device on either the non-drive end or drive end of a VFD electric motor provides an alternate low-impedance path from the motor shaft to the motor case. This method channels the current away from the bearings. It significantly reduces shaft voltage and therefore bearing current by not allowing voltage to build up on the rotor. Shaft grounding ring A shaft grounding ring is installed around the motor shaft and creates a low impedance pathway for current to flow back to the motor frame and to ground. Various styles of rings exist such as those containing microfilaments making direct contact with the shaft or rings that clamp onto the shaft with a carbon brush riding on the ring (not directly on the shaft). Insulated bearings Insulated bearings eliminate the path to ground through the bearing for current to flow. However, installing insulated bearings does not eliminate the shaft voltage, which will still find the lowest impedance path to ground. This can potentially cause a problem if the path happens to be through the driven load or through some other component. Shielded cable High frequency grounding can be significantly improved by installing shielded cable with an extremely low impedance path between the VFD and the motor. One popular cable type is continuous corrugated aluminum sheath cable. See also Stray voltage References External links "Technical guide No. 5 – Bearing currents in modern AC drive systems" (PDF). Archived from the original (PDF) on July 20, 2011. Retrieved May 23, 2011. A Unique System for Reducing High Frequency Stray Noise and Transient Common Mode Ground Currents to Zero, While Enhancing Other Ground Issues Meeting Notices and Rule Changes from Electrical Manufacturing and Coil Winding
The history of the programming language Scheme begins with the development of earlier members of the Lisp family of languages during the second half of the twentieth century. During the design and development period of Scheme, language designers Guy L. Steele and Gerald Jay Sussman released an influential series of Massachusetts Institute of Technology (MIT) AI Memos known as the Lambda Papers (1975–1980). This resulted in the growth of the language's popularity and an era of standardization from 1990 onward. Much of the history of Scheme has been documented by the developers themselves. Prehistory The development of Scheme was heavily influenced by two predecessors that were quite different from one another: Lisp provided its general semantics and syntax, and ALGOL provided its lexical scope and block structure. Scheme is a dialect of Lisp but Lisp has evolved; the Lisp dialects from which Scheme evolved—although they were in the mainstream at the time—are quite different from any modern Lisp. Lisp Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I" (Part II was never published). He showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms. The use of s-expressions which characterize the syntax of Lisp was initially intended to be an interim measure pending the development of a language employing what McCarthy called "m-expressions". As an example, the m-expression car[cons[A,B]] is equivalent to the s-expression (car (cons A B)). S-expressions proved popular, however, and the many attempts to implement m-expressions failed to catch on. 
The first implementation of Lisp was on an IBM 704 by Steve Russell, who read McCarthy's paper and coded the eval function he described in machine code. The familiar (but puzzling to newcomers) names CAR and CDR used in Lisp to describe the head element of a list and its tail, evolved from two IBM 704 assembly language commands: Contents of Address Register and Contents of Decrement Register, each of which returned the contents of a 15-bit register corresponding to segments of a 36-bit IBM 704 instruction word. The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT. This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The two variants of Lisp most significant in the development of Scheme were both developed at MIT: LISP 1.5 developed by McCarthy and others, and Maclisp – developed for MIT's Project MAC, a direct descendant of LISP 1.5. which ran on the PDP-10 and Multics systems. Since its inception, Lisp was closely connected with the artificial intelligence (AI) research community, especially on PDP-10. The 36-bit word size of the PDP-6 and PDP-10 was influenced by the usefulness of having two Lisp 18-bit pointers in one word. ALGOL ALGOL 58, originally to be called IAL for "International Algorithmic Language", was developed jointly by a committee of European and American computer scientists in a meeting in 1958 at ETH Zurich. ALGOL 60, a later revision developed at the ALGOL 60 meeting in Paris and now commonly named ALGOL, became the standard for the publication of algorithms and had a profound effect on future language development, despite the language's lack of commercial success and its limitations. Tony Hoare has remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors." ALGOL introduced the use of block structure and lexical scope. 
It was also notorious for its difficult call-by-name default parameter-passing mechanism, which was defined so as to require textual substitution of the actual argument expression in place of the formal parameter, causing the argument to be re-evaluated each time it is referenced during execution of a procedure or function. ALGOL implementors developed a mechanism they called a thunk, which captured the context of the argument expression, enabling it to be evaluated on demand during execution of the procedure or function. Carl Hewitt, the Actor model, and the birth of Scheme In 1971 Sussman, Drew McDermott, and Eugene Charniak had developed a system called Micro-Planner which was a partial and somewhat unsatisfactory implementation of Carl Hewitt's ambitious Planner project. Sussman and Hewitt worked together along with others on Muddle, later renamed MDL, an extended Lisp which formed a component of Hewitt's project. In 1972, Drew McDermott and Sussman developed the Lisp-based language Conniver, which revised Planner's use of automatic backtracking, which they thought was unproductive. Hewitt was dubious that the "hairy control structure" in Conniver was a solution to the problems with Planner. Pat Hayes remarked: "Their [Sussman and McDermott] solution, to give the user access to the implementation primitives of Planner, is however, something of a retrograde step (what are Conniver's semantics?)" In November 1972, Hewitt and his students invented the Actor model of computation as a solution to the problems with Planner. A partial implementation of Actors was developed called Planner-73 (later called PLASMA). Steele, then a graduate student at MIT, had been following these developments, and he and Sussman decided to implement a version of the Actor model in their own "tiny Lisp" developed on Maclisp, to understand the model better. Using this basis they then began to develop mechanisms for creating actors and sending messages. 
PLASMA's use of lexical scope was similar to the lambda calculus. Sussman and Steele decided to try to model Actors in the lambda calculus. They called their modeling system Schemer, eventually changing it to Scheme to fit the six-character limit on the ITS file system on their DEC PDP-10. They soon concluded Actors were essentially closures that never return but instead invoke a continuation, and thus they decided that the closure and the Actor were, for the purposes of their investigation, essentially identical concepts. They eliminated what they regarded as redundant code and, at that point, discovered that they had written a very small and capable dialect of Lisp. Hewitt remained critical of the "hairy control structure" in Scheme and considered primitives (e.g., START!PROCESS, STOP!PROCESS, and EVALUATE!UNINTERRUPTIBLY) used in the Scheme implementation to be a backward step. 25 years later, in 1998, Sussman and Steele reflected that the minimalism of Scheme was not a conscious design goal, but rather the unintended outcome of the design process. "We were actually trying to build something complicated and discovered, serendipitously, that we had accidentally designed something that met all our goals but was much simpler than we had intended... we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language." On the other hand, Hewitt remained critical of the lambda calculus as a foundation for computation writing "The actual situation is that the λ-calculus is capable of expressing some kinds of sequential and parallel control structures but, in general, not the concurrency expressed in the Actor model. On the other hand, the Actor model is capable of expressing everything in the λ-calculus and more." He has also been critical of aspects of Scheme that derive from the lambda calculus such as reliance on continuation functions and the lack of exceptions. 
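The observation above — that these actors were essentially closures that never return but instead invoke a continuation — can be sketched in continuation-passing style. This is an illustrative Python stand-in; the counter example and all names here are invented for the sketch, not taken from Sussman and Steele's implementation:

```python
def make_counter(n):
    """A closure acting like a tiny actor: it never 'returns' a result;
    it answers each message by invoking the supplied continuation k."""
    def actor(message, k):
        if message == "inc":
            k(make_counter(n + 1))  # 'reply' with a new actor/closure
        elif message == "value":
            k(n)                    # 'reply' with the current count
    return actor

out = []
c = make_counter(0)
# Send inc, inc, value; each reply arrives via a continuation.
c("inc", lambda c2:
    c2("inc", lambda c3:
        c3("value", out.append)))
print(out[0])  # 2
```

Syntactically the actor and the closure are the same object; the only difference is the convention that results flow forward into continuations rather than back to a caller, which is exactly the identification Sussman and Steele made.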
The Lambda Papers
Between 1975 and 1980, Sussman and Steele worked on developing their ideas about using the lambda calculus, continuations and other advanced programming concepts such as optimization of tail recursion, and published them in a series of AI Memos which have become collectively termed the Lambda Papers.

List of papers
1975: Scheme: An Interpreter for Extended Lambda Calculus
1976: Lambda: The Ultimate Imperative
1976: Lambda: The Ultimate Declarative
1977: Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO
1978: The Art of the Interpreter or, the Modularity Complex (Parts Zero, One, and Two)
1978: RABBIT: A Compiler for SCHEME
1979: Design of LISP-based Processors, or SCHEME: A Dialect of LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode
1980: Compiler Optimization Based on Viewing LAMBDA as RENAME + GOTO
1980: Design of a Lisp-based Processor

Influence
Scheme was the first dialect of Lisp to choose lexical scope. It was also one of the first programming languages, after Reynolds's definitional language, to support first-class continuations. It had a large impact on the effort that led to the development of its sister language, Common Lisp, to which Guy Steele was a contributor.

Standardization
The Scheme language is standardized in the official Institute of Electrical and Electronics Engineers (IEEE) standard, and a de facto standard called the Revised^n Report on the Algorithmic Language Scheme (RnRS). The most widely implemented standard is R5RS (1998), and a new standard, R6RS, was ratified in 2007. Besides the RnRS standards, there are also Scheme Requests for Implementation documents, which contain additional libraries that may be added by Scheme implementations.

Timeline
In classical mechanics and kinematics, Galileo's law of odd numbers states that the distance covered by a falling object in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. This mathematical model is accurate if the body is not subject to any forces besides uniform gravity (for example, it is falling in a vacuum in a uniform gravitational field). This law was established by Galileo Galilei who was the first to make quantitative studies of free fall. Explanation Using a speed-time graph The graph in the figure is a plot of speed versus time. Distance covered is the area under the line. Each time interval is coloured differently. The distance covered in the second and subsequent intervals is the area of its trapezium, which can be subdivided into triangles as shown. As each triangle has the same base and height, they have the same area as the triangle in the first interval. It can be observed that every interval has two more triangles than the previous one. Since the first interval has one triangle, this leads to the odd numbers. Using the sum of first n odd numbers From the equation for uniform linear acceleration, the distance covered s = u t + 1 2 a t 2 {\displaystyle s=ut+{\tfrac {1}{2}}at^{2}} for initial speed u = 0 , {\displaystyle u=0,} constant acceleration a {\displaystyle a} (acceleration due to gravity without air resistance), and time elapsed t , {\displaystyle t,} it follows that the distance s {\displaystyle s} is proportional to t 2 {\displaystyle t^{2}} (in symbols, s ∝ t 2 {\displaystyle s\propto t^{2}} ), thus the distance from the starting point are consecutive squares for integer values of time elapsed. The middle figure in the diagram is a visual proof that the sum of the first n {\displaystyle n} odd numbers is n 2 . 
In equations, the fact that the pattern continues forever can be proven algebraically:

\sum_{k=1}^{n}(2k-1) = \frac{1}{2}\left(\sum_{k=1}^{n}(2k-1) + \sum_{k=1}^{n}(2(n-k+1)-1)\right) = \frac{1}{2}\sum_{k=1}^{n}(2(n+1)-1-1) = n^2.

To clarify this proof, note that the nth odd positive integer is m := 2n − 1. If S := \sum_{k=1}^{n}(2k-1) = 1 + 3 + \cdots + (m-2) + m denotes the sum of the first n odd integers, then writing the sum twice, once forwards and once backwards, pairs each term with its mirror image:

S + S = [1 + 3 + \cdots + (m-2) + m] + [m + (m-2) + \cdots + 3 + 1] = (m+1) + (m+1) + \cdots + (m+1) \quad (n \text{ terms}) = n(m+1),

so that S = \tfrac{1}{2}n(m+1). Substituting n = \tfrac{1}{2}(m+1) and m+1 = 2n gives, respectively, the formulas

1 + 3 + \cdots + m = \tfrac{1}{4}(m+1)^2 \quad\text{and}\quad 1 + 3 + \cdots + (2n-1) = n^2,

where the first formula expresses the sum entirely in terms of the odd integer m, while the second expresses it entirely in terms of n, which is m's ordinal position in the list of odd integers 1, 3, 5, ….
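The odd-number pattern and the square-sum identity above can be checked with a few lines of code. The acceleration value is arbitrary, since the ratios between interval distances do not depend on it; a = 2 is used here so that s(t) = t² and the arithmetic is exact:

```python
a = 2.0  # constant acceleration; chosen so that s(t) = t^2 exactly

def distance(t):
    # distance covered from rest after time t: s = (1/2) a t^2
    return 0.5 * a * t * t

# distances covered in successive unit time intervals, starting from rest
intervals = [distance(t + 1) - distance(t) for t in range(6)]
ratios = [d / intervals[0] for d in intervals]
print(ratios)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0] -- the odd numbers

# the running totals are the consecutive squares n^2,
# i.e. the sum of the first n odd numbers is n^2
for n in range(1, 7):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
```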
See also

Equations of motion – Equations that describe the behavior of a physical system
Square numbers – Product of an integer with itself
In statistics and research design, an index is a composite statistic: a measure of changes in a representative group of individual data points, or in other words, a compound measure that aggregates multiple indicators. Indices, also known as indexes and composite indicators, summarize and rank specific observations. Much data in the social sciences and in sustainability research is represented in indices such as the Gender Gap Index, the Human Development Index or the Dow Jones Industrial Average. The 'Report by the Commission on the Measurement of Economic Performance and Social Progress', written by Joseph Stiglitz, Amartya Sen, and Jean-Paul Fitoussi in 2009, suggests that these measures have experienced dramatic growth in recent years due to three concurring factors: improvements in the level of literacy (including statistical literacy); the increased complexity of modern societies and economies; and the widespread availability of information technology.

According to Earl Babbie, items in indices are usually weighted equally, unless there are reasons against it (for example, if two items reflect essentially the same aspect of a variable, they could have a weight of 0.5 each). According to the same author, constructing an index involves four steps. First, items should be selected based on their content validity, unidimensionality, the degree of specificity with which a dimension is to be measured, and their amount of variance. Items should be empirically related to one another, which leads to the second step: examining their multivariate relationships. Third, index scores are designed, which involves determining score ranges and weights for the items. Finally, indices should be validated, by testing whether they predict indicators related to the measured variable that were not used in their construction.

A handbook for the construction of composite indicators (CIs) was published jointly by the OECD and the European Commission's Joint Research Centre in 2008.
The handbook, officially endorsed by the OECD's high-level statistical committee, describes ten recursive steps for developing an index:

Step 1: Theoretical framework
Step 2: Data selection
Step 3: Imputation of missing data
Step 4: Multivariate analysis
Step 5: Normalisation
Step 6: Weighting
Step 7: Aggregating indicators
Step 8: Sensitivity analysis
Step 9: Link to other measures
Step 10: Visualisation

As the list suggests, many modelling choices are needed to construct a composite indicator, which makes their use controversial. The delicate issue of assigning and validating weights has been discussed at length in the literature. A sociological reading of the nature of composite indicators is offered by Paul-Marie Boulanger, who sees these measures at the intersection of three movements:

the democratisation of expertise: the idea that tackling societal and environmental issues requires more knowledge than experts alone can provide, a line of thought that connects to the concept of the extended peer community developed by post-normal science;
the impulse toward the creation of a new public through a process of social discovery, which can be reconnected to the work of pragmatists such as John Dewey;
the semiotics of Charles Sanders Peirce: a CI is thus not just a sign or a number, but suggests an action or a behaviour.

A subsequent work by Boulanger analyses composite indicators in light of the social system theories of Niklas Luhmann to investigate how different measurements of progress are or are not taken up.

See also

Index (economics)
Scale (social sciences)
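Steps 5 to 7 of the handbook (normalisation, weighting and aggregation) can be sketched in a few lines. The countries, indicator names and values below are invented for illustration only, and equal weights follow Babbie's default:

```python
# Hypothetical indicator values for three countries (illustrative numbers only).
data = {
    "A": {"literacy": 0.95, "gdp_per_capita": 52000, "life_expectancy": 81},
    "B": {"literacy": 0.88, "gdp_per_capita": 31000, "life_expectancy": 76},
    "C": {"literacy": 0.99, "gdp_per_capita": 64000, "life_expectancy": 84},
}

indicators = ["literacy", "gdp_per_capita", "life_expectancy"]

def min_max_normalise(values):
    # Step 5: rescale each indicator to [0, 1] so different units are comparable.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Step 6: equal weights, unless there is a reason to do otherwise.
weights = {ind: 1.0 / len(indicators) for ind in indicators}

# Step 7: linear aggregation of the normalised, weighted indicators.
columns = {ind: min_max_normalise([data[c][ind] for c in data]) for ind in indicators}
index = {
    country: sum(weights[ind] * columns[ind][i] for ind in indicators)
    for i, country in enumerate(data)
}
print(sorted(index, key=index.get, reverse=True))  # ['C', 'A', 'B'] -- ranking, best first
```

Real composite indicators would add the remaining steps, in particular imputation of missing data and a sensitivity analysis of the normalisation and weighting choices.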
Linguamatics, headquartered in Cambridge, England, with offices elsewhere in the UK and in the United States, is a provider of text mining systems through software licensing and services, primarily for pharmaceutical and healthcare applications. Founded in 2001, the company was purchased by IQVIA in January 2019.

Technology

The company develops enterprise search tools for the life sciences sector. The core natural language processing engine (I2E) uses a federated architecture to incorporate data from third-party resources. Initially developed to be used interactively through a graphical user interface, the core software also has an application programming interface that can be used to automate searches. LabKey, Penn Medicine, Atrius Health and Mercy all use Linguamatics software to extract electronic health record data into data warehouses. Linguamatics software is used by 17 of the top 20 global pharmaceutical companies and the US Food and Drug Administration, as well as healthcare providers.

Software community

The core software, I2E, is used by a number of companies either to extend their own software or to publish their data. Copyright Clearance Center uses I2E to produce searchable indexes of material that would otherwise be unsearchable due to copyright. Thomson Reuters produces Cortellis Informatics Clinical Text Analytics, which depends on I2E to make clinical data accessible and searchable. Pipeline Pilot can integrate I2E as part of a workflow. ChemAxon can be used alongside I2E to allow named entity recognition of chemicals within unstructured data. Data sources include MEDLINE, ClinicalTrials.gov, FDA Drug Labels, PubMed Central, and Patent Abstracts.

See also

List of academic databases and search engines
In mathematics and astrophysics, the Strömgren integral, introduced by Bengt Strömgren (1932, p. 123) while computing the Rosseland mean opacity, is the integral

\frac{15}{4\pi^4} \int_0^x \frac{t^7 e^{2t}}{(e^t - 1)^3} \, dt.

Cox (1964) discussed applications of the Strömgren integral in astrophysics, and MacLeod (1996) discussed how to compute it.

References

Cox, A. N. (1964), "Stellar absorption coefficients and opacities", in Aller, Lawrence Hugh; McLaughlin, Dean Benjamin (eds.), Stellar Structure, Stars and Stellar Systems: Compendium of Astronomy and Astrophysics, vol. VIII, Chicago, Ill.: University of Chicago Press, p. 195, ISBN 978-0-226-45969-1
MacLeod, Allan J. (1996), "Algorithm 757: MISCFUN, a software package to compute uncommon special functions", ACM Transactions on Mathematical Software, 22 (3), New York: ACM: 288–301, doi:10.1145/232826.232846
Strömgren, B. (1932), "The opacity of stellar matter and the hydrogen content of the stars", Zeitschrift für Astrophysik, 4: 118–152, Bibcode:1932ZA......4..118S
Strömgren, B. (1933), "On the Interpretation of the Hertzsprung-Russell-Diagram", Zeitschrift für Astrophysik, 7: 222, Bibcode:1933ZA......7..222S
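The integral is straightforward to evaluate numerically. MacLeod's MISCFUN package provides a careful implementation; the sketch below is not MacLeod's algorithm, just ordinary composite Simpson's rule. The integrand is rewritten as t⁷e⁻ᵗ/(1 − e⁻ᵗ)³ to avoid overflow for large t, and takes its limit value 0 at t = 0 (near zero it behaves like t⁴):

```python
import math

def integrand(t):
    # t^7 e^{2t} / (e^t - 1)^3, rewritten as t^7 e^{-t} / (1 - e^{-t})^3
    # to avoid overflow; the limit as t -> 0 is 0 (the integrand ~ t^4).
    if t == 0.0:
        return 0.0
    em = math.exp(-t)
    return t ** 7 * em / (1.0 - em) ** 3

def stromgren(x, n=4000):
    # Composite Simpson's rule on [0, x] with the 15/(4 pi^4) normalisation.
    h = x / n
    s = integrand(0.0) + integrand(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return (15.0 / (4.0 * math.pi ** 4)) * s * h / 3.0

print(stromgren(2.0))
```

Expanding (1 − e⁻ᵗ)⁻³ in powers of e⁻ᵗ and integrating termwise shows that the unnormalised integral tends to 2520·(ζ(6) + ζ(7)) as x → ∞, which gives a convenient check on the quadrature.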
Mahaney's theorem is a theorem in computational complexity theory, proven by Stephen Mahaney, which states that if any sparse language is NP-complete, then P = NP. Also, if any sparse language is NP-complete with respect to Turing reductions, then the polynomial-time hierarchy collapses to \Delta_2^P. Mahaney's argument does not actually require the sparse language to be in NP, so there is a sparse NP-hard set if and only if P = NP. This is because the existence of an NP-hard sparse set implies the existence of an NP-complete sparse set.
The C++ programming language has support for string handling, mostly implemented in its standard library. The language standard specifies several string types, some inherited from C, some designed to make use of the language's features, such as classes and RAII. The most used of these is std::string. Since the initial versions of C++ had only the "low-level" C string handling functionality and conventions, multiple incompatible designs for string handling classes have been created over the years and are still used instead of std::string, so C++ programmers may need to handle several conventions in a single application.

History

The std::string type has been the main string datatype in standard C++ since 1998, but it was not always part of C++. From C, C++ inherited the convention of using null-terminated strings that are handled through a pointer to their first element, together with a library of functions that manipulate such strings. In modern standard C++, a string literal such as "hello" still denotes a null-terminated array of characters. Using C++ classes to implement a string type offers several benefits: automated memory management, a reduced risk of out-of-bounds accesses, and more intuitive syntax for string comparison and concatenation. It was therefore strongly tempting to create such a class. Over the years, C++ application, library and framework developers produced their own, incompatible string representations, such as the one in AT&T's Standard Components library (the first such implementation, 1983) or the CString type in Microsoft's MFC. While std::string standardized strings, legacy applications still commonly contain such custom string types, and libraries may expect C-style strings, making it "virtually impossible" to avoid using multiple string types in C++ programs and requiring programmers to decide on the desired string representation before starting a project.
In a 1991 retrospective on the history of C++, its inventor Bjarne Stroustrup called the lack of a standard string type (and some other standard types) in C++ 1.0 the worst mistake he made in its development; "the absence of those led to everybody re-inventing the wheel and to an unnecessary diversity in the most fundamental classes".

Implementation issues

The various vendors' string types have different implementation strategies and performance characteristics. In particular, some string types use a copy-on-write strategy, in which an assignment such as b = a does not actually copy the content of a to b; instead, both strings share their contents and a reference count on the content is incremented. The actual copying is postponed until a mutating operation, such as appending a character to either string, makes the strings' contents differ. Copy-on-write can make major performance changes to code using strings, making some operations much faster and some much slower. Though std::string no longer uses it, many (perhaps most) alternative string libraries still implement copy-on-write strings.

Some string implementations store 16-bit or 32-bit code points instead of bytes; this was intended to facilitate processing of Unicode text. However, it means that conversion to these types from std::string or from arrays of bytes depends on the "locale" and can throw exceptions. Any processing advantages of 16-bit code units vanished when the variable-width UTF-16 encoding was introduced (though there are still advantages if one must communicate with a 16-bit API such as the Windows API). Qt's QString is an example. Third-party string implementations also differed considerably in the syntax for extracting or comparing substrings, or for performing searches in the text.

Standard string types

The std::string class has been the standard representation for a text string since C++98.
The class provides typical string operations such as comparison, concatenation, find and replace, and a function for obtaining substrings. An std::string can be constructed from a C-style string, and a C-style string can be obtained from one in turn. The individual units making up the string are of type char, at least (and almost always) 8 bits each. In modern usage these are often not "characters" but parts of a multibyte character encoding such as UTF-8.

The copy-on-write strategy was deliberately allowed by the initial C++ standard for std::string because it was deemed a useful optimization, and it was used by nearly all implementations. However, there were mistakes; in particular, operator[] returned a non-const reference, in order to make it easy to port C-style in-place string manipulations (such code often assumed one byte per character, so this may not have been a good idea). As a consequence, merely subscripting a non-const string obliged the implementation to make a copy, even though operator[] is almost always used only to examine the string and not to modify it. This caused implementations, first MSVC and later GCC, to move away from copy-on-write. It was also discovered that in multi-threaded applications the locking overhead needed to examine or change the reference count was greater than the overhead of copying small strings on modern processors (especially for strings smaller than the size of a pointer). The optimization was finally disallowed in C++11, with the result that even passing a std::string as an argument to a function, for example void function_name(std::string s);, must be expected to perform a full copy of the string into newly allocated memory. The common idiom to avoid such copying is to pass the string as a const reference.
The C++17 standard added a new string_view class, which is only a pointer and a length referring to read-only data; passing a string_view argument is far cheaper than either of the approaches above, since no copy or allocation of the string data is needed.

Related classes

std::string is a typedef for a particular instantiation of the std::basic_string template class; its definition in the <string> header is essentially typedef basic_string<char> string;. Thus string provides basic_string functionality for strings whose elements are of type char. There is a similar class, std::wstring, which consists of wchar_t and is most often used to store UTF-16 text on Windows and UTF-32 on most Unix-like platforms. The C++ standard, however, does not impose any interpretation as Unicode code points or code units on these types and does not even guarantee that a wchar_t holds more bits than a char. To resolve some of the incompatibilities resulting from wchar_t's properties, C++11 added two new classes: std::u16string and std::u32string (made up of the new types char16_t and char32_t), whose code units are exactly 16 and 32 bits wide on all platforms. C++11 also added new string literals of 16-bit and 32-bit "characters", and syntax for putting Unicode code points into null-terminated (C-style) strings. A basic_string is guaranteed to be specializable for any type with a char_traits struct to accompany it. As of C++11, only the char, wchar_t, char16_t and char32_t specializations are required to be implemented. A basic_string is also a Standard Library container, and so the Standard Library algorithms can be applied to the code units in strings.

Critiques

The design of std::string has been held up as an example of monolithic design by Herb Sutter, who reckons that of the 103 member functions on the class in C++98, 71 could have been decoupled without loss of implementation efficiency.
The Message Understanding Conferences (MUC), in computing and computer science, were initiated and financed by DARPA (Defense Advanced Research Projects Agency) to encourage the development of new and better methods of information extraction. The character of this competition, with many concurrent research teams competing against one another, required the development of standards for evaluation, such as the adoption of metrics like precision and recall.

Topics and exercises

Only for the first conference (MUC-1) could the participants choose the output format for the extracted information. From the second conference onward, the output format by which the participants' systems would be evaluated was prescribed. For each topic, a set of fields was given which had to be filled with information from the text. Typical fields were, for example, the cause, the agent, the time and place of an event, its consequences, and so on. The number of fields increased from conference to conference. At the sixth conference (MUC-6), the tasks of named entity recognition and coreference resolution were added. For the named entity task, all phrases in the text referring to a person, location, organization, time or quantity were to be marked as such. The topics and text sources that were processed show a continuous move from military to civil themes, mirroring the change in business interest in information extraction taking place at the time.

Literature

Ralph Grishman, Beth Sundheim: Message Understanding Conference - 6: A Brief History. In: Proceedings of the 16th International Conference on Computational Linguistics (COLING), I, Copenhagen, 1996, 466–471.

See also

DARPA TIPSTER Program

External links

MUC-7 MUC-6 SAIC Information Extraction
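The precision and recall metrics adopted at MUC can be illustrated with a toy scoring function; the entity strings below are invented for illustration and are not from any MUC answer key:

```python
def precision_recall(predicted, gold):
    """Score a set of extracted items against a gold-standard answer key.

    precision = fraction of predicted items that are correct;
    recall    = fraction of gold items that were found.
    """
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical system output vs. answer key for one document.
system = {("PERSON", "Galileo"), ("ORG", "DARPA"), ("LOC", "Copenhagen")}
key = {("PERSON", "Galileo"), ("ORG", "DARPA"), ("ORG", "SAIC"), ("LOC", "Copenhagen")}

p, r = precision_recall(system, key)
print(p, r)  # 1.0 0.75 -- everything extracted is correct, but one entity was missed
```

The trade-off the two numbers capture is exactly the one the MUC scoring encouraged teams to balance: extracting more aggressively raises recall but risks precision, and vice versa.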
This article presents economic statistics of Singapore. GDP, GDP per capita, GNI per capita, total trade, total imports, total exports, foreign reserves, current account balance, average exchange rate, government operating revenue and total expenditure are given in the tables below for the years 1965 through 2018.

1965 to 2014

2014 to 2018

See also

Economy of Singapore
An electromagnetic pulse (EMP), also referred to as a transient electromagnetic disturbance (TED), is a brief burst of electromagnetic energy. The origin of an EMP can be natural or artificial, and it can occur as an electromagnetic field, an electric field, a magnetic field, or a conducted electric current. The electromagnetic interference caused by an EMP can disrupt communications and damage electronic equipment. An EMP such as a lightning strike can also physically damage objects such as buildings and aircraft. The management of EMP effects is a branch of electromagnetic compatibility (EMC) engineering.

The first recorded damage from an electromagnetic pulse came with the solar storm of August 1859, the Carrington Event. In modern warfare, weapons delivering a high-energy EMP are designed to disrupt communications equipment and the computers needed to operate modern warplanes, or even to put the entire electrical network of a target country out of commission.

General characteristics

An electromagnetic pulse is a short surge of electromagnetic energy. Its short duration means that its energy is spread over a range of frequencies. Pulses are typically characterized by:

the mode of energy transfer (radiated, electric, magnetic or conducted);
the range or spectrum of frequencies present;
the pulse waveform: its shape, duration and amplitude.

The frequency spectrum and the pulse waveform are interrelated via the Fourier transform, which describes how the pulse waveform decomposes into its component frequencies.

Types of energy

EMP energy may be transferred in any of four forms:

electric field
magnetic field
electromagnetic radiation
electrical conduction

According to Maxwell's equations, a pulse of electric energy is always accompanied by a pulse of magnetic energy. In a typical pulse, either the electric or the magnetic form dominates.
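The waveform-spectrum relationship can be made concrete for the simplest model pulse: a rectangular pulse of duration T has Fourier magnitude T·|sin(πfT)/(πfT)|, so the shorter the pulse, the wider its energy spreads in frequency. A quick numerical check, with T an arbitrary illustrative value:

```python
import math

# A rectangular pulse of duration T (amplitude 1) has continuous Fourier
# magnitude |X(f)| = T * |sin(pi f T) / (pi f T)|. Verify by direct integration.
T = 1e-6  # pulse duration in seconds (illustrative value)

def spectrum_numeric(f, steps=2000):
    # Midpoint-rule integration of x(t) * e^{-2 pi i f t} over [0, T], x(t) = 1.
    dt = T / steps
    re = sum(math.cos(2 * math.pi * f * (k + 0.5) * dt) * dt for k in range(steps))
    im = -sum(math.sin(2 * math.pi * f * (k + 0.5) * dt) * dt for k in range(steps))
    return math.hypot(re, im)

def spectrum_analytic(f):
    x = math.pi * f * T
    return T if x == 0 else T * abs(math.sin(x) / x)

# The numerical and analytic spectra agree across the band.
for f in [0.0, 0.3e6, 1.5e6, 2.5e6]:
    assert abs(spectrum_numeric(f) - spectrum_analytic(f)) < 1e-9
```

Note the spectrum has nulls at multiples of 1/T: a 1 microsecond pulse carries no energy at exactly 1 MHz, 2 MHz, and so on, while a narrower pulse pushes those nulls out to higher frequencies.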
It can be shown that the non-linear Maxwell's equations can have time-dependent self-similar electromagnetic shock wave solutions in which the electric and magnetic field components have a discontinuity. In general, only radiation acts over long distances, with the magnetic and electric fields acting over short distances. There are a few exceptions, such as a solar magnetic flare.

Frequency ranges

A pulse of electromagnetic energy typically comprises many frequencies, from very low up to some upper limit depending on the source. The range defined as EMP, sometimes referred to as "DC [direct current] to daylight", excludes the highest frequencies, namely the optical (infrared, visible, ultraviolet) and ionizing (X-ray and gamma-ray) ranges. Some types of EMP events can leave an optical trail, such as lightning and sparks, but these are side effects of the current flow through the air and are not part of the EMP itself.

Pulse waveforms

The waveform of a pulse describes how its instantaneous amplitude (field strength or current) changes over time. Real pulses tend to be quite complicated, so simplified models are often used. Such a model is typically described either in a diagram or as a mathematical equation. Most electromagnetic pulses have a very sharp leading edge, building up quickly to their maximum level. The classic model is a double-exponential curve, which climbs steeply, quickly reaches a peak and then decays more slowly. However, pulses from a controlled switching circuit often approximate the form of a rectangular or "square" pulse.

EMP events usually induce a corresponding signal in the surrounding environment or material. Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave: visually, a high-frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve.
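The two model waveforms just described are easy to write down. The rate constants below are illustrative placeholders chosen only to make the shapes visible at nanosecond scale; they are not values from any particular standard:

```python
import math

def double_exponential(t, e0=1.0, a=4e7, b=6e8):
    # Classic EMP model: difference of two exponentials gives a steep rise
    # and a slower decay. a and b are illustrative rate constants (1/s), b >> a.
    return e0 * (math.exp(-a * t) - math.exp(-b * t)) if t >= 0 else 0.0

def damped_sine(t, e0=1.0, alpha=2e7, f=5e7):
    # Induced response: a sine at the coupling frequency f decaying inside
    # an exponential envelope.
    return e0 * math.exp(-alpha * t) * math.sin(2 * math.pi * f * t) if t >= 0 else 0.0

# The double exponential rises quickly to its peak, then decays slowly.
samples = [double_exponential(n * 1e-9) for n in range(500)]
peak_index = max(range(500), key=lambda n: samples[n])
print(peak_index)  # 5 -- peak near t = ln(b/a)/(b - a), about 4.8 ns here
```

With these constants the rise to peak takes a few nanoseconds while the decay stretches over hundreds, reproducing the sharp leading edge and slow tail described above.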
A damped sine wave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sine waves directly rather than attempting to recreate the high-energy threat pulses. In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals, and a single complete pulse cycle is sufficient to characterise such a regular, repetitive train.

Types

An EMP arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the surrounding environment. Some types are generated as repetitive and regular pulse trains. Different types of EMP arise from natural, man-made, and weapons effects.

Types of natural EMP events include:

Lightning electromagnetic pulse (LEMP). The discharge is typically an initial current flow of perhaps millions of amps, followed by a train of pulses of decreasing energy.
Electrostatic discharge (ESD), as a result of two charged objects coming into proximity or even contact.
Meteoric EMP. The discharge of electromagnetic energy resulting from either the impact of a meteoroid with a spacecraft or the explosive breakup of a meteoroid passing through the Earth's atmosphere.
Coronal mass ejection (CME), sometimes referred to as a solar EMP: a burst of plasma and accompanying magnetic field, ejected from the solar corona and released into the solar wind.

Types of (civil) man-made EMP events include:

Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train).
Electric motors, which can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates.
Gasoline engine ignition systems, which can create a train of pulses as the spark plugs are energized or fired.
Continual switching actions of digital electronic circuitry.
Power line surges, which can reach several kilovolts: enough to damage electronic equipment that is insufficiently protected.

Types of military EMP include:

Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion. A variant of this is the high-altitude nuclear EMP (HEMP), which produces a secondary pulse due to particle interactions with the Earth's atmosphere and magnetic field.
Non-nuclear electromagnetic pulse (NNEMP) weapons.

Lightning electromagnetic pulse (LEMP)

Lightning is unusual in that it typically has a preliminary "leader" discharge of low energy building up to the main pulse, which in turn may be followed at intervals by several smaller bursts.

Electrostatic discharge (ESD)

ESD events are characterized by high voltages of many kilovolts but small currents, sometimes causing visible sparks. ESD is treated as a small, localized phenomenon, although technically a lightning flash is a very large ESD event. ESD can also be man-made, as in the shock received from a Van de Graaff generator. An ESD event can damage electronic circuitry by injecting a high-voltage pulse, besides giving people an unpleasant shock. Such an event can also create sparks, which may in turn ignite fires or fuel-vapour explosions. For this reason, before refueling an aircraft or exposing any fuel vapor to the air, the fuel nozzle is first connected to the aircraft to safely discharge any static.

Switching pulses

The switching action of an electrical circuit creates a sharp change in the flow of electricity. This sharp change is a form of EMP. Simple electrical sources include inductive loads such as relays, solenoids, and the brush contacts in electric motors. These typically send a pulse down any electrical connections present, as well as radiating a pulse of energy. The amplitude is usually small, and the signal may be treated as "noise" or "interference".
The switching off or "opening" of a circuit causes an abrupt change in the current flowing. This can in turn cause a large pulse in the electric field across the open contacts, causing arcing and damage, and it is often necessary to incorporate design features to limit such effects. Electronic devices such as vacuum tubes (valves), transistors, and diodes can also switch on and off very quickly, causing similar issues. One-off pulses may be caused by solid-state switches and other devices used only occasionally. However, the many millions of transistors in a modern computer may switch repeatedly at frequencies above 1 GHz, causing interference that appears to be continuous.

Nuclear electromagnetic pulse (NEMP)

A nuclear electromagnetic pulse is the abrupt pulse of electromagnetic radiation resulting from a nuclear explosion. The resulting rapidly changing electric and magnetic fields may couple with electrical and electronic systems to produce damaging current and voltage surges. The intense gamma radiation emitted can also ionize the surrounding air, creating a secondary EMP as the atoms of air first lose their electrons and then regain them. NEMP weapons are designed to maximize such EMP effects as the primary damage mechanism, and some are capable of destroying susceptible electronic equipment over a wide area.

A high-altitude electromagnetic pulse (HEMP) weapon is a NEMP warhead designed to be detonated far above the Earth's surface. The explosion releases a blast of gamma rays into the mid-stratosphere, which ionizes the air as a secondary effect, and the resultant energetic free electrons interact with the Earth's magnetic field to produce a much stronger EMP than is normally produced in the denser air at lower altitudes.

Non-nuclear electromagnetic pulse (NNEMP)

Non-nuclear electromagnetic pulse (NNEMP) is a weapon-generated electromagnetic pulse produced without the use of nuclear technology.
Devices that can achieve this objective include a large low-inductance capacitor bank discharged into a single-loop antenna, a microwave generator, and an explosively pumped flux compression generator. To achieve the frequency characteristics of the pulse needed for optimal coupling into the target, wave-shaping circuits or microwave generators are added between the pulse source and the antenna. Vircators are vacuum tubes that are particularly suitable for microwave conversion of high-energy pulses.

NNEMP generators can be carried as the payload of bombs, cruise missiles (such as the CHAMP missile) and drones, with diminished mechanical, thermal and ionizing radiation effects, and without the consequences of deploying nuclear weapons. The range of NNEMP weapons is much less than that of nuclear EMP. Nearly all NNEMP devices used as weapons require chemical explosives as their initial energy source, producing only one millionth the energy of nuclear explosives of similar weight. The electromagnetic pulse from NNEMP weapons must come from within the weapon, while nuclear weapons generate EMP as a secondary effect. These facts limit the range of NNEMP weapons, but allow finer target discrimination. The effect of small e-bombs has proven to be sufficient for certain terrorist or military operations, for example the destruction of electronic control systems critical to the operation of many ground vehicles and aircraft.

The concept of the explosively pumped flux compression generator for generating a non-nuclear electromagnetic pulse was conceived as early as 1951 by Andrei Sakharov in the Soviet Union, but nations kept work on non-nuclear EMP classified until similar ideas emerged in other nations.

Effects

Minor EMP events, and especially pulse trains, cause low levels of electrical noise or interference that can affect the operation of susceptible devices.
For example, a common problem in the mid-twentieth century was interference emitted by the ignition systems of gasoline engines, which caused radio sets to crackle and TV sets to show stripes on the screen. CISPR 25 was established to set threshold standards that vehicles must meet for electromagnetic interference (EMI) emissions.

At a high voltage level, an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions, and precautions must be taken to prevent them. A large and energetic EMP can induce high currents and voltages in the victim unit, temporarily disrupting its function or even permanently damaging it. A powerful EMP can also directly affect magnetic materials and corrupt the data stored on media such as magnetic tape and computer hard drives. Hard drives are usually shielded by heavy metal casings; some IT asset disposal service providers and computer recyclers use a controlled EMP to wipe such magnetic media.

A very large EMP event, such as a lightning strike or an air-burst nuclear weapon, is also capable of damaging objects such as trees, buildings and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating. Most engineered structures and systems require some form of protection against lightning to be designed in. A good means of protection is a Faraday shield, designed to protect certain items from being destroyed.

Control

Like any electromagnetic interference, the threat from EMP is subject to control measures, whether the threat is natural or man-made. Most control measures therefore focus on the susceptibility of equipment to EMP effects, and on hardening or protecting it from harm.
Man-made sources, other than weapons, are also subject to control measures in order to limit the amount of pulse energy emitted. The discipline of ensuring correct equipment operation in the presence of EMP and other RF threats is known as electromagnetic compatibility (EMC). Test simulation To test the effects of EMP on engineered systems and equipment, an EMP simulator may be used. Induced pulse simulation Induced pulses are of much lower energy than threat pulses and so are more practicable to create, but they are less predictable. A common test technique is to use a current clamp in reverse, to inject a range of damped sine wave signals into a cable connected to the equipment under test. The damped sine wave generator is able to reproduce the range of induced signals likely to occur. Threat pulse simulation Sometimes the threat pulse itself is simulated in a repeatable way. The pulse may be reproduced at low energy in order to characterise the subject's response prior to damped sinewave injection, or at high energy to recreate the actual threat conditions. A small-scale ESD simulator may be hand-held. Bench- or room-sized simulators come in a range of designs, depending on the type and level of threat to be generated. At the top end of the scale, large outdoor test facilities incorporating high-energy EMP simulators have been built by several countries. The largest facilities are able to test whole vehicles including ships and aircraft for their susceptibility to EMP. Nearly all of these large EMP simulators used a specialized version of a Marx generator. Examples include the huge wooden-structured ATLAS-I simulator (also known as TRESTLE) at Sandia National Labs, New Mexico, which was at one time the world's largest EMP simulator. 
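The damped sine wave injection described above can be sketched numerically. The following is an illustrative model only: the waveform shape (an exponentially decaying sinusoid characterised by a frequency and a Q factor) is standard, but the parameter values here are arbitrary; actual test levels and frequencies are set by the applicable standard (for example, the damped sinusoidal transient requirements of MIL-STD-461 CS116).

```python
import math

def damped_sine(t, v_peak=1.0, freq_hz=1.0e7, q_factor=15.0):
    """Damped sinusoid v(t) = V0 * exp(-pi*f*t/Q) * sin(2*pi*f*t).

    Illustrative parameters only: a 10 MHz ring with Q = 15,
    normalised to a 1.0 peak drive level.
    """
    decay = math.exp(-math.pi * freq_hz * t / q_factor)
    return v_peak * decay * math.sin(2.0 * math.pi * freq_hz * t)

# Sample five cycles of the 10 MHz damped sine at 1 ns resolution.
samples = [damped_sine(n * 1.0e-9) for n in range(500)]
```

A sweep over `freq_hz` then reproduces "a range of damped sine wave signals" to inject via the current clamp.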
Papers on this and other large EMP simulators used by the United States during the latter part of the Cold War, along with more general information about electromagnetic pulses, are now in the care of the SUMMA Foundation, which is hosted at the University of New Mexico. The US Navy also has a large facility called the Electro Magnetic Pulse Radiation Environmental Simulator for Ships I (EMPRESS I). Safety High-level EMP signals can pose a threat to human safety. In such circumstances, direct contact with a live electrical conductor should be avoided. Where this occurs, such as when touching a Van de Graaff generator or other highly charged object, care must be taken to release the object and then discharge the body through a high resistance, in order to avoid the risk of a harmful shock pulse when stepping away. Very high electric field strengths can cause breakdown of the air and a potentially lethal arc current similar to lightning to flow, but electric field strengths of up to 200 kV/m are regarded as safe. According to reporting by Edd Gent, a 2019 report by the Electric Power Research Institute, which is funded by utility companies, found that a large EMP attack would probably cause regional blackouts but not a nationwide grid failure, and that recovery times would be similar to those of other large-scale outages. It is not known how long these electrical blackouts would last, or what extent of damage would occur across the country. It is possible that countries neighboring the U.S. could also be affected by such an attack, depending on the area targeted. According to an article by Naureen Malik, with North Korea's increasingly successful missile and warhead tests in mind, Congress moved to renew funding for the Commission to Assess the Threat to the U.S. from Electromagnetic Pulse Attack as part of the National Defense Authorization Act. 
According to reporting by Yoshida Reiji, in a 2016 article for the Tokyo-based nonprofit organization Center for Information and Security Trade Control, Onizuka warned that a high-altitude EMP attack would damage or destroy Japan's power, communications and transport systems, as well as disable banks, hospitals and nuclear power plants. In popular culture By 1981, a number of articles on electromagnetic pulse in the popular press had spread knowledge of the EMP phenomenon into the popular culture. EMP has subsequently been used in a wide variety of fiction and other aspects of popular culture. Popular media often depict EMP effects incorrectly, causing misunderstandings among the public and even professionals. Official efforts have been made in the U.S. to remedy these misconceptions. The novel One Second After by William R. Forstchen and the following books One Year After, The Final Day and Five Years After portray the story of a fictional character named John Matherson and his community in Black Mountain, North Carolina, after the US loses a war when an EMP attack "sends our nation [the US] back to the Dark Ages". See also References Citations Sources Katayev, I.G. (1966). Electromagnetic Shock Waves. Iliffe Books Ltd., Dorset House, Stanford Street, London, England. External links TRESTLE: Landmark of the Cold War, a short documentary film on the SUMMA Foundation website
Wikipedia
In mathematical set theory, the multiverse view is that there are many models of set theory, but no "absolute", "canonical" or "true" model. The various models are all equally valid or true, though some may be more useful or attractive than others. The opposite view is the "universe" view of set theory in which all sets are contained in some single ultimate model. The collection of countable transitive models of ZFC (in some universe) is called the hyperverse and is very similar to the "multiverse". A typical difference between the universe and multiverse views is the attitude to the continuum hypothesis. In the universe view the continuum hypothesis is a meaningful question that is either true or false though we have not yet been able to decide which. In the multiverse view it is meaningless to ask whether the continuum hypothesis is true or false before selecting a model of set theory. Another difference is that the statement "For every transitive model of ZFC there is a larger model of ZFC in which it is countable" is true in some versions of the multiverse view of mathematics but is false in the universe view. References Antos, Carolin; Friedman, Sy-David; Honzik, Radek; Ternullo, Claudio (2015), "Multiverse conceptions in set theory", Synthese, 192 (8): 2463–2488, doi:10.1007/s11229-015-0819-9, MR 3400617 Hamkins, J. D. (2012), "The set-theoretic multiverse", Rev. Symb. Log., 5 (3): 416–449, arXiv:1108.4223, Bibcode:2011arXiv1108.4223H, doi:10.1017/S1755020311000359, MR 2970696
Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture. Relation to other technologies Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET. Design Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them. Developers The specification is now being developed by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia, Software AG within the OASIS Member Section Open CSA since April 2007. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal group of actors of the industry. 
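The design described above, a tree of data objects with dynamic properties and a change summary, can be illustrated with a toy sketch. This is plain Python written for this article, not the actual SDO API (the real Java binding lives in the `commonj.sdo` package) nor any particular implementation:

```python
class DataObject:
    """Toy data object: a tree node with dynamic, map-like
    properties and a change log, loosely mimicking the SDO ideas
    of a rooted tree, traversal, and change summaries."""

    def __init__(self, name):
        self.name = name
        self.properties = {}      # dynamic fields, unlimited in number
        self.children = []
        self.change_summary = []  # (property, old_value, new_value)

    def set(self, prop, value):
        # Record the change so a receiving program can act on
        # the delta rather than rescanning the whole graph.
        old = self.properties.get(prop)
        self.change_summary.append((prop, old, value))
        self.properties[prop] = value

    def add_child(self, child):
        self.children.append(child)

    def depth_first(self):
        """Yield this node and all descendants in pre-order."""
        yield self
        for child in self.children:
            yield from child.depth_first()

# Build a small tree and mutate it.
root = DataObject("customer")
addr = DataObject("address")
root.add_child(addr)
addr.set("city", "London")
addr.set("city", "Paris")
names = [node.name for node in root.depth_first()]
```

After the two `set` calls, `addr.change_summary` holds both transitions, which is the kind of delta information the specification's change summaries let receivers process efficiently.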
Implementations The following SDO products are available: Rogue Wave Software HydraSDO Xcalia (for Java and .Net) Oracle (Data Service Integrator) IBM (Virtual XML Garden) IBM (WebSphere Process Server) There are open source implementations of SDO from: The Eclipse Persistence Services Project (EclipseLink) The Apache Tuscany project for Java and C++ The fcl-sdo library included with FreePascal References External links Specification versions and history can be found on Latest materials at OASIS Open CSA Service Data Objects SDO Specifications at OpenSOA Introducing Service Data Objects for PHP Using PHP's SDO and SCA extensions
A magnetohydrodynamic converter (MHD converter) is an electromagnetic machine with no moving parts involving magnetohydrodynamics, the study of the kinetics of electrically conductive fluids (liquid or ionized gas) in the presence of electromagnetic fields. Such converters act on the fluid using the Lorentz force to operate in two possible ways: either as an electric generator called an MHD generator, extracting energy from a fluid in motion; or as an electric motor called an MHD accelerator or magnetohydrodynamic drive, putting a fluid in motion by injecting energy. MHD converters are indeed reversible, like many electromagnetic devices. Michael Faraday first attempted to test an MHD converter in 1832. MHD converters involving plasmas were studied intensively in the 1960s and 1970s, with substantial government funding and dedicated international conferences. One major conceptual application was the use of MHD converters on the hot exhaust gas of a coal-fired power plant, where they could extract some of the energy with very high efficiency and then pass the gas into a conventional steam turbine. The research largely stopped after it was concluded that the electrothermal instability would severely limit the efficiency of such converters when intense magnetic fields are used, although solutions may exist. MHD power generation A magnetohydrodynamic generator is an MHD converter that transforms the kinetic energy of an electrically conductive fluid, in motion with respect to a steady magnetic field, into electricity. MHD power generation was tested extensively in the 1960s with liquid metals and plasmas as working fluids. In essence, a plasma is driven down a channel whose walls are fitted with electrodes. Electromagnets create a uniform transverse magnetic field within the cavity of the channel. The Lorentz force then acts upon the trajectory of the incoming electrons and positive ions, separating the opposite charge carriers according to their sign. 
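This can be put in rough numbers. The sketch below uses the standard textbook relations for an ideal Faraday channel, the open-circuit voltage V = uBd across an electrode gap d, and the volumetric power extraction σu²B²k(1−k) for load factor k; the numerical values are illustrative only, not data from any particular experiment:

```python
def faraday_voltage(u, B, d):
    """Open-circuit voltage across the electrodes of an ideal
    Faraday channel: V = u * B * d (flow speed u, field B,
    electrode gap d)."""
    return u * B * d

def power_density(sigma, u, B, k):
    """Ideal volumetric power extraction sigma * u^2 * B^2 * k * (1 - k),
    where sigma is the fluid conductivity and k the load factor
    (0 < k < 1; k = 0.5 maximises the extracted power)."""
    return sigma * u**2 * B**2 * k * (1.0 - k)

# Illustrative values: a 1000 m/s plasma in a 5 T field with a
# 0.5 m electrode gap and a conductivity of 10 S/m, matched load.
v = faraday_voltage(1000.0, 5.0, 0.5)      # 2500 V
p = power_density(10.0, 1000.0, 5.0, 0.5)  # 6.25e7 W/m^3
```

The quadratic dependence on u and B is why the concept targeted very hot, fast exhaust gases and strong magnetic fields.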
As negative and positive charges are spatially separated within the chamber, an electric potential difference can be retrieved across the electrodes. While work is extracted from the kinetic energy of the incoming high-velocity plasma, the fluid slows down during the process. MHD propulsion A magnetohydrodynamic accelerator is an MHD converter that imparts motion to an electrically conductive fluid initially at rest, using crossed electric current and magnetic field both applied within the fluid. MHD propulsion has mostly been tested with models of ships and submarines in seawater. Studies have also been ongoing since the early 1960s on aerospace applications of MHD to aircraft propulsion and flow control, with the aim of enabling hypersonic flight: acting on the boundary layer to prevent laminar flow from becoming turbulent; mitigating or cancelling shock waves for thermal control and reduction of the wave drag and form drag; and controlling inlet flow and reducing airflow velocity with an MHD generator section ahead of a scramjet or turbojet to extend their operating regimes to higher Mach numbers, combined with an MHD accelerator in the exhaust nozzle fed by the MHD generator through a bypass system. Research on various designs is also conducted on electromagnetic plasma propulsion for space exploration. In an MHD accelerator, the Lorentz force accelerates all charge carriers in the same direction whatever their sign, as well as neutral atoms and molecules of the fluid through collisions. The fluid is ejected toward the rear and, as a reaction, the vehicle accelerates forward. See also Plasma (physics) Lorentz force Electrothermal instability Wingless Electromagnetic Air Vehicle References Further reading Sutton, George W.; Sherman, Arthur (July 2006). Engineering Magnetohydrodynamics. Dover Civil and Mechanical Engineering. Dover Publications. ISBN 978-0486450322. Weier, Tom; Shatrov, Victor; Gerbeth, Gunter (2007). "Flow Control and Propulsion in Poor Conductors". 
In Molokov, Sergei S.; Moreau, R.; Moffatt, H. Keith (eds.). Magnetohydrodynamics: Historical Evolution and Trends. Springer Science+Business Media. pp. 295–312. doi:10.1007/978-1-4020-4833-3. ISBN 978-1-4020-4832-6.
Tractable is a technology company specializing in the development of Artificial Intelligence (AI) to assess damage to property and vehicles. The AI allows users to appraise damage digitally. Technology Tractable's technology uses computer vision and deep learning to automate the appraisal of visual damage in accident and disaster recovery, for example to a vehicle. Drivers can be directed to use the application by their insurer after an accident, with the aim of settling their claim more quickly. The AI evaluates the damage from images, and therefore does not assess what is not visible (for example, interior damage to a vehicle or property). History Alexandre Dalyac and Razvan Ranca founded Tractable in 2014, and Adrien Cohen joined as co-founder in 2015. The company employs more than 300 staff members, largely in the United Kingdom. Tractable was named one of the 100 leading AI companies in the world in 2020 and 2021 by CB Insights. It won the Best Technology Award in the 2020 British Insurance Awards. In June 2021, Tractable announced a venture round that valued the company at $1 billion. Tractable was the UK's 100th billion-dollar tech company, or unicorn. In July 2023, the company received a $65 million investment from SoftBank Group, through its Vision Fund 2.
EpiData is a group of applications used in combination for creating documented data structures and analysis of quantitative data. Overview The EpiData Association, which develops the software, was founded in 1999 and is based in Denmark. EpiData was developed in Pascal and uses open standards such as HTML where possible. EpiData is widely used by organizations and individuals to create and analyze large amounts of data. The World Health Organization (WHO) uses EpiData in its STEPS method of collecting epidemiological, medical, and public health data, for biostatistics, and for other quantitative-based projects. Epicentre, the research wing of Médecins Sans Frontières, uses EpiData to manage data from its international research studies and field epidemiology studies. E.g.: Piola P, Fogg C et al.: Supervised versus unsupervised intake of six-dose artemether-lumefantrine for treatment of acute, uncomplicated Plasmodium falciparum malaria in Mbarara, Uganda: a randomised trial. Lancet. 2005 Apr 23–29;365(9469):1467-73 'PMID 15850630'. Other examples: 'PMID 16765397', 'PMID 15569777' or 'PMID 17160135'. EpiData has two parts: EpiData Entry – used for simple or programmed data entry and data documentation. It handles simple forms or related systems. EpiData Analysis – performs basic statistical analysis, graphs, and comprehensive data management, such as recoding data and labelling values and variables. This application can create control charts, such as Pareto charts or p-charts, and many other methods to visualize and describe statistical data. The software is free; development is funded by governmental and non-governmental organizations like WHO. 
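The p-chart mentioned above is a standard statistical process control tool; the computation behind it can be sketched in a few lines. This is a generic illustration of the usual three-sigma limits for a fraction-nonconforming chart, not EpiData's own code, and the sample counts are invented for the example:

```python
import math

def p_chart_limits(defectives, sample_size):
    """Three-sigma control limits for a p-chart (fraction
    nonconforming): p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n),
    with the lower limit floored at zero."""
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / sample_size)
    ucl = p_bar + 3.0 * sigma
    lcl = max(0.0, p_bar - 3.0 * sigma)
    return p_bar, lcl, ucl

# Ten hypothetical weekly samples of 100 records each,
# counting data-entry errors found on review.
p_bar, lcl, ucl = p_chart_limits([4, 6, 5, 3, 7, 5, 4, 6, 5, 5], 100)
```

A sample whose error fraction falls outside [lcl, ucl] would be flagged as out of control, which is exactly what such charts are used for in data-quality monitoring.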
See also Clinical surveillance Disease surveillance Epidemiological methods Control chart References External links EpiData official site EpiData Wiki EpiData-list Archived 2021-07-19 at the Wayback Machine – mailing list for EpiData World Health Organization STEPS approach to surveillance Médecins Sans Frontières Epicentre
In mathematics and formal logic, a theorem is a statement that has been proven, or can be proven. The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems. In mainstream mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), or of a less powerful theory, such as Peano arithmetic. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as theorems only the most important results, and use the terms lemma, proposition and corollary for less important theorems. In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called axioms, and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory). As the axioms are often abstractions of properties of the physical world, theorems may be considered as expressing some truth, but in contrast to the notion of a scientific law, which is experimental, the justification of the truth of a theorem is purely deductive. A conjecture is a tentative proposition that may evolve to become a theorem if proven true. 
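The statement that "the theorems of the theory are the statements that can be derived from the axioms by using the deducing rules" can be made concrete with a toy deductive system, deliberately minimal and invented for this illustration: sentences are strings, and the only inference rule is a string-based modus ponens.

```python
def theorems(axioms, rules):
    """Close a set of sentences under inference rules: the toy
    'theory' is the least fixed point containing the axioms and
    closed under every rule application."""
    proved = set(axioms)
    changed = True
    while changed:
        changed = False
        for a in list(proved):
            for b in list(proved):
                for c in rules(a, b):
                    if c not in proved:
                        proved.add(c)
                        changed = True
    return proved

def modus_ponens(a, b):
    """From 'a' and 'a->b', derive 'b' (strings only)."""
    if "->" in b and b.startswith(a + "->"):
        return {b.split("->", 1)[1]}
    return set()

# From axioms p, p->q, q->r the closure derives q and then r.
thms = theorems({"p", "p->q", "q->r"}, modus_ponens)
```

The closure computation also illustrates the remark below about formal theories: the theory is nothing more than the set of sentences reachable from the axioms.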
Theoremhood and truth Until the end of the 19th century and the foundational crisis of mathematics, all mathematical theories were built from a few basic properties that were considered as self-evident; for example, the facts that every natural number has a successor, and that there is exactly one line that passes through two given distinct points. These basic properties that were considered as absolutely evident were called postulates or axioms; for example Euclid's postulates. All theorems were proved by using implicitly or explicitly these basic properties, and, because of the evidence of these basic properties, a proved theorem was considered as a definitive truth, unless there was an error in the proof. For example, the sum of the interior angles of a triangle equals 180°, and this was considered as an undoubtable fact. One aspect of the foundational crisis of mathematics was the discovery of non-Euclidean geometries that do not lead to any contradiction, although, in such geometries, the sum of the angles of a triangle is different from 180°. So, the property "the sum of the angles of a triangle equals 180°" is either true or false, depending whether Euclid's fifth postulate is assumed or denied. Similarly, the use of "evident" basic properties of sets leads to the contradiction of Russell's paradox. This has been resolved by elaborating the rules that are allowed for manipulating sets. This crisis has been resolved by revisiting the foundations of mathematics to make them more rigorous. In these new foundations, a theorem is a well-formed formula of a mathematical theory that can be proved from the axioms and inference rules of the theory. So, the above theorem on the sum of the angles of a triangle becomes: Under the axioms and inference rules of Euclidean geometry, the sum of the interior angles of a triangle equals 180°. 
Similarly, Russell's paradox disappears because, in an axiomatized set theory, the set of all sets cannot be expressed with a well-formed formula. More precisely, if the set of all sets can be expressed with a well-formed formula, this implies that the theory is inconsistent, and every well-formed assertion, as well as its negation, is a theorem. In this context, the validity of a theorem depends only on the correctness of its proof. It is independent of the truth, or even the significance, of the axioms. This does not mean that the significance of the axioms is uninteresting, but only that the validity of a theorem is independent of the significance of the axioms. This independence may be useful by allowing the use of results of some area of mathematics in apparently unrelated areas. An important consequence of this way of thinking about mathematics is that it allows defining mathematical theories and theorems as mathematical objects, and proving theorems about them. Examples are Gödel's incompleteness theorems. In particular, there are well-formed assertions that can be proved not to be theorems of the ambient theory, although they can be proved in a wider theory. An example is Goodstein's theorem, which can be stated in Peano arithmetic, but has been proved to be unprovable in Peano arithmetic. However, it is provable in some more general theories, such as Zermelo–Fraenkel set theory. Epistemological considerations Many mathematical theorems are conditional statements, whose proofs deduce conclusions from conditions known as hypotheses or premises. In light of the interpretation of proof as justification of truth, the conclusion is often viewed as a necessary consequence of the hypotheses. Namely, that the conclusion is true in case the hypotheses are true—without any further assumptions. 
However, the conditional could also be interpreted differently in certain deductive systems, depending on the meanings assigned to the derivation rules and the conditional symbol (e.g., non-classical logic). Although theorems can be written in a completely symbolic form (e.g., as propositions in propositional calculus), they are often expressed informally in a natural language such as English for better readability. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed. In addition to the better readability, informal arguments are typically easier to check than purely symbolic ones—indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is obviously true. In some cases, one might even be able to substantiate a theorem by using a picture as its proof. Because theorems lie at the core of mathematics, they are also central to its aesthetics. Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful". These subjective judgments vary not only from person to person, but also with time and culture: for example, as a proof is obtained, simplified or better understood, a theorem that was once difficult may become trivial. On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem. Informal account of theorems Logically, many theorems are of the form of an indicative conditional: If A, then B. Such a theorem does not assert B — only that B is a necessary consequence of A. 
In this case, A is called the hypothesis of the theorem ("hypothesis" here means something very different from a conjecture), and B the conclusion of the theorem. The two together (without the proof) are called the proposition or statement of the theorem (e.g. "If A, then B" is the proposition). Alternatively, A and B can be also termed the antecedent and the consequent, respectively. The theorem "If n is an even natural number, then n/2 is a natural number" is a typical example in which the hypothesis is "n is an even natural number", and the conclusion is "n/2 is also a natural number". In order for a theorem to be proved, it must be in principle expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one. It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses. These hypotheses form the foundational basis of the theory and are called axioms or postulates. The field of mathematics known as proof theory studies formal languages, axioms and the structure of proofs. Some theorems are "trivial", in the sense that they follow from definitions, axioms, and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem, and there are many other examples of simple yet deep theorems in number theory and combinatorics, among other areas. Other theorems have a known proof that cannot easily be written down. 
The most prominent examples are the four color theorem and the Kepler conjecture. Both of these theorems are only known to be true by reducing them to a computational search that is then verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted. The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities and hypergeometric identities. Relation with scientific theories Theorems in mathematics and theories in science are fundamentally different in their epistemology. A scientific theory cannot be proved; its key attribute is that it is falsifiable, that is, it makes predictions about the natural world that are testable by experiments. Any disagreement between prediction and experiment demonstrates the incorrectness of the scientific theory, or at least limits its accuracy or domain of validity. Mathematical theorems, on the other hand, are purely abstract formal statements: the proof of a theorem cannot involve experiments or other empirical evidence in the same way such evidence is used to support scientific theories. Nonetheless, there is some degree of empiricism and data collection involved in the discovery of mathematical theorems. By establishing a pattern, sometimes with the use of a powerful computer, mathematicians may have an idea of what to prove, and in some cases even a plan for how to set about doing the proof. It is also possible to find a single counter-example and so establish the impossibility of a proof for the proposition as-stated, and possibly suggest restricted forms of the original proposition that might have feasible proofs. 
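Such empirical exploration can be as simple as iterating a conjectured rule over many starting values. The sketch below checks the Collatz map for small start values; verifying finitely many cases like this is evidence, never a proof:

```python
def collatz_steps(n, limit=10_000):
    """Iterate the Collatz map (n -> n/2 if even, else 3n+1)
    until reaching 1; return the step count, or None if 'limit'
    iterations pass without reaching 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps >= limit:
            return None
    return steps

# Every start value up to 10,000 reaches 1.
assert all(collatz_steps(n) is not None for n in range(1, 10_001))
```

No finite check of this kind settles the conjecture, which concerns all natural numbers; it can only fail to find a counterexample, or find one.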
For example, both the Collatz conjecture and the Riemann hypothesis are well-known unsolved problems; they have been extensively studied through empirical checks, but remain unproven. The Collatz conjecture has been verified for start values up to about 2.88 × 10^18. The Riemann hypothesis has been verified to hold for the first 10 trillion non-trivial zeroes of the zeta function. Although most mathematicians can tolerate supposing that the conjecture and the hypothesis are true, neither of these propositions is considered proved. Such evidence does not constitute proof. For example, the Mertens conjecture is a statement about natural numbers that is now known to be false, but no explicit counterexample (i.e., a natural number n for which the Mertens function M(n) equals or exceeds the square root of n) is known: all numbers less than 10^14 have the Mertens property, and the smallest number that does not have this property is only known to be less than the exponential of 1.59 × 10^40, which is approximately 10 to the power 4.3 × 10^39. Since the number of particles in the universe is generally considered less than 10 to the power 100 (a googol), there is no hope to find an explicit counterexample by exhaustive search. The word "theory" also exists in mathematics, to denote a body of mathematical axioms, definitions and theorems, as in, for example, group theory (see mathematical theory). There are also "theorems" in science, particularly physics, and in engineering, but they often have statements and proofs in which physical assumptions and intuition play an important role; the physical axioms on which such "theorems" are based are themselves falsifiable. Terminology A number of different terms for mathematical statements exist; these terms indicate the role statements play in a particular subject. The distinction between different terms is sometimes rather arbitrary, and the usage of some terms has evolved over time. 
An axiom or postulate is a fundamental assumption regarding the object of study, that is accepted without proof. A related concept is that of a definition, which gives the meaning of a word or a phrase in terms of known concepts. Classical geometry discerns between axioms, which are general statements; and postulates, which are statements about geometrical objects. Historically, axioms were regarded as "self-evident"; today they are merely assumed to be true. A conjecture is an unproved statement that is believed to be true. Conjectures are usually made in public, and named after their maker (for example, Goldbach's conjecture and Collatz conjecture). The term hypothesis is also used in this sense (for example, Riemann hypothesis), which should not be confused with "hypothesis" as the premise of a proof. Other terms are also used on occasion, for example problem when people are not sure whether the statement should be believed to be true. Fermat's Last Theorem was historically called a theorem, although, for centuries, it was only a conjecture. A theorem is a statement that has been proven to be true based on axioms and other theorems. A proposition is a theorem of lesser importance, or one that is considered so elementary or immediately obvious, that it may be stated without proof. This should not be confused with "proposition" as used in propositional logic. In classical geometry the term "proposition" was used differently: in Euclid's Elements (c. 300 BCE), all theorems and geometric constructions were called "propositions" regardless of their importance. A lemma is an "accessory proposition" - a proposition with little applicability outside its use in a particular proof. Over time a lemma may gain in importance and be considered a theorem, though the term "lemma" is usually kept as part of its name (e.g. Gauss's lemma, Zorn's lemma, and the fundamental lemma). 
A corollary is a proposition that follows immediately from another theorem or axiom, with little or no required proof. A corollary may also be a restatement of a theorem in a simpler form, or for a special case: for example, the theorem "all internal angles in a rectangle are right angles" has a corollary that "all internal angles in a square are right angles" - a square being a special case of a rectangle. A generalization of a theorem is a theorem with a similar statement but a broader scope, from which the original theorem can be deduced as a special case (a corollary). Other terms may also be used for historical or customary reasons, for example: An identity is a theorem stating an equality between two expressions, that holds for any value within its domain (e.g. Bézout's identity and Vandermonde's identity). A rule is a theorem that establishes a useful formula (e.g. Bayes' rule and Cramer's rule). A law or principle is a theorem with wide applicability (e.g. the law of large numbers, law of cosines, Kolmogorov's zero–one law, Harnack's principle, the least-upper-bound principle, and the pigeonhole principle). A few well-known theorems have even more idiosyncratic names, for example, the division algorithm, Euler's formula, and the Banach–Tarski paradox. Layout A theorem and its proof are typically laid out as follows: Theorem (name of the person who proved it, along with year of discovery or publication of the proof) Statement of theorem (sometimes called the proposition) Proof Description of proof End The end of the proof may be signaled by the letters Q.E.D. (quod erat demonstrandum) or by one of the tombstone marks, such as "□" or "∎", meaning "end of proof", introduced by Paul Halmos following their use in magazines to mark the end of an article. The exact style depends on the author or publication. Many publications provide instructions or macros for typesetting in the house style. 
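In LaTeX, for instance, this layout is commonly produced with the amsthm package, whose proof environment appends the tombstone mark automatically:

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}
\begin{theorem}[Euclid, c.\ 300 BCE]
There are infinitely many prime numbers.
\end{theorem}
\begin{proof}
Given any finite list of primes $p_1, \dots, p_k$, the number
$p_1 p_2 \cdots p_k + 1$ is divisible by none of them, so some
prime is missing from the list.
\end{proof}
\end{document}
```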
It is common for a theorem to be preceded by definitions describing the exact meaning of the terms used in the theorem. It is also common for a theorem to be preceded by a number of propositions or lemmas which are then used in the proof. However, lemmas are sometimes embedded in the proof of a theorem, either with nested proofs, or with their proofs presented after the proof of the theorem. Corollaries to a theorem are either presented between the theorem and the proof, or directly after the proof. Sometimes, corollaries have proofs of their own that explain why they follow from the theorem. Lore It has been estimated that over a quarter of a million theorems are proved every year. The well-known aphorism, "A mathematician is a device for turning coffee into theorems", is probably due to Alfréd Rényi, although it is often attributed to Rényi's colleague Paul Erdős (and Rényi may have been thinking of Erdős), who was famous for the many theorems he produced, the number of his collaborations, and his coffee drinking. The classification of finite simple groups is regarded by some to be the longest proof of a theorem. It comprises tens of thousands of pages in 500 journal articles by some 100 authors. These papers are together believed to give a complete proof, and several ongoing projects hope to shorten and simplify this proof. Another theorem of this type is the four color theorem whose computer generated proof is too long for a human to read. It is among the longest known proofs of a theorem whose statement can be easily understood by a layman. Theorems in logic In mathematical logic, a formal theory is a set of sentences within a formal language. A sentence is a well-formed formula with no free variables. A sentence that is a member of a theory is one of its theorems, and the theory is the set of its theorems. Usually a theory is understood to be closed under the relation of logical consequence. 
Some accounts define a theory to be closed under the semantic consequence relation ( ⊨ {\displaystyle \models } ), while others define it to be closed under the syntactic consequence, or derivability relation ( ⊢ {\displaystyle \vdash } ). For a theory to be closed under a derivability relation, it must be associated with a deductive system that specifies how the theorems are derived. The deductive system may be stated explicitly, or it may be clear from the context. The closure of the empty set under the relation of logical consequence yields the set that contains just those sentences that are the theorems of the deductive system. In the broad sense in which the term is used within logic, a theorem does not have to be true, since the theory that contains it may be unsound relative to a given semantics, or relative to the standard interpretation of the underlying language. A theory that is inconsistent has all sentences as theorems. The definition of theorems as sentences of a formal language is useful within proof theory, which is a branch of mathematics that studies the structure of formal proofs and the structure of provable formulas. It is also important in model theory, which is concerned with the relationship between formal theories and structures that are able to provide a semantics for them through interpretation. Although theorems may be uninterpreted sentences, in practice mathematicians are more interested in the meanings of the sentences, i.e. in the propositions they express. What makes formal theorems useful and interesting is that they may be interpreted as true propositions and their derivations may be interpreted as a proof of their truth. A theorem whose interpretation is a true statement about a formal system (as opposed to within a formal system) is called a metatheorem. 
Some important theorems in mathematical logic are: Compactness of first-order logic Completeness of first-order logic Gödel's incompleteness theorems of first-order arithmetic Consistency of first-order arithmetic Tarski's undefinability theorem Church-Turing theorem of undecidability Löb's theorem Löwenheim–Skolem theorem Lindström's theorem Craig's theorem Cut-elimination theorem Syntax and semantics The concept of a formal theorem is fundamentally syntactic, in contrast to the notion of a true proposition, which introduces semantics. Different deductive systems can yield other interpretations, depending on the presumptions of the derivation rules (i.e. belief, justification or other modalities). The soundness of a formal system depends on whether or not all of its theorems are also validities. A validity is a formula that is true under any possible interpretation (for example, in classical propositional logic, validities are tautologies). A formal system is considered semantically complete when all of its theorems are also tautologies. Interpretation of a formal theorem Theorems and theories See also Law (mathematics) List of theorems List of theorems called fundamental Formula Inference Toy theorem Citations Notes References Works cited Boolos, George; Burgess, John; Jeffrey, Richard (2007). Computability and Logic (5th ed.). Cambridge University Press. Enderton, Herbert (2001). A Mathematical Introduction to Logic (2nd ed.). Harcourt Academic Press. Heath, Sir Thomas Little (1897). The works of Archimedes. Dover. Retrieved 2009-11-15. Hedman, Shawn (2004). A First Course in Logic. Oxford University Press. Hinman, Peter (2005). Fundamentals of Mathematical Logic. Wellesley, MA: A K Peters. Hoffman, Paul (1998). The Man Who Loved Only Numbers: The Story of Paul Erdős and the Search for Mathematical Truth. Hyperion, New York. ISBN 1-85702-829-5. Hodges, Wilfrid (1993). Model Theory. Cambridge University Press. Johnstone, P. T. (1987). 
Notes on Logic and Set Theory. Cambridge University Press. Monk, J. Donald (1976). Mathematical Logic. Springer-Verlag. Petkovsek, Marko; Wilf, Herbert; Zeilberger, Doron (1996). A = B. A.K. Peters, Wellesley, Massachusetts. ISBN 1-56881-063-6. Rautenberg, Wolfgang (2010). A Concise Introduction to Mathematical Logic (3rd ed.). Springer. van Dalen, Dirk (1994). Logic and Structure (3rd ed.). Springer-Verlag. Wentworth, G.; Smith, D.E. (1913). Plane Geometry. Ginn & Co. Further reading Chiswell, Ian; Hodges, Wilfred (2007). Mathematical Logic. Oxford University Press. Hunter, Geoffrey (1996) [1971]. Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press (published 1973). ISBN 9780520023567. OCLC 36312727. Mates, Benson (1972). Elementary Logic. Oxford University Press. ISBN 0-19-501491-X. External links Media related to Theorems at Wikimedia Commons Weisstein, Eric W. "Theorem". MathWorld. Theorem of the Day
Barthélémy Bisengimana Rwema (born 12 May 1935) was a Zairean official who served as head of the Bureau of the President under Mobutu Sese Seko from May 1969 to February 1977. Bisengimana was a member of the Tutsi ethnic group whose rise to prominence was largely the result of the complete dependence of the Banyarwanda upon the central government for power, which made them reliable supporters. A native of Cyangugu Province in Rwanda, in 1961 Bisengimana was the first graduate with a degree in electrical engineering from Lovanium University in Kinshasa. Bisengimana aided many Rwandan Tutsis in North and South Kivu to acquire land and start lucrative businesses. Andre Kalinda, a chief of the Hunde and territorial administrator of Masisi, became the most powerful chief due to his connections with both Bisengimana and the Acogenoki. At his height in 1972, Bisengimana managed to get the Political Bureau of the ruling Mouvement Populaire de la Révolution (MPR) to pass a citizenship decree in which everyone originating from "Ruanda-Urundi" and residing in then-Belgian Congo on or before January 1950 was automatically granted citizenship. This Law 72-002 amended the MPR's statutes and became referred to as "Article 15". When the law, which further allowed the new citizens to claim land rights, went into effect in 1973, a number of Tutsi refugees legally received plantations and ranches that had been previously owned by Belgian settlers. Among these was Bisengimana, who claimed the Osso concession, which contained the largest number of cattle owned by white settlers in Masisi. Bisengimana was dismissed in 1977, following allegations of receiving kickbacks from a textile plant in Kisangani. Following his removal, there was increasing pressure to reverse Article 15, resulting in the passing of Law 81-002 on 29 June 1981. Footnotes References Lemarchand, René (2009). The Dynamics of Violence in Central Africa. Philadelphia: University of Pennsylvania Press. 
ISBN 978-0-8122-4120-4. Prunier, Gérard (2009). Africa's World War: Congo, the Rwandan Genocide, and the Making of a Continental Catastrophe. Oxford: Oxford University Press. ISBN 978-0-19-537420-9.
A reciprocating electric motor is a motor in which the armature moves back and forth rather than circularly. Early electric motors were sometimes of the reciprocating type, such as those made by Daniel Davis in the 1840s. Today, reciprocating electric motors are rare but they do have some niche applications, e.g. in linear compressors for cryogenics and as educational toys. History Daniel Davis was an early maker of reciprocating electric motors. As can be seen in these examples, early motors of this type often followed the general layout of the steam engines of the day, simply replacing the piston-and-cylinder with an electromagnetic solenoid. Design A reciprocating electric motor uses an alternating magnetic field to move its armature back and forth, rather than circularly as in a conventional electric motor. A single field coil may be placed at one end of the armature's possible movement, or a field coil may be used at each end. The armature may be a permanent magnet, in which case the coil or coils can exert both repulsive and attractive force on the armature. If there are two coils, they will be wound and connected so that their like poles face each other, so that when (for example) the poles facing the armature are both negative, one pole will attract the armature's south pole while the other will repel its north pole. When the armature reaches the extreme of its movement, polarity to the coils is reversed. The armature may instead be made of ferromagnetic material, as in an electromagnetic solenoid. In this case the current in the coils will alternate between on and off, rather than between polarities. A single-coil motor with a non-magnetic armature would require a spring or some other "return" mechanism to move the armature away from the coil upon completion of the "attract" cycle. An "interrupter"-style electromechanical buzzer operates on this same principle. A dual-coil motor would alternately energize the two coils. 
Where the motor is adapted to produce rotary motion, the return mechanism consists of a crankshaft and flywheel. This is an extremely simple motor, such that demonstration models may be easily constructed for teaching purposes. As a practical motor it has several disadvantages. Magnetic field strength drops off rapidly with increasing distance. In the reciprocating electric motor the distance between armature and field coil must necessarily increase considerably over its minimum value; this reduces the motor's output power and starting force. Vibration is also an issue. Applications Linear compressors A design for a linear compressor of this type has been produced by the Cryogenic Engineering Group at the University of Oxford. Pumps See Plunger pump Electric shavers Some electric shavers use reciprocating motors. Toys Educational toys can be built as DIY projects. Some of them have even been patented (e.g. one in 1929, another in 1963). See also Reciprocating engine
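The drive cycle described above (reversing the coil polarity each time the armature reaches an extreme of travel) can be illustrated with a deliberately crude one-dimensional toy model. All numbers below are arbitrary, and real effects such as coil inductance and field nonlinearity are ignored; the point is only to show that polarity reversal at the ends of travel sustains the oscillation.

```python
# Toy 1-D model of a reciprocating motor armature (illustrative only).
# Assumptions: the force on a permanent-magnet armature is proportional
# to the coil current, the drive polarity flips at each end of travel,
# and a small damping term stands in for all losses.

def simulate(steps=20000, dt=1e-4):
    m, k_c, damping = 0.05, 2.0, 0.1   # mass (kg), force constant, damping
    limit = 0.01                        # travel limit (m)
    x, v, current = 0.0, 0.0, 1.0       # position, velocity, drive polarity
    reversals = 0
    for _ in range(steps):
        force = k_c * current - damping * v
        v += force / m * dt
        x += v * dt
        if abs(x) >= limit:             # end of stroke: reverse coil polarity
            x = limit if x > 0 else -limit
            current = -current
            reversals += 1
    return reversals

print(simulate())  # armature oscillates back and forth many times
```

A crankshaft-and-flywheel conversion to rotary motion would replace the hard travel limit with a kinematic constraint, but the polarity-switching logic is the same.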
In classical mechanics, impulse (symbolized by J or Imp) is the change in momentum of an object. If the initial momentum of an object is p1, and a subsequent momentum is p2, the object has received an impulse J: J = p 2 − p 1 . {\displaystyle \mathbf {J} =\mathbf {p} _{2}-\mathbf {p} _{1}.} Momentum is a vector quantity, so impulse is also a vector quantity: ∑ F Δ t = Δ p . {\displaystyle \sum \mathbf {F} \,\Delta t=\Delta \mathbf {p} .} Newton’s second law of motion states that the rate of change of momentum of an object is equal to the resultant force F acting on the object: F = p 2 − p 1 Δ t , {\displaystyle \mathbf {F} ={\frac {\mathbf {p} _{2}-\mathbf {p} _{1}}{\Delta t}},} so the impulse J delivered by a steady force F acting for time Δt is: J = F Δ t . {\displaystyle \mathbf {J} =\mathbf {F} \Delta t.} The impulse delivered by a varying force acting from time a to b is the integral of the force F with respect to time: J = ∫ a b F d t . {\displaystyle \mathbf {J} =\int _{a}^{b}\mathbf {F} \,\mathrm {d} t.} The SI unit of impulse is the newton second (N⋅s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg⋅m/s). The corresponding English engineering unit is the pound-second (lbf⋅s), and in the British Gravitational System, the unit is the slug-foot per second (slug⋅ft/s). Mathematical derivation in the case of an object of constant mass Impulse J produced from time t1 to t2 is defined to be J = ∫ t 1 t 2 F d t , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}\mathbf {F} \,\mathrm {d} t,} where F is the resultant force applied from t1 to t2. From Newton's second law, force is related to momentum p by F = d p d t . 
{\displaystyle \mathbf {F} ={\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}.} Therefore, J = ∫ t 1 t 2 d p d t d t = ∫ p 1 p 2 d p = p 2 − p 1 = Δ p , {\displaystyle {\begin{aligned}\mathbf {J} &=\int _{t_{1}}^{t_{2}}{\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}\,\mathrm {d} t\\&=\int _{\mathbf {p} _{1}}^{\mathbf {p} _{2}}\mathrm {d} \mathbf {p} \\&=\mathbf {p} _{2}-\mathbf {p} _{1}=\Delta \mathbf {p} ,\end{aligned}}} where Δp is the change in linear momentum from time t1 to t2. This is often called the impulse-momentum theorem (analogous to the work-energy theorem). As a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant: J = ∫ t 1 t 2 F d t = Δ p = m v 2 − m v 1 , {\displaystyle \mathbf {J} =\int _{t_{1}}^{t_{2}}\mathbf {F} \,\mathrm {d} t=\Delta \mathbf {p} =m\mathbf {v_{2}} -m\mathbf {v_{1}} ,} where F is the resultant force applied, t1 and t2 are times when the impulse begins and ends, respectively, m is the mass of the object, v2 is the final velocity of the object at the end of the time interval, and v1 is the initial velocity of the object when the time interval begins. Impulse has the same units and dimensions (MLT−1) as momentum. In the International System of Units, these are kg⋅m/s = N⋅s. In English engineering units, they are slug⋅ft/s = lbf⋅s. The term "impulse" is also used to refer to a fast-acting force or impact. This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible. However, this is a useful model for computing the effects of ideal collisions (such as in videogame physics engines). Additionally, in rocketry, the term "total impulse" is commonly used and is considered synonymous with the term "impulse". 
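The impulse-momentum theorem can be checked numerically: integrate a time-varying force to get J, and in parallel step the velocity with Newton's second law to get Δp. The sketch below uses an arbitrary half-sine force pulse on a constant mass.

```python
# Numerically verify J = ∫F dt = Δp for a 1-D force F(t) = F0 sin(pi t / T)
# acting on a constant mass m over 0 <= t <= T. All values are arbitrary.
import math

m, F0, T = 2.0, 10.0, 0.5
n = 100000
dt = T / n

v = 0.0          # initial velocity, so p1 = 0
J = 0.0
for i in range(n):
    t = (i + 0.5) * dt            # midpoint rule
    F = F0 * math.sin(math.pi * t / T)
    J += F * dt                   # impulse: integral of force over time
    v += F / m * dt               # Newton's second law: dv = (F/m) dt

delta_p = m * v - 0.0             # p2 - p1
print(J, delta_p)                 # both approach 2*F0*T/pi ≈ 3.1831
```

The analytic value of the integral is 2·F0·T/π, and the two computed quantities agree to floating-point accuracy because each uses the same force samples.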
Variable mass The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet- or rocket-propelled vehicles. In the case of rockets, the impulse imparted can be normalized by unit of propellant expended, to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the engine's specific impulse (or nozzle exhaust velocity) and the vehicle's propellant-mass ratio. See also Wave–particle duality defines the impulse of a wave collision. The preservation of momentum in the collision is then called phase matching. Applications include: Compton effect Nonlinear optics Acousto-optic modulator Electron phonon scattering Dirac delta function, mathematical abstraction of a pure impulse Notes References Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4. External links Dynamics
In computer networking, Rate Based Satellite Control Protocol (RBSCP) is a tunneling method proposed by Cisco to improve the performance of satellite network links with high latency and error rates. The problem RBSCP addresses is that the long RTT on the link keeps TCP virtual circuits in slow start for a long time. This, combined with the high loss rate, results in very low throughput on the channel. Since satellite links may be high-throughput, overall link utilization may fall below what is optimal from both a technical and an economic standpoint. Means of operation RBSCP works by tunneling the usual IP packets within IP packets. The transport protocol identifier is 199. On each end of the tunnel, routers buffer packets to utilize the link better. In addition to this, RBSCP tunnel routers: modify TCP options at connection setup. implement a Performance Enhancing Proxy (PEP) that resends lost packets on behalf of the client, so loss is not interpreted as congestion. External links https://web.archive.org/web/20110706144353/http://cisco.biz/en/US/docs/ios/12_3t/12_3t7/feature/guide/gt_rbscp.html
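The slow-start problem can be made concrete with a back-of-envelope calculation: the bandwidth-delay product (BDP) of the link determines how many segments must be in flight to fill it, and slow start needs roughly log2 of that number of round trips to get there. The link parameters below are illustrative assumptions, not values from the RBSCP documentation.

```python
# Why long-RTT links keep TCP in slow start: a back-of-envelope sketch.
# Assumed example values: a 10 Mbit/s geostationary satellite link with
# ~550 ms round-trip time and a common 1460-byte MSS.
import math

link_rate_bps = 10_000_000
rtt_s = 0.550
mss_bytes = 1460
init_cwnd_segments = 10   # a common modern initial window (RFC 6928)

# Bandwidth-delay product: bytes that must be "in flight" to fill the pipe.
bdp_bytes = link_rate_bps / 8 * rtt_s
bdp_segments = bdp_bytes / mss_bytes

# In slow start the congestion window roughly doubles each RTT, so the
# number of RTTs needed to reach the BDP is about log2(BDP / initial cwnd).
rtts_to_fill = math.ceil(math.log2(bdp_segments / init_cwnd_segments))
time_to_fill_s = rtts_to_fill * rtt_s

print(f"BDP ≈ {bdp_segments:.0f} segments")
print(f"≈ {rtts_to_fill} RTTs (~{time_to_fill_s:.1f} s) before the link is full")
```

For these assumed numbers the connection spends several seconds ramping up before the link is fully used, which is the inefficiency the tunnel routers' buffering and TCP-option modification aim to reduce.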
A waveguide is a structure that guides waves by restricting the transmission of energy to one direction. Common types of waveguides include acoustic waveguides which direct sound, optical waveguides which direct light, and radio-frequency waveguides which direct electromagnetic waves other than light like radio waves. Without the physical constraint of a waveguide, waves would expand into three-dimensional space and their intensities would decrease according to the inverse square law. There are different types of waveguides for different types of waves. The original and most common meaning is a hollow conductive metal pipe used to carry high frequency radio waves, particularly microwaves. Dielectric waveguides are used at higher radio frequencies, and transparent dielectric waveguides and optical fibers serve as waveguides for light. In acoustics, air ducts and horns are used as waveguides for sound in musical instruments and loudspeakers, and specially-shaped metal rods conduct ultrasonic waves in ultrasonic machining. The geometry of a waveguide reflects its function; in addition to more common types that channel the wave in one dimension, there are two-dimensional slab waveguides which confine waves to two dimensions. The frequency of the transmitted wave also dictates the size of a waveguide: each waveguide has a cutoff wavelength determined by its size and will not conduct waves of greater wavelength; an optical fiber that guides light will not transmit microwaves which have a much larger wavelength. Some naturally occurring structures can also act as waveguides. The SOFAR channel layer in the ocean can guide the sound of whale song across enormous distances. Any shape of cross section of waveguide can support EM waves. Irregular shapes are difficult to analyse. Commonly used waveguides are rectangular and circular in shape. Uses The uses of waveguides for transmitting signals were known even before the term was coined. 
The phenomenon of sound waves guided through a taut wire has been known for a long time, as has sound through a hollow pipe such as a cave or medical stethoscope. Other uses of waveguides are in transmitting power between the components of a system such as radio, radar or optical devices. Waveguides are the fundamental principle of guided wave testing (GWT), one of the many methods of non-destructive evaluation. Specific examples: Optical fibers transmit light and signals for long distances with low attenuation and a wide usable range of wavelengths. In a microwave oven a waveguide transfers power from the magnetron, where waves are formed, to the cooking chamber. In a radar, a waveguide transfers radio frequency energy to and from the antenna, where the impedance needs to be matched for efficient power transmission (see below). Rectangular and circular waveguides are commonly used to connect feeds of parabolic dishes to their electronics, either low-noise receivers or power amplifier/transmitters. Waveguides are used in scientific instruments to measure optical, acoustic and elastic properties of materials and objects. The waveguide can be put in contact with the specimen (as in a medical ultrasonography), in which case the waveguide ensures that the power of the testing wave is conserved, or the specimen may be put inside the waveguide (as in a dielectric constant measurement), so that smaller objects can be tested and the accuracy is better. A transmission line is a commonly used specific type of waveguide. History The first structure for guiding waves was proposed by J. J. Thomson in 1893, and was first experimentally tested by Oliver Lodge in 1894. The first mathematical analysis of electromagnetic waves in a metal cylinder was performed by Lord Rayleigh in 1897.: 8 For sound waves, Lord Rayleigh published a full mathematical analysis of propagation modes in his seminal work, "The Theory of Sound". 
Jagadish Chandra Bose researched millimeter wavelengths using waveguides, and in 1897 described to the Royal Institution in London his research carried out in Kolkata. The study of dielectric waveguides (such as optical fibers, see below) began as early as the 1920s, by several people, most famous of which are Rayleigh, Sommerfeld and Debye. Optical fiber began to receive special attention in the 1960s due to its importance to the communications industry. The development of radio communication initially occurred at the lower frequencies because these could be more easily propagated over large distances. The long wavelengths made these frequencies unsuitable for use in hollow metal waveguides because of the impractically large diameter tubes required. Consequently, research into hollow metal waveguides stalled and the work of Lord Rayleigh was forgotten for a time and had to be rediscovered by others. Practical investigations resumed in the 1930s by George C. Southworth at Bell Labs and Wilmer L. Barrow at MIT. Southworth at first took the theory from papers on waves in dielectric rods because the work of Lord Rayleigh was unknown to him. This misled him somewhat; some of his experiments failed because he was not aware of the phenomenon of waveguide cutoff frequency already found in Lord Rayleigh's work. Serious theoretical work was taken up by John R. Carson and Sallie P. Mead. This work led to the discovery that for the TE01 mode in circular waveguide losses go down with frequency and at one time this was a serious contender for the format for long-distance telecommunications.: 544–548 The importance of radar in World War II gave a great impetus to waveguide research, at least on the Allied side. The magnetron, developed in 1940 by John Randall and Harry Boot at the University of Birmingham in the United Kingdom, provided a good power source and made microwave radar feasible. 
The most important centre of US research was at the Radiation Laboratory (Rad Lab) at MIT but many others took part in the US, and in the UK such as the Telecommunications Research Establishment. The head of the Fundamental Development Group at Rad Lab was Edward Mills Purcell. His researchers included Julian Schwinger, Nathan Marcuvitz, Carol Gray Montgomery, and Robert H. Dicke. Much of the Rad Lab work concentrated on finding lumped element models of waveguide structures so that components in waveguide could be analysed with standard circuit theory. Hans Bethe was also briefly at Rad Lab, but while there he produced his small aperture theory which proved important for waveguide cavity filters, first developed at Rad Lab. The German side, on the other hand, largely ignored the potential of waveguides in radar until very late in the war. So much so that when radar parts from a downed British plane were sent to Siemens & Halske for analysis, even though they were recognised as microwave components, their purpose could not be identified. At that time, microwave techniques were badly neglected in Germany. It was generally believed that it was of no use for electronic warfare, and those who wanted to do research work in this field were not allowed to do so. German academics were even allowed to continue publicly publishing their research in this field because it was not felt to be important.: 548–554 : 1055, 1057 Immediately after World War II waveguide was the technology of choice in the microwave field. However, it has some problems; it is bulky, expensive to produce, and the cutoff frequency effect makes it difficult to produce wideband devices. Ridged waveguide can increase bandwidth beyond an octave, but a better solution is to use a technology working in TEM mode (that is, non-waveguide) such as coaxial conductors since TEM does not have a cutoff frequency. 
A shielded rectangular conductor can also be used and this has certain manufacturing advantages over coax and can be seen as the forerunner of the planar technologies (stripline and microstrip). However, planar technologies really started to take off when printed circuits were introduced. These methods are significantly cheaper than waveguide and have largely taken its place in most bands. However, waveguide is still favoured in the higher microwave bands from around Ku band upwards.: 556–557 : 21–27, 21–50 Properties Propagation modes and cutoff frequencies A propagation mode in a waveguide is one solution of the wave equations, or, in other words, the form of the wave. Due to the constraints of the boundary conditions, there are only limited frequencies and forms for the wave function which can propagate in the waveguide. The lowest frequency in which a certain mode can propagate is the cutoff frequency of that mode. The mode with the lowest cutoff frequency is the fundamental mode of the waveguide, and its cutoff frequency is the waveguide cutoff frequency.: 38 Propagation modes are computed by solving the Helmholtz equation alongside a set of boundary conditions depending on the geometrical shape and materials bounding the region. The usual assumption for infinitely long uniform waveguides allows us to assume a propagating form for the wave, i.e. stating that every field component has a known dependency on the propagation direction (i.e. z {\displaystyle z} ). 
More specifically, the common approach is to first replace all unknown time-varying fields u ( x , y , z , t ) {\displaystyle u(x,y,z,t)} (assuming for simplicity to describe the fields in cartesian components) with their complex phasors representation U ( x , y , z ) {\displaystyle U(x,y,z)} , sufficient to fully describe any infinitely long single-tone signal at frequency f {\displaystyle f} , (angular frequency ω = 2 π f {\displaystyle \omega =2\pi f} ), and rewrite the Helmholtz equation and boundary conditions accordingly. Then, every unknown field is forced to have a form like U ( x , y , z ) = U ^ ( x , y ) e − γ z {\displaystyle U(x,y,z)={\hat {U}}(x,y)e^{-\gamma z}} , where the γ {\displaystyle \gamma } term represents the propagation constant (still unknown) along the direction along which the waveguide extends to infinity. The Helmholtz equation can be rewritten to accommodate such form and the resulting equality needs to be solved for γ {\displaystyle \gamma } and U ^ ( x , y ) {\displaystyle {\hat {U}}(x,y)} , yielding in the end an eigenvalue equation for γ {\displaystyle \gamma } and a corresponding eigenfunction U ^ ( x , y ) γ {\displaystyle {\hat {U}}(x,y)_{\gamma }} for each solution of the former. The propagation constant γ {\displaystyle \gamma } of the guided wave is complex, in general. For a lossless case, the propagation constant might be found to take on either real or imaginary values, depending on the chosen solution of the eigenvalue equation and on the angular frequency ω {\displaystyle \omega } . When γ {\displaystyle \gamma } is purely real, the mode is said to be "below cutoff", since the amplitude of the field phasors tends to exponentially decrease with propagation; an imaginary γ {\displaystyle \gamma } , instead, represents modes said to be "in propagation" or "above cutoff", as the complex amplitude of the phasors does not change with z {\displaystyle z} . 
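For the common special case of a hollow, air-filled rectangular metal waveguide, solving the eigenvalue problem above yields the closed-form cutoff frequencies f_c(m, n) = (c/2)·sqrt((m/a)² + (n/b)²), where a and b are the broad and narrow wall dimensions. The sketch below evaluates this formula, using the standard WR-90 (X-band) guide dimensions as the example geometry.

```python
# Cutoff frequencies of TE(m,n) modes in an air-filled rectangular waveguide:
#   f_c = (c / 2) * sqrt((m / a)**2 + (n / b)**2)
# Example dimensions are those of the standard WR-90 (X-band) guide.
import math

c = 299_792_458.0          # speed of light in vacuum (m/s)
a, b = 22.86e-3, 10.16e-3  # broad and narrow wall dimensions (m)

def cutoff_hz(m, n):
    return (c / 2) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

for m, n in [(1, 0), (2, 0), (0, 1), (1, 1)]:
    print(f"TE{m}{n}: {cutoff_hz(m, n) / 1e9:.2f} GHz")
# TE10 is the fundamental mode, roughly 6.56 GHz for WR-90; the guide's
# recommended operating band (about 8.2-12.4 GHz) lies above this cutoff
# but below the cutoffs of the higher-order modes.
```

A signal below the TE10 cutoff corresponds to a purely real propagation constant for every mode, so no mode is "in propagation" and the wave decays exponentially along the guide.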
Impedance matching In circuit theory, the impedance is a generalization of electrical resistance in the case of alternating current, and is measured in ohms ( Ω {\displaystyle \Omega } ). A waveguide in circuit theory is described by a transmission line having a length and characteristic impedance.: 2–3, 6–12 : 14 In other words, the impedance indicates the ratio of voltage to current of the circuit component (in this case a waveguide) during propagation of the wave. This description of the waveguide was originally intended for alternating current, but is also suitable for electromagnetic and sound waves, once the wave and material properties (such as pressure, density, dielectric constant) are properly converted into electrical terms (current and impedance for example).: 14 Impedance matching is important when components of an electric circuit are connected (waveguide to antenna for example): The impedance ratio determines how much of the wave is transmitted forward and how much is reflected. In connecting a waveguide to an antenna a complete transmission is usually required, so an effort is made to match their impedances. The reflection coefficient can be calculated using: Γ = Z 2 − Z 1 Z 2 + Z 1 {\displaystyle \Gamma ={\frac {Z_{2}-Z_{1}}{Z_{2}+Z_{1}}}} , where Γ {\displaystyle \Gamma } (Gamma) is the reflection coefficient (0 denotes full transmission, 1 full reflection, and 0.5 is a reflection of half the incoming voltage), Z 1 {\displaystyle Z_{1}} and Z 2 {\displaystyle Z_{2}} are the impedance of the first component (from which the wave enters) and the second component, respectively. An impedance mismatch creates a reflected wave, which added to the incoming waves creates a standing wave. 
An impedance mismatch can be also quantified with the standing wave ratio (SWR or VSWR for voltage), which is connected to the impedance ratio and reflection coefficient by: V S W R = | V | m a x | V | m i n = 1 + | Γ | 1 − | Γ | {\displaystyle \mathrm {VSWR} ={\frac {|V|_{\rm {max}}}{|V|_{\rm {min}}}}={\frac {1+|\Gamma |}{1-|\Gamma |}}} , where | V | m i n / m a x {\displaystyle \left|V\right|_{\rm {min/max}}} are the minimum and maximum values of the voltage absolute value, and the VSWR is the voltage standing wave ratio, which value of 1 denotes full transmission, without reflection and thus no standing wave, while very large values mean high reflection and standing wave pattern. Electromagnetic waveguides Radio-frequency waveguides Waveguides can be constructed to carry waves over a wide portion of the electromagnetic spectrum, but are especially useful in the microwave and optical frequency ranges. Depending on the frequency, they can be constructed from either conductive or dielectric materials. Waveguides are used for transferring both power and communication signals.: 1–3 : xiii–xiv Optical waveguides Waveguides used at optical frequencies are typically dielectric waveguides, structures in which a dielectric material with high permittivity, and thus high index of refraction, is surrounded by a material with lower permittivity. The structure guides optical waves by total internal reflection. An example of an optical waveguide is optical fiber. Other types of optical waveguide are also used, including photonic-crystal fiber, which guides waves by any of several distinct mechanisms. Guides in the form of a hollow tube with a highly reflective inner surface have also been used as light pipes for illumination applications. The inner surfaces may be polished metal, or may be covered with a multilayer film that guides light by Bragg reflection (this is a special case of a photonic-crystal fiber). 
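The reflection-coefficient and VSWR formulas given above translate directly into a few lines of code; the sketch below uses arbitrary example impedances.

```python
# Reflection coefficient and VSWR at a junction between two impedances:
#   Gamma = (Z2 - Z1) / (Z2 + Z1)
#   VSWR  = (1 + |Gamma|) / (1 - |Gamma|)

def reflection_coefficient(z1, z2):
    return (z2 - z1) / (z2 + z1)

def vswr(gamma):
    g = abs(gamma)
    return float("inf") if g == 1 else (1 + g) / (1 - g)

# Matched junction: no reflection, VSWR = 1, no standing wave.
g = reflection_coefficient(50.0, 50.0)
print(g, vswr(g))                            # 0.0 1.0

# Mismatch: a 50-ohm line into a 100-ohm load (example values).
g = reflection_coefficient(50.0, 100.0)
print(round(abs(g), 3), round(vswr(g), 2))   # 0.333 2.0
```

Because the functions accept any numeric type, complex impedances can be passed in unchanged; the VSWR then depends only on the magnitude of the resulting complex reflection coefficient.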
One can also use small prisms around the pipe which reflect light via total internal reflection; such confinement is necessarily imperfect, however, since total internal reflection can never truly guide light within a lower-index core (in the prism case, some light leaks out at the prism corners). Acoustic waveguides An acoustic waveguide is a physical structure for guiding sound waves. Sound in an acoustic waveguide behaves like electromagnetic waves on a transmission line. Waves on a string, like the ones in a tin can telephone, are a simple example of an acoustic waveguide. Another example is the pressure waves in the pipes of an organ. The term acoustic waveguide is also used to describe elastic waves guided in micro-scale devices, like those employed in piezoelectric delay lines and in stimulated Brillouin scattering. Mathematical waveguides Waveguides are interesting objects of study from a strictly mathematical perspective. A waveguide (or tube) is defined as a type of boundary condition on the wave equation such that the wave function must be equal to zero on the boundary and such that the allowed region is finite in all dimensions but one (an infinitely long cylinder is an example). A large number of interesting results can be proven from these general conditions. It turns out that any tube with a bulge (where the width of the tube increases) admits at least one bound state that exists inside the mode gaps. The frequencies of all the bound states can be identified by using a pulse that is short in time. This can be shown using variational principles. An interesting result by Jeffrey Goldstone and Robert Jaffe is that any tube of constant width with a twist admits a bound state. Sound synthesis Sound synthesis uses digital delay lines as computational elements to simulate wave propagation in tubes of wind instruments and the vibrating strings of string instruments.
Wikipedia
In fluid dynamics, a Mach wave, also known as a weak discontinuity, is a pressure wave traveling with the speed of sound, caused by a slight change of pressure added to a compressible flow. These weak waves can combine in supersonic flow to become a shock wave if sufficient Mach waves are present at any location. Such a shock wave is called a Mach stem or Mach front. Thus, it is possible to have shockless compression or expansion in a supersonic flow by having the production of Mach waves sufficiently spaced (cf. isentropic compression in supersonic flows). A Mach wave is the weak limit of an oblique shock wave, where time averages of flow quantities do not change (a normal shock is the other limit). If the size of the object moving at the speed of sound is near zero, the domain of influence of the wave is called a Mach cone. Mach angle A Mach wave propagates across the flow at the Mach angle μ, which is the angle formed between the Mach wave wavefront and a vector that points opposite to the vector of motion. It is given by {\displaystyle \mu =\arcsin \left({\frac {1}{M}}\right),} where M is the Mach number. Mach waves can be used in schlieren or shadowgraph observations to determine the local Mach number of the flow. Early observations by Ernst Mach used grooves in the wall of a duct to produce Mach waves, which were then photographed by the schlieren method to obtain data about the flow in nozzles and ducts. Mach angles may also occasionally be visualized through condensation in air, for example the vapor cones around aircraft during transonic flight.
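The Mach-angle relation is straightforward to evaluate; a minimal sketch (the example Mach numbers are illustrative):

```python
import math

def mach_angle_deg(mach: float) -> float:
    """Mach angle mu = arcsin(1/M), in degrees, for Mach number M >= 1."""
    if mach < 1.0:
        raise ValueError("Mach waves form only in supersonic flow (M >= 1)")
    return math.degrees(math.asin(1.0 / mach))

print(round(mach_angle_deg(1.0), 9))  # 90.0: the wavefront is normal to the motion
print(round(mach_angle_deg(2.0), 9))  # 30.0: the cone narrows as M grows
```

Inverting the same relation, a measured Mach angle in a schlieren image gives the local Mach number as M = 1/sin(μ).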
The Richtmyer–Meshkov instability (RMI) occurs when two fluids of different density are impulsively accelerated, normally by the passage of a shock wave. The development of the instability begins with small-amplitude perturbations which initially grow linearly with time. This is followed by a nonlinear regime, with bubbles appearing in the case of a light fluid penetrating a heavy fluid, and with spikes appearing in the case of a heavy fluid penetrating a light fluid. A chaotic regime is eventually reached and the two fluids mix. This instability can be considered the impulsive-acceleration limit of the Rayleigh–Taylor instability. Dispersion relation For ideal MHD: {\displaystyle (\omega ^{2}-2k_{\parallel }^{2}/\beta )(\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )=0} For Hall MHD: {\displaystyle (\omega ^{2}-2k_{\parallel }^{2}/\beta )(\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )-2d_{s}^{2}k_{\parallel }^{2}k^{2}\omega ^{2}(\omega ^{2}-k^{2})/\beta =0} For QMHD: {\displaystyle ((1+2/\beta c^{2})\omega ^{2}-2k_{\parallel }^{2}/\beta )((1+2/\beta c^{2})\omega ^{4}-(2/\beta +1)k^{2}\omega ^{2}+2k_{\parallel }^{2}k^{2}/\beta )-2d_{s}^{2}k_{\parallel }^{2}k^{2}\omega ^{2}(\omega ^{2}-k^{2})/\beta =0} History R. D. Richtmyer provided a theoretical prediction, and E. E. Meshkov (Евгений Евграфович Мешков) provided experimental verification. Materials in the cores of stars, like cobalt-56 from Supernova 1987A, were observed earlier than expected.
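The ideal-MHD relation above factorizes into a linear and a quadratic polynomial in ω², so its three branches can be found in closed form; a minimal sketch (the values of β, k, and k∥ are arbitrary illustrative inputs, and the branch labels in the comments are only an interpretation):

```python
import math

def dispersion(w2: float, beta: float, k: float, k_par: float) -> float:
    """Left-hand side of the ideal-MHD dispersion relation,
    evaluated at w2 = omega**2; a root makes this zero."""
    return ((w2 - 2.0 * k_par**2 / beta)
            * (w2**2 - (2.0 / beta + 1.0) * k**2 * w2
               + 2.0 * k_par**2 * k**2 / beta))

def branches(beta: float, k: float, k_par: float) -> list[float]:
    """The three roots omega**2: one from the linear factor and two from
    the quadratic factor.  The discriminant is non-negative whenever
    k_par <= k, so the roots are real."""
    b = -(2.0 / beta + 1.0) * k**2
    c = 2.0 * k_par**2 * k**2 / beta
    disc = math.sqrt(b * b - 4.0 * c)
    # Linear factor (Alfven-like branch), then the fast/slow pair.
    return [2.0 * k_par**2 / beta, (-b + disc) / 2.0, (-b - disc) / 2.0]

roots = branches(beta=0.5, k=1.0, k_par=0.6)
# Every root should satisfy the full dispersion relation.
print(all(abs(dispersion(w2, 0.5, 1.0, 0.6)) < 1e-9 for w2 in roots))  # True
```

Since all three roots ω² come out positive here, the corresponding frequencies are real and these modes are oscillatory rather than unstable for this parameter choice.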
This was evidence of mixing due to Richtmyer–Meshkov and Rayleigh–Taylor instabilities. Examples During the implosion of an inertial confinement fusion target, the hot shell material surrounding the cold D–T fuel layer is shock-accelerated. This instability is also seen in magnetized target fusion (MTF). Mixing of the shell material and fuel is not desired, and efforts are made to minimize any tiny imperfections or irregularities which will be magnified by RMI. Supersonic combustion in a scramjet may benefit from RMI, as the fuel-oxidant interface is enhanced by the breakup of the fuel into finer droplets. Studies of deflagration-to-detonation transition (DDT) processes also show that RMI-induced flame acceleration can result in detonation.
Swelling index may refer to the following material parameters that quantify volume change: Crucible swelling index, also known as free swelling index, in coal assay Swelling capacity, the amount of a liquid that can be absorbed by a polymer Shrink–swell capacity in soil mechanics Unload-reload constant (κ) in critical state soil mechanics
In physics, topological order describes a state or phase of matter that arises in systems with non-local interactions, such as entanglement in quantum mechanics, and floppy modes in elastic systems. Whereas classical phases of matter such as gases and solids correspond to microscopic patterns in the spatial arrangement of particles arising from short range interactions, topological orders correspond to patterns of long-range quantum entanglement. States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition. Technically, topological order occurs at zero temperature. Various topologically ordered states have interesting properties, such as (1) ground state degeneracy and fractional statistics or non-abelian group statistics that can be used to realize a topological quantum computer; (2) perfect conducting edge states that may have important device applications; (3) emergent gauge field and Fermi statistics that suggest a quantum information origin of elementary particles; (4) topological entanglement entropy that reveals the entanglement origin of topological order, etc. Topological order is important in the study of several physical systems such as spin liquids and the quantum Hall effect, along with potential applications to fault-tolerant quantum computation. Topological insulators and topological superconductors (beyond 1D) do not have topological order as defined above, their entanglements being only short-ranged, but are examples of symmetry-protected topological order. Background Matter composed of atoms can have different properties and appear in different forms, such as solid, liquid, superfluid, etc. These various forms of matter are often called states of matter or phases. According to condensed matter physics and the principle of emergence, the different properties of materials generally arise from the different ways in which the atoms are organized in the materials.
Those different organizations of the atoms (or other particles) are formally called the orders in the materials. Atoms can organize in many ways which lead to many different orders and many different types of materials. Landau symmetry-breaking theory provides a general understanding of these different orders. It points out that different orders really correspond to different symmetries in the organizations of the constituent atoms. As a material changes from one order to another order (i.e., as the material undergoes a phase transition), what happens is that the symmetry of the organization of the atoms changes. For example, atoms have a random distribution in a liquid, so a liquid remains the same as we displace atoms by an arbitrary distance. We say that a liquid has a continuous translation symmetry. After a phase transition, a liquid can turn into a crystal. In a crystal, atoms organize into a regular array (a lattice). A lattice remains unchanged only when we displace it by a particular distance (integer times a lattice constant), so a crystal has only discrete translation symmetry. The phase transition between a liquid and a crystal is a transition that reduces the continuous translation symmetry of the liquid to the discrete symmetry of the crystal. Similarly this holds for rotational symmetry. Such a change in symmetry is called symmetry breaking. The essence of the difference between liquids and crystals is therefore that the organizations of atoms have different symmetries in the two phases. Landau symmetry-breaking theory has been a very successful theory. For a long time, physicists believed that Landau Theory described all possible orders in materials, and all possible (continuous) phase transitions. Discovery and characterization However, since the late 1980s, it has become gradually apparent that Landau symmetry-breaking theory may not describe all possible orders. 
In an attempt to explain high temperature superconductivity, the chiral spin state was introduced. At first, physicists still wanted to use Landau symmetry-breaking theory to describe the chiral spin state. They identified the chiral spin state as a state that breaks the time reversal and parity symmetries, but not the spin rotation symmetry. This should be the end of the story according to Landau's symmetry breaking description of orders. However, it was quickly realized that there are many different chiral spin states that have exactly the same symmetry, so symmetry alone was not enough to characterize different chiral spin states. This means that the chiral spin states contain a new kind of order that is beyond the usual symmetry description. The proposed new kind of order was named "topological order". The name "topological order" is motivated by the low energy effective theory of the chiral spin states, which is a topological quantum field theory (TQFT). New quantum numbers, such as ground state degeneracy (which can be defined on a closed space or an open space with gapped boundaries, including both Abelian topological orders and non-Abelian topological orders) and the non-Abelian geometric phase of degenerate ground states, were introduced to characterize and define the different topological orders in chiral spin states. More recently, it was shown that topological orders can also be characterized by topological entropy. But experiments soon indicated that chiral spin states do not describe high-temperature superconductors, and the theory of topological order became a theory with no experimental realization. However, the similarity between chiral spin states and quantum Hall states allows one to use the theory of topological order to describe different quantum Hall states. Just like chiral spin states, different quantum Hall states all have the same symmetry and are outside the Landau symmetry-breaking description.
One finds that the different orders in different quantum Hall states can indeed be described by topological orders, so the topological order does have experimental realizations. The fractional quantum Hall (FQH) state was discovered in 1982, before the introduction of the concept of topological order in 1989. But the FQH state is not the first experimentally discovered topologically ordered state. The superconductor, discovered in 1911, is the first experimentally discovered topologically ordered state; it has Z2 topological order. Although topologically ordered states usually appear in strongly interacting boson/fermion systems, a simple kind of topological order can also appear in free fermion systems. This kind of topological order corresponds to the integer quantum Hall state, which can be characterized by the Chern number of the filled energy band if we consider the integer quantum Hall state on a lattice. Theoretical calculations have proposed that such Chern numbers can be measured for a free fermion system experimentally. It is also well known that such a Chern number can be measured (maybe indirectly) by edge states. The most important characterization of topological orders would be the underlying fractionalized excitations (such as anyons) and their fusion statistics and braiding statistics (which can go beyond the quantum statistics of bosons or fermions). Current research shows that loop- and string-like excitations exist for topological orders in 3+1 dimensional spacetime, and that their multi-loop/string-braiding statistics are the crucial signatures for identifying 3+1 dimensional topological orders. The multi-loop/string-braiding statistics of 3+1 dimensional topological orders can be captured by the link invariants of particular topological quantum field theories in 4 spacetime dimensions. Mechanism A large class of 2+1D topological orders is realized through a mechanism called string-net condensation.
This class of topological orders can have a gapped edge and are classified by unitary fusion category (or monoidal category) theory. One finds that string-net condensation can generate infinitely many different types of topological orders, which may indicate that there are many different new types of materials remaining to be discovered. The collective motions of condensed strings give rise to excitations above the string-net condensed states. Those excitations turn out to be gauge bosons. The ends of strings are defects which correspond to another type of excitation. Those excitations are the gauge charges and can carry Fermi or fractional statistics. The condensations of other extended objects such as "membranes", "brane-nets", and fractals also lead to topologically ordered phases and "quantum glassiness". Mathematical formulation We know that group theory is the mathematical foundation of symmetry-breaking orders. What is the mathematical foundation of topological order? It was found that a subclass of 2+1D topological orders—Abelian topological orders—can be classified by a K-matrix approach. String-net condensation suggests that tensor category theory (such as fusion category or monoidal category theory) is part of the mathematical foundation of topological order in 2+1D. More recent research suggests that (up to invertible topological orders that have no fractionalized excitations): 2+1D bosonic topological orders are classified by unitary modular tensor categories. 2+1D bosonic topological orders with symmetry G are classified by G-crossed tensor categories. 2+1D bosonic/fermionic topological orders with symmetry G are classified by unitary braided fusion categories over a symmetric fusion category that have modular extensions. The symmetric fusion category is Rep(G) for bosonic systems and sRep(G) for fermionic systems. Topological order in higher dimensions may be related to n-category theory.
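The lattice Chern number mentioned earlier for free-fermion (integer quantum Hall) states can be computed numerically. A sketch using the Fukui–Hatsugai–Suzuki discretization on the Qi–Wu–Zhang two-band model; the model and the parameter values are illustrative choices, not taken from the article:

```python
import numpy as np

# Pauli matrices for the two-band Bloch Hamiltonian.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz_hamiltonian(kx: float, ky: float, m: float) -> np.ndarray:
    """Qi-Wu-Zhang model: a minimal two-band lattice Chern insulator."""
    return (np.sin(kx) * SX + np.sin(ky) * SY
            + (m + np.cos(kx) + np.cos(ky)) * SZ)

def chern_number(m: float, n: int = 40) -> int:
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki
    method: sum the Berry phase of each plaquette of an n-by-n grid
    covering the Brillouin zone, then divide by 2*pi."""
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    states = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            states[i, j] = vecs[:, 0]  # eigh sorts ascending: column 0 = lower band
    total = 0.0
    for i in range(n):
        for j in range(n):
            u00 = states[i, j]
            u10 = states[(i + 1) % n, j]
            u11 = states[(i + 1) % n, (j + 1) % n]
            u01 = states[i, (j + 1) % n]
            # Plaquette Berry phase from the product of link overlaps;
            # gauge (phase) choices of the eigenvectors cancel in the loop.
            total += np.angle(np.vdot(u00, u10) * np.vdot(u10, u11)
                              * np.vdot(u11, u01) * np.vdot(u01, u00))
    return round(total / (2.0 * np.pi))

print(abs(chern_number(1.0)))  # 1: topologically non-trivial phase
print(chern_number(3.0))       # 0: trivial phase
```

The result is an integer by construction, which mirrors the physics: the Chern number cannot change without closing the band gap (here, at m = 0 and m = ±2 for this model).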
Quantum operator algebra is a very important mathematical tool in studying topological orders. Some also suggest that topological order is mathematically described by extended quantum symmetry. Applications The materials described by Landau symmetry-breaking theory have had a substantial impact on technology. For example, ferromagnetic materials that break spin rotation symmetry can be used as the media of digital information storage. A hard drive made of ferromagnetic materials can store gigabytes of information. Liquid crystals that break the rotational symmetry of molecules find wide application in display technology. Crystals that break translation symmetry lead to well defined electronic bands which in turn allow us to make semiconducting devices such as transistors. Different types of topological orders are even richer than different types of symmetry-breaking orders. This suggests their potential for exciting, novel applications. One theorized application would be to use topologically ordered states as media for quantum computing in a technique known as topological quantum computing. A topologically ordered state is a state with complicated non-local quantum entanglement. The non-locality means that the quantum entanglement in a topologically ordered state is distributed among many different particles. As a result, the pattern of quantum entanglements cannot be destroyed by local perturbations. This significantly reduces the effect of decoherence. This suggests that if we use different quantum entanglements in a topologically ordered state to encode quantum information, the information may last much longer. The quantum information encoded by the topological quantum entanglements can also be manipulated by dragging the topological defects around each other. This process may provide a physical apparatus for performing quantum computations. Therefore, topologically ordered states may provide natural media for both quantum memory and quantum computation. 
Such realizations of quantum memory and quantum computation may potentially be made fault tolerant. Topologically ordered states in general have a special property: they contain non-trivial boundary states. In many cases, those boundary states become perfect conducting channels that can conduct electricity without generating heat. This can be another potential application of topological order in electronic devices. Similarly to topological order, topological insulators also have gapless boundary states. The boundary states of topological insulators play a key role in the detection and the application of topological insulators. This observation naturally leads to a question: are topological insulators examples of topologically ordered states? In fact, topological insulators are different from the topologically ordered states defined in this article. Topological insulators only have short-ranged entanglements and have no topological order, while the topological order defined in this article is a pattern of long-range entanglement. Topological order is robust against any perturbations. It has emergent gauge theory, emergent fractional charge and fractional statistics. In contrast, topological insulators are robust only against perturbations that respect time-reversal and U(1) symmetries. Their quasi-particle excitations have no fractional charge and no fractional statistics. Strictly speaking, a topological insulator is an example of symmetry-protected topological (SPT) order; the first example of SPT order is the Haldane phase of the spin-1 chain. But the Haldane phase of the spin-2 chain has no SPT order. Potential impact Landau symmetry-breaking theory is a cornerstone of condensed matter physics. It is used to define the territory of condensed matter research. The existence of topological order appears to indicate that nature is much richer than Landau symmetry-breaking theory has so far indicated.
So topological order opens up a new direction in condensed matter physics—a new direction of highly entangled quantum matter. We realize that quantum phases of matter (i.e. the zero-temperature phases of matter) can be divided into two classes: long range entangled states and short range entangled states. Topological order is the notion that describes the long range entangled states: topological order = pattern of long range entanglements. Short range entangled states are trivial in the sense that they all belong to one phase. However, in the presence of symmetry, even short range entangled states are nontrivial and can belong to different phases. Those phases are said to contain SPT order. SPT order generalizes the notion of topological insulator to interacting systems. Some suggest that topological order (or more precisely, string-net condensation) in local bosonic (spin) models has the potential to provide a unified origin for photons, electrons and other elementary particles in our universe.
Spin-density wave (SDW) and charge-density wave (CDW) are names for two similar low-energy ordered states of solids. Both these states occur at low temperature in anisotropic, low-dimensional materials or in metals that have high densities of states at the Fermi level N ( E F ) {\displaystyle N(E_{F})} . Other low-temperature ground states that occur in such materials are superconductivity, ferromagnetism and antiferromagnetism. The transition to the ordered states is driven by the condensation energy which is approximately N ( E F ) Δ 2 {\displaystyle N(E_{F})\Delta ^{2}} where Δ {\displaystyle \Delta } is the magnitude of the energy gap opened by the transition. Fundamentally SDWs and CDWs involve the development of a superstructure in the form of a periodic modulation in the density of the electronic spins and charges with a characteristic spatial frequency q {\displaystyle q} that does not transform according to the symmetry group that describes the ionic positions. The new periodicity associated with CDWs can easily be observed using scanning tunneling microscopy or electron diffraction while the more elusive SDWs are typically observed via neutron diffraction or susceptibility measurements. If the new periodicity is a rational fraction or multiple of the lattice constant, the density wave is said to be commensurate; otherwise the density wave is termed incommensurate. Some solids with a high N ( E F ) {\displaystyle N(E_{F})} form density waves while others choose a superconducting or magnetic ground state at low temperatures, because of the existence of nesting vectors in the materials' Fermi surfaces. The concept of a nesting vector is illustrated in the Figure for the famous case of chromium, which transitions from a paramagnetic to SDW state at a Néel temperature of 311 K. Cr is a body-centered cubic metal whose Fermi surface features many parallel boundaries between electron pockets centered at Γ {\displaystyle \Gamma } and hole pockets at H. 
These large parallel regions can be spanned by the nesting wavevector q {\displaystyle q} shown in red. The real-space periodicity of the resulting spin-density wave is given by 2 π / q {\displaystyle 2\pi /q} . The formation of an SDW with a corresponding spatial frequency causes the opening of an energy gap that lowers the system's energy. The existence of the SDW in Cr was first posited in 1960 by Albert Overhauser of Purdue. The theory of CDWs was first put forth by Rudolf Peierls of Oxford University, who was trying to explain superconductivity. Many low-dimensional solids have anisotropic Fermi surfaces with prominent nesting vectors. Well-known examples include layered materials like NbSe3, TaSe2 and K0.3MoO3 (a blue bronze) and quasi-1D organic conductors like TMTSF or TTF-TCNQ. CDWs are also common at the surface of solids, where they are more commonly called surface reconstructions or even dimerization. Surfaces often support CDWs because they can be described by two-dimensional Fermi surfaces like those of layered materials. Chains of Au and In on semiconducting substrates have been shown to exhibit CDWs. More recently, monatomic chains of Co on a metallic substrate were experimentally shown to exhibit a CDW instability, which was attributed to ferromagnetic correlations. The most intriguing properties of density waves are their dynamics. Under an appropriate electric field or magnetic field, a density wave will "slide" in the direction indicated by the field due to the electrostatic or magnetostatic force. Typically the sliding will not begin until a "depinning" threshold field is exceeded, at which point the wave can escape from a potential well caused by a defect. The hysteretic motion of density waves is therefore not unlike that of dislocations or magnetic domains. The current-voltage curve of a CDW solid thus shows a very high electrical resistance up to the depinning voltage, above which it shows nearly ohmic behavior.
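The switch from pinned to sliding conduction can be sketched with a toy current-voltage model. The functional form (a Zener-like activation factor for the sliding channel) and all parameter values below are hypothetical choices for illustration, not measured data:

```python
import math

def cdw_current(v, v_t=0.5, r_normal=1e6, r_sliding=1e2, v0=0.2):
    """Toy I-V curve of a CDW conductor (hypothetical parameters).

    Below the depinning voltage v_t only a small ohmic single-particle
    current flows (high resistance r_normal); above it, the sliding CDW
    adds a nearly ohmic channel (low resistance r_sliding), switched on
    smoothly by a Zener-like activation factor exp(-v0 / (v - v_t)).
    """
    i_ohmic = v / r_normal
    if v <= v_t:
        return i_ohmic                      # pinned: high resistance
    return i_ohmic + (v - v_t) / r_sliding * math.exp(-v0 / (v - v_t))

# Current rises by orders of magnitude once the threshold is crossed.
print(cdw_current(0.4))   # pinned regime: ~4e-7 A
print(cdw_current(2.0))   # sliding regime: ~1.3e-2 A
```

The piecewise form reproduces the qualitative experimental signature: nearly insulating behavior below threshold and near-ohmic conduction well above it.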
Under the depinning voltage (which depends on the purity of the material), the crystal is an insulator. See also Peierls transition Superstructure (condensed matter) References General References A pedagogical article about the topic: "Charge and Spin Density Waves," Stuart Brown and George Gruner, Scientific American 270, 50 (1994). Authoritative work on Cr: Fawcett, Eric (1988-01-01). "Spin-density-wave antiferromagnetism in chromium". Reviews of Modern Physics. 60 (1). American Physical Society (APS): 209–283. Bibcode:1988RvMP...60..209F. doi:10.1103/revmodphys.60.209. ISSN 0034-6861. About Fermi surfaces and nesting: Electronic Structure and the Properties of Solids, Walter A. Harrison, ISBN 0-486-66021-4. Observation of CDW by ARPES: Borisenko, S. V.; Kordyuk, A. A.; Yaresko, A. N.; Zabolotnyy, V. B.; Inosov, D. S.; et al. (2008-05-13). "Pseudogap and Charge Density Waves in Two Dimensions". Physical Review Letters. 100 (19): 196402. arXiv:0704.1544. Bibcode:2008PhRvL.100s6402B. doi:10.1103/physrevlett.100.196402. ISSN 0031-9007. PMID 18518466. S2CID 5532038. Peierls instability. An extensive review of experiments as of 2013 by Pierre Monceau. Monceau, Pierre (2012). "Electronic crystals: an experimental overview". Advances in Physics. 61 (4). Informa UK Limited: 325–581. arXiv:1307.0929. Bibcode:2012AdPhy..61..325M. doi:10.1080/00018732.2012.719674. ISSN 0001-8732. S2CID 119271518.
The hierarchical equations of motion (HEOM) technique, derived by Yoshitaka Tanimura and Ryogo Kubo in 1989, is a non-perturbative approach developed to study the evolution of the density matrix ρ ( t ) {\displaystyle \rho (t)} of a quantum dissipative system. The method treats the system-bath interaction non-perturbatively and can handle non-Markovian noise correlation times, without the typical assumptions that conventional Redfield (master) equations suffer from, such as the Born, Markovian and rotating-wave approximations. HEOM is applicable even at low temperatures where quantum effects are not negligible. The hierarchical equation of motion for a system in a harmonic Markovian bath is ∂ ∂ t ρ ^ n = − ( i ℏ H ^ A × + n γ ) ρ ^ n − i ℏ V ^ × ρ ^ n + 1 + i n ℏ Θ ^ ρ ^ n − 1 {\displaystyle {\frac {\partial }{\partial t}}{\hat {\rho }}_{n}=-\left({\frac {i}{\hbar }}{\hat {H}}_{A}^{\times }+n\gamma \right){\hat {\rho }}_{n}-{i \over \hbar }{\hat {V}}^{\times }{\hat {\rho }}_{n+1}+{in \over \hbar }{\hat {\Theta }}{\hat {\rho }}_{n-1}} where the superscript × {\displaystyle ^{\times }} , denoting a commutator, and the temperature-dependent super-operator Θ ^ {\displaystyle {\hat {\Theta }}} are defined below. The parameter γ {\displaystyle \gamma } is the frequency width of the Drude spectral function J ( ω ) {\displaystyle J(\omega )} (see below). Equations of motion for the density matrix HEOMs are developed to describe the time evolution of the density matrix ρ ( t ) {\displaystyle \rho (t)} for an open quantum system. It is a non-perturbative, non-Markovian approach to propagating a quantum state in time. Motivated by the path integral formalism presented by Feynman and Vernon, Tanimura derived the HEOM from a combination of statistical and quantum dynamical techniques.
The derivation starts from a two-level spin-boson Hamiltonian H ^ = H ^ A ( a ^ + , a ^ − ) + V ( a ^ + , a ^ − ) ∑ j c j x ^ j + ∑ j [ p ^ j 2 2 + 1 2 x ^ j 2 ] {\displaystyle {\hat {H}}={\hat {H}}_{A}({\hat {a}}^{+},{\hat {a}}^{-})+V({\hat {a}}^{+},{\hat {a}}^{-})\sum _{j}c_{j}{\hat {x}}_{j}+\sum _{j}\left[{{\hat {p}}_{j}^{2} \over 2}+{\frac {1}{2}}{\hat {x}}_{j}^{2}\right]} By writing the density matrix in path integral notation and making use of the Feynman–Vernon influence functional, all the bath coordinates x j {\displaystyle x_{j}} in the interaction terms can be grouped into this influence functional, which in some specific cases can be calculated in closed form. Assuming a Drude spectral function J ( ω ) = ∑ j c j 2 δ ( ω − ω j ) = ℏ η γ 2 ω π ( γ 2 + ω 2 ) {\displaystyle J(\omega )=\sum \nolimits _{j}c_{j}^{2}\delta (\omega -\omega _{j})={\frac {\hbar \eta \gamma ^{2}\omega }{\pi (\gamma ^{2}+\omega ^{2})}}} and a high-temperature heat bath, taking the time derivative of the system density matrix and writing it in hierarchical form yields ( n = 0 , 1 , … {\displaystyle n=0,1,\ldots } ) ∂ ∂ t ρ ^ n = − ( i ℏ H ^ A × + n γ ) ρ ^ n − i ℏ V ^ × ρ ^ n + 1 + i n ℏ Θ ^ ρ ^ n − 1 {\displaystyle {\frac {\partial }{\partial t}}{\hat {\rho }}_{n}=-\left({\frac {i}{\hbar }}{\hat {H}}_{A}^{\times }+n\gamma \right){\hat {\rho }}_{n}-{i \over \hbar }{\hat {V}}^{\times }{\hat {\rho }}_{n+1}+{in \over \hbar }{\hat {\Theta }}{\hat {\rho }}_{n-1}} Here Θ {\displaystyle \Theta } reduces the system excitation and hence is referred to as the relaxation operator: Θ ^ = − η γ β ( V ^ × − i β ℏ γ 2 V ^ ∘ ) {\displaystyle {\hat {\Theta }}=-{\frac {\eta \gamma }{\beta }}\left({\hat {V}}^{\times }-i{\frac {\beta \hbar \gamma }{2}}{\hat {V}}^{\circ }\right)} with the inverse temperature β = 1 / k B T {\displaystyle \beta =1/k_{B}T} and the following "super-operator" notation: A ^ × ρ ^ = A ^ ρ ^ − ρ ^ A ^ A ^ ∘ ρ ^ = A ^ ρ ^ + ρ ^ A ^ {\displaystyle {\begin{aligned}{\hat {A}}^{\times }{\hat {\rho 
}}&={\hat {A}}{\hat {\rho }}-{\hat {\rho }}{\hat {A}}\\{\hat {A}}^{\circ }{\hat {\rho }}&={\hat {A}}{\hat {\rho }}+{\hat {\rho }}{\hat {A}}\end{aligned}}} The counter n {\displaystyle n} provides for n = 0 {\displaystyle n=0} the system density matrix. As with Kubo's stochastic Liouville equation in hierarchical form, the hierarchy extends to infinite depth, which is a problem numerically. Tanimura and Kubo, however, provide a method by which the hierarchy can be truncated to a finite set of N {\displaystyle N} differential equations. This "terminator" N {\displaystyle N} defines the depth of the hierarchy and is determined by some constraint sensitive to the characteristics of the system, e.g. frequency, amplitude of fluctuations, bath coupling, etc. A simple relation to eliminate the ρ ^ N + 1 {\displaystyle {\hat {\rho }}_{N+1}} term is ρ ^ N + 1 = − Θ ^ ρ ^ N / ℏ γ . {\displaystyle {\hat {\rho }}_{N+1}=-{\hat {\Theta }}{\hat {\rho }}_{N}/\hbar \gamma .} The closing line of the hierarchy is thus: ∂ ∂ t ρ ^ N = − ( i ℏ H ^ A × + N γ ) ρ ^ N + i γ ℏ 2 V ^ × Θ ^ ρ ^ N + i N ℏ Θ ^ ρ ^ N − 1 {\displaystyle {\frac {\partial }{\partial t}}{\hat {\rho }}_{N}=-\left({\frac {i}{\hbar }}{\hat {H}}_{A}^{\times }+N\gamma \right){\hat {\rho }}_{N}+{i \over \gamma \hbar ^{2}}{\hat {V}}^{\times }{\hat {\Theta }}{\hat {\rho }}_{N}+{iN \over \hbar }{\hat {\Theta }}{\hat {\rho }}_{N-1}} . The HEOM approach allows information about the bath noise and system response to be encoded into the equations of motion. It cures the infinite energy problem of Kubo's stochastic Liouville equation by introducing the relaxation operator that ensures a return to equilibrium. Computational cost When the open quantum system is represented by M {\displaystyle M} levels and M {\displaystyle M} baths with each bath response function represented by K {\displaystyle K} exponentials, a hierarchy with N {\displaystyle {\mathcal {N}}} layers will contain: ( M K + N ) ! ( M K ) ! N ! 
{\displaystyle {\frac {\left(MK+{\mathcal {N}}\right)!}{\left(MK\right)!{\mathcal {N}}!}}} matrices, each with M 2 {\displaystyle M^{2}} complex-valued (containing both real and imaginary parts) elements. Therefore, the limiting factor in HEOM calculations is the amount of RAM required, since if one copy of each matrix is stored, the total RAM required would be: 16 M 2 ( M K + N ) ! ( M K ) ! N ! {\displaystyle 16M^{2}{\frac {\left(MK+{\mathcal {N}}\right)!}{\left(MK\right)!{\mathcal {N}}!}}} bytes (assuming double-precision). Implementations The HEOM method is implemented in a number of freely available codes. A number of these are at the website of Yoshitaka Tanimura including a version for GPUs which used improvements introduced by David Wilkins and Nike Dattani. The nanoHUB version provides a very flexible implementation. An open source parallel CPU implementation is available from the Schulten group. See also Quantum master equation Open quantum system Fokker–Planck equation Quantum dynamical semigroup Quantum dissipation
Karl Küpfmüller (6 October 1897 – 26 December 1977) was a German electrical engineer, who was prolific in the areas of communications technology, measurement and control engineering, acoustics, communication theory, and theoretical electro-technology. Biography Küpfmüller was born in Nuremberg, where he studied at the Ohm-Polytechnikum. After returning from military service in World War I, he worked at the telegraph research division of the German Post in Berlin as a co-worker of Karl Willy Wagner, and, from 1921, he was lead engineer at the central laboratory of Siemens & Halske AG in the same city. In 1928 he became full professor of general and theoretical electrical engineering at the Technische Hochschule in Danzig, and later held the same position in Berlin. Küpfmüller joined the National Socialist Motor Corps in 1933. In the following year he also joined the SA. In 1937 Küpfmüller joined the NSDAP and became a member of the SS, where he reached the rank of Obersturmbannführer. That same year he was appointed director of communication technology research and development at the Siemens-Wernerwerk for telegraphy. From 1941 to 1945 he was director of the central R&D division at Siemens & Halske. From 1952 until his retirement in 1963, he held the chair for general communications engineering at Technische Hochschule Darmstadt. Later he was honorary professor at the Technische Hochschule Berlin. In 1968, he received the Werner von Siemens Ring for his contributions to the theory of telecommunications and other electro-technology. He died at Darmstadt. Studies in communication theory Around 1928, he carried out the same analysis as Harry Nyquist, showing that not more than 2B independent pulses per second could be put through a channel of bandwidth B. He did this by quantifying the time-bandwidth product k of various communication signal types, and showing that k could never be less than 1/2.
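The 2B-pulses-per-second limit can be illustrated numerically (the bandwidth value below is hypothetical and the construction is a standard Nyquist-rate demonstration, not taken from Küpfmüller's work): band-limited sinc pulses spaced T = 1/(2B) apart vanish at one another's sampling instants, so 2B independent amplitudes per second can be recovered without intersymbol interference.

```python
import numpy as np

B = 4.0              # hypothetical channel bandwidth in Hz
T = 1 / (2 * B)      # Nyquist signaling interval: 2B pulses per second

def pulse(t, n):
    """Band-limited sinc pulse centred on the n-th sampling instant."""
    return np.sinc((t - n * T) / T)   # np.sinc(x) = sin(pi x)/(pi x)

symbols = np.array([1.0, -0.5, 0.25, 0.75])   # amplitudes to transmit
t_samples = np.arange(len(symbols)) * T       # receiver sampling times
signal = sum(a * pulse(t_samples, n) for n, a in enumerate(symbols))

# Each pulse is 1 at its own sampling instant and 0 at all others,
# so sampling recovers the transmitted amplitudes exactly.
print(np.allclose(signal, symbols))           # True: zero intersymbol interference
```

Packing pulses any faster than 1/(2B) destroys this orthogonality, which is one way of reading the k = 1/2 lower bound on the time-bandwidth product.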
From his 1931 paper (rough translation from Swedish): "The time law allows comparison of the capacity of each transfer method with various known methods. On the other hand it indicates the limits that the development of technology must stay within. One interesting question for example is where the lower limit for k lies. The answer is acquired by at least one power change being needed to achieve one signal. So the frequency range must be at least so wide that the settling time becomes less than the duration of a signal, and from this comes k=1/2. So we can never get below this value, no matter how technology develops." Textbooks by Küpfmüller K. Küpfmüller, Einführung in die theoretische Elektrotechnik [Introduction to the theory of electrical engineering]. Berlin: Julius Springer, 1932. K. Küpfmüller (revised and extended by W. Mathis and A. Reibiger), Theoretische Elektrotechnik: Eine Einführung [Theory of electrical engineering: An introduction], 19th ed. New York: Springer-Verlag, 2013. K. Küpfmüller "Die Systemtheorie der elektrischen Nachrichtenübertragung" S. Hirzel; 4., berichtigte Aufl edition (1974) References Further reading Bissell, C.C. (translator, 2005) "On the Dynamics of Automatic Gain Controllers" Archived 2019-05-21 at the Wayback Machine, K. Küpfmüller, Elektrische Nachrichtentechnik, Vol. 5, No. 11, 1928, pp. 459–467. Bissell, C.C. (2006) Karl Küpfmüller, 1928: An early time-domain, closed-loop, stability criterion. Historic Perspective. IEEE Control Systems Magazine, 26 (3). 115-116, 126. ISSN 0272-1708 Küpfmüller biography at the University of Hannover (German)
A driven shield is a method of electrical shielding used to protect low-current circuits against leakage current. A driven shield is often referred to as a driven guard, especially when applied to PCB traces. Description It is used in situations where the tiny leakage of current through the insulating surfaces of a wire or PCB would otherwise cause error in the measurements or functionality of the device. The basic principle is to protect the sensing wire by surrounding it with a guard conductor that is held at the same voltage as the wire, so that no current will flow into or out of the wire. This is typically achieved using a voltage buffer/follower that matches the guard voltage to the sensing-wire voltage, or, in low-voltage differential sensing with an instrumentation amplifier, the common-mode voltage. The leakage from the shield to other circuit elements is of little concern, as it is being sourced from a buffer, which has a low output impedance. The technique is used in equipment such as sensitive photomultiplier tubes, electrostatic sensors, precision low-current measurement, and some medical electrography machines, where leakage current would alter the measurement. Any situation in which the source to be measured has a very high output impedance is vulnerable to leakage current, and if sufficient insulation is not practical then a driven shield may be required. Coaxial cable is well suited for use as a guard; if electromagnetic shielding is also required then triaxial cable may be used, since, depending on the type of buffer circuit, any noise on the guard may be amplified in the output. The limiting factor for this method is the input impedance of the voltage buffer; the JFET or CMOS op-amps typically used may have input impedances of many teraohms, which is sufficient for most applications.
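A back-of-the-envelope calculation (all values hypothetical) shows why driving the guard helps: the leakage current is set by the voltage across the insulation, which the buffer reduces from the full signal voltage down to its own offset voltage.

```python
# Hypothetical worked example: leakage with and without a driven guard.
# Current through an insulation path of resistance r_ins is driven by the
# voltage difference across it: i_leak = (v_wire - v_guard) / r_ins.
v_wire = 2.0          # voltage on the high-impedance sensing wire (V)
r_ins = 1e12          # insulation resistance of the leakage path (ohms)
v_offset = 100e-6     # buffer offset: residual wire-to-guard difference (V)

i_unguarded = (v_wire - 0.0) / r_ins   # grounded shield: full 2 V across r_ins
i_guarded = v_offset / r_ins           # driven guard: only the offset remains

print(i_unguarded)    # about 2e-12 A (2 pA)
print(i_guarded)      # about 1e-16 A (0.1 fA), a 20,000-fold reduction
```

With these illustrative numbers, the guard shrinks the leakage by the ratio of signal voltage to buffer offset voltage, here a factor of 20,000.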
Care must also be taken to ensure there are no unexpected paths by which leakage current may bypass the guard, as this will defeat the system. Extra care must likewise be taken in the design of the amplifier/buffer circuit to prevent oscillation, since the guard, especially if it is run over a coaxial cable, may have a strong capacitive coupling to the sensing wire. See also Electric-field screening
Furiosa: A Mad Max Saga is a 2024 post-apocalyptic action film directed and produced by George Miller, who wrote the screenplay with Nico Lathouris. It is the fifth installment in the Mad Max franchise and the first not focused on Max Rockatansky, serving instead as a spin-off prequel to Mad Max: Fury Road (2015) and an origin story for Furiosa. Starring Anya Taylor-Joy and Alyla Browne as the title character at different ages, the film is set years before Fury Road and follows Furiosa's life over more than a decade, from her kidnapping by the forces of warlord Dementus (Chris Hemsworth) to her ascension to the rank of Imperator. Tom Burke also stars as Praetorian Jack, a military commander who befriends Furiosa. Several Fury Road cast members return in supporting roles, including John Howard, Nathan Jones, and Angus Sampson reprising their characters. The film begins in what can be seen as the green paradise of a solarpunk future and quickly moves to the more traditional dieselpunk setting for which the franchise is known. Miller initially intended to shoot Furiosa back-to-back with Fury Road, but the former spent several years in development hell amidst salary disputes with Warner Bros. Pictures, Fury Road's distributor. Several crew members from Fury Road returned for Furiosa, including Lathouris, producer Doug Mitchell, composer Tom Holkenborg, costume designer Jenny Beavan, and editor Margaret Sixel (Miller's wife). Filming took place in Australia from June to October 2022. Furiosa: A Mad Max Saga premiered at the 77th Cannes Film Festival on 15 May 2024. It was released theatrically in Australia on 23 May 2024 and in the United States the following day. The film received highly positive reviews from critics and multiple award nominations. It was named one of the Top Ten Films of 2024 by the National Board of Review, but was a box-office bomb, grossing $174.3 million against its budget of $168 million.
Plot Decades after the apocalypse, Australia is a radioactive wasteland and the Green Place of Many Mothers is one of the last remaining areas with fresh water and agriculture. Roobillies uncover the Green Place while two children, Furiosa and Valkyrie, are picking peaches. Furiosa tries to sabotage their motorcycles, but the raiders capture the barefoot girl as a prize for Dementus of the Biker Horde. Furiosa's mother, Mary Jabassa, pursues them to the Horde's camp to rescue Furiosa, but Dementus tracks them down. Mary stays behind to buy Furiosa time to escape and gives her a peach pit to remember her by, but Furiosa refuses to leave Mary behind. Dementus forces Furiosa to watch her mother's crucifixion. Haunted by his own family's death, Dementus adopts Furiosa as his daughter, hoping she will lead him to the Green Place. Some time later, Dementus besieges the Citadel, another settlement with fresh water and agriculture. However, the Horde is repelled by the War Boys, the fanatical army of Citadel warlord Immortan Joe. Dementus uses a Trojan horse strategy to capture Gastown, an oil refinery that supplies the Citadel with gasoline. At peace negotiations, Joe recognizes Dementus's authority over Gastown and increases its supplies of food and water in exchange for the Horde's physician and Furiosa, who has tattooed a star chart to the Green Place on her left arm to find her way home. Afterwards, Joe imprisons Furiosa with his stable of "wives" inside a vault. After Joe's son Rictus shows an attraction towards her, Furiosa devises a plan to escape. One night, Rictus breaks Furiosa out of Joe's vault to rape her, but she slips from his grasp using a wig made from her own hair and disappears. Disguised as a mute War Boy, Furiosa works her way up the ranks of Joe's men for over a decade. She helps build the War Rig, a heavily armed supply tanker that can withstand raider attacks in the lawless Wasteland.
Furiosa plans to escape by hiding on the Rig when Joe sends his top driver, Praetorian Jack, on a supply run. Disillusioned by Dementus's callousness, his lieutenant, The Octoboss, goes rogue and launches an air assault on the Rig. His Mortifiers slaughter the Rig's entire crew and destroy Furiosa's hidden motorcycle, but Furiosa and Jack team up to defeat them. Furiosa tries to carjack the Rig and drive home, but Jack easily thwarts her. However, he recognizes her potential and offers to train her to escape if she helps him rebuild his crew. Furiosa becomes Jack's second-in-command and is promoted to Praetorian. She and Jack develop a bond and resolve to escape together. They see an opportunity when Joe decides to attack Gastown, which Dementus has mismanaged to near-ruin. Joe orders Furiosa and Jack to collect weapons and ammunition from the Bullet Farm, an allied mining facility. However, Dementus, having already taken possession of the farm, ambushes them when they arrive. Furiosa and Jack barely escape, and Furiosa's left arm is injured and pinned to an overturned car. Dementus chases them down and has Jack dragged to death. Furiosa escapes her chains by severing her own injured arm, sacrificing her star map to escape. A lone man watches from afar as Furiosa struggles back to the Citadel, where she and Joe's aide The People Eater form a strategy to avoid a trap planned by Dementus. Instead, Dementus is lured into a trap at the Citadel, and the War Boys crush the Horde. Having lost her path home, Furiosa shaves her head, builds a mechanical prosthetic in place of her severed arm, and pursues the fleeing Dementus. After an extended chase, Furiosa subdues Dementus in the desert. She imprisons Dementus in the Citadel and uses his still-living body as fertilizer to grow a peach tree from her mother's pit. Joe promotes Furiosa to "Imperator" and gives her command of a new War Rig. 
She meets Joe's five breeder wives in the vault where Joe once held her prisoner and shows them a peach from the tree. The night before another supply run, the "Five Wives" hide in Furiosa's Rig. Cast Anya Taylor-Joy as Furiosa Alyla Browne as child and teen Furiosa Archive footage of Charlize Theron as the older Furiosa from Mad Max: Fury Road is used during the final scene and end credits Chris Hemsworth as Dementus, the deranged warlord leader of the Biker Horde who abducted Furiosa, and eventual ruler of Gastown Tom Burke as Praetorian Jack, the commander of the Citadel's first War Rig George Shevtsov as The History Man, an expert in pre-apocalyptic history, science, and technology who serves Dementus. He serves as the film's narrator Lachy Hulme as: Immortan Joe, the warlord leader of the Citadel and enemy of the Biker Horde Archive footage of Hugh Keays-Byrne as Immortan Joe from Mad Max: Fury Road is used during the end credits Rizzdale Pell, Dementus's lieutenant John Howard as The People Eater, Joe's advisor and military strategist, and the future ruler of Gastown in Mad Max: Fury Road Angus Sampson as The Organic Mechanic, Dementus's personal physician, whom he later gives to Joe Charlee Fraser as Mary Jabassa, a barefoot woman who is Furiosa's mother and a top member of the Vuvalini Elsa Pataky as: The Vuvalini General, Mary's barefoot second-in-command and a member of the Vuvalini Mr. Norton, a deformed survivor who joins the Biker Horde Nathan Jones as Rictus Erectus, Joe's muscular but dim-witted son Josh Helman as Scrotus, Joe's psychologically unstable son David Field as Toe Jam, a member of the nomadic Roobillies biker gang loyal to Dementus, who was responsible for capturing Furiosa Rahel Romahn as Vulture, a member of the nomadic Roobillies biker gang who is later killed by Mary, who steals some of his clothes to get into Dementus's camp David Collins as Smeg, Dementus's henchman, dance proclaimer, and messenger Goran D. 
Kleut as The Octoboss, the leader of the Mortifiers biker gang, who begins the film as Dementus's temporary ally CJ Bloomfield as Big Jilly, a member of Dementus's biker horde Matuse as Fang, Dementus's henchman Ian Roberts as Mr. Harley, a member of Dementus's biker horde Guy Spence as Mr. Davidson, a member of Dementus's biker horde Rob Jones as Squint Clarence Ryan as Black Thumb, Praetorian Jack's mechanic on the War Rig Tim Burns as Hungry Eyes Tim Rogers as Snapper Florence Mezzara as Sad Eyes Quaden Bayles as War Pup, a young War Boy on board the War Rig during Jack's supply run Peter Stephens as the Guardian of Gastown Sean Millias as Lone War Boy Lee Perry as The Bullet Farmer, ruler of the Bullet Farm and the Citadel's arms supplier Archive footage of Richard Carter as The Bullet Farmer from Mad Max: Fury Road is used during the end credits Daniel Webber as War Boy (uncredited) The rest of the cast were listed under these categories: The Green Place Dylan Adonis as Valkyrie, a barefoot girl and member of the Vuvalini who is Furiosa's childhood friend Archive footage of Megan Gale as an older Valkyrie from Mad Max: Fury Road is used during the end credits. Anna Adams and Peter Sammak as the Vuvalini Roobillies Shea Adams as Cannibal Josh Randall as Savage Karl Van Moorsel as Hacker Citadel Siege Dawn Clingberg as Corpse Minder Richard Norton as the Prime Imperator, a high-ranking lieutenant of the War Boys. 
Stephen Amadasun as the Shotgun Imperator Nick Annas as the First Pick War Boy Ripley Voeten as the Chosen War Boy Gastown and the Trojan Truck Matt Van Leeve as Mortifyer Matt Shane Dundas as a Gastown Gate Watchman Jamie Cluff and Adam Thompson as the Gastown Gate Openers Shyan Tonga as the Gastown Rioter A Deal Is Done Nellie Collins as a Winchman Adam Washbourne, James Corcoran, Sasha Vitanovic, Tige Sixel Miller as Watch Tower Praetorian Guards Justice Jones as Praetorian Pup Immortan's Harem Maleeka Gasbarri as one of Immortan Joe's "wives" who gives birth to a baby with a conjoined body Keza Ishimwe and Nat Buchanan as two of Immortan Joe's "wives" The House of Holy Motors Jacob Tomuri as The Dogman, a worker at the Citadel Mark Wales as Brake Man, a mechanic at the Citadel Bryan Probets as Chumbucket, a hunchbacked auto mechanic at the Citadel Danny Lim as High Master Black Thumb Darcy Brice as Pissboy, a War Boy maintenance worker Chudier Gatwech as Wretched Man Recruit Spencer Connelly as Rakka the Brakkish Stowaway to Nowhere Ben Smith-Person as Ace War Boy Toby Fuller as Lookout War Boy Jayden Irving as Witness War Boy Karl Van Moorsel as Mortifyer Bomb Setter Happy Bullet Farm Jon Iles as Bullet Farm Senior War Boy The Escape Plan Harrison Norris as Hazz the Valiant Ash Hodgkinson as Valiant Lancer Sean Renfrey as the Echo Man Jacob Tomuri as Max Rockatansky, a loner living in his V8 Interceptor who witnesses Furiosa return to the Citadel. He later helps Furiosa and the "Five Wives" defeat Immortan Joe and his army, and claim control of the Citadel in Mad Max: Fury Road. Tomuri served as Tom Hardy's stunt double during the filming of Fury Road. 
Archive footage of Hardy as Max from Mad Max: Fury Road is used during the end credits Additionally, the end credits are intercut with archive footage from Mad Max: Fury Road, in which Nicholas Hoult, Rosie Huntington-Whiteley, Zoë Kravitz, Riley Keough, Abbey Lee, and Courtney Eaton appear as Nux, The Splendid Angharad, Toast the Knowing, Capable, The Dag, and Cheedo the Fragile, respectively; the latter five are also portrayed by stand-ins in silhouette during the film's final scene. Production Pre-production Director George Miller and co-writer Nico Lathouris spent over fifteen years writing the script for Mad Max: Fury Road (2015), and developed backstories for every character, particularly co-protagonist Imperator Furiosa. Miller and Lathouris eventually wrote a Furiosa-centered screenplay, which actress Charlize Theron used as a reference for her performance in Fury Road. According to Miller, Furiosa "probably" takes place after Mad Max Beyond Thunderdome (1985), but the Mad Max franchise has "no strict chronology". The first trailer of the film, released on 30 November 2023, stated that Furiosa takes place "45 years after the collapse". In July 2010, Miller announced plans to shoot Fury Road back-to-back with a live-action prequel film entitled Mad Max: Furiosa, but, during pre-production, it was decided to shoot only Fury Road. At one point, Miller and Lathouris hoped to turn the Furiosa screenplay into an anime film directed by Mahiro Maeda, who had previously worked on The Animatrix (2003), Neon Genesis Evangelion (1995–96), and Porco Rosso (1992). In May 2015, Miller stated that if Fury Road became successful, he would develop two more films. In November 2017, Miller's production company filed a lawsuit against Warner Bros. over a Fury Road salary dispute, which delayed the production of any additional entries in the franchise. In July 2019, Miller revealed that a Furiosa film was still being planned in addition to two Mad Max sequels. 
By March 2020, Miller and Warner Bros. settled their lawsuit and began casting the Furiosa prequel, which Miller intended to make after Three Thousand Years of Longing (2022). It was reported that the film would take place over a timeframe of fifteen years, depicting Furiosa's backstory of how she was displaced from her home and spent her life "trying to get back". Multiple Fury Road crew members agreed to return for the film, including composer Tom Holkenborg, costume designer Jenny Beavan, editor Margaret Sixel, makeup designer Lesley Vanderwalt, production designer Colin Gibson (no relation to original Mad Max star Mel Gibson), and sound mixer Ben Osmo; Beavan, Sixel, Vanderwalt, Gibson, and Osmo had all previously won Academy Awards for their work on Fury Road. In 2020, Miller said that the semi-retired John Seale had agreed to return as cinematographer, but Seale retired after shooting Miller's Three Thousand Years of Longing, explaining that he wanted to spend more time with his grandchildren. Simon Duggan took over as Furiosa's cinematographer. "George was definitely looking to find an Australian cinematographer", Duggan recalled. "P.J. [Voeten] told him, 'Look, it's a no-brainer — just get Simon to come in and do it', and George trusted him. George knew the work I was doing and thought it was amazing, so when we first met, we just wanted to talk about the Australian industry, the people we knew and the experiences we had. And we knew that Fury Road was the starting point to the look and feel of what Furiosa was going to be — but only a starting point." Village Roadshow Pictures, which had co-financed Fury Road, was credited in initial marketing material for Furiosa, with its involvement also acknowledged by an official press kit related to the film and a press release by the Cannes Film Festival. However, following its premiere, all mentions of the company were omitted from promotions and the film itself. 
In May 2024, a box office report by Deadline Hollywood stated that Warner Bros. "is all in on Furiosa" and financed the bulk of the film's budget without co-financiers, such as Village Roadshow, most likely the result of a content dispute related to Warner Bros.' simultaneous release strategy followed in 2021. The online news site also reported that Domain Entertainment (a private equity fund that co-financed other Warner Bros. productions like Aquaman and the Lost Kingdom and Wonka in 2023) was listed in the opening credits after the Warner Bros. logo. Casting Miller sought to cast a younger actress for the role in lieu of using de-aging technology for Theron, explaining that the technology still leaves "an uncanny valley" effect. Theron admitted that the decision was "a little heartbreaking, for sure", but understood Miller's rationale. In March 2020, during the COVID-19 lockdown in Australia, Miller auditioned several actresses over Skype for the Furiosa role. In October 2020, Anya Taylor-Joy, Chris Hemsworth, and Yahya Abdul-Mateen II were cast, although Abdul-Mateen later dropped out due to a scheduling conflict. Miller chose Taylor-Joy after seeing her performance in an early cut of the film Last Night in Soho (2021) and auditioning her with the "Mad as Hell" monologue, quoted by the character Howard Beale (portrayed by Peter Finch) from Sidney Lumet's Network (1976). Edgar Wright, the director of Last Night in Soho, told Miller to "do yourself a favor and grab the opportunity to work with her". Miller felt that "there's a kind of timelessness to her, there's a mystery to her, and yet she's accessible". Taylor-Joy received advice from Nicholas Hoult, who had previously portrayed Nux in Fury Road and worked alongside her in The Menu (2022). According to Goran D. Kleut, who portrayed The Octoboss, Miller asks every actor who auditions with him to try out the "Mad as Hell" monologue. Hemsworth, an Australian, had grown up watching the Mad Max films. 
In 2010, he had previously applied for the title role in Fury Road that eventually went to Tom Hardy. Hemsworth later explained that he was not a big enough star at the time to earn the role; his most notable role, Thor, had not yet debuted in the 2011 film of the same name. To accurately portray resource scarcity in the wasteland, Hemsworth cut his calorie intake in half compared to when he prepares for a Marvel Cinematic Universe (MCU) film. Describing and elaborating on his character's motivations, Hemsworth said: "He's a pretty horrible individual. Through the whole film we kept coming back to, 'This is evil, but what is the intention behind it?' It's not just sadistic insanity. There is a real purpose, the wheels are turning, he's plotting and planning and ten steps ahead of everyone else." Amid the character's harshness, Hemsworth imagines Dementus as something of a father figure to Furiosa, adding: "I think that's how he sees himself. I think there's a paternal quality and nature to the relationship in his eyes. [Furiosa] would, I'm sure, argue to her death the complete opposite." In 2021, Miller cast Alyla Browne as a young Furiosa; she had previously worked with Miller on Three Thousand Years of Longing. Miller said that she reminded him of a young Furiosa and that she impressed him while doing the splits on set. Tom Burke joined the cast in the autumn of 2021 as Praetorian Jack, replacing Abdul-Mateen. Burke said that while most of his scenes were shot sitting down in a truck, he had to spend long hours in the gym becoming "as lithe as possible", given that he might have to safely jump off the War Rig many times in a row until Miller got a take he was satisfied with. When principal photography commenced in June 2022, the producers disclosed that Nathan Jones and Angus Sampson were set to reprise their roles from Fury Road. 
They additionally announced that Quaden Bayles, who worked on Three Thousand Years of Longing after a video about his mistreatment at school went viral, would appear in a minor role. During the production, Miller agreed to cast Lachy Hulme, who was already playing the role of Rizzdale Pell, as a younger Immortan Joe, succeeding Fury Road's Hugh Keays-Byrne, who died in 2020. Miller initially wanted to use a body double for Joe and record his lines in post-production with ADR, but Hulme convinced Miller that he could replicate Keays-Byrne's voice and eyes. "When you are working on a Dr. George Miller movie, there's no pressure on you because you're in an incredibly supportive environment", Hulme said. During pre-production, Hulme fell off a motorcycle while practicing for a scene as Rizzdale Pell. Afterwards, Miller decided all the actor's bike riding would be performed by his stunt double, Chris Matheson. Filming Filming took place in Australia from May to October 2022. Principal photography began on 1 June 2022. Miller shot the film at various locations in New South Wales: Broken Hill/Silverton, Hay (the "Stowaway" action sequence), Kurnell (the Bullet Farm and the final confrontation between Furiosa and Dementus), Terrey Hills (the Green Place), Melrose Park (Gastown), and the Disney lot in Sydney (the Citadel). The action sequence where the raiders ambush the War Rig took 78 days to shoot, with close to 200 stunt performers working on it every day; the sequence became known during production as "Stowaway to Nowhere". Miller stated that Furiosa was an easier shoot than Fury Road, alluding to the latter's troubled production, and complimented Warner Bros.' new leadership for implementing an "approach to filmmaking [that was] much more collaborative than it was adversarial". 
Burke said that Miller wanted a different kind of filming style from Fury Road, which used short takes and long cuts; he noted that Miller specifically wanted to shoot the scene where Dementus taunts Jack and Furiosa in one take. In lighting most of Furiosa's exterior shots, Duggan and gaffer Shaun Conway recognized that "nearly all of our story would be told during daytime, so everything relied on sunlight", Duggan said. "And because we were in mid-El Niño, the weather during the shoot was unpredictable, with pouring rains and winds up to 50 miles per hour. So, we added a lot of artificial light to create sunlight, which helped us create a harsh look that was quite different than what you'd expect." To augment their locations' natural light, the filmmakers relied on a uniform approach, employing an array of six 18K HMI PARs at varying degrees of spot. "The six PARs were mounted to three heavy-duty telescopic handlers that were easily moveable and could withstand the conditions, and the lamps were protected from the rain", Duggan noted. "We found that the 18K array could cover almost half a football field and accommodated the size of most of our exterior-location sets. We could put the light wherever we wanted to; whenever the sun was coming in and out, we'd follow and match that direction." He continued: "George told us we weren't going to wait for anything to light our sets, so we had to be prepared with one solution. And that meant we didn't have to stop shooting — we could just keep on going. Of course, we always oriented the sets, or our camera, to make use of sidelight or backlight from the actual sun, but the brute power of those PARs gave us all the light we needed." Taylor-Joy praised Miller's commitment to safety on set, but said that working on the film was, nonetheless, a challenging experience. 
With just 30-odd spoken lines of dialogue, she would go "months" on the film's set without speaking a single word on camera: "I've never been more alone than making that movie ... I don't want to go too deep into it, but everything that I thought was going to be easy was hard." When asked what proved more difficult than she expected on the Furiosa set, the actress said: "Next question, sorry. Talk to me in 20 years." In a later interview, responding to a question regarding how she was "able to portray the nuance and complexity of Furiosa without much dialogue", Taylor-Joy explained: "[The character] was just immediately there. The second that I read the first script, even though the script changed quite a bit by the time we got to filming, I had her essence very deeply embedded within me. I was also supremely protective. I think I fought more for this character than I had fought for any other character. George had such a specific vision for what he wanted her to be and I just felt like it was my responsibility to fight for any moment where you could see a little bit of her rage come out." Sharing her most memorable note given by Miller throughout the filming process, she said: "[George] wanted [Furiosa] to be incredibly stoic. And I felt like my contribution was that I've always felt like you need to see the humanity behind that, if you want people to fully invest in a character. George encourages you to be in almost like a university-type setting where every choice you have to justify – and you don't justify it once. You justify it thousands of times if it's going to make it in the movie. It was really great training for me not only as an actor but also as somebody that hopes to direct one day. Your conviction has to be unwavering if you want something to make it into one of his movies." 
Hemsworth arrived on set nursing a back injury, but said that he was excited to work on Furiosa because playing Dementus allowed him to get "out of that typecast space of the muscly action guy and ... play a character with complications and darkness". He explained that "suffering without a purpose is awful", but "suffering with purpose can be rejuvenating and replenishing". Burke said that Miller was willing to collaborate with his actors to structure Furiosa and Jack's relationship, explaining that while Miller wanted a romance, Burke felt the characters should "push romance to the side until they believe they are riding off to a safer place". While promoting the film, the actors disclosed several ideas that Miller considered or even shot but ultimately cut. Taylor-Joy said that Miller shot but ultimately deleted a scene where Furiosa cuts off Dementus's tongue, an act which is mentioned but not shown in the theatrical version. Burke said that Miller vetoed the idea of a training montage where Jack teaches Furiosa about road war because it was too much of a cliché. Post-production Fury Road's VFX supervisor Andrew Jackson returned for Furiosa. His home studio DNEG worked with Framestore, Metaphysic.ai, Rising Sun Pictures, and slatevfx. He had received an Academy Award nomination for Fury Road and subsequently won the Oscar for Tenet (2020). Jackson said that Furiosa "leans much more heavily [than Fury Road] on visual effects" and that Miller "completely embraced the idea that CG is the way to go to build worlds and do whatever we need to do in post". 
In addition to traditional CGI work, such as augmenting backdrops and stitching together the work of multiple stuntmen who shot their scenes separately for safety reasons, Jackson used VFX to heighten certain action scenes, such as the "Stowaway" action sequence, where VFX animated The Octoboss's demise and the final chase sequence, where a CGI version of Furiosa's (otherwise practical) car was used to animate "things just far too dangerous to be doing with a real car, like side-swiping motorbikes". VFX helped "generat[e] a feeling of movement" by making background elements move faster and animating flying equipment like The Octoboss's rippling black parachute. As Furiosa ages from a child to a young woman over the course of the film, Miller and Jackson hired Rising Sun Pictures to use machine learning (a non-generative form of artificial intelligence) to blend Taylor-Joy's and Browne's faces together. Taylor-Joy said that Miller "wanted the transition ... to be seamless". She spent two days shooting with Rising Sun so that they could map her facial expressions. By her estimate, at the start of the film, about 35% of Browne's face was modified to look like her own, a figure that increased to 80% during Browne's final scenes in the Citadel. Additionally, in Taylor-Joy's early scenes in the Citadel, her eyes were partially modified to look more like Browne's. Taylor-Joy stressed that the actors' union went on strike in part to demand better regulation of AI tools and that Miller's use of AI was consensual. Metaphysic.ai performed a similar function for the Bullet Farmer, blending Lee Perry's facial features with those of the late Richard Carter, who portrayed the Bullet Farmer in Fury Road. Impact on the Australian film industry According to the Australian Broadcasting Corporation (ABC), Furiosa was the most expensive film in Australian history, with a budget of AU$333.2 million. 
Over 3,000 people worked on the film, including some ex-convicts who were hired as supporting artists. In addition, to take advantage of Australian tax credits for VFX work, DNEG opened a Sydney branch to spearhead special effects work on Furiosa; it estimated that the office, once fully staffed, would employ up to 500 VFX artists. The film was awarded extensive government subsidies, including a filming incentive from the New South Wales government and various "offsets from the federal government". Miller said that government support "made [shooting in Australia] possible". Queensland University of Technology professor Amanda Lotz estimated that Screen NSW contributed AU$50 million in direct subsidies to Furiosa's budget. Additionally, the federal government offers all qualifying films a tax rebate equivalent to 40% of the amount of money the production spends in Australia. Lotz estimated this federal rebate at AU$133 million (40% of AU$333.2 million), and NSW premier Gladys Berejiklian said that she hoped Furiosa would contribute AU$350 million to the Australian economy. Although a May 2024 ABC report estimated the size of the NSW filming incentive as AU$175 million (over half the film's reported budget), the ABC subsequently amended its report to remove that estimate. Other reports suggested that the AU$175 million figure applies to the total size of the NSW subsidy fund (spread out over five years) and not to Furiosa specifically. Furiosa's VFX artists allowed production to keep shooting in Australia even though the weather in New South Wales was not ideal for a desert-based film. By contrast, Miller had to move the Fury Road shoot from Australia to Namibia because rain caused wildflowers to grow in the Australian desert, which would not have happened in a post-apocalyptic wasteland. 
According to Framestore, during the opening chase sequence where Mary tracks Furiosa's kidnappers through a series of sand dunes, "we ended up pretty much having to replace all of the ground". Colourist Eric Whipp, who also worked on Fury Road, said that because the desert scenes needed to look like Namibia and "there was a huge mix of sunny and cloudy days" during the shoot, "a lot of the backgrounds in this film are full CG". Music Fury Road composer Tom Holkenborg (also known as Junkie XL) returned to score Furiosa, his third collaboration with Miller after Fury Road and Three Thousand Years of Longing. Holkenborg moved to Sydney to pen the score and also helped prepare the final mix. He explained that because Furiosa was a character-driven film, the film's score had to be character-driven as well: "Musically, everything was being told from a first-person perspective, which is being her [Furiosa], how she watches the world around her, The Wasteland, its cruelties." Holkenborg added that Miller wanted a "way, way more subtle score" for Furiosa than Fury Road, the latter of which was "just massive action over 48 hours" and an "over-the-top rock opera". In particular, Miller vetoed reusing the song "Brothers in Arms" from Fury Road (which plays when Max and Furiosa help each other escape the Rock Rider canyon) during the Bullet Farm action sequence because he wanted to focus the audience's attention on the fact that Furiosa was willing to sacrifice her best chance at finding the Green Place to save Jack's life, and directed Holkenborg to use Fury Road's musical motifs for the Green Place instead. Miller explained that the Bullet Farm sequence needed to be "kind of a love story in the middle of an action scene". He wanted to show that "through their actions, [Furiosa and Jack] actually are prepared to give of themselves entirely to the other". 
Furthermore, Holkenborg "used AI to make deep fake voices from another voice", explaining: "What if the source sound was a drum rhythm, and what if the destination sound was an electric guitar? But the software doesn't know what to make of it. So it gave us a happy accident that we used throughout the score". Warner Bros. in-house record label WaterTower Music released the official soundtrack album on 17 May 2024. Marketing On 29 November 2023, the Warner Bros. booth at CCXP featured a first-look image of Taylor-Joy's Furiosa. The following day, the teaser trailer of the film was released. On 19 March 2024, the official trailer debuted. At CinemaCon, Warner Bros. screened extended footage of the film on 9 April; Miller, Taylor-Joy, and Hemsworth appeared together for the first time in public to promote the film. Running about five minutes, the extended preview shown at CinemaCon revealed that the film would be split into three distinct chapters ("Her Odyssey Begins", "A Warrior Awakens", and "A Ride Into Vengeance"). The final film actually features a total of five chapters: "The Pole of Inaccessibility", "Lessons from the Wasteland", "The Stowaway", "Homeward", and "Beyond Vengeance". A trio of first-look images from the film were released exclusively by Total Film on 19 April. On 16 May, an extended sneak peek was released by Odeon Cinemas and was released on YouTube the following day. Release Theatrical Furiosa: A Mad Max Saga had its world premiere at the 77th Cannes Film Festival, screening out-of-competition, on 15 May 2024. The film was released theatrically in Australia and India on 23 May 2024, and in the United States on 24 May 2024. It was originally scheduled to be released on 23 June 2023, but was delayed to May 2024. The film opened in China on 7 June 2024, becoming the first Mad Max film to be theatrically released there. 
Home media In May 2024, on the Happy Sad Confused podcast, Miller confirmed that the film would be receiving a black-and-white treatment, similar to what he did for Fury Road (referred to as the "Black & Chrome" edition) in 2016, expressing his interest in black-and-white as a format for film. The film was released on VOD and digital platforms on 24 June. Following its digital release, it was reported that Furiosa led the home viewing charts. The film came in at #1 on the iTunes VOD chart for 1 July and held the same position on the Fandango at Home chart for the week of 24–30 June. It was released on 4K UHD, Blu-ray, and DVD on 13 August. Special features include the hour-long behind-the-scenes documentary "Highway to Valhalla: In Pursuit of Furiosa", which interweaves concept art, set footage and interviews with cast and crew, providing an overview of the project from conception to post-production, while shorter featurettes delve into Furiosa and Dementus, and Taylor-Joy's and Hemsworth's performances. An extended breakdown of the "Stowaway" action sequence is also included. The disc is rounded out by a featurette on the film's vehicular designs and their construction. The same day, Furiosa: A Mad Max Saga Black & Chrome Edition was released simultaneously on 4K Ultra HD Blu-ray and digital streaming, and is also featured in the "Mad Max 5-Film 4K Collector's Edition", which was released on 24 September. The film became available for streaming on Max on 16 August. On 30 December, alongside Fury Road, the film became available to stream on Netflix. Reception Box office Furiosa: A Mad Max Saga grossed $67.6 million in the United States and Canada, and $105.3 million in other territories, for a worldwide total of $172.8 million. The film's box-office performance has been deemed a failure. Critics and film pundits noted that the franchise's limited appeal in general and prequels, in particular, contributed to poor box-office performance. 
Variety reported that industry insiders estimated that Furiosa needed to gross $350–375 million to turn a profit and that it would end up losing $75–95 million for the studio and its co-funders. However, a Warner Bros. spokesperson claimed that the film had a lower break-even point. As much as half of the film's budget was covered by the NSW and Australian federal governments. In April 2025, Deadline Hollywood calculated the film lost the studio $119.6 million, when factoring together all expenses and revenues. Performance In the United States, Furiosa's $32.3 million gross in its four-day opening weekend was described as "disappointing"; industry analysts had projected $40 million. Of that $32.3 million, the film earned $10.4 million on its first Friday, including an estimated $3.5 million from Thursday night previews, the latter of which was similar to the $3.7 million made by Fury Road. Although Furiosa was the highest-grossing film during the Memorial Day weekend, beating The Garfield Movie and Sight, that weekend had the lowest total box-office receipts since 1995, and Furiosa was the lowest-grossing film to finish in first over Memorial Day since Casper ($22 million before inflation adjustment; also released in 1995). Furiosa's $10.8 million second weekend (a 59% drop from the opening weekend, excluding Memorial Day itself) was also considered a disappointment, as The Garfield Movie dropped only 42% and won the weekend; for comparison, Fury Road had a 46% drop in its second weekend. As a whole, ticket sales for that weekend were down 65% from 2023. Domestic receipts fell an additional 61% in the third weekend, when Furiosa was pre-scheduled to surrender its premium large format (PLF) screens, which are typically booked months in advance, to the premiere weekend of Bad Boys: Ride or Die. Furiosa's domestic box-office relied heavily on PLF screens. Internationally, Furiosa grossed $33.3 million in the first three days; industry analysts had projected $40–45 million. 
A week prior, the film had opened at number one in Australia, earning AUD$3.33 million. During its second weekend, Furiosa performed relatively better in international markets, grossing $21 million, which Deadline Hollywood called "a good 38% drop ... but coming off a low base". The film premiered in Japan on 31 May 2024, one week behind the United States, and became the first non-Japanese film of 2024 to debut at number one. As reported by Variety, it also held the number one spot in China on its first day (7 June), over the country's Dragon Boat Festival holiday weekend, but its $3.58 million take in its first three-day weekend left it outside the top five. However, The Hollywood Reporter reported that the film opened in sixth place with $3.7 million as a batch of local releases dominated ticket sales; Chinese ticketing app Maoyan forecast Furiosa to finish its run with about $7.5 million. Analysis Industry analysts identified a variety of reasons for Furiosa's opening weekend, which was considered weak in light of what film consultant David A. Gross called "outstanding reviews and a good audience score". The opening weekend was not entirely unexpected; in April 2024, Deadline Hollywood reported that "some" sources were expecting Furiosa to compete neck-and-neck with The Garfield Movie. However, following Furiosa's disappointing second weekend, an anonymous studio executive told a reporter that "it's mind-numbing that Furiosa hasn't grossed $50 million domestically." Gross and The Hollywood Reporter's Pamela McClintock wrote that Furiosa was hurt by the industry-wide disruption to the film production schedule caused by the 2023 Hollywood labor disputes, a position that (according to The New York Times) the film studios had been pushing for several months. According to Gross: "Moviegoing thrives on momentum and rhythm: one strong movie after another bringing fans to the multiplex once or more per month. Right now, the schedule is thin." 
Despite being a female-fronted action film, Furiosa's opening weekend viewership skewed heavily male (72%) and young (55% of viewers were between ages 18–34). TheWrap's Jeremy Fuster speculated that one of the reasons for the "awful" underperformance was because the film is not a four-quadrant tentpole, writing: "Furiosa wasn't ever expected to be a Fast & Furious or Disney remake-level moneymaker for theaters, skewing more towards male audiences and to longtime Mad Max fans." Forbes' Paul Tassi praised Hemsworth and Taylor-Joy's performances, but questioned whether Furiosa commanded the same kind of brand recognition as a traditional IP-led tentpole feature, given that it was "a prequel spin-off of a side character in Fury Road who is not even being played by the same actress this time". After Bad Boys: Ride or Die debuted above expectations, box-office analyst Scott Mendelson tweeted that "sequels soar, prequels stumble, and 'originals' struggle, just like nearly every other summer". Furiosa's domestic grosses disproportionately came from premium large format screenings (PLFs) like IMAX and Dolby Cinema, which command higher ticket prices than screenings in regular theatres, and (according to Warner Bros. executives) may appeal more to diehard fans. In its opening weekend, 54% of Furiosa's domestic gross came from PLFs, compared to 48% for Oppenheimer, 44% for Dune: Part Two, 42% for Mission: Impossible – Dead Reckoning Part One, and 37% for Indiana Jones and the Dial of Destiny. Conversely, Screen Rant's Kate Bove suggested that Hollywood studios' eagerness to push content onto their own streaming services had encouraged everyday filmgoers to put off watching Furiosa in theatres on the assumption that Warner Bros. would "quickly transition" Furiosa to Max. She added that high budgets for tentpole features were "rais[ing] the bar for box office success". 
Several critics and filmmakers urged audiences to watch the film without being put off by its weak box-office results. Vulture's Bilge Ebiri, who praised Miller for taking "a big franchise sequel and turn[ing] it into something strange, sublime, and potentially off-putting", urged analysts to focus on long-term performance and to give Furiosa time to grab a foothold in the marketplace, rather than write off the film based on its opening weekend. Ebiri also pointed to the 2023 cultural phenomenon of "Barbenheimer" (Barbie and Oppenheimer), which accumulated 75% of its total grosses after the opening weekend due to strong word of mouth. Director Wes Ball, whose film Kingdom of the Planet of the Apes had moved up its release date to avoid directly competing with Furiosa, took to Twitter to encourage people to watch the film on the big screen, tweeting: "Like the movie or not, creative swings like this don't come around often. When they do, try to enjoy the ambition of it all in a great theater ... Furiosa was made because Fury Road was beloved, not because it was a box office hit." Critical response On the review aggregator website Rotten Tomatoes, 90% of 419 critics' reviews are positive, with an average rating of 7.9/10. The website's consensus reads: "Retroactively enriching Fury Road with greater emotional heft if not quite matching it in propulsive throttle, Furiosa is another glorious swerve in mastermind George Miller's breathless race towards cinematic Valhalla." Metacritic, which uses a weighted average, assigned the film a score of 79 out of 100, based on 64 critics, indicating "generally favorable" reviews. Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale, the same as Fury Road, while those polled by PostTrak gave the film an average of 4 1/2 stars out of 5, with 70% saying they would definitely recommend it. 
Writing for RogerEbert.com, Robert Daniels awarded the film 4 out of 4 stars, and called it "one of the best prequels ever made". He praised the action sequences, performances, and storyline. Pete Hammond of Deadline Hollywood viewed the film as possessing "the best screenplay of any Mad Max film". The Guardian's Peter Bradshaw called Taylor-Joy "an overwhelmingly convincing action heroine". Writing for Empire, John Nugent awarded the film 5 out of 5 stars, and described Taylor-Joy as "phenomenal", finding the "right balance of steeliness and fractured humanity that Theron instilled". Jada Yuan from The Washington Post thought that Hemsworth had "created one of the all-time-great screen villains" and Jake Wilson of The Sydney Morning Herald saw him "steal[ing] the show". In a critical review, Owen Gleiberman of Variety perceived Furiosa as "franchise overkill" and as filled with "pretension". Nicholas Barber of BBC also disliked some aspects of the film, giving it 3 out of 5 stars. He viewed the plot as meandering and as draining, writing: "You soon reach the point where you're sick of sand, sick of explosions, sick of off-puttingly sadistic violence." Stephanie Zacharek's review in Time similarly criticised the film as "a slog that's working hard to persuade us we're having a good time". John McDonald of the Australian Financial Review opined that part of the film's "failure may be attributed to the writing, but also to Hemsworth's woodenness as an actor". In July 2024, Theron confirmed she had seen the film, stating: "It's amazing, it's a beautiful film." When asked if she had talked to Taylor-Joy at all throughout the process or since its release, Theron said: "No, we've really been trying to connect. It's been one of those – we can actually make a comedy out of it. We keep running into each other and in places when we don't have time to really talk to each other, so we're constantly like, 'Oh my god, OK, let's get together!' And then life takes over. 
But it will happen when it's right." By year's end, RogerEbert.com critic Matt Zoller Seitz, along with Cortlyn Kelly, named Furiosa the best of their top ten films of 2024, having called it "a triumph from George Miller" despite its imperfections and "a film in the tradition of other late-career masterpieces by great directors, clearly less interested in recycling the same established templates yet again, and revisiting familiar themes and situations that were once presented more straightforwardly with a more ambivalent or complicated attitude." Lucas Kloberdanz-Dyck, writer for Collider, ranked it as the best action film of the year, writing: "Furiosa produces another classic tale of vengeance. This movie is a worthy entry in the Mad Max franchise and continues to revolutionize action movies with impressive practical stunts. The oil tanker scene alone is one of the best sequences in an action movie, but the nonstop narrative has exhilarating and heart-pumping moments that establish Furiosa as one of the greatest action movies ever, let alone the best of 2024." Many filmmakers, including Maggie Betts, Davy Chou, Robert Eggers, Jeff Fowler, Drew Goddard, Luca Guadagnino, Don Hertzfeldt, David Lowery, Pascal Plante, Celine Song, Nacho Vigalondo and Adam Wingard, cited the film as among their favorites of 2024. Accolades Future In the days after the film's disappointing opening weekend, pundits suggested that it had lowered the chances that Warner Bros. would greenlight Mad Max: The Wasteland, a second Fury Road prequel focusing on Max Rockatansky that Miller had teased for years. At that time (May 2024), The Hollywood Reporter reported that The Wasteland was not yet in development. Miller had previously clarified that The Wasteland's source material (lore and other background material written in preparation for Fury Road) had not yet been adapted into a screenplay. 
He also said that he was "waiting to see the reception on Furiosa" before taking more concrete steps to develop The Wasteland into a feature film. Several weeks after the release of Furiosa, Hardy (who was promoting The Bikeriders at the time) said "I don't think it's happening" in an interview, though it was unclear whether he was referring to his own involvement or to the film itself. In October 2024, while promoting Venom: The Last Dance, Hardy was further asked about whether he would reprise his role for an additional Mad Max film, responding: "No, I haven't been told anything about it yet, but obviously I'd love to do that ... George already has a script called The Wasteland, which is like quite specific, so I'm aware of that. It depends on whether they're making it." In February 2025, Miller stated in an interview with Vulture that he was still interested in making The Wasteland despite Furiosa's underperformance at the box office and would do so if given permission by Warner Bros., but he admitted that he wanted to focus on other projects first.
Wikipedia
The principle of rationality (or rationality principle) was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis. Cognitive scientist Allen Newell elaborated on the principle in his account of knowledge level modeling. Popper Popper called for social science to be grounded in what he called situational analysis or situational logic. This requires building models of social situations which include individual actors and their relationship to social institutions, e.g. markets, legal codes, bureaucracies, etc. These models attribute certain aims and information to the actors. This forms the 'logic of the situation', the result of reconstructing meticulously all circumstances of an historical event. The 'principle of rationality' is the assumption that people are instrumental in trying to reach their goals, and this is what drives the model. Popper believed that this model could be continuously refined to approach the objective truth. Popper called his principle of rationality nearly empty (a technical term meaning without empirical content) and strictly speaking false, but nonetheless tremendously useful. These remarks earned him a lot of criticism because he seemed to have departed from the position of his famous Logic of Scientific Discovery. Among the many philosophers who have discussed Popper's principle of rationality since the 1960s are Noretta Koertge, R. Nadeau, Viktor J. Vanberg, Hans Albert, E. Matzner, Ian C. Jarvie, Mark A. Notturno, John Wettersten, Ian C. Böhm. 
Newell In the context of knowledge-based systems, Newell (in 1982) proposed the following principle of rationality: "If an agent has knowledge that one of its actions will lead to one of its goals, then the agent will select that action." This principle is employed by agents at the knowledge level to move closer to a desired goal. An important philosophical difference between Newell and Popper is that Newell argued that the knowledge level is real in the sense that it exists in nature and is not made up. This allowed Newell to treat the rationality principle as a way of understanding nature and avoid the problems Popper ran into by treating knowledge as non-physical and therefore non-empirical. See also Hermeneutics Rational choice
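Newell's formulation amounts to a simple decision rule, which can be sketched in a few lines of Python. This is an illustrative sketch only, not Newell's own formalism; the action–outcome dictionary and goal set are hypothetical.

```python
def select_action(knowledge, goals):
    """Newell's principle of rationality as a decision rule: if the
    agent knows that one of its actions leads to one of its goals,
    it selects that action (illustrative sketch)."""
    for action, outcome in knowledge.items():
        if outcome in goals:
            return action
    return None  # no known action reaches a goal

# The agent knows that opening the door leads outside, which is its goal.
action = select_action({"wait": "inside", "open_door": "outside"}, {"outside"})
```

The rule is deliberately "nearly empty" in Popper's sense: it says nothing about how the knowledge was acquired, only how it connects to action.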
Wikipedia
FreeFem++ is a programming language and software package focused on solving partial differential equations using the finite element method. FreeFem++ is written in C++ and developed and maintained by Université Pierre et Marie Curie and Laboratoire Jacques-Louis Lions. It runs on Linux, Solaris, macOS and Microsoft Windows systems. FreeFem++ is free software (LGPL). The FreeFem++ language is inspired by C++. There is an IDE called FreeFem++-cs. History The first version was created in 1987 by Olivier Pironneau and was named MacFem (it only worked on Macintosh); PCFem appeared some time later. Both were written in Pascal. In 1992 it was re-written in C++ and named FreeFem. Later versions, FreeFem+ (1996) and FreeFem++ (1998), used that programming language too. Other versions FreeFem++ includes versions for console mode and MPI FreeFem3D Deprecated versions: FreeFem+ FreeFem See also List of finite element software packages References External links Official website
Wikipedia
The modulation sphere or M-space formulation is a scheme representing the combined effects of phase modulation and amplitude modulation applied together to a carrier wave. The relations between the two modulations on the carrier are also accounted for. The modulation sphere representation relates three variables in three-dimensional space, M1, M2 and M3: The M1 axis defines which modulation type (AM or PM) dominates the other at a given instant on the carrier, and to what degree. The M2 axis defines whether the interaction between the two modulations is correlative or anti-correlative (see Correlation) in phase, and to what degree, at the same instant. The M3 axis defines the degree to which the two modulations are in quadrature phase with each other at that instant, also showing which of the generated sidebands (LSB or USB) has more power and to what degree. References Cusack, Benedict (September 2004). Modulation Locking Subsystems for Gravitational Wave Detectors (PDF) (MPhil). Australian National University. Archived (PDF) from the original on 2021-03-26. Retrieved 2021-08-16. Cusack, Benedict J.; Sheard, Benjamin S.; Shaddock, Daniel A.; Gray, Malcolm B.; Lam, Ping Koy; Whitcomb, Stan E. (10 September 2004). "Electro-optic modulator capable of generating simultaneous amplitude and phase modulations" (PDF). Applied Optics. 43 (26): 5079–5091. Archived (PDF) from the original on 2018-07-24. Retrieved 2021-08-16.
Wikipedia
Linear (or Longitudinal) Timecode (LTC) is an encoding of SMPTE timecode data in an audio signal, as defined in the SMPTE 12M specification. The audio signal is commonly recorded on a VTR track or other storage media. The bits are encoded using the biphase mark code (also known as FM): a 0 bit has a single transition at the start of the bit period. A 1 bit has two transitions, at the beginning and middle of the period. This encoding is self-clocking. Each frame is terminated by a 'sync word' which has a special predefined sync relationship with any video or film content. A special bit in the linear timecode frame, the biphase mark correction bit, ensures that there are an even number of AC transitions in each timecode frame. The sound of linear timecode is a jarring and distinctive noise and has been used as a sound-effects shorthand to imply telemetry or computers. Generation and Distribution In broadcast video situations, the LTC generator should be tied into house black burst, as should all devices using timecode, to ensure correct color framing and correct synchronization of all digital clocks. When synchronizing multiple clock-dependent digital devices together with video, such as digital audio recorders, the devices must be connected to a common word clock signal that is derived from the house black burst signal. This can be accomplished by using a generator that generates both black burst and video-resolved word clock, or by synchronizing the master digital device to video, and synchronizing all subsequent devices to the word clock output of the master digital device (and to LTC). Made up of 80 bits per frame, where there may be 24, 25 or 30 frames per second, LTC timecode varies from 960 Hz (binary zeros at 24 frames/s) to 2400 Hz (binary ones at 30 frames/s), and thus is comfortably in the audio frequency range. LTC can exist as either a balanced or unbalanced signal, and can be treated as an audio signal with regard to distribution. 
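The biphase mark encoding described above can be sketched in a few lines of Python, representing the signal as two half-period levels per bit cell (the function names and two-level representation are illustrative, not taken from the SMPTE specification):

```python
def biphase_mark_encode(bits, level=0):
    """Biphase mark (FM) encoding: every bit cell begins with a
    transition; a 1 bit adds a second transition at mid-cell, while a
    0 bit holds its level for the whole cell. Returns two half-cell
    levels per bit; the stream is self-clocking and polarity-insensitive."""
    out = []
    for b in bits:
        level ^= 1          # mandatory transition at each cell boundary
        out.append(level)
        if b:
            level ^= 1      # extra mid-cell transition encodes a 1
        out.append(level)
    return out

def biphase_mark_decode(halves):
    """A cell whose two halves differ carried a 1; equal halves, a 0."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]
```

The frequency range quoted above follows from this scheme: at 24 frame/s an 80-bit frame gives 1920 bit/s, and a stream of all 0 bits toggles once per cell, producing a 960 Hz square wave; at 30 frame/s (2400 bit/s) a stream of all 1 bits toggles every half cell, producing 2400 Hz.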
Like audio, LTC can be distributed by standard audio wiring, connectors, distribution amplifiers, and patchbays, and can be ground-isolated with audio transformers. It can also be distributed via 75 ohm video cable and video distribution amplifiers, although the voltage attenuation caused by using a 75 ohm system may cause the signal to drop to a level that cannot be read by some equipment. Care has to be taken with analog audio to avoid audible 'breakthrough' (aka "crosstalk") from the LTC track to the audio tracks. LTC care:
- Avoid percussive sounds close to LTC
- Never process LTC with noise reduction, EQ or compression
- Allow pre-roll and post-roll
- To create negative timecode, add one hour to the time (avoiding the midnight effect)
- Always make the slowest device the master
Longitudinal SMPTE timecode should be played back at a middle level when recorded on an audio track, as both low and high levels will introduce distortion. Longitudinal timecode data format The basic format is an 80-bit code that gives the time of day to the second, and the frame number within the second. Values are stored in binary-coded decimal, least significant bit first. There are thirty-two bits of user data, usually used for a reel number and date. Bit 10 is set to 1 if drop frame numbering is in use; frame numbers 0 and 1 are skipped during the first second of every minute, except multiples of 10 minutes. This converts 30 frame/second time code to the 29.97 frame/second NTSC standard. Bit 11, the color framing bit, is set to 1 if the time code is synchronized to a color video signal. The frame number modulo 2 (for NTSC and SECAM) or modulo 4 (for PAL) should be preserved across cuts in order to avoid phase jumps in the chrominance subcarrier. 
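As a sketch of this 80-bit layout, the following assembles a frame at 30 frame/s. The BCD field positions, the sync pattern in bits 64–79, and the use of bit 27 as the polarity-correction bit at this rate follow the description in this article; a real implementation should consult SMPTE 12M directly.

```python
# Sync pattern (bits 64-79): twelve 1 bits preceded by 00 and followed by 01.
SYNC = [0, 0] + [1] * 12 + [0, 1]

def bcd_lsb(value, nbits):
    """A decimal digit as a bit list, least significant bit first."""
    return [(value >> i) & 1 for i in range(nbits)]

def ltc_frame(hh, mm, ss, ff):
    """Build one 80-bit LTC frame (30 frame/s, non-drop, user bits zero)."""
    bits = [0] * 80
    bits[0:4]   = bcd_lsb(ff % 10, 4)   # frame units
    bits[8:10]  = bcd_lsb(ff // 10, 2)  # frame tens
    bits[16:20] = bcd_lsb(ss % 10, 4)   # seconds units
    bits[24:27] = bcd_lsb(ss // 10, 3)  # seconds tens
    bits[32:36] = bcd_lsb(mm % 10, 4)   # minutes units
    bits[40:43] = bcd_lsb(mm // 10, 3)  # minutes tens
    bits[48:52] = bcd_lsb(hh % 10, 4)   # hours units
    bits[56:58] = bcd_lsb(hh // 10, 2)  # hours tens
    bits[64:80] = SYNC
    if sum(bits) % 2:                   # polarity-correction bit (bit 27
        bits[27] = 1                    # at this rate): force even 1s count
    return bits
```

The final step mirrors the rule described below the format table: the whole frame, sync word included, must contain an even number of 1 bits so that every frame starts with the same signal polarity.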
Bits 27, 43, and 59 differ between 25 frame/s time code and other frame rates (30, 29.97, or 24). The bits are: "Polarity correction bit" (bit 59 at 25 frame/s, bit 27 at other rates): this bit is chosen to provide an even number of 0 bits in the whole frame, including the sync code. (Since the frame is an even number of bits long, this implies an even number of 1 bits, and is thus an even parity bit. Since the sync code includes an odd number of 1 bits, it is an odd parity bit over the data.) This keeps the phase of each frame consistent, so it always starts with a rising edge at the beginning of bit 0. This allows seamless splicing of different time codes, and lets it be more easily read with an oscilloscope. "Binary group flag" bits BGF0 and BGF2 (bits 27 and 43 at 25 frame/s, bits 43 and 59 at other rates): these indicate the format of the user bits. Both 0 indicates no (or unspecified) format. Only BGF0 set indicates four 8-bit characters (transmitted little-endian). The combinations with BGF2 set are reserved. Bit 58, unused in earlier versions of the specification, is now defined as "binary group flag 1" and indicates that the time code is synchronized to an external clock; if zero, the time origin is arbitrary. The sync pattern in bits 64 through 79 includes 12 consecutive 1 bits, which cannot appear anywhere else in the time code. Assuming all user bits are set to 1, the longest run of 1 bits that can appear elsewhere in the time code is 10, bits 9 to 18 inclusive. The sync pattern is preceded by 00 and followed by 01. This is used to determine whether an audio tape is running forward or backward. See also Vertical interval timecode Burnt-in timecode MIDI timecode CTL timecode AES-EBU embedded timecode Rewritable consumer timecode VTR Manchester code Biphase mark code References External links LGPL library to en/decode LTC in software
Wikipedia
William C. "Bill" Mann (died August 13, 2004, aged 69) was a computer scientist and computational linguist, the originator of rhetorical structure theory (RST) and a president of the Association for Computational Linguistics (1987–1988). He is especially well known for his work in text generation. He received a Ph.D. in artificial intelligence and computer science at Carnegie Mellon University under Herbert Simon and Allen Newell. From the mid-1970s until 1990, he was a researcher at the Information Sciences Institute of the University of Southern California. From 1990 to 1996, he was a consultant with the Summer Institute of Linguistics, based in Nairobi. William C. Mann died on August 13, 2004, after a long struggle with leukemia. Publications William C. Mann and Sandra A. Thompson, "Rhetorical structure theory: toward a functional theory of text organization", Text 8:243-281 (1988). Maite Taboada, William C. Mann, "Applications of Rhetorical Structure Theory", Discourse Studies 8:3:567-588 (2006) Notes Bibliography Christian M.I.M. Matthiessen, "Remembering Bill Mann", Computational Linguistics 31:2:161-171 External links Bibliography of publications and reports by the creators of RST
Wikipedia
In computing, data deduplication is a technique for eliminating duplicate copies of repeating data. Successful implementation of the technique can improve storage utilization, which may in turn lower capital expenditure by reducing the overall amount of storage media required to meet storage capacity needs. It can also be applied to network data transfers to reduce the number of bytes that must be sent. The deduplication process requires comparison of data 'chunks' (also known as 'byte patterns') which are unique, contiguous blocks of data. These chunks are identified and stored during a process of analysis, and compared to other chunks within existing data. Whenever a match occurs, the redundant chunk is replaced with a small reference that points to the stored chunk. Given that the same byte pattern may occur dozens, hundreds, or even thousands of times (the match frequency is dependent on the chunk size), the amount of data that must be stored or transferred can be greatly reduced. A related technique is single-instance (data) storage, which replaces multiple copies of content at the whole-file level with a single shared copy. While it is possible to combine this with other forms of data compression and deduplication, it is distinct from newer approaches to data deduplication (which can operate at the segment or sub-block level). Deduplication is different from data compression algorithms, such as LZ77 and LZ78. Whereas compression algorithms identify redundant data inside individual files and encode this redundant data more efficiently, the intent of deduplication is to inspect large volumes of data and identify large sections – such as entire files or large sections of files – that are identical, and replace them with a shared copy. Functioning principle For example, a typical email system might contain 100 instances of the same 1 MB (megabyte) file attachment. 
Each time the email platform is backed up, all 100 instances of the attachment are saved, requiring 100 MB storage space. With data deduplication, only one instance of the attachment is actually stored; the subsequent instances are referenced back to the saved copy for a deduplication ratio of roughly 100 to 1. Deduplication is often paired with data compression for additional storage savings: Deduplication is first used to eliminate large chunks of repetitive data, and compression is then used to efficiently encode each of the stored chunks. In computer code, deduplication is done by, for example, storing information in variables so that they don't have to be written out individually but can be changed all at once at a central referenced location. Examples are CSS classes and named references in MediaWiki. Benefits Storage-based data deduplication reduces the amount of storage needed for a given set of files. It is most effective in applications where many copies of very similar or even identical data are stored on a single disk. In the case of data backups, which routinely are performed to protect against data loss, most data in a given backup remain unchanged from the previous backup. Common backup systems try to exploit this by omitting (or hard linking) files that haven't changed or storing differences between files. Neither approach captures all redundancies, however. Hard-linking does not help with large files that have only changed in small ways, such as an email database; storing differences only finds redundancies in adjacent versions of a single file (consider a section that was deleted and later added in again, or a logo image included in many documents). In-line network data deduplication is used to reduce the number of bytes that must be transferred between endpoints, which can reduce the amount of bandwidth required. See WAN optimization for more information. 
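The attachment example can be made concrete with a minimal content-addressed chunk store. The class name and fixed-size chunking are illustrative; real systems use more sophisticated chunking, discussed later in this article.

```python
import hashlib

class DedupStore:
    """Minimal chunk-level deduplication sketch: each unique chunk is
    stored once, keyed by its SHA-256 digest; a 'file' is just a list
    of digests referencing stored chunks."""
    def __init__(self):
        self.chunks = {}                          # digest -> chunk bytes

    def write(self, data, chunk_size=4096):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only new chunks
            refs.append(digest)
        return refs

    def read(self, refs):
        return b"".join(self.chunks[d] for d in refs)

# 100 backups of the same 1 MB attachment occupy roughly 1 MB, not 100 MB.
store = DedupStore()
attachment = b"x" * 1_000_000
backups = [store.write(attachment) for _ in range(100)]
```

Reading back any of the 100 reference lists reconstructs the original bytes, so the deduplication is transparent to whatever stored the data, as described below.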
Virtual servers and virtual desktops benefit from deduplication because it allows nominally separate system files for each virtual machine to be coalesced into a single storage space. At the same time, if a given virtual machine customizes a file, deduplication will not change the files on the other virtual machines—something that alternatives like hard links or shared disks do not offer. Backing up or making duplicate copies of virtual environments is similarly improved. Classification Post-process versus in-line deduplication Deduplication may occur "in-line", as data is flowing, or "post-process" after it has been written. With post-process deduplication, new data is first stored on the storage device and then a process at a later time will analyze the data looking for duplication. The benefit is that there is no need to wait for the hash calculations and lookup to be completed before storing the data, thereby ensuring that store performance is not degraded. Implementations offering policy-based operation can give users the ability to defer optimization on "active" files, or to process files based on type and location. One potential drawback is that duplicate data may be unnecessarily stored for a short time, which can be problematic if the system is nearing full capacity. Alternatively, deduplication hash calculations can be done in-line: synchronized as data enters the target device. If the storage system identifies a block which it has already stored, only a reference to the existing block is stored, rather than the whole new block. The advantage of in-line deduplication over post-process deduplication is that it requires less storage and network traffic, since duplicate data is never stored or transferred. On the negative side, hash calculations may be computationally expensive, thereby reducing the storage throughput. However, certain vendors with in-line deduplication have demonstrated equipment which performs in-line deduplication at high rates. 
Post-process and in-line deduplication methods are often heavily debated. Data formats The SNIA Dictionary identifies two methods: content-agnostic data deduplication - a data deduplication method that does not require awareness of specific application data formats. content-aware data deduplication - a data deduplication method that leverages knowledge of specific application data formats. Source versus target deduplication Another way to classify data deduplication methods is according to where they occur. Deduplication occurring close to where data is created is referred to as "source deduplication". When it occurs near where the data is stored, it is called "target deduplication". Source deduplication ensures that data on the data source is deduplicated. This generally takes place directly within a file system. The file system will periodically scan new files creating hashes and compare them to hashes of existing files. When files with the same hash are found, the duplicate copy is removed and the new file points to the old one. Unlike hard links, however, duplicated files are considered to be separate entities and if one of the duplicated files is later modified, then using a system called copy-on-write a copy of that changed file or block is created. The deduplication process is transparent to the users and backup applications. Backing up a deduplicated file system will often cause duplication to occur resulting in the backups being bigger than the source data. Source deduplication can be declared explicitly for copying operations, as no calculation is needed to know that the copied data is in need of deduplication. This leads to a new form of "linking" on file systems called the reflink (Linux) or clonefile (macOS), where one or more inodes (file information entries) are made to share some or all of their data. It is named analogously to hard links, which work at the inode level, and symbolic links that work at the filename level. 
The individual entries have a copy-on-write behavior that is non-aliasing, i.e. changing one copy afterwards will not affect other copies. Microsoft's ReFS also supports this operation. Target deduplication is the process of removing duplicates when the data was not generated at that location. An example of this would be a server connected to a SAN/NAS; the SAN/NAS would be a target for the server (target deduplication). The server is not aware of any deduplication, although it is the point of data generation. A second example would be backup. Generally this will be a backup store such as a data repository or a virtual tape library. Deduplication methods One of the most common forms of data deduplication implementations works by comparing chunks of data to detect duplicates. For that to happen, each chunk of data is assigned an identification, calculated by the software, typically using cryptographic hash functions. In many implementations, the assumption is made that if the identification is identical, the data is identical, even though this cannot be true in all cases due to the pigeonhole principle; other implementations do not assume that two blocks of data with the same identifier are identical, but actually verify that data with the same identification is identical. If the software either assumes that a given identification already exists in the deduplication namespace or actually verifies the identity of the two blocks of data, depending on the implementation, then it will replace that duplicate chunk with a link. Once the data has been deduplicated, upon read back of the file, wherever a link is found, the system simply replaces that link with the referenced data chunk. The deduplication process is intended to be transparent to end users and applications. Commercial deduplication implementations differ by their chunking methods and architectures. Chunking: In some systems, chunks are defined by physical layer constraints (e.g. 4 KB block size in WAFL). 
In some systems only complete files are compared, which is called single-instance storage or SIS. The most intelligent (but CPU intensive) method of chunking is generally considered to be sliding-block, also called Content-Defined Chunking. In sliding block, a window is passed along the file stream to seek out more naturally occurring internal file boundaries. Client backup deduplication: This is the process where the deduplication hash calculations are initially created on the source (client) machines. Files that have identical hashes to files already in the target device are not sent, the target device just creates appropriate internal links to reference the duplicated data. The benefit of this is that it avoids data being unnecessarily sent across the network thereby reducing traffic load. Primary storage and secondary storage: By definition, primary storage systems are designed for optimal performance, rather than lowest possible cost. The design criterion for these systems is to increase performance, at the expense of other considerations. Moreover, primary storage systems are much less tolerant of any operation that can negatively impact performance. Also by definition, secondary storage systems contain primarily duplicate, or secondary copies of data. These copies of data are typically not used for actual production operations and as a result are more tolerant of some performance degradation, in exchange for increased efficiency. To date, data deduplication has predominantly been used with secondary storage systems. The reasons for this are two-fold: First, data deduplication requires overhead to discover and remove the duplicate data. In primary storage systems, this overhead may impact performance. The second reason why deduplication is applied to secondary data, is that secondary data tends to have more duplicate data. Backup applications in particular commonly generate significant portions of duplicate data over time. 
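The sliding-block (content-defined chunking) idea mentioned above can be sketched as follows. The running hash here is a toy polynomial accumulator rather than a production Rabin fingerprint, and the parameter names are illustrative; the point is only that cut positions are chosen by content, not by fixed offsets.

```python
def cdc_chunks(data, mask_bits=10, min_size=64):
    """Content-defined chunking sketch: scan the stream with a running
    hash and cut a boundary whenever its low bits are all zero, so cut
    points follow the content itself. An insertion early in the stream
    therefore shifts only nearby boundaries; chunks after the next
    content-determined cut line up with the original again."""
    mask = (1 << mask_bits) - 1
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = (h * 31 + byte) & 0xFFFFFFFF     # toy rolling-style hash
        if i + 1 - start >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0              # reset at each boundary
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Each resulting chunk would then be hashed and looked up exactly as in fixed-size deduplication; the gain over fixed blocks is resilience to insertions and deletions that would otherwise shift every subsequent block boundary.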
Data deduplication has been deployed successfully with primary storage in some cases where the system design does not require significant overhead, or impact performance. Single instance storage Single-instance storage (SIS) is a system's ability to take multiple copies of content objects and replace them by a single shared copy. It is a means to eliminate data duplication and to increase efficiency. SIS is frequently implemented in file systems, email server software, data backup, and other storage-related computer software. Single-instance storage is a simple variant of data deduplication. While data deduplication may work at a segment or sub-block level, single instance storage works at the object level, eliminating redundant copies of objects such as entire files or email messages. Single-instance storage can be used alongside (or layered upon) other data duplication or data compression methods to improve performance in exchange for an increase in complexity and for (in some cases) a minor increase in storage space requirements. Drawbacks and concerns One method for deduplicating data relies on the use of cryptographic hash functions to identify duplicate segments of data. If two different pieces of information generate the same hash value, this is known as a collision. The probability of a collision depends mainly on the hash length (see birthday attack). Thus, the concern arises that data corruption can occur if a hash collision occurs, and additional means of verification are not used to verify whether there is a difference in data, or not. Both in-line and post-process architectures may offer bit-for-bit validation of original data for guaranteed data integrity. The hash functions used include standards such as SHA-1, SHA-256, and others. The computational resource intensity of the process can be a drawback of data deduplication. To improve performance, some systems utilize both weak and strong hashes. 
Weak hashes are much faster to calculate but there is a greater risk of a hash collision. Systems that utilize weak hashes will subsequently calculate a strong hash and will use it as the determining factor in whether it is actually the same data or not. Note that the system overhead associated with calculating and looking up hash values is primarily a function of the deduplication workflow. The reconstitution of files does not require this processing and any incremental performance penalty associated with re-assembly of data chunks is unlikely to impact application performance. Another concern is the interaction of compression and encryption. The goal of encryption is to eliminate any discernible patterns in the data. Thus encrypted data cannot be deduplicated, even though the underlying data may be redundant. Although not a shortcoming of data deduplication, there have been data breaches when insufficient security and access validation procedures are used with large repositories of deduplicated data. In some systems, as typical with cloud storage, an attacker can retrieve data owned by others by knowing or guessing the hash value of the desired data. Implementations Deduplication is implemented in some filesystems such as in ZFS or Write Anywhere File Layout and in different disk arrays models. It is a service available on both NTFS and ReFS on Windows servers. See also References External links Biggar, Heidi (2007-12-11). WebCast: The Data Deduplication Effect Using Latent Semantic Indexing for Data Deduplication. A Better Way to Store Data. What Is the Difference Between Data Deduplication, File Deduplication, and Data Compression? - Database from eWeek SNIA DDSR SIG Understanding Data Deduplication Ratios Doing More with Less by Jatinder Singh DeDuplication Demo
Wikipedia
This is a collection of lists of mammal species by the estimated global population, divided by orders. Lists only exist for some orders; for example, the most diverse order - rodents - is missing. Much of the data in these lists were created by the International Union for Conservation of Nature (IUCN) Global Mammal Assessment Team, which consists of 1700 mammalogists from over 130 countries. They recognize 5488 species in the class. These lists are not comprehensive, as not all mammals have had their numbers estimated. For example, a live specimen of the spade-toothed whale was first observed in December 2010, and the event was only recognized as such in November 2012; no estimate yet exists for the global population. The quoted numbers may be accurate only to an order of magnitude. It is estimated that the total number of wild mammals in the world is about 130 billion. Lists by taxonomic order List of even-toed ungulates by population – bos species, bovidae artiodactyls, suiformes, camelidae species, cervidae artiodactyls, giraffa species, hippopotami. List of cetacean species with population estimates – dolphins, porpoises, whales. List of odd-toed ungulates by population – equines, rhinoceros, tapirs. List of carnivorans by population – domestic and wild feliformians and caniformians, pinnipeds, ursid species, musteloidea species, herpestidae species, etc. List of bats by population – Chiropterans. List of elephant species by population – Elephants. List of marsupials by population – Wombats, koalas and kangaroos. List of lagomorphs by population – rabbits, hares, and pikas. List of other Afrotheres by population – seacows, sengis, golden moles, otter shrews, tenrecs, hyraxes and the aardvark. List of rodents by population – cavies, squirrels, springhares, mice, beaver etc. 
List of eulipotyphlans by population – true moles, shrews, shrew-like moles, hedgehogs, moonrats, solenodons, and desmans See also List of birds by population Lists of organisms by population World population (humans)
Wikipedia
Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two", as in "two meanings"). The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity. Linguistic forms Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness. Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Lexical ambiguity The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. 
For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy). The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to make a used word clearer. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation. The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from his or her candidate of choice. Ambiguity is a powerful tool of political science. More problematic are words whose multiple meanings express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock"). 
Semantic and syntactic ambiguity Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw"). Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity. For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar. Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". 
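For formal languages, the degree of syntactic ambiguity can even be counted. Under the classic ambiguous expression grammar E → E '+' E | 'n', the number of distinct parse trees for a chain of k operands is a Catalan number; a small illustrative computation (the function name is ours, not standard terminology):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_parses(k):
    """Number of parse trees for 'n + n + ... + n' (k operands) under
    the ambiguous grammar E -> E '+' E | 'n'."""
    if k == 1:
        return 1              # a lone 'n' has exactly one parse
    # Choose how many operands the top-level '+' puts on its left side.
    return sum(count_parses(i) * count_parses(k - i) for i in range(1, k))
```

Two operands parse only one way, but three already parse two ways, (n+n)+n and n+(n+n), mirroring the "$10 or your voucher and your license" example above, where the grouping of the connectives determines the meaning.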
It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken language can contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen. Philosophy Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases. In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. 
Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. 
Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity. Literature and rhetoric In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). In narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby. Mathematical notation Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, there are still some inherent ambiguities due to lexical, syntactic, and semantic reasons that persist in mathematical notation. Names of functions The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires scaling the argument or the resulting value; sometimes, the same name of the function is used, causing confusion.
Examples of such underestablished functions: the sinc function; the elliptic integral of the third kind (when translating an elliptic integral from Maple to Mathematica, one should replace the second argument by its square; when dealing with complex values, this may cause problems); the exponential integral; the Hermite polynomial. Expressions Ambiguous expressions often appear in physical and mathematical texts. It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f ( x ) {\displaystyle f=f(x)} . Then, if one sees f = f ( y + 1 ) {\displaystyle f=f(y+1)} , there is no way to distinguish whether it means f = f ( x ) {\displaystyle f=f(x)} multiplied by ( y + 1 ) {\displaystyle (y+1)} , or the function f {\displaystyle f} evaluated at an argument equal to ( y + 1 ) {\displaystyle (y+1)} . In each case of use of such notations, the reader is expected to perform the deduction and recover the intended meaning. Creators of algorithmic languages try to avoid ambiguities. Many programming languages (for example, C++ and Fortran) require the character * as a symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, a function and a variable; in particular, the expression f = f ( x ) {\displaystyle f=f(x)} is qualified as an error. The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right.
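In a programming language the function-versus-variable ambiguity described above cannot arise, because function application and multiplication use visibly different syntax. The following is a minimal illustrative sketch in Python (the names f, f_var and y are hypothetical, chosen only for this example):

```python
def f(x):
    # A hypothetical function: "f evaluated at an argument".
    return x + 2

f_var = 3  # A variable deliberately given an f-like name.
y = 4

applied = f(y + 1)            # Unambiguously a function call: f evaluated at 5.
multiplied = f_var * (y + 1)  # Unambiguously a product: 3 * 5.

print(applied, multiplied)
```

Unlike the mathematical notation f = f(y + 1), the two readings require different source text, so no deduction from context is needed.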
Until the last century, many publications assumed that multiplication is performed first; for example, a / b c {\displaystyle a/bc} is interpreted as a / ( b c ) {\displaystyle a/(bc)} . In this case, the insertion of parentheses is required when translating the formulas into an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity. In the scientific journal style, roman letters are used to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expression s i n {\displaystyle sin} does not denote the sine function, but the product of the three variables s {\displaystyle s} , i {\displaystyle i} , n {\displaystyle n} , although in the informal notation of a slide presentation it may stand for sin {\displaystyle \sin } . Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation. For example, in the notation T m n k {\displaystyle T_{mnk}} , the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables m {\displaystyle m} , n {\displaystyle n} and k {\displaystyle k} , or an index notation for a trivalent tensor. Examples of potentially confusing ambiguous mathematical expressions An expression such as sin 2 ⁡ α / 2 {\displaystyle \sin ^{2}\alpha /2} can be understood to mean either ( sin ⁡ ( α / 2 ) ) 2 {\displaystyle (\sin(\alpha /2))^{2}} or ( sin ⁡ α ) 2 / 2 {\displaystyle (\sin \alpha )^{2}/2} . Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing sin 2 ⁡ ( α / 2 ) {\displaystyle \sin ^{2}(\alpha /2)} or 1 2 sin 2 ⁡ α {\textstyle {\frac {1}{2}}\sin ^{2}\alpha } .
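The left-to-right rule for operators of equal priority can be demonstrated directly. This is a small illustrative sketch in Python (the numeric values are arbitrary):

```python
# In Python, as in most programming languages, "/" and "*" have equal
# priority and associate left to right, so a / b * c is grouped as
# (a / b) * c, never as a / (b * c).
a, b, c = 12.0, 4.0, 3.0

left_to_right = a / b * c  # (12 / 4) * 3
explicit = a / (b * c)     # 12 / 12

print(left_to_right, explicit)
```

A formula written as a/bc in a text that follows the multiplication-first convention therefore needs explicit parentheses, a / (b * c), when transcribed into such a language.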
The expression sin − 1 ⁡ α {\displaystyle \sin ^{-1}\alpha } means arcsin ⁡ ( α ) {\displaystyle \arcsin(\alpha )} in several texts, though it might be thought to mean ( sin ⁡ α ) − 1 {\displaystyle (\sin \alpha )^{-1}} , since sin n ⁡ α {\displaystyle \sin ^{n}\alpha } commonly means ( sin ⁡ α ) n {\displaystyle (\sin \alpha )^{n}} . Conversely, sin 2 ⁡ α {\displaystyle \sin ^{2}\alpha } might seem to mean sin ⁡ ( sin ⁡ α ) {\displaystyle \sin(\sin \alpha )} , as this exponentiation notation usually denotes function iteration: in general, f 2 ( x ) {\displaystyle f^{2}(x)} means f ( f ( x ) ) {\displaystyle f(f(x))} . However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application. The expression a / 2 b {\displaystyle a/2b} can be interpreted as meaning ( a / 2 ) b {\displaystyle (a/2)b} ; however, it is more commonly understood to mean a / ( 2 b ) {\displaystyle a/(2b)} . Notations in quantum optics and quantum mechanics It is common to define the coherent states in quantum optics with | α ⟩ {\displaystyle ~|\alpha \rangle ~} and states with a fixed number of photons with | n ⟩ {\displaystyle ~|n\rangle ~} . Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and an n {\displaystyle n} -photon state if the Latin characters dominate. The ambiguity becomes even worse if | x ⟩ {\displaystyle ~|x\rangle ~} is used for states with a certain value of the coordinate, and | p ⟩ {\displaystyle ~|p\rangle ~} means a state with a certain value of the momentum, as may be done in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if normalized dimensionless variables are used. The expression | 1 ⟩ {\displaystyle |1\rangle } may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on.
The reader is supposed to guess from the context. Ambiguous terms in physics and mathematics Some physical quantities do not yet have established notations; their value (and sometimes even dimension, as in the case of the Einstein coefficients) depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by a definition suitable for the specific case. As Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning." A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing. It may mean that the ratio of the output voltage of an electric circuit to the input voltage should be doubled. It may mean that the ratio of the output power of an electric or optical circuit to the input power should be doubled. It may mean that the gain of the laser medium should be doubled, for example, by doubling the population of the upper laser level in a quasi-two-level system (assuming negligible absorption of the ground state). The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term. Confusion may also arise from the use of atomic percent as a measure of the concentration of a dopant, or of the resolution of an imaging system as a measure of the size of the smallest detail that can still be resolved against the background of statistical noise. See also Accuracy and precision. The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.
Mathematical interpretation of ambiguity In mathematics and logic, ambiguity can be considered an instance of the logical concept of underdetermination—for example, X = Y {\displaystyle X=Y} leaves open what the value of X {\displaystyle X} is—while overdetermination, except in redundant cases like X = 1 , X = 1 , X = 1 {\displaystyle X=1,X=1,X=1} , is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as X = 2 , X = 3 {\displaystyle X=2,X=3} , which has no solution. Logical ambiguity and self-contradiction are analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher. Constructed language Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages that have been created for this purpose, focusing chiefly on syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide greater technical precision than large natural languages, although historically such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn. Biology In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein's three-dimensional structure consists of dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments.
Christianity and Judaism Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God. The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts that he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox. Music In music, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value." Visual art In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. The opposite of such ambiguous images are impossible objects. Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance? 
Social psychology and the bystander effect In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternately, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies. Computer science In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system in which these units unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense. Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0×10⁶ (defined to designate the interval 950,000 to 1,050,000).
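The gap between the two interpretations can be made concrete with a short sketch. The prefix values below are the standard SI and IEC definitions; the helper function and its name are hypothetical:

```python
# Decimal (SI) prefixes versus binary (IEC) prefixes, expressed in bytes.
SI = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def to_bytes(value, prefix):
    # Resolve a prefixed quantity to bytes using whichever table defines it.
    table = SI if prefix in SI else IEC
    return value * table[prefix]

decimal_g = to_bytes(1, "G")   # 1 000 000 000
binary_gi = to_bytes(1, "Gi")  # 1 073 741 824

# The ambiguous "1 G" differs by over 7% between the two readings.
print(binary_gi - decimal_g)
```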
As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
The Registry of World Record Size Shells is a conchological work listing the largest (and in some cases smallest) verified shell specimens of various marine molluscan taxa. A successor to the earlier World Size Records of Robert J. L. Wagner and R. Tucker Abbott, it has been published on a semi-regular basis since 1997, changing ownership and publisher a number of times. Originally planned for release every two years, new editions are now published annually. Since 2008 the entire registry has been available online in the form of a searchable database. The registry is continuously expanded and now contains more than 25,000 listings and 85,000 supporting images. Certain families of attractive shells (such as cones, cowries, marginellas, and murex) are particularly favoured by collectors. World record size shells (commonly indicated by the acronym 'WRS') of species in the most popular families are much sought after by some shell collectors, and can thus command high prices. Collections of such shells are exhibited at a number of specialist museums, including the Bailey-Matthews National Shell Museum. Maximum and minimum sizes are also of interest to malacologists, and may be useful in delimiting closely related species. As an extensive compilation of maximum shell sizes, the registry has found use as a data source for scientific studies. Overview Scope Throughout its history the registry has covered four classes of molluscs: bivalves, cephalopods, gastropods, and scaphopods. Chitons have been excluded because their shells are formed from eight articulated plates and therefore the size of a fixed specimen depends in large part on the preservation method used. Smallest adult sizes have been listed beginning with a few specimens of Cypraeidae and Strombidae in the first edition, and they now additionally encompass a third family: Marginellidae. Separate records for sinistral (left-handed) shells and obviously rostrate cowries (family Cypraeidae) are also included. 
Terrestrial and freshwater species, as well as fossils (of extant taxa or otherwise), are not covered by the registry. Content The bulk of the publication—which apart from the cover is unillustrated—comprises a list of taxa and their corresponding world record sizes. Each specimen in the registry is listed alphabetically under its recognised scientific name. This is usually a binomen (species name), but subspecies, varieties and forms are also included (the latter two are used informally and are not regulated by the ICZN). In addition to the shell size, each specimen is listed with its location, owner or repository, and the year it was collected, acquired, or registered (whichever is known, listed in decreasing order of preference). Each print edition has an appendix with an alphabetical listing of entry totals for all private collectors and repositories having ownership of specimens in the registry. In the first edition, the most individual entries belonged to Victor Dan, with 369, closely followed by co-author Don Pisor on 325. This title subsequently went to Tennessee physician and world-renowned collector Pete Stimpson, who for a time held over 2,000 entries in the registry, and whose WRS specimens have been exhibited at museums including the McClung Museum of Natural History and Culture. As of the fifth print edition from 2008, the distinction of having the most WRS entries belonged to Havelet Marine, with 3,244 specimens, compared to Stimpson's 1,963. Measurements The registry's rules specify that specimens "should be measured with vernier type calipers and should reflect the greatest measurable dimension of the shell in any direction including any processes of hard shell material produced by the animal (i.e. spines, wings, keels, siphonal canals, etc.) and not including attachments, barnacles, coralline algae, or any other encrusting organisms. Long, hair-like periostracum is not to be included." 
This "greatest measurable dimension" can be at odds with the standard scientific definition of shell length (from base to apex along the central axis for gastropods, and from the umbo to the ventral margin in bivalves). Shell sizes are given in millimetres and recorded to the nearest 0.1 millimetres (0.0039 in), as is standard in conchology. To account for human error and environmental effects, new records are only accepted if they exceed the standing record by at least 0.3 mm (0.012 in). This 0.3 mm margin also applies to smallest adult sizes exceeding 10.0 mm (0.39 in). Entries for specimens that tie the standing record can also be submitted. Though not included in the registry, they are kept on file for future use in the event that the current record holder is shown to be misidentified or smaller than originally claimed. Superlative species The three largest species in the registry are the bivalves Kuphus polythalamia, Tridacna gigas and Pinna nobilis, with maximum recorded shell sizes of 1,532.0 mm (5 ft 0.31 in), 1,368.7 mm (4 ft 5.89 in) and 970.0 mm (3 ft 2.19 in), respectively. The fourth largest species, and the largest of all gastropods, is Syrinx aruanus with a maximum length of 772.0 mm (2 ft 6.39 in). There are literature records of an even larger S. aruanus specimen measuring 36 inches (914 mm), but these are erroneous and actually refer to the same specimen, which is on display at the Houston Museum of Natural Science. The extinct gastropod Campanile giganteum reached a similar or slightly greater size, while the largest extinct bivalves and especially the largest extinct shelled cephalopods were much larger still (and the internal shells of the largest extant squid species also reach much greater lengths, as they approximate the mantle length). However, only external shells (with some exceptions, e.g. Spirula spirula) of extant species are covered by the registry. 
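The registry's acceptance rule described under Measurements (sizes recorded to the nearest 0.1 mm, a 0.3 mm margin over the standing record, and exact ties kept on file but not listed) can be sketched as follows. The function names are hypothetical, not part of the registry's own procedures:

```python
MARGIN_MM = 0.3  # Minimum excess over the standing record for acceptance.

def record_size(mm):
    # Sizes are recorded to the nearest 0.1 mm, as is standard in conchology.
    return round(mm, 1)

def classify(new_mm, standing_mm):
    # (For exact-margin comparisons, decimal arithmetic would be safer
    # than floats; plain floats suffice for this illustration.)
    new, standing = record_size(new_mm), record_size(standing_mm)
    if new >= standing + MARGIN_MM:
        return "accept"       # Replaces the standing record.
    if new == standing:
        return "file as tie"  # Kept on file, unlisted, for future use.
    return "reject"           # Smaller, or within the 0.3 mm margin.

print(classify(772.4, 772.0))  # accept
print(classify(772.0, 772.0))  # file as tie
print(classify(772.2, 772.0))  # reject
```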
The largest listed cephalopod "shell" is that of Argonauta argo, at 300.0 mm (11.81 in), though this is technically an eggcase rather than a true molluscan shell. Adult external shells down to 0.4 mm (0.016 in) are known (Ammonicera minortalis), as are fully-grown larval shells as small as 0.2 mm (0.0079 in) (Paedoclione doliiformis). But the smallest end-stage "shells" of all, broadly defined, are likely to be vestigial internal gastropod shells, which could be almost arbitrarily small, perhaps consisting of only a few molecules. Size verification For inclusion in the registry, size records must be verified either by a recognised second party ("a professional malacologist, a reputable shell dealer, or an advanced collector who is recognized as a specialist in the applicable family") or through photographic evidence (three images, showing the shell being measured with calipers and its dorsal and ventral aspects). Entries may be submitted by regular mail or e-mail; a submission form is included at the back of each print edition. These requirements mean that in some cases older or even current malacological literature may include size records which exceed those found in the registry. In the early years of the registry, shells were sometimes officially measured for world record size status at Conchologists of America conventions, as in 1999 when the measurements were carried out by senior author Kim Hutsell. History Background Beginning in the mid–20th century, several attempts were made to produce a list of the largest shell specimens. Perhaps the earliest of these was the Lost Operculum Club List of Champions, initiated in 1950 by John Q. Burch (1894–1974) of the Los Angeles–based Conchological Club of Southern California. This project was overseen by Bertram C. Draper (1904–2000) between 1966 and 1987, and bore four major print publications during this time. 
The list was limited in scope compared to later efforts, encompassing only marine species of the Eastern Pacific, from Alaska to Chile. Unlike later publications it notably included a small number of fossil specimens. The direct predecessor to the Registry of World Record Size Shells was World Size Records, compiled by renowned malacologists Robert J. L. Wagner (1905–1992) and R. Tucker Abbott (1919–1995). These record sizes originally appeared in 1964, in the first edition of Van Nostrand's Standard Catalog of Shells, not as a separate list but interspersed among other species-specific information that made up the bulk of the work. An updated list—now with a section unto itself and running to eight pages—was published as part of the book's second edition, in 1967. Though not stated as such, records from 1950 to 1959 were taken from lists in the "Minutes of the Conchological Club of Southern California" and included outdated information, including long-deceased owners. The next update appeared in the work's third edition, which was renamed Wagner and Abbott's Standard Catalog of Shells. Unlike previous editions, this third and final installment of the catalog was a ring binder with loose-leaf content, intended as a continually updated resource. To match the newly retitled work, the list's name was modified to Wagner and Abbott's World Size Records. In this final incarnation, the list appeared as a series of four supplements: the first two were loose-leaf publications that appeared in 1978 and 1982, and these were followed by hole-punched paperback titles in 1985 and 1990. The records were to be maintained "by a special committee of editors through which accurate measurements and correct identifications are verified by knowledgeable conchologists". 
The third supplement encompassed shells from 21 museums and more than 300 private collections, with the authors of the opinion that "many new records lurk in museums where scientists do not have the time or inclination to measure largest specimens". At the time, the American Museum of Natural History officially held the most record specimens, followed by the British Museum (Natural History) (London's Natural History Museum) and the Natural History Museum of Los Angeles County. The much-expanded fourth supplement incorporated many records from the final (1987) edition of the Lost Operculum Club List of Champions. It also lowered the minimum shell size to one inch (2.54 cm) from the previous 4.00 cm (except for Cypraea, which did not have a lower limit). Carole Hertz, long-time editor of The Festivus, noted that "a few" records were outdated upon publication as they listed deceased shell owners. Wagner died in 1992 and, though it was announced the following year that World Size Records would continue to be published, no further supplements were completed before Abbott's death in 1995. Registry of World Record Size Shells Following the deaths of Wagner and Abbott, Barbara Haviland continued to compile data for world records. In 1997 Kim C. Hutsell acquired the rights to World Size Records from Cynthia Abbott to continue the project as a stand-alone book; those rights were subsequently sold to Don L. Pisor. The first edition therefore incorporated many of the earlier entries, as well as additional data compiled for World Size Records by Haviland. The format of the new publication differed from World Size Records in several important ways. Significantly, the minimum size threshold was lowered from 2.5 centimetres (0.98 in) to 1 centimetre (0.39 in). In another change, the registry listed taxa alphabetically by species name within families, instead of within genera as previously.
This system allows for easy comparison between closely related genera, is more resilient to the frequent taxonomic changes that occur at the genus level, and gets around the issue that many shell dealers prefer to lump numerous genera for the sake of simplicity. R. Tucker Abbott had himself intended to use this arrangement in the fifth edition of his World Size Records, but died before he could see his plans through. Another difference was the presentation of sizes in millimetres to the nearest tenth instead of centimetres to the nearest hundredth, moving the publication in line with standard usage in conchology. Finally, references were added for each specimen to aid in identification. The opening paragraph of the first edition set out the project's goals and invited submissions: While the importance of maximum shell size and its role in the overall scheme of Conchology and Malacology is debatable, it remains one point of continual interest among collectors and researchers. The purpose of the Registry of World Record Size Shells is to provide a single publication containing a list of the largest known specimens for as many shelled molluscan species as possible in a format designed for ease of use. Undoubtedly, specimens may exist, hidden in museums, universities and private collections, which exceed some of the sizes listed herein. It is the sincere hope of the editors that information concerning such specimens be shared with others by submitting accurate and verifiable records to the Registry of World Record Size Shells. The largest shell specimens are often housed in museums, but such institutions can rarely commit significant resources towards mensuration, and most records pertaining to museum specimens were collated by volunteers. Kim C. Hutsell, Linda L. Hutsell and Don L. Pisor released the first three editions in 1997, 1999, and 2001. 
These were published by the Hutsells' company, Snail's Pace Productions, and distributed by Pisor's Marine Shells, both based in San Diego, California. The first edition went on sale at the 1997 Conchologists of America (COA) convention in Captiva Island, Florida. Originally called Hutsell & Pisor's Registry of World Record Size Shells, for the third edition the title was modified slightly to Hutsell, Hutsell and Pisor's Registry of World Record Size Shells (as given on the respective title pages). The text of the fourth edition was completed by Kim and Linda Hutsell and originally planned for release at the 2003 COA convention in Tacoma, Washington, but was greatly delayed and only came out in 2005. The fifth edition appeared three years later. With the departure of the first two authors, the titles of the fourth and fifth editions were shortened to Pisor's Registry of World Record Size Shells (as given on the title pages). These also marked a switch from Snail's Pace Productions to a new main publisher, ConchBooks of Hackenheim, Germany. They were compiled with the help of Conchology, Inc., with the company's founder, Guido T. Poppe, providing an introduction for the fifth edition. On 2 April 2008, the copyrights to the registry were transferred from Pisor to Jean-Pierre Barbier of Topseashells and Philippe Quiquandon of Shell's Passion. With Olivier Santini, Barbier and Quiquandon subsequently launched an official website where all registry listings can be accessed for a fee. The online database includes photographs of the listed specimens that have been gathered with the help of collectors, dealers, and institutions such as the Muséum national d'Histoire naturelle in Paris. Older records for which photographic evidence could not be obtained were removed from the database and replaced by verified specimens, even if these were smaller, to ensure that all specimens were measured and identified correctly. 
Barbier, Quiquandon and Santini moved the print title to an annual publication cycle, starting with the sixth edition in 2009, with Shell's Passion and Topseashells taking over as publishers. Under the new ownership the print edition was titled simply Registry of World Record Size Shells. The number of listings increased rapidly during the first few years, from just over 12,200 in 2009 to more than 17,000 in 2011. It was announced that the fifteenth print edition (2018) would be the last single-volume work, owing to the publication's increasingly unwieldy size; the first two-volume edition appeared the following year, spanning around 750 pages. The two volumes of the seventeenth edition (2020) weigh some 2.5 kg (5.5 lb). List of publications The Lost Operculum Club List of Champions appeared in four main editions and at least one supplement between 1969 and 1987. Prior to this the list had appeared in abbreviated form in publications such as The Echo. First edition (May 1969). B.C. Draper. Conchological Club of Southern California. 44 pp. Second edition (April 1973). B.C. Draper. Conchological Club of Southern California. 64 pp. OCLC 456652651 Supplement (1975). B.C. Draper. Conchological Club of Southern California. 13 pp. Third edition (May 1980). B.C. Draper. Conchological Club of Southern California. 32 pp. OCLC 143491787 Fourth edition (June 1987). B.C. Draper. Conchological Club of Southern California. 43 pp. OCLC 40531677 World Size Records originally appeared in the first and second editions of the Standard Catalog of Shells in 1964 and 1967, and then as four supplements to the third edition (one of which was titled a revision) between 1978 and 1990. A list of new entries—submitted for inclusion in World Size Records as of November 1987—appeared across the 1988 and 1989 issues of Hawaiian Shell News and also in American Conchologist. First edition (December 1964). [part of Van Nostrand's Standard Catalog of Shells, 1st edition; R.J.L. Wagner & R.T. 
Abbott (eds.). Van Nostrand. ix + 190 pp. OCLC 1306235.] Second edition (September 1967). [part of Van Nostrand's Standard Catalog of Shells, 2nd edition; R.J.L. Wagner & R.T. Abbott (eds.). Van Nostrand. 303 pp. OCLC 318015256.] Supplement 1 (September 1978). [part of Wagner and Abbott's Standard Catalog of Shells, 3rd edition with supplements (February 1978 onwards); R.J.L. Wagner & R.T. Abbott. American Malacologists. 700+ pp. ISBN 0915826038. OCLC 727857925.] Revision 1 (October 1982). R.J.L. Wagner & R.T. Abbott. American Malacologists. 25 pp. Supplement 3 (January 1985). R.J.L. Wagner & R.T. Abbott. American Malacologists. 30 pp. ~1,200 listings. [content actually dated February 1985] Supplement 4 (September 1990). R.J.L. Wagner & R.T. Abbott. American Malacologists. ii + 80 pp. OCLC 47902507. 2,318 listings. [content actually dated 27 April 1990] The Registry of World Record Size Shells has appeared in seventeen print editions since 1997. From the fifth (2008) edition onwards, new editions have been released on an annual basis. All editions are of the approximate dimensions 8.5 by 11 inches (22 cm × 28 cm) and have coil or comb binding, with heavy paper covers and a clear plastic sheet protecting the front cover. First edition (June 1997). K.C. Hutsell, L.L. Hutsell & D.L. Pisor. Snail's Pace Productions. ii + 101 pp. ISBN 0965901718. OCLC 40486364. 4,470+ listings. Second edition (June 1999). K.C. Hutsell, L.L. Hutsell & D.L. Pisor. Snail's Pace Productions. vii + 131 pp. ISBN 0965901726. OCLC 174637035. 6,100+ listings. Third edition (June 2001). K.C. Hutsell, L.L. Hutsell & D.L. Pisor. Snail's Pace Productions. vii + 158 pp. ISBN 0965901734. OCLC 183130294. 7,100+ listings. Fourth edition (March 2005). D.L. Pisor. ConchBooks. 171 pp. ISBN 0965901742. OCLC 76736724. 9,500+ listings. Fifth edition (March 2008). D.L. Pisor (introduction by G.T. Poppe). ConchBooks. 207 pp. ISBN 0615194753. OCLC 227336824. 11,500+ listings. Sixth edition (2009). J.-P. 
Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. 304 pp. ISBN 2746606836. OCLC 495193919. 12,200+ listings. Seventh edition (2010). J.-P. Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. Eighth edition (2011). J.-P. Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. Unpaginated. OCLC 800559734. 17,000+ listings. Ninth edition (2012). J.-P. Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. Unpaginated. 17,300+ listings. Tenth edition (2013). J.-P. Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. Unpaginated. OCLC 971890560. 18,000+ listings. Eleventh edition (2014). J.-P. Barbier, P. Quiquandon & O. Santini. Shell's Passion & Topseashells. Unpaginated. OCLC 971890566. 18,400+ listings. Twelfth edition (2015). P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. Unpaginated. OCLC 971890438. 19,600+ listings. Thirteenth edition (2016). P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. Unpaginated. OCLC 971890446. 20,800+ listings. Fourteenth edition (2017). P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. Unpaginated. OCLC 1002074760. 21,560+ listings. Fifteenth edition (2018). P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. Unpaginated. OCLC 1031380206. 22,600+ listings. Sixteenth edition [2 volumes] (2019). Tome 1: Abyssochrysidae–Modulidae; Tome 2: Montacutidae–Yoldiidae. P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. 749 pp. 23,500+ listings. Seventeenth edition [2 volumes] (2020). Tome 1: Abyssochrysidae–Modulidae; Tome 2: Montacutidae–Yoldiidae. P. Quiquandon, J.-P. Barbier & A. Brunella. Shell's Passion & Topseashells. c. 700 pp. 24,200+ listings. 
Reviews First edition The first edition of the Registry of World Record Size Shells was reviewed for American Conchologist by malacologist Gary Rosenberg of the Academy of Natural Sciences of Philadelphia and conchologist Gene Everson. Rosenberg spoke favourably of the changes made since World Size Records, namely the lowering of the minimum size for inclusion, listing sizes in millimetres, ordering species alphabetically within families irrespective of genus, and adding a reference field for identification purposes. He also welcomed the inclusion of separate entries for infraspecific taxa such as subspecies, varieties, and forms, noting that these "might someday prove to be full species, and maximum sizes might provide evidence as to their status". Rosenberg identified a minor inconsistency in the grouping of cephalopods (Argonautidae and Nautilidae) and scaphopods (Dentaliidae) with gastropods while listing bivalves separately, opining that "a single alphabetic sequence would be preferable". Rosenberg also found "an unusual number of typographical errors", which he attributed to a rush to have the work ready for the 1997 COA convention. Other issues identified by Rosenberg included the listing of synonyms (e.g. Oliva sericea and its junior synonym Oliva textilina) and different combinations of the same species (e.g. Ancilla lienardi and Eburna lienardii). While suggesting that errors of the second type would be easier to catch if author citations were included, Rosenberg conceded that this might not be practical due to space limitations. Rosenberg also noted entries where the cited location fell well outside the species's natural range (e.g. a supposed West African specimen of the East African Ancilla ventricosa). Comparing around a quarter of the size records from the first edition against shells in the collections of the Academy of Natural Sciences of Philadelphia, Rosenberg found that the latter had larger specimens in some 10% of cases. 
What Rosenberg considered "[o]f greatest concern", however, were discrepancies between the record sizes listed in the registry and larger specimens found in contemporary monographic works, and this even extended to type specimens in some cases. The problem seemed to be particularly pronounced in the Pleurotomariidae—a family highly prized by shell collectors—where nine out of sixteen species had larger shells listed in the standard work on the family, that of Anseeuw & Goto (1996). Similar issues were found with the record sizes of Terebridae. Rosenberg concluded: "The Registry would be much more authoritative if it included record sizes from the literature, and I recommend this be done in future editions." Gene Everson echoed Rosenberg in highlighting the changes from World Size Records as major improvements. In particular he praised the format and exhaustiveness of the publication, writing that, despite having almost double the entries of its predecessor, the registry was much easier to handle compared with the "heavy and bulky" Standard Catalog of Shells. He added that the authors' choice of coil binding made it easier to keep the registry open at the desired page. Everson questioned some of the publication choices for the reference field, such as the conspicuous omission of "the definitive work on New Zealand shells", New Zealand Mollusca by Arthur William Baden Powell. He continued: "Many other landmark works by highly respected malacologists are omitted, while articles in periodicals frequently written by amateurs, such as La Conchliglia and American Conchologist, are [not]. It would seem the included 49 references were just those used by the authors for this edition." He suggested that future editions should clearly set out what types of references are acceptable. Summarising, Everson wrote: This book is meant to be a useful, working and evolving tool and should not be nitpicked on details that are not relevant to its purpose. 
The authors are not producing a treatise on each family with hot-off-the-press reclassifications. I suspect they are following Vaught's classification which has been out for a few years, and is by no means perfect or up-to-date, but is readily available and inexpensive, and thus a good choice. But if so, this standard should be identified in the introduction, so that one knows where to find a shell. Also, typos are found in almost every book, and this is no exception, but they do not lessen the useful information, which is the main reason why we will buy this book. Two thumbs up! Later editions A review of the 11th edition criticised the lack of pagination and, consequently, of an index, as well as the lack of page headings indicating families. Notes References External links Official website
Wikipedia
The impedance analogy is a method of representing a mechanical system by an analogous electrical system. The advantage of doing this is that there is a large body of theory and analysis techniques concerning complex electrical systems, especially in the field of filters. By converting to an electrical representation, these tools in the electrical domain can be directly applied to a mechanical system without modification. A further advantage occurs in electromechanical systems: Converting the mechanical part of such a system into the electrical domain allows the entire system to be analysed as a unified whole. The mathematical behaviour of the simulated electrical system is identical to the mathematical behaviour of the represented mechanical system. Each element in the electrical domain has a corresponding element in the mechanical domain with an analogous constitutive equation. All laws of circuit analysis, such as Kirchhoff's circuit laws, that apply in the electrical domain also apply to the mechanical impedance analogy. The impedance analogy is one of the two main mechanical–electrical analogies used for representing mechanical systems in the electrical domain, the other being the mobility analogy. The roles of voltage and current are reversed in these two methods, and the electrical representations produced are the dual circuits of each other. The impedance analogy preserves the analogy between electrical impedance and mechanical impedance whereas the mobility analogy does not. On the other hand, the mobility analogy preserves the topology of the mechanical system when transferred to the electrical domain whereas the impedance analogy does not. Applications The impedance analogy is widely used to model the behaviour of mechanical filters. These are filters that are intended for use in an electronic circuit but work entirely by mechanical vibrational waves. 
Transducers are provided at the input and output of the filter to convert between the electrical and mechanical domains. Another very common use is in the field of audio equipment, such as loudspeakers. Loudspeakers consist of a transducer and mechanical moving parts. Acoustic waves themselves are waves of mechanical motion: of air molecules or some other fluid medium. A very early application of this type was to make significant improvements to the abysmal audio performance of phonographs. In 1929 Edward Norton designed the mechanical parts of a phonograph to behave as a maximally flat filter, thus anticipating the electronic Butterworth filter. Elements Before an electrical analogy can be developed for a mechanical system, it must first be described as an abstract mechanical network. The mechanical system is broken down into a number of ideal elements each of which can then be paired with an electrical analogue. The symbols used for these mechanical elements on network diagrams are shown in the following sections on each individual element. The mechanical analogies of lumped electrical elements are also lumped elements, that is, it is assumed that the mechanical component possessing the element is small enough that the time taken by mechanical waves to propagate from one end of the component to the other can be neglected. Analogies can also be developed for distributed elements such as transmission lines but the greatest benefits are with lumped-element circuits. Mechanical analogies are required for the three passive electrical elements, namely, resistance, inductance and capacitance. What these analogies are is determined by what mechanical property is chosen to represent "effort", the analogy of voltage, and the property chosen to represent "flow", the analogy of current. In the impedance analogy the effort variable is force and the flow variable is velocity. 
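The force–voltage, velocity–current correspondence can be made concrete with a short numerical sketch (Python, standard library only; the component values are invented for illustration). Using the element analogies developed in the sections that follow (damping to resistance, mass to inductance, stiffness to elastance), a damped mass–spring system and its analogous series RLC circuit have term-by-term identical impedance expressions:

```python
import cmath
import math

def electrical_impedance(omega, R, L, C):
    """Series RLC impedance: Z = R + jwL + 1/(jwC)."""
    return R + 1j * omega * L + 1 / (1j * omega * C)

def mechanical_impedance(omega, Rm, M, S):
    """Mechanical impedance in the impedance analogy:
    Z_m = F/u = R_m + jwM + S/(jw) (damping, mass, stiffness)."""
    return Rm + 1j * omega * M + S / (1j * omega)

# Illustrative (made-up) values: damping R_m, mass M, stiffness S.
Rm, M, S = 2.0, 0.5, 200.0          # N*s/m, kg, N/m
# The analogous electrical values: R = R_m, L = M, C = 1/S (compliance).
R, L, C = Rm, M, 1.0 / S

omega0 = math.sqrt(S / M)           # resonant angular frequency
for omega in (0.5 * omega0, omega0, 2.0 * omega0):
    Zm = mechanical_impedance(omega, Rm, M, S)
    Ze = electrical_impedance(omega, R, L, C)
    assert cmath.isclose(Zm, Ze)    # the two domains give identical numbers

# At resonance the mass and stiffness terms cancel; only damping remains.
print(abs(mechanical_impedance(omega0, Rm, M, S)))  # -> 2.0
```

This is only a sketch of the analogy in action, not a model of any particular device; the point is that circuit-analysis tools applied to the electrical form give answers valid for the mechanical system.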
Resistance The mechanical analogy of electrical resistance is the loss of energy of a moving system through such processes as friction. A mechanical component analogous to a resistor is a shock absorber and the property analogous to resistance is damping. A resistor is governed by the constitutive equation of Ohm's law, v = i R . {\displaystyle v=iR\,.} The analogous equation in the mechanical domain is, F = u R m , {\displaystyle F=uR_{\mathrm {m} }\,,} where F {\displaystyle F} is the applied force, u {\displaystyle u} is the velocity and R m {\displaystyle R_{\mathrm {m} }} is the mechanical resistance (damping). Electrical resistance represents the real part of electrical impedance. Likewise, mechanical resistance is the real part of mechanical impedance. Inductance The mechanical analogy of inductance in the impedance analogy is mass. A mechanical component analogous to an inductor is a large, rigid weight. An inductor is governed by the constitutive equation, v = L d i d t . {\displaystyle v=L{\frac {di}{dt}}\,.} The analogous equation in the mechanical domain is Newton's second law of motion, F = M d u d t , {\displaystyle F=M{\frac {du}{dt}}\,,} where M {\displaystyle M} is the mass. The impedance of an inductor is purely imaginary and is given by, Z = j ω L . {\displaystyle Z=j\omega L\,.} The analogous mechanical impedance is given by, Z m = j ω M , {\displaystyle Z_{\mathrm {m} }=j\omega M\,,} where ω {\displaystyle \omega } is the angular frequency. Capacitance The mechanical analogy of capacitance in the impedance analogy is compliance. It is more common in mechanics to discuss stiffness, the inverse of compliance. The analogy of stiffness in the electrical domain is the less commonly used elastance, the inverse of capacitance. A mechanical component analogous to a capacitor is a spring. A capacitor is governed by the constitutive equation, v = D ∫ i d t , {\displaystyle v=D\int idt\,,} where D {\displaystyle D} is the elastance. The analogous equation in the mechanical domain is a form of Hooke's law, F = S ∫ u d t , {\displaystyle F=S\int udt\,,} where S {\displaystyle S} is the stiffness. The impedance of a capacitor is purely imaginary and is given by, Z = D j ω . {\displaystyle Z={\frac {D}{j\omega }}\,.} The analogous mechanical impedance is given by, Z m = S j ω . 
{\displaystyle Z_{\mathrm {m} }={\frac {S}{j\omega }}\,.} Alternatively, one can write, Z m = 1 j ω C m , {\displaystyle Z_{\mathrm {m} }={\frac {1}{j\omega C_{\mathrm {m} }}}\,,} where C m = 1 / S {\displaystyle C_{m}=1/S} is mechanical compliance. This is more directly analogous to the electrical expression when capacitance is used. Resonator A mechanical resonator consists of both a mass element and a compliance element. Mechanical resonators are analogous to electrical LC circuits consisting of inductance and capacitance. Real mechanical components unavoidably have both mass and compliance, so it is a practical proposition to make resonators as a single component. In fact, it is more difficult to make a pure mass or pure compliance as a single component. A spring can be made with a certain compliance and mass minimized, or a mass can be made with compliance minimized, but neither can be eliminated altogether. Mechanical resonators are a key component of mechanical filters. Generators Analogues exist for the active electrical elements of the voltage source and the current source (generators). The mechanical analogue in the impedance analogy of the constant voltage generator is the constant force generator. The mechanical analogue of the constant current generator is the constant velocity generator. An example of a constant force generator is the constant-force spring. This is analogous to a real voltage source, such as a battery, which remains near constant-voltage with load provided that the load resistance is much higher than the battery internal resistance. An example of a practical constant velocity generator is a lightly loaded powerful machine, such as a motor, driving a belt. Transducers Electromechanical systems require transducers to convert between the electrical and mechanical domains. They are analogous to two-port networks and like those can be described by a pair of simultaneous equations and four arbitrary parameters. 
There are numerous possible representations, but the form most applicable to the impedance analogy has the arbitrary parameters in units of impedance. In matrix form (with the electrical side taken as port 1) this representation is, [ v F ] = [ z 11 z 12 z 21 z 22 ] [ i u ] . {\displaystyle {\begin{bmatrix}v\\F\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end{bmatrix}}{\begin{bmatrix}i\\u\end{bmatrix}}\,.} The element z 22 {\displaystyle z_{22}\,} is the open circuit mechanical impedance, that is, the impedance presented by the mechanical side of the transducer when no current (open circuit) is entering the electrical side. The element z 11 {\displaystyle z_{11}\,} , conversely, is the clamped electrical impedance, that is, the impedance presented to the electrical side when the mechanical side is clamped and prevented from moving (velocity is zero). The remaining two elements, z 21 {\displaystyle z_{21}\,} and z 12 , {\displaystyle z_{12}\,,} describe the transducer forward and reverse transfer functions respectively. They are both analogous to transfer impedances and are hybrid ratios of an electrical and mechanical quantity. Transformers The mechanical analogy of a transformer is a simple machine such as a pulley or a lever. The force applied to the load can be greater or less than the input force depending on whether the mechanical advantage of the machine is greater or less than unity respectively. Mechanical advantage is analogous to transformer turns ratio in the impedance analogy. A mechanical advantage greater than unity is analogous to a step-up transformer and less than unity is analogous to a step-down transformer. Power and energy equations Examples Simple resonant circuit The figure shows a mechanical arrangement of a platform of mass M {\displaystyle M} that is suspended above the substrate by a spring of stiffness S {\displaystyle S} and a damper of resistance R . 
{\displaystyle R\,.} The impedance analogy equivalent circuit is shown to the right of this arrangement and consists of a series resonant circuit. This system has a resonant frequency and may have a natural frequency of oscillation if not too heavily damped. Model of the human ear The circuit diagram shows an impedance analogy model of the human ear. The ear canal section is followed by a transformer representing the eardrum. The eardrum is the transducer between the acoustic waves in air in the ear canal and the mechanical vibrations in the bones of the middle ear. At the cochlea there is another change of medium from mechanical vibrations to the fluid filling the cochlea. This example thus demonstrates the power of electrical analogies in bringing together three domains (acoustic, mechanical and fluid flow) into a single unified whole. If the nerve impulses flowing to the brain had also been included in the model, then the electrical domain would have made four domains encompassed in the model. The cochlea portion of the circuit uses a finite element analysis of the continuous transmission line of the cochlear duct. An ideal representation of such a structure would use infinitesimal elements, and there would thus be an infinite number of them. In this model the cochlea is divided into 350 sections and each section is modelled using a small number of lumped elements. Advantages and disadvantages The principal advantage of the impedance analogy over its alternative, the mobility analogy, is that it maintains the analogy between electrical and mechanical impedance. That is, a mechanical impedance is represented as an electrical impedance and a mechanical resistance is represented as an electrical resistance in the electrical equivalent circuit. It is also natural to think of force as analogous to voltage (generator voltages are often called electromotive force) and velocity as analogous to current. 
It is this basic analogy that leads to the analogy between electrical and mechanical impedance. The principal disadvantage of the impedance analogy is that it does not preserve the topology of the mechanical system. Elements that are in series in the mechanical system are in parallel in the electrical equivalent circuit and vice versa. The impedance matrix representation of a transducer transforms force in the mechanical domain into current in the electrical domain. Likewise, velocity in the mechanical domain is transformed into voltage in the electrical domain. A two-port device that transforms a voltage into an analogous quantity can be represented as a simple transformer. A device that transforms a voltage into an analogue of the dual property of voltage (that is, current, whose analogue is velocity) is represented as a gyrator. Since force is analogous to voltage, not current, this may seem like a disadvantage on the face of it. However, many practical transducers, especially at audio frequencies, work by electromagnetic induction and are governed by just such a relationship. For instance, the force on a current-carrying conductor is given by, F = B I l , {\displaystyle F=BIl\,,} where B {\displaystyle B} is the magnetic flux density, I {\displaystyle I} is the current and l {\displaystyle l} is the length of the conductor. History The impedance analogy is sometimes called the Maxwell analogy after James Clerk Maxwell (1831–1879) who used mechanical analogies to explain his ideas of electromagnetic fields. However, the term impedance was not coined until 1886 (by Oliver Heaviside), the idea of complex impedance was introduced by Arthur E. Kennelly in 1893, and the concept of impedance was not extended into the mechanical domain until 1920 by Kennelly and Arthur Gordon Webster. Henri Poincaré in 1907 was the first to describe a transducer as a pair of linear algebraic equations relating electrical variables (voltage and current) to mechanical variables (force and velocity). Wegel, in 1921, was the first to express these equations in terms of mechanical impedance as well as electrical impedance. 
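The two-port impedance representation described in the Transducers section can likewise be exercised numerically. The sketch below (Python; all z-parameter and load values are invented for illustration, and the load sign convention F = −Z_L·u is an assumption, not taken from any particular source) terminates the mechanical port with a load impedance and solves the pair of simultaneous equations for the port variables:

```python
# Two-port transducer in impedance form (electrical side = port 1):
#   v = z11*i + z12*u
#   F = z21*i + z22*u
# Terminating the mechanical port with a load Z_L (assumed convention
# F = -Z_L*u) lets us solve for the velocity u and input voltage v.

def drive_transducer(z11, z12, z21, z22, ZL, i):
    u = -z21 * i / (z22 + ZL)   # velocity delivered into the load
    v = z11 * i + z12 * u       # voltage seen at the electrical port
    F = z21 * i + z22 * u       # force at the mechanical port
    return v, F, u

# Invented illustrative values (complex, consistent units assumed).
z11 = 8.0 + 2.0j     # clamped electrical impedance (u = 0)
z22 = 5.0 + 1.0j     # open-circuit mechanical impedance (i = 0)
z12 = z21 = 3.0      # transfer impedances (reciprocal transducer assumed)
ZL  = 4.0 - 1.0j     # mechanical load impedance

v, F, u = drive_transducer(z11, z12, z21, z22, ZL, i=1.0)
assert abs(F + ZL * u) < 1e-12   # the assumed load relation holds
```

The two diagonal entries behave exactly as the text describes: setting i = 0 makes F/u equal z22 (open-circuit mechanical impedance), and clamping the mechanical side (u = 0) makes v/i equal z11.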
References Bibliography Beranek, Leo Leroy; Mellow, Tim J., Acoustics: Sound Fields and Transducers, Academic Press, 2012 ISBN 0123914213. Busch-Vishniac, Ilene J., Electromechanical Sensors and Actuators, Springer Science & Business Media, 1999 ISBN 038798495X. Carr, Joseph J., RF Components and Circuits, Newnes, 2002 ISBN 0-7506-4844-9. Darlington, S. "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", IEEE Transactions on Circuits and Systems, vol. 31, no. 1, pp. 3–13, 1984. Eargle, John, Loudspeaker Handbook, Kluwer Academic Publishers, 2003 ISBN 1402075847. Fukazawa, Tatsuya; Tanaka, Yasuo, "Evoked otoacoustic emissions in a cochlear model", pp. 191–196 in Hohmann, D. (ed), ECoG, OAE and Intraoperative Monitoring: Proceedings of the First International Conference, Würzburg, Germany, September 20–24, 1992, Kugler Publications, 1993 ISBN 9062990975. Harrison, Henry C. "Acoustic device", U.S. patent 1,730,425, filed 11 October 1927 (and in Germany 21 October 1923), issued 8 October 1929. Hunt, Frederick V., Electroacoustics: the Analysis of Transduction, and its Historical Background, Harvard University Press, 1954 OCLC 2042530. Jackson, Roger G., Novel Sensors and Sensing, CRC Press, 2004 ISBN 1420033808. Kleiner, Mendel, Electroacoustics, CRC Press, 2013 ISBN 1439836183. Martinsen, Orjan G.; Grimnes, Sverre, Bioimpedance and Bioelectricity Basics, Academic Press, 2011 ISBN 0080568807. Paik, H. J., "Superconduction accelerometers, gravitational-wave transducers, and gravity gradiometers", pp. 569–598, in Weinstock, Harold, SQUID Sensors: Fundamentals, Fabrication, and Applications, Springer Science & Business Media, 1996 ISBN 0792343506. Pierce, Allan D., Acoustics: an Introduction to its Physical Principles and Applications, Acoustical Society of America 1989 ISBN 0883186128. 
Pipes, Louis A.; Harvill, Lawrence R., Applied Mathematics for Engineers and Physicists, Courier Dover Publications, 2014 ISBN 0486779513. Poincaré, H., "Study of telephonic reception", Eclairage Electrique, vol. 50, pp. 221–372, 1907. Stephens, Raymond William Barrow; Bate, A. E., Acoustics and vibrational physics, Edward Arnold, 1966 OCLC 912579. Talbot-Smith, Michael, Audio Engineer's Reference Book, Taylor & Francis, 2013 ISBN 1136119736. Taylor, John; Huang, Qiuting, CRC Handbook of Electrical Filters, CRC Press, 1997 ISBN 0849389518. Wegel, R. L., "Theory of magneto-mechanical systems as applied to telephone receivers and similar structures", Journal of the American Institute of Electrical Engineers, vol. 40, pp. 791–802, 1921.
The Self-Service Semantic Suite (S4) provides on-demand access to text mining and linked open data technology in the cloud. The S4 stack is based on enterprise-grade technology from Ontotext, including the company's RDF engine (GraphDB, formerly OWLIM) and text mining solutions that have been applied in some of the largest enterprises in the world. History It was launched in the summer of 2014. Overview S4 offers a suite of text analytics and linked data management services in the cloud. It can be used to analyse news, social media and biomedical documents, to query Linked Data knowledge graphs, and to create private RDF knowledge graphs using GraphDB. S4 is offered on a low-cost, on-demand, pay-as-you-go basis, making it accessible to companies of any size. The RDF triplestore included with S4 is GraphDB, which Ontotext describes as scalable, high-performance and able to perform inferencing at scale, giving users improved query speed, data availability and accuracy of analysis. With GraphDB it is possible to store, manage and search semantic triples extracted by S4 text mining, or to create private knowledge graphs integrating structured and unstructured data with facts from public LOD datasets. Usability All functionality of S4 can be accessed via RESTful services. Users are provided with a Getting Started guide, and there is a complete set of documentation and sample code in Java, C#, Python and JavaScript. Events Presentation at the LT-Accelerate Conference, Brussels, 4–5 December 2014.
Eurisko (Gr., I discover) is a discovery system written by Douglas Lenat in RLL-1, a representation language itself written in the Lisp programming language. A sequel to Automated Mathematician, it consists of heuristics, i.e. rules of thumb, including heuristics describing how to use and change its own heuristics. Lenat was frustrated by Automated Mathematician's constraint to a single domain and so developed Eurisko; his frustration with the effort of encoding domain knowledge for Eurisko led to Lenat's subsequent development of Cyc. Lenat envisioned ultimately coupling the Cyc knowledge base with the Eurisko discovery engine. History Development commenced at Carnegie Mellon in 1976 and continued at Stanford University in 1978 when Lenat returned to teach. "For the first five years, nothing good came out of it", Lenat said. But when the implementation was changed to a frame language based representation he called RLL (Representation Language Language), heuristic creation and modification became much simpler. Eurisko was then applied to a number of domains with surprising success, including VLSI chip design. Previously, Lenat had worked at the automatic-programming research group at the Stanford Artificial Intelligence Laboratory, and was coauthor of a report in 1974 on "Program-Understanding Systems". Inspired by work on Eurisko, Lenat proposed that mutations may be highly non-random, since the DNA can code for (meta-)heuristic rules by which likely useful mutations can be made, allowing increasingly rapid evolution over time. Lenat and Eurisko gained notoriety by submitting the winning fleet (a large number of stationary, lightly-armored ships with many small weapons) to the United States Traveller TCS national championship in 1981, forcing extensive changes to the game's rules. The fleet had 96 ships, 75 of which were of the "Eurisko class". The detailed composition was published. 
However, Eurisko won again in 1982 when the program discovered that the rules permitted the program to destroy its own ships, permitting it to continue to use much the same strategy. Tournament officials announced that if Eurisko won another championship the competition would be abolished; Lenat retired Eurisko from the game. The Traveller TCS wins brought Lenat to the attention of DARPA, which has funded much of his subsequent work. A screenshot of Eurisko in action is printed in a 1984 Scientific American article. Lenat was known for keeping his source code confidential during his lifetime. In 2023, it was reported that source code for both Eurisko and the previous Automated Mathematician system had been found in public code archives. The following year, Eurisko code was shown running under Medley Interlisp. In popular culture In the first-season The X-Files episode "Ghost in the Machine", Eurisko is the name of a fictional software company responsible for the episode's "monster of the week", facilities management software known as "Central Operating System", or "COS". COS (described in the episode as an "adaptive network") is shown to be capable of learning when its designer arrives at Eurisko headquarters and is surprised to find that COS has given itself the ability to speak. The designer is forced to create a virus to destroy COS after COS commits a series of murders in an apparent effort to prevent its own destruction. Lenat is mentioned and Eurisko is discussed at the end of Richard Feynman's Computer Heuristics Lecture as part of the Idiosyncratic Thinking Workshop Series. Lenat and Eurisko are mentioned in the 2019 James Rollins novel Crucible that deals with artificial intelligence and artificial general intelligence. Notes References Understanding Computers: Artificial Intelligence. Amsterdam: Time-Life Books. 1986. pp. 81–84. ISBN 978-0-7054-0915-5. Lenat, Douglas; Brown, J.S. (1984). "Why AM and EURISKO appear to work" (PDF). Artificial Intelligence. 
23 (3): 269–294. CiteSeerX 10.1.1.565.8830. doi:10.1016/0004-3702(84)90016-X. Haase, Kenneth W (February 1990). "Invention and exploration in discovery". Massachusetts Institute of Technology. Archived from the original (PDF) on 2005-01-22. Retrieved 2008-12-13. External links Eurisko on GitHub
In the area of abstract algebra known as group theory, the diameter of a finite group is a measure of its complexity. Consider a finite group (G, ∘) and any set of generators S. Define D_S to be the graph diameter of the Cayley graph Λ = (G, S). Then the diameter of (G, ∘) is the largest value of D_S taken over all generating sets S. For instance, for a finite cyclic group of order s, the Cayley graph for a generating set with one generator is an s-vertex cycle graph. The diameter of this graph, and of the group, is ⌊s/2⌋. It is conjectured that, for all non-abelian finite simple groups G, diam(G) ≤ (log |G|)^O(1). Many partial results are known but the full conjecture remains open.
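The quantity D_S for a given generating set can be computed directly by breadth-first search over the Cayley graph. A minimal sketch in Python (the function name is illustrative), using the cyclic-group example above; it relies on the fact that Cayley graphs are vertex-transitive, so the eccentricity of the identity element equals the graph diameter:

```python
from collections import deque

def cayley_diameter(s, gens):
    """Graph diameter D_S of the Cayley graph of the cyclic group Z_s
    with generating set `gens` (assumed closed under inverses).

    Cayley graphs are vertex-transitive, so a single BFS from the
    identity element (0) suffices.
    """
    dist = {0: 0}
    queue = deque([0])
    while queue:
        g = queue.popleft()
        for a in gens:
            h = (g + a) % s
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return max(dist.values())

# One generator (plus its inverse) yields the s-vertex cycle graph,
# whose diameter is floor(s/2), as stated above.
for s in (7, 8, 101):
    assert cayley_diameter(s, [1, -1]) == s // 2
```

Note that the diameter of the group itself is the maximum of D_S over all generating sets; this sketch only evaluates one given set S.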
dplyr is an R package whose functions are designed to enable dataframe (a spreadsheet-like data structure) manipulation in an intuitive, user-friendly way. It is one of the core packages of the popular tidyverse set of packages in the R programming language. Data analysts typically use dplyr to transform existing datasets into a format better suited for a particular type of analysis or data visualization. For instance, someone seeking to analyze a large dataset may wish to view only a smaller subset of the data. Alternatively, a user may wish to rearrange the data in order to see the rows ranked by some numerical value, or even by a combination of values from the original dataset. Functions within the dplyr package allow a user to perform such tasks. dplyr was launched in 2014. On the dplyr web page, the package is described as "a grammar of data manipulation, providing a consistent set of verbs that help you solve the most common data manipulation challenges." The five core verbs While dplyr actually includes several dozen functions that enable various forms of data manipulation, the package features five primary verbs or actions: filter(), which is used to extract rows from a dataframe, based on conditions specified by a user; select(), which is used to subset a dataframe by its columns; arrange(), which is used to sort rows in a dataframe based on attributes held by particular columns; mutate(), which is used to create new variables, by altering and/or combining values from existing columns; and summarize(), also spelled summarise(), which is used to collapse values from a dataframe into a single summary. Additional functions In addition to its five main verbs, dplyr also includes several other functions that enable exploration and manipulation of dataframes.
Included among these are: count(), which is used to tally the number of observations that share some particular value or categorical attribute; rename(), which enables a user to alter the column names of variables, often to improve ease of use and intuitive understanding of a dataset; slice_max(), which returns the subset of rows with the largest values of some particular variable; and slice_min(), which returns the subset of rows with the smallest values of some particular variable. Built-in datasets The dplyr package comes with five datasets: band_instruments, band_instruments2, band_members, starwars, and storms. Copyright & license The copyright to dplyr is held by Posit PBC, formerly RStudio PBC. dplyr was originally released under a GPL license, but in 2022, Posit changed the license terms for the package to the "more permissive" MIT License. The main difference between the two types of license is that the MIT license allows subsequent re-use of code within proprietary software, whereas a GPL license does not.
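In practice, the verbs described above are typically chained together with the pipe operator. A minimal sketch in R, using the built-in starwars dataset (a hedged illustration: the derived bmi column is invented for the example, not part of the dataset):

```r
library(dplyr)

starwars %>%
  filter(species == "Human") %>%                 # extract rows by condition
  select(name, height, mass) %>%                 # subset columns
  mutate(bmi = mass / (height / 100)^2) %>%      # derive a new variable
  arrange(desc(bmi)) %>%                         # sort rows by it
  summarize(mean_bmi = mean(bmi, na.rm = TRUE))  # collapse to one summary
```

Each verb takes a dataframe as its first argument and returns a new dataframe, which is what makes this kind of chaining possible.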
Path dependence is a concept in the social sciences, referring to processes where past events or decisions constrain later events or decisions. It can be used to refer to outcomes at a single point in time or to long-run equilibria of a process. Path dependence has been used to describe institutions, technical standards, patterns of economic or social development, organizational behavior, and more. In common usage, the phrase can imply two types of claims. The first is the broad concept that "history matters", often articulated to challenge explanations that pay insufficient attention to historical factors. This claim can be formulated simply as "the future development of an economic system is affected by the path it has traced out in the past" or "particular events in the past can have crucial effects in the future." The second is a more specific claim about how past events or decisions affect future events or decisions in significant or disproportionate ways, through mechanisms such as increasing returns, positive feedback effects, or other mechanisms. Commercial examples Videocassette recording systems The videotape format war is a key example of path dependence. Three mechanisms independent of product quality could explain how VHS achieved dominance over Betamax from a negligible early adoption lead: A network effect: videocassette rental stores observed more VHS rentals and stocked up on VHS tapes, leading renters to buy VHS players and rent more VHS tapes, until there was complete vendor lock-in. A VCR manufacturer bandwagon effect of switching to VHS production because manufacturers expected it to win the standards battle. Sony, the original developer of Betamax, did not let pornography companies license its technology for mass production, which meant that nearly all pornographic motion pictures released on video used the VHS format. An alternative analysis is that VHS was better adapted to market demands (e.g. having a longer recording time).
In this interpretation, path dependence had little to do with VHS's success, which would have occurred even if Betamax had established an early lead. QWERTY keyboard The QWERTY keyboard is a prominent example of path dependence due to its widespread emergence and persistence. QWERTY has persisted over time despite potentially more efficient keyboard arrangements being developed – QWERTY vs. Dvorak is an example of this. However, as it is not clear whether other keyboard layouts really are better, there is still debate about whether this is a good example of path dependence. Railway track gauges The standard gauge of railway tracks is another example of path dependence, which explains how a seemingly insignificant event or circumstance can change the choice of technology over the long run even when contemporary know-how shows such a choice to be inefficient. More than half the world's railway gauges are 4 feet 8+1⁄2 inches (143.5 cm), known as standard gauge, despite the consensus among engineers that wider gauges offer increased performance and speed. The path to the adoption of the standard gauge began in the late 1820s when George Stephenson, a British engineer, began work on the Liverpool and Manchester Railway. His experience with primitive coal tramways resulted in this gauge width being copied by the Liverpool and Manchester Railway, then the rest of Great Britain, and finally by railroads in Europe and North America. There are tradeoffs involved in the choice of rail gauge between the cost of constructing a line (which rises with wider gauges) and various performance metrics, including maximum speed and a low center of gravity (desirable especially in double-stack rail transport).
While attempts with the Brunel gauge, a significantly broader gauge, failed, the widespread use of the Iberian, Russian and Indian gauges, all of which are broader than Stephenson's choice, shows that there is nothing inherent to the 1435 mm gauge that led to its global success. Economics Path dependence theory was originally developed by economists to explain technology adoption processes and industry evolution. The theoretical ideas have had a strong influence on evolutionary economics. A common expression of the concept is the claim that predictable amplifications of small differences are a disproportionate cause of later circumstances, and, in the "strong" form, that this historical hang-over is inefficient. There are many models and empirical cases where economic processes do not progress steadily toward some pre-determined and unique equilibrium, but rather the nature of any equilibrium achieved depends partly on the process of getting there. Therefore, the outcome of a path-dependent process will often not converge towards a unique equilibrium, but will instead reach one of several equilibria (sometimes known as absorbing states). This dynamic vision of economic evolution is very different from the tradition of neo-classical economics, which in its simplest form assumed that only a single outcome could possibly be reached, regardless of initial conditions or transitory events. With path dependence, both the starting point and 'accidental' events (noise) can have significant effects on the ultimate outcome. In each of the following examples it is possible to identify some random events that disrupted the ongoing course, with irreversible consequences. Economic development In economic development, it is said (initially by Paul David in 1985) that a standard that is first-to-market can become entrenched (like the QWERTY layout in typewriters still used in computer keyboards).
He called this "path dependence", and said that inferior standards can persist simply because of the legacy they have built up. That QWERTY vs. Dvorak is an example of this phenomenon has been re-asserted, questioned, and continues to be argued. Economic debate continues on the significance of path dependence in determining how standards form. Economists from Alfred Marshall to Paul Krugman have noted that similar businesses tend to congregate geographically ("agglomerate"); opening near similar companies attracts workers with skills in that business, which draws in more businesses seeking experienced employees. There may have been no reason to prefer one place to another before the industry developed, but as it concentrates geographically, participants elsewhere are at a disadvantage, and will tend to move into the hub, further increasing its relative efficiency. This network effect follows a statistical power law in the idealized case, though negative feedback can occur (through rising local costs). Buyers often cluster around sellers, and related businesses frequently form business clusters, so a concentration of producers (initially formed by accident and agglomeration) can trigger the emergence of many dependent businesses in the same region. In the 1980s, the US dollar exchange rate appreciated, lowering the world price of tradable goods below the cost of production in many (previously successful) U.S. manufacturers. Some of the factories that closed as a result could later have been operated at a (cash-flow) profit after dollar depreciation, but reopening would have been too expensive. This is an example of hysteresis, switching barriers, and irreversibility. If the economy follows adaptive expectations, future inflation is partly determined by past experience with inflation, since experience determines expected inflation and this is a major determinant of realized inflation.
A transitory high rate of unemployment during a recession can lead to a permanently higher unemployment rate because of the skills loss (or skill obsolescence) by the unemployed, along with a deterioration of work attitudes. In other words, cyclical unemployment may generate structural unemployment. This structural hysteresis model of the labour market differs from the prediction of a "natural" unemployment rate or NAIRU, around which 'cyclical' unemployment is said to move without influencing the "natural" rate itself. Types of path dependence Liebowitz and Margolis distinguish types of path dependence; some do not imply inefficiencies and do not challenge the policy implications of neoclassical economics. Only "third-degree" path dependence—where switching gains are high, but transition is impractical—involves such a challenge. They argue that such situations should be rare for theoretical reasons, and that no real-world cases of private locked-in inefficiencies exist. Vergne and Durand qualify this critique by specifying the conditions under which path dependence theory can be tested empirically. Technically, a path-dependent stochastic process has an asymptotic distribution that "evolves as a consequence (function of) the process's own history". This is also known as a non-ergodic stochastic process. In The Theory of the Growth of the Firm (1959), Edith Penrose analyzed how the growth of a firm both organically and through acquisition is strongly influenced by the experience of its managers and the history of the firm's development. 
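The non-ergodic, increasing-returns dynamics discussed above are often illustrated with a Pólya urn process, in which each adoption of a technology raises the probability of its being adopted again. A minimal simulation sketch (the model is the standard urn scheme associated with W. Brian Arthur's work; the function name and parameters are illustrative):

```python
import random

def polya_urn(steps, seed):
    """Increasing-returns adoption process (a standard Polya urn):
    each adoption of technology A or B makes that same technology
    proportionally more likely to be adopted next time."""
    rng = random.Random(seed)
    a, b = 1, 1                       # one initial adopter of each technology
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)                # long-run market share of technology A

# Different random histories lock in very different long-run shares,
# even though the two technologies start out perfectly symmetric.
shares = [round(polya_urn(10_000, seed), 3) for seed in range(6)]
print(shares)
```

The point of the exercise is that early, essentially accidental adoptions determine which of many possible equilibria the process settles into: the final share depends on the path, not just the starting conditions.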
Conditions which give rise to path dependence Path dependence may arise or be hindered by a number of important factors, including: durability of capital equipment; technical interrelatedness; increasing returns; and dynamic increasing returns to adoption. Social sciences Institutions Recent methodological work in comparative politics and sociology has adapted the concept of path dependence into analyses of political and social phenomena. Path dependence has primarily been used in comparative-historical analyses of the development and persistence of institutions, whether they be social, political, or cultural. There are arguably two types of path-dependent processes: One is the critical juncture framework, most notably utilized by Ruth and David Collier in political science. In the critical juncture, antecedent conditions allow contingent choices that set a specific trajectory of institutional development and consolidation that is difficult to reverse. As in economics, the generic drivers are: lock-in, positive feedback, increasing returns (the more a choice is made, the bigger its benefits), and self-reinforcement (which creates forces sustaining the decision). The other path-dependent process deals with reactive sequences, where a primary event sets off a temporally linked and causally tight deterministic chain of events that is nearly uninterruptible. These reactive sequences have been used to link such things as the assassination of Martin Luther King Jr. with welfare expansion, or the Industrial Revolution in England with the development of the steam engine. The critical juncture framework has been used to explain the development and persistence of welfare states, labor incorporation in Latin America, and the variations in economic development between countries, among other things. Scholars such as Kathleen Thelen caution that the historical determinism in path-dependent frameworks is subject to constant disruption from institutional evolution.
Kathleen Thelen has criticized the application of QWERTY keyboard-style mechanisms to politics. She argues that such applications to politics are both too contingent and too deterministic. Too contingent in the sense that the initial choice is open and flukey, and too deterministic in the sense that once the initial choice is made, an unavoidable path inevitably forms from which there is no return. Based on the theory of path dependence, Monika Stachowiak-Kudła and Janusz Kudła show that legal tradition affects the administrative court’s rulings in Poland. It also complements the two other reasons for diversified verdicts: the experience of the judges and courts (specialization) and preference (bias) for one of the parties. This effect is persistent even if the verdicts are controversial and result in serious consequences for a party and when the penalty paid by the complainant is perceived as excessive but fulfilling the strict rules of law. The German tradition of law favours legal certainty, while the courts from the former Russian and Austrian partitions are more likely to refer to the principle of justice. Interestingly, the institutional factors can be identified almost one hundred years after the end of the partition period and the unification of formal and material law, corroborating the existence of path dependence. Organizations Paul Pierson's influential attempt to rigorously formalize path dependence within political science, draws partly on ideas from economics. Herman Schwartz has questioned those efforts, arguing that forces analogous to those identified in the economic literature are not pervasive in the political realm, where the strategic exercise of power gives rise to, and transforms, institutions. 
Especially in sociology and organizational theory, a distinct yet closely related concept to path dependence is imprinting, which captures how initial environmental conditions leave a persistent mark (or imprint) on organizations and organizational collectives (such as industries and communities), thus continuing to shape organizational behaviours and outcomes in the long run, even as external environmental conditions change. Individuals and groups The path dependence of emergent strategy has been observed in behavioral experiments with individuals and groups. Other examples A general type of path dependence is a typographical vestige. In typography, for example, some customs persist although the reason for their existence no longer applies; for example, the placement of the period inside a quotation in U.S. spelling. In metal type, pieces of terminal punctuation, such as the comma and period, are comparatively small and delicate (as they must be x-height for proper kerning). Placing the full-height quotation mark on the outside protected the smaller cast metal sort from damage if the word needed to be moved around within or between lines. This would be done even if the period did not belong to the text being quoted. Evolution is considered by some to be path-dependent and historically contingent: mutations occurring in the past have had long-term effects on current life forms, some of which may no longer be adaptive to current conditions. For instance, there is a controversy about whether the panda's thumb is a leftover trait or not. In the computer and software markets, legacy systems indicate path dependence: customers' needs in the present market often include the ability to read data or run programs from past generations of products. Thus, for instance, a customer may need not merely the best available word processor, but rather the best available word processor that can read Microsoft Word files.
Such limitations in compatibility contribute to lock-in, and more subtly, to design compromises for independently developed products, if they attempt to be compatible. Also see embrace, extend and extinguish. In socioeconomic systems, commercial fisheries' harvest rates and conservation consequences are found to be path dependent as predicted by the interaction between slow institutional adaptation, fast ecological dynamics, and diminishing returns. In physics and mathematics, a non-holonomic system is a physical system in which the states depend on the physical paths taken. See also Critical juncture theory Imprinting (organizational theory) Innovation butterfly Historicism Network effect Opportunity cost Ratchet effect Technological determinism Tyranny of small decisions Notes References Arrow, Kenneth J. (1963), 2nd ed. Social Choice and Individual Values. Yale University Press, New Haven, pp. 119–120 (constitutional transitivity as alternative to path dependence on the status quo). Arthur, W. Brian (1994), Increasing Returns and Path Dependence in the Economy. University of Michigan Press. Boas, Taylor C (2007). "Conceptualizing Continuity and Change: The Composite-Standard Model of Path Dependence" (PDF). Journal of Theoretical Politics. 19 (1): 33–54. CiteSeerX 10.1.1.466.8147. doi:10.1177/0951629807071016. S2CID 11323786. Archived from the original (PDF) on 2008-09-05. Retrieved 2007-10-20. Collier, Ruth Berins; Collier, David (1991). Shaping the Political Arena: Critical Junctures, the Labor Movement, and Regime Dynamics in Latin America. Princeton: Princeton University Press. ISBN 9780268077105. Retrieved 6 July 2018. David, Paul A. (June 2000). "Path dependence, its critics and the quest for 'historical economics'" (PDF). Archived from the original (PDF) on 2014-03-24., in P. Garrouste and S. Ioannides (eds), Evolution and Path Dependence in Economic Ideas: Past and Present, Edward Elgar Publishing, Cheltenham, England. 
Hargreaves Heap, Shawn (1980), "Choosing the Wrong 'Natural' Rate: Accelerating Inflation or Decelerating Employment and Growth?" Economic Journal 90(359) (Sept): 611–20 (ISSN 0013-0133) Mahoney, James (2000). "Path Dependence in Historical Sociology". Theory and Society. 29 (4): 507–548. doi:10.1023/A:1007113830879. S2CID 145564738. Stephen E. Margolis and S.J. Liebowitz (2000), "Path Dependence, Lock-In, and History" Nelson, R. and S. Winter (1982), An evolutionary theory of economic change, Harvard University Press. Page, Scott E. (January 2006). "Path dependence". Quarterly Journal of Political Science. 1 (1): 88. doi:10.1561/100.00000006. Pdf. Penrose, E. T., (1959), The Theory of the Growth of the Firm, New York: Wiley. Pierson, Paul (2000). "Increasing Returns, Path Dependence, and the Study of Politics". American Political Science Review, June. _____ (2004), Politics in Time: Politics in Time: History, Institutions, and Social Analysis, Princeton University Press. Puffert, Douglas J. (1999), "Path Dependence in Economic History" (based on the entry "Pfadabhängigkeit in der Wirtschaftsgeschichte", in the Handbuch zur evolutorischen Ökonomik) _____ (2001), "Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge" _____ (2009), Tracks across continents, paths through history: the economic dynamics of standardization in railway gauge, University of Chicago Press. Schwartz, Herman. "Down the Wrong Path: Path Dependence, Increasing Returns, and Historical Institutionalism"., undated mimeo Shalizi, Cosma (2001), "QWERTY, Lock-in, and Path Dependence", unpublished website, with extensive references Vergne, J. P. and R. Durand (2010), "The missing link between the theory and empirics of path dependence", Journal of Management Studies, 47(4):736–59, with extensive references
Measurement of wow and flutter is carried out on audio tape machines, cassette recorders and players, and other analog recording and reproduction devices with rotary components (e.g. movie projectors, turntables for vinyl records, etc.). This measurement quantifies the amount of 'frequency wobble' (caused by speed fluctuations) present in subjectively valid terms. Turntables tend to suffer mainly slow wow. In digital systems, which are locked to crystal oscillators, variations in clock timing are referred to as wander or jitter, depending on speed. While the terms wow and flutter used to be used separately (for wobbles at a rate below and above 4 Hz respectively), they tend to be combined now that universal standards exist for measurement which take both into account simultaneously. Listeners find flutter most objectionable when the actual frequency of wobble is 4 Hz, and less audible above and below this rate. This fact forms the basis for the weighting curve shown here. The weighting curve is misleading, inasmuch as it presumes inaudibility of flutters above 200 Hz, when actually faster flutters are quite damaging to the sound. A flutter of 200 Hz at a level of −50 dB will create 0.3% intermodulation distortion, which would be considered unacceptable in a preamp or amplifier. Measurement techniques Measuring instruments use a frequency discriminator to translate the pitch variations of a recorded tone into a flutter waveform, which is then passed through the weighting filter, before being full-wave rectified to produce a slowly varying signal which drives a meter or recording device. The maximum meter indication should be read as the flutter value. The following standards all specify the weighting filter shown above, together with a special slow quasi-peak full-wave rectifier designed to register any brief speed excursions. As with many audio standards, these are identical derivatives of a common specification.
IEC 386 DIN45507 BS4847 CCIR 409-3 AES6-2008 Measurement is usually made on a 3.15 kHz (or sometimes 3 kHz) tone, a frequency chosen because it is high enough to give good resolution, but low enough not to be affected by drop-outs and high-frequency losses. Ideally, flutter should be measured using a pre-recorded tone free from flutter. Record-replay flutter will then be around twice as high as pre-recorded, because worst-case variations will add during recording and playback. When a recording is played back on the same machine it was made on, a very slow change from low to high flutter will often be observed, because any cyclic flutter caused by capstan rotation may go from adding to cancelling as the tape slips slightly out of synchronism. A good technique is to stop the tape from time to time and start it again. This will often result in different readings as the correlation between record and playback flutter shifts. On well-maintained, precise machines, it may be difficult to procure a reference tape with tighter tolerances than the machine itself. Therefore, a record-playback test using the stop-start technique can be, for practical purposes, the best that can be accomplished. Audible effects Wow and flutter are particularly audible on music with oboe, string, guitar, flute, brass, or piano solo playing. While wow is perceived clearly as pitch variation, flutter can alter the sound of the music differently, making it sound ‘cracked’ or ‘ugly’. A recorded 1 kHz tone with a small amount of flutter (around 0.1%) can sound fine in a ‘dead’ listening room, but in a reverberant room constant fluctuations will often be clearly heard. These are the result of the current tone ‘beating’ with its echo, which, since it originated slightly earlier, has a slightly different pitch. What is heard is quite pronounced amplitude variation, which the ear is very sensitive to. This probably explains why piano notes sound ‘cracked’.
Because they start loud and then gradually tail off, piano notes leave an echo that can be as loud as the dying note that it beats with, resulting in a level that varies from complete cancellation to double-amplitude at a rate of a few Hz: instead of a smoothly dying note we hear a heavily modulated one. Oboe notes may be particularly affected because of their harmonic structure. Another way that flutter manifests is as a truncation of reverb tails. This may be due to the persistence of memory with regard to spatial location based on early reflections and comparison of Doppler effects over time. The auditory system may become distracted by pitch shifts in the reverberation of a signal that should be of fixed and solid pitch. The term "flutter echo" is used in relation to a particular form of reverberation that flutters in amplitude. It has no direct connection with flutter as described here, though the mechanism of modulation through cancellation may have something in common with that described above. Equipment performance Professional tape machines can achieve a weighted flutter figure of around 0.02%, which is considered inaudible. High end cassette decks struggle to manage around 0.08% weighted, which is still audible under some conditions. Digital music players such as CD, DAT, or MP3 use electronic clocks to govern the speed of replay. The circuits used to control these frequencies do permit a very small amount of flutter (usually termed jitter), but the level is far below that which the human ear can discern. The linear sound track on VCR video recorders has much higher wow and flutter than the VHS-HiFi high fidelity track which is contained within the video signal. Absolute speed Absolute speed error causes a change in pitch, and it is useful to know that a semitone in music represents a 6% frequency change. 
This is because Western music uses the ‘equal temperament scale’, based on a constant geometric ratio between twelve notes; and the twelfth root of 2 is 1.05946. Anyone with a good musical ear can detect a pitch change of around 1%, though an error of up to 3% is likely to go unnoticed, except by those few with ‘absolute pitch’. Most ‘movie’ films shown on European television are sped up by 4.166% because they were shot at 24 frames per second, but are scanned at 25 frames per second to match the PAL standard of 25 frame/s 50 field/s. This causes a noticeable increase in pitch on voices, which often brings surprised comment from the actors themselves when they hear their performance on video. It can also frustrate attempts to play along with film music, which is closer to a semitone sharp than its intended pitch. Recently, digital pitch correction has been applied to some films, which corrects the pitch without altering lip-sync, by adding in extra cycles of sound. This has to be regarded as a form of distortion, as there is no way to change the pitch of a sound without also slowing it down unless the waveform itself is altered. Scrape flutter High-frequency flutter, above 100 Hz, can sometimes result from tape vibrating as it passes over a head (or other non-rotating element in the tape path), as a result of rapidly interacting stretching in the tape and stick-slip at the head. This is termed 'scrape flutter'. It adds a roughness to the sound that is not typical of wow and flutter, and damping devices or heavy rollers are sometimes employed on professional tape machines to reduce or prevent it. Scrape flutter measurement requires special techniques, often using a 10 kHz tone. See also Audio quality measurement Noise measurement Headroom Rumble measurement ITU-R 468 noise weighting A-weighting Weighting filter Equal-loudness contour Fletcher–Munson curves Flutter (electronics and communication) Wow (recording)
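The discriminator-based measurement described under Measurement techniques above can be illustrated with a small simulation: synthesize a 3.15 kHz tone carrying 0.1% frequency wobble at 4 Hz, recover the per-cycle frequency from interpolated zero-crossing times, and report the peak deviation. This is a hedged sketch only: it applies no weighting filter or quasi-peak rectifier, and all parameter values are illustrative.

```python
import math

FS = 96_000        # sample rate, Hz (illustrative)
F0 = 3_150.0       # nominal test-tone frequency, Hz
F_MOD = 4.0        # flutter rate, Hz (the peak of the audibility curve)
DEPTH = 0.001      # 0.1 % peak frequency deviation
N_SAMPLES = FS     # one second of signal

# Synthesize the wobbling tone. Instantaneous frequency is
# f(t) = F0 * (1 + DEPTH * sin(2*pi*F_MOD*t)); integrating gives the phase.
signal = []
for i in range(N_SAMPLES):
    t = i / FS
    phase = (2 * math.pi * F0 * t
             - (F0 * DEPTH / F_MOD) * math.cos(2 * math.pi * F_MOD * t))
    signal.append(math.sin(phase))

# Act as a crude frequency discriminator: time the rising zero crossings,
# refining each with linear interpolation between adjacent samples.
crossings = []
for i in range(N_SAMPLES - 1):
    if signal[i] <= 0.0 < signal[i + 1]:
        frac = -signal[i] / (signal[i + 1] - signal[i])
        crossings.append((i + frac) / FS)

# Average frequency over blocks of N cycles to suppress per-crossing
# timing noise; the 4 Hz wobble is essentially constant over each block.
N = 10
freqs = [N / (crossings[k + N] - crossings[k])
         for k in range(0, len(crossings) - N, N)]

# Peak frequency deviation as a fraction of nominal: the flutter figure.
flutter = (max(freqs) - min(freqs)) / (2 * F0)
print(f"estimated peak flutter: {flutter * 100:.3f} %")  # ~0.100 %
```

A real meter would additionally pass the demodulated flutter waveform through the standard weighting filter and quasi-peak rectifier before reading off the value.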
Wikipedia
The Association for Computing Machinery Special Interest Group on University and College Computing Services Hall of Fame Award was established by the Association for Computing Machinery to recognize individuals whose specific contributions have had a positive impact on the organization and therefore on the professional careers of the members and their institutions. Recipients See also List of computer science awards
Govinda Bhaṭṭathiri (also known as Govinda Bhattathiri of Thalakkulam or Thalkkulathur) (c. 1237 – 1295) was an Indian astrologer and astronomer who flourished in Kerala during the thirteenth century CE. Govinda Bhaṭṭatiri was born in the Nambudiri family known by the name Thalakkulathur in the village of Alathiyur, Tirur in Kerala. He was traditionally considered to be the progenitor of the Pazhur Kaniyar family of astrologers. He is an important figure in the Kerala astrological traditions. Works Govinda wrote Nauka, a commentary on Brihat Jataka. Earlier scholars also assigned to him the authorship of Daśādhyāyī, another commentary on Brihat Jataka written in the same narrative style. Recent research suggests that Nauka was the original commentary written by Govinda and that Daśādhyāyī was an abridged version rearranged by another person in the 15th century. The authorship of the Daśādhyāyī was assigned to Govinda Bhattathiri in the Ithihyamala written by Sankunni during the late 19th century. Daśādhyāyī is considered to be the most important of the 70 known commentaries on this text. Govinda wrote another important work in astrology titled Muhūrttaratnaṃ. Paramesvara (c. 1380–1460), an astronomer of the Kerala school of astronomy and mathematics known for the introduction of the Dṛggaṇita system of astronomical computations, composed an extensive commentary on this work. In this commentary Paramesvara indicated that he was a grandson of a disciple of the author of Muhūrttaratnaṃ. See also List of astronomers and mathematicians of the Kerala school
In plasma physics, the Vlasov equation is a differential equation describing the time evolution of the distribution function of a collisionless plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. The Vlasov equation, combined with the Landau kinetic equation, describes collisional plasma. Difficulties of the standard kinetic approach First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics: Theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma. Theory of pair collisions is formally not applicable to Coulomb interaction due to the divergence of the kinetic terms. Theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma. Vlasov suggests that these difficulties originate from the long-range character of the Coulomb interaction. 
He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context), in generalized coordinates: d d t f ( r , p , t ) = 0 , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f(\mathbf {r} ,\mathbf {p} ,t)=0,} written explicitly as a PDE: ∂ f ∂ t + d r d t ⋅ ∂ f ∂ r + d p d t ⋅ ∂ f ∂ p = 0 , {\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}\cdot {\frac {\partial f}{\partial \mathbf {p} }}=0,} and adapts it to the case of a plasma, leading to the systems of equations shown below. Here f is a general distribution function of particles with momentum p at coordinates r and given time t. Note that the term d p d t {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}} is the force F acting on the particle. The Vlasov–Maxwell system of equations (Gaussian units) Instead of a collision-based kinetic description for the interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} for electrons and (positive) plasma ions. The distribution function f α ( r , p , t ) {\displaystyle f_{\alpha }(\mathbf {r} ,\mathbf {p} ,t)} for species α describes the number of particles of species α having approximately the momentum p {\displaystyle \mathbf {p} } near the position r {\displaystyle \mathbf {r} } at time t. 
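The equation d f/d t = 0 above states that f is constant along particle trajectories (the characteristics). In the force-free case the characteristics are straight lines x(t) = x₀ + v t, so the exact solution is f(x, v, t) = f₀(x − v t, v). A numerical sketch of this (the Gaussian initial condition and grid are illustrative choices, not from the text):

```python
import numpy as np

def f0(x, v):
    # Arbitrary smooth initial distribution (illustrative choice)
    return np.exp(-x**2 - v**2)

x = np.linspace(-6.0, 6.0, 121)
v = np.linspace(-3.0, 3.0, 61)
X, V = np.meshgrid(x, v, indexing="ij")
dx, dv = x[1] - x[0], v[1] - v[0]

t = 0.5
# Trace each characteristic back to t = 0: f(x, v, t) = f0(x - v*t, v)
f_t = f0(X - V * t, V)

# f is conserved along trajectories, so the total particle number
# integral of f over (x, v) is unchanged by the free-streaming evolution.
mass0 = f0(X, V).sum() * dx * dv
mass_t = f_t.sum() * dx * dv
assert np.isclose(mass0, mass_t, rtol=1e-6)
```

The same back-tracing idea, with characteristics bent by the self-consistent force, underlies semi-Lagrangian Vlasov solvers.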
Instead of the Boltzmann equation, the following system of equations was proposed for the description of the charged components of plasma (electrons and positive ions): ∂ f e ∂ t + v e ⋅ ∇ f e − e ( E + v e c × B ) ⋅ ∂ f e ∂ p = 0 ∂ f i ∂ t + v i ⋅ ∇ f i + Z i e ( E + v i c × B ) ⋅ ∂ f i ∂ p = 0 {\displaystyle {\begin{aligned}{\frac {\partial f_{e}}{\partial t}}+\mathbf {v} _{e}\cdot \nabla f_{e}-\;\;e\left(\mathbf {E} +{\frac {\mathbf {v} _{e}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{e}}{\partial \mathbf {p} }}&=0\\{\frac {\partial f_{i}}{\partial t}}+\mathbf {v} _{i}\cdot \nabla f_{i}+Z_{i}e\left(\mathbf {E} +{\frac {\mathbf {v} _{i}}{c}}\times \mathbf {B} \right)\cdot {\frac {\partial f_{i}}{\partial \mathbf {p} }}&=0\end{aligned}}} ∇ × B = 4 π c j + 1 c ∂ E ∂ t , ∇ ⋅ B = 0 , ∇ × E = − 1 c ∂ B ∂ t , ∇ ⋅ E = 4 π ρ , {\displaystyle {\begin{aligned}\nabla \times \mathbf {B} &={\frac {4\pi }{c}}\mathbf {j} +{\frac {1}{c}}{\frac {\partial \mathbf {E} }{\partial t}},&\nabla \cdot \mathbf {B} &=0,\\\nabla \times \mathbf {E} &=-{\frac {1}{c}}{\frac {\partial \mathbf {B} }{\partial t}},&\nabla \cdot \mathbf {E} &=4\pi \rho ,\end{aligned}}} ρ = e ∫ ( Z i f i − f e ) d 3 p , j = e ∫ ( Z i f i v i − f e v e ) d 3 p , v α = p / m α 1 + p 2 / ( m α c ) 2 {\displaystyle {\begin{aligned}\rho &=e\int \left(Z_{i}f_{i}-f_{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {j} &=e\int \left(Z_{i}f_{i}\mathbf {v} _{i}-f_{e}\mathbf {v} _{e}\right)\mathrm {d} ^{3}\mathbf {p} ,\\\mathbf {v} _{\alpha }&={\frac {\mathbf {p} /m_{\alpha }}{\sqrt {1+p^{2}/\left(m_{\alpha }c\right)^{2}}}}\end{aligned}}} Here e is the elementary charge ( e > 0 {\displaystyle e>0} ), c is the speed of light, Zi e is the charge of the ions, mi is the mass of the ion, E ( r , t ) {\displaystyle \mathbf {E} (\mathbf {r} ,t)} and B ( r , t ) {\displaystyle \mathbf {B} (\mathbf {r} ,t)} represent the collective self-consistent electromagnetic field created at the point r {\displaystyle \mathbf {r} } at time t 
by all plasma particles. The essential difference of this system of equations from equations for particles in an external electromagnetic field is that the self-consistent electromagnetic field depends in a complex way on the distribution functions of electrons and ions f e ( r , p , t ) {\displaystyle f_{e}(\mathbf {r} ,\mathbf {p} ,t)} and f i ( r , p , t ) {\displaystyle f_{i}(\mathbf {r} ,\mathbf {p} ,t)} . The Vlasov–Poisson equation The Vlasov–Poisson equations are an approximation of the Vlasov–Maxwell equations in the non-relativistic zero-magnetic field limit: ∂ f α ∂ t + v α ⋅ ∂ f α ∂ x + q α E m α ⋅ ∂ f α ∂ v = 0 , {\displaystyle {\frac {\partial f_{\alpha }}{\partial t}}+\mathbf {v} _{\alpha }\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {x} }}+{\frac {q_{\alpha }\mathbf {E} }{m_{\alpha }}}\cdot {\frac {\partial f_{\alpha }}{\partial \mathbf {v} }}=0,} and Poisson's equation for the self-consistent electric field: ∇ 2 ϕ + ρ ε = 0. {\displaystyle \nabla ^{2}\phi +{\frac {\rho }{\varepsilon }}=0.} Here qα is the particle's electric charge, mα is the particle's mass, E ( x , t ) {\displaystyle \mathbf {E} (\mathbf {x} ,t)} is the self-consistent electric field, ϕ ( x , t ) {\displaystyle \phi (\mathbf {x} ,t)} the self-consistent electric potential, ρ is the electric charge density, and ε {\displaystyle \varepsilon } is the electric permittivity. The Vlasov–Poisson equations are used to describe various phenomena in plasma, in particular Landau damping and the distributions in a double layer plasma, where they are necessarily strongly non-Maxwellian, and therefore inaccessible to fluid models. Moment equations In fluid descriptions of plasmas (see plasma modeling and magnetohydrodynamics (MHD)) one does not consider the velocity distribution. This is achieved by replacing f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} with plasma moments such as number density n, flow velocity u and pressure p. 
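Stepping back to the Vlasov–Poisson system above: in practice the electric field is recovered from the charge density at every time step by solving Poisson's equation. A minimal periodic 1D spectral solve of ∇²φ + ρ/ε = 0 (the grid, normalized units ε = 1, and the test density are my own illustrative choices), checked against an analytic case:

```python
import numpy as np

eps = 1.0                                   # permittivity (normalized units)
N, L = 256, 2 * np.pi
x = np.arange(N) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers 0, ±1, ±2, ...

rho = np.cos(x)                             # test charge density, zero mean

# In Fourier space, phi'' = -rho/eps becomes -k^2 phi_k = -rho_k/eps,
# so phi_k = rho_k / (eps * k^2) for k != 0 (the k = 0 mode is a gauge choice).
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
nonzero = k != 0
phi_k[nonzero] = rho_k[nonzero] / (eps * k[nonzero] ** 2)

E = np.fft.ifft(-1j * k * phi_k).real       # E = -d(phi)/dx

# Analytic check: rho = cos(x) gives phi = cos(x) and E = sin(x)
assert np.allclose(E, np.sin(x), atol=1e-12)
```

A full Vlasov–Poisson solver would alternate this field solve with an advection step for each f_α.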
The quantities n, u, and p are called plasma moments because the n-th moment of f {\displaystyle f} can be found by integrating v n f {\displaystyle v^{n}f} over velocity. These variables are only functions of position and time, which means that some information is lost. In multifluid theory, the different particle species are treated as different fluids with different pressures, densities and flow velocities. The equations governing the plasma moments are called the moment or fluid equations. The two most commonly used moment equations are presented below (in SI units). Deriving the moment equations from the Vlasov equation requires no assumptions about the distribution function. Continuity equation The continuity equation describes how the density changes with time. It can be found by integration of the Vlasov equation over the entire velocity space. ∫ d f d t d 3 v = ∫ ( ∂ f ∂ t + ( v ⋅ ∇ r ) f + ( a ⋅ ∇ v ) f ) d 3 v = 0 {\displaystyle \int {\frac {\mathrm {d} f}{\mathrm {d} t}}\mathrm {d} ^{3}v=\int \left({\frac {\partial f}{\partial t}}+(\mathbf {v} \cdot \nabla _{r})f+(\mathbf {a} \cdot \nabla _{v})f\right)\mathrm {d} ^{3}v=0} After some calculations, one ends up with ∂ n ∂ t + ∇ ⋅ ( n u ) = 0. 
{\displaystyle {\frac {\partial n}{\partial t}}+\nabla \cdot (n\mathbf {u} )=0.} The number density n, and the momentum density nu, are zeroth and first order moments: n = ∫ f d 3 v {\displaystyle n=\int f\,\mathrm {d^{3}} v} n u = ∫ v f d 3 v {\displaystyle n\mathbf {u} =\int \mathbf {v} f\,\mathrm {d} ^{3}v} Momentum equation The rate of change of momentum of a particle is given by the Lorentz equation: m d v d t = q ( E + v × B ) {\displaystyle m{\frac {\mathrm {d} \mathbf {v} }{\mathrm {d} t}}=q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )} By using this equation and the Vlasov Equation, the momentum equation for each fluid becomes m n D D t u = − ∇ ⋅ P + q n E + q n u × B , {\displaystyle mn{\frac {\mathrm {D} }{\mathrm {D} t}}\mathbf {u} =-\nabla \cdot {\mathcal {P}}+qn\mathbf {E} +qn\mathbf {u} \times \mathbf {B} ,} where P {\displaystyle {\mathcal {P}}} is the pressure tensor. The material derivative is D D t = ∂ ∂ t + u ⋅ ∇ . {\displaystyle {\frac {\mathrm {D} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla .} The pressure tensor is defined as the particle mass times the covariance matrix of the velocity: p i j = m ∫ ( v i − u i ) ( v j − u j ) f d 3 v . {\displaystyle p_{ij}=m\int (v_{i}-u_{i})(v_{j}-u_{j})f\mathrm {d} ^{3}v.} The frozen-in approximation As for ideal MHD, the plasma can be considered as tied to the magnetic field lines when certain conditions are fulfilled. One often says that the magnetic field lines are frozen into the plasma. The frozen-in conditions can be derived from Vlasov equation. We introduce the scales T, L, and V for time, distance and speed respectively. They represent magnitudes of the different parameters which give large changes in f {\displaystyle f} . By large we mean that ∂ f ∂ t T ∼ f | ∂ f ∂ r | L ∼ f | ∂ f ∂ v | V ∼ f . 
{\displaystyle {\frac {\partial f}{\partial t}}T\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {r} }}\right|L\sim f\quad \left|{\frac {\partial f}{\partial \mathbf {v} }}\right|V\sim f.} We then write t ′ = t T , r ′ = r L , v ′ = v V . {\displaystyle t'={\frac {t}{T}},\quad \mathbf {r} '={\frac {\mathbf {r} }{L}},\quad \mathbf {v} '={\frac {\mathbf {v} }{V}}.} The Vlasov equation can now be written 1 T ∂ f ∂ t ′ + V L v ′ ⋅ ∂ f ∂ r ′ + q m V ( E + V v ′ × B ) ⋅ ∂ f ∂ v ′ = 0. {\displaystyle {\frac {1}{T}}{\frac {\partial f}{\partial t'}}+{\frac {V}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+{\frac {q}{mV}}(\mathbf {E} +V\mathbf {v} '\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0.} So far no approximations have been made. To proceed, we set V = R ω g {\displaystyle V=R\omega _{g}} , where ω g = q B / m {\displaystyle \omega _{g}=qB/m} is the gyro frequency and R is the gyroradius. By dividing by ωg, we get 1 ω g T ∂ f ∂ t ′ + R L v ′ ⋅ ∂ f ∂ r ′ + ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ = 0 {\displaystyle {\frac {1}{\omega _{g}T}}{\frac {\partial f}{\partial t'}}+{\frac {R}{L}}\mathbf {v} '\cdot {\frac {\partial f}{\partial \mathbf {r} '}}+\left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}=0} If 1 / ω g ≪ T {\displaystyle 1/\omega _{g}\ll T} and R ≪ L {\displaystyle R\ll L} , the first two terms will be much less than f {\displaystyle f} since ∂ f / ∂ t ′ ∼ f , v ′ ≲ 1 {\displaystyle \partial f/\partial t'\sim f,v'\lesssim 1} and ∂ f / ∂ r ′ ∼ f {\displaystyle \partial f/\partial \mathbf {r} '\sim f} due to the definitions of T, L, and V above. 
Since the last term is of the order of f {\displaystyle f} , we can neglect the first two terms and write ( E V B + v ′ × B B ) ⋅ ∂ f ∂ v ′ ≈ 0 ⇒ ( E + v × B ) ⋅ ∂ f ∂ v ≈ 0 {\displaystyle \left({\frac {\mathbf {E} }{VB}}+\mathbf {v} '\times {\frac {\mathbf {B} }{B}}\right)\cdot {\frac {\partial f}{\partial \mathbf {v} '}}\approx 0\Rightarrow (\mathbf {E} +\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} }}\approx 0} This equation can be decomposed into a field-aligned and a perpendicular part: E ∥ ∂ f ∂ v ∥ + ( E ⊥ + v × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\mathbf {E} _{\perp }+\mathbf {v} \times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} The next step is to write v = v 0 + Δ v {\displaystyle \mathbf {v} =\mathbf {v} _{0}+\Delta \mathbf {v} } , where v 0 × B = − E ⊥ {\displaystyle \mathbf {v} _{0}\times \mathbf {B} =-\mathbf {E} _{\perp }} It will soon be clear why this is done. With this substitution, we get E ∥ ∂ f ∂ v ∥ + ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle \mathbf {E} _{\parallel }{\frac {\partial f}{\partial \mathbf {v} _{\parallel }}}+(\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} If the parallel electric field is small, ( Δ v ⊥ × B ) ⋅ ∂ f ∂ v ⊥ ≈ 0 {\displaystyle (\Delta \mathbf {v} _{\perp }\times \mathbf {B} )\cdot {\frac {\partial f}{\partial \mathbf {v} _{\perp }}}\approx 0} This equation means that the distribution is gyrotropic. The mean velocity of a gyrotropic distribution is zero. Hence, v 0 {\displaystyle \mathbf {v} _{0}} is identical to the mean velocity, u, and we have E + u × B ≈ 0 {\displaystyle \mathbf {E} +\mathbf {u} \times \mathbf {B} \approx 0} To summarize, the gyro period and the gyro radius must be much smaller than the typical times and lengths which give large changes in the distribution function. 
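The substitution v₀ × B = −E⊥ used above is solved by the E×B drift velocity v₀ = (E × B)/B², which is exactly the u satisfying the frozen-in relation E + u × B ≈ 0. A quick numerical check with illustrative field values (not from the text):

```python
import numpy as np

B = np.array([0.0, 0.0, 2.0])        # magnetic field along z (illustrative)
E_perp = np.array([1.0, 0.5, 0.0])   # perpendicular electric field (E.B = 0)

# E x B drift: the velocity perpendicular to B that satisfies v0 x B = -E_perp
v0 = np.cross(E_perp, B) / np.dot(B, B)

assert np.allclose(np.cross(v0, B), -E_perp)       # v0 x B = -E_perp
assert np.allclose(E_perp + np.cross(v0, B), 0.0)  # frozen-in: E + u x B = 0
assert np.isclose(np.dot(v0, B), 0.0)              # drift is perpendicular to B
```

The identity follows from the BAC-CAB rule: (E × B) × B = −E B² when E · B = 0.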
The gyro radius is often estimated by replacing V with the thermal velocity or the Alfvén velocity. In the latter case R is often called the inertial length. The frozen-in conditions must be evaluated for each particle species separately. Because electrons have a much smaller gyro period and gyro radius than ions, the frozen-in conditions will more often be satisfied for electrons. See also Fokker–Planck equation References Further reading Vlasov, A. A. (1961). "Many-Particle Theory and Its Application to Plasma". New York. Bibcode:1961temc.book.....V.
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory (AI Lab). Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research. Research activities CSAIL's research activities are organized around a number of semi-autonomous research groups, each of which is headed by one or more professors or research scientists. These groups are divided into seven general areas of research: Artificial intelligence Computational biology Graphics and vision Language and learning Theory of computation Robotics Systems (includes computer architecture, databases, distributed systems, networks and networked systems, operating systems, programming methodology, and software engineering, among others) History Computing research at MIT began with Vannevar Bush's research into a differential analyzer and Claude Shannon's electronic Boolean algebra in the 1930s, the wartime MIT Radiation Laboratory, the post-war Project Whirlwind and Research Laboratory of Electronics (RLE), and MIT Lincoln Laboratory's SAGE in the early 1950s. At MIT, research in the field of artificial intelligence began in the late 1950s. Project MAC On July 1, 1963, Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) was launched with a $2 million grant from the Defense Advanced Research Projects Agency (DARPA). Project MAC's original director was Robert Fano of MIT's Research Laboratory of Electronics (RLE). 
Fano decided to call MAC a "project" rather than a "laboratory" for reasons of internal MIT politics – if MAC had been called a laboratory, then it would have been more difficult to raid other MIT departments for research staff. The program manager responsible for the DARPA grant was J. C. R. Licklider, who had previously been at MIT conducting research in RLE, and would later succeed Fano as director of Project MAC. Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Its contemporaries included Project Genie at Berkeley, the Stanford Artificial Intelligence Laboratory, and (somewhat later) University of Southern California's (USC's) Information Sciences Institute. An "AI Group" including Marvin Minsky (the director), John McCarthy (inventor of Lisp), and a talented community of computer programmers was incorporated into Project MAC. They were interested principally in the problems of vision, mechanical motion and manipulation, and language, which they viewed as the keys to more intelligent machines. In the 1960s and 1970s the AI Group developed a time-sharing operating system called the Incompatible Timesharing System (ITS), which ran on PDP-6 and later PDP-10 computers. The early Project MAC community included Fano, Minsky, Licklider, Fernando J. Corbató, and a community of computer programmers and enthusiasts among others who drew their inspiration from former colleague John McCarthy. These founders envisioned the creation of a computer utility whose computational power would be as reliable as an electric utility. To this end, Corbató brought the first computer time-sharing system, the Compatible Time-Sharing System (CTSS), with him from the MIT Computation Center, using the DARPA funding to purchase an IBM 7094 for research use. 
One of the early focuses of Project MAC would be the development of a successor to CTSS, Multics, which was to be the first high-availability computer system, developed as part of an industry consortium including General Electric and Bell Laboratories. In 1966, Scientific American featured Project MAC in its September thematic issue devoted to computer science, which was later published in book form. At the time, the system was described as having approximately 100 TTY terminals, mostly on campus but with a few in private homes. Only 30 users could be logged in at the same time. The project enlisted students in various classes to use the terminals simultaneously in problem solving, simulations, and multi-terminal communications as tests for the multi-access computing software being developed. AI Lab and LCS In the late 1960s, Minsky's artificial intelligence group was seeking more space, and was unable to get satisfaction from project director Licklider. Minsky found that although Project MAC as a single entity could not get the additional space he wanted, he could split off to form his own laboratory and then be entitled to more office space. As a result, the MIT AI Lab was formed in 1970, and many of Minsky's AI colleagues left Project MAC to join him in the new laboratory, while most of the remaining members went on to form the Laboratory for Computer Science. Talented programmers such as Richard Stallman, who used TECO to develop EMACS, flourished in the AI Lab during this time. Those researchers who did not join the smaller AI Lab formed the Laboratory for Computer Science and continued their research into operating systems, programming languages, distributed systems, and the theory of computation. Two professors, Hal Abelson and Gerald Jay Sussman, chose to remain neutral; their group was referred to variously as Switzerland and Project MAC for the next 30 years. 
Among much else, the AI Lab led to the invention of Lisp machines and their attempted commercialization by two companies in the 1980s: Symbolics and Lisp Machines Inc. This divided the AI Lab into "camps", which resulted in many of its talented programmers being hired away. The incident inspired Richard Stallman's later work on the GNU Project. "Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was." ... "That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge". CSAIL On the fortieth anniversary of Project MAC's establishment, July 1, 2003, LCS was merged with the AI Lab to form the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL. This merger created the largest laboratory (over 600 personnel) on the MIT campus and was regarded as a reuniting of the diversified elements of Project MAC. In 2018, CSAIL launched a five-year collaboration program with iFlytek, a company sanctioned the following year for allegedly using its technology for surveillance and human rights abuses in Xinjiang. In October 2019, MIT announced that it would review its partnerships with sanctioned firms such as iFlytek and SenseTime. In April 2020, the agreement with iFlytek was terminated. CSAIL moved from the School of Engineering to the newly formed Schwarzman College of Computing by February 2020. Offices From 1963 to 2004, Project MAC, LCS, the AI Lab, and CSAIL had their offices at 545 Technology Square, taking over more and more floors of the building over the years. In 2004, CSAIL moved to the new Ray and Maria Stata Center, which was built specifically to house it and other departments. Outreach activities The IMARA group (from the Swahili word for "power") sponsors a variety of outreach programs that bridge the global digital divide. 
Its aim is to find and implement long-term, sustainable solutions which will increase the availability of educational technology and resources to domestic and international communities. These projects are run under the aegis of CSAIL and staffed by MIT volunteers who give training and install and donate computer setups in greater Boston, Massachusetts; Kenya; Native American tribal reservations in the American Southwest, such as the Navajo Nation; the Middle East; and the Fiji Islands. The CommuniTech project strives to empower under-served communities through sustainable technology and education, and does this through the MIT Used Computer Factory (UCF), providing refurbished computers to under-served families, and through the Families Accessing Computer Technology (FACT) classes, which train those families to become familiar and comfortable with computer technology. Notable researchers (including members and alumni of CSAIL's predecessor laboratories) MacArthur Fellows Tim Berners-Lee, Erik Demaine, Dina Katabi, Daniela L. Rus, Regina Barzilay, Peter Shor, Richard Stallman, and Joshua Tenenbaum Turing Award recipients Leonard M. Adleman, Fernando J. Corbató, Shafi Goldwasser, Butler W. Lampson, John McCarthy, Silvio Micali, Marvin Minsky, Ronald L. Rivest, Adi Shamir, Barbara Liskov, and Michael Stonebraker IJCAI Computers and Thought Award recipients Terry Winograd, Patrick Winston, David Marr, Gerald Jay Sussman, Rodney Brooks Rolf Nevanlinna Prize recipients Madhu Sudan, Peter Shor, Constantinos Daskalakis Gödel Prize recipients Shafi Goldwasser (two-time recipient), Silvio Micali, Maurice Herlihy, Charles Rackoff, Johan Håstad, Peter Shor, and Madhu Sudan Grace Murray Hopper Award recipients Robert Metcalfe, Shafi Goldwasser, Guy L. Steele, Jr., Richard Stallman, and W. Daniel Hillis Textbook authors Harold Abelson and Gerald Jay Sussman, Richard Stallman, Thomas H. Cormen, Charles E. Leiserson, Patrick Winston, Ronald L. 
Rivest, Barbara Liskov, John Guttag, Jerome H. Saltzer, Frans Kaashoek, Clifford Stein, and Nancy Lynch David D. Clark, former chief protocol architect for the Internet; co-author with Jerome H. Saltzer (also a CSAIL member) and David P. Reed of the influential paper "End-to-End Arguments in Systems Design" Eric Grimson, expert on computer vision and its applications to medicine, appointed Chancellor of MIT March 2011 Bob Frankston, co-developer of VisiCalc, the first computer spreadsheet Seymour Papert, inventor of the Logo programming language Joseph Weizenbaum, creator of the ELIZA computer-simulated therapist Notable alumni Robert Metcalfe, who later invented Ethernet at Xerox PARC and later founded 3Com Marc Raibert, who created the robot company Boston Dynamics Drew Houston, co-founder of Dropbox Colin Angle and Helen Greiner who, with previous CSAIL director Rodney Brooks, founded iRobot Jeremy Wertheimer, who developed ITA Software used by travel websites like Kayak and Orbitz Max Krohn, co-founder of OkCupid Directors Directors of Project MAC Robert Fano, 1963–1968 J. C. R. Licklider, 1968–1971 Edward Fredkin, 1971–1974 Michael Dertouzos, 1974–1975 Directors of the Artificial Intelligence Laboratory Marvin Minsky, 1970–1972 Patrick Winston, 1972–1997 Rodney Brooks, 1997–2003 Directors of the Laboratory for Computer Science Michael Dertouzos, 1975–2001 Victor Zue, 2001–2003 Directors of CSAIL Rodney Brooks, 2003–2007 Victor Zue, 2007–2011 Anant Agarwal, 2011–2012 Daniela L. Rus, 2012– CSAIL Alliances CSAIL Alliances is the industry connection arm of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). 
CSAIL Alliances offers companies programs to connect with the research, faculty, students, and startups of CSAIL by providing organizations with opportunities to learn about the research, engage with students, explore collaborations with researchers, and join research initiatives such as FinTech at CSAIL, MIT Future of Data, and Machine Learning Applications. See also References Further reading "A Marriage of Convenience: The Founding of the MIT Artificial Intelligence Laboratory" (PDF), Chiou et al. — includes important information on the Incompatible Timesharing System Weizenbaum. Rebel at Work: a documentary film with and about Joseph Weizenbaum Garfinkel, Simson (1999). Abelson, Hal (ed.). Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT. Cambridge, Massachusetts: MIT Press. ISBN 0-262-07196-7. External links Official website of CSAIL, successor of the AI Lab
The term Science DMZ refers to a computer subnetwork that is structured to be secure, but without the performance limits that would otherwise result from passing data through a stateful firewall. The Science DMZ is designed to handle high-volume data transfers, typical of scientific and high-performance computing, by creating a special DMZ to accommodate those transfers. It is typically deployed at or near the local network perimeter, and is optimized for a moderate number of high-speed flows, rather than for general-purpose business systems or enterprise computing. The term Science DMZ was coined by collaborators at the US Department of Energy's ESnet in 2010. A number of universities and laboratories have deployed or are deploying a Science DMZ. In 2012 the National Science Foundation funded the creation or improvement of Science DMZs on several university campuses in the United States. The Science DMZ is a network architecture to support Big Data. The so-called information explosion has been discussed since the mid-1960s, and more recently the term data deluge has been used to describe the exponential growth in many types of data sets. These huge data sets often need to be copied from one location to another using the Internet. The movement of data sets of this magnitude in a reasonable amount of time should be possible on modern networks. For example, it should take less than 4 hours to transfer 10 terabytes of data over a 10 Gigabit Ethernet network path, assuming disk performance is adequate. The problem is that this requires networks that are free from packet loss and from middleboxes, such as traffic shapers or firewalls, that slow network performance. Stateful firewalls Most businesses and other institutions use a firewall to protect their internal network from malicious attacks originating from outside. All traffic between the internal network and the external Internet must pass through a firewall, which discards traffic likely to be harmful. 
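The transfer-time figure quoted above is simple arithmetic: at a sustained 10 Gbit/s, 10 terabytes moves in roughly 2¼ hours at line rate, leaving headroom for protocol overhead within the "less than 4 hours" claim:

```python
# Back-of-the-envelope check of the 10 TB over 10 Gigabit Ethernet figure.
data_bits = 10 * 10**12 * 8        # 10 terabytes (decimal), in bits
link_bps = 10 * 10**9              # 10 Gbit/s line rate

hours = data_bits / link_bps / 3600
print(f"{hours:.2f} hours")        # about 2.22 hours at full line rate
assert hours < 4
```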
A stateful firewall tracks the state of each logical connection passing through it, and rejects data packets inappropriate for the state of the connection. For example, a website would not be allowed to send a page to a computer on the internal network unless the computer had requested it. This requires the firewall to keep track of the pages recently requested, and match requests with responses. A firewall must also analyze network traffic in much more detail than other networking components, such as routers and switches. Routers only have to deal with the network layer, but firewalls must also process the transport and application layers. All this additional processing takes time, and limits network throughput. While routers and most other networking components can handle speeds of 100 billion bits per second (100 Gbit/s), firewalls limit traffic to about 1 Gbit/s, which is unacceptable for passing large amounts of scientific data. Modern firewalls can leverage custom hardware (ASICs) to accelerate traffic and inspection, in order to achieve higher throughput. This can present an alternative to Science DMZs and allows in-place inspection through existing firewalls, as long as unified threat management (UTM) inspection is disabled. While a stateful firewall may be necessary for critical business data, such as financial records, credit cards, employment data, student grades, and trade secrets, science data requires less protection, because copies usually exist in multiple locations and there is less economic incentive to tamper with it. DMZ A firewall must restrict access to the internal network but allow external access to services offered to the public, such as web servers on the internal network. This is usually accomplished by creating a separate internal network called a DMZ, a play on the term "demilitarized zone." External devices are allowed to access devices in the DMZ. 
Devices in the DMZ are usually maintained more carefully to reduce their vulnerability to malware. Hardened devices are sometimes called bastion hosts. The Science DMZ takes the DMZ idea one step farther, by moving high performance computing into its own DMZ. Specially configured routers pass science data directly to or from designated devices on an internal network, thereby creating a virtual DMZ. Security is maintained by setting access control lists (ACLs) in the routers to only allow traffic to/from particular sources and destinations. Security is further enhanced by using an intrusion detection system (IDS) to monitor traffic, and look for indications of attack. When an attack is detected, the IDS can automatically update router tables, resulting in what some call a Remotely Triggered BlackHole (RTBH). Justification The Science DMZ provides a well-configured location for the networking, systems, and security infrastructure that supports high-performance data movement. In data-intensive science environments, data sets have outgrown portable media, and the default configurations used by many equipment and software vendors are inadequate for high performance applications. The components of the Science DMZ are specifically configured to support high performance applications, and to facilitate the rapid diagnosis of performance problems. Without the deployment of dedicated infrastructure, it is often impossible to achieve acceptable performance. Simply increasing network bandwidth is usually not good enough, as performance problems are caused by many factors, ranging from underpowered firewalls to dirty fiber optics to untuned operating systems. The Science DMZ is the codification of a set of shared best practices—concepts that have been developed over the years—from the scientific networking and systems community. 
The Science DMZ model describes the essential components of high-performance data transfer infrastructure in a way that is accessible to non-experts and scalable across any size of institution or experiment. Components The primary components of a Science DMZ are: a high-performance Data Transfer Node (DTN) running parallel data transfer tools such as GridFTP; a network performance monitoring host, such as perfSONAR; and a high-performance router/switch. Optional Science DMZ components include support for layer-2 Multiprotocol Label Switching (MPLS) virtual private networks (VPNs) and support for software-defined networking. See also Big Data perfSONAR References External links ESnet web pages describing the Science DMZ; NSF program funding Science DMZs; announcement on Ohio State University Science DMZ; NSF solicitation on funding to build Science DMZs; University of Utah's Science DMZ
Wikipedia
Hostile Waters, released as Hostile Waters: Antaeus Rising in America, is a hybrid vehicle and strategy game developed and published by Rage Software for Microsoft Windows. It was inspired by Carrier Command (Realtime Games, 1988). It has won several awards, as well as an unofficial award from Rock Paper Shotgun as a "lost classic" or "the best game you've never played". Plot Hostile Waters takes place in a utopian future where war has been abolished. In the year 2012, a revolutionary war takes place between the people and the corrupt, power-hungry politicians, leaders and businessmen described as the "Old Guard". The Old Guard were defeated, with only a few of their leaders escaping. By 2032, the world has been rebuilt as a utopia with the help of nanotechnological assemblers, which are used in "creation engines" to create matter from energy and waste, for free. The newly united world is governed from a capital city known as Central. Missile attacks are suddenly launched against major cities all over the world from an unknown location, eventually discovered to be an island chain in the South Pacific Ocean. In response to the missile attacks, a special forces team is sent in for a preliminary investigation of the area; the Ministry of Intelligence (MinIntel) loses contact with it shortly thereafter. The world government authorises a reactivation of the Antaeus program, a series of warships able to create any weapon of their choosing using their on-board nanotechnological creation engine. Two of these were left on the seabed in case of an emergency, capable of being re-activated and refloated. On board is a series of "soulcatcher" chips, the product of a classified 1990s military research program into storing human brain functions on a silicon chip. The soulcatcher technology was used to store the minds of every crew member ever assigned to an Antaeus vessel. 
It is soon discovered that one of the cruisers does not respond to the awakening signal. The other cruiser, however, is refloated and re-activated, with heavy damage to vital ship components. A course is plotted for a nearby disused wet-dock. As the Antaeus progresses from the wet-dock, unusual biological life-forms are discovered amongst the enemy bases on the islands. The aggressor firing the missiles is confirmed to be the remnants of the old, pre-Central forces, known as the Cabal. Outnumbering Central's army a thousand to one, they are fighting with thousands of troops and weapons that they hid away when it became apparent that the war was lost. The Antaeus is deployed into the island chain to stop the Cabal's operations there. It is later discovered that, along with their superior numbers, they have also biologically engineered a species of organic machines, designed in the popular likeness of extraterrestrials, which they intend to use to create the fear of an alien invasion, facilitating their takeover of the world and the removal of the public use of creation engines. The Cabal later lose control of the species, which eventually turns on its masters, destroying them. The species starts spreading, modifying the planetary climate and geographical features in an attempt to exterminate humanity and make the planet more hospitable to itself. Having exterminated its creators, the species resolves to cleanse humanity as a whole from the planet using a massive 'disassembler cannon', only to be stopped by the Antaeus. The species subsequently attempts to flee into the cosmos and colonise the surrounding planets and stars, by launching a massive number of 'culture stones' (information devices that also double as creation engines) into space from an enormous, artificially-grown organic "island", the final staging point. 
Central's only option is to bind the Antaeus' creation engine and the disassembler cannon stolen from the aliens together to create a makeshift bomb, and detonate it at the central "column" containing the culture stones. The plan succeeds, and the Antaeus is sacrificed to save the world. The final cinematic shows the organic disassembler cannon and the Antaeus' creation engine moving closer together and fusing, creating something new. A post-credits scene also shows that two of the species' culture stones have managed to get into space. Gameplay Each mission takes place on or near a fortified enemy island containing various forms of anti-air and ground defence, with scattered unit-production complexes powered by oil derricks and fuel containers (which are dependent on the oil derricks) that the player can destroy to keep the enemy from replacing destroyed forces. Vehicles are built on the Antaeus and, if desired, land vehicles can be delivered to a location by the air-lifting "magpie". Units are created by providing Antaeus with resources, which are obtained at the beginning of the level, and debris, which is taken from destroyed enemy units and structures. Transport helicopters such as the "Pegasus" can fly to an object and airlift it to the ship-board recycling system with few resources required. The carrier can analyse objects it disassembles at the rear of the Antaeus cruiser, and several of the game's vehicles and items are unlocked by "sampling" them in this fashion. The game has a number of vehicles that are progressively unlocked as the missions progress. Vehicles contain a number of slots for equipment and a selection of different types of weapons to use in the vehicle. A variety of vehicle and equipment combinations can be designed. Vehicles have an individual damage multiplier, such that different vehicles with the same weapon will do different damage. 
In addition to this, each soul-chip personality specializes in one unit and in specific equipment which, if equipped, grants a bonus in efficiency. Development The game was developed by a team of 12 people. Reception The game received "favourable" reviews according to the review aggregation website Metacritic. Carla Harker of NextGen said, "You'll feel like a real battlefield general when you take to the field in Antaeus Rising." Jake The Snake of GamePro said, "If the usual game categories leave you unscathed, get bloodied in these Hostile Waters." Notes References External links Hostile Waters: Antaeus Rising at MobyGames
In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification. The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic F β {\displaystyle F_{\beta }} score applies additional weights, valuing one of precision or recall more than the other. The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if the precision or the recall is zero. Etymology The F-measure is believed to be named after a different F function in Van Rijsbergen's book, when the measure was introduced to the Fourth Message Understanding Conference (MUC-4, 1992). Definition The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall: F 1 = 2 r e c a l l − 1 + p r e c i s i o n − 1 = 2 p r e c i s i o n ⋅ r e c a l l p r e c i s i o n + r e c a l l = 2 T P 2 T P + F P + F N {\displaystyle F_{1}={\frac {2}{\mathrm {recall} ^{-1}+\mathrm {precision} ^{-1}}}=2{\frac {\mathrm {precision} \cdot \mathrm {recall} }{\mathrm {precision} +\mathrm {recall} }}={\frac {2\mathrm {TP} }{2\mathrm {TP} +\mathrm {FP} +\mathrm {FN} }}} With precision = TP / (TP + FP) and recall = TP / (TP + FN), it follows that the numerator of F1 is the sum of their numerators and the denominator of F1 is the sum of their denominators. 
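The identity between the harmonic-mean form and the closed form in TP, FP and FN can be checked with a minimal Python sketch (function names are illustrative):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Equal to the closed form 2*TP / (2*TP + FP + FN):
# with tp=8, fp=2, fn=4, both give 16/22.
```

For example, with 8 true positives, 2 false positives and 4 false negatives, precision is 0.8 and recall is 2/3, and both forms give the same F1 of about 0.727.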
To see it as a harmonic mean, note that F 1 − 1 = 1 2 ( r e c a l l − 1 + p r e c i s i o n − 1 ) {\displaystyle F_{1}^{-1}={\frac {1}{2}}(\mathrm {recall} ^{-1}+\mathrm {precision} ^{-1})} . Fβ score A more general F score, F β {\displaystyle F_{\beta }} , that uses a positive real factor β {\displaystyle \beta } , where β {\displaystyle \beta } is chosen such that recall is considered β {\displaystyle \beta } times as important as precision, is: F β = β 2 + 1 ( β 2 ⋅ r e c a l l − 1 ) + p r e c i s i o n − 1 = ( 1 + β 2 ) ⋅ p r e c i s i o n ⋅ r e c a l l ( β 2 ⋅ p r e c i s i o n ) + r e c a l l {\displaystyle F_{\beta }={\frac {\beta ^{2}+1}{(\beta ^{2}\cdot \mathrm {recall} ^{-1})+\mathrm {precision} ^{-1}}}={\frac {(1+\beta ^{2})\cdot \mathrm {precision} \cdot \mathrm {recall} }{(\beta ^{2}\cdot \mathrm {precision} )+\mathrm {recall} }}} To see that it is a weighted harmonic mean, note that F β − 1 = 1 β + β − 1 ( β ⋅ r e c a l l − 1 + β − 1 ⋅ p r e c i s i o n − 1 ) {\displaystyle F_{\beta }^{-1}={\frac {1}{\beta +\beta ^{-1}}}(\beta \cdot \mathrm {recall} ^{-1}+\beta ^{-1}\cdot \mathrm {precision} ^{-1})} . In terms of Type I and type II errors this becomes: F β = ( 1 + β 2 ) ⋅ T P ( 1 + β 2 ) ⋅ T P + β 2 ⋅ F N + F P {\displaystyle F_{\beta }={\frac {(1+\beta ^{2})\cdot \mathrm {TP} }{(1+\beta ^{2})\cdot \mathrm {TP} +\beta ^{2}\cdot \mathrm {FN} +\mathrm {FP} }}\,} Two commonly used values for β {\displaystyle \beta } are 2, which weighs recall higher than precision, and 1/2, which weighs recall lower than precision. The F-measure was derived so that F β {\displaystyle F_{\beta }} "measures the effectiveness of retrieval with respect to a user who attaches β {\displaystyle \beta } times as much importance to recall as precision". 
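The Fβ formula above can be sketched directly from precision and recall (a minimal illustration; the function name is an assumption):

```python
def fbeta_score(precision: float, recall: float, beta: float) -> float:
    """F-beta: recall weighted beta times as heavily as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# beta = 1 recovers F1; beta = 2 rewards recall, beta = 1/2 rewards precision.
# With precision 0.5 and recall 1.0: F2 ≈ 0.833 but F0.5 ≈ 0.556.
```

The example values show the weighting at work: the same classifier scores much higher under F2 than under F0.5 because its recall is perfect while its precision is mediocre.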
It is based on Van Rijsbergen's effectiveness measure E = 1 − ( α p + 1 − α r ) − 1 {\displaystyle E=1-\left({\frac {\alpha }{p}}+{\frac {1-\alpha }{r}}\right)^{-1}} Their relationship is: F β = 1 − E {\displaystyle F_{\beta }=1-E} where α = 1 1 + β 2 {\displaystyle \alpha ={\frac {1}{1+\beta ^{2}}}} Diagnostic testing This is related to the field of binary classification where recall is often termed "sensitivity". Dependence of the F-score on class imbalance Precision-recall curve, and thus the F β {\displaystyle F_{\beta }} score, explicitly depends on the ratio r {\displaystyle r} of positive to negative test cases. This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020 ) is to use a standard class ratio r 0 {\displaystyle r_{0}} when making such comparisons. Applications The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance. It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class. Earlier works focused primarily on the F1 score, but with the proliferation of large scale search engines, performance goals changed to place more emphasis on either precision or recall and so F β {\displaystyle F_{\beta }} is seen in wide application. The F-score is also used in machine learning. However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier. The F-score has been widely used in the natural language processing literature, such as in the evaluation of named entity recognition and word segmentation. 
Properties The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items. The F1-score of a classifier which always predicts the positive class converges to 1 as the probability of the positive class increases. The F1-score of a classifier which always predicts the positive class is equal to 2 * proportion_of_positive_class / ( 1 + proportion_of_positive_class ), since the recall is 1, and the precision is equal to the proportion of the positive class. If the scoring model is uninformative (cannot distinguish between the positive and negative class) then the optimal threshold is 0 so that the positive class is always predicted. F1 score is concave in the true positive rate. Criticism David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem. According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary classification evaluation. David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class and the true class predicting the classifier prediction. He proposed separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation. Another source of criticism of F1 is its lack of symmetry: its value may change when the dataset labeling is inverted, i.e., when the "positive" samples are relabeled "negative" and vice versa. This criticism is met by the P4 metric definition, which is sometimes indicated as a symmetrical extension of F1. Finally, Ferrer and Dyrland et al. 
argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems. Difference from Fowlkes–Mallows index While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean. Extension to multi-class classification The F-score is also used for evaluating classification problems with more than two classes (Multiclass classification). A common method is to average the F-score over each class, aiming at a balanced measurement of performance. Macro F1 Macro F1 is a macro-averaged F1 score aiming at a balanced performance measurement. To calculate macro F1, two different averaging-formulas have been used: the F1 score of (arithmetic) class-wise precision and recall means or the arithmetic mean of class-wise F1 scores, where the latter exhibits more desirable properties. Micro F1 Micro F1 is the harmonic mean of micro precision and micro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equal accuracy, because accuracy takes true negatives into account while micro F1 does not. See also BLEU Confusion matrix Hypothesis tests for accuracy METEOR NIST (metric) Receiver operating characteristic ROUGE (metric) Uncertainty coefficient, aka Proficiency Word error rate LEPOR
In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) significantly below 0 °C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0 °C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately. Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface. Characteristics Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. 
This nucleus of the red phase then grows and converts the system to this phase. The standard theory that describes this behaviour for the nucleation of a new thermodynamic phase is called classical nucleation theory (CNT). However, CNT fails to describe experimental results for vapour-to-liquid nucleation, even for model substances like argon, by several orders of magnitude. For nucleation of a new thermodynamic phase, such as the formation of ice in water below 0 °C, if the system is not evolving with time and nucleation occurs in one step, then the probability that nucleation has not occurred should undergo exponential decay. This is seen for example in the nucleation of ice in supercooled small water droplets. The decay rate of the exponential gives the nucleation rate. Classical nucleation theory is a widely used approximate theory for estimating these rates, and how they vary with variables such as temperature. It correctly predicts that the time you have to wait for nucleation decreases extremely rapidly as the supersaturation increases. It is not just new phases such as liquids and crystals that form via nucleation followed by growth. The self-assembly process that forms objects like the amyloid aggregates associated with Alzheimer's disease also starts with nucleation. Energy-consuming self-organising systems such as the microtubules in cells also show nucleation and growth. Heterogeneous nucleation often dominates homogeneous nucleation Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation. For example, in the nucleation of ice from supercooled water droplets, purifying the water to remove all or almost all impurities results in water droplets that freeze below around −35 °C, whereas water that contains impurities may freeze at −5 °C or warmer. 
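The exponential decay mentioned above has a simple form: for one-step nucleation at a constant rate k, the probability that a droplet has not yet nucleated by time t is exp(−kt). A minimal sketch (names are illustrative):

```python
import math

def survival_probability(t: float, rate: float) -> float:
    """P(no nucleation by time t) for a constant, one-step nucleation rate."""
    return math.exp(-rate * t)

# The nucleation rate is the decay constant of this exponential:
# at t = 1/rate, about 1/e (~37%) of droplets remain untransformed.
```

Fitting the slope of the logarithm of the surviving fraction against time is how the nucleation rate is extracted from droplet experiments of this kind.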
This observation that heterogeneous nucleation can occur when the rate of homogeneous nucleation is essentially zero, is often understood using classical nucleation theory. This predicts that the nucleation slows exponentially with the height of a free energy barrier ΔG*. This barrier comes from the free energy penalty of forming the surface of the growing nucleus. For homogeneous nucleation the nucleus is approximated by a sphere, but as we can see in the schematic of macroscopic droplets to the right, droplets on surfaces are not complete spheres and so the area of the interface between the droplet and the surrounding fluid is less than a sphere's 4 π r 2 {\displaystyle 4\pi r^{2}} . This reduction in surface area of the nucleus reduces the height of the barrier to nucleation and so speeds nucleation up exponentially. Nucleation can also start at the surface of a liquid. For example, computer simulations of gold nanoparticles show that the crystal phase sometimes nucleates at the liquid-gold surface. Computer simulation studies of simple models Classical nucleation theory makes a number of assumptions, for example it treats a microscopic nucleus as if it is a macroscopic droplet with a well-defined surface whose free energy is estimated using an equilibrium property: the interfacial tension σ. For a nucleus that may be only of order ten molecules across it is not always clear that we can treat something so small as a volume plus a surface. Also nucleation is an inherently out of thermodynamic equilibrium phenomenon so it is not always obvious that its rate can be estimated using equilibrium properties. However, modern computers are powerful enough to calculate essentially exact nucleation rates for simple models. These have been compared with the classical theory, for example for the case of nucleation of the crystal phase in the model of hard spheres. This is a model of perfectly hard spheres in thermal motion, and is a simple model of some colloids. 
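Classical nucleation theory gives the barrier reduction for a cap-shaped nucleus on a flat surface a standard closed form: ΔG*het = f(θ)·ΔG*hom, with f(θ) = (2 + cos θ)(1 − cos θ)²/4 for contact angle θ. A sketch of this geometric factor (the function name is illustrative):

```python
import math

def cap_shape_factor(theta_deg: float) -> float:
    """CNT geometric factor f(theta) multiplying the homogeneous barrier."""
    c = math.cos(math.radians(theta_deg))
    return (2 + c) * (1 - c) ** 2 / 4

# f(180) = 1: a completely non-wetting surface gives no advantage over
# homogeneous nucleation; f(90) = 0.5 halves the barrier; f -> 0 as theta -> 0,
# so a well-wetted surface can remove the barrier almost entirely.
```

Because the rate depends exponentially on the barrier height, even a modest reduction in f(θ) can change nucleation rates by many orders of magnitude, which is why surfaces dominate in practice.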
For the crystallization of hard spheres the classical theory is a very reasonable approximate theory. So for the simple models we can study, classical nucleation theory works quite well, but we do not know if it works equally well for (say) complex molecules crystallising out of solution. The spinodal region Phase-transition processes can also be explained in terms of spinodal decomposition, where phase separation is delayed until the system enters the unstable region where a small perturbation in composition leads to a decrease in energy and, thus, spontaneous growth of the perturbation. This region of a phase diagram is known as the spinodal region and the phase separation process is known as spinodal decomposition and may be governed by the Cahn–Hilliard equation. The nucleation of crystals In many cases, liquids and solutions can be cooled down, or concentrated up, to conditions where the liquid or solution is significantly less thermodynamically stable than the crystal, but where no crystals will form for minutes, hours, weeks or longer; this process is called supercooling. Nucleation of the crystal is then being prevented by a substantial barrier. This has consequences: for example, cold high-altitude clouds may contain large numbers of small liquid water droplets that are far below 0 °C. In small volumes, such as in small droplets, only one nucleation event may be needed for crystallisation. In these small volumes, the time until the first crystal appears is usually defined to be the nucleation time. Calcium carbonate crystal nucleation depends not only on the degree of supersaturation but also on the ratio of calcium to carbonate ions in aqueous solutions. In larger volumes many nucleation events will occur. A simple model for crystallisation in that case, combining nucleation and growth, is the KJMA or Avrami model. 
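The KJMA (Avrami) model mentioned above expresses the transformed fraction as X(t) = 1 − exp(−K tⁿ), where K lumps together the nucleation and growth rates and n is the Avrami exponent. A minimal sketch (parameter values in the comment are illustrative):

```python
import math

def avrami_fraction(t: float, k: float, n: float) -> float:
    """Crystallised fraction under the KJMA (Avrami) model."""
    return 1 - math.exp(-k * t ** n)

# The model produces the characteristic S-shaped transformation curve:
# a slow start while nuclei form, rapid transformation as crystals grow
# and new nuclei keep appearing, then saturation as crystals impinge.
```

The exponent n reflects the dimensionality of growth and whether nucleation continues during the transformation, which is why fitted Avrami exponents are often used to diagnose the crystallisation mechanism.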
Although the existing theories, including classical nucleation theory, explain well the steady nucleation state, when the crystal nucleation rate is not time dependent, the initial non-steady-state transient nucleation, and the even more mysterious incubation period, require more attention from the scientific community. Chemical ordering of the undercooled liquid prior to crystal nucleation has been suggested to be responsible for that feature, by reducing the energy barrier for nucleation. Primary and secondary nucleation The time until the appearance of the first crystal is also called the primary nucleation time, to distinguish it from secondary nucleation times. Primary here refers to the first nucleus to form, while secondary nuclei are crystal nuclei produced from a preexisting crystal. Primary nucleation describes the transition to a new phase that does not rely on the new phase already being present, either because it is the very first nucleus of that phase to form, or because the nucleus forms far from any pre-existing piece of the new phase. Particularly in the study of crystallisation, secondary nucleation can be important. This is the formation of nuclei of a new crystal directly caused by pre-existing crystals. For example, if the crystals are in a solution and the system is subject to shearing forces, small crystal nuclei could be sheared off a growing crystal, thus increasing the number of crystals in the system. So both primary and secondary nucleation increase the number of crystals in the system, but their mechanisms are very different, and secondary nucleation relies on crystals already being present. Experimental observations on the nucleation times for the crystallisation of small volumes It is typically difficult to experimentally study the nucleation of crystals. The nucleus is microscopic, and thus too small to be directly observed. 
In large liquid volumes there are typically multiple nucleation events, and it is difficult to disentangle the effects of nucleation from those of growth of the nucleated phase. These problems can be overcome by working with small droplets. As nucleation is stochastic, many droplets are needed so that statistics for the nucleation events can be obtained. To the right is shown an example set of nucleation data. It is for the nucleation at constant temperature and hence supersaturation of the crystal phase in small droplets of supercooled liquid tin; this is the work of Pound and La Mer. Nucleation occurs in different droplets at different times, hence the fraction is not a simple step function that drops sharply from one to zero at one particular time. The red curve is a fit of a Gompertz function to the data. This is a simplified version of the model Pound and La Mer used to model their data. The model assumes that nucleation occurs due to impurity particles in the liquid tin droplets, and it makes the simplifying assumption that all impurity particles produce nucleation at the same rate. It also assumes that these particles are Poisson distributed among the liquid tin droplets. The fit values are that the nucleation rate due to a single impurity particle is 0.02/s, and the average number of impurity particles per droplet is 1.2. Note that about 30% of the tin droplets never freeze; the data plateaus at a fraction of about 0.3. Within the model this is assumed to be because, by chance, these droplets do not have even one impurity particle and so there is no heterogeneous nucleation. Homogeneous nucleation is assumed to be negligible on the timescale of this experiment. The remaining droplets freeze in a stochastic way, at rates 0.02/s if they have one impurity particle, 0.04/s if they have two, and so on. 
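The fitted model just described has a simple closed form: with impurity counts Poisson-distributed (mean λ) and each impurity nucleating independently at rate r, the fraction of droplets still liquid at time t averages to exp(λ(e^(−rt) − 1)). A sketch using the fitted values from the text (the function name is illustrative):

```python
import math

def fraction_liquid(t: float, rate: float = 0.02,
                    mean_impurities: float = 1.2) -> float:
    """Pound-La Mer model: Poisson-distributed impurity particles per droplet,
    each producing nucleation at the same constant rate."""
    return math.exp(mean_impurities * (math.exp(-rate * t) - 1))

print(fraction_liquid(0))    # 1.0: all droplets start liquid
print(fraction_liquid(1e4))  # ~exp(-1.2) ~ 0.30: the never-freezing plateau
```

The long-time plateau exp(−λ) ≈ 0.30 is exactly the Poisson probability that a droplet contains zero impurity particles, matching the roughly 30% of tin droplets that never freeze.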
These data are just one example, but they illustrate common features of the nucleation of crystals in that there is clear evidence for heterogeneous nucleation, and that nucleation is clearly stochastic. Ice The freezing of small water droplets to ice is an important process, particularly in the formation and dynamics of clouds. Water (at atmospheric pressure) does not freeze at 0 °C, but rather at temperatures that tend to decrease as the volume of the water decreases and as the concentration of dissolved chemicals in the water increases. Thus small droplets of water, as found in clouds, may remain liquid far below 0 °C. An example of experimental data on the freezing of small water droplets is shown at the right. The plot shows the fraction of a large set of water droplets, that are still liquid water, i.e., have not yet frozen, as a function of temperature. Note that the highest temperature at which any of the droplets freezes is close to -19 °C, while the last droplet to freeze does so at almost -35 °C. Examples Nucleation of fluids (gases and liquids) Clouds form when wet air cools (often because the air rises) and many small water droplets nucleate from the supersaturated air. The amount of water vapour that air can carry decreases with lower temperatures. The excess vapor begins to nucleate and to form small water droplets which form a cloud. Nucleation of the droplets of liquid water is heterogeneous, occurring on particles referred to as cloud condensation nuclei. Cloud seeding is the process of adding artificial condensation nuclei to quicken the formation of clouds. Bubbles of carbon dioxide nucleate shortly after the pressure is released from a container of carbonated liquid. Nucleation in boiling can occur in the bulk liquid if the pressure is reduced so that the liquid becomes superheated with respect to the pressure-dependent boiling point. More often, nucleation occurs on the heating surface, at nucleation sites. 
Typically, nucleation sites are tiny crevices where free gas-liquid surface is maintained or spots on the heating surface with lower wetting properties. Substantial superheating of a liquid can be achieved after the liquid is de-gassed and if the heating surfaces are clean, smooth and made of materials well wetted by the liquid. Some champagne stirrers operate by providing many nucleation sites via high surface-area and sharp corners, speeding the release of bubbles and removing carbonation from the wine. The Diet Coke and Mentos eruption offers another example. The surface of Mentos candy provides nucleation sites for the formation of carbon-dioxide bubbles from carbonated soda. Both the bubble chamber and the cloud chamber rely on nucleation, of bubbles and droplets, respectively. Nucleation of crystals The most common crystallisation process on Earth is the formation of ice. Liquid water does not freeze at 0 °C unless there is ice already present; cooling significantly below 0 °C is required to nucleate ice and for the water to freeze. For example, small droplets of very pure water can remain liquid down to below -30 °C although ice is the stable state below 0 °C. Many of the materials we make and use are crystalline, but are made from liquids, e.g. crystalline iron made from liquid iron cast into a mold, so the nucleation of crystalline materials is widely studied in industry. It is used heavily in the chemical industry for cases such as in the preparation of metallic ultradispersed powders that can serve as catalysts. For example, platinum deposited onto TiO2 nanoparticles catalyses the decomposition of water. It is an important factor in the semiconductor industry, as the band gap energy in semiconductors is influenced by the size of nanoclusters. Nucleation in solids In addition to the nucleation and growth of crystals e.g. 
in non-crystalline glasses, the nucleation and growth of impurity precipitates in crystals at, and between, grain boundaries is quite important industrially. For example, in metals, solid-state nucleation and precipitate growth play an important role, e.g. in modifying mechanical properties such as ductility, while in semiconductors they play an important role, e.g. in trapping impurities during integrated circuit manufacture.
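The stochastic character of nucleation noted above can be illustrated with a toy simulation (not a fit to the droplet data discussed earlier): each droplet is assigned an exponentially distributed waiting time with an illustrative, constant nucleation rate, and the surviving liquid fraction decays accordingly.

```python
import math
import random

def liquid_fraction(n_droplets, rate, elapsed, seed=42):
    """Toy model of stochastic nucleation: each droplet freezes after an
    exponentially distributed waiting time (constant nucleation rate).
    Returns the fraction of droplets still liquid after `elapsed` time.
    Rate and times are illustrative, in arbitrary units."""
    rng = random.Random(seed)
    frozen = sum(1 for _ in range(n_droplets)
                 if rng.expovariate(rate) <= elapsed)
    return 1.0 - frozen / n_droplets

# For a constant rate, the surviving fraction should follow exp(-rate * t).
for t in (0.5, 1.0, 2.0):
    simulated = liquid_fraction(100_000, rate=1.0, elapsed=t)
    predicted = math.exp(-t)
    print(f"t={t}: simulated {simulated:.3f}, predicted {predicted:.3f}")
```

In real droplet experiments the nucleation rate itself depends steeply on temperature, which is why the freezing of a cooled population spreads over a range of temperatures rather than occurring at a single point.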
Wikipedia
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.). In many languages, the term technical physics is also used; it has been in use since 1861, when the German physics teacher J. Frick employed it in his publications. Terminology In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, the former specializes in nuclear power research (i.e. nuclear engineering), while the latter is closer to engineering physics. In some universities and their institutions, an engineering physics (or applied physics) major is a discipline or specialization within the scope of engineering science, or applied science. Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering". Expertise Unlike traditional engineering disciplines, engineering science or engineering physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science or engineering physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc.
It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis. Degrees In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. Awards There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field. See also Applied physics Engineering Engineering science and mechanics Environmental engineering science Index of engineering science and mechanics articles Industrial engineering Notes and references External links "Engineering Physics at Xavier" "The Engineering Physicist Profession" "Engineering Physicist Professional Profile" Society of Engineering Science Inc. Archived 2017-08-07 at the Wayback Machine
In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, should not be passable due to the object not having sufficient energy to pass or surmount the barrier. Tunneling is a consequence of the wave nature of matter, where the quantum wave function describes the state of a particle or other physical system, and wave equations such as the Schrödinger equation describe their behavior. The probability of transmission of a wave packet through a barrier decreases exponentially with the barrier height, the barrier width, and the tunneling particle's mass, so tunneling is seen most prominently in low-mass particles such as electrons or protons tunneling through microscopically narrow barriers. Tunneling is readily detectable with barriers of thickness about 1–3 nm or smaller for electrons, and about 0.1 nm or smaller for heavier particles such as protons or hydrogen atoms. Some sources describe the mere penetration of a wave function into the barrier, without transmission on the other side, as a tunneling effect, such as in tunneling into the walls of a finite potential well. Tunneling plays an essential role in physical phenomena such as nuclear fusion and alpha radioactive decay of atomic nuclei. Tunneling applications include the tunnel diode, quantum computing, flash memory, and the scanning tunneling microscope. Tunneling limits the minimum size of devices used in microelectronics because electrons tunnel readily through insulating layers and transistors that are thinner than about 1 nm. The effect was predicted in the early 20th century. Its acceptance as a general physical phenomenon came mid-century. Introduction to the concept Quantum tunnelling falls under the domain of quantum mechanics. 
To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. In quantum mechanics, a particle can, with a small probability, tunnel to the other side, thus crossing the barrier. The reason for this difference comes from treating matter as having properties of waves and particles. Tunnelling problem The wave function of a physical system of particles specifies everything that can be known about the system. Therefore, problems in quantum mechanics analyze the system's wave function. Using mathematical formulations, such as the Schrödinger equation, the time evolution of a known wave function can be deduced. The square of the absolute value of this wave function is directly related to the probability distribution of the particle positions, which describes the probability that the particles would be measured at those positions. As shown in the animation, when a wave packet impinges on the barrier, most of it is reflected and some is transmitted through the barrier. The wave packet becomes more de-localized: it is now on both sides of the barrier and lower in maximum amplitude, but equal in integrated square-magnitude, meaning that the probability the particle is somewhere remains unity. The wider the barrier and the higher the barrier energy, the lower the probability of tunneling. Some models of a tunneling barrier, such as the rectangular barriers shown, can be analysed and solved algebraically. Most problems do not have an algebraic solution, so numerical solutions are used.
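As a concrete illustration of the algebraically solvable rectangular barrier, the sketch below evaluates the standard closed-form transmission probability for a particle with energy below the barrier height (the particular energies and widths are illustrative choices, not values from the article):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron volt, J

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    """Exact transmission probability through a rectangular barrier of
    height V0 and the given width, for a particle with energy E < V0:
    T = 1 / (1 + V0^2 sinh^2(kappa*L) / (4 E (V0 - E)))."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2.0 * mass * (V0 - E)) / HBAR  # decay constant inside barrier
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * width_m)**2)
                        / (4.0 * E * (V0 - E)))

# Transmission falls off steeply with barrier width (illustrative values:
# a 1 eV electron and a 2 eV barrier).
for w_nm in (0.5, 1.0, 2.0):
    print(f"{w_nm} nm: T = {transmission(1.0, 2.0, w_nm * 1e-9):.3e}")
```

The roughly exponential decrease with width is why tunnelling currents are only readily detectable across barriers of a few nanometres or less, as noted earlier.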
"Semiclassical methods" offer approximate solutions that are easier to compute, such as the WKB approximation. History The Schrödinger equation was published in 1926. The first person to apply the Schrödinger equation to a problem that involved tunneling between two classically allowed regions through a potential barrier was Friedrich Hund in a series of articles published in 1927. He studied the solutions of a double-well potential and discussed molecular spectra. Leonid Mandelstam and Mikhail Leontovich discovered tunneling independently and published their results in 1928. In 1927, Lothar Nordheim, assisted by Ralph Fowler, published a paper that discussed thermionic emission and reflection of electrons from metals. He assumed a surface potential barrier that confines the electrons within the metal and showed that the electrons have a finite probability of tunneling through or reflecting from the surface barrier when their energies are close to the barrier energy. Classically, the electron would either transmit or reflect with 100% certainty, depending on its energy. In 1928 J. Robert Oppenheimer published two papers on field emission, i.e. the emission of electrons induced by strong electric fields. Nordheim and Fowler simplified Oppenheimer's derivation and found values for the emitted currents and work functions that agreed with experiments. A great success of the tunnelling theory was the mathematical explanation for alpha decay, which was developed in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon. The latter researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling. All three researchers were familiar with the works on field emission, and Gamow was aware of Mandelstam and Leontovich's findings. 
In the early days of quantum theory, the term tunnel effect was not used, and the effect was instead referred to as penetration of, or leaking through, a barrier. The German term wellenmechanische Tunneleffekt was used in 1931 by Walter Schottky. The English term tunnel effect entered the language in 1932 when it was used by Yakov Frenkel in his textbook. In 1957 Leo Esaki demonstrated tunnelling of electrons through a barrier a few nanometres wide in a semiconductor structure and developed a diode based on the tunnel effect. In 1960, following Esaki's work, Ivar Giaever showed experimentally that tunnelling also took place in superconductors. The tunnelling spectrum gave direct evidence of the superconducting energy gap. In 1962, Brian Josephson predicted the tunneling of superconducting Cooper pairs. Esaki, Giaever and Josephson shared the 1973 Nobel Prize in Physics for their works on quantum tunneling in solids. In 1981, Gerd Binnig and Heinrich Rohrer developed a new type of microscope, called the scanning tunneling microscope, which is based on tunnelling and is used for imaging surfaces at the atomic level. Binnig and Rohrer were awarded the Nobel Prize in Physics in 1986 for their discovery. Applications Tunnelling is the cause of some important macroscopic physical phenomena. Solid-state physics Electronics Tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in a substantial power drain and heating effects that plague such devices. It is considered the lower limit on how small microelectronic device elements can be made. Tunnelling is a fundamental technique used to program the floating gates of flash memory. Cold emission Cold emission of electrons is relevant to semiconductors and superconductor physics.
It is similar to thermionic emission, in which electrons escape from the surface of a metal to follow a voltage bias because random collisions with other particles occasionally leave them with more energy than the barrier. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field. These materials are important for flash memory, vacuum tubes, and some electron microscopes. Tunnel junction A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires understanding quantum tunnelling. Josephson junctions take advantage of quantum tunnelling and superconductivity to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields, as well as the multijunction solar cell. Tunnel diode Diodes are electrical semiconductor devices that allow electric current to flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose. When these are heavily doped, the depletion layer can be thin enough for tunnelling. When a small forward bias is applied, the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy levels of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode behaves like an ordinary diode. Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage increases. This peculiar property is used in some applications, such as high-speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage.
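The roughly exponential field dependence of cold (field) emission mentioned above can be sketched with a Fowler-Nordheim-type expression; the prefactor and exponent constants below are illustrative placeholders in arbitrary units, not material values:

```python
import math

def fn_current_density(field, a=1.0, b=10.0):
    """Fowler-Nordheim-type current density: J ~ a * F^2 * exp(-b / F).
    The constants `a` and `b` lump together material parameters such as
    the work function; the values here are illustrative only."""
    return a * field**2 * math.exp(-b / field)

# Doubling the field raises the current by far more than a factor of two,
# because thinning the barrier boosts the tunnelling probability exponentially.
for F in (1.0, 1.5, 2.0):
    print(f"F={F}: J = {fn_current_density(F):.3e}")
```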
The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage at which a large current is favored, achieved by placing two thin layers with a high-energy conduction band near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling occurs and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage further increases, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable. Tunnel field-effect transistors A European research project demonstrated field-effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ≈1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they would improve the performance per power of integrated circuits. Conductivity of crystalline solids While the Drude-Lorentz model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions. When a free electron wave packet encounters a long array of uniformly spaced barriers, the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that 100% transmission becomes possible. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to extremely high conductance, and that impurities in the metal will disrupt it.
Scanning tunneling microscope The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, may allow imaging of individual atoms on the surface of a material. It operates by taking advantage of the relationship between quantum tunnelling and distance. When the tip of the STM's needle is brought close to a conduction surface that has a voltage bias, measuring the current of electrons that are tunnelling between the needle and the surface reveals the distance between the needle and the surface. By using piezoelectric rods that change in size when voltage is applied, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor. STMs are accurate to 0.001 nm, or about 1% of an atomic diameter. Nuclear physics Nuclear fusion Quantum tunnelling is an essential phenomenon for nuclear fusion. The temperature in stellar cores is generally insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve thermonuclear fusion. Quantum tunnelling increases the probability of penetrating this barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction. Radioactive decay Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunnelling into the nucleus is electron capture). This was the first application of quantum tunnelling. Radioactive decay is a relevant issue for astrobiology, as this consequence of quantum tunnelling creates a constant energy source over a large time interval for environments outside the circumstellar habitable zone where insolation would not be possible (subsurface oceans) or effective.
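The constant-current feedback scheme of the STM described above can be mimicked in a few lines: the current depends exponentially on the tip-surface gap, and a proportional controller acting on the logarithm of the current adjusts the tip height until the current matches the setpoint, so the recorded tip heights trace the surface topography. All constants here are illustrative, not instrument values.

```python
import math

# Illustrative constants (not from the article).
KAPPA = 1.0e10   # inverse decay length, 1/m (order of magnitude for electrons)
I0 = 1.0         # current extrapolated to zero gap, arbitrary units
I_SET = 1.0e-4   # feedback setpoint current, arbitrary units

def tunnel_current(gap):
    """Tunnelling current falls off exponentially with the tip-surface gap."""
    return I0 * math.exp(-2.0 * KAPPA * gap)

def scan(surface_heights, gain=0.5, steps=50):
    """Constant-current scan: adjust the tip height at each point so the
    current stays at I_SET; the recorded tip heights then trace the surface."""
    tip = surface_heights[0] + 1.0e-9   # initial guess: 1 nm above first point
    trace = []
    for h in surface_heights:
        for _ in range(steps):          # proportional feedback in log space
            gap = tip - h
            error = math.log(tunnel_current(gap) / I_SET)
            tip += gain * error / (2.0 * KAPPA)  # too much current -> retract
        trace.append(tip)
    return trace
```

Because the feedback holds the gap constant, differences between successive recorded tip heights reproduce the surface height differences, which is the basis of constant-current topographic imaging.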
Quantum tunnelling may be one of the mechanisms of hypothetical proton decay. Chemistry Energetically forbidden reactions Chemical reactions in the interstellar medium occur at extremely low energies. Probably the most fundamental ion-molecule reaction involves hydrogen ions with hydrogen molecules. The quantum mechanical tunnelling rate for the same reaction using the hydrogen isotope deuterium, D− + H2 → H− + HD, has been measured experimentally in an ion trap. The deuterium was placed in an ion trap and cooled. The trap was then filled with hydrogen. At the temperatures used in the experiment, the energy barrier would not allow the reaction to succeed with classical dynamics alone. Quantum tunneling allowed reactions to happen in rare collisions; it was calculated from the experimental data that the reaction occurred in roughly one of every hundred billion collisions. Kinetic isotope effect In chemical kinetics, the substitution of a light isotope of an element with a heavier one typically results in a slower reaction rate. This is generally attributed to differences in the zero-point vibrational energies for chemical bonds containing the lighter and heavier isotopes and is generally modeled using transition state theory. However, in certain cases, large isotopic effects are observed that cannot be accounted for by a semi-classical treatment, and quantum tunnelling is required. R. P. Bell developed a modified treatment of Arrhenius kinetics that is commonly used to model this phenomenon. Astrochemistry in interstellar clouds By including quantum tunnelling, the astrochemical syntheses of various molecules in interstellar clouds can be explained, such as the synthesis of molecular hydrogen, water (ice) and the prebiotically important formaldehyde. Tunnelling of molecular hydrogen has been observed in the lab. Quantum biology Quantum tunnelling is among the central non-trivial quantum effects in quantum biology.
Here it is important both as electron tunnelling and proton tunnelling. Electron tunnelling is a key factor in many biochemical redox reactions (photosynthesis, cellular respiration) as well as enzymatic catalysis. Proton tunnelling is a key factor in spontaneous DNA mutation. Spontaneous mutation occurs when normal DNA replication takes place after a particularly significant proton has tunnelled. A hydrogen bond joins DNA base pairs. Along a hydrogen bond, the proton moves in a double-well potential whose two wells are separated by a potential energy barrier. It is believed that the double-well potential is asymmetric, with one well deeper than the other, such that the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower well. The proton's movement from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardised, causing a mutation. Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunnelling-induced mutations in biology are believed to be a cause of ageing and cancer.
Mathematical discussion Schrödinger equation The time-independent Schrödinger equation for one particle in one dimension can be written as
$$-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\Psi(x) + V(x)\Psi(x) = E\Psi(x)$$
or
$$\frac{d^2}{dx^2}\Psi(x) = \frac{2m}{\hbar^2}\bigl(V(x)-E\bigr)\Psi(x) \equiv \frac{2m}{\hbar^2}M(x)\Psi(x),$$
where $\hbar$ is the reduced Planck constant, $m$ is the particle mass, $x$ represents distance measured in the direction of motion of the particle, $\Psi$ is the Schrödinger wave function, $V$ is the potential energy of the particle (measured relative to any convenient reference level), $E$ is the energy of the particle that is associated with motion in the x-axis (measured relative to $V$), and $M(x)$ is the quantity $V(x)-E$, which has no accepted name in physics. The solutions of the Schrödinger equation take different forms for different values of $x$, depending on whether $M(x)$ is positive or negative. When $M(x)$ is constant and negative, the Schrödinger equation can be written in the form
$$\frac{d^2}{dx^2}\Psi(x) = \frac{2m}{\hbar^2}M(x)\Psi(x) = -k^2\Psi(x), \qquad \text{where}\quad k^2 = -\frac{2m}{\hbar^2}M.$$
The solutions of this equation represent travelling waves, with phase-constant $+k$ or $-k$. Alternatively, if $M(x)$ is constant and positive, the Schrödinger equation can be written in the form
$$\frac{d^2}{dx^2}\Psi(x) = \frac{2m}{\hbar^2}M(x)\Psi(x) = \kappa^2\Psi(x), \qquad \text{where}\quad \kappa^2 = \frac{2m}{\hbar^2}M.$$
The solutions of this equation are rising and falling exponentials in the form of evanescent waves. When $M(x)$ varies with position, the same difference in behaviour occurs, depending on whether $M(x)$ is negative or positive: the sign of $M(x)$ determines the nature of the medium, with negative $M(x)$ corresponding to medium A and positive $M(x)$ corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive $M(x)$ is sandwiched between two regions of negative $M(x)$, hence creating a potential barrier. The mathematics of dealing with the situation where $M(x)$ varies with $x$ is difficult, except in special cases that usually do not correspond to physical reality. A full mathematical treatment appears in the 1965 monograph by Fröman and Fröman. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect. WKB approximation The wave function is expressed as the exponential of a function:
$$\Psi(x) = e^{\Phi(x)}, \qquad \text{where}\quad \Phi''(x) + \Phi'(x)^2 = \frac{2m}{\hbar^2}\bigl(V(x)-E\bigr).$$
$\Phi'(x)$ is then separated into real and imaginary parts:
$$\Phi'(x) = A(x) + iB(x),$$
where $A(x)$ and $B(x)$ are real-valued functions. Substituting the second equation into the first and using the fact that the imaginary part must vanish results in
$$A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2}\bigl(V(x)-E\bigr).$$
To solve this equation using the semiclassical approximation, each function must be expanded as a power series in $\hbar$. From the equations, the power series must start with at least an order of $\hbar^{-1}$ to satisfy the real part of the equation; for a good classical limit, starting with the highest power of the Planck constant possible is preferable, which leads to
$$A(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k A_k(x) \qquad \text{and} \qquad B(x) = \frac{1}{\hbar}\sum_{k=0}^{\infty}\hbar^k B_k(x),$$
with the following constraints on the lowest-order terms:
$$A_0(x)^2 - B_0(x)^2 = 2m\bigl(V(x)-E\bigr) \qquad \text{and} \qquad A_0(x)B_0(x) = 0.$$
At this point two extreme cases can be considered. Case 1 If the amplitude varies slowly as compared to the phase, $A_0(x) = 0$ and
$$B_0(x) = \pm\sqrt{2m\bigl(E-V(x)\bigr)},$$
which corresponds to classical motion. Resolving the next order of expansion yields
$$\Psi(x) \approx C\,\frac{e^{\,i\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(E-V(x)\right)}\;+\;\theta}}{\sqrt[4]{\frac{2m}{\hbar^2}\left(E-V(x)\right)}}.$$
Case 2 If the phase varies slowly as compared to the amplitude, $B_0(x) = 0$ and
$$A_0(x) = \pm\sqrt{2m\bigl(V(x)-E\bigr)},$$
which corresponds to tunneling. Resolving the next order of the expansion yields
$$\Psi(x) \approx \frac{C_{+}\,e^{+\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}} + C_{-}\,e^{-\int dx\,\sqrt{\frac{2m}{\hbar^2}\left(V(x)-E\right)}}}{\sqrt[4]{\frac{2m}{\hbar^2}\left(V(x)-E\right)}}.$$
In both cases it is apparent from the denominator that both these approximate solutions break down near the classical turning points $E = V(x)$. Away from the potential hill, the particle acts similar to a free and oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and at the classical turning points, a global solution can be made. To start, a classical turning point $x_1$ is chosen and $\frac{2m}{\hbar^2}\left(V(x)-E\right)$ is expanded in a power series about $x_1$:
$$\frac{2m}{\hbar^2}\bigl(V(x)-E\bigr) = v_1(x-x_1) + v_2(x-x_1)^2 + \cdots$$
Keeping only the first-order term ensures linearity:
$$\frac{2m}{\hbar^2}\bigl(V(x)-E\bigr) = v_1(x-x_1).$$
Using this approximation, the equation near $x_1$ becomes a differential equation:
$$\frac{d^2}{dx^2}\Psi(x) = v_1(x-x_1)\Psi(x).$$
This can be solved using Airy functions:
$$\Psi(x) = C_A\,\mathrm{Ai}\!\left(\sqrt[3]{v_1}\,(x-x_1)\right) + C_B\,\mathrm{Bi}\!\left(\sqrt[3]{v_1}\,(x-x_1)\right).$$
Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the two coefficients on one side of a classical turning point, the two coefficients on the other side can be determined by using this local solution to connect them. Hence, the Airy-function solutions asymptote into sine, cosine and exponential functions in the proper limits. The relationships between $C, \theta$ and $C_{+}, C_{-}$ are
$$C_{+} = \tfrac{1}{2}C\cos\left(\theta - \tfrac{\pi}{4}\right) \qquad \text{and} \qquad C_{-} = -C\sin\left(\theta - \tfrac{\pi}{4}\right).$$
With the coefficients found, the global solution can be found. Therefore, the transmission coefficient for a particle tunneling through a single potential barrier is
$$T(E) = e^{-2\int_{x_1}^{x_2} dx\,\sqrt{\frac{2m}{\hbar^2}\left[V(x)-E\right]}},$$
where $x_1, x_2$ are the two classical turning points for the potential barrier. For a rectangular barrier, this expression simplifies to
$$T(E) = e^{-2\sqrt{\frac{2m}{\hbar^2}(V_0-E)}\,(x_2-x_1)}.$$
Faster than light Some physicists have claimed that it is possible for spin-zero particles to travel faster than the speed of light when tunnelling. This appears to violate the principle of causality, since a frame of reference then exists in which the particle arrives before it has left. In 1998, Francis E. Low reviewed briefly the phenomenon of zero-time tunnelling.
More recently, experimental tunnelling time data of phonons, photons, and electrons were published by Günter Nimtz. Another experiment, overseen by A. M. Steinberg, seems to indicate that particles could tunnel at apparent speeds faster than light. Other physicists, such as Herbert Winful, disputed these claims. Winful argued that the wave packet of a tunnelling particle propagates locally, so a particle can't tunnel through the barrier non-locally. Winful also argued that the experiments that are purported to show non-local propagation have been misinterpreted. In particular, the group velocity of a wave packet does not measure its speed, but is related to the amount of time the wave packet is stored in the barrier. Moreover, if quantum tunneling is modeled with the relativistic Dirac equation, well-established mathematical theorems imply that the process is completely subluminal. Dynamical tunneling The concept of quantum tunneling can be extended to situations where there exists a quantum transport between regions that are classically not connected even if there is no associated potential barrier. This phenomenon is known as dynamical tunnelling. Tunnelling in phase space The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d>1). In the case of an integrable system, where bounded classical trajectories are confined onto tori in phase space, tunnelling can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori. Chaos-assisted tunnelling In real life, most systems are not integrable and display various degrees of chaos. Classical dynamics is then said to be mixed and the system phase space is typically composed of islands of regular orbits surrounded by a large sea of chaotic orbits. The existence of the chaotic sea, where transport is classically allowed, between the two symmetric tori then assists the quantum tunnelling between them.
This phenomenon is referred to as chaos-assisted tunnelling and is characterized by sharp resonances of the tunnelling rate when varying any system parameter. Resonance-assisted tunnelling When ℏ is small compared to the size of the regular islands, the fine structure of the classical phase space plays a key role in tunnelling. In particular the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands. Related phenomena Several phenomena have the same behavior as quantum tunnelling. Two examples are evanescent wave coupling (the application of Maxwell's wave equation to light) and the application of the non-dispersive wave equation from acoustics applied to "waves on strings". These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A. The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B. In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more mediums and barriers, and the barriers need not be discrete. Approximations are useful in this case.
A classical wave-particle association was originally analyzed as analogous to quantum tunneling, but subsequent analysis found a fluid dynamics cause related to the vertical momentum imparted to particles near the barrier. See also Dielectric barrier discharge Field electron emission Holstein–Herring method Proton tunneling Quantum cloning Superconducting tunnel junction Tunnel diode Tunnel junction White hole
External links Animation, applications and research linked to tunnel effect and other quantum phenomena (Université Paris Sud) Animated illustration of quantum tunneling Animated illustration of quantum tunneling in a RTD device Interactive Solution of Schrodinger Tunnel Equation
Wikipedia
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using a convolutional code or trellis code. There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-intensive of these, but it performs maximum-likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice. Viterbi decoding was developed by Andrew J. Viterbi and published in the paper Viterbi, A. (April 1967). "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/tit.1967.1054010. There are both hardware (in modems) and software implementations of a Viterbi decoder. Viterbi decoding is used in the iterative Viterbi decoding algorithm. Hardware implementation A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks: Branch metric unit (BMU) Path metric unit (PMU) Traceback unit (TBU) Branch metric unit (BMU) A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet and the received symbol. There are hard-decision and soft-decision Viterbi decoders. A hard-decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft-decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol; for instance, this reliability information can be encoded using three bits per symbol. This is not the only way to encode reliability data. The squared Euclidean distance is used as a metric for soft-decision decoders.
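The two branch metrics described above can be sketched as follows (an illustrative Python sketch, not taken from any particular implementation; the function names are invented):

```python
# Sketch of a branch metric unit (BMU) for a rate-1/2 code whose
# code symbols are 2-bit pairs. Names and values are illustrative.

def hamming_branch_metric(received_bits, expected_bits):
    """Hard-decision metric: count of differing bit positions."""
    return sum(r != e for r, e in zip(received_bits, expected_bits))

def euclidean_branch_metric(received_soft, expected_symbol):
    """Soft-decision metric: squared Euclidean distance, with the
    expected symbol mapped to the antipodal (+1/-1) alphabet."""
    return sum((r - e) ** 2 for r, e in zip(received_soft, expected_symbol))

# Hard decision: received 2-bit pair vs. the candidate code symbol 11.
print(hamming_branch_metric([1, 0], [1, 1]))                 # 1
# Soft decision: noisy samples vs. the symbol (+1, -1).
print(euclidean_branch_metric([0.8, -0.6], (1.0, -1.0)))
```

In a real decoder these metrics would be computed for every candidate symbol of the trellis stage, then passed to the path metric unit.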
Path metric unit (PMU) A path metric unit summarizes branch metrics to get metrics for 2 K − 1 {\displaystyle 2^{K-1}} paths, where K is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes 2 K − 1 {\displaystyle 2^{K-1}} decisions, discarding paths known to be non-optimal. The results of these decisions are written to the memory of a traceback unit. The core elements of a PMU are ACS (Add-Compare-Select) units. The way in which they are interconnected is defined by the specific code's trellis diagram. Since branch metrics are always ≥ 0 {\displaystyle \geq 0} , there must be an additional circuit (not shown) preventing the metric counters from overflowing. An alternate method that eliminates the need to monitor the path metric growth is to allow the path metrics to "roll over"; to use this method it is necessary to make sure the path metric accumulators contain enough bits to prevent the "best" and "worst" values from coming within 2^(n−1) of each other, where n is the accumulator width in bits. The compare circuit is essentially unchanged. It is possible to monitor the noise level on the incoming bit stream by monitoring the rate of growth of the "best" path metric. A simpler way to do this is to monitor a single location or "state" and watch it pass "upward" through, say, four discrete levels within the range of the accumulator. As it passes upward through each of these thresholds, a counter is incremented that reflects the "noise" present on the incoming signal. Traceback unit (TBU) The traceback unit restores an (almost) maximum-likelihood path from the decisions made by the PMU. Since it does this in the reverse direction, a Viterbi decoder includes a FILO (first-in, last-out) buffer to reconstruct the correct order. Some implementations require operation at double frequency; there are tricks that eliminate this requirement.
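The Add-Compare-Select step performed by each ACS unit can be sketched as follows (illustrative; assumes distance-style metrics where smaller is better, and the function name is invented):

```python
# Illustrative Add-Compare-Select (ACS) step for one trellis state.
# Two predecessor paths merge into each state.

def acs(path_metric_0, branch_metric_0, path_metric_1, branch_metric_1):
    """Add branch metrics to the two merging path metrics, compare,
    and select the survivor (smaller is better for distance metrics).
    Returns (new path metric, decision bit for the traceback memory)."""
    cand0 = path_metric_0 + branch_metric_0
    cand1 = path_metric_1 + branch_metric_1
    if cand0 <= cand1:
        return cand0, 0
    return cand1, 1

metric, decision = acs(5, 2, 4, 4)   # candidates: 5+2=7 and 4+4=8
print(metric, decision)              # survivor is path 0 with metric 7
```

The decision bit is what the PMU writes to the traceback memory each clock cycle.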
Implementation issues Quantization for soft decision decoding In order to fully exploit the benefits of soft decision decoding, one needs to quantize the input signal properly. The optimal quantization zone width is defined by the following formula: T = N 0 2 k , {\displaystyle \,\!T={\sqrt {\frac {N_{0}}{2^{k}}}},} where N 0 {\displaystyle N_{0}} is the noise power spectral density, and k is the number of bits for soft decision. Euclidean metric computation The squared norm ( ℓ 2 {\displaystyle \ell _{2}} ) distance between the received and the actual symbols in the code alphabet may be further simplified into a linear sum/difference form, which makes it less computationally intensive. Consider a rate-1/2 convolutional code, which generates 2 bits (00, 01, 10 or 11) for every input bit (1 or 0). These return-to-zero signals are translated into a non-return-to-zero (±1) form. Each received symbol may be represented in vector form as vr = {r0, r1}, where r0 and r1 are soft decision values, whose magnitudes signify the joint reliability of the received vector, vr. Every symbol in the code alphabet may, likewise, be represented by the vector vi = {±1, ±1}. The actual computation of the Euclidean distance metric is: D = ( v r → − v i → ) 2 = v r → 2 − 2 v r → v i → + v i → 2 {\displaystyle \,\!D=({\overrightarrow {v_{r}}}-{\overrightarrow {v_{i}}})^{2}={\overrightarrow {v_{r}}}^{2}-2{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}}}+{\overrightarrow {v_{i}}}^{2}} Each square term is a normed distance, depicting the energy of the symbol. For example, the energy of the symbol vi = {±1, ±1} may be computed as v i → 2 = ( ± 1 ) 2 + ( ± 1 ) 2 = 2 {\displaystyle \,\!{\overrightarrow {v_{i}}}^{2}=(\pm 1)^{2}+(\pm 1)^{2}=2} Thus, the energy term of all symbols in the code alphabet is constant (at (normalized) value 2).
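The quantization formula above can be sketched as follows (an illustrative Python sketch; the clamped signed-level quantizer is an assumed design, not taken from any particular implementation):

```python
import math

def quantization_zone_width(n0, k):
    """Optimal soft-decision quantization zone width T = sqrt(N0 / 2^k),
    where n0 is the noise power spectral density and k is the number of
    soft-decision bits."""
    return math.sqrt(n0 / 2 ** k)

def quantize(sample, k, n0):
    """Map a received sample to a k-bit signed soft-decision level,
    clamping at the extremes of the 2^k-level range (assumed design)."""
    t = quantization_zone_width(n0, k)
    level = int(sample / t)
    lo, hi = -(2 ** (k - 1)), 2 ** (k - 1) - 1
    return max(lo, min(hi, level))

print(quantization_zone_width(0.5, 3))   # T = sqrt(0.5 / 8) = 0.25
print(quantize(0.9, 3, 0.5))             # 0.9 / 0.25 -> clamped to level 3
```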
The Add-Compare-Select (ACS) operation compares the metric distance between the received symbol ||vr|| and any 2 symbols in the code alphabet whose paths merge at a node in the corresponding trellis, ||vi(0)|| and ||vi(1)||. This is equivalent to comparing D 0 = v r → 2 − 2 v r → v i 0 → + v i 0 → 2 {\displaystyle \,\!D_{0}={\overrightarrow {v_{r}}}^{2}-2{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{0}}}+{\overrightarrow {v_{i}^{0}}}^{2}} and D 1 = v r → 2 − 2 v r → v i 1 → + v i 1 → 2 {\displaystyle \,\!D_{1}={\overrightarrow {v_{r}}}^{2}-2{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{1}}}+{\overrightarrow {v_{i}^{1}}}^{2}} But, from above we know that the energy of vi is constant (equal to (normalized) value of 2), and the energy of vr is the same in both cases. This reduces the comparison to a minima function between the 2 (middle) dot product terms, min ( − 2 v r → v i 0 → , − 2 v r → v i 1 → ) = max ( v r → v i 0 → , v r → v i 1 → ) {\displaystyle \,\!\min(-2{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{0}}},-2{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{1}}})=\max({\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{0}}},{\overrightarrow {v_{r}}}{\overrightarrow {v_{i}^{1}}})} since a min operation on negative numbers may be interpreted as an equivalent max operation on positive quantities. Each dot product term may be expanded as max ( ± r 0 ± r 1 , ± r 0 ± r 1 ) {\displaystyle \,\!\max(\pm r_{0}\pm r_{1},\pm r_{0}\pm r_{1})} where the signs of each term depend on the symbols, vi(0) and vi(1), being compared. Thus, the squared Euclidean metric distance calculation to compute the branch metric may be performed with a simple add/subtract operation. Traceback The general approach to traceback is to accumulate path metrics for up to five times the code memory (5(K − 1)), find the node with the largest accumulated cost, and begin traceback from this node.
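The reduction above, from comparing squared Euclidean distances to comparing dot products, can be checked numerically (illustrative sketch with made-up sample values):

```python
# Numerical check that comparing squared Euclidean distances to two
# candidate symbols is equivalent to comparing the dot products
# +/- r0 +/- r1, since the energy terms are constant.

def sq_dist(r, v):
    return (r[0] - v[0]) ** 2 + (r[1] - v[1]) ** 2

def dot(r, v):
    return r[0] * v[0] + r[1] * v[1]

r = (0.7, -1.3)                  # received soft values (made up)
v0, v1 = (1, 1), (1, -1)         # two candidate code symbols

# Smaller squared distance <=> larger dot product.
closer_by_dist = v0 if sq_dist(r, v0) < sq_dist(r, v1) else v1
closer_by_dot = v0 if dot(r, v0) > dot(r, v1) else v1
print(closer_by_dist == closer_by_dot)   # True
```

The dot products here are exactly the add/subtract terms ±r0 ± r1 of the text.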
The commonly used rule of thumb of a truncation depth of five times the memory (constraint length K−1) of a convolutional code is accurate only for rate 1/2 codes. For an arbitrary rate, an accurate rule of thumb is 2.5(K − 1)/(1−r), where r is the code rate. However, computing the node which has accumulated the largest cost (either the largest or smallest integral path metric) involves finding the maxima or minima of several (usually 2^(K−1)) numbers, which may be time-consuming when implemented on embedded hardware systems. Most communication systems employ Viterbi decoding involving data packets of fixed sizes, with a fixed bit/byte pattern at the beginning and/or the end of the data packet. By using the known bit/byte pattern as reference, the start node may be set to a fixed value, thereby obtaining a perfect maximum-likelihood path during traceback. Limitations A physical implementation of a Viterbi decoder will not yield an exact maximum-likelihood stream due to quantization of the input signal, branch and path metrics, and finite traceback length. Practical implementations do approach within 1 dB of the ideal. The output of a Viterbi decoder, when decoding a message damaged by an additive Gaussian channel, has errors grouped in error bursts. Single-error-correcting codes alone can't correct such bursts, so either the convolutional code and the Viterbi decoder must be designed powerful enough to drive errors down to an acceptable rate, or burst error-correcting codes must be used. Punctured codes A hardware Viterbi decoder for punctured codes is commonly implemented with: A depuncturer, which transforms the input stream into a stream that looks like the original (non-punctured) stream, with ERASE marks at the places where bits were erased. A basic Viterbi decoder understanding these ERASE marks (that is, not using them for branch metric calculation).
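The depuncturer described above can be sketched as follows (illustrative; the cyclic mask convention of 1 = transmitted, 0 = erased is an assumption, and the names are invented):

```python
# Sketch of a depuncturer: re-expand a punctured stream by inserting
# ERASE marks where bits were deleted, so a basic Viterbi decoder can
# skip them during branch metric computation.

ERASE = None  # sentinel for an erased bit position

def depuncture(bits, pattern):
    """pattern is the puncturing mask (1 = transmitted, 0 = erased),
    applied cyclically to rebuild the original symbol stream."""
    out, i = [], 0   # i indexes the cyclic puncturing pattern
    for b in bits:
        # emit ERASE for every punctured position before this bit
        while pattern[i % len(pattern)] == 0:
            out.append(ERASE)
            i += 1
        out.append(b)
        i += 1
    return out

# Rate-2/3 puncturing of a rate-1/2 stream: every third bit was erased.
print(depuncture([1, 0, 1], [1, 1, 0]))   # [1, 0, None, 1]
```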
Software implementation One of the most time-consuming operations is the ACS butterfly, which is usually implemented using assembly language and appropriate instruction set extensions (such as SSE2) to speed up the decoding time. Applications The Viterbi decoding algorithm is widely used in the following areas: Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio relay, satellite communications, PSK31 digital mode for amateur radio. Decoding trellis-coded modulation (TCM), the technique used in telephone-line modems to squeeze high spectral efficiency out of 3 kHz-bandwidth analog telephone lines. Computer storage devices such as hard disk drives. Automatic speech recognition References External links Forney, G. David Jr (29 Apr 2005). "The Viterbi Algorithm: A Personal History". arXiv:cs/0504020. Details on Viterbi decoding, as well as a bibliography. Viterbi algorithm explanation with the focus on hardware implementation issues. r=1/6 k=15 coding for the Cassini mission to Saturn. Online Generator of optimized software Viterbi decoders (GPL). GPL Viterbi decoder software for four standard codes. Description of a k=24 Viterbi decoder, believed to be the largest ever in practical use. Generic Viterbi decoder hardware (GPL).
In computer science, the matrix mortality problem (or mortal matrix problem) is a decision problem that asks, given a finite set of n×n matrices with integer coefficients, whether the zero matrix can be expressed as a finite product of matrices from this set. The matrix mortality problem is known to be undecidable when n ≥ 3. In fact, it is already undecidable for sets of 6 matrices (or more) when n = 3, for 4 matrices when n = 5, for 3 matrices when n = 9, and for 2 matrices when n = 15. In the case n = 2, it is an open problem whether matrix mortality is decidable, but several special cases have been solved: the problem is decidable for sets of 2 matrices, and for sets of matrices which contain at most one invertible matrix.
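Although the general problem is undecidable, a bounded brute-force search illustrates the question: it can confirm mortality when a short zero product exists, but a negative answer only rules out products up to the chosen depth (illustrative sketch; the function names are invented):

```python
# Bounded search for a zero product over a finite set of integer
# matrices. This is only a sketch: "False" means no zero product of
# length <= depth exists, not that the set is immortal.
from itertools import product

def mat_mul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def is_mortal_up_to(mats, depth):
    n = len(mats[0])
    zero = tuple(tuple(0 for _ in range(n)) for _ in range(n))
    for length in range(1, depth + 1):
        for seq in product(mats, repeat=length):
            acc = seq[0]
            for m in seq[1:]:
                acc = mat_mul(acc, m)
            if acc == zero:
                return True
    return False

a = ((0, 1), (0, 0))
b = ((0, 0), (1, 0))
# a*a is the zero matrix, so the set {a, b} is mortal.
print(is_mortal_up_to([a, b], 3))   # True
```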
During sampling of granular materials (whether airborne, suspended in liquid, aerosol, or aggregated), correct sampling is defined in Gy's sampling theory as a sampling scenario in which all particles in a population have the same probability of ending up in the sample. The concentration of the property of interest in a sample can be a biased estimate for the concentration of the property of interest in the population from which the sample is drawn. Although generally non-zero, for correct sampling this bias is thought to be negligible. See also Particle filter Particle in a box Particulate matter sampler Statistical sampling Gy's sampling theory
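The effect of unequal inclusion probabilities can be illustrated with a small deterministic calculation (the particle counts, masses, and probabilities below are invented for illustration). It compares the ratio of expected sampled analyte mass to expected sampled total mass under equal and unequal inclusion probabilities:

```python
# Illustrative (assumed) two-type lot of particles:
# (count, particle mass, analyte mass per particle)
rich = (400, 2.0, 1.0)    # analyte-rich particles
poor = (600, 1.0, 0.05)   # analyte-poor particles

def expected_concentration(p_rich, p_poor):
    """Ratio of expected sampled analyte mass to expected sampled
    total mass, given per-particle inclusion probabilities."""
    analyte = p_rich * rich[0] * rich[2] + p_poor * poor[0] * poor[2]
    mass = p_rich * rich[0] * rich[1] + p_poor * poor[0] * poor[1]
    return analyte / mass

true_conc = expected_concentration(1.0, 1.0)   # the whole lot
correct = expected_concentration(0.3, 0.3)     # equal probabilities
incorrect = expected_concentration(0.1, 0.4)   # rich particles escape

print(round(true_conc, 4), round(correct, 4), round(incorrect, 4))
```

With equal probabilities the expected ratio matches the lot exactly; when the analyte-rich particles are undersampled, the estimate is biased low.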
Apostolos K. Doxiadis (Greek: Απόστολος Κ. Δοξιάδης [ðoksiˈaðis]; born 1953) is a Greek writer. He is best known for his international bestsellers Uncle Petros and Goldbach's Conjecture (2000) and Logicomix (2009). Early life Doxiadis was born in Australia, where his father, the architect Constantinos Apostolou Doxiadis, was working. Soon after his birth, the family returned to Athens, Greece, where Doxiadis grew up. Though his earliest interests were in poetry, fiction and the theatre, an intense interest in mathematics led Doxiadis to leave school at age fifteen to attend Columbia University in New York, from which he obtained a bachelor's degree in mathematics. He then attended the École Pratique des Hautes Études in Paris, from which he received a master's degree with a thesis on the mathematical modelling of the nervous system. His father's death and family reasons made him return to Greece in 1975, interrupting his graduate studies. In Greece, although involved for some years with the computer software industry, Doxiadis returned to his childhood and adolescent loves of theatre and the cinema before becoming a full-time writer. Work Fiction in Greek Doxiadis began to write in Greek. His first published work was A Parallel Life (Βίος Παράλληλος, 1985), a novella set in the monastic communities of 4th-century CE Egypt. His first novel, Makavettas (Μακαβέττας, 1988), recounted the adventures of a fictional power-hungry colonel at the time of the Greek military junta of 1967–1974. Written in a tongue-in-cheek imitation of Greek folk military memoirs, such as that of Yannis Makriyannis, it follows the plot of Shakespeare's Macbeth, of which the eponymous hero's name is a Hellenized form. Doxiadis's next novel, Uncle Petros and Goldbach's Conjecture (Ο Θείος Πέτρος και η Εικασία του Γκόλντμπαχ, 1992), was the first long work of fiction whose plot takes place in the world of pure mathematics research.
Initially, Greek critics did not find the mathematical themes appealing, and the novel received mediocre reviews, unlike Doxiadis's first two works, which were well received. The novella The Three Little Men (Τα Τρία Ανθρωπάκια, 1998) attempts a modern-day retelling of a classic fairy tale. Fiction in English In 1998, Doxiadis translated his third novel into English, significantly re-working it; it was published in England in 2000 as Uncle Petros and Goldbach's Conjecture (UK publisher: Faber and Faber; United States publisher: Bloomsbury USA). The book became an international bestseller, and has been published to date in more than thirty-five languages. It has received the praise of, among others, Nobel laureate John Nash, British mathematician Sir Michael Atiyah, critic George Steiner and neurologist Oliver Sacks. Uncle Petros is one of the 1001 Books You Must Read Before You Die. Doxiadis's next project, which took over five years to complete, was the graphic novel Logicomix (2009), a number one bestseller on the New York Times Best Seller list and an international bestseller, already published in over twenty languages. Logicomix was co-authored with computer scientist Christos Papadimitriou, with artwork by Alecos Papadatos (pencils) and Annie Di Donna (colour). Renowned comics historian and critic R. C. Harvey, in the Comics Journal, called Logicomix "a tour-de-force" and a "virtuoso performance", while The Sunday Times' Bryan Appleyard called it "probably the best and certainly the most extraordinary graphic novel" he has read. Logicomix is one of Paul Gravett's 1001 Comics You Must Read Before You Die. Theatre and cinema In the early stage of his career, Doxiadis directed in the professional theatre in Athens and worked as a translator, translating, among other plays, William Shakespeare's Romeo and Juliet, Hamlet and A Midsummer Night's Dream, as well as Eugene O'Neill's Mourning Becomes Electra. He has written two plays for the theatre.
The first was a full-length shadow-puppet play, The Tragical History of Jackson Pollock, Abstract Expressionist (1999), in English, of which he also designed and directed the Athens performance. In this play, Doxiadis realized some of his views on "epic theatre", in other words a theatre based on storytelling. His second play, Incompleteness (2005), is an imaginary account of the last seventeen days in the life of the great logician Kurt Gödel, which Gödel spent in a Princeton, New Jersey, hospital, refusing to eat out of fear that he was being poisoned. The play was staged in Athens in 2006, as Dekati Evdomi Nyhta (Seventeenth Night), with the actor Yorgos Kotanidis in the role of Kurt Gödel. Doxiadis has also written and directed two feature-length films, in Greek, Underground Passage (Υπόγεια Διαδρομή, 1983) and Terirem (Τεριρέμ, 1987). The latter won the CICAE (International Confederation of Art Cinemas) prize for Best Film at the 1988 Berlin International Film Festival. Scholarship Doxiadis has a lifelong interest in logic, cognitive psychology and rhetoric, as well as the theoretical study of narrative. In 2007, he organized, with mathematician Barry Mazur, a meeting on the theoretical investigation of the relationship of mathematics and narrative, whose proceedings were published as Circles Disturbed: The Interplay of Mathematics and Narrative (2012). Doxiadis has lectured extensively on his theoretical interests. Doxiadis's recent work has led him to formulate a theory about the development of deductive proof in classical Greece, which lays emphasis on influences from pre-existing patterns in narrative and, especially, Archaic Age poetry. Awards and honours Uncle Petros and Goldbach's Conjecture was the first recipient of the Premio Peano, the first international award for books inspired by mathematics, and was short-listed for the Prix Médicis.
Logicomix has earned numerous awards, among them the Bertrand Russell Society Award, the Royal Booksellers Association Award (the Netherlands), the New Atlantic Booksellers Award (US), the Prix Tangente (France), the Premio Carlo Boscarato (Italy), the Comicdom Award (Greece). It was chosen as "Book of the Year" by Time, Publishers Weekly, The Washington Post, The Financial Times, The Globe and Mail, and other publications. References External links Official website Official Logicomix website
In programming, the strangler fig pattern or strangler pattern is an architectural pattern that involves wrapping old code with the intent of redirecting it to newer code or to log uses of the old code. Coined by Martin Fowler, its name derives from the strangler fig plant, which tends to grow on trees and eventually kill them. It has also been called the Ship of Theseus pattern, named after a philosophical paradox. The pattern can be used at the method level or the class level. Rewrites One use of this pattern is during software rewrites. Code can be divided into many small sections, wrapped with the strangler fig pattern, then that section of old code can be swapped out with new code before moving on to the next section. This is less risky and more incremental than swapping out the entire piece of software. The strangler fig pattern can be used on monolithic applications to migrate them to a microservices architecture. Logging Another use of this pattern is the addition of logging to old code. For example, logging can be used to see how frequently the code is used in production, which can inform the decision of whether to delete low-usage code or to rewrite high-usage code. See also List of software architecture styles and patterns External links https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig https://martinfowler.com/bliki/StranglerFigApplication.html
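The wrap-and-redirect idea, together with logging of legacy usage, can be sketched as follows (an illustrative Python sketch; the class and function names are invented):

```python
# Minimal strangler-fig facade: requests hit the wrapper, which logs
# usage and routes to new code where it exists, falling back to the
# legacy implementation otherwise.

def legacy_price(order):          # old code being strangled
    return order["qty"] * 10

def new_price(order):             # replacement, migrated incrementally
    return order["qty"] * 10 * (0.9 if order["qty"] >= 100 else 1.0)

class StranglerFacade:
    def __init__(self):
        self.migrated = {}        # feature name -> new implementation
        self.calls = []           # usage log for the old code paths

    def route(self, feature, fallback, *args):
        handler = self.migrated.get(feature)
        if handler is None:
            self.calls.append(feature)   # log legacy usage
            handler = fallback
        return handler(*args)

facade = StranglerFacade()
order = {"qty": 100}
print(facade.route("price", legacy_price, order))   # legacy path: 1000
facade.migrated["price"] = new_price                # swap in new code
print(facade.route("price", legacy_price, order))   # new path: 900.0
```

Once every feature is migrated and the usage log shows no remaining legacy calls, the old code (and eventually the facade itself) can be deleted.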
The history of group theory, a mathematical domain studying groups in their various forms, has evolved in various parallel threads. There are three historical roots of group theory: the theory of algebraic equations, number theory and geometry. Joseph Louis Lagrange, Niels Henrik Abel and Évariste Galois were early researchers in the field of group theory. Early 19th century The earliest study of groups as such probably goes back to the work of Lagrange in the late 18th century. However, this work was somewhat isolated, and the 1846 publications of Augustin Louis Cauchy and Galois are more commonly referred to as the beginning of group theory. The theory did not develop in a vacuum, and so three important threads in its pre-history are developed here. Development of permutation groups One foundational root of group theory was the quest for solutions of polynomial equations of degree higher than 4. An early source occurs in the problem of forming an equation of degree m having as its roots m of the roots of a given equation of degree n > m {\displaystyle n>m} . For simple cases, the problem goes back to Johann van Waveren Hudde (1659). Nicholas Saunderson (1740) noted that the determination of the quadratic factors of a biquadratic expression necessarily leads to a sextic equation, and Thomas Le Seur (1703–1770) (1748) and Edward Waring (1762 to 1782) still further elaborated the idea. Waring proved the fundamental theorem of symmetric polynomials, and especially considered the relation between the roots of a quartic equation and its resolvent cubic. Lagrange's goal (1770, 1771) was to understand why equations of third and fourth degree admit formulas for solutions, and a key object was the group of permutations of the roots. On this was built the theory of substitutions. He discovered that the roots of all Lagrange resolvents (résolvantes, réduites) which he examined are rational functions of the roots of the respective equations.
To study the properties of these functions, he invented a Calcul des Combinaisons. The contemporary work of Alexandre-Théophile Vandermonde (1770) developed the theory of symmetric functions and the solution of cyclotomic polynomials. Leopold Kronecker has been quoted as saying that a new boom in algebra began with Vandermonde's first paper. Similarly, Cauchy gave credit to both Lagrange and Vandermonde for studying symmetric functions and permutations of variables. Paolo Ruffini (1799) attempted a proof of the impossibility of solving the quintic and higher equations. Ruffini was the first person to explore ideas in the theory of permutation groups such as the order of an element of a group, conjugacy, and the cycle decomposition of elements of permutation groups. Ruffini distinguished what are now called intransitive and transitive, and imprimitive and primitive groups, and (1801) used the group of an equation under the name l'assieme delle permutazioni. He also published a letter from Pietro Abbati to himself, in which the group idea is prominent. However, he never formalized the concept of a group, or even of a permutation group. Évariste Galois is honored as the first mathematician linking group theory and field theory, with the theory that is now called Galois theory. Galois also contributed to the theory of modular equations and to that of elliptic functions. His first publication on group theory was made at the age of eighteen (1829), but his contributions attracted little attention until the posthumous publication of his collected papers in 1846 (Liouville, Vol. XI). He considered for the first time what is now called the closure property of a group of permutations, which he expressed thus: if a group contains the substitutions S and T, then it contains the substitution ST.
Galois found that if r 1 , r 2 , … , r n {\displaystyle r_{1},r_{2},\ldots ,r_{n}} are the n roots of an equation, there is always a group of permutations of the r's such that every function of the roots invariant under the substitutions of the group is rationally known, and conversely, every rationally determinable function of the roots is invariant under the substitutions of the group. In modern terms, the solvability of the Galois group attached to the equation determines the solvability of the equation with radicals. Galois was the first to use the words group (groupe in French) and primitive in their modern meanings. He did not use primitive group but called equation primitive an equation whose Galois group is primitive. He discovered the notion of normal subgroups and found that a solvable primitive group may be identified with a subgroup of the affine group of an affine space over a finite field of prime order. Groups similar to Galois groups are (today) called permutation groups. The theory of permutation groups received further far-reaching development in the hands of Augustin Cauchy and Camille Jordan, both through the introduction of new concepts and, primarily, a great wealth of results about special classes of permutation groups and even some general theorems. Among other things, Jordan defined a notion of isomorphism, although limited to the context of permutation groups. It was also Jordan who put the term group in wide use. An abstract notion of a (finite) group appeared for the first time in Arthur Cayley's 1854 paper On the theory of groups, as depending on the symbolic equation θ n = 1 {\displaystyle \theta ^{n}=1} . Cayley proposed that any finite group is isomorphic to a subgroup of a permutation group, a result known today as Cayley's theorem.
In succeeding years, Cayley systematically investigated infinite groups and the algebraic properties of matrices, such as the associativity of multiplication, the existence of inverses, and characteristic polynomials. Groups related to geometry Secondly, the systematic use of groups in geometry, mainly in the guise of symmetry groups, was initiated by Felix Klein's 1872 Erlangen program. The study of what are now called Lie groups started systematically in 1884 with Sophus Lie, followed by the work of Wilhelm Killing, Eduard Study, Issai Schur, Ludwig Maurer, and Élie Cartan. The discontinuous (discrete group) theory was built up by Klein, Lie, Henri Poincaré, and Charles Émile Picard, particularly in connection with modular forms and monodromy. Appearance of groups in number theory The third root of group theory was number theory. Leonhard Euler considered algebraic operations on numbers modulo an integer (modular arithmetic) in his generalization of Fermat's little theorem. These investigations were taken much further by Carl Friedrich Gauss, who considered the structure of multiplicative groups of residues mod n and established many properties of cyclic and more general abelian groups that arise in this way. In his investigations of the composition of binary quadratic forms, Gauss explicitly stated the associative law for the composition of forms. In 1870, Leopold Kronecker gave a definition of an abelian group in the context of ideal class groups of a number field, generalizing Gauss's work. Ernst Kummer's attempts to prove Fermat's Last Theorem resulted in work introducing groups describing factorization into prime numbers. In 1882, Heinrich M. Weber realized the connection between permutation groups and abelian groups and gave a definition that included a two-sided cancellation property but omitted the existence of the inverse element, which was sufficient in his context (finite groups).
Convergence Group theory as an increasingly independent subject was popularized by Serret, who devoted section IV of his algebra to the theory; by Camille Jordan, whose Traité des substitutions et des équations algébriques (1870) is a classic; and by Eugen Netto, whose Theory of Substitutions and its Applications to Algebra (1882) was translated into English by Cole (1892). Other group theorists of the 19th century were Joseph Louis François Bertrand, Charles Hermite, Ferdinand Georg Frobenius, Leopold Kronecker, and Émile Mathieu; as well as William Burnside, Leonard Eugene Dickson, Otto Hölder, E. H. Moore, Ludwig Sylow, and Heinrich Martin Weber. The convergence of the above three sources into a uniform theory started with Jordan's Traité and with Walther von Dyck (1882), who first defined a group in the full modern sense. The textbooks of Weber and Burnside helped establish group theory as a discipline. The abstract group formulation did not apply to a large portion of 19th century group theory, and an alternative formalism was given in terms of Lie algebras. Late 19th century Groups in the 1870–1900 period were described as the continuous groups of Lie, the discontinuous groups, finite groups of substitutions of roots (gradually being called permutations), and finite groups of linear substitutions (usually of finite fields). During the 1880–1920 period, groups described by presentations came into a life of their own through the work of Cayley, Walther von Dyck, Max Dehn, Jakob Nielsen, Otto Schreier, and continued in the 1920–1940 period with the work of H. S. M. Coxeter, Wilhelm Magnus, and others to form the field of combinatorial group theory. Finite groups in the 1870–1900 period saw such highlights as the Sylow theorems, Hölder's classification of groups of square-free order, and the early beginnings of the character theory of Frobenius.
Already by 1860, the groups of automorphisms of the finite projective planes had been studied (by Mathieu), and in the 1870s Klein's group-theoretic vision of geometry was being realized in his Erlangen program. The automorphism groups of higher-dimensional projective spaces were studied by Jordan in his Traité and included composition series for most of the so-called classical groups, though he avoided non-prime fields and omitted the unitary groups. The study was continued by Moore and Burnside, and brought into comprehensive textbook form by Leonard Dickson in 1901. The role of simple groups was emphasized by Jordan, and criteria for non-simplicity were developed by Hölder until he was able to classify the simple groups of order less than 200. The study was continued by Frank Nelson Cole (up to 660) and Burnside (up to 1092), and finally, in an early "millennium project", up to order 2001 by Miller and Ling in 1900. Continuous groups in the 1870–1900 period developed rapidly: the foundational papers of Killing and Lie were published, Hilbert proved his theorem in invariant theory (1888), and so on. Early 20th century In the period 1900–1940, infinite "discontinuous" groups (now called discrete groups) gained a life of their own. Burnside's famous problem ushered in the study of arbitrary subgroups of finite-dimensional linear groups over arbitrary fields, and indeed arbitrary groups. Fundamental groups and reflection groups encouraged the developments of J. A. Todd and Coxeter, such as the Todd–Coxeter algorithm in combinatorial group theory. Algebraic groups, defined as solutions of polynomial equations (rather than acting on them, as in the earlier century), benefited heavily from the continuous theory of Lie. Bernard Neumann and Hanna Neumann produced their study of varieties of groups, groups defined by group-theoretic equations rather than polynomial ones. Continuous groups also had explosive growth in the 1900–1940 period. Topological groups began to be studied as such.
There were many great achievements in continuous groups: Cartan's classification of semisimple Lie algebras, Hermann Weyl's theory of representations of compact groups, Alfréd Haar's work in the locally compact case. Finite groups in the 1900-1940 period grew immensely. This period witnessed the birth of character theory by Frobenius, Burnside, and Schur, which helped answer many of the 19th century questions in permutation groups, and opened the way to entirely new techniques in abstract finite groups. This period saw the work of Philip Hall: on a generalization of Sylow's theorem to arbitrary sets of primes, which revolutionized the study of finite soluble groups, and on the power-commutator structure of p-groups, including the ideas of regular p-groups and isoclinism of groups, which revolutionized the study of p-groups and was the first major result in this area since Sylow. This period saw Hans Zassenhaus's famous Schur-Zassenhaus theorem on the existence of complements to Hall's generalization of Sylow subgroups, as well as his progress on Frobenius groups, and a near classification of Zassenhaus groups. Mid-20th century The depth, breadth, and impact of group theory subsequently grew. The domain started branching out into areas such as algebraic groups, group extensions, and representation theory. Starting in the 1950s, in a huge collaborative effort, group theorists succeeded in classifying all finite simple groups in 1982. Completing and simplifying the proof of the classification are areas of active research. Anatoly Maltsev also made important contributions to group theory during this time; his early work was in logic in the 1930s, but in the 1940s he proved important embedding properties of semigroups into groups, studied the isomorphism problem of group rings, established the Malçev correspondence for polycyclic groups, and in the 1960s returned to logic, proving various theories within the study of groups to be undecidable. 
Earlier, Alfred Tarski proved elementary group theory undecidable. The period of 1960-1980 was one of excitement in many areas of group theory. In finite groups, there were many independent milestones. One had the discovery of 22 new sporadic groups, and the completion of the first generation of the classification of finite simple groups. One had the influential idea of the Carter subgroup, and the subsequent creation of formation theory and the theory of classes of groups. One had the remarkable extensions of Clifford theory by Green to the indecomposable modules of group algebras. During this era, the field of computational group theory became a recognized field of study, due in part to its tremendous success during the first generation classification. In discrete groups, the geometric methods of Jacques Tits and the availability of the surjectivity of Serge Lang's map allowed a revolution in algebraic groups. The Burnside problem saw tremendous progress, with better counterexamples constructed in the 1960s and early 1980s, but the finishing touches "for all but finitely many" were not completed until the 1990s. The work on the Burnside problem increased interest in Lie algebras in exponent p, and the methods of Michel Lazard began to see a wider impact, especially in the study of p-groups. Continuous groups broadened considerably, with p-adic analytic questions becoming important. Many conjectures were made during this time, including the coclass conjectures. Late 20th century The last twenty years of the 20th century enjoyed the successes of over one hundred years of study in group theory. In finite groups, post-classification results included the O'Nan–Scott theorem, the Aschbacher classification, the classification of multiply transitive finite groups, the determination of the maximal subgroups of the simple groups and the corresponding classifications of primitive groups. In finite geometry and combinatorics, many problems could now be settled. 
Modular representation theory entered a new era as the techniques of the classification were axiomatized, including fusion systems, Luis Puig's theory of pairs and nilpotent blocks. The theory of finite soluble groups was likewise transformed by the influential book of Klaus Doerk and Trevor Hawkes, which brought the theory of projectors and injectors to a wider audience. In discrete groups, several areas of geometry came together to produce exciting new fields. Work on knot theory, orbifolds, hyperbolic manifolds, and groups acting on trees (the Bass–Serre theory) much enlivened the study of hyperbolic groups and automatic groups. Questions such as William Thurston's 1982 geometrization conjecture inspired entirely new techniques in geometric group theory and low-dimensional topology, and were involved in the solution of one of the Millennium Prize Problems, the Poincaré conjecture. Continuous groups saw the solution of the problem of hearing the shape of a drum in 1992 using symmetry groups of the Laplacian operator. Continuous techniques were applied to many aspects of group theory using function spaces and quantum groups. Many 18th and 19th century problems are now revisited in this more general setting, and many questions in the theory of the representations of groups have answers. Today Group theory continues to be an intensely studied matter. Its importance to contemporary mathematics as a whole may be seen from the 2008 Abel Prize, awarded to John Griggs Thompson and Jacques Tits for their contributions to group theory. Notes References Historically important publications in group theory. Curtis, Charles W. 
(2003), Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer, History of Mathematics, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2677-5 Galois, Évariste (1908), Tannery, Jules (ed.), Manuscrits de Évariste Galois, Paris: Gauthier-Villars Kleiner, Israel (1986), "The evolution of group theory: a brief survey", Mathematics Magazine, 59 (4): 195–215, doi:10.2307/2690312, ISSN 0025-570X, JSTOR 2690312, MR 0863090 Kleiner, Israel (2007). Kleiner, Israel (ed.). A history of abstract algebra. Boston, Mass.: Birkhäuser. doi:10.1007/978-0-8176-4685-1. ISBN 978-0-8176-4685-1. Smith, David Eugene (1906), History of Modern Mathematics, Mathematical Monographs, No. 1 Wussing, Hans (2007), The Genesis of the Abstract Group Concept: A Contribution to the History of the Origin of Abstract Group Theory, New York: Dover Publications, ISBN 978-0-486-45868-7 du Sautoy, Marcus (2008), Finding Moonshine, London: Fourth Estate, ISBN 978-0-00-721461-7
Wikipedia
The Goldhaber Experiment, named after Maurice Goldhaber, was a particle physics experiment carried out in 1957 at Brookhaven National Laboratory. It was the first experiment to determine the helicity of the neutrino, following the discovery of parity violation in the weak interaction just a year earlier. Background The experiment used a 152Eu nucleus in an isomeric (metastable) state, which decays via K-capture, emitting a neutrino: 152m Eu + e − → 152 Sm ∗ + ν e + 950 keV {\displaystyle {}^{\text{152m}}{\text{Eu}}+{\text{e}}^{-}\rightarrow {}^{\text{152}}{\text{Sm}}^{*}+\nu _{e}+950\,{\text{keV}}} After the decay, the daughter nucleus 152Sm is in an excited state, indicated by the asterisk. This excitation energy is released shortly afterward through the emission of a gamma ray: 152 Sm ∗ → 152 Sm + γ + 961 keV {\displaystyle {}^{\text{152}}{\text{Sm}}^{*}\rightarrow {}^{\text{152}}{\text{Sm}}+\gamma +961\,{\text{keV}}} The de-excitation energy is shared between the recoil of the Sm nucleus and the gamma ray. The electron capture and subsequent de-excitation satisfy several conditions necessary for the experiment to work: Spin sequence: 0− → 1− → 0+ Approximately equal decay energies in both transitions (difference of about 1%) Very short lifetime of the excited 152Sm* (τ = 3×10−14 s) During the planning phase, Goldhaber was initially unsure whether any isotope even existed that would meet all these criteria. Determining the direction of the neutrino The detection of gamma rays from the Sm decay relies on resonant scattering of the gamma photons at a Sm2O3 target arranged in a ring around the detector. Lead shielding prevents decay photons from the 152mEu source from directly reaching the detector. 
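The division of the de-excitation energy between nuclear recoil and photon can be checked with a one-line estimate. The sketch below applies the standard recoil formula E²/(2Mc²) with a 152 u rest mass for samarium; the constants are approximate and the variable names are ours:

```python
# Estimate the recoil energy of the 152Sm nucleus when it emits a 961 keV gamma ray.
# Momentum conservation gives the nucleus momentum E/c, hence E_recoil = E^2 / (2 M c^2).

E_GAMMA_EV = 961e3           # gamma-ray energy in eV
AMU_EV = 931.494e6           # atomic mass unit in eV/c^2
M_SM152_EV = 152 * AMU_EV    # approximate 152Sm rest energy (binding energy neglected)

recoil_ev = E_GAMMA_EV**2 / (2 * M_SM152_EV)
print(f"recoil energy ≈ {recoil_ev:.2f} eV")  # prints 3.26
```

The few-eV recoil is orders of magnitude larger than a typical natural linewidth, which is why ordinary resonant absorption is spoiled and the Doppler compensation described next is needed.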
Resonant scattering occurs via nuclear resonance absorption of the photon by a Sm nucleus, followed by spontaneous emission: γ + 152 Sm → 152 Sm ∗ → 152 Sm + γ {\displaystyle \gamma +{}^{\text{152}}{\text{Sm}}\rightarrow {}^{\text{152}}{\text{Sm}}^{*}\rightarrow {}^{\text{152}}{\text{Sm}}+\gamma } Under normal conditions, resonant absorption by samarium would not be possible, since the photon emitted by 152Sm* after 152Eu decay doesn’t carry the full 961 keV energy due to nuclear recoil. The recoil energy is about 3.2 eV, while the natural linewidth is only about 10−2 eV, making the photon’s energy too low for absorption. However, in this case, the 152Sm* atom is not at rest but is moving due to the prior emission of the neutrino. Because of the very short lifetime, no relaxation via interactions with the crystal lattice occurs. Since the emitted neutrino’s energy is approximately equal to that of the gamma transition, their energies can compensate via Doppler shifting if the gamma ray and the neutrino are emitted in opposite directions (as shown in the schematic). When emitted 180° apart, the energy mismatch of the gamma ray is only about 10−4 eV—well within the natural linewidth. This “trick” allows resonant absorption, but only if the neutrino was emitted upwards. Otherwise, the energy difference is too large and the gamma rays do not reach the detector. This setup thus gives information about the emission direction of the neutrino. Determining neutrino helicity The helicity of the neutrino can be inferred from the spin structure of the decay, taking angular momentum conservation into account. In the following description, single arrows indicate particle momenta, and each double arrow represents a ½-unit of spin. In the decay of 152mEu, the initial nucleus is in a 0− state. Since the transition is a pure Gamow-Teller decay, the daughter nucleus ends up in a 1− state. 
The total angular momentum of the initial state is ½, since the nucleus has spin 0 and the captured K-shell electron has orbital angular momentum ℓ = 0 and spin ½. Because the neutrino carries away spin ½, the daughter nucleus’s spin must be oriented opposite to that of the neutrino. This allows two possible decay configurations, depending on the spin alignment: ⇐ ⇒ ⇐⇐ ⇒ ⇐ ⇒⇒ 152 Eu ⟶ ν e + 152 Sm ∗ or 152 Eu ⟶ ν e + 152 Sm ∗ ⟵ ⟶ ⟵ ⟶ {\displaystyle {\begin{array}{ccccccccccc}\Leftarrow &&\Rightarrow &&\Leftarrow \Leftarrow &&\Rightarrow &&\Leftarrow &&\Rightarrow \Rightarrow \\{}^{152}{\text{Eu}}&\longrightarrow &\nu _{e}&+&{}^{152}{\text{Sm}}^{*}&\quad {\text{or}}\quad &{}^{152}{\text{Eu}}&\longrightarrow &\nu _{e}&+&{}^{152}{\text{Sm}}^{*}\\&&\longleftarrow &&\longrightarrow &&&&\longleftarrow &&\longrightarrow \end{array}}} This implies that the neutrino in the lab frame has the same helicity as the 152Sm* daughter nucleus: −1 in the first case, +1 in the second. In the subsequent gamma emission, the photon carries quantum numbers 1−. The 152Sm nucleus (Z = 62, N = 90) is an even-even nucleus, meaning it is in a 0+ state. 
For emission at 180° relative to the neutrino emission direction: ⇐⇐ ⇐⇐ ⇒⇒ ⇒⇒ 152 Sm ∗ ⟶ 152 Sm + γ or 152 Sm ∗ ⟶ 152 Sm + γ ⟶ ⟶ ⟶ ⟶ {\displaystyle {\begin{array}{ccccccccccc}\Leftarrow \Leftarrow &&&&\Leftarrow \Leftarrow &&\Rightarrow \Rightarrow &&&&\Rightarrow \Rightarrow \\{}^{152}{\text{Sm}}^{*}&\longrightarrow &{}^{152}{\text{Sm}}&+&\gamma &\quad {\text{or}}\quad &{}^{152}{\text{Sm}}^{*}&\longrightarrow &{}^{152}{\text{Sm}}&+&\gamma \\\longrightarrow &&&&\longrightarrow &&\longrightarrow &&&&\longrightarrow \\\end{array}}} In resonant scattering, the helicity of the photon corresponds to that of the 152Sm* nucleus, and thus to that of the neutrino: h ( γ ) = h ( ν ) {\displaystyle h(\gamma )=h(\nu )} The photon’s helicity can now be determined from the cross-section for Compton scattering, which depends strongly on the polarization of the scattering medium. This is implemented in the experiment by placing a magnetized iron block between the source and the absorber (see schematic). About 7–8% of the electrons in the iron are polarized. A photon scattered in the iron loses some energy, which prevents resonant absorption. If there is a preferred photon helicity (and hence a preferred neutrino helicity), the counting rate should vary depending on the magnetization direction of the iron block due to the difference in scattering efficiency. (Note: Only neutrinos emitted upwards result in photon detection in the setup!) Indeed, comparison of counting rates yields a neutrino helicity of: h ( ν e ) = − 1.0 ± 0.3 {\displaystyle h(\nu _{e})=-1.0\pm 0.3} . Results The experiment demonstrated that neutrinos in nature are exclusively left-handed, while antineutrinos are right-handed. This is a striking confirmation of the V-A theory, which predicts the parity violation of the weak interaction.
Wikipedia
In fluid dynamics, Rayleigh flow (after English physicist Lord Rayleigh) refers to frictionless, non-adiabatic fluid flow through a constant-area duct where the effect of heat transfer is considered. Compressibility effects often come into consideration, although the Rayleigh flow model certainly also applies to incompressible flow. For this model, the duct area remains constant and no mass is added within the duct. Therefore, unlike Fanno flow, the stagnation temperature is a variable. The heat addition causes a decrease in stagnation pressure, which is known as the Rayleigh effect and is critical in the design of combustion systems. Heat addition will cause both supersonic and subsonic Mach numbers to approach Mach 1, resulting in choked flow. Conversely, heat rejection decreases a subsonic Mach number and increases a supersonic Mach number along the duct. It can be shown that for calorically perfect flows the maximum entropy occurs at M = 1. Theory The Rayleigh flow model begins with a differential equation that relates the change in Mach number with the change in stagnation temperature, T0. The differential equation is shown below. d M 2 M 2 = 1 + γ M 2 1 − M 2 ( 1 + γ − 1 2 M 2 ) d T 0 T 0 {\displaystyle \ {\frac {dM^{2}}{M^{2}}}={\frac {1+\gamma M^{2}}{1-M^{2}}}\left(1+{\frac {\gamma -1}{2}}M^{2}\right){\frac {dT_{0}}{T_{0}}}} Solving the differential equation leads to the relation shown below, where T0* is the stagnation temperature at the throat location of the duct which is required for thermally choking the flow. T 0 T 0 ∗ = 2 ( γ + 1 ) M 2 ( 1 + γ M 2 ) 2 ( 1 + γ − 1 2 M 2 ) {\displaystyle \ {\frac {T_{0}}{T_{0}^{*}}}={\frac {2\left(\gamma +1\right)M^{2}}{\left(1+\gamma M^{2}\right)^{2}}}\left(1+{\frac {\gamma -1}{2}}M^{2}\right)} These values are significant in the design of combustion systems. 
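The stagnation-temperature relation above is straightforward to evaluate numerically. A minimal sketch, assuming γ = 1.4 (air); the function name is our own:

```python
def t0_ratio(M, gamma=1.4):
    """Rayleigh flow: T0 / T0* as a function of Mach number M."""
    return (2 * (gamma + 1) * M**2 / (1 + gamma * M**2)**2) * (1 + (gamma - 1) / 2 * M**2)

# The ratio reaches 1 exactly at the thermal-choking point M = 1 and is
# below 1 on both the subsonic and supersonic branches.
print(round(t0_ratio(1.0), 6))  # 1.0
print(round(t0_ratio(0.5), 6))  # 0.691358
print(round(t0_ratio(2.0), 6))  # 0.793388
```

Both branches approaching 1 from below is the numerical face of the statement that heat addition drives subsonic and supersonic flows alike toward M = 1.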
For example, if a turbojet combustion chamber has a maximum temperature of T0* = 2000 K, T0 and M at the entrance to the combustion chamber must be selected so thermal choking does not occur, which will limit the mass flow rate of air into the engine and decrease thrust. For the Rayleigh flow model, the dimensionless change in entropy relation is shown below. Δ S = Δ s c p = ln ⁡ [ M 2 ( γ + 1 1 + γ M 2 ) γ + 1 γ ] {\displaystyle \ \Delta S={\frac {\Delta s}{c_{p}}}=\ln \left[M^{2}\left({\frac {\gamma +1}{1+\gamma M^{2}}}\right)^{\frac {\gamma +1}{\gamma }}\right]} The above equation can be used to plot the Rayleigh line on a Mach number versus ΔS graph, but the dimensionless enthalpy, H, versus ΔS diagram is more often used. The dimensionless enthalpy equation is shown below with an equation relating the static temperature with its value at the choke location for a calorically perfect gas where the heat capacity at constant pressure, cp, remains constant. H = h h ∗ = c p T c p T ∗ = T T ∗ T T ∗ = ( γ + 1 ) 2 M 2 ( 1 + γ M 2 ) 2 {\displaystyle {\begin{aligned}H&={\frac {h}{h^{*}}}={\frac {c_{p}T}{c_{p}T^{*}}}={\frac {T}{T^{*}}}\\{\frac {T}{T^{*}}}&={\frac {\left(\gamma +1\right)^{2}M^{2}}{\left(1+\gamma M^{2}\right)^{2}}}\end{aligned}}} The above equation can be manipulated to solve for M as a function of H. However, due to the form of the T/T* equation, a complicated multi-root relation is formed for M = M(T/T*). Instead, M can be chosen as an independent variable where ΔS and H can be matched up in a chart as shown in Figure 1. Figure 1 shows that heating will increase an upstream, subsonic Mach number until M = 1.0 and the flow chokes. Conversely, adding heat to a duct with an upstream, supersonic Mach number will cause the Mach number to decrease until the flow chokes. Cooling produces the opposite result for each of those two cases. The Rayleigh flow model reaches maximum entropy at M = 1.0. For subsonic flow, the maximum value of H occurs at M = 0.845. 
This indicates that cooling, instead of heating, causes the Mach number to move from 0.845 to 1.0. This is not necessarily correct, as the stagnation temperature always increases to move the flow from a subsonic Mach number to M = 1, but from M = 0.845 to M = 1.0 the flow accelerates faster than heat is added to it. Therefore, this is a situation where heat is added but T/T* decreases in that region. Additional relations The area and mass flow rate are held constant for Rayleigh flow. Unlike Fanno flow, the Fanning friction factor, f, remains constant. These relations are shown below with the * symbol representing the throat location where choking can occur. A = A ∗ = constant m ˙ = m ˙ ∗ = constant {\displaystyle {\begin{aligned}A&=A^{*}={\mbox{constant}}\\{\dot {m}}&={\dot {m}}^{*}={\mbox{constant}}\\\end{aligned}}} Differential equations can also be developed and solved to describe Rayleigh flow property ratios with respect to the values at the choking location. The ratios for the pressure, density, static temperature, velocity and stagnation pressure are shown below, respectively. They are represented graphically along with the stagnation temperature ratio equation from the previous section. A stagnation property contains a '0' subscript. 
p p ∗ = γ + 1 1 + γ M 2 ρ ρ ∗ = 1 + γ M 2 ( γ + 1 ) M 2 T T ∗ = ( γ + 1 ) 2 M 2 ( 1 + γ M 2 ) 2 v v ∗ = ( γ + 1 ) M 2 1 + γ M 2 p 0 p 0 ∗ = γ + 1 1 + γ M 2 [ ( 2 γ + 1 ) ( 1 + γ − 1 2 M 2 ) ] γ γ − 1 {\displaystyle {\begin{aligned}{\frac {p}{p^{*}}}&={\frac {\gamma +1}{1+\gamma M^{2}}}\\{\frac {\rho }{\rho ^{*}}}&={\frac {1+\gamma M^{2}}{\left(\gamma +1\right)M^{2}}}\\{\frac {T}{T^{*}}}&={\frac {\left(\gamma +1\right)^{2}M^{2}}{\left(1+\gamma M^{2}\right)^{2}}}\\{\frac {v}{v^{*}}}&={\frac {\left(\gamma +1\right)M^{2}}{1+\gamma M^{2}}}\\{\frac {p_{0}}{p_{0}^{*}}}&={\frac {\gamma +1}{1+\gamma M^{2}}}\left[\left({\frac {2}{\gamma +1}}\right)\left(1+{\frac {\gamma -1}{2}}M^{2}\right)\right]^{\frac {\gamma }{\gamma -1}}\end{aligned}}} Applications The Rayleigh flow model has many analytical uses, most notably involving aircraft engines. For instance, the combustion chambers inside turbojet engines usually have a constant area and the fuel mass addition is negligible. These properties make the Rayleigh flow model applicable for heat addition to the flow through combustion, assuming the heat addition does not result in dissociation of the air-fuel mixture. Producing a shock wave inside the combustion chamber of an engine due to thermal choking is very undesirable due to the decrease in mass flow rate and thrust. Therefore, the Rayleigh flow model is critical for an initial design of the duct geometry and combustion temperature for an engine. The Rayleigh flow model is also used extensively with the Fanno flow model. These two models intersect at points on the enthalpy-entropy and Mach number-entropy diagrams, which is meaningful for many applications. However, the entropy values for each model are not equal at the sonic state. The change in entropy is 0 at M = 1 for each model, but the previous statement means the change in entropy from the same arbitrary point to the sonic point is different for the Fanno and Rayleigh flow models. 
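The property ratios above are easy to evaluate directly. The sketch below (γ = 1.4; the helper name is ours) checks that every ratio equals 1 at the sonic state and numerically locates the maximum of T/T*, which falls at M = 1/√γ ≈ 0.845, consistent with the subsonic maximum of H noted earlier:

```python
import math

GAMMA = 1.4

def rayleigh_ratios(M, g=GAMMA):
    """Static and stagnation property ratios relative to the sonic (*) state."""
    d = 1 + g * M**2
    return {
        "p/p*":     (g + 1) / d,
        "rho/rho*": d / ((g + 1) * M**2),
        "T/T*":     (g + 1)**2 * M**2 / d**2,
        "v/v*":     (g + 1) * M**2 / d,
        "p0/p0*":   (g + 1) / d * ((2 / (g + 1)) * (1 + (g - 1) / 2 * M**2))**(g / (g - 1)),
    }

# All ratios equal 1 at the choking point M = 1.
assert all(abs(r - 1) < 1e-9 for r in rayleigh_ratios(1.0).values())

# Coarse scan for the maximum of T/T*; analytically it sits at M = 1/sqrt(gamma).
Ms = [i / 10000 for i in range(1, 20000)]
M_max = max(Ms, key=lambda M: rayleigh_ratios(M)["T/T*"])
print(M_max, 1 / math.sqrt(GAMMA))  # both ≈ 0.845
```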
If initial values of si and Mi are defined, a new equation for dimensionless entropy versus Mach number can be defined for each model. These equations are shown below for Fanno and Rayleigh flow, respectively. Δ S F = s − s i c p = ln [ ( M M i ) γ − 1 γ ( 1 + γ − 1 2 M i 2 1 + γ − 1 2 M 2 ) γ + 1 2 γ ] Δ S R = s − s i c p = ln [ ( M M i ) 2 ( 1 + γ M i 2 1 + γ M 2 ) γ + 1 γ ] {\displaystyle {\begin{aligned}\Delta S_{F}&={\frac {s-s_{i}}{c_{p}}}=\ln \left[\left({\frac {M}{M_{i}}}\right)^{\frac {\gamma -1}{\gamma }}\left({\frac {1+{\frac {\gamma -1}{2}}M_{i}^{2}}{1+{\frac {\gamma -1}{2}}M^{2}}}\right)^{\frac {\gamma +1}{2\gamma }}\right]\\\Delta S_{R}&={\frac {s-s_{i}}{c_{p}}}=\ln \left[\left({\frac {M}{M_{i}}}\right)^{2}\left({\frac {1+\gamma M_{i}^{2}}{1+\gamma M^{2}}}\right)^{\frac {\gamma +1}{\gamma }}\right]\end{aligned}}} Figure 3 shows the Rayleigh and Fanno lines intersecting with each other for initial conditions of si = 0 and Mi = 3.0. The intersection points are calculated by equating the new dimensionless entropy equations with each other, resulting in the relation below. ( 1 + γ − 1 2 M i 2 ) [ M i 2 ( 1 + γ M i 2 ) 2 ] = ( 1 + γ − 1 2 M 2 ) [ M 2 ( 1 + γ M 2 ) 2 ] {\displaystyle \ \left(1+{\frac {\gamma -1}{2}}M_{i}^{2}\right)\left[{\frac {M_{i}^{2}}{\left(1+\gamma M_{i}^{2}\right)^{2}}}\right]=\left(1+{\frac {\gamma -1}{2}}M^{2}\right)\left[{\frac {M^{2}}{\left(1+\gamma M^{2}\right)^{2}}}\right]} The intersection points occur at the given initial Mach number and its post-normal shock value. For Figure 3, these values are M = 3.0 and 0.4752, which can be found in the normal shock tables listed in most compressible flow textbooks. A given flow with a constant duct area can switch between the Rayleigh and Fanno models at these points. See also Fanno flow Mass injection flow Isentropic process Isothermal flow Gas dynamics Compressible flow Choked flow Enthalpy Entropy References Strutt, John William (Lord Rayleigh) (1910). 
"Aerial plane waves of finite amplitudes". Proc. R. Soc. Lond. A. 84 (570): 247–284. Bibcode:1910RSPSA..84..247R. doi:10.1098/rspa.1910.0075; also in: Dover, ed. (1964). Scientific papers of Lord Rayleigh (John William Strutt). Vol. 5. pp. 573–610. Zucker, Robert D.; Biblarz O. (2002). "Chapter 10. Rayleigh flow". Fundamentals of Gas Dynamics. John Wiley & Sons. pp. 277–313. ISBN 0-471-05967-6. Shapiro, Ascher H. (1953). The Dynamics and Thermodynamics of Compressible Fluid Flow, Volume 1. Ronald Press. ISBN 978-0-471-06691-0. Hodge, B. K.; Koenig K. (1995). Compressible Fluid Dynamics with Personal Computer Applications. Prentice Hall. ISBN 0-13-308552-X. Emanuel, G. (1986). "Chapter 8.2 Rayleigh flow". Gasdynamics: Theory and Applications. AIAA. pp. 121–133. ISBN 0-930403-12-6. External links Purdue University Rayleigh flow calculator University of Kentucky Rayleigh flow Webcalculator
Wikipedia
In the study of the arithmetic of elliptic curves, the j-line over a ring R is the coarse moduli scheme attached to the moduli problem sending a ring R {\displaystyle R} to the set of isomorphism classes of elliptic curves over R {\displaystyle R} . Since elliptic curves over the complex numbers are isomorphic (over an algebraic closure) if and only if their j {\displaystyle j} -invariants agree, the affine space A j 1 {\displaystyle \mathbb {A} _{j}^{1}} parameterizing j-invariants of elliptic curves yields a coarse moduli space. However, this fails to be a fine moduli space due to the presence of elliptic curves with automorphisms, necessitating the construction of the Moduli stack of elliptic curves. This is related to the congruence subgroup Γ ( 1 ) {\displaystyle \Gamma (1)} in the following way: M ( [ Γ ( 1 ) ] ) = S p e c ( R [ j ] ) {\displaystyle M([\Gamma (1)])=\mathrm {Spec} (R[j])} Here the j-invariant is normalized such that j = 0 {\displaystyle j=0} has complex multiplication by Z [ ζ 3 ] {\displaystyle \mathbb {Z} [\zeta _{3}]} , and j = 1728 {\displaystyle j=1728} has complex multiplication by Z [ i ] {\displaystyle \mathbb {Z} [i]} . The j-line can be seen as giving a coordinatization of the classical modular curve of level 1, X 0 ( 1 ) {\displaystyle X_{0}(1)} , which is isomorphic to the complex projective line P / C 1 {\displaystyle \mathbb {P} _{/\mathbb {C} }^{1}} .
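For a curve in short Weierstrass form y² = x³ + ax + b (over a field of characteristic not 2 or 3), the j-invariant has the standard closed form j = 1728 · 4a³/(4a³ + 27b²). A minimal sketch (the helper name is ours) recovers the two normalizations quoted above:

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of y^2 = x^3 + a*x + b (characteristic != 2, 3)."""
    disc = -16 * (4 * a**3 + 27 * b**2)   # discriminant; zero means a singular curve
    if disc == 0:
        raise ValueError("singular curve")
    return Fraction(1728 * 4 * a**3, 4 * a**3 + 27 * b**2)

print(j_invariant(0, 1))  # 0    -- y^2 = x^3 + 1, CM by Z[zeta_3]
print(j_invariant(1, 0))  # 1728 -- y^2 = x^3 + x, CM by Z[i]
```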
Wikipedia
Graphmatica is a graphing program created by Keith Hertzer, a graduate of the University of California, Berkeley. It runs on Microsoft Windows (all versions), Mac OS X 10.5 and higher, and iOS 5.0 and higher. Graphmatica for Windows and Macs is distributed free of charge for evaluation purposes. After one month, non-commercial users are asked to pay a $25 licensing fee. Other licensing plans are available for commercial users. Graphmatica for iOS is distributed via the Apple App Store. Capabilities Graphmatica can graph Cartesian functions, relations, and inequalities, plus polar, parametric and ordinary differential equations. See also C.a.R. KmPlot References External links Official website
Wikipedia
The three-process view is a psychological term coined by Janet E. Davidson and Robert Sternberg. According to this concept, there are three kinds of insight: selective-encoding, selective-comparison, and selective-combination. Selective-encoding insight – Distinguishing what is important in a problem and what is irrelevant. Selective-comparison insight – Identifying information by finding a connection between acquired knowledge and experience. Selective-combination insight – Identifying a problem through understanding the different components and putting everything together.
Wikipedia
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process. History The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility. Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (p. 64). In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles A 1 ⟶ A 2 ⟶ ⋯ ⟶ A n ⟶ A 1 {\displaystyle {\ce {A1->A2->\cdots ->A_{\mathit {n}}->A1}}} are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry. The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state. Now, the principle of detailed balance is a standard part of the university courses in statistical mechanics, physical chemistry, chemical and physical kinetics. 
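The role detailed balance plays in the Metropolis–Hastings algorithm can be shown in a few lines. The sketch below uses a made-up three-state target distribution and a symmetric proposal; it builds the Metropolis transition matrix and verifies that π_i P_ij = π_j P_ji holds by construction:

```python
# Metropolis rule with a symmetric proposal: accept i -> j with min(1, pi_j / pi_i).
# Detailed balance pi_i * P_ij = pi_j * P_ji then holds by construction.

pi = [0.5, 0.3, 0.2]          # target (equilibrium) distribution, chosen for illustration
n = len(pi)
q = 1.0 / (n - 1)             # symmetric proposal: pick one of the other states uniformly

P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            P[i][j] = q * min(1.0, pi[j] / pi[i])
    P[i][i] = 1.0 - sum(P[i])  # stay put when the proposed move is rejected

for i in range(n):
    for j in range(n):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
print("detailed balance holds")
```

Because detailed balance implies stationarity, this guarantees that the chain leaves π invariant, which is exactly how the condition is used in Markov chain Monte Carlo.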
Microscopic background The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction ∑ i α i A i ⟶ ∑ j β j B j {\displaystyle \sum _{i}\alpha _{i}{\ce {A}}_{i}{\ce {->}}\sum _{j}\beta _{j}{\ce {B}}_{j}} transforms into ∑ j β j B j ⟶ ∑ i α i A i {\displaystyle \sum _{j}\beta _{j}{\ce {B}}_{j}{\ce {->}}\sum _{i}\alpha _{i}{\ce {A}}_{i}} and conversely. (Here, A i , B j {\displaystyle {\ce {A}}_{i},{\ce {B}}_{j}} are symbols of components or states, α i , β j ≥ 0 {\displaystyle \alpha _{i},\beta _{j}\geq 0} are coefficients). The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process. This reasoning is based on three assumptions: A i {\displaystyle {\ce {A}}_{i}} does not change under time reversal; Equilibrium is invariant under time reversal; The macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events. Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as A v + A w ⟶ A v ′ + A w ′ {\displaystyle {\ce {{A_{\mathit {v}}}+A_{\mathit {w}}->{A_{\mathit {v'}}}+A_{\mathit {w'}}}}} , where A v {\displaystyle {\ce {A}}_{v}} is a particle with velocity v. Under time reversal A v {\displaystyle {\ce {A}}_{v}} transforms into A − v {\displaystyle {\ce {A}}_{-v}} . Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of collisions' dynamics, not just T-invariance. 
Indeed, after the time reversal the collision A v + A w ⟶ A v ′ + A w ′ {\displaystyle {\ce {{A_{\mathit {v}}}+A_{\mathit {w}}->{A_{\mathit {v'}}}+A_{\mathit {w'}}}}} , transforms into A − v ′ + A − w ′ ⟶ A − v + A − w {\displaystyle {\ce {{A_{\mathit {-v'}}}+A_{\mathit {-w'}}->{A_{\mathit {-v}}}+A_{\mathit {-w}}}}} . For the detailed balance we need transformation into A v ′ + A w ′ ⟶ A v + A w {\displaystyle {\ce {{A_{\mathit {v'}}}+A_{\mathit {w'}}->{A_{\mathit {v}}}+A_{\mathit {w}}}}} . For this purpose, we need to apply additionally the space reversal P. Therefore, for the detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed. Equilibrium may be not T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by the spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance. If different macroscopic processes are sampled from the same elementary microscopic events then macroscopic detailed balance may be violated even when microscopic detailed balance holds. Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear. Detailed balance Reversibility A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations π i P i j = π j P j i , {\displaystyle \pi _{i}P_{ij}=\pi _{j}P_{ji}\,,} where Pij is the Markov transition probability from state i to state j, i.e. Pij = P(Xt = j | Xt − 1 = i), and πi and πj are the equilibrium probabilities of being in states i and j, respectively. When Pr(Xt−1 = i) = πi for all i, this is equivalent to the joint probability matrix, Pr(Xt−1 = i, Xt = j) being symmetric in i and j; or symmetric in t − 1 and t. 
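A concrete discrete example of the detailed balance equations: for a random walk on a weighted undirected graph, P(i,j) = w_ij / Σ_k w_ik, and the distribution π_i ∝ Σ_k w_ik is stationary and reversible. A minimal sketch with toy edge weights (the weights are our own illustration):

```python
# Random walk on a weighted undirected graph: pi_i is proportional to the
# total edge weight at vertex i, and detailed balance pi_i P_ij = w_ij / W holds.
w = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]          # symmetric edge weights (toy example)
n = len(w)

deg = [sum(row) for row in w]
total = sum(deg)
pi = [d / total for d in deg]                                  # stationary distribution
P = [[w[i][j] / deg[i] for j in range(n)] for i in range(n)]   # transition matrix

# pi_i P_ij = w_ij / total = w_ji / total = pi_j P_ji, since w is symmetric.
for i in range(n):
    for j in range(n):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
print("reversible")
```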
The definition carries over straightforwardly to continuous variables, where π becomes a probability density, and P(s′, s) a transition kernel probability density from state s′ to state s: π ( s ′ ) P ( s ′ , s ) = π ( s ) P ( s , s ′ ) . {\displaystyle \pi (s')P(s',s)=\pi (s)P(s,s')\,.} The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance. Transition matrices that are symmetric (Pij = Pji or P(s′, s) = P(s, s′)) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution. Kolmogorov's criterion Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions. For example, it implies that, for all a, b and c, P ( a , b ) P ( b , c ) P ( c , a ) = P ( a , c ) P ( c , b ) P ( b , a ) . {\displaystyle P(a,b)P(b,c)P(c,a)=P(a,c)P(c,b)P(b,a)\,.} For example, if we have a Markov chain with three states such that only these transitions are possible: A → B , B → C , C → A , B → A {\displaystyle A\to B,B\to C,C\to A,B\to A} , then this chain violates Kolmogorov's criterion. Closest reversible Markov chain For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states. For a Markov transition matrix and a stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem. 
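Kolmogorov's criterion is easy to test numerically. The sketch below evaluates the loop products in both directions for the three-state example given earlier (only A→B, B→C, C→A and B→A possible); the specific transition probabilities are made up for illustration:

```python
# Transition probabilities for states A=0, B=1, C=2.
# Only A->B, B->C, C->A, B->A occur; A->C and C->B are impossible (probability 0).
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]

def loop_products(P, cycle):
    """Product of transition probabilities around a cycle, in both directions."""
    fwd = bwd = 1.0
    k = len(cycle)
    for t in range(k):
        a, b = cycle[t], cycle[(t + 1) % k]
        fwd *= P[a][b]
        bwd *= P[b][a]
    return fwd, bwd

fwd, bwd = loop_products(P, (0, 1, 2))  # cycle A -> B -> C -> A
print(fwd, bwd)  # 0.5 0.0 -- the two directions differ, so the chain is not reversible
```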
Detailed balance and entropy increase For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance. Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle A 1 ⟶ A 2 ⟶ A 3 ⟶ A 1 {\displaystyle {\ce {A1 -> A2 -> A3 -> A1}}} , entropy production is positive but the principle of detailed balance does not hold. Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics. These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules. Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles) or the "semi-detailed balance" or the "complex balance". 
In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain the remarkable generalization of the detailed balance. Wegscheider's conditions for the generalized mass action law In chemical kinetics, the elementary reactions are represented by the stoichiometric equations ∑ i α r i A i ⟶ ∑ j β r j A j ( r = 1 , … , m ) , {\displaystyle \sum _{i}\alpha _{ri}{\ce {A}}_{i}{\ce {->}}\sum _{j}\beta _{rj}{\ce {A}}_{j}\;\;(r=1,\ldots ,m)\,,} where A i {\displaystyle {\ce {A}}_{i}} are the components and α r i , β r j ≥ 0 {\displaystyle \alpha _{ri},\beta _{rj}\geq 0} are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to the systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism. The stoichiometric matrix is Γ = ( γ r i ) {\displaystyle {\boldsymbol {\Gamma }}=(\gamma _{ri})} , γ r i = β r i − α r i {\displaystyle \gamma _{ri}=\beta _{ri}-\alpha _{ri}} (gain minus loss). This matrix need not be square. The stoichiometric vector γ r {\displaystyle \gamma _{r}} is the rth row of Γ {\displaystyle {\boldsymbol {\Gamma }}} with coordinates γ r i = β r i − α r i {\displaystyle \gamma _{ri}=\beta _{ri}-\alpha _{ri}} . According to the generalized mass action law, the reaction rate for an elementary reaction is w r = k r ∏ i = 1 n a i α r i , {\displaystyle w_{r}=k_{r}\prod _{i=1}^{n}a_{i}^{\alpha _{ri}}\,,} where a i ≥ 0 {\displaystyle a_{i}\geq 0} is the activity (the "effective concentration") of A i {\displaystyle A_{i}} . The reaction mechanism includes reactions with the reaction rate constants k r > 0 {\displaystyle k_{r}>0} . 
For each r the following notations are used: k r + = k r {\displaystyle k_{r}^{+}=k_{r}} w r + = w r {\displaystyle w_{r}^{+}=w_{r}} k r − {\displaystyle k_{r}^{-}} is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; w r − {\displaystyle w_{r}^{-}} is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, K r = k r + / k r − {\displaystyle K_{r}=k_{r}^{+}/k_{r}^{-}} is the equilibrium constant. The principle of detailed balance for the generalized mass action law is: For given values k r {\displaystyle k_{r}} there exists a positive equilibrium a i e q > 0 {\displaystyle a_{i}^{\rm {eq}}>0} that satisfies detailed balance, that is, w r + = w r − {\displaystyle w_{r}^{+}=w_{r}^{-}} . This means that the system of linear detailed balance equations ∑ i γ r i x i = ln ⁡ k r + − ln ⁡ k r − = ln ⁡ K r {\displaystyle \sum _{i}\gamma _{ri}x_{i}=\ln k_{r}^{+}-\ln k_{r}^{-}=\ln K_{r}} is solvable ( x i = ln ⁡ a i e q {\displaystyle x_{i}=\ln a_{i}^{\rm {eq}}} ). The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium a i e q > 0 {\displaystyle a_{i}^{\rm {eq}}>0} with detailed balance (see, for example, the textbook). Two conditions are sufficient and necessary for solvability of the system of detailed balance equations: If k r + > 0 {\displaystyle k_{r}^{+}>0} then k r − > 0 {\displaystyle k_{r}^{-}>0} and, conversely, if k r − > 0 {\displaystyle k_{r}^{-}>0} then k r + > 0 {\displaystyle k_{r}^{+}>0} (reversibility); For any solution λ = ( λ r ) {\displaystyle {\boldsymbol {\lambda }}=(\lambda _{r})} of the system λ Γ = 0 ( i.e. ∑ r λ r γ r i = 0 for all i ) {\displaystyle {\boldsymbol {\lambda \Gamma }}=0\;\;\left({\mbox{i.e.}}\;\;\sum _{r}\lambda _{r}\gamma _{ri}=0\;\;{\mbox{for all}}\;\;i\right)} the Wegscheider's identity holds: ∏ r = 1 m ( k r + ) λ r = ∏ r = 1 m ( k r − ) λ r . 
{\displaystyle \prod _{r=1}^{m}(k_{r}^{+})^{\lambda _{r}}=\prod _{r=1}^{m}(k_{r}^{-})^{\lambda _{r}}\,.} Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system λ Γ = 0 {\displaystyle {\boldsymbol {\lambda \Gamma }}=0} . In particular, for any cycle in the monomolecular (linear) reactions the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition). A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step: A 1 ↽ − − ⇀ A 2 {\displaystyle {\ce {A1 <=> A2}}} A 2 ↽ − − ⇀ A 3 {\displaystyle {\ce {A2 <=> A3}}} A 3 ↽ − − ⇀ A 1 {\displaystyle {\ce {A3 <=> A1}}} A 1 + A 2 ↽ − − ⇀ 2 A 3 {\displaystyle {\ce {{A1}+A2 <=> 2A3}}} There are two nontrivial independent Wegscheider's identities for this system: k 1 + k 2 + k 3 + = k 1 − k 2 − k 3 − {\displaystyle k_{1}^{+}k_{2}^{+}k_{3}^{+}=k_{1}^{-}k_{2}^{-}k_{3}^{-}} and k 3 + k 4 + / k 2 + = k 3 − k 4 − / k 2 − {\displaystyle k_{3}^{+}k_{4}^{+}/k_{2}^{+}=k_{3}^{-}k_{4}^{-}/k_{2}^{-}} They correspond to the following linear relations between the stoichiometric vectors: γ 1 + γ 2 + γ 3 = 0 {\displaystyle \gamma _{1}+\gamma _{2}+\gamma _{3}=0} and γ 3 + γ 4 − γ 2 = 0. {\displaystyle \gamma _{3}+\gamma _{4}-\gamma _{2}=0.} The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors. The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies the relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. 
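The example above can be checked numerically. In the sketch below (all constants are made-up), the forward rate constants are generated from a positive equilibrium via ln Kr = Σi γri ln ai^eq, which guarantees detailed balance, and both Wegscheider identities then hold automatically for any λ in the left null space of the stoichiometric matrix.

```python
import numpy as np

# Stoichiometric vectors for A1<=>A2, A2<=>A3, A3<=>A1, A1+A2<=>2A3.
G = np.array([[-1,  1,  0],
              [ 0, -1,  1],
              [ 1,  0, -1],
              [-1, -1,  2]], float)

# Hypothetical positive equilibrium and reverse constants; setting
# k+ = k- * K_r with ln K_r = sum_i gamma_ri ln a_eq_i enforces
# detailed balance by construction.
a_eq = np.array([2.0, 0.5, 1.5])
k_minus = np.array([1.0, 2.0, 0.5, 3.0])
K = np.exp(G @ np.log(a_eq))
k_plus = k_minus * K

# The two Wegscheider identities, from lambda.Gamma = 0.
for lam in ([1, 1, 1, 0], [0, -1, 1, 1]):
    lam = np.array(lam, float)
    assert np.allclose(lam @ G, 0)              # lambda is in the left null space
    assert np.isclose(np.prod(k_plus ** lam),   # prod (k+)^lambda ...
                      np.prod(k_minus ** lam))  # ... equals prod (k-)^lambda
```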
For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action). Dissipation in systems with detailed balance To describe dynamics of the systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations cj and temperature. For this purpose, use the representation of the activity through the chemical potential: a i = exp ⁡ ( μ i − μ i ⊖ R T ) {\displaystyle a_{i}=\exp \left({\frac {\mu _{i}-\mu _{i}^{\ominus }}{RT}}\right)} where μi is the chemical potential of the species under the conditions of interest, ⁠ μ i ⊖ {\displaystyle \mu _{i}^{\ominus }} ⁠ is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature. The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components cj. For the ideal systems, μ i = R T ln ⁡ c i + μ i ⊖ {\displaystyle \mu _{i}=RT\ln c_{i}+\mu _{i}^{\ominus }} and a j = c j {\displaystyle a_{j}=c_{j}} the activity is the concentration and the generalized mass action law is the usual law of mass action. Consider a system in isothermal (T=const) isochoric (the volume V=const) condition. For these conditions, the Helmholtz free energy ⁠ F ( T , V , N ) {\displaystyle F(T,V,N)} ⁠ measures the “useful” work obtainable from a system. It is a functions of the temperature T, the volume V and the amounts of chemical components Nj (usually measured in moles), N is the vector with components Nj. For the ideal systems, F = R T ∑ i N i ( ln ⁡ ( N i V ) − 1 + μ i ⊖ ( T ) R T ) . {\displaystyle F=RT\sum _{i}N_{i}\left(\ln \left({\frac {N_{i}}{V}}\right)-1+{\frac {\mu _{i}^{\ominus }(T)}{RT}}\right).} The chemical potential is a partial derivative: μ i = ∂ F ( T , V , N ) / ∂ N i {\displaystyle \mu _{i}=\partial F(T,V,N)/\partial N_{i}} . The chemical kinetic equations are d N i d t = V ∑ r γ r i ( w r + − w r − ) . 
{\displaystyle {\frac {dN_{i}}{dt}}=V\sum _{r}\gamma _{ri}(w_{r}^{+}-w_{r}^{-}).} If the principle of detailed balance is valid then for any value of T there exists a positive point of detailed balance ceq: w r + ( c e q , T ) = w r − ( c e q , T ) = w r e q {\displaystyle w_{r}^{+}(c^{\rm {eq}},T)=w_{r}^{-}(c^{\rm {eq}},T)=w_{r}^{\rm {eq}}} Elementary algebra gives w r + = w r e q exp ⁡ ( ∑ i α r i ( μ i − μ i e q ) R T ) ; w r − = w r e q exp ⁡ ( ∑ i β r i ( μ i − μ i e q ) R T ) ; {\displaystyle w_{r}^{+}=w_{r}^{\rm {eq}}\exp \left(\sum _{i}{\frac {\alpha _{ri}(\mu _{i}-\mu _{i}^{\rm {eq}})}{RT}}\right);\;\;w_{r}^{-}=w_{r}^{\rm {eq}}\exp \left(\sum _{i}{\frac {\beta _{ri}(\mu _{i}-\mu _{i}^{\rm {eq}})}{RT}}\right);} where μ i e q = μ i ( c e q , T ) {\displaystyle \mu _{i}^{\rm {eq}}=\mu _{i}(c^{\rm {eq}},T)} For the dissipation we obtain from these formulas: d F d t = ∑ i ∂ F ( T , V , N ) ∂ N i d N i d t = ∑ i μ i d N i d t = − V R T ∑ r ( ln ⁡ w r + − ln ⁡ w r − ) ( w r + − w r − ) ≤ 0 {\displaystyle {\frac {dF}{dt}}=\sum _{i}{\frac {\partial F(T,V,N)}{\partial N_{i}}}{\frac {dN_{i}}{dt}}=\sum _{i}\mu _{i}{\frac {dN_{i}}{dt}}=-VRT\sum _{r}(\ln w_{r}^{+}-\ln w_{r}^{-})(w_{r}^{+}-w_{r}^{-})\leq 0} The inequality holds because ln is a monotone function and, hence, the expressions ln ⁡ w r + − ln ⁡ w r − {\displaystyle \ln w_{r}^{+}-\ln w_{r}^{-}} and w r + − w r − {\displaystyle w_{r}^{+}-w_{r}^{-}} have always the same sign. Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases, for the isochoric systems with the constant internal energy (isolated systems) the entropy increases as well as for isobaric systems with the constant enthalpy. Onsager reciprocal relations and detailed balance Let the principle of detailed balance be valid. 
Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as: w r + = w r e q ( 1 + ∑ i α r i ( μ i − μ i e q ) R T ) ; w r − = w r e q ( 1 + ∑ i β r i ( μ i − μ i e q ) R T ) ; {\displaystyle w_{r}^{+}=w_{r}^{\rm {eq}}\left(1+\sum _{i}{\frac {\alpha _{ri}(\mu _{i}-\mu _{i}^{\rm {eq}})}{RT}}\right);\;\;w_{r}^{-}=w_{r}^{\rm {eq}}\left(1+\sum _{i}{\frac {\beta _{ri}(\mu _{i}-\mu _{i}^{\rm {eq}})}{RT}}\right);} Therefore, again in the linear response regime near equilibrium, the kinetic equations are ( γ r i = β r i − α r i {\displaystyle \gamma _{ri}=\beta _{ri}-\alpha _{ri}} ): d N i d t = − V ∑ j [ ∑ r w r e q γ r i γ r j ] μ j − μ j e q R T . {\displaystyle {\frac {dN_{i}}{dt}}=-V\sum _{j}\left[\sum _{r}w_{r}^{\rm {eq}}\gamma _{ri}\gamma _{rj}\right]{\frac {\mu _{j}-\mu _{j}^{\rm {eq}}}{RT}}.} This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces X j {\displaystyle X_{j}} and the matrix of coefficients L i j {\displaystyle L_{ij}} in the form X j = μ j − μ j e q T ; d N i d t = ∑ j L i j X j {\displaystyle X_{j}={\frac {\mu _{j}-\mu _{j}^{\rm {eq}}}{T}};\;\;{\frac {dN_{i}}{dt}}=\sum _{j}L_{ij}X_{j}} The coefficient matrix L i j {\displaystyle L_{ij}} is symmetric: L i j = − V R ∑ r w r e q γ r i γ r j {\displaystyle L_{ij}=-{\frac {V}{R}}\sum _{r}w_{r}^{\rm {eq}}\gamma _{ri}\gamma _{rj}} These symmetry relations, L i j = L j i {\displaystyle L_{ij}=L_{ji}} , are exactly the Onsager reciprocal relations. The coefficient matrix L {\displaystyle L} is non-positive. It is negative on the linear span of the stoichiometric vectors γ r {\displaystyle \gamma _{r}} . So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium. 
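The symmetry and sign of the Onsager matrix follow from its structure L = −(V/R) Γᵀ diag(w^eq) Γ. A numerical sketch (reusing the stoichiometric matrix of the earlier four-reaction example, with hypothetical positive equilibrium rates w^eq) confirms reciprocity and negative semidefiniteness:

```python
import numpy as np

# Mechanism A1<=>A2, A2<=>A3, A3<=>A1, A1+A2<=>2A3 (rows = reactions).
G = np.array([[-1,  1,  0],
              [ 0, -1,  1],
              [ 1,  0, -1],
              [-1, -1,  2]], float)
w_eq = np.array([1.0, 0.5, 2.0, 0.25])   # hypothetical positive equilibrium rates
V_over_R = 1.0                           # arbitrary positive scale factor V/R

# L_ij = -(V/R) * sum_r w_eq_r * gamma_ri * gamma_rj
L = -V_over_R * G.T @ np.diag(w_eq) @ G

assert np.allclose(L, L.T)               # Onsager reciprocity: L_ij = L_ji
# x^T L x = -(V/R) sum_r w_eq_r (gamma_r . x)^2 <= 0, so L is non-positive.
assert np.all(np.linalg.eigvalsh(L) <= 1e-10)
```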
Local detailed balance Local detailed balance is an extension of detailed balance for modeling open systems that are coupled to various mutually separate mechanical, chemical or thermal baths. It gives a physically motivated way and interpretation for constructing stochastic dynamical models for nonequilibrium processes. That question was already explicitly discussed by Bergmann and Lebowitz (1955), who proposed it for a description of irreversible processes. The point is to find sensible ways of effectively taking into account the presence of reservoirs, where the change in the reservoir is a function of the system trajectories. It naturally leads to stochastic energetics and the developments in stochastic thermodynamics. In that sense, the condition of local detailed balance stands crucially at the beginning of nonequilibrium statistical mechanics (directly) for stationary open systems, driven by the coupling with different equilibrium baths that are well separated in space and time. Central to local detailed balance is the idea that each transition of the system state is accompanied by an exchange of energy or particles with a specific equilibrium reservoir, and that the corresponding updating follows the condition of detailed balance using the intensive variables of that reservoir. There need not be a (global) detailed balance, as reservoirs can have different temperatures, chemical potentials, etc. In mathematical terms, the condition of local detailed balance ensures that the logarithmic ratio of the probability of a trajectory to the probability of the time-reversed trajectory equals the entropy flux per kB to the system environment. It is important here that the environment consists of mutually separated thermodynamic equilibrium baths. In particular, local detailed balance allows identification of currents and entropy flows, and is directly related to the so-called fluctuation theorems for entropy fluxes. 
As shown in a series of publications, local detailed balance implies detailed, integrated, local, steady-state or transient fluctuation theorems for the entropy flux satisfying a Gallavotti–Cohen-like symmetry. Discussions and derivations of local detailed balance can be found throughout the literature. Not all models that are commonly used in nonequilibrium statistical mechanics satisfy local detailed balance, which makes it less evident how to associate heat and entropy fluxes to the proposed dynamics. Semi-detailed balance To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form: d N i d t = V ∑ r γ r i w r = V ∑ r ( β r i − α r i ) w r {\displaystyle {\frac {dN_{i}}{dt}}=V\sum _{r}\gamma _{ri}w_{r}=V\sum _{r}(\beta _{ri}-\alpha _{ri})w_{r}} Let us use the notations α r = α r i {\displaystyle \alpha _{r}=\alpha _{ri}} , β r = β r i {\displaystyle \beta _{r}=\beta _{ri}} for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let Y {\displaystyle Y} be the set of all these vectors α r , β r {\displaystyle \alpha _{r},\beta _{r}} . For each ν ∈ Y {\displaystyle \nu \in Y} , let us define two sets of numbers: R ν + = { r | α r = ν } ; R ν − = { r | β r = ν } {\displaystyle R_{\nu }^{+}=\{r|\alpha _{r}=\nu \};\;\;\;R_{\nu }^{-}=\{r|\beta _{r}=\nu \}} r ∈ R ν + {\displaystyle r\in R_{\nu }^{+}} if and only if ν {\displaystyle \nu } is the vector of the input stoichiometric coefficients α r {\displaystyle \alpha _{r}} for the rth elementary reaction; r ∈ R ν − {\displaystyle r\in R_{\nu }^{-}} if and only if ν {\displaystyle \nu } is the vector of the output stoichiometric coefficients β r {\displaystyle \beta _{r}} for the rth elementary reaction. 
The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every ν ∈ Y {\displaystyle \nu \in Y} ∑ r ∈ R ν − w r = ∑ r ∈ R ν + w r {\displaystyle \sum _{r\in R_{\nu }^{-}}w_{r}=\sum _{r\in R_{\nu }^{+}}w_{r}} The semi-detailed balance condition is sufficient for the stationarity: it implies that d N d t = V ∑ r γ r w r = 0. {\displaystyle {\frac {dN}{dt}}=V\sum _{r}\gamma _{r}w_{r}=0.} For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, sufficient but not necessary condition for stationarity. The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds then the condition of semi-detailed balance also holds. For systems that obey the generalized mass action law the semi-detailed balance condition is sufficient for the dissipation inequality d F / d t ≥ 0 {\displaystyle dF/dt\geq 0} (for the Helmholtz free energy under isothermal isochoric conditions and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials). Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guaranties the positivity of the entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972. The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem. 
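A minimal sketch of the condition (hypothetical rates): the irreversible cycle A1 → A2 → A3 → A1 with equal rates satisfies semi-detailed balance, because each vector ν = e_i is the input of one reaction and the output of another, even though detailed balance clearly fails. Stationarity then follows:

```python
import numpy as np

# Irreversible cycle A1 -> A2 -> A3 -> A1 with equal rates.
alpha = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)   # input vectors
beta  = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)   # output vectors
w = np.array([0.7, 0.7, 0.7])                                # hypothetical rates

# Semi-detailed balance: for every nu, sum of w_r over {beta_r = nu}
# equals the sum over {alpha_r = nu}.
for nu in np.eye(3):
    out_sum = w[(beta == nu).all(axis=1)].sum()
    in_sum = w[(alpha == nu).all(axis=1)].sum()
    assert np.isclose(out_sum, in_sum)

# ...which implies stationarity: sum_r gamma_r w_r = 0, so dN/dt = 0.
assert np.allclose((beta - alpha).T @ w, 0)
```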
Dissipation in systems with semi-detailed balance Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process ∑ i α r i A i ⟶ ∑ i β r i A i {\displaystyle \sum _{i}\alpha _{ri}{\ce {A}}_{i}{\ce {->}}\sum _{i}\beta _{ri}{\ce {A}}_{i}} is w r = φ r exp ⁡ ( ∑ i α r i μ i R T ) {\displaystyle w_{r}=\varphi _{r}\exp \left(\sum _{i}{\frac {\alpha _{ri}\mu _{i}}{RT}}\right)} where μ i = ∂ F ( T , V , N ) / ∂ N i {\displaystyle \mu _{i}=\partial F(T,V,N)/\partial N_{i}} is the chemical potential and F ( T , V , N ) {\displaystyle F(T,V,N)} is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier φ r ≥ 0 {\displaystyle \varphi _{r}\geq 0} is the kinetic factor. Let us count the direct and reverse reaction in the kinetic equation separately: d N i d t = V ∑ r γ r i w r {\displaystyle {\frac {dN_{i}}{dt}}=V\sum _{r}\gamma _{ri}w_{r}} An auxiliary function θ ( λ ) {\displaystyle \theta (\lambda )} of one variable λ ∈ [ 0 , 1 ] {\displaystyle \lambda \in [0,1]} is convenient for the representation of dissipation for the mass action law θ ( λ ) = ∑ r φ r exp ⁡ ( ∑ i ( λ α r i + ( 1 − λ ) β r i ) μ i R T ) {\displaystyle \theta (\lambda )=\sum _{r}\varphi _{r}\exp \left(\sum _{i}{\frac {(\lambda \alpha _{ri}+(1-\lambda )\beta _{ri})\mu _{i}}{RT}}\right)} This function θ ( λ ) {\displaystyle \theta (\lambda )} may be considered as the sum of the reaction rates for deformed input stoichiometric coefficients α ~ ρ ( λ ) = λ α ρ + ( 1 − λ ) β ρ {\displaystyle {\tilde {\alpha }}_{\rho }(\lambda )=\lambda \alpha _{\rho }+(1-\lambda )\beta _{\rho }} . For λ = 1 {\displaystyle \lambda =1} it is just the sum of the reaction rates. The function θ ( λ ) {\displaystyle \theta (\lambda )} is convex because θ ″ ( λ ) ≥ 0 {\displaystyle \theta ''(\lambda )\geq 0} . 
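The convexity of θ(λ) just stated can be seen numerically: θ is a positive combination of exponentials that are linear in λ, so its value at a midpoint never exceeds the chord. The sketch below uses a made-up mechanism (random stoichiometric coefficients, kinetic factors φr, and scaled chemical potentials μi/RT):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.integers(0, 3, size=(4, 3)).astype(float)   # hypothetical inputs
beta = rng.integers(0, 3, size=(4, 3)).astype(float)    # hypothetical outputs
phi = rng.uniform(0.1, 2.0, size=4)                     # kinetic factors phi_r > 0
mu_over_RT = rng.normal(size=3)                         # scaled chemical potentials

def theta(lam):
    # theta(lam) = sum_r phi_r * exp(sum_i (lam*alpha_ri + (1-lam)*beta_ri) mu_i/RT)
    expo = (lam * alpha + (1 - lam) * beta) @ mu_over_RT
    return float(phi @ np.exp(expo))

# Midpoint convexity on a grid of subintervals of [0, 1].
lams = np.linspace(0.0, 1.0, 11)
for a, b in zip(lams[:-1], lams[1:]):
    mid = 0.5 * (a + b)
    assert theta(mid) <= 0.5 * (theta(a) + theta(b)) + 1e-12
```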
Direct calculation gives that according to the kinetic equations d F d t = − V R T d θ ( λ ) d λ | λ = 1 {\displaystyle {\frac {dF}{dt}}=-VRT\left.{\frac {d\theta (\lambda )}{d\lambda }}\right|_{\lambda =1}} This is the general dissipation formula for the generalized mass action law. Convexity of θ ( λ ) {\displaystyle \theta (\lambda )} gives the sufficient and necessary conditions for the proper dissipation inequality: d F d t < 0 if and only if θ ( λ ) < θ ( 1 ) for some λ < 1 ; {\displaystyle {\frac {dF}{dt}}<0{\text{ if and only if }}\theta (\lambda )<\theta (1){\text{ for some }}\lambda <1;} d F d t ≤ 0 if and only if θ ( λ ) ≤ θ ( 1 ) for some λ < 1. {\displaystyle {\frac {dF}{dt}}\leq 0{\text{ if and only if }}\theta (\lambda )\leq \theta (1){\text{ for some }}\lambda <1.} The semi-detailed balance condition can be transformed into identity θ ( 0 ) ≡ θ ( 1 ) {\displaystyle \theta (0)\equiv \theta (1)} . Therefore, for the systems with semi-detailed balance d F / d t ≤ 0 {\displaystyle {dF}/{dt}\leq 0} . Cone theorem and local equivalence of detailed and complex balance For any reaction mechanism and a given positive equilibrium a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N Q D B ( N ) = c o n e { γ r s g n ( w r + ( N ) − w r − ( N ) ) | r = 1 , … , m } , {\displaystyle \mathbf {Q} _{\rm {DB}}(N)={\rm {cone}}\{\gamma _{r}{\rm {sgn}}(w_{r}^{+}(N)-w_{r}^{-}(N))\ |\ r=1,\ldots ,m\},} where cone stands for the conical hull and the piecewise-constant functions s g n ( w r + ( N ) − w r − ( N ) ) {\displaystyle {\rm {sgn}}(w_{r}^{+}(N)-w_{r}^{-}(N))} do not depend on (positive) values of equilibrium reaction rates w r e q {\displaystyle w_{r}^{\rm {eq}}} and are defined by thermodynamic quantities under assumption of detailed balance. 
The cone theorem states that for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to the cone Q D B ( N ) {\displaystyle \mathbf {Q} _{\rm {DB}}(N)} . That is, there exists a system with detailed balance, the same reaction mechanism, the same positive equilibrium, that gives the same velocity at state N. According to the cone theorem, for a given state N, the set of velocities of the semidetailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance. Detailed balance for systems with irreversible reactions Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A 1 ⟶ A 2 ⟶ A 3 ⟶ A 1 {\displaystyle {\ce {A1 -> A2 -> A3 -> A1}}} cannot be obtained as such a limit but the reaction mechanism A 1 ⟶ A 2 ⟶ A 3 ⟵ A 1 {\displaystyle {\ce {A1 -> A2 -> A3 <- A1}}} can. Gorban–Yablonsky theorem. 
A system of reactions with some irreversible reactions is a limit of systems with detailed balance when some constants tend to zero if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways. See also T-symmetry Frenesy Microscopic reversibility Master equation Balance equation Gibbs sampling Metropolis–Hastings algorithm Atomic spectral line (deduction of the Einstein coefficients) Random walks on graphs
Wikipedia
The Association for the Advancement of Artificial Intelligence (AAAI) is an international scientific society devoted to promoting research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions. History The organization was founded in 1979 under the name "American Association for Artificial Intelligence" and changed its name in 2007 to "Association for the Advancement of Artificial Intelligence". It has in excess of 4,000 members worldwide. In its early history, the organization was presided over by notable figures in computer science such as Allen Newell, Edward Feigenbaum, Marvin Minsky and John McCarthy. Since July 2022, Francesca Rossi has been serving as president. She will serve as president until July 2024, when president-elect Stephen Smith will begin his term. Conferences and publications The AAAI provides many services to the Artificial Intelligence community. The AAAI sponsors many conferences and symposia each year as well as providing support to 14 journals in the field of artificial intelligence. AAAI produces a quarterly publication, AI Magazine, which seeks to publish significant new research and literature across the entire field of artificial intelligence and to help members to keep abreast of research outside their immediate specialties. The magazine has been published continuously since 1980. AAAI organises the "AAAI Conference on Artificial Intelligence", which is considered to be one of the top conferences in the field of artificial intelligence. 
Awards In addition to AAAI Fellowship, the AAAI grants several other awards: ACM-AAAI Allen Newell Award The ACM-AAAI Allen Newell Award is presented to an individual selected for career contributions that have breadth within computer science, or that bridge computer science and other disciplines. This endowed award is accompanied by a prize of $10,000, and is supported by the Association for the Advancement of Artificial Intelligence (AAAI), Association for Computing Machinery (ACM), and by individual contributions. Past recipients: Fred Brooks (1994) Joshua Lederberg (1995) Carver Mead (1997) Saul Amarel (1998) Nancy Leveson (1999) Lotfi A. Zadeh (2000) Ruzena Bajcsy (2001) Peter Chen (2002) David Haussler and Judea Pearl (2003) Richard P. Gabriel (2004) Jack Minker (2005) Karen Spärck Jones (2006) Leonidas Guibas (2007) Barbara J. Grosz and Joseph Halpern (2008) Michael I. Jordan (2009) Takeo Kanade (2010) Stephanie Forrest (2011) Moshe Tennenholtz and Yoav Shoham (2012) Jon Kleinberg (2014) Eric Horvitz (2015) Jitendra Malik (2016) Margaret A. Boden (2017) Henry Kautz (2018) Lydia Kavraki and Daphne Koller (2019) Moshe Y. Vardi and Hector J. Levesque (2020) Carla Gomes (2021) Stuart Russell and Bernhard Schölkopf (2022) David Blei (2023) Peter Stone (2024) AAAI/EAAI Outstanding Educator Award The annual AAAI/EAAI Outstanding Educator Award was created in 2016 to honor a person (or group of people) who has made major contributions to AI education that provide long-lasting benefits to the AI community. Past recipients: Peter Norvig and Stuart Russell (2016) Sebastian Thrun (2017) Todd W. Neller (2018) Ashok Goel (2019) Marie desJardins (2020) Michael Wooldridge (2021) AI4K12.org team: David S. Touretzky, Christina Gardner-McCune, Fred G. 
Martin, and Deborah Seehorn (2022) Ayanna Howard (2023) Michael Littman and Charles Isbell (2024) Subbarao Kambhampati (2025) AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity The AAAI Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity is a $1 million award that recognizes the positive impacts of AI to meaningfully improve, protect, and enhance human life. Membership grades AAAI Senior Members Senior Member status is designed to recognize AAAI members who have achieved significant accomplishments within the field of artificial intelligence. To be eligible for nomination for Senior Member, candidates must be consecutive members of AAAI for at least five years and have been active in the professional arena for at least ten years. Applications should include information that details the candidate's scholarship, leadership, and/or professional service. See also List of computer science awards
Wikipedia
scRGB is a wide color gamut RGB color space created by Microsoft and HP that uses the same color primaries and white/black points as the sRGB color space but allows coordinates below zero and greater than one. The full range is −0.5 through just less than +7.5. Negative numbers enable scRGB to encompass most of the CIE 1931 color space while maintaining simplicity and backward compatibility with sRGB by not changing the primary colors. However, this means approximately 80% of the scRGB color space consists of imaginary colors. Numbers greater than 1.0 allow high dynamic range images to be represented, though the dynamic range is less than that of other formats. Encoding Two encodings are defined for the individual primaries: a linear 16 bit per channel encoding and a nonlinear 12 bit per channel encoding. The 16 bit scRGB(16) encoding is the linear RGB channels converted by 8192x + 4096. Compared to 8-bit sRGB this ranges from almost 2+1⁄2 times the color resolution near 0.0 to more than 14 times the color resolution near 1.0. Storage as 16 bits clamps the linear range to −0.5..7.4999. The 12-bit scRGB-nl encoding is the linear RGB channels passed through the same opto-electric conversion function as sRGB (for negative numbers use −f(−x)) and then converted by 1280x + 1024. This is exactly 5 times the color resolution of 8-bit sRGB, and 8-bit sRGB can be converted directly with 5x + 1024. The linear range is clamped to the slightly larger −0.6038..7.5913. A 12-bit encoding called scYCC-nl is the conversion of the non-linear sRGB levels to JFIF-Y'CbCr and then converted by 1280Y′ + 1024, 1280Cb + 2048, 1280Cr + 2048. This form can allow greater compression and direct conversion to/from JPEG files and video hardware. With the addition of an alpha channel with the same number of bits, the 16-bit encoding may be seen referred to as 64 bit and the 12-bit encoding referred to as 48-bit. Alpha is not encoded as above, however. 
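Assuming only the conversion formulas given above (the helper names are made up for illustration), the 16-bit linear encoding and the direct 8-bit sRGB to 12-bit scRGB-nl conversion can be sketched as:

```python
# scRGB(16): code = 8192*x + 4096, clamped to the 16-bit range 0..65535.
def scrgb16_encode(x: float) -> int:
    code = round(8192 * x + 4096)
    return max(0, min(65535, code))

def scrgb16_decode(code: int) -> float:
    return (code - 4096) / 8192.0

# Direct conversion of an 8-bit nonlinear sRGB value to 12-bit scRGB-nl.
def srgb8_to_scrgbnl12(s: int) -> int:
    return 5 * s + 1024

# sRGB black (0.0) and diffuse white (1.0):
assert scrgb16_encode(0.0) == 4096
assert scrgb16_encode(1.0) == 12288
# The representable linear range is -0.5 .. just under +7.5:
assert scrgb16_decode(0) == -0.5
assert abs(scrgb16_decode(65535) - 7.49988) < 1e-4
```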
Alpha is instead a linear 0-1 range multiplied by 2n − 1 where n is 12 or 16. The much newer DXGI scRGB HDR swapchains store the linear sRGB channels as 16-bit half float and has a much larger range of over ±60,000, without any enforced clamps. Storing linear sRGB values as floating point is very common in modern computer graphics software. Usage The first implementation of scRGB was the GDI+ API in Windows Vista. At WinHEC 2008 Microsoft announced that Windows 7 would support 48-bit scRGB (which for HDMI can be converted and output as xvYCC). The components in Windows 7 that support 48-bit scRGB are Direct3D, the Windows Imaging Component, and the Windows Color System and they support it in both full screen exclusive mode and in video overlays. Origin of sc in scRGB The origin of the sc in scRGB is shrouded in mystery. Officially it stands for nothing. According to Michael Stokes (the national and international leader of the International Electrotechnical Commission, or IEC, group working on scRGB), the name appeared when the Japanese national committee requested a name change from the earlier XsRGB (excess RGB). The two leading candidates for meaning are "specular RGB" because scRGB supports whites greater than the diffuse 1.0 values, and "standard compositing RGB" because the linearity, floating-point support, HDR (high dynamic range) support, and wide gamut support are ideally suited for compositing. This meaning also implicitly emphasizes that scRGB is not intended to be directly supported in devices or formats, since by definition scRGB encompasses values that are beyond both the human visual system and (even theoretically) realizable physical devices. References External links The standard IEC 61966-2-2 Annex B: Non-linear encoding for scRGB: scRGB-nl A working draft of IEC 61966-2-2 is available online. PCMag.com: Defining scRGB
An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds, which can be used by other organisms. Autotrophs produce complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light or inorganic chemical reactions. Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide. The primary producers can convert the energy in light (phototrophs and photoautotrophs) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which are usually accumulated in the form of biomass and used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms are called chemoautotrophs and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reason why Earth sustains life to this day. Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP+ to NADPH to form organic compounds. 
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Chemolithoautotrophs are microorganisms that derive energy from the oxidation of inorganic compounds. They can sustain themselves entirely on atmospheric CO₂ and inorganic chemicals, without the need for light or organic compounds. They enzymatically catalyze redox reactions using mineral substrates to generate ATP. These substrates primarily include hydrogen, iron, nitrogen, and sulfur. Their ecological niches are often specialized to extreme environments, including deep marine hydrothermal vents, stratified sediment, and acidic hot springs. As primary producers, their metabolic processes play a key role in supporting microbial food webs and biogeochemical fluxes. History The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. It stems from the ancient Greek word τροφή (trophḗ), meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated across Earth's Great Oxidation Event with an increase in the rate of oxygenic photosynthesis by cyanobacteria. Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis. The earliest photosynthetic bacteria used hydrogen sulphide. Due to the scarcity of hydrogen sulphide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria. Variants Some organisms rely on organic compounds as a source of carbon, but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs. 
An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph, while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph. Evidence suggests that some fungi may also obtain energy from ionizing radiation: such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant. Examples There are many different types of autotrophs in Earth's ecosystems. Lichens located in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or additionally nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus. Among the many examples of primary producers, two dominant types are coral and kelp, one of the many types of brown algae. Photosynthesis Gross primary production occurs by photosynthesis. This is the main way that primary producers get energy and make it available to other forms of life. Plants, many corals (by means of intracellular algae), some bacteria (cyanobacteria), and algae do this. During photosynthesis, primary producers receive energy from the sun and use it to produce sugar and oxygen. Ecology Without primary producers, organisms that are capable of producing energy on their own, the biological systems of Earth would be unable to sustain themselves. Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. It is thought that the first organisms on Earth were primary producers located on the ocean floor. Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production. 
Other organisms, called heterotrophs, take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals, almost all fungi, as well as most bacteria and protozoa – depend on autotrophs, or primary producers, for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed. Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun. Plants can only use a fraction (approximately 1%) of this energy for photosynthesis. The process of photosynthesis splits a water molecule (H2O), releasing oxygen (O2) into the atmosphere, and reducing carbon dioxide (CO2) to release the hydrogen atoms that fuel the metabolic process of primary production. Plants convert and store the energy of the photons into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates, such as starch and cellulose; glucose is also used to make fats and proteins. When autotrophs are eaten by heterotrophs, i.e., consumers such as animals, the carbohydrates, fats, and proteins contained in them become energy sources for the heterotrophs. Proteins can be made using nitrates, sulfates, and phosphates in the soil. Primary production in tropical streams and rivers Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is displayed by net primary production, a fundamental ecological process that reflects the amount of carbon that is synthesized within an ecosystem. This carbon ultimately becomes available to consumers. 
Net primary production measurements show that rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems. Origin of autotrophs Researchers believe that the first cellular lifeforms were not heterotrophs, which would have had to rely upon autotrophs, since organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep sea alkaline hydrothermal vents. This view is supported by phylogenetic evidence: the physiology and habitat of the last universal common ancestor (LUCA) are inferred to have been those of a thermophilic anaerobe with the Wood–Ljungdahl pathway, whose biochemistry was replete with FeS clusters and radical reaction mechanisms. It was dependent upon Fe, H2, and CO2. The high concentration of K+ present within the cytosol of most life forms suggests that early cellular life had Na+/H+ antiporters or possibly symporters. Autotrophs possibly evolved into heterotrophs when they were at low H2 partial pressures, where the first forms of heterotrophy were likely amino acid and clostridial-type purine fermentations. It has been suggested that photosynthesis emerged in the presence of faint near-infrared light emitted by hydrothermal vents. The first photochemically active pigments are then thought to be Zn-tetrapyrroles.
The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions o 1 : T := o 1 , … , o T {\displaystyle o_{1:T}:=o_{1},\dots ,o_{T}} , i.e. it computes, for all hidden state variables X t ∈ { X 1 , … , X T } {\displaystyle X_{t}\in \{X_{1},\dots ,X_{T}\}} , the distribution P ( X t | o 1 : T ) {\displaystyle P(X_{t}\ |\ o_{1:T})} . This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm. The term forward–backward algorithm is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class. Overview In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all t ∈ { 1 , … , T } {\displaystyle t\in \{1,\dots ,T\}} , the probability of ending up in any particular state given the first t {\displaystyle t} observations in the sequence, i.e. P ( X t | o 1 : t ) {\displaystyle P(X_{t}\ |\ o_{1:t})} . In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point t {\displaystyle t} , i.e. P ( o t + 1 : T | X t ) {\displaystyle P(o_{t+1:T}\ |\ X_{t})} . 
These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P ( X t | o 1 : T ) = P ( X t | o 1 : t , o t + 1 : T ) ∝ P ( o t + 1 : T | X t ) P ( X t | o 1 : t ) {\displaystyle P(X_{t}\ |\ o_{1:T})=P(X_{t}\ |\ o_{1:t},o_{t+1:T})\propto P(o_{t+1:T}\ |\ X_{t})P(X_{t}|o_{1:t})} The last step follows from an application of Bayes' rule and the conditional independence of o t + 1 : T {\displaystyle o_{t+1:T}} and o 1 : t {\displaystyle o_{1:t}} given X t {\displaystyle X_{t}} . As outlined above, the algorithm involves three steps: computing forward probabilities computing backward probabilities computing smoothed values. The forward and backward steps may also be called "forward message pass" and "backward message pass"; these terms come from the message passing used in general belief propagation approaches. At each observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results. The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm). Forward probabilities The following description will use matrices of probability values instead of probability distributions. However, it is important to note that the forward–backward algorithm can generally be applied to both continuous and discrete probability models. We transform the probability distributions related to a given hidden Markov model into matrix notation as follows. 
The transition probabilities P ( X t ∣ X t − 1 ) {\displaystyle \mathbf {P} (X_{t}\mid X_{t-1})} of a given random variable X t {\displaystyle X_{t}} representing all possible states in the hidden Markov model will be represented by the matrix T {\displaystyle \mathbf {T} } where the column index j {\displaystyle j} will represent the target state and the row index i {\displaystyle i} represents the start state. A transition from row-vector state π t {\displaystyle \mathbf {\pi _{t}} } to the incremental row-vector state π t + 1 {\displaystyle \mathbf {\pi _{t+1}} } is written as π t + 1 = π t T {\displaystyle \mathbf {\pi _{t+1}} =\mathbf {\pi _{t}} \mathbf {T} } . The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then: T = ( 0.7 0.3 0.3 0.7 ) {\displaystyle \mathbf {T} ={\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}} In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form: B = ( 0.9 0.1 0.2 0.8 ) {\displaystyle \mathbf {B} ={\begin{pmatrix}0.9&0.1\\0.2&0.8\end{pmatrix}}} provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. 
Given an arbitrary row-vector describing the state of the system ( π {\displaystyle \mathbf {\pi } } ), the probability of observing event j is then: P ( O = j ) = ∑ i π i B i , j {\displaystyle \mathbf {P} (O=j)=\sum _{i}\pi _{i}B_{i,j}} The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector ( π {\displaystyle \mathbf {\pi } } ) with an observation matrix ( O j = d i a g ( B ∗ , o j ) {\displaystyle \mathbf {O_{j}} =\mathrm {diag} (B_{*,o_{j}})} ) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be: O 1 = ( 0.9 0.0 0.0 0.2 ) {\displaystyle \mathbf {O_{1}} ={\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}} This allows us to calculate the new unnormalized probabilities state vector π ′ {\displaystyle \mathbf {\pi '} } through Bayes rule, weighting by the likelihood that each element of π {\displaystyle \mathbf {\pi } } generated event 1 as: π ′ = π O 1 {\displaystyle \mathbf {\pi '} =\mathbf {\pi } \mathbf {O_{1}} } We can now make this general procedure specific to our series of observations. Assuming an initial state vector π 0 {\displaystyle \mathbf {\pi } _{0}} , (which can be optimized as a parameter through repetitions of the forward-backward procedure), we begin with f 0 : 0 = π 0 {\displaystyle \mathbf {f_{0:0}} =\mathbf {\pi } _{0}} , then updating the state distribution and weighting by the likelihood of the first observation: f 0 : 1 = π 0 T O o 1 {\displaystyle \mathbf {f_{0:1}} =\mathbf {\pi } _{0}\mathbf {T} \mathbf {O_{o_{1}}} } This process can be carried forward with additional observations using: f 0 : t = f 0 : t − 1 T O o t {\displaystyle \mathbf {f_{0:t}} =\mathbf {f_{0:t-1}} \mathbf {T} \mathbf {O_{o_{t}}} } This value is the forward unnormalized probability vector. 
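The forward recursion above can be sketched directly in code. The snippet below is a plain-Python illustration (the names pi0, T, B, obs are ours, not from a library); it also applies the per-step normalization described below, returning the scaling factors c_t alongside the normalized vectors.

```python
def forward(pi0, T, B, obs):
    """Scaled forward pass for a discrete HMM.

    pi0: initial state row vector; T: transition matrix; B[j][o]: probability
    of emitting observation o from state j; obs: observation indices.
    Returns the normalized forward vectors f_hat_{0:t} and the scaling
    factors c_t, whose running product is P(o_1, ..., o_t).
    """
    n = len(pi0)
    f = list(pi0)                      # f_hat_{0:0} = pi_0
    fs, cs = [f], []
    for o in obs:
        # Propagate through the transition matrix, then weight each state by
        # the likelihood of the current observation: f T O_{o_t}.
        g = [sum(f[i] * T[i][j] for i in range(n)) * B[j][o] for j in range(n)]
        c = sum(g)                     # scaling factor c_t
        f = [x / c for x in g]
        fs.append(f)
        cs.append(c)
    return fs, cs
```

Each returned vector sums to 1, and multiplying the c_t values together recovers the total probability of the observations seen so far.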
The i'th entry of this vector provides: f 0 : t ( i ) = P ( o 1 , o 2 , … , o t , X t = x i | π 0 ) {\displaystyle \mathbf {f_{0:t}} (i)=\mathbf {P} (o_{1},o_{2},\dots ,o_{t},X_{t}=x_{i}|\mathbf {\pi } _{0})} Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that: f ^ 0 : t = c t − 1 f ^ 0 : t − 1 T O o t {\displaystyle \mathbf {{\hat {f}}_{0:t}} =c_{t}^{-1}\ \mathbf {{\hat {f}}_{0:t-1}} \mathbf {T} \mathbf {O_{o_{t}}} } where f ^ 0 : t − 1 {\displaystyle \mathbf {{\hat {f}}_{0:t-1}} } represents the scaled vector from the previous step and c t {\displaystyle c_{t}} represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states: P ( o 1 , o 2 , … , o t | π 0 ) = ∏ s = 1 t c s {\displaystyle \mathbf {P} (o_{1},o_{2},\dots ,o_{t}|\mathbf {\pi } _{0})=\prod _{s=1}^{t}c_{s}} This allows us to interpret the scaled probability vector as: f ^ 0 : t ( i ) = f 0 : t ( i ) ∏ s = 1 t c s = P ( o 1 , o 2 , … , o t , X t = x i | π 0 ) P ( o 1 , o 2 , … , o t | π 0 ) = P ( X t = x i | o 1 , o 2 , … , o t , π 0 ) {\displaystyle \mathbf {{\hat {f}}_{0:t}} (i)={\frac {\mathbf {f_{0:t}} (i)}{\prod _{s=1}^{t}c_{s}}}={\frac {\mathbf {P} (o_{1},o_{2},\dots ,o_{t},X_{t}=x_{i}|\mathbf {\pi } _{0})}{\mathbf {P} (o_{1},o_{2},\dots ,o_{t}|\mathbf {\pi } _{0})}}=\mathbf {P} (X_{t}=x_{i}|o_{1},o_{2},\dots ,o_{t},\mathbf {\pi } _{0})} We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time. Backward probabilities A similar procedure can be constructed to find backward probabilities. 
These provide the probabilities: b t : T ( i ) = P ( o t + 1 , o t + 2 , … , o T | X t = x i ) {\displaystyle \mathbf {b_{t:T}} (i)=\mathbf {P} (o_{t+1},o_{t+2},\dots ,o_{T}|X_{t}=x_{i})} That is, we now want to assume that we start in a particular state ( X t = x i {\displaystyle X_{t}=x_{i}} ), and we are now interested in the probability of observing all future events from this state. Since the initial state is assumed to be given (i.e. the prior probability of this state = 100%), we begin with: b T : T = [ 1 1 1 … ] T {\displaystyle \mathbf {b_{T:T}} =[1\ 1\ 1\ \dots ]^{T}} Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using: b t − 1 : T = T O t b t : T {\displaystyle \mathbf {b_{t-1:T}} =\mathbf {T} \mathbf {O_{t}} \mathbf {b_{t:T}} } While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same c t {\displaystyle c_{t}} constants used in the forward probability calculations. b T : T {\displaystyle \mathbf {b_{T:T}} } is not scaled, but subsequent operations use: b ^ t − 1 : T = c t − 1 T O t b ^ t : T {\displaystyle \mathbf {{\hat {b}}_{t-1:T}} =c_{t}^{-1}\mathbf {T} \mathbf {O_{t}} \mathbf {{\hat {b}}_{t:T}} } where b ^ t : T {\displaystyle \mathbf {{\hat {b}}_{t:T}} } represents the previous, scaled vector. 
The result is that the scaled probability vector is related to the backward probabilities by: b ^ t : T ( i ) = b t : T ( i ) ∏ s = t + 1 T c s {\displaystyle \mathbf {{\hat {b}}_{t:T}} (i)={\frac {\mathbf {b_{t:T}} (i)}{\prod _{s=t+1}^{T}c_{s}}}} This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values: γ t ( i ) = P ( X t = x i | o 1 , o 2 , … , o T , π 0 ) = P ( o 1 , o 2 , … , o T , X t = x i | π 0 ) P ( o 1 , o 2 , … , o T | π 0 ) = f 0 : t ( i ) ⋅ b t : T ( i ) ∏ s = 1 T c s = f ^ 0 : t ( i ) ⋅ b ^ t : T ( i ) {\displaystyle \mathbf {\gamma _{t}} (i)=\mathbf {P} (X_{t}=x_{i}|o_{1},o_{2},\dots ,o_{T},\mathbf {\pi } _{0})={\frac {\mathbf {P} (o_{1},o_{2},\dots ,o_{T},X_{t}=x_{i}|\mathbf {\pi } _{0})}{\mathbf {P} (o_{1},o_{2},\dots ,o_{T}|\mathbf {\pi } _{0})}}={\frac {\mathbf {f_{0:t}} (i)\cdot \mathbf {b_{t:T}} (i)}{\prod _{s=1}^{T}c_{s}}}=\mathbf {{\hat {f}}_{0:t}} (i)\cdot \mathbf {{\hat {b}}_{t:T}} (i)} To understand this, we note that f 0 : t ( i ) ⋅ b t : T ( i ) {\displaystyle \mathbf {f_{0:t}} (i)\cdot \mathbf {b_{t:T}} (i)} provides the probability for observing the given events in a way that passes through state x i {\displaystyle x_{i}} at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that X t = x i {\displaystyle X_{t}=x_{i}} . These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability. The values γ t ( i ) {\displaystyle \mathbf {\gamma _{t}} (i)} thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. 
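Putting the two passes together, the complete smoother can be sketched as a self-contained plain-Python function (the name forward_backward and its layout are ours). It first runs the scaled forward pass, then applies the scaled backward recursion and multiplies entrywise to obtain the smoothed posteriors γ_t.

```python
def forward_backward(pi0, T, B, obs):
    """Return the smoothed posteriors gamma_t for t = 0..T."""
    n = len(pi0)
    # --- scaled forward pass: f_hat_{0:t} and scaling factors c_t ---
    f, fs, cs = list(pi0), [list(pi0)], []
    for o in obs:
        g = [sum(f[i] * T[i][j] for i in range(n)) * B[j][o] for j in range(n)]
        c = sum(g)
        f = [x / c for x in g]
        fs.append(f)
        cs.append(c)
    # --- scaled backward pass, combined with the forward vectors ---
    b = [1.0] * n                      # b_{T:T} = (1, 1, ..., 1), unscaled
    gammas = [None] * (len(obs) + 1)
    gammas[-1] = fs[-1]                # gamma_T equals f_hat_{0:T}
    for t in range(len(obs), 0, -1):
        o = obs[t - 1]
        # b_hat_{t-1:T} = c_t^{-1} T O_{o_t} b_hat_{t:T}
        b = [sum(T[i][j] * B[j][o] * b[j] for j in range(n)) / cs[t - 1]
             for i in range(n)]
        g = [fs[t - 1][i] * b[i] for i in range(n)]
        s = sum(g)                     # renormalize to guard against rounding
        gammas[t - 1] = [x / s for x in g]
    return gammas
```

Because the backward vectors are scaled with the same c_t constants as the forward pass, each entrywise product f̂·b̂ already sums to 1 up to rounding; the final renormalization is only a numerical safeguard.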
The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. P ( X t = x i , X t + 1 = x j ) ≠ P ( X t = x i ) P ( X t + 1 = x j ) {\displaystyle \mathbf {P} (X_{t}=x_{i},X_{t+1}=x_{j})\neq \mathbf {P} (X_{t}=x_{i})\mathbf {P} (X_{t+1}=x_{j})} . The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm. Example This example takes as its basis the umbrella world in Russell & Norvig 2010 Chapter 15 pp. 567 in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then: T = ( 0.7 0.3 0.3 0.7 ) {\displaystyle \mathbf {T} ={\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}} We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. 
The conditional probabilities for these occurring in each state are given by the probability matrix: B = ( 0.9 0.1 0.2 0.8 ) {\displaystyle \mathbf {B} ={\begin{pmatrix}0.9&0.1\\0.2&0.8\end{pmatrix}}} We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella} which we will represent in our calculations as: O 1 = ( 0.9 0.0 0.0 0.2 ) O 2 = ( 0.9 0.0 0.0 0.2 ) O 3 = ( 0.1 0.0 0.0 0.8 ) O 4 = ( 0.9 0.0 0.0 0.2 ) O 5 = ( 0.9 0.0 0.0 0.2 ) {\displaystyle \mathbf {O_{1}} ={\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}~~\mathbf {O_{2}} ={\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}~~\mathbf {O_{3}} ={\begin{pmatrix}0.1&0.0\\0.0&0.8\end{pmatrix}}~~\mathbf {O_{4}} ={\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}~~\mathbf {O_{5}} ={\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}} Note that O 3 {\displaystyle \mathbf {O_{3}} } differs from the others because of the "no umbrella" observation. In computing the forward probabilities we begin with: f 0 : 0 = ( 0.5 0.5 ) {\displaystyle \mathbf {f_{0:0}} ={\begin{pmatrix}0.5&0.5\end{pmatrix}}} which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form: ( f ^ 0 : t ) T = c t − 1 O t ( T ) T ( f ^ 0 : t − 1 ) T {\displaystyle (\mathbf {{\hat {f}}_{0:t}} )^{T}=c_{t}^{-1}\mathbf {O_{t}} (\mathbf {T} )^{T}(\mathbf {{\hat {f}}_{0:t-1}} )^{T}} instead of: f ^ 0 : t = c t − 1 f ^ 0 : t − 1 T O t {\displaystyle \mathbf {{\hat {f}}_{0:t}} =c_{t}^{-1}\mathbf {{\hat {f}}_{0:t-1}} \mathbf {T} \mathbf {O_{t}} } Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix. 
Performing these calculations and normalizing the results provides: ( f ^ 0 : 1 ) T = c 1 − 1 ( 0.9 0.0 0.0 0.2 ) ( 0.7 0.3 0.3 0.7 ) ( 0.5000 0.5000 ) = c 1 − 1 ( 0.4500 0.1000 ) = ( 0.8182 0.1818 ) {\displaystyle (\mathbf {{\hat {f}}_{0:1}} )^{T}=c_{1}^{-1}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.5000\\0.5000\end{pmatrix}}=c_{1}^{-1}{\begin{pmatrix}0.4500\\0.1000\end{pmatrix}}={\begin{pmatrix}0.8182\\0.1818\end{pmatrix}}} ( f ^ 0 : 2 ) T = c 2 − 1 ( 0.9 0.0 0.0 0.2 ) ( 0.7 0.3 0.3 0.7 ) ( 0.8182 0.1818 ) = c 2 − 1 ( 0.5645 0.0745 ) = ( 0.8834 0.1166 ) {\displaystyle (\mathbf {{\hat {f}}_{0:2}} )^{T}=c_{2}^{-1}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.8182\\0.1818\end{pmatrix}}=c_{2}^{-1}{\begin{pmatrix}0.5645\\0.0745\end{pmatrix}}={\begin{pmatrix}0.8834\\0.1166\end{pmatrix}}} ( f ^ 0 : 3 ) T = c 3 − 1 ( 0.1 0.0 0.0 0.8 ) ( 0.7 0.3 0.3 0.7 ) ( 0.8834 0.1166 ) = c 3 − 1 ( 0.0653 0.2772 ) = ( 0.1907 0.8093 ) {\displaystyle (\mathbf {{\hat {f}}_{0:3}} )^{T}=c_{3}^{-1}{\begin{pmatrix}0.1&0.0\\0.0&0.8\end{pmatrix}}{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.8834\\0.1166\end{pmatrix}}=c_{3}^{-1}{\begin{pmatrix}0.0653\\0.2772\end{pmatrix}}={\begin{pmatrix}0.1907\\0.8093\end{pmatrix}}} ( f ^ 0 : 4 ) T = c 4 − 1 ( 0.9 0.0 0.0 0.2 ) ( 0.7 0.3 0.3 0.7 ) ( 0.1907 0.8093 ) = c 4 − 1 ( 0.3386 0.1247 ) = ( 0.7308 0.2692 ) {\displaystyle (\mathbf {{\hat {f}}_{0:4}} )^{T}=c_{4}^{-1}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.1907\\0.8093\end{pmatrix}}=c_{4}^{-1}{\begin{pmatrix}0.3386\\0.1247\end{pmatrix}}={\begin{pmatrix}0.7308\\0.2692\end{pmatrix}}} ( f ^ 0 : 5 ) T = c 5 − 1 ( 0.9 0.0 0.0 0.2 ) ( 0.7 0.3 0.3 0.7 ) ( 0.7308 0.2692 ) = c 5 − 1 ( 0.5331 0.0815 ) = ( 0.8673 0.1327 ) {\displaystyle (\mathbf {{\hat {f}}_{0:5}} 
)^{T}=c_{5}^{-1}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.7308\\0.2692\end{pmatrix}}=c_{5}^{-1}{\begin{pmatrix}0.5331\\0.0815\end{pmatrix}}={\begin{pmatrix}0.8673\\0.1327\end{pmatrix}}} For the backward probabilities, we start with: b 5 : 5 = ( 1.0 1.0 ) {\displaystyle \mathbf {b_{5:5}} ={\begin{pmatrix}1.0\\1.0\end{pmatrix}}} We are then able to compute (using the observations in reverse order and normalizing with different constants): b ^ 4 : 5 = α ( 0.7 0.3 0.3 0.7 ) ( 0.9 0.0 0.0 0.2 ) ( 1.0000 1.0000 ) = α ( 0.6900 0.4100 ) = ( 0.6273 0.3727 ) {\displaystyle \mathbf {{\hat {b}}_{4:5}} =\alpha {\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}1.0000\\1.0000\end{pmatrix}}=\alpha {\begin{pmatrix}0.6900\\0.4100\end{pmatrix}}={\begin{pmatrix}0.6273\\0.3727\end{pmatrix}}} b ^ 3 : 5 = α ( 0.7 0.3 0.3 0.7 ) ( 0.9 0.0 0.0 0.2 ) ( 0.6273 0.3727 ) = α ( 0.4175 0.2215 ) = ( 0.6533 0.3467 ) {\displaystyle \mathbf {{\hat {b}}_{3:5}} =\alpha {\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.6273\\0.3727\end{pmatrix}}=\alpha {\begin{pmatrix}0.4175\\0.2215\end{pmatrix}}={\begin{pmatrix}0.6533\\0.3467\end{pmatrix}}} b ^ 2 : 5 = α ( 0.7 0.3 0.3 0.7 ) ( 0.1 0.0 0.0 0.8 ) ( 0.6533 0.3467 ) = α ( 0.1289 0.2138 ) = ( 0.3763 0.6237 ) {\displaystyle \mathbf {{\hat {b}}_{2:5}} =\alpha {\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.1&0.0\\0.0&0.8\end{pmatrix}}{\begin{pmatrix}0.6533\\0.3467\end{pmatrix}}=\alpha {\begin{pmatrix}0.1289\\0.2138\end{pmatrix}}={\begin{pmatrix}0.3763\\0.6237\end{pmatrix}}} b ^ 1 : 5 = α ( 0.7 0.3 0.3 0.7 ) ( 0.9 0.0 0.0 0.2 ) ( 0.3763 0.6237 ) = α ( 0.2745 0.1889 ) = ( 0.5923 0.4077 ) {\displaystyle \mathbf {{\hat {b}}_{1:5}} =\alpha 
{\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.3763\\0.6237\end{pmatrix}}=\alpha {\begin{pmatrix}0.2745\\0.1889\end{pmatrix}}={\begin{pmatrix}0.5923\\0.4077\end{pmatrix}}} b ^ 0 : 5 = α ( 0.7 0.3 0.3 0.7 ) ( 0.9 0.0 0.0 0.2 ) ( 0.5923 0.4077 ) = α ( 0.3976 0.2170 ) = ( 0.6469 0.3531 ) {\displaystyle \mathbf {{\hat {b}}_{0:5}} =\alpha {\begin{pmatrix}0.7&0.3\\0.3&0.7\end{pmatrix}}{\begin{pmatrix}0.9&0.0\\0.0&0.2\end{pmatrix}}{\begin{pmatrix}0.5923\\0.4077\end{pmatrix}}=\alpha {\begin{pmatrix}0.3976\\0.2170\end{pmatrix}}={\begin{pmatrix}0.6469\\0.3531\end{pmatrix}}} Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1 because we did not scale the backward probabilities with the c t {\displaystyle c_{t}} 's found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time. 
( γ 0 ) T = α ( 0.5000 0.5000 ) ∘ ( 0.6469 0.3531 ) = α ( 0.3235 0.1765 ) = ( 0.6469 0.3531 ) {\displaystyle (\mathbf {\gamma _{0}} )^{T}=\alpha {\begin{pmatrix}0.5000\\0.5000\end{pmatrix}}\circ {\begin{pmatrix}0.6469\\0.3531\end{pmatrix}}=\alpha {\begin{pmatrix}0.3235\\0.1765\end{pmatrix}}={\begin{pmatrix}0.6469\\0.3531\end{pmatrix}}} ( γ 1 ) T = α ( 0.8182 0.1818 ) ∘ ( 0.5923 0.4077 ) = α ( 0.4846 0.0741 ) = ( 0.8673 0.1327 ) {\displaystyle (\mathbf {\gamma _{1}} )^{T}=\alpha {\begin{pmatrix}0.8182\\0.1818\end{pmatrix}}\circ {\begin{pmatrix}0.5923\\0.4077\end{pmatrix}}=\alpha {\begin{pmatrix}0.4846\\0.0741\end{pmatrix}}={\begin{pmatrix}0.8673\\0.1327\end{pmatrix}}} ( γ 2 ) T = α ( 0.8834 0.1166 ) ∘ ( 0.3763 0.6237 ) = α ( 0.3324 0.0728 ) = ( 0.8204 0.1796 ) {\displaystyle (\mathbf {\gamma _{2}} )^{T}=\alpha {\begin{pmatrix}0.8834\\0.1166\end{pmatrix}}\circ {\begin{pmatrix}0.3763\\0.6237\end{pmatrix}}=\alpha {\begin{pmatrix}0.3324\\0.0728\end{pmatrix}}={\begin{pmatrix}0.8204\\0.1796\end{pmatrix}}} ( γ 3 ) T = α ( 0.1907 0.8093 ) ∘ ( 0.6533 0.3467 ) = α ( 0.1246 0.2806 ) = ( 0.3075 0.6925 ) {\displaystyle (\mathbf {\gamma _{3}} )^{T}=\alpha {\begin{pmatrix}0.1907\\0.8093\end{pmatrix}}\circ {\begin{pmatrix}0.6533\\0.3467\end{pmatrix}}=\alpha {\begin{pmatrix}0.1246\\0.2806\end{pmatrix}}={\begin{pmatrix}0.3075\\0.6925\end{pmatrix}}} ( γ 4 ) T = α ( 0.7308 0.2692 ) ∘ ( 0.6273 0.3727 ) = α ( 0.4584 0.1003 ) = ( 0.8204 0.1796 ) {\displaystyle (\mathbf {\gamma _{4}} )^{T}=\alpha {\begin{pmatrix}0.7308\\0.2692\end{pmatrix}}\circ {\begin{pmatrix}0.6273\\0.3727\end{pmatrix}}=\alpha {\begin{pmatrix}0.4584\\0.1003\end{pmatrix}}={\begin{pmatrix}0.8204\\0.1796\end{pmatrix}}} ( γ 5 ) T = α ( 0.8673 0.1327 ) ∘ ( 1.0000 1.0000 ) = α ( 0.8673 0.1327 ) = ( 0.8673 0.1327 ) {\displaystyle (\mathbf {\gamma _{5}} )^{T}=\alpha {\begin{pmatrix}0.8673\\0.1327\end{pmatrix}}\circ {\begin{pmatrix}1.0000\\1.0000\end{pmatrix}}=\alpha 
{\begin{pmatrix}0.8673\\0.1327\end{pmatrix}}={\begin{pmatrix}0.8673\\0.1327\end{pmatrix}}} Notice that the value of γ 0 {\displaystyle \mathbf {\gamma _{0}} } is equal to b ^ 0 : 5 {\displaystyle \mathbf {{\hat {b}}_{0:5}} } and that γ 5 {\displaystyle \mathbf {\gamma _{5}} } is equal to f ^ 0 : 5 {\displaystyle \mathbf {{\hat {f}}_{0:5}} } . This follows naturally because both f ^ 0 : 5 {\displaystyle \mathbf {{\hat {f}}_{0:5}} } and b ^ 0 : 5 {\displaystyle \mathbf {{\hat {b}}_{0:5}} } begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, γ 0 {\displaystyle \mathbf {\gamma _{0}} } will only be equal to b ^ 0 : 5 {\displaystyle \mathbf {{\hat {b}}_{0:5}} } when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case b ^ 0 : 5 {\displaystyle \mathbf {{\hat {b}}_{0:5}} } needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points. The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at γ 5 {\displaystyle \mathbf {\gamma _{5}} } quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella. 
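The smoothed distributions above can be reproduced numerically. The NumPy sketch below is an illustration, not code from this article: matrix and variable names are my own, state 0 is "rain" and state 1 is "no rain", and each pass is normalized step by step as in the worked example.

```python
import numpy as np

# Transition model P(X_t | X_{t-1}) for the umbrella example:
# state 0 = rain, state 1 = no rain.
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
# Diagonal observation matrices: P(umbrella | state) and P(no umbrella | state).
O = {True: np.diag([0.9, 0.2]), False: np.diag([0.1, 0.8])}

observations = [True, True, False, True, True]  # umbrella seen on days 1, 2, 4, 5
prior = np.array([0.5, 0.5])

def normalize(v):
    return v / v.sum()

# Forward pass: f_t ∝ O_t T^T f_{t-1}
f = [prior]
for obs in observations:
    f.append(normalize(O[obs] @ T.T @ f[-1]))

# Backward pass: b_t ∝ T O_{t+1} b_{t+1}, normalized at each step
b = [np.ones(2)]
for obs in reversed(observations):
    b.insert(0, normalize(T @ O[obs] @ b[0]))

# Smoothing: gamma_t ∝ f_t ∘ b_t (element-wise product, renormalized)
gamma = [normalize(ft * bt) for ft, bt in zip(f, b)]
```

Running this recovers the values in the example, e.g. gamma[3] puts about 0.69 probability on "no rain" for the third day, while every other day favors "rain".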
Performance The forward–backward algorithm runs with time complexity O ( S 2 T ) {\displaystyle O(S^{2}T)} in space O ( S T ) {\displaystyle O(ST)} , where T {\displaystyle T} is the length of the time sequence and S {\displaystyle S} is the number of symbols in the state alphabet. The algorithm can also run in constant space with time complexity O ( S 2 T 2 ) {\displaystyle O(S^{2}T^{2})} by recomputing values at each step. For comparison, a brute-force procedure would generate all possible S T {\displaystyle S^{T}} state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity O ( T ⋅ S T ) {\displaystyle O(T\cdot S^{T})} . Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high. An enhancement to the general forward-backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking O ( S 2 T log ⁡ T ) {\displaystyle O(S^{2}T\log T)} time and O ( S log ⁡ T ) {\displaystyle O(S\log T)} memory. Furthermore, it is possible to invert the process model to obtain an O ( S ) {\displaystyle O(S)} space, O ( S 2 T ) {\displaystyle O(S^{2}T)} time algorithm, although the inverted process may not exist or be ill-conditioned. In addition, algorithms have been developed to compute f 0 : t + 1 {\displaystyle \mathbf {f_{0:t+1}} } efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm. 
Pseudocode

algorithm forward_backward is
    input: guessState
           int sequenceIndex
    output: result

    if sequenceIndex is past the end of the sequence then
        return 1
    if (guessState, sequenceIndex) has been seen before then
        return saved result
    result := 0
    for each neighboring state n:
        result := result + (transition probability from guessState to n given observation element at sequenceIndex) × Backward(n, sequenceIndex + 1)
    save result for (guessState, sequenceIndex)
    return result

Python example Given HMM (just like in Viterbi algorithm) represented in the Python programming language: We can write the implementation of the forward-backward algorithm like this: The function fwd_bkw takes the following arguments: x is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; a_0 is the start probability; a are the transition probabilities; and e are the emission probabilities. For simplicity of code, we assume that the observation sequence x is non-empty and that a[i][j] and e[i][j] are defined for all states i,j. In the running example, the forward-backward algorithm is used as follows: See also Baum–Welch algorithm Viterbi algorithm BCJR algorithm
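The Python listing referenced above was not preserved here; the following is a hedged reconstruction with the documented signature fwd_bkw(x, states, a_0, a, e). The Healthy/Fever model at the bottom mirrors the standard Viterbi-article example; its probabilities are assumptions for illustration, not values from this text.

```python
def fwd_bkw(x, states, a_0, a, e):
    """Forward-backward smoothing for a discrete HMM.

    x: observation sequence; states: hidden states; a_0: start
    probabilities; a: transition probabilities a[i][j]; e: emission
    probabilities e[i][obs]. Returns the posterior (smoothed)
    distribution over states for every position of x.
    """
    # Forward pass: fwd[t][st] = P(x_0..x_t, X_t = st)
    fwd = []
    for t, x_t in enumerate(x):
        f_curr = {}
        for st in states:
            if t == 0:
                prev = a_0[st]
            else:
                prev = sum(fwd[-1][k] * a[k][st] for k in states)
            f_curr[st] = e[st][x_t] * prev
        fwd.append(f_curr)
    p_fwd = sum(fwd[-1][k] for k in states)  # total probability of x

    # Backward pass: bkw[t][st] = P(x_{t+1}..x_{T-1} | X_t = st)
    bkw = [{st: 1.0 for st in states}]
    for x_next in reversed(x[1:]):
        b_curr = {st: sum(a[st][k] * e[k][x_next] * bkw[0][k] for k in states)
                  for st in states}
        bkw.insert(0, b_curr)

    # Smoothing: gamma[t][st] = fwd[t][st] * bkw[t][st] / P(x)
    return [{st: fwd[t][st] * bkw[t][st] / p_fwd for st in states}
            for t in range(len(x))]

# Toy model in the style of the Viterbi-article example (assumed values).
states = ('Healthy', 'Fever')
a_0 = {'Healthy': 0.6, 'Fever': 0.4}
a = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
     'Fever': {'Healthy': 0.4, 'Fever': 0.6}}
e = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
     'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}
posterior = fwd_bkw(['normal', 'cold', 'dizzy'], states, a_0, a, e)
```

Each entry of posterior sums to one, and the smoothed estimates favor "Healthy" on the 'normal' day and "Fever" on the 'dizzy' day.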
Wikipedia
In genetics, a haplotype block is a region of an organism's genome in which there is little evidence of a history of genetic recombination, and which contains only a small number of distinct haplotypes. According to the haplotype-block model, such blocks should show high levels of linkage disequilibrium and be separated from one another by numerous recombination events. The boundaries of haplotype blocks cannot be directly observed; they must instead be inferred indirectly through the use of algorithms. However, some evidence suggests that different algorithms for identifying haplotype blocks give very different results when used on the same data, though another study suggests that their results are generally consistent. The National Institutes of Health funded the HapMap project to catalog haplotype blocks throughout the human genome. Definition There are two main ways that the term "haplotype block" is defined: one based on whether a given genomic sequence displays higher linkage disequilibrium than a predetermined threshold, and one based on whether the sequence consists of a minimum number of single nucleotide polymorphisms (SNPs) that explain a majority of the common haplotypes in the sequence (or a lower-than-usual number of unique haplotypes). In 2001, Patil et al. proposed the following definition of the term: "Suppose we have a number of haplotypes consisting of a set of consecutive SNPs. A segment of consecutive SNPs is a block if at least α percent of haplotypes are represented more than once".
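Patil et al.'s criterion can be made concrete in a few lines. The sketch below is purely illustrative: the function name, the default α, and the reading of "α percent of haplotypes" as a fraction of chromosomes are my assumptions, not part of any standard tool.

```python
from collections import Counter

def is_block(haplotypes, alpha=0.8):
    """Patil et al.-style test: a segment of consecutive SNPs is a block
    if at least a fraction alpha of the haplotypes occur more than once.

    haplotypes: list of allele strings, one per chromosome, restricted
    to the SNPs in the candidate segment.
    """
    counts = Counter(haplotypes)
    # Chromosomes whose haplotype is represented at least twice.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(haplotypes) >= alpha

# Ten chromosomes over a 4-SNP segment: two common haplotypes plus a singleton,
# so 9 of 10 chromosomes carry a repeated haplotype and the segment qualifies.
segment = ["ACGT"] * 5 + ["ACGA"] * 4 + ["TCGA"]
qualifies = is_block(segment)
```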
Unary coding, or the unary numeral system, is an entropy encoding that represents a natural number, n, with n ones followed by a zero (if the term natural number is understood as non-negative integer) or with n − 1 ones followed by a zero (if the term natural number is understood as strictly positive integer). A unary number's code length is thus n + 1 under the first definition, or n under the second. Unary code when vertical behaves like mercury in a thermometer that gets taller or shorter as n gets bigger or smaller, and so is sometimes called thermometer code. An alternative representation uses n or n − 1 zeros followed by a one, effectively swapping the ones and zeros, without loss of generality. For example, the first ten unary codes (under the non-negative-integer definition) are: 0 → 0, 1 → 10, 2 → 110, 3 → 1110, 4 → 11110, 5 → 111110, 6 → 1111110, 7 → 11111110, 8 → 111111110, 9 → 1111111110. Unary coding is an optimally efficient encoding for the following discrete probability distribution P ⁡ ( n ) = 2 − n {\displaystyle \operatorname {P} (n)=2^{-n}\,} for n = 1 , 2 , 3 , . . . {\displaystyle n=1,2,3,...} . In symbol-by-symbol coding, it is optimal for any geometric distribution P ⁡ ( n ) = ( k − 1 ) k − n {\displaystyle \operatorname {P} (n)=(k-1)k^{-n}\,} for which k ≥ φ = 1.6180339887... (the golden ratio), or, more generally, for any discrete distribution for which P ⁡ ( n ) ≥ P ⁡ ( n + 1 ) + P ⁡ ( n + 2 ) {\displaystyle \operatorname {P} (n)\geq \operatorname {P} (n+1)+\operatorname {P} (n+2)\,} for n = 1 , 2 , 3 , . . . {\displaystyle n=1,2,3,...} . Although it is the optimal symbol-by-symbol code for such probability distributions, Golomb coding achieves better compression for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs. For the same reason, arithmetic coding performs better for general probability distributions, as in the last case above. Unary coding is both a prefix-free code and a self-synchronizing code.
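As a small illustration of the definition (using the "n ones followed by a zero" convention for non-negative integers; the function names are illustrative):

```python
def unary_encode(n: int) -> str:
    """Encode a non-negative integer as n ones followed by a terminating zero."""
    return "1" * n + "0"

def unary_decode(bits: str) -> int:
    """Decode a single unary codeword: count the ones before the first zero."""
    return bits.index("0")

codes = [unary_encode(n) for n in range(4)]  # "0", "10", "110", "1110"
# The code is prefix-free: a stream of codewords splits unambiguously
# at every 0, which is also what makes it self-synchronizing.
```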
Unary code in use today Examples of unary code uses include: In Golomb Rice code, unary encoding is used to encode the quotient part of the Golomb code word. In UTF-8, unary encoding is used in the leading byte of a multi-byte sequence to indicate the number of bytes in the sequence so that the length of the sequence can be determined without examining the continuation bytes. Instantaneously trained neural networks use unary coding for efficient data representation. Unary coding in biological networks Unary coding is used in the neural circuits responsible for birdsong production. The nucleus in the brain of the songbirds that plays a part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding which is an efficient strategy for biological circuits due to its inherent simplicity and robustness. Standard run-length unary codes All binary data is defined by the ability to represent unary numbers in alternating run-lengths of 1s and 0s. This conforms to the standard definition of unary i.e. N digits of the same number 1 or 0. All run-lengths by definition have at least one digit and thus represent strictly positive integers. These codes are guaranteed to end validly on any length of data (when reading arbitrary data) and in the (separate) write cycle allow for the use and transmission of an extra bit (the one used for the first bit) while maintaining overall and per-integer unary code lengths of exactly N. Uniquely decodable non-prefix unary codes Following is an example of uniquely decodable unary codes that is not a prefix code and is not instantaneously decodable (need look-ahead to decode) These codes also (when writing unsigned integers) allow for the use and transmission of an extra bit (the one used for the first bit). 
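The UTF-8 use mentioned above is easy to demonstrate: the run of leading 1 bits in the first byte of a multi-byte sequence is a unary encoding of the sequence length (a leading 0 bit marks a single-byte sequence). A sketch, with an illustrative function name:

```python
def utf8_seq_length(first_byte: int) -> int:
    """Read the unary run of leading 1 bits in a UTF-8 leading byte.

    0xxxxxxx -> 1-byte sequence (run length 0),
    110xxxxx -> 2 bytes, 1110xxxx -> 3 bytes, 11110xxx -> 4 bytes.
    """
    ones = 0
    for shift in range(7, -1, -1):
        if first_byte & (1 << shift):
            ones += 1
        else:
            break
    return 1 if ones == 0 else ones

# The declared length always matches the actual encoded length,
# without inspecting any continuation byte.
for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    assert utf8_seq_length(encoded[0]) == len(encoded)
```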
Thus they are able to transmit m integers of N unary bits each, plus 1 additional bit of information, within m×N bits of data. Symmetric unary codes The following symmetric unary codes can be read and instantaneously decoded in either direction: Canonical unary codes For unary values where the maximum is known, one can use canonical unary codes that are of a somewhat numerical nature and different from character based codes. The largest value is assigned the numerical all-'0' or all-'1' pattern ( 2 n − 1 {\displaystyle \operatorname {2} ^{n}-1\,} ) at the maximum number of digits; each subsequent code then reduces the number of digits by one while increasing/decreasing the result by numerical '1'. Canonical codes can require less processing time to decode when they are processed as numbers rather than as strings. If the number of codes required per symbol length differs from 1, i.e. more non-unary codes of some length are required, those are achieved by increasing/decreasing the values numerically without reducing the length in that case. Generalized unary coding A generalized version of unary coding was presented by Subhash Kak to represent numbers much more efficiently than standard unary coding. Here's an example of generalized unary coding for integers from 0 through 15 that requires only 7 bits (where three bits are arbitrarily chosen in place of a single one in standard unary to show the number). Note that the representation is cyclic where one uses markers to represent higher integers in higher cycles. Generalized unary coding requires that the range of numbers to be represented be pre-specified because this range determines the number of bits that are needed. See also Unary numeral system
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. History The artificial neuron network was invented in 1943 by Warren McCulloch and Walter Pitts in A logical calculus of the ideas immanent in nervous activity. In 1957, Frank Rosenblatt was at the Cornell Aeronautical Laboratory. He simulated the perceptron on an IBM 704. Later, he obtained funding from the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center, to build a custom-made computer, the Mark I Perceptron. It was first publicly demonstrated on 23 June 1960. The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters". Rosenblatt described the details of the perceptron in a 1958 paper. His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, R, which stand for "projection", "association" and "response". He presented at the first international symposium on AI, Mechanisation of Thought Processes, which took place in November 1958. Rosenblatt's project was funded under Contract Nonr-401(40) "Cognitive Systems Research Program", which lasted from 1959 to 1970, and Contract Nonr-2381(00) "Project PARA" ("PARA" means "Perceiving and Recognition Automata"), which lasted from 1957 to 1963. In 1959, the Institute for Defense Analysis awarded his group a $10,000 contract. By September 1961, the ONR had awarded a further $153,000 worth of contracts, with $108,000 committed for 1962.
The ONR research manager, Marvin Denicoff, stated that ONR, instead of ARPA, funded the Perceptron project, because the project was unlikely to produce technological results in the near or medium term. Funding from ARPA went up to the order of millions of dollars, while funding from ONR was on the order of 10,000 dollars. Meanwhile, the head of IPTO at ARPA, J.C.R. Licklider, was interested in 'self-organizing', 'adaptive' and other biologically-inspired methods in the 1950s; but by the mid-1960s he was openly critical of these, including the perceptron. Instead he strongly favored the logical AI approach of Simon and Newell. Mark I Perceptron machine The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark I Perceptron with the project name "Project PARA", designed for image recognition. The machine is currently in the Smithsonian National Museum of American History. The Mark I Perceptron had three layers. One version was implemented as follows: An array of 400 photocells arranged in a 20x20 grid, named "sensory units" (S-units), or "input retina". Each S-unit can connect to up to 40 A-units. A hidden layer of 512 perceptrons, named "association units" (A-units). An output layer of eight perceptrons, named "response units" (R-units). Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with. The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard (see photo), to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned. Rosenblatt was adamant about the random connections, as he believed the retina was randomly connected to the visual cortex, and he wanted his perceptron machine to resemble human visual perception.
The A-units are connected to the R-units, with adjustable weights encoded in potentiometers, and weight updates during learning were performed by electric motors. The hardware details are in an operators' manual. In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The Photo Division of the Central Intelligence Agency, from 1960 to 1964, studied the use of the Mark I Perceptron machine for recognizing militarily interesting silhouetted targets (such as planes and ships) in aerial photos. Principles of Neurodynamics (1962) Rosenblatt described his experiments with many variants of the Perceptron machine in a book Principles of Neurodynamics (1962). The book is a published version of the 1961 report. Among the variants are: "cross-coupling" (connections between units within the same layer) with possibly closed loops, "back-coupling" (connections from units in a later layer to units in a previous layer), four-layer perceptrons where the last two layers have adjustable weights (and thus a proper multilayer perceptron), incorporating time-delays to perceptron units, to allow for processing sequential data, analyzing audio (instead of images). The machine was shipped from Cornell to the Smithsonian in 1967, under a government transfer administered by the Office of Naval Research. Perceptrons (1969) Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns.
This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron). Single-layer perceptrons are only capable of learning linearly separable patterns. For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems. In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s. This text was reprinted in 1987 as "Perceptrons - Expanded Edition" where some errors in the original text are shown and corrected. Subsequent work Rosenblatt continued working on perceptrons despite diminishing funding. The last attempt was Tobermory, built between 1961 and 1967 for speech recognition. It occupied an entire room. It had 4 layers with 12,000 weights implemented by toroidal magnetic cores.
By the time of its completion, simulation on digital computers had become faster than purpose-built perceptron machines. He died in a boating accident in 1971. The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al. Margin bounds guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013) who extend previous results and give new and more favorable L1 bounds. The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons. The solution spaces of decision boundaries for all binary functions and learning behaviors have also been studied. Definition In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input x {\displaystyle \mathbf {x} } (a real-valued vector) to an output value f ( x ) {\displaystyle f(\mathbf {x} )} (a single binary value): f ( x ) = h ( w ⋅ x + b ) {\displaystyle f(\mathbf {x} )=h(\mathbf {w} \cdot \mathbf {x} +b)} where h {\displaystyle h} is the Heaviside step function (where an input of > 0 {\textstyle >0} outputs 1; otherwise 0 is the output), w {\displaystyle \mathbf {w} } is a vector of real-valued weights, w ⋅ x {\displaystyle \mathbf {w} \cdot \mathbf {x} } is the dot product ∑ i = 1 m w i x i {\textstyle \sum _{i=1}^{m}w_{i}x_{i}} , where m is the number of inputs to the perceptron, and b is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value.
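The threshold function defined above translates directly into code. This is a minimal sketch with illustrative names, using weights that happen to make the perceptron compute logical AND:

```python
def predict(weights, bias, x):
    """Perceptron output f(x) = h(w . x + b), with h the Heaviside step."""
    activation = sum(w_i * x_i for w_i, x_i in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

# With w = (1, 1) and b = -1.5, the decision boundary x1 + x2 = 1.5
# separates (1, 1) from the other three binary inputs: logical AND.
w, b = [1.0, 1.0], -1.5
outputs = [predict(w, b, [x1, x2]) for x1 in (0, 1) for x2 in (0, 1)]
# outputs is the AND truth table [0, 0, 0, 1]
```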
Equivalently, since w ⋅ x + b = ( w , b ) ⋅ ( x , 1 ) {\displaystyle \mathbf {w} \cdot \mathbf {x} +b=(\mathbf {w} ,b)\cdot (\mathbf {x} ,1)} , we can add the bias term b {\displaystyle b} as another weight w m + 1 {\displaystyle \mathbf {w} _{m+1}} and add a coordinate 1 {\displaystyle 1} to each input x {\displaystyle \mathbf {x} } , and then write it as a linear classifier that passes the origin: f ( x ) = h ( w ⋅ x ) {\displaystyle f(\mathbf {x} )=h(\mathbf {w} \cdot \mathbf {x} )} The binary value of f ( x ) {\displaystyle f(\mathbf {x} )} (0 or 1) is used to perform binary classification on x {\displaystyle \mathbf {x} } as either a positive or a negative instance. Spatially, the bias shifts the position (though not the orientation) of the planar decision boundary. In the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network. Power of representation Information theory From an information theory point of view, a single perceptron with K inputs has a capacity of 2K bits of information. This result is due to Thomas Cover. Specifically let T ( N , K ) {\displaystyle T(N,K)} be the number of ways to linearly separate N points in K dimensions, then T ( N , K ) = { 2 N K ≥ N 2 ∑ k = 0 K − 1 ( N − 1 k ) K < N {\displaystyle T(N,K)=\left\{{\begin{array}{cc}2^{N}&K\geq N\\2\sum _{k=0}^{K-1}\left({\begin{array}{c}N-1\\k\end{array}}\right)&K<N\end{array}}\right.} When K is large, T ( N , K ) / 2 N {\displaystyle T(N,K)/2^{N}} is very close to one when N ≤ 2 K {\displaystyle N\leq 2K} , but very close to zero when N > 2 K {\displaystyle N>2K} . 
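Cover's counting function quoted above can be evaluated directly. The sketch below is an illustration of the formula, including the fact that at N = 2K exactly half of all 2^N labelings are linearly separable:

```python
from math import comb

def T(N, K):
    """Number of linearly separable dichotomies of N points in general
    position in K dimensions, per Cover's function counting theorem."""
    if K >= N:
        return 2 ** N
    return 2 * sum(comb(N - 1, k) for k in range(K))

# Below capacity (N <= K) every labelling is realizable: T(N, K) = 2^N.
# At N = 2K the fraction T(N, K) / 2^N is exactly one half, the midpoint
# of the sharp transition described in the text.
```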
In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on N points when N ≤ 2 K {\displaystyle N\leq 2K} , but almost certainly not when N > 2 K {\displaystyle N>2K} . Boolean function When operating on only binary inputs, a perceptron is called a linearly separable Boolean function, or threshold Boolean function. The sequence of numbers of threshold Boolean functions on n inputs is OEIS A000609. The value is only known exactly up to n = 9 {\displaystyle n=9} case, but the order of magnitude is known quite exactly: it has upper bound 2 n 2 − n log 2 ⁡ n + O ( n ) {\displaystyle 2^{n^{2}-n\log _{2}n+O(n)}} and lower bound 2 n 2 − n log 2 ⁡ n − O ( n ) {\displaystyle 2^{n^{2}-n\log _{2}n-O(n)}} . Any Boolean linear threshold function can be implemented with only integer weights. Furthermore, the number of bits necessary and sufficient for representing a single integer weight parameter is Θ ( n ln ⁡ n ) {\displaystyle \Theta (n\ln n)} . Universal approximation theorem A single perceptron can learn to classify any half-space. It cannot solve any linearly nonseparable vectors, such as the Boolean exclusive-or problem (the famous "XOR problem"). A perceptron network with one hidden layer can learn to classify any compact subset arbitrarily closely. Similarly, it can also approximate any compactly-supported continuous function arbitrarily closely. This is essentially a special case of the theorems by George Cybenko and Kurt Hornik. Conjunctively local perceptron Perceptrons (Minsky and Papert, 1969) studied the kind of perceptron networks necessary to learn various Boolean functions. Consider a perceptron network with n {\displaystyle n} input units, one hidden layer, and one output, similar to the Mark I Perceptron machine. It computes a Boolean function of type f : 2 n → 2 {\displaystyle f:2^{n}\to 2} . 
They call a function conjunctively local of order k {\displaystyle k} , iff there exists a perceptron network such that each unit in the hidden layer connects to at most k {\displaystyle k} input units. Theorem. (Theorem 3.1.1): The parity function is conjunctively local of order n {\displaystyle n} . Theorem. (Section 5.5): The connectedness function is conjunctively local of order Ω ( n 1 / 2 ) {\displaystyle \Omega (n^{1/2})} . Learning algorithm for a single-layer perceptron Below is an example of a learning algorithm for a single-layer perceptron with a single output unit. For a single-layer perceptron with multiple output units, since the weights of one output unit are completely separate from all the others', the same algorithm can be run for each output unit. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable. Nonetheless, the learning algorithm described in the steps below will often work, even for multilayer perceptrons with nonlinear activation functions. When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation. Definitions We first define some variables: r {\displaystyle r} is the learning rate of the perceptron. Learning rate is a positive number usually chosen to be less than 1. The larger the value, the greater the chance for volatility in the weight changes. y = f ( z ) {\displaystyle y=f(\mathbf {z} )} denotes the output from the perceptron for an input vector z {\displaystyle \mathbf {z} } . 
D = { ( x 1 , d 1 ) , … , ( x s , d s ) } {\displaystyle D=\{(\mathbf {x} _{1},d_{1}),\dots ,(\mathbf {x} _{s},d_{s})\}} is the training set of s {\displaystyle s} samples, where: x j {\displaystyle \mathbf {x} _{j}} is the n {\displaystyle n} -dimensional input vector. d j {\displaystyle d_{j}} is the desired output value of the perceptron for that input. We show the values of the features as follows: x j , i {\displaystyle x_{j,i}} is the value of the i {\displaystyle i} th feature of the j {\displaystyle j} th training input vector. x j , 0 = 1 {\displaystyle x_{j,0}=1} . To represent the weights: w i {\displaystyle w_{i}} is the i {\displaystyle i} th value in the weight vector, to be multiplied by the value of the i {\displaystyle i} th input feature. Because x j , 0 = 1 {\displaystyle x_{j,0}=1} , the w 0 {\displaystyle w_{0}} is effectively a bias that we use instead of the bias constant b {\displaystyle b} . To show the time-dependence of w {\displaystyle \mathbf {w} } , we use: w i ( t ) {\displaystyle w_{i}(t)} is the weight i {\displaystyle i} at time t {\displaystyle t} . Steps The algorithm updates the weights after every training sample in step 2b. Convergence of one perceptron on a linearly separable dataset A single perceptron is a linear classifier. It can only reach a stable state if all input vectors are classified correctly. In case the training set D is not linearly separable, i.e. if the positive examples cannot be separated from the negative examples by a hyperplane, then the algorithm would not converge since there is no solution. Hence, if linear separability of the training set is not known a priori, one of the training variants below should be used. Detailed analysis and extensions to the convergence theorem are in Chapter 11 of Perceptrons (1969). 
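The per-sample update described above (step 2b) can be written out for a single output unit. The dataset, learning rate, and epoch count below are illustrative assumptions; logical OR is used because it is linearly separable, so the algorithm converges:

```python
def train_perceptron(data, n_features, r=0.1, epochs=20):
    """Online perceptron learning: w_i(t+1) = w_i(t) + r*(d_j - y_j)*x_{j,i}.

    Each sample is prepended with a constant 1 so that w[0] plays the
    role of the bias, matching the x_{j,0} = 1 convention above.
    """
    w = [0.0] * (n_features + 1)
    for _ in range(epochs):
        for x, d in data:
            x = [1.0] + list(x)
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if y != d:  # update only on mistakes
                w = [wi + r * (d - y) * xi for wi, xi in zip(w, x)]
    return w

# Logical OR of two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(data, 2)
```

After training, the learned weights classify all four OR examples correctly, as the convergence theorem guarantees for a separable set.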
Linear separability is testable in time min ( O ( n d / 2 ) , O ( d 2 n ) , O ( n d − 1 ln ⁡ n ) ) {\displaystyle \min(O(n^{d/2}),O(d^{2n}),O(n^{d-1}\ln n))} , where n {\displaystyle n} is the number of data points, and d {\displaystyle d} is the dimension of each point. If the training set is linearly separable, then the perceptron is guaranteed to converge after making finitely many mistakes. The theorem is proved by Rosenblatt et al. The following simple proof is due to Novikoff (1962). The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negative dot product, and thus can be bounded above by O(√t), where t is the number of changes to the weight vector. However, it can also be bounded below by O(t) because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector. While the perceptron algorithm is guaranteed to converge on some solution in the case of a linearly separable training set, it may still pick any solution and problems may admit many solutions of varying quality. The perceptron of optimal stability, nowadays better known as the linear support-vector machine, was designed to solve this problem (Krauth and Mezard, 1987). Perceptron cycling theorem When the dataset is not linearly separable, then there is no way for a single perceptron to converge. However, we still have This is proved first by Bradley Efron. Learning a Boolean function Consider a dataset where the x {\displaystyle x} are from { − 1 , + 1 } n {\displaystyle \{-1,+1\}^{n}} , that is, the vertices of an n-dimensional hypercube centered at origin, and y = θ ( x i ) {\displaystyle y=\theta (x_{i})} . That is, all data points with positive x i {\displaystyle x_{i}} have y = 1 {\displaystyle y=1} , and vice versa. 
By the perceptron convergence theorem, a perceptron would converge after making at most n {\displaystyle n} mistakes. If we were to write a logical program to perform the same task, each positive example shows that one of the coordinates is the right one, and each negative example shows that its complement is a positive example. By collecting all the known positive examples, we eventually eliminate all but one coordinate, at which point the dataset is learned. This bound is asymptotically tight in terms of the worst case. In the worst case, the first presented example is entirely new, and gives n {\displaystyle n} bits of information, but each subsequent example would differ minimally from previous examples, and gives 1 bit each. After n + 1 {\displaystyle n+1} examples, there are 2 n {\displaystyle 2n} bits of information, which matches the information capacity of the perceptron ( 2 n {\displaystyle 2n} bits). However, it is not tight in terms of expectation if the examples are presented uniformly at random, since the first would give n {\displaystyle n} bits, the second n / 2 {\displaystyle n/2} bits, and so on, taking O ( ln ⁡ n ) {\displaystyle O(\ln n)} examples in total.
In the linearly separable case, it will solve the training problem – if desired, even with optimal stability (maximum margin between the classes). For non-separable data sets, it will return a solution with a computable small number of misclassifications. In all cases, the algorithm gradually approaches the solution in the course of learning, without memorizing previous states and without stochastic jumps. Convergence is to global optimality for separable data sets and to local optimality for non-separable data sets.

The voted perceptron (Freund and Schapire, 1999) is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weight vector with the final weights of the last perceptron. Each perceptron is also given another weight corresponding to how many examples it correctly classifies before wrongly classifying one, and at the end the output is a weighted vote over all perceptrons.

In separable problems, perceptron training can also aim at finding the largest separating margin between the classes. The so-called perceptron of optimal stability can be determined by means of iterative training and optimization schemes, such as the Min-Over algorithm (Krauth and Mezard, 1987) or the AdaTron (Anlauf and Biehl, 1989). AdaTron uses the fact that the corresponding quadratic optimization problem is convex. The perceptron of optimal stability, together with the kernel trick, are the conceptual foundations of the support-vector machine.

The α-perceptron further used a pre-processing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classify analogue patterns, by projecting them into a binary space. In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable.
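The voted perceptron described above can be sketched as follows (a minimal illustrative version; the function names and the bias-free formulation are our own simplifications):

```python
def train_voted_perceptron(samples, epochs=10):
    """Freund-Schapire voted perceptron (minimal sketch, no bias term).

    Returns a list of (weights, survival_count) pairs; each count is the
    number of examples that perceptron classified correctly before making
    its first mistake.
    """
    d = len(samples[0][0])
    w = [0.0] * d
    c = 0                      # survival count of the current perceptron
    voters = []
    for _ in range(epochs):
        for x, y in samples:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                voters.append((w, c))          # retire the current perceptron
                w = [wi + y * xi for wi, xi in zip(w, x)]  # start a new one
                c = 1
            else:
                c += 1
    voters.append((w, c))
    return voters

def predict_voted(voters, x):
    """Weighted vote: each retired perceptron votes with its survival count."""
    s = sum(c * (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1)
            for w, c in voters)
    return 1 if s > 0 else -1
```

Perceptrons that survived many examples dominate the vote, which is what gives the method its stability relative to returning only the last weight vector.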
Another way to solve nonlinear problems without using multiple layers is to use higher-order networks (sigma-pi unit). In this type of network, each element in the input vector is extended with each pairwise combination of multiplied inputs (second order). This can be extended to an n-order network. It should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, if we had the prior constraint that the data come from equi-variant Gaussian distributions, the linear separation in the input space is optimal, and the nonlinear solution is overfitted. Other linear classification algorithms include Winnow, the support-vector machine, and logistic regression.

Multiclass perceptron

Like most other techniques for training linear classifiers, the perceptron generalizes naturally to multiclass classification. Here, the input x and the output y are drawn from arbitrary sets. A feature representation function f(x, y) maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector w, but now the resulting score is used to choose among many possible outputs:

    ŷ = argmax_y f(x, y) · w

Learning again iterates over the examples, predicting an output for each, leaving the weights unchanged when the predicted output matches the target, and changing them when it does not. The update becomes:

    w_{t+1} = w_t + f(x, y) − f(x, ŷ)

This multiclass feedback formulation reduces to the original perceptron when x is a real-valued vector, y is chosen from {0, 1}, and f(x, y) = yx.
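The multiclass update rule can be sketched directly. The code below is an illustrative implementation, not from the article; the block-structured feature map f(x, y), which simply copies x into the slot of the weight vector belonging to class y, is one common concrete choice:

```python
def feature(x, y, n_classes):
    """Joint feature map f(x, y): place x in the block belonging to class y."""
    d = len(x)
    phi = [0.0] * (d * n_classes)
    phi[y * d:(y + 1) * d] = x
    return phi

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def train_multiclass(samples, n_classes, epochs=20):
    """Multiclass perceptron: w += f(x, y) - f(x, y_hat) on each mistake."""
    d = len(samples[0][0])
    w = [0.0] * (d * n_classes)
    for _ in range(epochs):
        for x, y in samples:
            y_hat = max(range(n_classes),
                        key=lambda c: dot(feature(x, c, n_classes), w))
            if y_hat != y:
                # Reward the true class's features, penalize the prediction's.
                for i, v in enumerate(feature(x, y, n_classes)):
                    w[i] += v
                for i, v in enumerate(feature(x, y_hat, n_classes)):
                    w[i] -= v
    return w

def predict(w, x, n_classes):
    return max(range(n_classes), key=lambda c: dot(feature(x, c, n_classes), w))

samples = [((1.0, 0.0), 0), ((0.0, 1.0), 1), ((-1.0, -1.0), 2)]
w = train_multiclass(samples, n_classes=3)
```

With this particular feature map, the scheme is exactly a one-weight-vector-per-class perceptron; richer f(x, y) choices give the structured perceptron used in NLP.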
For certain problems, input/output representations and features can be chosen so that argmax_y f(x, y) · w can be found efficiently even though y is chosen from a very large or even infinite set. Since 2002, perceptron training has become popular in the field of natural language processing for such tasks as part-of-speech tagging and syntactic parsing (Collins, 2002). It has also been applied to large-scale machine learning problems in a distributed computing setting.

References

Further reading

Aizerman, M. A. and Braverman, E. M. and Lev I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
Rosenblatt, Frank (1958), The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain, Cornell Aeronautical Laboratory, Psychological Review, v65, No. 6, pp. 386–408. doi:10.1037/h0042519.
Rosenblatt, Frank (1962), Principles of Neurodynamics. Washington, DC: Spartan Books.
Minsky, M. L. and Papert, S. A. 1969. Perceptrons. Cambridge, MA: MIT Press.
Gallant, S. I. (1990). Perceptron-based learning algorithms. IEEE Transactions on Neural Networks, vol. 1, no. 2, pp. 179–191.
Olazaran Rodriguez, Jose Miguel. A historical sociology of neural network research. PhD Dissertation. University of Edinburgh, 1991.
Mohri, Mehryar and Rostamizadeh, Afshin (2013). Perceptron Mistake Bounds arXiv:1305.0208, 2013.
Novikoff, A. B. (1962). On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata, 12, 615–622. Polytechnic Institute of Brooklyn.
Widrow, B., Lehr, M.A., "30 years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation," Proc. IEEE, vol 78, no 9, pp. 1415–1442, (1990).
Collins, M. 2002.
Discriminative training methods for hidden Markov models: Theory and experiments with the perceptron algorithm, in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '02).
Yin, Hongfeng (1996), Perceptron-Based Algorithms and Analysis, Spectrum Library, Concordia University, Canada

External links

A Perceptron implemented in MATLAB to learn binary NAND function
Chapter 3 Weighted networks - the perceptron and chapter 4 Perceptron learning of Neural Networks - A Systematic Introduction by Raúl Rojas (ISBN 978-3-540-60505-8)
History of perceptrons
Mathematics of multilayer perceptrons
Applying a perceptron model using scikit-learn - https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html
Wikipedia
In fluid dynamics, the Graetz number (Gz) is a dimensionless number that characterizes laminar flow in a conduit. The number is defined as:

    Gz = (D_H / L) · Re · Pr

where

    D_H is the diameter in round tubes, or the hydraulic diameter in arbitrary cross-section ducts,
    L is the length,
    Re is the Reynolds number, and
    Pr is the Prandtl number.

This number is useful in determining the thermally developing flow entrance length in ducts. A Graetz number of approximately 1000 or less is the point at which flow would be considered thermally fully developed. When used in connection with mass transfer, the Prandtl number is replaced by the Schmidt number, Sc, which expresses the ratio of the momentum diffusivity to the mass diffusivity:

    Gz = (D_H / L) · Re · Sc

The quantity is named after the physicist Leo Graetz.
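As a quick sketch, the definition translates directly into code (the numerical values below are illustrative, not from the article):

```python
def graetz_number(d_h, length, reynolds, prandtl):
    """Gz = (D_H / L) * Re * Pr  (heat-transfer form).

    d_h: hydraulic diameter; length: duct length (same units as d_h).
    For the mass-transfer form, pass the Schmidt number Sc in place of Pr.
    """
    return (d_h / length) * reynolds * prandtl

# Illustrative values: a water-like laminar flow in a 2 cm round tube, 1 m long.
gz = graetz_number(d_h=0.02, length=1.0, reynolds=1500, prandtl=7.0)
thermally_developed = gz <= 1000  # rule-of-thumb threshold from the text
```

Because only the ratio D_H/L enters, the same Reynolds and Prandtl numbers give a smaller Gz (more fully developed flow) the longer the duct.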
Personoid is a concept coined by Stanisław Lem, a Polish science-fiction writer, in Non Serviam, from his book A Perfect Vacuum (1971). His personoids are an abstraction of the functions of the human mind, and they live in computers; they do not need any human-like physical body. In cognitive and software modeling, the personoid is a research approach to the development of intelligent autonomous agents. Within the framework of the IPK (Information, Preferences, Knowledge) architecture, it is a framework for an abstract intelligent agent with cognitive and structural intelligence. It can be seen as an essence of highly intelligent entities. From the philosophical and systemics perspectives, personoid societies can also be seen as the carriers of a culture. According to N. Gessler, the study of personoids can be a basis for research on artificial culture and cultural evolution.

Personoids in TV and cinema

Welt am Draht (1973)
The Thirteenth Floor (1999)

See also

Android
Humanoid
Intelligence
Artificial intelligence
Culture
Computer science
Cognitive science
Anticipatory science
Memetics

References

Stanisław Lem's book Próżnia Doskonała (1971). The collection of book reviews of nonexistent books. Translated into English by Michael Kandel as A Perfect Vacuum (1983).
Personetics. Personoids Organizations Framework: An Approach to Highly Autonomous Software Architectures Archived 2006-09-28 at the Wayback Machine, ENEA Report (1998).
Paradigms of Personoids, Adam M. Gadomski 1997 Archived 2015-08-26 at the Wayback Machine.
Computer Models of Cultural Evolution. Nicholas Gessler. In Evolution in the Computer Age - Proceedings of the Center for the Study of Evolution and the Origin of Life, edited by David B. and Gary B. Fogel. Jones and Bartlett Publishers, Sudbury, Massachusetts (2002).
Léon-Yves Bottou (French pronunciation: [leɔ̃ bɔtu]; born 1965) is a researcher best known for his work in machine learning and data compression. His work presents stochastic gradient descent as a fundamental learning algorithm. He is also one of the main creators of the DjVu image compression technology (together with Yann LeCun and Patrick Haffner), and the maintainer of DjVuLibre, the open source implementation of DjVu. He is the original developer of the Lush programming language.

Life

Léon Bottou was born in France in 1965. He obtained the Diplôme d'Ingénieur from École Polytechnique in 1987, a Magistère de Mathématiques Fondamentales et Appliquées et d'Informatique from École Normale Supérieure in 1988, a Diplôme d'Études Approfondies in Computer Science in 1988, and a PhD from Université Paris-Sud in 1991. His master's thesis concerned using time delay neural networks for speech recognition. He then joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, where he collaborated with Vladimir Vapnik on local learning algorithms. In 1992, he returned to France and founded Neuristique S.A., a company that produced machine learning tools and one of the first data mining software packages. In 1995, he returned to Bell Laboratories, where he developed a number of new machine learning methods, such as Graph Transformer Networks (similar to conditional random fields), and applied them to handwriting recognition and OCR. The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s. In 1996, he joined AT&T Labs and worked primarily on the DjVu image compression technology, which is used by some websites, notably the Internet Archive, to distribute scanned documents.
Between 2002 and 2010, he was a research scientist at NEC Laboratories in Princeton, New Jersey, where he focused on the theory and practice of machine learning with large-scale datasets, on-line learning, and stochastic optimization methods. He developed the open source software LaSVM for fast large-scale support vector machines, and stochastic gradient descent software for training linear SVMs and conditional random fields. In 2010 he joined the Microsoft adCenter in Redmond, Washington, and in 2012 became a Principal Researcher at Microsoft Research in New York City. In March 2015 he joined Facebook Artificial Intelligence Research, also in New York City, as a research lead. His work in gradient descent argued that both stochastic gradient descent and batch gradient descent reach similar levels of loss with the same number of training samples, but SGD is faster when running on large datasets. He also argued that second-order gradient descent methods, such as quasi-Newton methods, can be beneficial compared to plain SGD. See (Bottou et al 2018) for a review. He was program chair of the 2013 Conference on Neural Information Processing Systems and the 2009 International Conference on Machine Learning. He is an associate editor of the IEEE's Transactions on Pattern Analysis and Machine Intelligence, the IAPR's Pattern Recognition Letters, and the independently published Journal of Machine Learning Research. In 2007, he received one of the first Blavatnik Awards for Young Scientists from the Blavatnik Family Foundation and the New York Academy of Sciences.

References

External links

Léon Bottou's personal website
Léon Bottou publications indexed by Google Scholar
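The SGD-versus-batch argument can be illustrated with a minimal least-squares sketch (illustrative code of our own, not Bottou's): SGD updates the parameter after every single sample, so on large datasets it makes progress long before a full pass completes, while batch descent takes one averaged step per pass.

```python
def sgd_linear(samples, lr=0.01, epochs=50):
    """Stochastic gradient descent for 1-d least squares y ≈ a * x.

    The parameter is updated after every sample, so each pass over the
    data applies many small corrective steps.
    """
    a = 0.0
    for _ in range(epochs):
        for x, y in samples:
            grad = 2 * (a * x - y) * x   # d/da of (a*x - y)^2
            a -= lr * grad
    return a

def batch_linear(samples, lr=0.01, epochs=50):
    """Batch gradient descent: one averaged gradient step per full pass."""
    a = 0.0
    n = len(samples)
    for _ in range(epochs):
        grad = sum(2 * (a * x - y) * x for x, y in samples) / n
        a -= lr * grad
    return a

data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]  # noiseless data, true slope 3
```

On this toy problem both methods recover the slope; the practical difference Bottou emphasized appears when each full pass over the data is expensive.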
Kutlu Özergin Ülgen is a Turkish biochemical engineer researching pharmacophore modelling to identify pharmacological chaperones used to treat infectious diseases, genetic diseases, and cancer. Ülgen is a professor in the department of chemical engineering at Boğaziçi University.

Education

Özergin completed a B.S. (1987) and M.S. (1989) in chemical engineering at Boğaziçi University. She earned a Ph.D. in chemical engineering at the University of Manchester in 1992. Özergin researched Streptomyces coelicolor antibiotic production and bioreactors. Her dissertation was titled Study of antibiotic synthesis by free and immobilised Streptomyces coelicolor A3(2). Özergin's doctoral advisor was Ferda Mavituna.

Career

In 1992, Ülgen joined the faculty at Boğaziçi University as an instructor in the department of chemical engineering. She was promoted to assistant professor in 1994, associate professor in 1996, and professor in 2002. She served as head of the chemical engineering department from 2009 to 2011, and as associate dean of the faculty of engineering from 2012 to December 2015. Ülgen researches pharmacophore modelling to identify pharmacological chaperones to treat infectious diseases, genetic diseases, and cancer. She uses a systems biology approach to investigate the reconstruction of signaling networks in yeast, worms, and humans. She also researches protein purification, computational physiology, and metabolic pathway engineering.

References

External links

Kutlu Ö. Ülgen publications indexed by Google Scholar
In computer science and operations research, a memetic algorithm (MA) is an extension of an evolutionary algorithm (EA) that aims to accelerate the evolutionary search for the optimum. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. An MA uses one or more suitable heuristics or local search techniques to improve the quality of solutions generated by the EA and to speed up the search. The effects on the reliability of finding the global optimum depend on both the use case and the design of the MA. Memetic algorithms represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. Quite often, MAs are also referred to in the literature as Baldwinian evolutionary algorithms, Lamarckian EAs, cultural algorithms, or genetic local search.

Introduction

Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term memetic algorithm (MA) was introduced by Pablo Moscato in his technical report in 1989, where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain-specific (local search) heuristics are captured within memetic algorithms, thus rendering a methodology that balances well between generality and problem specificity. This two-stage nature makes them a special case of dual-phase evolution.
In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general converging to high-quality solutions more efficiently than their conventional evolutionary counterparts. In general, using the ideas of memetics within a computational framework is called memetic computing or memetic computation (MC). With MC, the traits of universal Darwinism are more appropriately captured. Viewed in this perspective, MA is a more constrained notion of MC. More specifically, MA covers one area of MC, in particular dealing with areas of evolutionary algorithms that marry other deterministic refinement techniques for solving optimization problems. MC extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.

Theoretical Background

The no-free-lunch theorems of optimization and search state that all optimization strategies are equally effective with respect to the set of all optimization problems. Conversely, this means that one can expect the following: the more efficiently an algorithm solves a problem or class of problems, the less general it is and the more problem-specific knowledge it builds on. This insight leads directly to the recommendation to complement generally applicable metaheuristics with application-specific methods or heuristics, which fits well with the concept of MAs.

The development of MAs

1st generation

Pablo Moscato characterized an MA as follows: "Memetic algorithms are a marriage between a population-based global search and the heuristic local search made by each of the individuals. ... The mechanisms to do local search can be to reach a local optimum or to improve (regarding the objective cost function) up to a predetermined level." And he emphasizes: "I am not constraining an MA to a genetic representation."
Although this original definition of MA encompasses characteristics of cultural evolution (in the form of local refinement) in the search cycle, it may not qualify as a true evolving system according to universal Darwinism, since all the core principles of inheritance/memetic transmission, variation, and selection are missing. This suggests why the term MA stirred up criticism and controversy among researchers when first introduced. The following pseudo code corresponds to this general definition of an MA:

Pseudo code

    Procedure Memetic Algorithm
        Initialize: Generate an initial population, evaluate the individuals and assign a quality value to them;
        while Stopping conditions are not satisfied do
            Evolve a new population using stochastic search operators.
            Evaluate all individuals in the population and assign a quality value to them.
            Select the subset of individuals, Ω_il, that should undergo the individual improvement procedure.
            for each individual in Ω_il do
                Perform individual learning using meme(s) with frequency or probability f_il, with an intensity of t_il.
                Proceed with Lamarckian or Baldwinian learning.
            end for
        end while

Lamarckian learning in this context means to update the chromosome according to the improved solution found by the individual learning step, while Baldwinian learning leaves the chromosome unchanged and uses only the improved fitness. This pseudo code leaves open which steps are based on the fitness of the individuals and which are not. In question are the evolving of the new population and the selection of Ω_il.
Since most MA implementations are based on EAs, the pseudo code of a corresponding representative of the first generation is also given here, following Krasnogor:

Pseudo code

    Procedure Memetic Algorithm Based on an EA
        Initialization:
            t = 0;                                  // Initialization of the generation counter
            Randomly generate an initial population P(t);
            Compute the fitness f(p) ∀ p ∈ P(t);
        while Stopping conditions are not satisfied do
            Selection: According to f(p), choose a subset of P(t) and store it in M(t);
            Offspring: Recombine and mutate individuals p ∈ M(t) and store them in M'(t);
            Learning: Improve p' by local search or heuristic, ∀ p' ∈ M'(t);
            Evaluation: Compute the fitness f(p'), ∀ p' ∈ M'(t);
            if Lamarckian learning then
                Update chromosome of p' according to improvement, ∀ p' ∈ M'(t);
            fi
            New generation: Generate P(t+1) by selecting some individuals from P(t) and M'(t);
            t = t + 1;                              // Increment the generation counter
        end while
        Return best individual p ∈ P(t−1) as result;

There are some alternatives for this MA scheme. For example: All or some of the initial individuals may be improved by the meme(s). The parents may be locally improved instead of the offspring. Instead of all offspring, only a randomly selected or fitness-dependent fraction may undergo local improvement. The latter requires the evaluation of the offspring in M'(t) prior to the learning step.
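The EA-based scheme above can be sketched concretely. The following is a minimal illustrative MA of our own (assuming a hill-climbing meme, averaging recombination, Gaussian mutation, and Lamarckian learning), maximizing a one-dimensional toy objective:

```python
import random

def fitness(x):
    """Toy objective: maximize -(x - 2)^2, optimum at x = 2."""
    return -(x - 2.0) ** 2

def local_search(x, step=0.1, iters=10):
    """The 'meme': simple hill climbing around x."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def memetic_algorithm(pop_size=10, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population becomes the parent pool.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        # Offspring: recombine (average) two parents and mutate with noise.
        children = [(rng.choice(parents) + rng.choice(parents)) / 2
                    + rng.gauss(0, 0.5) for _ in range(pop_size)]
        # Lamarckian learning: each child is replaced by its improved form.
        children = [local_search(c) for c in children]
        # New generation: elitist selection from old population plus offspring.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic_algorithm()
```

Switching to Baldwinian learning would mean ranking children by `fitness(local_search(c))` while keeping the unimproved `c` in the population.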
2nd generation

Multi-meme, hyper-heuristic, and meta-Lamarckian MAs are referred to as second-generation MAs, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MAs, the memetic material is encoded as part of the genotype. Subsequently, the decoded meme of each respective individual/chromosome is used to perform a local refinement. The memetic material is then transmitted through a simple inheritance mechanism from parent to offspring(s). On the other hand, in hyper-heuristic and meta-Lamarckian MAs, the pool of candidate memes considered will compete, based on their past merits in generating local improvements through a reward mechanism, deciding on which meme is selected to proceed with future local refinements. Memes with a higher reward have a greater chance of continuing to be used. For a review of second-generation MAs, i.e., MAs considering multiple individual learning methods within an evolutionary system, the reader is referred to the literature.

3rd generation

Co-evolution and self-generating MAs may be regarded as 3rd-generation MAs, where all three principles satisfying the definitions of a basic evolving system have been considered. In contrast to 2nd-generation MAs, which assume that the memes to be used are known a priori, 3rd-generation MAs utilize a rule-based local search to supplement candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.

Some design notes

The learning method/meme used has a significant impact on the improvement results, so care must be taken in deciding which meme or memes to use for a particular optimization problem. The frequency and intensity of individual learning directly define the degree of evolution (exploration) against individual learning (exploitation) in the MA search, for a given fixed, limited computational budget.
Clearly, more intense individual learning provides a greater chance of convergence to the local optima but limits the amount of evolution that may be expended without incurring excessive computational resources. Therefore, care should be taken when setting these two parameters to balance the computational budget available in achieving maximum search performance. When only a portion of the population individuals undergo learning, the issue of which subset of individuals to improve needs to be considered to maximize the utility of MA search. Last but not least, it has to be decided whether the respective individual should be changed by the learning success (Lamarckian learning) or not (Baldwinian learning). Thus, the following five design questions must be answered, the first of which is addressed by all of the above 2nd-generation representatives during an MA run, while the extended form of meta-Lamarckian learning expands this to the first four design decisions.

Selection of an individual learning method or meme to be used for a particular problem or individual

In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods. Examples of individual learning strategies include hill climbing, the simplex method, Newton/quasi-Newton methods, interior point methods, the conjugate gradient method, line search, and other local heuristics. Note that most of the common individual learning methods are deterministic. In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics (which can be deterministic or stochastic) that are tailored to a specific problem of interest. Typical heuristic procedures and schemes include k-gene exchange, edge exchange, first-improvement, and many others.
Determination of the individual learning frequency

One of the first issues pertinent to memetic algorithm design is to consider how often the individual learning should be applied, i.e., the individual learning frequency. In one case, the effect of individual learning frequency on MA search performance was considered, where various configurations of the individual learning frequency at different stages of the MA search were investigated. Conversely, it was shown elsewhere that it may be worthwhile to apply individual learning to every individual if the computational complexity of the individual learning is relatively low.

Selection of the individuals to which individual learning is applied

On the issue of selecting appropriate individuals among the EA population that should undergo individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land extending the work to combinatorial optimization problems. Bambha et al. introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.

Specification of the intensity of individual learning

Individual learning intensity, t_il, is the amount of computational budget allocated to an iteration of individual learning, i.e., the maximum computational budget allowable for individual learning to expend on improving a single solution.

Choice of Lamarckian or Baldwinian learning

It must be decided whether a found improvement affects only the fitness of the individual (Baldwinian learning) or whether the individual itself is also adapted accordingly (Lamarckian learning). In the case of an EA, this would mean an adjustment of the genotype.
This question was already controversially discussed for EAs in the literature in the 1990s, with the conclusion that the specific use case plays a major role. The background of the debate is that genome adaptation may promote premature convergence. This risk can be effectively mitigated by other measures to better balance breadth and depth searches, such as the use of structured populations.

Applications

Memetic algorithms have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also employed. Researchers have used memetic algorithms to tackle many classical NP problems. To cite some of them: graph partitioning, multidimensional knapsack, travelling salesman problem, quadratic assignment problem, set cover problem, minimal graph coloring, max independent set problem, bin packing problem, and generalized assignment problem. More recent applications include (but are not limited to) business analytics and data science, training of artificial neural networks, pattern recognition, robotic motion planning, beam orientation, circuit design, electric service restoration, medical expert systems, single machine scheduling, automatic timetabling (notably, the timetable for the NHL), manpower scheduling, nurse rostering optimisation, processor allocation, maintenance scheduling (for example, of an electric distribution network), scheduling of multiple workflows to constrained heterogeneous resources, multidimensional knapsack problem, VLSI design, clustering of gene expression profiles, feature/gene selection, parameter determination for hardware fault injection, and multi-class, multi-objective feature selection.

Recent activities in memetic algorithms

IEEE Workshop on Memetic Algorithms (WOMA 2009).
Program chairs: Jim Smith, University of the West of England, U.K.; Yew-Soon Ong, Nanyang Technological University, Singapore; Steven Gustafson, University of Nottingham, U.K.; Meng Hiot Lim, Nanyang Technological University, Singapore; Natalio Krasnogor, University of Nottingham, U.K.
Memetic Computing Journal, first issue appeared in January 2009.
2008 IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, Special Session on Memetic Algorithms.
Special Issue on 'Emerging Trends in Soft Computing - Memetic Algorithm' Archived 2011-09-27 at the Wayback Machine, Soft Computing Journal, Completed & In Press, 2008.
IEEE Computational Intelligence Society Emergent Technologies Task Force on Memetic Computing Archived 2011-09-27 at the Wayback Machine
IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, Special Session on Memetic Algorithms.
'Memetic Computing' by Thomson Scientific's Essential Science Indicators as an Emerging Front Research Area.
Special Issue on Memetic Algorithms, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 37, No. 1, February 2007.
Recent Advances in Memetic Algorithms, Series: Studies in Fuzziness and Soft Computing, Vol. 166, ISBN 978-3-540-22904-9, 2005.
Special Issue on Memetic Algorithms, Evolutionary Computation Fall 2004, Vol. 12, No. 3: v-vi.