In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. In other words, it should act as a lower bound. It is related to the concept of consistent heuristics. While all consistent heuristics are admissible, not all admissible heuristics are consistent.

Search algorithms
An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n is the current node) is

f(n) = g(n) + h(n)

where
f(n) = the evaluation function,
g(n) = the cost from the start node to the current node,
h(n) = the estimated cost from the current node to the goal.
h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook the optimal solution to a search problem due to an overestimation in f(n).

Formulation
n is a node,
h is a heuristic,
h(n) is the cost indicated by h to reach a goal from n,
h*(n) is the optimal cost to reach a goal from n.
h(n) is admissible if, for all n,

h(n) ≤ h*(n).

Construction
An admissible heuristic can be derived from a relaxed version of the problem, from information in pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.

Examples
Two different examples of admissible heuristics apply to the fifteen puzzle problem: the Hamming distance and the Manhattan distance. The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible, since the total number of moves to order the tiles correctly is at least the number of misplaced tiles (each tile not in place must be moved at least once). The cost (number of moves) to the goal (an ordered puzzle) is at least the Hamming distance of the puzzle.
The Manhattan distance of a puzzle is defined as

h(n) = Σ over all tiles of distance(tile, correct position).

Consider the puzzle below in which the player wishes to move each tile such that the numbers are ordered. The Manhattan distance is an admissible heuristic in this case because every tile will have to be moved at least the number of spots in between itself and its correct position. The subscripts show the Manhattan distance for each tile.
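The Manhattan-distance heuristic is simple to compute. The following is a minimal sketch in C, assuming a 15-puzzle state stored as a 16-element array in row-major order, with 0 denoting the blank (which is not counted) and tile v belonging at cell v − 1 in the goal state; the function name and representation are illustrative, not taken from any particular library.

#include <stdlib.h>

/* Admissible Manhattan-distance heuristic for the 15-puzzle.
 * board[i] is the tile at cell i (row-major, 4x4); 0 is the blank. */
int manhattan_heuristic(const int board[16])
{
    int h = 0;
    for (int i = 0; i < 16; i++) {
        int v = board[i];
        if (v == 0)
            continue;                  /* the blank does not contribute */
        int goal = v - 1;              /* assumed goal cell of tile v */
        h += abs(i / 4 - goal / 4)     /* row distance */
           + abs(i % 4 - goal % 4);    /* column distance */
    }
    return h;
}

Each move shifts a single tile by one cell, so no tile can reach its goal cell in fewer moves than its Manhattan distance; the sum therefore never overestimates the true cost, which is exactly the admissibility condition h(n) ≤ h*(n).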
The total Manhattan distance for the shown puzzle is:

h(n) = 3 + 1 + 0 + 1 + 2 + 3 + 3 + 4 + 3 + 2 + 4 + 4 + 4 + 1 + 1 = 36

Optimality proof
If an admissible heuristic is used in an algorithm that, per iteration, progresses only the path of lowest evaluation (current cost + heuristic) of several candidate paths, terminates the moment its exploration reaches the goal and, crucially, never closes all optimal paths before terminating (something that is possible with the A* search algorithm if special care isn't taken), then this algorithm can only terminate on an optimal path. To see why, consider the following proof by contradiction:
Assume such an algorithm managed to terminate on a path T with a true cost T_true greater than the optimal path S with true cost S_true. This means that before terminating, the evaluated cost of T was less than or equal to the evaluated cost of S (or else S would have been picked). Denote these evaluated costs T_eval and S_eval respectively. The above can be summarized as follows:

S_true < T_true
T_eval ≤ S_eval

If our heuristic is admissible it follows that at this penultimate step T_eval = T_true, because any increase of the true cost by the heuristic on T would be inadmissible and the heuristic cannot be negative. On the other hand, an admissible heuristic requires that S_eval ≤ S_true, which combined with the above inequalities gives us T_eval < T_true and more specifically T_eval ≠ T_true. As T_eval and T_true cannot be both equal and unequal, our assumption must have been false, and so it must be impossible to terminate on a path more costly than the optimal one.
As an example, let us say we have costs as follows (the cost above/below a node is the heuristic, the cost at an edge is the actual cost):

  0         10        0        100        0
START ------------ O ------------------ GOAL
  |                                       |
 0|                                       |100
  |                                       |
  O ------------ O ---------------------- O
 100        1        100        1        100

So clearly we would start off visiting the top middle node, since the expected total cost, i.e. f(n), is 10 + 0 = 10. Then the goal would be a candidate, with f(n) equal to 10 + 100 + 0 = 110. Then we would clearly pick the bottom nodes one after the other, followed by the updated goal, since they all have f(n) lower than the f(n) of the current goal; their f(n) values are 100, 101, 102, 102. So even though the goal was a candidate, we could not pick it because there were still better paths out there. This way, an admissible heuristic can ensure optimality. However, note that although an admissible heuristic can guarantee final optimality, it is not necessarily efficient.

See also
Consistent heuristic
Heuristic function
Search algorithm
Wikipedia
This is a list of contributors to the mathematical background for general relativity. For ease of readability, the contributions (in brackets) are unlinked but can be found in the contributors' article. B Luigi Bianchi (Bianchi identities, Bianchi groups, differential geometry) C Élie Cartan (curvature computation, early extensions of GTR, Cartan geometries) Elwin Bruno Christoffel (connections, tensor calculus, Riemannian geometry) Clarissa-Marie Claudel (Geometry of photon surfaces) D Tevian Dray (The Geometry of General Relativity) E Luther P. Eisenhart (semi-Riemannian geometries) Frank B. Estabrook (Wahlquist-Estabrook approach to solving PDEs; see also parent list) Leonhard Euler (Euler-Lagrange equation, from which the geodesic equation is obtained) G Carl Friedrich Gauss (curvature, theory of surfaces, intrinsic vs. extrinsic) K Martin Kruskal (inverse scattering transform; see also parent list) L Joseph Louis Lagrange (Lagrangian mechanics, Euler-Lagrange equation) Tullio Levi-Civita (tensor calculus, Riemannian geometry; see also parent list) André Lichnerowicz (tensor calculus, transformation groups) M Alexander Macfarlane (space analysis and Algebra of Physics) Jerrold E. Marsden (linear stability) N Isaac Newton (Newton's identities for characteristic of Einstein tensor) R Gregorio Ricci-Curbastro (Ricci tensor, differential geometry) Georg Bernhard Riemann (Riemannian geometry, Riemann curvature tensor) S Richard Schoen (Yamabe problem; see also parent list) Corrado Segre (Segre classification) W Hugo D. Wahlquist (Wahlquist-Estabrook algorithm; see also parent list) Hermann Weyl (Weyl tensor, gauge theories; see also parent list) Eugene P. Wigner (stabilizers in Lorentz group) See also Contributors to differential geometry Contributors to general relativity
Wikipedia
Omega Chi Epsilon (or ΩΧΕ, sometimes simplified to OXE) is an international honor society for chemical engineering students.

History
The first chapter of Omega Chi Epsilon was formed at the University of Illinois in 1931 by a group of chemical engineering students. These founders were F. C. Howard, A. Garrell Deem, Ethan M. Stifle, and John W. Bertetti; Professors D.B. Keyes and Norman Krase supported the students in their efforts. The Beta chapter was formed at Iowa State University in 1932. The society grew slowly at first. Baird's Manual indicates there were six chapters by 1957, of which three were inactive. However, interest was revived in the 1960s, allowing a sustained growth that has continued to the present day. There are approximately eighty active chapters of the society as of 2021. Omega Chi Epsilon amended its constitution to permit women to become members as of 1966. The organization became a member of the Association of College Honor Societies in 1967.

Symbols
The society's name comes from its motto "Ode Chrototos Eggegramai" or "In this Society, professionalism is engraved in our minds". The Greek letters ΩΧΕ were chosen to stand for "Order of Chemical Engineers". The society's official seal is made of two concentric circles, bearing the words "Omega Chi Epsilon" at the top center and the words "Founded, 1931" at the bottom center. The letters of the society appear in the center of the seal. The society's colors are black, white, and maroon. The society's badge is a black Maltese cross background, on which is superimposed a circular maroon crest. The crest bears the letters ΩΧΕ on a white band passing across the horizontal midline. Above the white band are two crossed retorts rendered in gold. Below the white band are a gold integral sign and a lightning bolt. These symbols are noted to represent the roles of chemistry, mathematics, and physics in chemical engineering.

Activities
Chapter traditions of service to their chemical engineering departments commonly prevail rather than broader, national traditions.

Membership
Membership is limited to chemical engineering juniors, seniors, and graduate students. Associate membership may be offered to professors or other members of the staff of institutions within the field.

Chapters
Omega Chi Epsilon has chartered 80 chapters at colleges and universities in the United States, Qatar, and the United Arab Emirates.

Governance
The Society's annual meeting is held at the same time and place as the annual meeting of the American Institute of Chemical Engineers. Governance is vested in a national president, vice president, executive secretary, and treasurer. With the immediate past president, these constitute the Executive Committee. The current national president is Christi Luks of the Missouri University of Science and Technology.

See also
American Institute of Chemical Engineers
Honor society
Honor cord
Professional fraternities and sororities

References

External links
Omega Chi Epsilon homepage
Wikipedia
The distributional learning theory or learning of probability distribution is a framework in computational learning theory. It was proposed by Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, Robert Schapire and Linda Sellie in 1994, and it was inspired by the PAC framework introduced by Leslie Valiant. In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples have been drawn. Because of its generality, this framework has been used in a large variety of different fields like machine learning, approximation algorithms, applied probability and statistics. This article explains the basic definitions, tools and results in this framework from the theory of computation point of view.

Definitions
Let X be the support of the distributions of interest. As in the original work of Kearns et al., if X is finite it can be assumed without loss of generality that X = {0,1}^n, where n is the number of bits needed to represent any y ∈ X. We focus on probability distributions over X. There are two possible representations of a probability distribution D over X.
Probability distribution function (or evaluator): an evaluator E_D for D takes as input any y ∈ X and outputs a real number E_D[y] which denotes the probability of y according to D, i.e. E_D[y] = Pr[Y = y] if Y ∼ D.
Generator: a generator G_D for D takes as input a string of truly random bits y and outputs G_D[y] ∈ X according to the distribution D. A generator can be interpreted as a routine that simulates sampling from the distribution D given a sequence of fair coin tosses.
A distribution D is said to have a polynomial generator (respectively evaluator) if its generator (respectively evaluator) exists and can be computed in polynomial time.
Let C_X be a class of distributions over X, that is, a set such that every D ∈ C_X is a probability distribution with support X. C_X can also be written as C for simplicity.
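To make the generator/evaluator distinction concrete, here is a minimal sketch in C for a single, hypothetical Bernoulli(p) distribution over {0, 1}: the evaluator returns the probability mass at a point, while the generator turns (approximately) fair coin flips into a sample. The function names and the use of rand() as the source of random bits are illustrative assumptions, not part of the framework.

#include <stdlib.h>

/* Evaluator E_D: the probability mass that D = Bernoulli(p) assigns to y in {0,1}. */
double bernoulli_evaluator(int y, double p)
{
    return y ? p : 1.0 - p;
}

/* Generator G_D: maps uniformly random bits to a sample distributed as D.
 * The "truly random bits" are approximated here by rand(). */
int bernoulli_generator(double p)
{
    double u = (double)rand() / ((double)RAND_MAX + 1.0);  /* uniform in [0,1) */
    return u < p ? 1 : 0;
}

Repeated calls to such a generator play the role of the sample oracle GEN(D) used below, while an evaluator is the kind of object an algorithm must output when a class is said to be learnable with an evaluator.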
Before defining learnability, it is necessary to define good approximations of a distribution D. There are several ways to measure the distance between two distributions. The three most common possibilities are the Kullback–Leibler divergence, the total variation distance of probability measures, and the Kolmogorov distance. The strongest of these distances is the Kullback–Leibler divergence and the weakest is the Kolmogorov distance. This means that for any pair of distributions D, D′:

KL-distance(D, D′) ≥ TV-distance(D, D′) ≥ Kolmogorov-distance(D, D′)

Therefore, for example, if D and D′ are close with respect to the Kullback–Leibler divergence then they are also close with respect to all the other distances. The definitions that follow hold for all of these distances, and the symbol d(D, D′) therefore denotes the distance between the distribution D and the distribution D′ using any one of the distances described above. Although learnability of a class of distributions can be defined using any of these distances, applications refer to a specific distance.
The basic input used in order to learn a distribution is a number of samples drawn from this distribution. From the computational point of view the assumption is that such a sample is given in a constant amount of time, so it is like having access to an oracle GEN(D) that returns a sample from the distribution D. Sometimes the interest is, apart from measuring the time complexity, to measure the number of samples that have to be used in order to learn a specific distribution D in a class of distributions C. This quantity is called the sample complexity of the learning algorithm.
To make the problem of distribution learning clearer, consider the related problem of supervised learning. In that framework of statistical learning theory a training set S = {(x_1, y_1), …, (x_n, y_n)} is given, and the goal is to find a target function f : X → Y that minimizes some loss function, e.g. the square loss function. More formally, f = arg min_g ∫ V(y, g(x)) dρ(x, y), where V(·,·) is the loss function, e.g. V(y, z) = (y − z)², and ρ(x, y) is the probability distribution according to which the elements of the training set are sampled. If the conditional probability distribution ρ_x(y) is known, then the target function has the closed form f(x) = ∫_y y dρ_x(y). So the set S is a set of samples from the probability distribution ρ(x, y). The goal of distributional learning theory is then to find ρ given S, which can in turn be used to find the target function f.
Definition of learnability
A class of distributions C is called efficiently learnable if for every ε > 0 and 0 < δ ≤ 1, given access to GEN(D) for an unknown distribution D ∈ C, there exists a polynomial time algorithm A, called a learning algorithm of C, that outputs a generator or an evaluator of a distribution D′ such that

Pr[d(D, D′) ≤ ε] ≥ 1 − δ.

If we know that D′ ∈ C then A is called a proper learning algorithm, otherwise it is called an improper learning algorithm.
In some settings the class of distributions C is a class of well known distributions which can be described by a set of parameters. For instance, C could be the class of all Gaussian distributions N(μ, σ²). In this case the algorithm A should be able to estimate the parameters μ, σ, and A is then called a parameter learning algorithm. Obviously parameter learning for simple distributions is a very well studied field, called statistical estimation, and there is a very long bibliography on different estimators for different kinds of simple known distributions. But distributional learning theory deals with learning classes of distributions that have a more complicated description.

First results
In their seminal work, Kearns et al. deal with the case where A is described in terms of a finite polynomial-sized circuit, and they proved the following for some specific classes of distributions.
OR gate distributions: for this kind of distributions there is no polynomial-sized evaluator, unless #P ⊆ P/poly. On the other hand, this class is efficiently learnable with a generator.
Parity gate distributions: this class is efficiently learnable with both generator and evaluator.
Mixtures of Hamming balls: this class is efficiently learnable with both generator and evaluator.
Probabilistic finite automata: this class is not efficiently learnable with an evaluator under the Noisy Parity Assumption, which is an impossibility assumption in the PAC learning framework.

ε-covers
One very common technique in order to find a learning algorithm for a class of distributions C is to first find a small ε-cover of C.
Definition: A set C_ε is called an ε-cover of C if for every D ∈ C there is a D′ ∈ C_ε such that d(D, D′) ≤ ε. An ε-cover is small if it has polynomial size with respect to the parameters that describe D.
Once there is an efficient procedure that, for every ε > 0, finds a small ε-cover C_ε of C, the only remaining task is to select from C_ε the distribution D′ ∈ C_ε that is closest to the distribution D ∈ C that has to be learned. The problem is that, given D′, D″ ∈ C_ε, it is not trivial to compare d(D, D′) and d(D, D″) in order to decide which one is closest to D, because D is unknown. Therefore, the samples from D have to be used to make these comparisons. Obviously the result of such a comparison always has some probability of error, so the task is similar to finding the minimum in a set of elements using noisy comparisons. There are a lot of classical algorithms for achieving this goal. The most recent one, which achieves the best guarantees, was proposed by Daskalakis and Kamath. This algorithm sets up a fast tournament between the elements of C_ε, where the winner D* of this tournament is an element that is ε-close to D (i.e. d(D*, D) ≤ ε) with probability at least 1 − δ. In order to do so their algorithm uses O(log N / ε²) samples from D and runs in O(N log N / ε²) time, where N = |C_ε|.

Learning sums of random variables
Learning of simple well known distributions is a well studied field and there are a lot of estimators that can be used. One more complicated class of distributions is the distribution of a sum of variables that follow simple distributions. Such learning procedures have a close relation with limit theorems like the central limit theorem, because they tend to examine the same object when the sum tends to an infinite sum. Two recent results, described here, concern learning Poisson binomial distributions and learning sums of independent integer random variables. All the results below hold using the total variation distance as the distance measure.

Learning Poisson binomial distributions
Consider n independent Bernoulli random variables X_1, …, X_n with probabilities of success p_1, …, p_n. A Poisson binomial distribution of order n is the distribution of the sum X = Σ_i X_i. For learning, the class is PBD = {D : D is a Poisson binomial distribution}. The first of the following results deals with the case of improper learning of PBD and the second with the proper learning of PBD.
Theorem. Let D ∈ PBD. Then there is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε³) log(1/δ)) and the running time is Õ((1/ε³) log n log²(1/δ)).
Theorem. Let D ∈ PBD. Then there is an algorithm which, given n, ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds a D′ ∈ PBD such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is Õ((1/ε²) log(1/δ)) and the running time is (1/ε)^{O(log²(1/ε))} · Õ(log n log(1/δ)).
One part of the above results is that the sample complexity of the learning algorithm doesn't depend on n, although the description of D is linear in n. Also the second result is almost optimal with respect to the sample complexity, because there is also a lower bound of O(1/ε²). The proof uses a small ε-cover of PBD that has been produced by Daskalakis and Papadimitriou in order to get this algorithm.

Learning sums of independent integer random variables
Consider n independent random variables X_1, …, X_n, each of which follows an arbitrary distribution with support {0, 1, …, k − 1}. A k-sum of independent integer random variables of order n is the distribution of the sum X = Σ_i X_i. For learning the class k-SIIRV = {D : D is a k-sum of independent integer random variables} there is the following result.
Theorem. Let D ∈ k-SIIRV. Then there is an algorithm which, given n, ε > 0 and access to GEN(D), finds a D′ such that Pr[d(D, D′) ≤ ε] ≥ 1 − δ. The sample complexity of this algorithm is poly(k/ε) and the running time is also poly(k/ε).
Another notable feature of this result is that the sample and time complexity do not depend on n. It is possible to deduce the same independence for the previous section by setting k = 2.

Learning mixtures of Gaussians
Let the random variables X ∼ N(μ_1, Σ_1) and Y ∼ N(μ_2, Σ_2). Define the random variable Z which takes the same value as X with probability w_1 and the same value as Y with probability w_2 = 1 − w_1. Then if F_1 is the density of X and F_2 is the density of Y, the density of Z is F = w_1 F_1 + w_2 F_2. In this case Z is said to follow a mixture of Gaussians. Pearson was the first to introduce the notion of mixtures of Gaussians, in his attempt to explain the probability distribution from which some data that he wanted to analyze had come. After doing a lot of calculations by hand, he finally fitted his data to a mixture of Gaussians. The learning task in this case is to determine the parameters of the mixture w_1, w_2, μ_1, μ_2, Σ_1, Σ_2.
The first attempt to solve this problem was by Dasgupta. In this work Dasgupta assumes that the two means of the Gaussians are far enough from each other, meaning that there is a lower bound on the distance ||μ_1 − μ_2||. Using this assumption Dasgupta, and many scientists after him, were able to learn the parameters of the mixture. The learning procedure starts by clustering the samples into two different clusters, minimizing some metric. Using the assumption that the means of the Gaussians are far away from each other, with high probability the samples in the first cluster correspond to samples from the first Gaussian and the samples in the second cluster to samples from the second one. Once the samples are partitioned, the μ_i, Σ_i can be computed from simple statistical estimators and the w_i by comparing the relative sizes of the clusters. If GM is the set of all mixtures of two Gaussians, using the above procedure theorems like the following can be proved.
Theorem. Let D ∈ GM with ||μ_1 − μ_2|| ≥ c √(n max(λ_max(Σ_1), λ_max(Σ_2))), where c > 1/2 and λ_max(A) is the largest eigenvalue of A. Then there is an algorithm which, given ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds an approximation w′_i, μ′_i, Σ′_i of the parameters such that Pr[||w_i − w′_i|| ≤ ε] ≥ 1 − δ (and correspondingly for μ_i and Σ_i). The sample complexity of this algorithm is M = 2^{O(log²(1/(εδ)))} and the running time is O(M²d + Mdn).
The above result could also be generalized to a k-mixture of Gaussians.
For the case of a mixture of two Gaussians there are learning results without the assumption on the distance between their means, like the following one, which uses the total variation distance as the distance measure.
Theorem. Let F ∈ GM. Then there is an algorithm which, given ε > 0, 0 < δ ≤ 1 and access to GEN(D), finds w′_i, μ′_i, Σ′_i such that if F′ = w′_1 F′_1 + w′_2 F′_2, where F′_i = N(μ′_i, Σ′_i), then Pr[d(F, F′) ≤ ε] ≥ 1 − δ. The sample complexity and the running time of this algorithm is poly(n, 1/ε, 1/δ, 1/w_1, 1/w_2, 1/d(F_1, F_2)).
The distance between F_1 and F_2 doesn't affect the quality of the result of the algorithm, just the sample complexity and the running time.
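The clustering-based procedure described above can be illustrated in one dimension. The following C sketch draws samples from a mixture of two well-separated univariate Gaussians using the Box–Muller transform, splits the samples halfway between the smallest and largest observation, and reads off rough estimates of w_i and μ_i from the two clusters. It is a toy under these stated assumptions (known separation, unit variances), not the algorithm of Dasgupta or any later work.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* One standard normal sample via the Box-Muller transform. */
static double std_normal(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * acos(-1.0) * u2);
}

int main(void)
{
    const double w1 = 0.3, mu1 = -5.0, mu2 = 5.0;   /* true mixture parameters */
    const int n = 100000;
    double *x = malloc(n * sizeof *x);
    if (!x) return 1;

    /* Generator for the mixture: pick a component, then sample from it. */
    for (int i = 0; i < n; i++) {
        double mu = ((double)rand() / RAND_MAX < w1) ? mu1 : mu2;
        x[i] = mu + std_normal();
    }

    /* Crude clustering: split halfway between the extreme samples; this
     * separates the components only because the means are far apart. */
    double lo = x[0], hi = x[0];
    for (int i = 1; i < n; i++) {
        if (x[i] < lo) lo = x[i];
        if (x[i] > hi) hi = x[i];
    }
    double split = (lo + hi) / 2.0, s1 = 0.0, s2 = 0.0;
    int n1 = 0;
    for (int i = 0; i < n; i++) {
        if (x[i] < split) { s1 += x[i]; n1++; } else { s2 += x[i]; }
    }
    printf("w1 ~ %.3f  mu1 ~ %.3f  mu2 ~ %.3f\n",
           (double)n1 / n, s1 / n1, s2 / (n - n1));
    free(x);
    return 0;
}

Without the separation assumption this midpoint split fails, which is one reason the second theorem above requires a different style of argument.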
Wikipedia
Chiral magnetic effect (CME) is the generation of electric current along an external magnetic field induced by chirality imbalance. Fermions are said to be chiral if they keep a definite projection of spin quantum number on momentum. The CME is a macroscopic quantum phenomenon present in systems with charged chiral fermions, such as the quark–gluon plasma, or Dirac and Weyl semimetals. The CME is a consequence of chiral anomaly in quantum field theory; unlike conventional superconductivity or superfluidity, it does not require a spontaneous symmetry breaking. The chiral magnetic current is non-dissipative, because it is topologically protected: the imbalance between the densities of left-handed and right-handed chiral fermions is linked to the topology of fields in gauge theory by the Atiyah-Singer index theorem. The experimental observation of CME in a Dirac semimetal, zirconium pentatelluride (ZrTe5), was reported in 2014 by a group from Brookhaven National Laboratory and Stony Brook University. The material showed a conductivity increase in the Lorentz force-free configuration of the parallel magnetic and electric fields. In 2015, the STAR detector at Brookhaven's Relativistic Heavy Ion Collider and ALICE at CERN presented experimental evidence for the existence of CME in the quark–gluon plasma. See also Euler–Heisenberg Lagrangian Chiral anomaly
Wikipedia
In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of simple continued fractions. In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators q_n of the convergents of the continued fraction expansions of almost all real numbers satisfy

lim_{n→∞} q_n^{1/n} = e^β

Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely

e^β = e^{π²/(12 ln 2)} = 3.275822918721811159787681882… (sequence A086702 in the OEIS)

The term "Lévy's constant" is sometimes used to refer to π²/(12 ln 2) (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss–Kuzmin distribution. In particular, the ratio has the asymptotic density function

f(z) = 1/(z(z + 1) ln 2) for z ≥ 1, and zero otherwise.

This gives Lévy's constant as

β = ∫_1^∞ ln z / (z(z + 1) ln 2) dz = ∫_0^1 ln(1/z) / ((z + 1) ln 2) dz = π²/(12 ln 2).

The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem.
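The limiting behaviour can be checked numerically. The C sketch below (a toy under the stated assumptions, not drawn from any reference) expands arbitrary sample values into simple continued fractions via the Gauss map, builds the denominators with the recurrence q_k = a_k q_{k−1} + q_{k−2}, and prints q_n^{1/n} next to e^{π²/(12 ln 2)} ≈ 3.27582; convergence is slow and the depth is limited by double precision, so the agreement is only rough.

#include <math.h>
#include <stdio.h>

/* q_n^(1/n) for the simple continued fraction of x in (0,1), using the
 * denominator recurrence q_k = a_k*q_{k-1} + q_{k-2}. */
static double levy_estimate(double x, int n)
{
    double q_km1 = 1.0, q_km2 = 0.0;     /* q_0 = 1, q_{-1} = 0 */
    int k;
    for (k = 0; k < n; k++) {
        double a = floor(1.0 / x);        /* next partial quotient */
        double q = a * q_km1 + q_km2;
        q_km2 = q_km1;
        q_km1 = q;
        x = 1.0 / x - a;                  /* Gauss map T(x) */
        if (x <= 0.0) { k++; break; }     /* expansion terminated (rational x) */
    }
    return pow(q_km1, 1.0 / k);
}

int main(void)
{
    double pi = acos(-1.0);
    printf("e^(pi^2 / (12 ln 2)) = %.5f\n", exp(pi * pi / (12.0 * log(2.0))));
    printf("x = 0.7137919129:  %.5f\n", levy_estimate(0.7137919129, 20));
    printf("x = 0.2938712831:  %.5f\n", levy_estimate(0.2938712831, 20));
    return 0;
}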
Proof
The proof assumes basic properties of continued fractions. Let T : x ↦ 1/x mod 1 be the Gauss map.
Lemma. |ln x − ln(p_n(x)/q_n(x))| ≤ 1/q_n(x) ≤ 1/F_n, where F_n is the n-th Fibonacci number.
Proof. Define the function f(t) = ln((p_n + p_{n−1}t)/(q_n + q_{n−1}t)). The quantity to estimate is then |f(T^n x) − f(0)|. By the mean value theorem, for any t ∈ [0, 1],

|f(t) − f(0)| ≤ max_{t∈[0,1]} |f′(t)| = max_{t∈[0,1]} 1/((p_n + t p_{n−1})(q_n + t q_{n−1})) = 1/(p_n q_n) ≤ 1/q_n.

The denominator sequence q_0, q_1, q_2, … satisfies a recurrence relation, and so it is at least as large as the Fibonacci sequence 1, 1, 2, …

Ergodic argument
Since p_n(x) = q_{n−1}(Tx), and p_1 = 1, we have

−ln q_n = ln(p_n(x)/q_n(x)) + ln(p_{n−1}(Tx)/q_{n−1}(Tx)) + ⋯ + ln(p_1(T^{n−1}x)/q_1(T^{n−1}x)).

By the lemma,

−ln q_n = ln x + ln Tx + ⋯ + ln T^{n−1}x + δ,

where |δ| ≤ Σ_{k=1}^∞ 1/F_k, which is finite (the reciprocal Fibonacci constant). By Birkhoff's ergodic theorem, the limit lim_{n→∞} (ln q_n)/n converges almost surely to

∫_0^1 (−ln t) ρ(t) dt = π²/(12 ln 2),

where ρ(t) = 1/((1 + t) ln 2) is the density of the Gauss distribution.

See also
Khinchin's constant

References

Further reading
Khinchin, A. Ya. (14 May 1997). Continued Fractions. Dover. ISBN 0-486-69630-8.

External links
Weisstein, Eric W. "Lévy Constant". MathWorld.
OEIS sequence A086702 (Decimal expansion of Lévy's constant)
Wikipedia
The C programming language has a set of functions implementing operations on strings (character strings and byte strings) in its standard library. Various operations, such as copying, concatenation, tokenization and searching are supported. For character strings, the standard library uses the convention that strings are null-terminated: a string of n characters is represented as an array of n + 1 elements, the last of which is a "NUL character" with numeric value 0. The only support for strings in the programming language proper is that the compiler translates quoted string constants into null-terminated strings. Definitions A string is defined as a contiguous sequence of code units terminated by the first zero code unit (often called the NUL code unit). This means a string cannot contain the zero code unit, as the first one seen marks the end of the string. The length of a string is the number of code units before the zero code unit. The memory occupied by a string is always one more code unit than the length, as space is needed to store the zero terminator. Generally, the term string means a string where the code unit is of type char, which is exactly 8 bits on all modern machines. C90 defines wide strings which use a code unit of type wchar_t, which is 16 or 32 bits on modern machines. This was intended for Unicode but it is increasingly common to use UTF-8 in normal strings for Unicode instead. Strings are passed to functions by passing a pointer to the first code unit. Since char * and wchar_t * are different types, the functions that process wide strings are different than the ones processing normal strings and have different names. String literals ("text" in the C source code) are converted to arrays during compilation. The result is an array of code units containing all the characters plus a trailing zero code unit. In C90 L"text" produces a wide string. A string literal can contain the zero code unit (one way is to put \0 into the source), but this will cause the string to end at that point. The rest of the literal will be placed in memory (with another zero code unit added to the end) but it is impossible to know those code units were translated from the string literal, therefore such source code is not a string literal. Character encodings Each string ends at the first occurrence of the zero code unit of the appropriate kind (char or wchar_t). Consequently, a byte string (char*) can contain non-NUL characters in ASCII or any ASCII extension, but not characters in encodings such as UTF-16 (even though a 16-bit code unit might be nonzero, its high or low byte might be zero). The encodings that can be stored in wide strings are defined by the width of wchar_t. In most implementations, wchar_t is at least 16 bits, and so all 16-bit encodings, such as UCS-2, can be stored. If wchar_t is 32-bits, then 32-bit encodings, such as UTF-32, can be stored. (The standard requires a "type that holds any wide character", which on Windows no longer holds true since the UCS-2 to UTF-16 shift. This was recognized as a defect in the standard and fixed in C++.) C++11 and C11 add two types with explicit widths char16_t and char32_t. Variable-width encodings can be used in both byte strings and wide strings. String length and offsets are measured in bytes or wchar_t, not in "characters", which can be confusing to beginning programmers. UTF-8 and Shift JIS are often used in C byte strings, while UTF-16 is often used in C wide strings when wchar_t is 16 bits. 
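As a small illustration of these definitions, the following sketch (a hosted C program; the strings are arbitrary examples) shows that string length is measured up to the first zero code unit and counted in code units (bytes for char), not in displayed characters; the second string assumes the source and execution character sets are UTF-8.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char s[] = "abc\0def";   /* an 8-element array; the embedded NUL ends the string */
    printf("strlen = %zu, sizeof = %zu\n", strlen(s), sizeof s);  /* prints 3 and 8 */

    char u[] = "héllo";      /* if the execution charset is UTF-8, 'é' occupies 2 bytes */
    printf("bytes = %zu\n", strlen(u));  /* prints 6, not 5: lengths are in bytes */
    return 0;
}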
Truncating strings with variable-width characters using functions like strncpy can produce invalid sequences at the end of the string. This can be unsafe if the truncated parts are interpreted by code that assumes the input is valid. Support for Unicode literals such as char foo[512] = "φωωβαρ"; (UTF-8) or wchar_t foo[512] = L"φωωβαρ"; (UTF-16 or UTF-32, depending on wchar_t) is implementation defined, and may require that the source code be in the same encoding, especially for char where compilers might just copy whatever is between the quotes. Some compilers or editors will require entering all non-ASCII characters as \xNN sequences for each byte of UTF-8, and/or \uNNNN for each word of UTF-16. Since C11 (and C++11), a new literal prefix u8 is available that guarantees UTF-8 for a bytestring literal, as in char foo[512] = u8"φωωβαρ";. Since C++20 and C23, a char8_t type was added that is meant to store UTF-8 characters, and the types of u8-prefixed character and string literals were changed to char8_t and char8_t[] respectively. Features Terminology In historical documentation the term "character" was often used instead of "byte" for C strings, which leads many to believe that these functions somehow do not work for UTF-8. In fact all lengths are defined as being in bytes and this is true in all implementations, and these functions work as well with UTF-8 as with single-byte encodings. The BSD documentation has been fixed to make this clear, but POSIX, Linux, and Windows documentation still uses "character" in many places where "byte" or "wchar_t" is the correct term. Functions for handling memory buffers can process sequences of bytes that include a null byte as part of the data. Names of these functions typically start with mem, as opposed to the str prefix. Headers Most of the functions that operate on C strings are declared in the string.h header (cstring in C++), while functions that operate on C wide strings are declared in the wchar.h header (cwchar in C++). These headers also contain declarations of functions used for handling memory buffers; the name is thus something of a misnomer. Functions declared in string.h are extremely popular since, as a part of the C standard library, they are guaranteed to work on any platform which supports C. However, some security issues exist with these functions, such as potential buffer overflows when not used carefully and properly, causing programmers to prefer safer and possibly less portable variants, out of which some popular ones are listed below. Some of these functions also violate const-correctness by accepting a const string pointer and returning a non-const pointer within the string. To correct this, some have been separated into two overloaded functions in the C++ version of the standard library. Constants and types Functions Multibyte functions These functions all need an mbstate_t object, originally kept in static memory (making the functions not thread-safe) and, in later additions, maintained by the caller. This was originally intended to track shift states in the mb encodings, but modern ones such as UTF-8 do not need this. However these functions were designed on the assumption that the wc encoding is not a variable-width encoding and thus are designed to deal with exactly one wchar_t at a time, passing it by value rather than using a string pointer.
As UTF-16 is a variable-width encoding, the mbstate_t has been reused to keep track of surrogate pairs in the wide encoding, though the caller must still detect and call mbtowc twice for a single character. Later additions to the standard admit that the only conversion programmers are interested in is between UTF-8 and UTF-16 and directly provide this. Numeric conversions The C standard library contains several functions for numeric conversions. The functions that deal with byte strings are defined in the stdlib.h header (cstdlib header in C++). The functions that deal with wide strings are defined in the wchar.h header (cwchar header in C++). The functions strchr, bsearch, strpbrk, strrchr, strstr, memchr and their wide counterparts are not const-correct, since they accept a const string pointer and return a non-const pointer within the string. This has been fixed in C23. Also, since the Normative Amendment 1 (C95), atoxx functions are considered subsumed by strtoxxx functions, for which reason neither C95 nor any later standard provides wide-character versions of these functions. The argument against atoxx is that they do not differentiate between an error and a 0. Popular extensions Replacements Despite the well-established need to replace strcat and strcpy with functions that do not allow buffer overflows, no accepted standard has arisen. This is partly due to the mistaken belief by many C programmers that strncat and strncpy have the desired behavior; however, neither function was designed for this (they were intended to manipulate null-padded fixed-size string buffers, a data format less commonly used in modern software), and the behavior and arguments are non-intuitive and often written incorrectly even by expert programmers. The most popular replacements are the strlcat and strlcpy functions, which appeared in OpenBSD 2.4 in December 1998. These functions always write one NUL to the destination buffer, truncating the result if necessary, and return the size of the buffer that would be needed, which allows detection of the truncation and provides a size for creating a new buffer that will not truncate. For a long time they were not included in the GNU C library (used by software on Linux), on the basis of allegedly being inefficient, encouraging the use of C strings (instead of some superior alternative form of string), and hiding other potential errors. Even while glibc lacked support, strlcat and strlcpy were implemented in a number of other C libraries, including ones for OpenBSD, FreeBSD, NetBSD, Solaris, OS X, and QNX, as well as in alternative C libraries for Linux, such as libbsd, introduced in 2008, and musl, introduced in 2011, and the source code has been added directly to other projects such as SDL, GLib, ffmpeg, rsync, and even internally in the Linux kernel. This eventually changed: the glibc FAQ notes that as of glibc 2.38 the code has been committed and thereby added. These functions were standardized as part of POSIX.1-2024; the Austin Group Defect Tracker ID 986 tracked some discussion about such plans for POSIX. Sometimes memcpy or memmove are used instead, as they may be more efficient than strcpy because they do not repeatedly check for NUL (this is less true on modern processors). Since they need a buffer length as a parameter, correct setting of this parameter can avoid buffer overflows. As part of its 2004 Security Development Lifecycle, Microsoft introduced a family of "secure" functions including strcpy_s and strcat_s (along with many others).
These functions were standardized with some minor changes as part of the optional C11 (Annex K) proposed by ISO/IEC WDTR 24731. These functions perform various checks including whether the string is too long to fit in the buffer. If the checks fail, a user-specified "runtime-constraint handler" function is called, which usually aborts the program. These functions attracted considerable criticism because initially they were implemented only on Windows and at the same time warning messages started to be produced by Microsoft Visual C++ suggesting use of these functions instead of standard ones. This has been speculated by some to be an attempt by Microsoft to lock developers into its platform. Experience with these functions has shown significant problems with their adoption and errors in usage, so the removal of Annex K was proposed for the next revision of the C standard. Usage of memset_s has been suggested as a way to avoid unwanted compiler optimizations. See also C syntax § Strings – source code syntax, including backslash escape sequences String functions Perl Compatible Regular Expressions (PCRE) Notes References External links Fast memcpy in C, multiple C coding examples to target different types of CPU instruction architectures
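As an illustration of why the strlcpy interface described above is considered easier to use correctly than strncpy, here is a minimal sketch assuming a platform that provides strlcpy (the BSDs, musl, glibc 2.38 and later, or a POSIX.1-2024 system); the buffer size and strings are arbitrary.

#include <stdio.h>
#include <string.h>   /* strlcpy is declared here on platforms that provide it */

int main(void)
{
    char dst[8];
    size_t needed = strlcpy(dst, "a string that will not fit", sizeof dst);

    /* dst is always NUL-terminated; the return value is the length of the
     * source, so truncation can be detected with a single comparison. */
    if (needed >= sizeof dst)
        printf("truncated to \"%s\" (%zu bytes would be needed)\n",
               dst, needed + 1);
    return 0;
}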
Wikipedia
In mathematics, an N-topological space is a set equipped with N arbitrary topologies. If τ1, τ2, ..., τN are N topologies defined on a nonempty set X, then the N-topological space is denoted by (X, τ1, τ2, ..., τN). For N = 1, the structure is simply a topological space. For N = 2, the structure becomes a bitopological space, introduced by J. C. Kelly.

Example
Let X = {x1, x2, ..., xn} be any finite set and let Ar = {x1, x2, ..., xr}. Then the collection τ1 = {φ, A1, A2, ..., An = X} is a topology on X. If τ1, τ2, ..., τm are m such topologies (chain topologies) defined on X, then the structure (X, τ1, τ2, ..., τm) is an m-topological space.
Wikipedia
Pieter Adriaan Flach (born 8 April 1961, Sneek) is a Dutch computer scientist and a Professor of Artificial Intelligence in the Department of Computer Science at the University of Bristol. He is the author of the acclaimed Simply Logical: Intelligent Reasoning by Example (John Wiley, 1994) and Machine Learning: The Art and Science of Algorithms that Make Sense of Data (Cambridge University Press, 2012).

Education
Flach received an MSc in Electrical Engineering from Universiteit Twente in 1987 and a PhD in Computer Science from Tilburg University in 1995.

Research
Flach's research interests are in data mining and machine learning.
Wikipedia
The German Informatics Society (GI) (German: Gesellschaft für Informatik) is a German professional society for computer science, with around 20,000 personal and 250 corporate members. It is the biggest organized representation of its kind in the German-speaking world. History The German Informatics Society was founded in Bonn, Germany, on September 16, 1969. Initially aimed primarily at researchers, it expanded in the mid-1970s to include computer science professionals, and in 1978 it founded its journal Informatik Spektrum to reach this broader audience. The Deutsche Informatik-Akademie in Bonn was founded in 1987 by the German Informatics Society in order to provide seminars and continuing education for computer science professionals. In 1990, the German Informatics Society contributed to the founding of the International Conference and Research Center for Computer Science (renamed since as the Leibniz Center for Informatics) at Dagstuhl; since its founding, Schloss Dagstuhl has become a major center for international academic workshops. In 1983, the German Informatics Society became a member society of the International Federation for Information Processing (IFIP), taking over the role of representing Germany from the Deutsche Arbeitsgemeinschaft für Rechenanlagen. In 1989, it joined the Council of European Professional Informatics Societies. Activities The main activity of the association is to support the professional development of its members in every aspect of the rapidly changing field of informatics. In order to realise this aim the German Informatics Society maintains a large number of committees, special interest groups, and working groups in the field of theory of computation, artificial intelligence, bioinformatics, software engineering, human computer interaction, databases, technical informatics, graphics and information visualisation, business informatics, legal aspects of computing, computer science education, social computing, and computer security. Up to now, the GI runs more than 30 local groups in cooperation with the German chapter of the Association for Computing Machinery. Other important GI activities include raising public awareness of informatics, including its benefits and risks. Lobbying activities have been organised by the office in Berlin since 2013. Additionally, the GI runs programmes designed for young people and women to foster interest in informatics. In addition to the Informatik Spektrum, which is the journal of the society, most of the society's special interest groups maintain their own journals. Overall the society has approximately 40 regular publications, and it sponsors a similar number of conferences and events annually. Many of these conferences have their proceedings published in the GI's book series, Lecture Notes in Informatics, which also publishes Ph.D. thesis abstracts and research monographs. Every two years, the German Informatics Society awards the Konrad Zuse Medal to an outstanding German computer science researcher. It also offers prizes for the best Ph.D. thesis, for computer science education, for practical innovations, and for teams of student competitors. Each year beginning in 2002, the GI has elected a small number of its members as fellows, its highest membership category. Conferences One of the biggest informatics conferences in the German-speaking world is the INFORMATIK. The conference is organised in cooperation with universities, each year in a different location. 
More than 1,000 participants visit workshops and keynotes regarding current challenges in the field of information technology. In addition, several special interest groups organise large meetings with an international reputation, for example the "Software Engineering (SE)", the "Multikonferenz Wirtschaftsinformatik (MKWI)", the "Mensch-Computer-Interaktion (MCI)" and the "Datenbanksysteme für Business, Technologie und Web (BTW)". The Detection of Intrusions and Malware, and Vulnerability Assessment event, designed to serve as a general forum for discussing malware and the vulnerability of computing systems to attacks, is another annual project under the auspices of the organization. Its last conference was held from 6 July to 7 July in the city of Bonn, Germany, and was sponsored by entities such as Google, Rohde & Schwarz, and VMRay. Honorary members The following people are honorary members of the German Informatics Society due to their achievements in the field of informatics. Konrad Zuse (since 1985) Friedrich Ludwig Bauer (since 1987) Wilfried Brauer (since 2000) Günter Hotz (since 2002) Joseph Weizenbaum (since 2003) Gerhard Krüger (since 2007) Heinz Schwärtzel (since 2008) Associated societies Swiss Informatics Society Gesellschaft für Informatik in der Land-, Forst- und Ernährungswirtschaft (GIL) German Chapter of the ACM (GChACM) References External links Official website
Wikipedia
Secret Invasion is an American television miniseries created by Kyle Bradstreet for the streaming service Disney+, based on the 2008 Marvel Comics storyline of the same name. It is the ninth television series in the Marvel Cinematic Universe (MCU) produced by Marvel Studios, sharing continuity with the films of the franchise. It follows Nick Fury and Talos as they uncover a conspiracy by a group of shapeshifting Skrulls to conquer Earth. Bradstreet serves as the head writer, with Ali Selim directing. Samuel L. Jackson and Ben Mendelsohn reprise their respective roles as Fury and Talos from previous MCU media, with Kingsley Ben-Adir, Killian Scott, Samuel Adewunmi, Dermot Mulroney, Richard Dormer, Emilia Clarke, Olivia Colman, Don Cheadle, Charlayne Woodard, Christopher McDonald, and Katie Finneran also starring. Development on the series began by September 2020, with Bradstreet and Jackson attached. The title and premise of the series, along with Mendelsohn's return, were revealed that December. Additional casting occurred throughout March and April 2021, followed by the hiring of Selim and Thomas Bezucha that May to direct the series. Filming began in London by September 2021 and wrapped in late April 2022, with additional filming around England. During production, much of the series' creative team was replaced, with Brian Tucker taking over as writer from Bradstreet and Bezucha exiting, and extensive reshoots took place from mid-June to late September 2022. Secret Invasion premiered on June 21, 2023, and ran for six episodes until July 26. It is the first series in Phase Five of the MCU. The series received mixed reviews from critics, who praised Jackson's and Mendelsohn's performances but criticized the writing (particularly that of the finale), pacing, and visual effects. Premise Nick Fury works with Talos, a shapeshifting alien Skrull, to uncover a conspiracy by a group of renegade Skrulls led by Gravik who plan to gain control of Earth by posing as different humans around the world. Cast and characters Samuel L. Jackson as Nick Fury:The former director of S.H.I.E.L.D. who has been working with the Skrulls in space for years before returning to Earth. Fury has been away from Earth so long in part because he is worn out and uncertain of his place in the world following the events of Avengers: Infinity War (2018) and Avengers: Endgame (2019). Jackson said the series would delve deeper into Fury's past and future, and allowed him to "explore something other than the badassery of who Nick Fury is" including the toll of his job on his personal life. He continued that Secret Invasion allowed him to work out some new elements of the character that his previous appearances in the MCU had not. Executive producer Jonathan Schwartz added that "sins from [Fury's] past start to haunt him once again" given the things he had to do to protect Earth in the past have ramifications. Ben Mendelsohn as Talos: The former leader of the Skrulls and an ally of Fury. Mendelsohn noted how Talos, along with Fury, have "lost their way" and are "up against it" since he was last seen in Captain Marvel (2019). Kingsley Ben-Adir as Gravik:The leader of a group of rebel Skrulls who has broken away from Talos and believe the best way to help their kind is to infiltrate Earth for the resources they need. He sets up his operation in a decommissioned radioactive site in Russia, and has a hatred for most of the Skrulls working for him, believing them to be idiots. 
Ben-Adir worked to find the proper level of hatred to portray in each scene, since he felt Gravik trusts no one and hates everyone but still needs the other Skrulls to accomplish his goals. Director Ali Selim said Gravik was not a terrorist or "just a bad guy with a bomb" and the series would explore the reasons for his actions. Lucas Persaud portrays Gravik as a child. Killian Scott as Pagon: A rebel Skrull and Gravik's second-in-command. Ben-Adir said Gravik sees that Pagon has ambition and wants to be a leader, but "he doesn't have the guts to take it". Scott also portrays the human counterpart whose form Pagon took in the final episode. Samuel Adewunmi as Beto: A rebel Skrull recruit. Dermot Mulroney as Ritson: The president of the United States. Richard Dormer as Prescod: A former S.H.I.E.L.D. agent who uncovered the Skrulls' plan to invade Earth. Emilia Clarke as G'iah:Talos's daughter who works for Gravik. Clarke described G'iah as having "a kind of punk feeling" to her, adding that being a refugee had "hardened her". She resents Fury since he has not been able to deliver on the promises he made in Captain Marvel to find the Skrulls a new home. Clarke worked with Mendelsohn to create G'iah and Talos's backstory to "fill in a lot of the gaps", with Clarke believing G'iah would have had an "upbringing that was regimented with training" since the Skrulls are a warring species, that would have led to a "fierce need for her own independence" while judging some of Talos's choices. G'iah was previously portrayed as a child in Captain Marvel by Auden L. Ophuls and Harriet L. Ophuls. Olivia Colman as Sonya Falsworth:A high-ranking MI6 agent and an old ally of Fury's who looks to protect the United Kingdom's national security interests during the invasion. Described as "a more antagonistic presence" in the series, Schwartz said Falsworth could be working either with or against Fury depending on their desired goals, with Jackson calling the two "frenemies". Jackson added that Colman's portrayal of Falsworth changed her dynamic with Fury, since she played the character "cozy and fuzzy" rather than contentious, which allowed for the two to "work together in a harmony that's more satisfying to the story and our backstory than any other way". Don Cheadle as Raava / James "Rhodey" Rhodes:A female Skrull posing as Rhodes (an officer in the U.S. Air Force and an Avenger) who serves as an envoy and advisor to President Ritson. Nisha Aaliya portrays Raava in her Skrull form. Jackson said Rhodes would be a "political animal" in the series rather than using the War Machine armor. Cheadle noted that this made Rhodes more of an adversary than in his previous MCU appearances, with the character caught between being "a military man following the chain of command" and someone who can go "outside the box". Once Fury becomes aware that Rhodes has been replaced by a Skrull, Cheadle felt the two enter "sort of a cat-and-mouse game" with each having compromising info on the other. The real Rhodes is ultimately released from his Skrull containment pod at the end of the series. Charlayne Woodard as Varra / Priscilla Davis: A Skrull who is the wife of Nick Fury and has a history with Gravik. Varra took the likeness of Dr. Priscilla Davis who was suffering from a congenital heart defect. Christopher McDonald as Chris Stearns: A Skrull posing as an FXN news host and member of the Skrull council. The character was based on real-life newscaster Tucker Carlson and the Fox News channel. 
Katie Finneran as Rosa Dalton: A scientist replaced by a Skrull that is researching various DNA samples for the Harvest project. Reprising their MCU roles are Cobie Smulders as Maria Hill, Martin Freeman as Everett K. Ross, and O-T Fagbenle as Rick Mason. The first episode reveals that Ross had been replaced by a Skrull infiltrator, and also features Hill's death. Smulders had been aware of the character's death during her initial discussions to join the series. Tony Curran appears as Derrik Weatherby, the director of MI6 who was replaced by a Skrull. Curran previously portrayed Bor in Thor: The Dark World (2013) and Finn Cooley in the second season of Daredevil (2016). Also appearing are Ben Peel as Brogan, a rebel Skrull who is tortured by Falsworth; Seeta Indrani as Shirley Sagar, Christopher Goh as Jack Hyuk-Bin, Giampiero Judica as NATO Secretary General Sergio Caspani, and Anna Madeley as the UK prime minister Pamela Lawton, all members of the Skrull Council; Juliet Stevenson as Maria Hill's mother Elizabeth; and Charlotte Baker and Kate Braithwaite as Soren, the wife of Talos and mother of G'iah who was killed by Gravik; Baker portrays Soren's human disguise while Braithwaite portrays her Skrull appearance. Soren was previously portrayed by Sharon Blynn in Captain Marvel and Spider-Man: Far From Home (2019). Episodes Production Development In September 2020, Kyle Bradstreet was revealed to be developing a television series for the streaming service Disney+ centered on the Marvel Comics character Nick Fury. The character had previously been one of ten properties announced in September 2005 by Marvel Entertainment chairman and CEO Avi Arad as being developed for film by the newly formed studio Marvel Studios, after Marvel received financing to produce the slate of films to be distributed by Paramount Pictures; Andrew W. Marlowe was hired to write a script for a Nick Fury film in April 2006. In April 2019, after Samuel L. Jackson had portrayed Nick Fury in ten Marvel Cinematic Universe (MCU) films as well as the Marvel Television series Agents of S.H.I.E.L.D., Richard Newby from The Hollywood Reporter felt it was time the character received his own film, calling the character "the MCU's most powerful asset yet to be fully untapped". Jackson was attached to reprise his role in Bradstreet's series, with the latter writing and serving as executive producer. In December 2020, Marvel Studios President Kevin Feige officially announced a new series titled Secret Invasion, with Jackson co-starring with Ben Mendelsohn in his MCU role of Talos. The series is based on the 2008–09 comic book storyline of the same name, with Feige describing it as a "crossover event series" that would tie-in with future MCU films; the series' official premise further described it as a crossover event series. Marvel Studios chose to make a Secret Invasion series instead of a film because it allowed them to do something different than they had done before. Bradstreet had worked on scripts for the series for about a year, before he was replaced with Brian Tucker. Directors were being lined up by April 2021. Thomas Bezucha and Ali Selim were attached to direct the series a month later, with each expected to direct three episodes and work on the story. However, Bezucha left the series during production because of scheduling conflicts with reshoots, and Selim ultimately directed all six episodes. 
The series reportedly went through multiple issues during pre-production, which necessitated Marvel Studios' executive Jonathan Schwartz becoming more involved with the series to get it "back on track" as it had fallen behind schedule and risked some actors becoming unavailable due to other commitments. The episodes were described as being an hour-long each, with the series ultimately totaling approximately 4.5 hours. Marvel Studios' Feige, Louis D'Esposito, Victoria Alonso, Brad Winderbaum, and Schwartz served as executive producers on the series alongside Jackson, Selim, Bradstreet, and Tucker. The budget for the series was $211.6 million. This was noted for being a large budget compared to the content in the series, which did not use large action set pieces or extensive visual effects. Extensive reshoots were believed to partially be the reason for the large budget. Writing Bradstreet, Tucker, Brant Englestein, Roxanne Paredes, and Michael Bhim served as writers on the series. Tucker received the majority of writing credits on the episodes. Feige said the series would not be looking to match the scope of the Secret Invasion comic book storyline, in terms of the number of characters featured or the impact on the wider universe, considering the comic book featured more characters than the crossover film Avengers: Endgame (2019). Instead, he described Secret Invasion as a showcase for Jackson and Mendelsohn that would explore the political paranoia elements of the Secret Invasion comic series "that was great with the twists and turns that that took". The creatives were also inspired by the Cold War-era espionage novels of John le Carré, the television series Homeland (2011–2020) and The Americans (2013–2018), and the film The Third Man (1949). Selim said the series transitions at times between espionage noir and a Western, highlighting the film The Searchers (1956) as a Western inspiration. Feige said the series would serve as a present-day follow-up to the 1990s story of Captain Marvel (2019), alongside that film's sequel The Marvels (2023), but was tonally different from the films. Jackson said the series would uncover some of the things that happened during the Blip. Cobie Smulders described the series as "a very grounded, on-this-earth drama" that was "dealing with real human issues and dealing with trust". Discussing the Skrulls, shapeshifting green-skinned extraterrestrials who can perfectly simulate any human being at will, Jackson felt their inclusion introduced "a political aspect" in that their ability to shape-shift makes people question who can be trusted and "What happens when people get afraid and don't understand other people? You can't tell who's innocent and who's guilty in this particular instance." The first episode reveals that Everett K. Ross had been replaced by a Skrull infiltrator, while the fourth episode reveals that James "Rhodey" Rhodes has been replaced by the Skrull Raava. Feige explained that the creators chose Rhodes to be a Skrull because they were looking for an established MCU character viewers would not be expecting to be a Skrull, and to introduce a new experience for viewers rewatching his past MCU appearances and questioning if he was a Skrull during them. They approached actor Don Cheadle during early development of the series about this, who liked the opportunity to be able to "play with different sides of Rhodey that we haven't seen before". 
It is revealed that Rhodes had been replaced by a Skrull "for a long time" and is seen wearing a hospital gown when being released from his containment pod. This was interpreted by some to mean he had been replaced after the events of Captain America: Civil War (2016), a theory which Selim acknowledged, though he would not confirm this specifically, saying "does it have to be definitive, or is it more fun for the audience to go back and revisit every moment" since Civil War to question whether Rhodes was a Skrull or not. Casting Jackson was expected to reprise his role in the series with the reveal of its development in September 2020. When the series was officially announced that December, Feige confirmed Jackson's casting and announced that Mendelsohn would co-star. Kingsley Ben-Adir was cast as the Skrull Gravik, the "lead villain" role, in March 2021, and the following month, Olivia Colman was cast as Sonya Falsworth, along with Emilia Clarke as Talos's daughter G'iah, and Killian Scott as Gravik's second-in-command Pagon. In May 2021, Christopher McDonald joined the cast as newscaster Chris Stearns, a newly created character rather than one from the comics, who had the potential to appear in other MCU series and films. Carmen Ejogo had joined the cast by November 2021 (although she ultimately did not appear in the series), and the next month, Smulders was set to reprise her MCU role as Maria Hill. In February 2022, set photos revealed that Don Cheadle would appear in his MCU role of James "Rhodey" Rhodes, along with Dermot Mulroney as United States President Ritson. The following month, Jackson confirmed that Martin Freeman and Cheadle would appear in the series, with Freeman reprising his MCU role as Everett K. Ross. In September 2022, it was revealed that Charlayne Woodard was cast in the series as Fury's Skrull wife Priscilla. Samuel Adewunmi and Katie Finneran were revealed as part of the cast in March 2023, with Adewunmi as the Skrull Beto and Finneran as the scientist Rosa Dalton. Richard Dormer appears as Agent Prescod, while O-T Fagbenle reprises his Black Widow (2021) role as Rick Mason. Design Sets and costumes Frank Walsh serves as production designer, while Claire Anderson serves as costume designer. In Secret Invasion, Fury does not wear his signature eyepatch, which Jackson noted was a character choice. He explained, "The patch is part of who the strong Nick Fury was. It's part of his vulnerability now. You can look at it and see he's not this perfectly indestructible person. He doesn't feel like that guy." Title sequence The opening title sequence was created by Method Studios using generative artificial intelligence, which prompted significant backlash online. Some commentators felt this was particularly poor timing given the series was released during the 2023 Writers Guild of America strike for which the use of artificial intelligence over real people was a key issue, with language about protecting writers against the use of AI in the writing process. Method Studios issued a statement in response to criticism stating that none of their artists had been replaced with artificial intelligence for the sequence and that the technology, both existing and custom-built for this series, was just one tool that their team used to achieve a specific final look. 
The statement elaborated that many elements in the sequence were created using traditional tools and techniques, and the artificial intelligence technology was just used to add an "otherworldly and alien look" which the creative team felt "perfectly aligned with the project's overall theme and the desired aesthetic". Storyboard artists and animators on the series expressed disappointment in the opening sequence being generated by AI. Filming Filming had begun by September 1, 2021, in London, under the working title Jambalaya, with Selim directing the series, and Remi Adefarasin serving as cinematographer. Filming was previously expected to begin in mid-August 2021. Jackson began filming his scenes on October 14, after already working on The Marvels which was filming in London at the same time. Filming occurred in West Yorkshire, England, including Leeds on January 22, Huddersfield on January 24, and in Halifax at Piece Hall from January 24 to 31, 2022. Filming occurred at the Liverpool Street station on February 28, 2022. Soundstage work occurred at Pinewood Studios on seven of its stages, as well as Hallmark House, and Versa Studios. Filming wrapped on April 25, 2022. Additional filming occurred in London's Brixton neighborhood, and was also expected to occur across Europe. In mid-2022, factions of the crew and the series' creative leaders experienced disagreements which "debilitated" the production. Jackson revealed in mid-June 2022 that he would return to London in August to work on reshoots for Secret Invasion, after doing the same for The Marvels. McDonald was returning to London by the end of July for the reshoots, which he said were to make the series "better" and to go "much deeper than before". He also indicated that a new writer was brought on to the production to work on the additional material. Jackson completed his reshoots by August 12, 2022, while Clarke filmed scenes in London at the end of September. By early September, many crew members on the series had been replaced, while co-executive producer Chris Gary, the Marvel Studios Production and Development executive overseeing the series, was reassigned and expected to leave the studio when his contract expired at the end of 2023. Jonathan Schwartz, a senior Marvel Studios executive and a member of the Marvel Studios Parliament group, was dispatched to oversee the production. Bezucha also left the series during this time due to new scheduling conflicts with the reshoots. Jackson said because Selim became the director of all the series' episodes, it provided consistency for the cast and crew with the ideas and concepts and allowed Selim to make the series his way. Eben Bolter served as the cinematographer during additional photography which lasted for four months. Post-production Pete Beaudreau, Melissa Lawson Cheung, Drew Kilcoin, and James Stanger serve as editors, while Georgina Street serves as the visual effects producer and Aharon Bourland as the visual effects supervisor. Visual effects for the series were provided by Digital Domain, FuseFX, Luma Pictures, MARZ, One of Us, Zoic Studios, and Cantina Creative. Music In February 2023, Kris Bowers was revealed to be composing for the series, and was working on the score at that time. The series' main title track, "Nick Fury (Main Title Theme)", was released digitally as a single by Marvel Music and Hollywood Records on June 20. Marketing The first footage of the series debuted on Disney+ Day on November 12, 2021. More footage was shown in July 2022 at San Diego Comic-Con. 
Adam B. Vary of Variety said the footage had an "overall vibe... of paranoia and foreboding", believing the series would fit with the larger "anti-heroic thread" building in Phase Five of the MCU. The first trailer for the series debuted at the 2022 D23 Expo in September 2022. Polygon's Austen Goslin felt the trailer was "mostly a recap of the series' plot", while Vanity Fair's Anthony Breznican noted how Fury had both eyes and said he "appears to be done relying on others to help save the world". Tamera Jones from Collider felt the trailer was "action-packed with explosions and intrigue, giving off more of a spy vibe than a fun paranoid mystery". The second trailer debuted during Sunday Night Baseball on ESPN on April 2, 2023. Edidiong Mboho of Collider felt the trailer "evokes the thrill and excitement" like the first one and provided the "same sense of urgency and paranoia from the Skrull infiltration". Mboho lauded the trailer for featuring the all-star cast of the series "without giving too much away" of its plot. Dais Johnston of Inverse felt that every shot of the trailer provided a "flashy-but-gritty spy-fi story that swaps out the powers and wisecracks of past works for the ingenuity and strategy Nick Fury is known for". Sam Barsanti of The A.V. Club said the trailer featured "more of the physical and psychological toll that life in general has taken on Fury". In early June 2023, a viral marketing website was created for the series that featured a five-minute clip from the first episode and a new trailer for the series. The locked website was initially revealed through cryptic images tweeted on the series' official Twitter account, which included clues to form the password that allowed access to it. At San Diego Comic-Con in July 2023, a Skrull "invasion" occurred, with fans seeing or becoming Skrulls around the convention. Release A red carpet premiere event for Secret Invasion was held in Los Angeles at the El Capitan Theater on June 13, 2023. The series debuted on Disney+ on June 21, 2023, consisting of six episodes, and concluded on July 26, 2023. It had previously been expected to be released in early 2023. It is the first series of Phase Five of the MCU. The first three episodes were made available on Hulu from July 21 to August 17, 2023, to promote the finale of the series. Reception Viewership Market research company Parrot Analytics, which measures consumer engagement across research, streaming, downloads, and social media, reported that Secret Invasion was the most in-demand new show in the U.S. for the quarter from April 1 to June 30, 2023. It garnered 42.1 times the average series demand in its first 30 days. The series experienced higher initial demand spikes compared to other Marvel series on Disney+. Whip Media, which tracks viewership data for the more than 25 million worldwide users of its TV Time app, calculated that Secret Invasion was the seventh most-watched streaming original television series of 2023. According to the file-sharing news website TorrentFreak, Secret Invasion was the fifth most-watched pirated television series of 2023. Parrot Analytics reported that Secret Invasion was the third most in-demand streaming original of 2023, with 40 times the average demand for shows. Critical response The review aggregator website Rotten Tomatoes reported an approval rating of 52%, with an average score of 6.1/10, based on 197 reviews. The site's critics' consensus states: "A well-deserved showcase for Samuel L.
Jackson, Secret Invasion steadies itself after a somewhat slow start by taking the MCU in a darker, more mature direction." Metacritic, which uses a weighted average, assigned the series a score of 63 out of 100 based on 24 critics, indicating "generally favorable reviews". Richard Newby at Empire gave the series 4 out of 5 stars, feeling that it was "a riveting, tense drama that gifts its actors with weighty material and encourages its audience to look beyond the sheen of superheroism." Newby found the series had taken a "sharp turn" from the sense of comfort of previous MCU projects due to the depiction of mature themes, such as terrorism and torture. Eric Deggans of NPR praised the performance of Samuel L. Jackson and called the series an "antidote to superhero fatigue", writing, "By centering on an aging Nick Fury who is struggling to handle a crisis created by his own broken promises, we get a story focused much more on a flawed hero than some kind of super-person juggling computer-generated cars." Lucy Mangan of The Guardian gave the show a grade of 3 out of 5 stars, stating, "Some moments in Marvel's latest TV series remind you how utterly watchable brilliant actors are – despite this darker, more mature outing needing a tad more thought." Barry Hertz of The Globe and Mail said "The chases are slow, the explosions meh, the entire pace and tempo sluggish... The real folly of Secret Invasion is that it compels the best actors of any Marvel series so far to squirm while delivering soul-deadening expository dialogue." Accolades TVLine placed Secret Invasion third on their list of the 10 Worst Shows of 2023. Documentary special In February 2021, the documentary series Marvel Studios: Assembled was announced. The special on this series, "The Making of Secret Invasion", was released on Disney+ on September 20, 2023. Future In September 2022, Feige stated that Secret Invasion would lead into Armor Wars, with Cheadle set to reprise his role as Rhodes. The series was originally believed to tie in with the film The Marvels, in which Jackson reprises his role as Fury, but that film largely ignores the events of Secret Invasion. Matt Webb Mitovich at TVLine speculated that it likely was intended for The Marvels to be set before Secret Invasion, given that film had numerous previous release dates prior to Secret Invasion's premiere, though if so, that assumption "still leaves continuity issues all over the place". Notes References External links Official website at Marvel.com Secret Invasion at IMDb Secret Invasion on Disney+ The Invasion Has Begun viral marketing website (Archived July 21, 2023, at the Wayback Machine)
Wikipedia
The Pile is an 886.03 GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. It is composed of 22 smaller datasets, including 14 new ones. Creation Training LLMs requires sufficiently vast amounts of data that, before the introduction of the Pile, most data used for training LLMs was taken from the Common Crawl. However, LLMs trained on more diverse datasets are better able to handle a wider range of situations after training. The creation of the Pile was motivated by the need for a large enough dataset that contained data from a wide variety of sources and styles of writing. Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it. Contents and filtering Artificial intelligences do not learn all they can from data on the first pass, so it is common practice to train an AI on the same data more than once with each pass through the entire dataset referred to as an "epoch". Each of the 22 sub-datasets that make up the Pile was assigned a different number of epochs according to the perceived quality of the data. The table below shows the relative size of each of the 22 sub-datasets before and after being multiplied by the number of epochs. Numbers have been converted to GB, and asterisks are used to indicate the newly introduced datasets. EleutherAI chose the datasets to try to cover a wide range of topics and styles of writing, including academic writing, which models trained on other datasets were found to struggle with. All data used in the Pile was taken from publicly accessible sources. EleutherAI then filtered the dataset as a whole to remove duplicates. Some sub-datasets were also filtered for quality control. Most notably, the Pile-CC is a modified version of the Common Crawl in which the data was filtered to remove parts that are not text, such as HTML formatting and links. Some potential sub-datasets were excluded for various reasons, such as the US Congressional Record, which was excluded due to its racist content. Within the sub-datasets that were included, individual documents were not filtered to remove non-English, biased, or profane text. It was also not filtered on the basis of consent, meaning that, for example, the Pile-CC has all of the same ethical issues as the Common Crawl itself. However, EleutherAI has documented the amount of bias (on the basis of gender, religion, and race) and profanity as well as the level of consent given for each of the sub-datasets, allowing an ethics-concerned researcher to use only those parts of the Pile that meet their own standards. Use The Pile was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation, Meta AI's Open Pre-trained Transformers, LLaMA, and Galactica, Stanford University's BioMedLM 2.7B, the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL, Yandex's YaLM 100B, and Apple's OpenELM. In addition to being used as a training dataset, the Pile can also be used as a benchmark to test models and score how well they perform on a variety of writing styles. 
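The epoch-based weighting described above can be sketched in a few lines of Python. The sub-dataset names, sizes, and epoch counts below are purely illustrative placeholders, not the Pile's actual composition; the point is only how a per-dataset epoch count turns into an effective sampling weight.

```python
import random

# Hypothetical sub-datasets: name -> (raw size in GB, number of epochs).
# These values are illustrative only; the real Pile documents its own table.
sub_datasets = {
    "web_text": (200.0, 1.0),
    "academic": (100.0, 2.0),
    "code":     (50.0, 1.5),
}

# Effective size = raw size x epochs; the sampling weight is its share of the total.
effective = {name: size * epochs for name, (size, epochs) in sub_datasets.items()}
total = sum(effective.values())
weights = {name: eff / total for name, eff in effective.items()}

def sample_source(rng=random):
    """Pick which sub-dataset the next training document is drawn from."""
    r, cumulative = rng.random(), 0.0
    for name, w in weights.items():
        cumulative += w
        if r < cumulative:
            return name
    return name  # guard against floating-point rounding at the boundary

print(weights)           # e.g. web_text ~0.42, academic ~0.42, code ~0.16
print(sample_source())
```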
DMCA takedown The Books3 component of the dataset contains copyrighted material compiled from Bibliotik, a pirate website. In July 2023, the Rights Alliance took copies of The Pile down through DMCA notices. Users responded by creating copies of The Pile with the offending content removed. See also List of chatbots
Wikipedia
NovoGen is a proprietary form of 3D printing technology that allows scientists to assemble living tissue cells into a desired pattern. When combined with an extracellular matrix, the cells can be arranged into complex structures, such as organs. Designed by Organovo, the NovoGen technology has been successfully integrated by Invetech with a production printer that is intended to help develop processes for tissue repair and organ development.
Wikipedia
The Confederation of European Environmental Engineering Societies (CEEES) was created as a co-operative international organization for information exchange regarding environmental engineering between the various European societies in this field. The CEEES maintains an online public discussion forum for the interchange of information. The member societies of the CEEES As of 2012, these were the twelve member societies of the CEEES: Italy: Associazione Italia Tecnici Prove Ambientali (AITPA) France: Association pour le Développement des Sciences et Techniques de l'Environnement (ASTE) Belgium: Belgian Society of Mechanical and Environmental Engineering (BSMEE) Germany: Gesellschaft für Umweltsimulation (GUS) Finland: Finnish Society of Environmental Engineering (KOTEL) Czech Republic: National Association of Czech Environmental Engineers (NACEI) Austria: Österreichische Gesellschaft für Umweltsimulation (ÖGUS) Netherlands: PLatform Omgevings Technologie (PLOT) United Kingdom: Society of Environmental Engineers (SEE) Sweden: Swedish Environmental Engineering Society (SEES) Portugal: Sociedade Portuguesa de Simulacao Ambiental e Aveliaca de Riscos (SOPSAR) Switzerland: Swiss Society for Environmental Engineering (SSEE) Each member society successively holds the presidency and the secretariat for a period of two years. Technical Advisory Boards The CEEES has three major Technical Advisory Boards: Mechanical Environments: The aim of this board is to advance methodologies and technologies for quantifying, describing and simulating mechanical environmental conditions experienced by mechanical equipment during its useful life. Climatic and Atmospheric Pollution Effects: The aim of this board is to study the effects of climate and atmospheric pollution on materials and mechanical equipment. Reliability and Environmental Stress Screening: The aim of this board is to study how the environment affects the reliability of equipment. Publications These are some of the publications of the CEEES: A Bibliography on Transportation Environment, ISSN 1104-6341, published by the Swedish Packaging Research Institute (Packforsk) in 1994. Synthesis of an ESS-Survey at the European Level, ISSN 1104-6341, published by the Swiss Society for Environmental Engineering (SSEE) in 1998. List of Technical Documents Dedicated or Related to ESS, ISBN 91-974043-0-6, published by the Swiss Society for Environmental Engineering (SSEE) in 1998. Climatic and Air Pollution Effects on Material and Equipment, ISBN 978-3-9806167-2-0, published by Gesellschaft für Umweltsimulation (GUS) in 1999. Natural and Artificial Ageing of Polymers, 1st European Weathering Symposium, Prague. ISBN 3-9808382-5-0, published by Gesellschaft für Umweltsimulation (GUS) in 2004. Natural and Artificial Ageing of Polymers, 2nd European Weathering Symposium, Gothenburg. ISBN 3-9808382-9-3, published by Gesellschaft für Umweltsimulation (GUS) in 2005. Ultrafine Particles – Key in the Issue of Particulate Matter?, 18th European Federation of Clean Air (EFCA) International Symposium, published by the Karlsruhe Research Center (Forschungszentrum Karlsruhe FZK) in 2007. Natural and Artificial Ageing of Polymers, 3rd European Weathering Symposium, Kraków. ISBN 978-3-9810472-3-3, published by GUS in 2005. Reliability - For A Mature Product From The Beginning Of Useful Life. The Different Type Of Tests And Their Impact On Product Reliability. ISSN 1104-6341, published online by CEEES in 2009.
See also European Environment Agency Environment Agency Ministry of Housing, Spatial Planning and the Environment (Netherlands) Environmental technology Environmental science Coordination of Information on the Environment External links Official website ASTE website Archived 2021-05-11 at the Wayback Machine BSMEE website CEEES website. GUS website KOTEL website ÖGUS website PLOT website SEE website SEES website SOPSAR website SSEE website
Wikipedia
In engineering, the mass transfer coefficient is a diffusion rate constant that relates the mass transfer rate, mass transfer area, and concentration change as driving force:

$k_c = \dfrac{\dot{n}_A}{A \, \Delta c_A}$

where $k_c$ is the mass transfer coefficient [mol/(s·m²)/(mol/m³)], or m/s; $\dot{n}_A$ is the mass transfer rate [mol/s]; $A$ is the effective mass transfer area [m²]; and $\Delta c_A$ is the driving force concentration difference [mol/m³]. This can be used to quantify the mass transfer between phases, immiscible and partially miscible fluid mixtures (or between a fluid and a porous solid). Quantifying mass transfer allows for the design and manufacture of separation process equipment that can meet specified requirements, and for estimating what will happen in real-life situations (such as a chemical spill). Mass transfer coefficients can be estimated from many different theoretical equations, correlations, and analogies that are functions of material properties, intensive properties and flow regime (laminar or turbulent flow). Selection of the most applicable model depends on the materials and the system, or environment, being studied. Mass transfer coefficient units: (mol/s)/(m²·mol/m³) = m/s. Note that the units will vary based upon which units the driving force is expressed in. The driving force shown here as $\Delta c_A$ is expressed in units of moles per unit of volume, but in some cases the driving force is represented by other measures of concentration with different units. For example, the driving force may be partial pressures when dealing with mass transfer in a gas phase and thus use units of pressure. See also Mass transfer Separation process Sieving coefficient
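As a quick numerical illustration of the defining relation above, the short sketch below uses made-up values (the numbers are not from any reference; they are only chosen to show the unit bookkeeping).

```python
# k_c = n_dot_A / (A * delta_c_A), with illustrative values.
n_dot_A = 2.0e-3   # mass transfer rate [mol/s]
A = 0.5            # effective mass transfer area [m^2]
delta_c_A = 10.0   # driving-force concentration difference [mol/m^3]

k_c = n_dot_A / (A * delta_c_A)   # units: (mol/s) / (m^2 * mol/m^3) = m/s
print(f"k_c = {k_c:.1e} m/s")     # 4.0e-04 m/s
```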
Wikipedia
A mathemagician is a mathematician who is also a magician. The term "mathemagic" is believed to have been introduced by Royal Vale Heath with his 1933 book "Mathemagic". The name "mathemagician" was probably first applied to Martin Gardner, but has since been used to describe many mathematician/magicians, including Arthur T. Benjamin, Persi Diaconis, and Colm Mulcahy. Diaconis has suggested that the reason so many mathematicians are magicians is that "inventing a magic trick and inventing a theorem are very similar activities." Mathemagician is a neologism, specifically a portmanteau, that combines mathematician and magician. A great number of self-working mentalism tricks rely on mathematical principles, such as Gilbreath's principle. Max Maven often utilizes this type of magic in his performance. The Mathemagician is the name of a character in the 1961 children's book The Phantom Tollbooth. He is the ruler of Digitopolis, the kingdom of mathematics. Notable mathemagicians Jin Akiyama Arthur T. Benjamin Persi Diaconis Alex Elmsley Richard Feynman Karl Fulves Martin Gardner Norman Laurence Gilbreath Ronald Graham Vi Hart Royal Vale Heath Colm Mulcahy W. W. Rouse Ball Raymond Smullyan References Further reading Diaconis, Persi & Graham, Ron. Magical Mathematics: The Mathematical Ideas That Animate Great Magic Tricks Princeton University Press, 2012. ISBN 0691169772 Fulves, Karl. Self-working Number Magic, New York London : Dover Constable, 1983. ISBN 0486243915 Gardner, Martin. Mathematics, Magic and Mystery, Dover, 1956. ISBN 0-486-20335-2 Graham, Ron. Juggling Mathematics and Magic University of California, San Diego Teixeira, Ricardo & Park, Jang Woo. Mathemagics: A Magical Journey Through Advanced Mathematics, Connecting More Than 60 Magic Tricks to High-Level Math World Scientific, 2020. ISBN 978-9811215308.
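As an illustration of the kind of self-working principle mentioned above, the following sketch simulates Gilbreath's first principle: deal any number of cards off a deck that alternates red and black (dealing reverses them), riffle the dealt pile into the rest, and every consecutive pair from the top then contains one red and one black card. The simulation itself is only an illustrative sketch and is not drawn from any of the works cited above.

```python
import random

def riffle(left, right):
    """Randomly interleave two piles while preserving the order within each pile."""
    left, right, merged = list(left), list(right), []
    while left or right:
        # Take the next card from a pile with probability proportional to its size.
        if random.random() < len(left) / (len(left) + len(right)):
            merged.append(left.pop(0))
        else:
            merged.append(right.pop(0))
    return merged

def gilbreath_holds(deck_size=52, trials=1000):
    deck = ["R" if i % 2 == 0 else "B" for i in range(deck_size)]  # alternating colors
    for _ in range(trials):
        k = random.randint(1, deck_size - 1)
        dealt = deck[:k][::-1]                  # dealing cards one at a time reverses them
        shuffled = riffle(dealt, deck[k:])
        for i in range(0, deck_size, 2):        # every consecutive pair: one red, one black
            if {shuffled[i], shuffled[i + 1]} != {"R", "B"}:
                return False
    return True

print(gilbreath_holds())  # True
```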
Wikipedia
Recoil is a rheological phenomenon observed only in non-Newtonian fluids that is characterized by a moving fluid's ability to snap back to a previous position when external forces are removed. Recoil is a result of the fluid's elasticity and memory: the speed and acceleration with which the fluid moves depend on its molecular structure, while the location to which it returns depends on the conformational entropy. This effect is observed in numerous non-Newtonian liquids to a small degree, but is prominent in some materials such as molten polymers. Memory The degree to which a fluid will "remember" where it came from depends on the entropy. Viscoelastic properties in fluids cause them to snap back to entropically favorable conformations. Recoil is observed when a favorable conformation is in the fluid's recent past. However, the fluid cannot fully return to its original position due to energy losses stemming from less than perfect elasticity. Recoiling fluids display fading memory, meaning the longer a fluid is elongated, the less it will recover. Recoil is related to the characteristic time, an order-of-magnitude estimate of the system's response time. Fluids that are described as recoiling generally have characteristic times on the order of a few seconds. Although recoiling fluids usually recover relatively small distances, some molten polymers can recover back to 1/10 of the total elongation. This property of polymers must be accounted for in polymer processing. Demonstrations of recoil When a spinning rod is placed in a polymer solution, elastic forces generated by the rotational motion cause fluid to climb up the rod (a phenomenon known as the Weissenberg effect). If the torque being applied is immediately brought to a stop, the fluid recoils down the rod. When a viscoelastic fluid being poured from a beaker is quickly cut with a pair of scissors, the fluid recoils back into the beaker. When fluid at rest in a circular tube is subjected to a pressure drop, a parabolic flow distribution is observed that pulls the liquid down the tube. Immediately after the pressure is alleviated, the fluid recoils backward in the tube and forms a more blunt flow profile. When Silly Putty is rapidly stretched and held at an elongated position for a short period of time, it springs back. However, if it is held at an elongated position for a longer period of time, there is very little recovery and no visible recoil.
Wikipedia
Juergen Pirner (born 1956) is the German creator of Jabberwock, a chatterbot that won the 2003 Loebner prize. Pirner created Jabberwock, modelling it on the Jabberwock from Lewis Carroll's poem "Jabberwocky". Initially, Jabberwock would just give rude or fantasy-related answers, but over the years Pirner programmed better responses into it. As of 2007, he had taught it 2.7 million responses. Pirner lives in Hamburg, Germany. References External links Talk to Jabberwock
Wikipedia
A nuclear clock or nuclear optical clock is an atomic clock being developed that will use the energy of a nuclear isomeric transition as its reference frequency, instead of the atomic electron transition energy used by conventional atomic clocks. Such a clock is expected to be more accurate than the best current atomic clocks by a factor of about 10, with an achievable accuracy approaching the 10⁻¹⁹ level. The only nuclear state suitable for the development of a nuclear clock using existing technology is thorium-229m, an isomer of thorium-229 and the lowest-energy nuclear isomer known. With an energy of 8.355733554021(8) eV, this corresponds to a frequency of 2020407384335±2 kHz, or wavelength of 148.382182883 nm, in the vacuum ultraviolet region, making it accessible to laser excitation. Principle of operation Atomic clocks are today's most accurate timekeeping devices. They operate by exploiting the fact that the gap between the energy levels of two bound electron states in an atom is constant across space and time. A bound electron can be excited with electromagnetic radiation precisely when the radiation's photon energy matches the energy of the transition. Via the Planck relation, that transition energy corresponds to a particular frequency. By irradiating an appropriately prepared collection of identical atoms and measuring the number of excitations induced, a light source's frequency can be tuned to maximize this response and therefore closely match the corresponding electron transition energy. The transition energy thus provides a standard of reference which can be used to calibrate such a source reliably. Conventional atomic clocks use microwave (high-frequency radio wave) frequencies, but development of the laser has made it possible to generate very stable light frequencies, and the frequency comb makes it possible to count those oscillations (measured in hundreds of THz, meaning hundreds of trillions of cycles per second) to extraordinarily high accuracy. A device which uses a laser in this way is known as an optical atomic clock. One prominent example of an optical atomic clock is the ytterbium (Yb) lattice clock, where a particular electron transition in the ytterbium-171 isotope is used for laser stabilization. In this case, one second has elapsed after 518295836590863.63±0.1 oscillations of the laser light stabilized to the corresponding electron transition. Other examples of optical atomic clocks of the highest accuracy are the Yb-171 single-ion clock, the strontium (Sr)-87 optical lattice clock, and the aluminum (Al)-27 single-ion clock. The achieved accuracies of these clocks vary around 10⁻¹⁸, corresponding to about 1 second of inaccuracy in 30 billion years, significantly longer than the age of the universe. A nuclear optical clock would use the same principle of operation, with the important difference that a nuclear transition instead of an atomic shell electron transition is used for laser stabilization. The expected advantage of a nuclear clock is that the atomic nucleus is smaller than the atomic shell by up to five orders of magnitude, with correspondingly smaller magnetic dipole and electric quadrupole moments, and is therefore significantly less affected by external magnetic and electric fields. Such external perturbations are the limiting factor for the achieved accuracies of electron-based atomic clocks.
Due to this conceptual advantage, a nuclear optical clock is expected to achieve a time accuracy approaching 10⁻¹⁹, a ten-fold improvement over electron-based clocks. Ionization An excited atomic nucleus can shed its excess energy by two alternative paths: radiatively, by direct photon (gamma ray) emission, or by internal conversion, transferring the energy to a shell electron which is ejected from the atom. For most nuclear isomers, the available energy is sufficient to eject any electron, and the inner-shell electrons are the most frequently ejected. In the special case of 229mTh, the energy is sufficient only to eject an outer electron (thorium's first ionization energy is 6.3 eV), and if the atom is already ionized, there is not enough energy to eject a second (thorium's second ionization energy is 11.5 eV). The two decay paths have different half-lives. Neutral 229mTh decays almost exclusively by internal conversion, with a half-life of 7±1 μs. In thorium cations, internal conversion is energetically prohibited, and 229mTh+ is forced to take the slower path, decaying radiatively with a half-life of around half an hour. Thus, in the typical case that the clock is designed to measure radiated photons, it is necessary to hold the thorium in an ionized state. This can be done in an ion trap, or by embedding it in an ionic crystal with a band gap greater than the transition energy. In this case, the atoms are not 100% ionized, and a small amount of internal conversion is possible (reducing the half-life to approximately 10 minutes), but the loss is tolerable. Different nuclear clock concepts Two different concepts for nuclear optical clocks have been discussed in the literature: trap-based nuclear clocks and solid-state nuclear clocks. Trap-based nuclear clocks For a trap-based nuclear clock either a single 229Th3+ ion is trapped in a Paul trap, known as the single-ion nuclear clock, or a chain of multiple ions is trapped, considered as the multiple-ion nuclear clock. Such clocks are expected to achieve the highest time accuracy, as the ions are to a large extent isolated from their environment. A multiple-ion nuclear clock could have a significant advantage over the single-ion nuclear clock in terms of stability performance. Solid-state nuclear clocks As the nucleus is largely unaffected by the atomic shell, it is also intriguing to embed many nuclei into a crystal lattice environment. This concept is known as the crystal-lattice nuclear clock. Due to the high density of embedded nuclei of up to 10¹⁸ per cm³, this concept would allow irradiating a huge number of nuclei in parallel, thereby drastically increasing the achievable signal-to-noise ratio, but at the cost of potentially higher external perturbations. It has also been proposed to irradiate a metallic 229Th surface and to probe the isomer's excitation in the internal conversion channel, which is known as the internal-conversion nuclear clock. Both types of solid-state nuclear clocks were shown to offer the potential for comparable performance. Transition requirements From the principle of operation of a nuclear optical clock, it is evident that direct laser excitation of a nuclear state is a central requirement for the development of such a clock. This is impossible for most nuclear transitions, as the typical energy range of nuclear transitions (keV to MeV) is orders of magnitude above the maximum energy which is accessible with significant intensity by today's narrow-bandwidth laser technology (a few eV).
There are only two nuclear excited states known which possess a sufficiently low excitation energy (below 100 eV). These are 229mTh, a metastable nuclear excited state of the isotope thorium-229 with an excitation energy of only about 8 eV, and 235m1U, a metastable excited state of uranium-235 with an energy of 76.7 eV. However, 235m1U has such an extraordinarily long radiative half-life (on the order of 10²² s, 20,000 times the age of the universe, and far longer than its internal conversion half-life of 26 minutes) that it is not practical to use for a clock. This leaves only 229mTh with a realistic chance of direct nuclear laser excitation. Further requirements for the development of a nuclear clock are that the lifetime of the nuclear excited state is relatively long, thereby leading to a resonance of narrow bandwidth (a high quality factor), and that the ground-state nucleus is easily available and sufficiently long-lived to allow one to work with moderate quantities of the material. Fortunately, with 229mTh+ having a radiative half-life (time to decay to 229Th+) of around 10³ s, and 229Th having a half-life (time to decay to 225Ra) of 7917±48 years, both conditions are fulfilled for 229mTh+, making it an ideal candidate for the development of a nuclear clock. History History of nuclear clocks As early as 1996 it was proposed by Eugene V. Tkalya to use the nuclear excitation as a "highly stable source of light for metrology". With the development (around 2000) of the frequency comb for measuring optical frequencies exactly, a nuclear optical clock based on 229mTh was first proposed in 2003 by Ekkehard Peik and Christian Tamm, who developed an idea of Uwe Sterr. The paper contains both concepts: the single-ion nuclear clock as well as the solid-state nuclear clock. In their pioneering work, Peik and Tamm proposed to use individual laser-cooled 229Th3+ ions in a Paul trap to perform nuclear laser spectroscopy. Here the 3+ charge state is advantageous, as it possesses a shell structure suitable for direct laser cooling. It was further proposed to excite an electronic shell state, to achieve 'good' quantum numbers of the total system of the shell plus nucleus that will lead to a reduction of the influence induced by external perturbing fields. A central idea is to probe the successful laser excitation of the nuclear state via the hyperfine-structure shift induced into the electronic shell due to the different nuclear spins of the ground and excited states. This method is known as the double-resonance method. The expected performance of a single-ion nuclear clock was further investigated in 2012 by Corey Campbell et al. with the result that a systematic frequency uncertainty (accuracy) of the clock of 1.5×10⁻¹⁹ could be achieved, which would be about an order of magnitude better than the accuracy achieved by the best optical atomic clocks today. The nuclear clock approach proposed by Campbell et al. slightly differs from the original one proposed by Peik and Tamm. Instead of exciting an electronic shell state in order to obtain the highest insensitivity against external perturbing fields, the nuclear clock proposed by Campbell et al. uses a stretched pair of nuclear hyperfine states in the electronic ground-state configuration, which appears to be advantageous in terms of the achievable quality factor and an improved suppression of the quadratic Zeeman shift. In 2010, Eugene V. Tkalya showed that it was theoretically possible to use 229mTh as a lasing medium to generate an ultraviolet laser.
The solid-state nuclear clock approach was further developed in 2010 by W.G. Rellergert et al. with the result of an expected long-term accuracy of about 2×10⁻¹⁶. Although expected to be less accurate than the single-ion nuclear clock approach due to line-broadening effects and temperature shifts in the crystal lattice environment, this approach may have advantages in terms of compactness, robustness and power consumption. The expected stability performance was investigated by G. Kazakov et al. in 2012. In 2020, the development of an internal conversion nuclear clock was proposed. Important steps on the road towards a nuclear clock include the successful direct laser cooling of 229Th3+ ions in a Paul trap achieved in 2011, and a first detection of the isomer-induced hyperfine-structure shift, enabling the double-resonance method to probe a successful nuclear excitation in 2018. History of 229mTh Since 1976, the 229Th nucleus has been known to possess a low energy excited state, whose excitation energy was originally shown to be less than 100 eV, and then shown to be less than 10 eV in 1990. This was, however, too broad an energy range to apply high-resolution spectroscopy techniques; the transition energy had to be narrowed down first. Initial efforts used the fact that, after the alpha decay of 233U, the resultant 229Th nucleus is in an excited state and promptly emits a gamma ray to decay to either the ground state or the metastable state. Measuring the small difference in the gamma-ray energies emitted in these processes allows the metastable state energy to be found by subtraction. However, nuclear experiments are not capable of finely measuring the difference in frequency between two high gamma-ray energies, so other experiments were needed. Because of the natural radioactive decay of 229Th nuclei, a tightly concentrated laser frequency was required to excite enough nuclei in an experiment to outcompete the background radiation and give a more accurate measurement of the excitation energy. Because it was infeasible to scan the entire 100 eV range, an estimate of the correct frequency was needed. An early misstep was the (incorrect) measurement of the energy value as 3.5±1.0 eV in 1994. This frequency of light is relatively easy to work with, so many direct detection experiments were attempted which had no hope of success because they were built of materials opaque to photons at the true, higher, energy. In particular: thorium oxide is transparent to 3.5 eV photons but opaque at 8.3 eV; common optical lens and window materials such as fused quartz are opaque at energies above 8 eV; molecular oxygen (air) is opaque to photons above 6.2 eV, so experiments must be conducted in a nitrogen or argon atmosphere; and the ionization energy of thorium is 6.3 eV, so the nuclei will decay by internal conversion unless prevented (see § Ionization). The energy value remained elusive until 2003, when the nuclear clock proposal triggered a multitude of experimental efforts to pin down the excited state's parameters like energy and half-life. The detection of light emitted in the direct decay of 229mTh would significantly help to determine its energy to higher precision, but all efforts to observe the light emitted in the decay of 229mTh were failing. The energy level was corrected to 7.6±0.5 eV in 2007 (slightly revised to 7.8±0.5 eV in 2009).
Subsequent experiments continued to fail to observe any signal of light emitted in the direct decay, leading people to suspect the existence of a strong non-radiative decay channel. The detection of light emitted by the decay of 229mTh was reported in 2012, and again in 2018, but the observed signals were the subject of controversy within the community. A direct detection of electrons emitted by the isomer's internal conversion decay channel was achieved in 2016. This detection laid the foundation for the determination of the 229mTh half-life in neutral, surface-bound atoms in 2017 and a first laser-spectroscopic characterization in 2018. In 2019, the isomer's energy was measured as 8.28±0.17 eV via the detection of internal conversion electrons emitted in its direct ground-state decay. Also a first successful excitation of the 29 keV nuclear excited state of 229Th via synchrotron radiation was reported, enabling a clock transition energy measurement of 8.30±0.92 eV. In 2020, an energy of 8.10±0.17 eV was obtained from precision gamma-ray spectroscopy. Finally, precise measurements were achieved in 2023 by unambiguous detection of the emitted photons (8.338(24) eV) and in April 2024 by two reports of excitation with a tunable laser at 8.355733(10) eV and 8.35574(3) eV. The light frequency is now known with sufficient accuracy to enable future construction of a prototype clock, and determine the transition's exact frequency and its stability. Precision frequency measurements began immediately, with Jun Ye's laboratory at JILA making a direct comparison to a 87Sr optical atomic clock. Published in September 2024, the frequency was measured as 2020407384335±2 kHz, a relative uncertainty of 10⁻¹². This implies a wavelength of 148.3821828827(15) nm and an energy of 8.355733554021(8) eV. The work also resolved different nuclear quadrupole sublevels and measured the ratio of the ground and excited state nuclear quadrupole moments. Improvements will surely follow. Applications When operational, a nuclear optical clock is expected to be applicable in various fields. In addition to the capabilities of today's atomic clocks, such as satellite-based navigation or data transfer, its high precision will allow new applications inaccessible to other atomic clocks, such as relativistic geodesy, the search for topological dark matter, or the determination of time variations of fundamental constants. A nuclear clock has the potential to be particularly sensitive to possible time variations of the fine-structure constant. The central idea is that the low transition energy is due to a fortuitous cancellation between strong nuclear and electromagnetic effects within the nucleus, which are individually much larger. Any variation of the fine-structure constant would affect the electromagnetic half of this balance, resulting in a proportionally very large change in the total transition energy. A change of even one part in 10¹⁸ could be detected by comparison with a conventional atomic clock (whose frequency would also be altered, but not nearly as much), so this measurement would be extraordinarily sensitive to any potential variation of the constant. Recent measurements and analysis are consistent with enhancement factors on the order of 10⁴. References Further reading "The 229Th isomer: prospects for a nuclear optical clock" (November 2020) European Physical Journal A. External links EU thorium nuclear clock (nuClock) project
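The quoted transition values can be cross-checked with the Planck relation, E = hν, together with λ = c/ν. The constants below are the exact SI defining values and the energy is the 2024 measurement quoted above; the script is only an illustrative consistency check, not part of any cited work.

```python
h = 6.62607015e-34     # Planck constant [J*s] (exact by SI definition)
e = 1.602176634e-19    # elementary charge [C] (exact), converts eV to J
c = 299792458.0        # speed of light [m/s] (exact)

E_eV = 8.355733554021  # 229mTh isomer transition energy [eV]
nu = E_eV * e / h      # frequency [Hz]
lam = c / nu           # vacuum wavelength [m]

print(f"frequency  ~ {nu / 1e12:.3f} THz")   # ~2020.407 THz, i.e. ~2.020407e15 Hz
print(f"wavelength ~ {lam * 1e9:.3f} nm")    # ~148.382 nm, in the vacuum ultraviolet
```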
Wikipedia
Shaft voltage occurs in electric motors and generators due to leakage, induction, or capacitive coupling with the windings of the motor. It can occur in motors powered by variable-frequency drives, as often used in heating, ventilation, air conditioning and refrigeration systems. DC machines may have leakage current from the armature windings that energizes the shaft. Currents due to shaft voltage cause deterioration of motor bearings, but this can be prevented with a grounding brush on the shaft, grounding of the motor frame, insulation of the bearing supports, or shielding. Shaft voltage can be induced by non-symmetrical magnetic fields of the motor (or generator) itself. External sources of shaft voltage include other coupled machines, and electrostatic charging due to rubber belts rubbing on drive pulleys. Every rotor has some degree of capacitive coupling to the motor's electrical windings, but the effective inline capacitor acts as a high-pass filter, so the coupling is often weak at 50–60 Hz line frequency (a simple impedance estimate is sketched at the end of this article). Many variable-frequency drives (VFDs), however, induce significant voltage onto the shaft of the driven motor because of the kilohertz switching of the insulated-gate bipolar transistors (IGBTs) that produce the pulse-width modulation used to control the motor. The presence of high-frequency ground currents can cause sparks, arcing and electrical shocks and can damage bearings. Counter-measures Techniques used to minimise this problem include: insulation, alternate discharge paths, a Faraday shield, insulated bearings, ceramic bearings, a grounding brush and a shaft grounding ring. Faraday shield An electrostatic shielded induction motor (ESIM) is one approach to the shaft-voltage problem, as the shielding reduces induced voltage levels below the threshold of dielectric breakdown. This effectively stops bearing degradation and offers one solution to accelerated bearing wear caused by fluting induced by pulse-width modulated (PWM) inverters. Grounding brush Grounding the shaft by installing a grounding brush device on either the non-drive end or drive end of a VFD electric motor provides an alternate low-impedance path from the motor shaft to the motor case. This method channels the current away from the bearings. It significantly reduces shaft voltage and therefore bearing current by not allowing voltage to build up on the rotor. Shaft grounding ring A shaft grounding ring is installed around the motor shaft and creates a low-impedance pathway for current to flow back to the motor frame and to ground. Various styles of rings exist, such as those containing microfilaments making direct contact with the shaft, or rings that clamp onto the shaft with a carbon brush riding on the ring (not directly on the shaft). Insulated bearings Insulated bearings eliminate the path through the bearing by which current can flow to ground. However, installing insulated bearings does not eliminate the shaft voltage, which will still find the lowest-impedance path to ground. This can potentially cause a problem if that path happens to be through the driven load or through some other component. Shielded cable High-frequency grounding can be significantly improved by installing shielded cable providing a very low-impedance path between the VFD and the motor. One popular cable type is continuous corrugated aluminum sheath cable. See also Stray voltage References External links "Technical guide No. 5 – Bearing currents in modern AC drive systems" (PDF). Archived from the original (PDF) on July 20, 2011. Retrieved May 23, 2011.
A Unique System for Reducing High Frequency Stray Noise and Transient Common Mode Ground Currents to Zero, While Enhancing Other Ground Issues Meeting Notices and Rule Changes from Electrical Manufacturing and Coil Winding
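The high-pass behaviour of the rotor-to-winding capacitive coupling described above can be illustrated with a simple impedance estimate. The sketch below assumes a purely illustrative stray capacitance of 100 pF (real machines vary widely) and compares the capacitive impedance at 50–60 Hz line frequency with that at typical VFD switching frequencies and at the higher frequency content of fast IGBT edges; the sharply lower impedance at high frequency is why PWM drives couple far more voltage onto the shaft than the line-frequency field does.

```cpp
// Sketch: impedance of a capacitive rotor-to-winding coupling path at
// different frequencies.  The 100 pF value is an illustrative assumption.
#include <cstdio>

// |Z| of an ideal capacitor: 1 / (2*pi*f*C)
double cap_impedance_ohms(double f_hz, double c_farad) {
    const double pi = 3.14159265358979323846;
    return 1.0 / (2.0 * pi * f_hz * c_farad);
}

int main() {
    const double C = 100e-12;  // assumed stray capacitance, 100 pF
    const double freqs[] = {50.0, 60.0, 4e3, 20e3, 1e6};  // Hz

    for (double f : freqs)
        std::printf("f = %10.0f Hz -> |Zc| = %12.1f ohm\n",
                    f, cap_impedance_ohms(f, C));
    return 0;
}
```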
Wikipedia
The history of the programming language Scheme begins with the development of earlier members of the Lisp family of languages during the second half of the twentieth century. During the design and development period of Scheme, language designers Guy L. Steele and Gerald Jay Sussman released an influential series of Massachusetts Institute of Technology (MIT) AI Memos known as the Lambda Papers (1975–1980). This resulted in growing popularity for the language and an era of standardization from 1990 onward. Much of the history of Scheme has been documented by the developers themselves. Prehistory The development of Scheme was heavily influenced by two predecessors that were quite different from one another: Lisp provided its general semantics and syntax, and ALGOL provided its lexical scope and block structure. Scheme is a dialect of Lisp but Lisp has evolved; the Lisp dialects from which Scheme evolved—although they were in the mainstream at the time—are quite different from any modern Lisp. Lisp Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I" (Part II was never published). He showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms. The use of s-expressions, which characterize the syntax of Lisp, was initially intended to be an interim measure pending the development of a language employing what McCarthy called "m-expressions". As an example, the m-expression car[cons[A,B]] is equivalent to the s-expression (car (cons A B)). S-expressions proved popular, however, and m-expressions never caught on despite many attempts to implement them. The first implementation of Lisp was on an IBM 704 by Steve Russell, who read McCarthy's paper and coded the eval function he described in machine code. The familiar (but puzzling to newcomers) names CAR and CDR, used in Lisp to describe the head element of a list and its tail, evolved from two IBM 704 assembly language commands: Contents of Address Register and Contents of Decrement Register, each of which returned the contents of a 15-bit register corresponding to segments of a 36-bit IBM 704 instruction word. The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT. This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The two variants of Lisp most significant in the development of Scheme were both developed at MIT: LISP 1.5, developed by McCarthy and others, and Maclisp – a direct descendant of LISP 1.5 developed for MIT's Project MAC – which ran on the PDP-10 and Multics systems. Since its inception, Lisp was closely connected with the artificial intelligence (AI) research community, especially on the PDP-10. The 36-bit word size of the PDP-6 and PDP-10 was influenced by the usefulness of having two Lisp 18-bit pointers in one word. ALGOL ALGOL 58, originally to be called IAL for "International Algorithmic Language", was developed jointly by a committee of European and American computer scientists in a meeting in 1958 at ETH Zurich.
ALGOL 60, a later revision developed at the ALGOL 60 meeting in Paris and now commonly named ALGOL, became the standard for the publication of algorithms and had a profound effect on future language development, despite the language's lack of commercial success and its limitations. Tony Hoare has remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors." ALGOL introduced the use of block structure and lexical scope. It was also notorious for its difficult call by name default parameter passing mechanism, which was defined so as to require textual substitution of the expression representing the working parameter in place of the formal parameter during execution of a procedure or function, causing it to be re-evaluated each time it is referenced during execution. ALGOL implementors developed a mechanism they called a thunk, which captured the context of the working parameter, enabling it to be evaluated during execution of the procedure or function. Carl Hewitt, the Actor model, and the birth of Scheme In 1971 Sussman, Drew McDermott, and Eugene Charniak had developed a system called Micro-Planner which was a partial and somewhat unsatisfactory implementation of Carl Hewitt's ambitious Planner project. Sussman and Hewitt worked together along with others on Muddle, later renamed MDL, an extended Lisp which formed a component of Hewitt's project. Drew McDermott, and Sussman in 1972 developed the Lisp-based language Conniver, which revised the use of automatic backtracking in Planner which they thought was unproductive. Hewitt was dubious that the "hairy control structure" in Conniver was a solution to the problems with Planner. Pat Hayes remarked: "Their [Sussman and McDermott] solution, to give the user access to the implementation primitives of Planner, is however, something of a retrograde step (what are Conniver's semantics?)" In November 1972, Hewitt and his students invented the Actor model of computation as a solution to the problems with Planner. A partial implementation of Actors was developed called Planner-73 (later called PLASMA). Steele, then a graduate student at MIT, had been following these developments, and he and Sussman decided to implement a version of the Actor model in their own "tiny Lisp" developed on Maclisp, to understand the model better. Using this basis they then began to develop mechanisms for creating actors and sending messages. PLASMA's use of lexical scope was similar to the lambda calculus. Sussman and Steele decided to try to model Actors in the lambda calculus. They called their modeling system Schemer, eventually changing it to Scheme to fit the six-character limit on the ITS file system on their DEC PDP-10. They soon concluded Actors were essentially closures that never return but instead invoke a continuation, and thus they decided that the closure and the Actor were, for the purposes of their investigation, essentially identical concepts. They eliminated what they regarded as redundant code and, at that point, discovered that they had written a very small and capable dialect of Lisp. Hewitt remained critical of the "hairy control structure" in Scheme and considered primitives (e.g., START!PROCESS, STOP!PROCESS, and EVALUATE!UNINTERRUPTIBLY) used in the Scheme implementation to be a backward step. 
25 years later, in 1998, Sussman and Steele reflected that the minimalism of Scheme was not a conscious design goal, but rather the unintended outcome of the design process. "We were actually trying to build something complicated and discovered, serendipitously, that we had accidentally designed something that met all our goals but was much simpler than we had intended... we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language." On the other hand, Hewitt remained critical of the lambda calculus as a foundation for computation writing "The actual situation is that the λ-calculus is capable of expressing some kinds of sequential and parallel control structures but, in general, not the concurrency expressed in the Actor model. On the other hand, the Actor model is capable of expressing everything in the λ-calculus and more." He has also been critical of aspects of Scheme that derive from the lambda calculus such as reliance on continuation functions and the lack of exceptions. The Lambda Papers Between 1975 and 1980 Sussman and Steele worked on developing their ideas about using the lambda calculus, continuations and other advanced programming concepts such as optimization of tail recursion, and published them in a series of AI Memos which have become collectively termed the Lambda Papers. List of papers 1975: Scheme: An Interpreter for Extended Lambda Calculus 1976: Lambda: The Ultimate Imperative 1976: Lambda: The Ultimate Declarative 1977: Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO 1978: The Art of the Interpreter or, the Modularity Complex (Parts Zero, One, and Two) 1978: RABBIT: A Compiler for SCHEME 1979: Design of LISP-based Processors, or SCHEME: A Dialect of LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode 1980: Compiler Optimization Based on Viewing LAMBDA as RENAME + GOTO 1980: Design of a Lisp-based Processor Influence Scheme was the first dialect of Lisp to choose lexical scope. It was also one of the first programming languages after Reynold's Definitional Language to support first-class continuations. It had a large impact on the effort that led to the development of its sister-language, Common Lisp, to which Guy Steele was a contributor. Standardization The Scheme language is standardized in the official Institute of Electrical and Electronics Engineers (IEEE) standard, and a de facto standard called the Revisedn Report on the Algorithmic Language Scheme (RnRS). The most widely implemented standard is R5RS (1998), and a new standard, R6RS, was ratified in 2007. Besides the RnRS standards there are also Scheme Requests for Implementation documents, that contain additional libraries that may be added by Scheme implementations. Timeline
Wikipedia
In classical mechanics and kinematics, Galileo's law of odd numbers states that the distance covered by a falling object in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. This mathematical model is accurate if the body is not subject to any forces besides uniform gravity (for example, it is falling in a vacuum in a uniform gravitational field). This law was established by Galileo Galilei who was the first to make quantitative studies of free fall. Explanation Using a speed-time graph The graph in the figure is a plot of speed versus time. Distance covered is the area under the line. Each time interval is coloured differently. The distance covered in the second and subsequent intervals is the area of its trapezium, which can be subdivided into triangles as shown. As each triangle has the same base and height, they have the same area as the triangle in the first interval. It can be observed that every interval has two more triangles than the previous one. Since the first interval has one triangle, this leads to the odd numbers. Using the sum of first n odd numbers From the equation for uniform linear acceleration, the distance covered s = u t + 1 2 a t 2 {\displaystyle s=ut+{\tfrac {1}{2}}at^{2}} for initial speed u = 0 , {\displaystyle u=0,} constant acceleration a {\displaystyle a} (acceleration due to gravity without air resistance), and time elapsed t , {\displaystyle t,} it follows that the distance s {\displaystyle s} is proportional to t 2 {\displaystyle t^{2}} (in symbols, s ∝ t 2 {\displaystyle s\propto t^{2}} ), thus the distance from the starting point are consecutive squares for integer values of time elapsed. The middle figure in the diagram is a visual proof that the sum of the first n {\displaystyle n} odd numbers is n 2 . {\displaystyle n^{2}.} In equations: That the pattern continues forever can also be proven algebraically: ∑ k = 1 n ( 2 k − 1 ) = 1 2 ( ∑ k = 1 n ( 2 k − 1 ) + ∑ k = 1 n ( 2 ( n − k + 1 ) − 1 ) ) = 1 2 ∑ k = 1 n ( 2 ( n + 1 ) − 1 − 1 ) = n 2 {\displaystyle {\begin{aligned}\sum _{k=1}^{n}(2\,k-1)&={\frac {1}{2}}\,\left(\sum _{k=1}^{n}(2\,k-1)+\sum _{k=1}^{n}(2\,(n-k+1)-1)\right)\\&={\frac {1}{2}}\,\sum _{k=1}^{n}(2\,(n+1)-1-1)\\&=n^{2}\end{aligned}}} To clarify this proof, since the n {\displaystyle n} th odd positive integer is m : = 2 n − 1 , {\displaystyle m\,\colon =\,2n-1,} if S : = ∑ k = 1 n ( 2 k − 1 ) = 1 + 3 + ⋯ + ( m − 2 ) + m {\displaystyle S\,\colon =\,\sum _{k=1}^{n}(2\,k-1)\,=\,1+3+\cdots +(m-2)+m} denotes the sum of the first n {\displaystyle n} odd integers then S + S = 1 + 3 + ⋯ + ( m − 2 ) + m + m + ( m − 2 ) + ⋯ + 3 + 1 = ( m + 1 ) + ( m + 1 ) + ⋯ + ( m + 1 ) + ( m + 1 ) ( n terms) = n ( m + 1 ) {\displaystyle {\begin{alignedat}{4}S+S&=\;\;1&&+\;\;3&&\;+\cdots +(m-2)&&+\;\;m\\&+\;\;m&&+(m-2)&&\;+\cdots +\;\;3&&+\;\;1\\&=\;(m+1)&&+(m+1)&&\;+\cdots +(m+1)&&+(m+1)\quad {\text{ (}}n{\text{ terms)}}\\&=\;n\,(m+1)&&&&&&&&\\\end{alignedat}}} so that S = 1 2 n ( m + 1 ) . 
{\displaystyle S={\tfrac {1}{2}}\,n\,(m+1).} Substituting n = 1 2 ( m + 1 ) {\displaystyle n={\tfrac {1}{2}}(m+1)} and m + 1 = 2 n {\displaystyle m+1=2\,n} gives, respectively, the formulas 1 + 3 + ⋯ + m = 1 4 ( m + 1 ) 2 and 1 + 3 + ⋯ + ( 2 n − 1 ) = n 2 {\displaystyle 1+3+\cdots +m\;=\;{\tfrac {1}{4}}(m+1)^{2}\quad {\text{ and }}\quad 1+3+\cdots +(2\,n-1)\;=\;n^{2}} where the first formula expresses the sum entirely in terms of the odd integer m {\displaystyle m} while the second expresses it entirely in terms of n , {\displaystyle n,} which is m {\displaystyle m} 's ordinal position in the list of odd integers 1 , 3 , 5 , … . {\displaystyle 1,3,5,\ldots .} See also Equations of motion – Equations that describe the behavior of a physical system Square numbers – Product of an integer with itselfPages displaying short descriptions of redirect targets
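The derivations above are easy to check numerically: under constant acceleration from rest the total distance after n equal intervals is ½g(nΔt)², so the distance covered during interval n is ½gΔt²(2n - 1). The increments therefore stand in the ratio 1 : 3 : 5 : 7 ..., and their running sums are the perfect squares. A short C++ sketch (with g and Δt chosen arbitrarily for illustration):

```cpp
// Sketch: distances fallen from rest in successive equal time intervals,
// illustrating Galileo's law of odd numbers.  g and dt are arbitrary here.
#include <cstdio>

int main() {
    const double g  = 9.81;   // acceleration, m/s^2
    const double dt = 1.0;    // length of one interval, s
    const double unit = 0.5 * g * dt * dt;   // distance covered in interval 1

    double previous_total = 0.0;
    for (int n = 1; n <= 6; ++n) {
        double total     = 0.5 * g * (n * dt) * (n * dt); // distance after n intervals
        double increment = total - previous_total;        // distance in interval n
        std::printf("interval %d: increment = %5.2f m (= %g x first), "
                    "total = %6.2f m (= %g x first)\n",
                    n, increment, increment / unit, total, total / unit);
        previous_total = total;
    }
    return 0;
}
```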
Wikipedia
In statistics and research design, an index is a composite statistic – a measure of changes in a representative group of individual data points, or in other words, a compound measure that aggregates multiple indicators. Indices – also known as indexes and composite indicators – summarize and rank specific observations. Much of the data in the fields of social science and sustainability is represented in various indices such as the Gender Gap Index, the Human Development Index or the Dow Jones Industrial Average. The ‘Report by the Commission on the Measurement of Economic Performance and Social Progress’, written by Joseph Stiglitz, Amartya Sen, and Jean-Paul Fitoussi in 2009, suggests that these measures have experienced dramatic growth in recent years due to three concurrent factors: improvements in the level of literacy (including statistical literacy), the increased complexity of modern societies and economies, and the widespread availability of information technology. According to Earl Babbie, items in indices are usually weighted equally, unless there are some reasons against it (for example, if two items reflect essentially the same aspect of a variable, they could have a weight of 0.5 each). According to the same author, constructing an index involves four steps. First, items should be selected based on their content validity, unidimensionality, the degree of specificity with which a dimension is to be measured, and their amount of variance. Items should be empirically related to one another, which leads to the second step of examining their multivariate relationships. Third, index scores are designed, which involves determining score ranges and weights for the items. Finally, indices should be validated, which involves testing whether they can predict indicators related to the measured variable that were not used in their construction. A handbook for the construction of composite indicators (CIs) was published jointly by the OECD and by the European Commission's Joint Research Centre in 2008. The handbook – officially endorsed by the OECD's high-level statistical committee – describes ten recursive steps for developing an index: Step 1: Theoretical framework Step 2: Data selection Step 3: Imputation of missing data Step 4: Multivariate analysis Step 5: Normalisation Step 6: Weighting Step 7: Aggregating indicators Step 8: Sensitivity analysis Step 9: Link to other measures Step 10: Visualisation As suggested by the list, many modelling choices are needed to construct a composite indicator, which makes their use controversial. The delicate issue of assigning and validating weights is discussed at length in the literature. A sociological reading of the nature of composite indicators is offered by Paul-Marie Boulanger, who sees these measures at the intersection of three movements: the democratisation of expertise, that is, the idea that tackling societal and environmental issues requires more knowledge than experts alone can provide, a line of thought that connects to the concept of the extended peer community developed by post-normal science; the impulse toward the creation of a new public through a process of social discovery, which can be connected to the work of pragmatists such as John Dewey; and the semiotics of Charles Sanders Peirce, in which a CI is not just a sign or a number but suggests an action or a behaviour. A subsequent work by Boulanger analyses composite indicators in light of the social system theories of Niklas Luhmann to investigate how different measurements of progress are or are not taken up.
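A minimal numerical sketch of steps 5 to 7 of the handbook (normalisation, weighting and aggregation of indicators) is given below. It min-max normalises each indicator to the interval [0, 1], applies equal weights in the spirit of Babbie's default, and aggregates with a weighted arithmetic mean. The three indicators and five units of observation are invented for illustration, and a real composite indicator involves many further choices (imputation, alternative normalisation and aggregation rules, sensitivity analysis).

```cpp
// Sketch: min-max normalisation, equal weighting and linear aggregation of
// indicators into a composite index.  All data below are invented.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // rows = units of observation (e.g. countries), columns = indicators
    std::vector<std::vector<double>> raw = {
        {70.1, 0.52, 3400.0},
        {82.3, 0.61, 2100.0},
        {65.0, 0.47, 5200.0},
        {77.8, 0.55, 2900.0},
        {59.4, 0.40, 6100.0},
    };
    const std::size_t n_units = raw.size(), n_ind = raw[0].size();
    std::vector<double> weights(n_ind, 1.0 / n_ind);   // equal weights

    // Min-max normalise each indicator column to [0, 1].
    std::vector<std::vector<double>> norm = raw;
    for (std::size_t j = 0; j < n_ind; ++j) {
        double lo = raw[0][j], hi = raw[0][j];
        for (std::size_t i = 0; i < n_units; ++i) {
            lo = std::min(lo, raw[i][j]);
            hi = std::max(hi, raw[i][j]);
        }
        for (std::size_t i = 0; i < n_units; ++i)
            norm[i][j] = (raw[i][j] - lo) / (hi - lo);
    }

    // Weighted arithmetic aggregation into one composite score per unit.
    for (std::size_t i = 0; i < n_units; ++i) {
        double score = 0.0;
        for (std::size_t j = 0; j < n_ind; ++j)
            score += weights[j] * norm[i][j];
        std::printf("unit %zu: composite score = %.3f\n", i + 1, score);
    }
    return 0;
}
```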
See also Index (economics) Scale (social sciences)
Wikipedia
Linguamatics, headquartered in Cambridge, England, with offices in the United States and UK, is a provider of text mining systems through software licensing and services, primarily for pharmaceutical and healthcare applications. Founded in 2001, the company was purchased by IQVIA in January 2019. Technology The company develops enterprise search tools for the life sciences sector. The core natural language processing engine (I2E) uses a federated architecture to incorporate data from 3rd party resources. Initially developed to be used interactively through a graphic user interface, the core software also has an application programming interface that can be used to automate searches. LabKey, Penn Medicine, Atrius Health and Mercy all use Linguamatics software to extract electronic health record data into data warehouses. Linguamatics software is used by 17 of the top 20 global pharmaceutical companies, the US Food and Drug Administration, as well as healthcare providers. Software community The core software, "I2E", is used by a number of companies to either extend their own software or to publish their data. Copyright Clearance Center uses I2E to produce searchable indexes of material that would otherwise be unsearchable due to copyright. Thomson Reuters produces Cortellis Informatics Clinical Text Analytics, which depends on I2E to make clinical data accessible and searchable. Pipeline Pilot can integrate I2E as part of a workflow. ChemAxon can be used alongside I2E to allow named entity recognition of chemicals within unstructured data. Data sources include MEDLINE, ClinicalTrials.gov, FDA Drug Labels, PubMed Central, and Patent Abstracts. See also List of academic databases and search engines
Wikipedia
In mathematics and astrophysics, the Strömgren integral, introduced by Bengt Strömgren (1932, p. 123) while computing the Rosseland mean opacity, is the integral: 15 4 π 4 ∫ 0 x t 7 e 2 t ( e t − 1 ) 3 d t . {\displaystyle {\frac {15}{4\pi ^{4}}}\int _{0}^{x}{\frac {t^{7}e^{2t}}{(e^{t}-1)^{3}}}\,dt.} Cox (1964) discussed applications of the Strömgren integral in astrophysics, and MacLeod (1996) discussed how to compute it. References Cox, A. N. (1964), "Stellar absorption coefficients and opacities", in Aller, Lawrence Hugh; McLaughlin, Dean Benjamin (eds.), Stellar Structure, Stars and Stellar Systems: Compendium of Astronomy and Astrophysics, vol. VIII, Chicago, Ill: University of Chicago Press, p. 195, ISBN 978-0-226-45969-1 MacLeod, Allan J. (1996), "Algorithm 757: MISCFUN, a software package to compute uncommon special functions", ACM Transactions on Mathematical Software, 22 (3), NY, USA: ACM New York: 288–301, doi:10.1145/232826.232846 Strömgren, B. (1932), "The opacity of stellar matter and the hydrogen content of the stars", Zeitschrift für Astrophysik, 4: 118–152, Bibcode:1932ZA......4..118S Strömgren, B. (1933), "On the Interpretation of the Hertzsprung-Russell-Diagram", Zeitschrift für Astrophysik, 7: 222, Bibcode:1933ZA......7..222S External links Stromgren integral
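For readers without access to MacLeod's MISCFUN routines, the integral can be evaluated with ordinary numerical quadrature; the integrand t^7 e^{2t}/(e^t - 1)^3 behaves like t^4 near t = 0, so the only care needed is to return 0 at t = 0. The sketch below uses composite Simpson's rule; the upper limits are arbitrary example values, not values taken from the cited papers.

```cpp
// Sketch: evaluate the Stromgren integral (15/(4*pi^4)) * int_0^x
// t^7 e^{2t} / (e^t - 1)^3 dt with composite Simpson's rule.
#include <cmath>
#include <cstdio>

double integrand(double t) {
    if (t == 0.0) return 0.0;              // limit at t = 0 is 0 (integrand ~ t^4)
    const double em1 = std::expm1(t);      // e^t - 1, accurate for small t
    return std::pow(t, 7) * std::exp(2.0 * t) / (em1 * em1 * em1);
}

double stromgren(double x, int n = 2000) { // n must be even
    const double h = x / n;
    double sum = integrand(0.0) + integrand(x);
    for (int i = 1; i < n; ++i)
        sum += integrand(i * h) * (i % 2 ? 4.0 : 2.0);
    const double pi = 3.14159265358979323846;
    return 15.0 / (4.0 * std::pow(pi, 4)) * sum * h / 3.0;
}

int main() {
    const double upper_limits[] = {0.5, 1.0, 2.0, 4.0, 8.0};
    for (double x : upper_limits)
        std::printf("Stromgren(%4.1f) = %.8f\n", x, stromgren(x));
    return 0;
}
```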
Wikipedia
Mahaney's theorem is a theorem in computational complexity theory proven by Stephen Mahaney that states that if any sparse language is NP-complete, then P = NP. Also, if any sparse language is NP-complete with respect to Turing reductions, then the polynomial-time hierarchy collapses to Δ 2 P {\displaystyle \Delta _{2}^{P}} . Mahaney's argument does not actually require the sparse language to be in NP, so there is a sparse NP-hard set if and only if P = NP. This is because the existence of an NP-hard sparse set implies the existence of an NP-complete sparse set.
Wikipedia
The C++ programming language has support for string handling, mostly implemented in its standard library. The language standard specifies several string types, some inherited from C, some designed to make use of the language's features, such as classes and RAII. The most-used of these is std::string. Since the initial versions of C++ had only the "low-level" C string handling functionality and conventions, multiple incompatible string handling classes have been designed over the years and are still used instead of std::string, and C++ programmers may need to handle multiple conventions in a single application. History The std::string type has been the main string datatype in standard C++ since 1998, but it was not always part of C++. From C, C++ inherited the convention of using null-terminated strings that are handled by a pointer to their first element, and a library of functions that manipulate such strings. In modern standard C++, a string literal such as "hello" still denotes a NUL-terminated array of characters. Using C++ classes to implement a string type offers several benefits: automated memory management, a reduced risk of out-of-bounds accesses, and more intuitive syntax for string comparison and concatenation. Therefore, it was strongly tempting to create such a class. Over the years, C++ application, library and framework developers produced their own, incompatible string representations, such as the one in AT&T's Standard Components library (the first such implementation, 1983) or the CString type in Microsoft's MFC. While std::string standardized strings, legacy applications still commonly contain such custom string types and libraries may expect C-style strings, making it "virtually impossible" to avoid using multiple string types in C++ programs and requiring programmers to decide on the desired string representation ahead of starting a project. In a 1991 retrospective on the history of C++, its inventor Bjarne Stroustrup called the lack of a standard string type (and some other standard types) in C++ 1.0 the worst mistake he made in its development; "the absence of those led to everybody re-inventing the wheel and to an unnecessary diversity in the most fundamental classes". Implementation issues The various vendors' string types have different implementation strategies and performance characteristics. In particular, some string types use a copy-on-write strategy, where an operation such as the assignment b = a does not actually copy the content of a to b; instead, both strings share their contents and a reference count on the content is incremented. The actual copying is postponed until a mutating operation, such as appending a character to either string, makes the strings' contents differ. Copy-on-write can make major performance changes to code using strings (making some operations much faster and some much slower). Though std::string no longer uses it, many (perhaps most) alternative string libraries still implement copy-on-write strings. Some string implementations store 16-bit or 32-bit code points instead of bytes; this was intended to facilitate processing of Unicode text. However, it means that conversion to these types from std::string or from arrays of bytes is dependent on the "locale" and can throw exceptions. Any processing advantages of 16-bit code units vanished when the variable-width UTF-16 encoding was introduced (though there are still advantages if one must communicate with a 16-bit API such as that of Windows). Qt's QString is an example.
Third-party string implementations also differed considerably in the syntax to extract or compare substrings, or to perform searches in the text. Standard string types The std::string class has been the standard representation for a text string since C++98. The class provides some typical string operations like comparison, concatenation, find and replace, and a function for obtaining substrings. An std::string can be constructed from a C-style string, and a C-style string can also be obtained from one. The individual units making up the string are of type char, at least (and almost always) 8 bits each. In modern usage these are often not "characters", but parts of a multibyte character encoding such as UTF-8. The copy-on-write strategy was deliberately allowed by the initial C++ Standard for std::string because it was deemed a useful optimization, and it was used by nearly all implementations. However, the specification had weaknesses; in particular, operator[] returned a non-const reference in order to make it easy to port C code that manipulates strings in place (such code often assumed one byte per character, so this was arguably not a good idea). Taking a non-const reference to a single character in this way forces a copy-on-write implementation to copy the shared buffer, even though the reference is almost always used only to examine the string and not to modify it. This caused implementations, first MSVC and later GCC, to move away from copy-on-write. It was also discovered that the overhead in multi-threaded applications due to the locking needed to examine or change the reference count was greater than the overhead of copying small strings on modern processors (especially for strings smaller than the size of a pointer). The optimization was finally disallowed in C++11, with the result that even passing a std::string as an argument to a function, for example void function_name(std::string s);, must be expected to perform a full copy of the string into newly allocated memory. The common idiom to avoid such copying is to pass the string as a const reference. The C++17 standard added a new string_view class, which is only a pointer and a length referring to read-only data; it makes passing arguments far cheaper than either of the above approaches. Example usage (see the illustrative sketch below) Related classes std::string is a typedef for a particular instantiation of the std::basic_string template class. Its definition is found in the <string> header, essentially as typedef basic_string<char> string;. Thus string provides basic_string functionality for strings having elements of type char. There is a similar class std::wstring, which consists of wchar_t, and is most often used to store UTF-16 text on Windows and UTF-32 on most Unix-like platforms. The C++ standard, however, does not impose any interpretation as Unicode code points or code units on these types and does not even guarantee that a wchar_t holds more bits than a char. To resolve some of the incompatibilities resulting from wchar_t's properties, C++11 added two new classes: std::u16string and std::u32string (made up of the new types char16_t and char32_t), which have 16 and 32 bits per code unit, respectively, on all platforms. C++11 also added new string literals of 16-bit and 32-bit "characters" and syntax for putting Unicode code points into null-terminated (C-style) strings. A basic_string is guaranteed to be specializable for any type with a char_traits struct to accompany it. As of C++11, only char, wchar_t, char16_t and char32_t specializations are required to be implemented. A basic_string is also a Standard Library container, and thus the Standard Library algorithms can be applied to the code units in strings.
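Because the example code in this article did not survive conversion, the following is an illustrative sketch (not the article's original examples) of basic std::string operations and of the three argument-passing styles discussed above: by value, which since C++11 must be assumed to copy; by const reference; and by std::string_view, which passes only a pointer and a length.

```cpp
// Illustrative sketch of std::string and std::string_view usage (C++17).
#include <cstdio>
#include <string>
#include <string_view>

void by_value(std::string s)            { std::printf("%zu\n", s.size()); } // copies the buffer
void by_const_ref(const std::string& s) { std::printf("%zu\n", s.size()); } // no copy
void by_view(std::string_view s)        { std::printf("%zu\n", s.size()); } // no copy, no allocation

int main() {
    std::string greeting = "Hello";
    greeting += ", world";                    // concatenation
    std::string sub = greeting.substr(0, 5);  // "Hello"
    bool equal = (sub == "Hello");            // comparison

    std::printf("%s / %s / %d\n", greeting.c_str(), sub.c_str(), equal);

    by_value(greeting);      // full copy into newly allocated memory
    by_const_ref(greeting);  // passes a reference to the existing object
    by_view(greeting);       // passes pointer + length
    by_view("a C-style literal, no std::string temporary needed");
    return 0;
}
```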
Critiques The design of std::string has been held up as an example of monolithic design by Herb Sutter, who reckons that of the 103 member functions on the class in C++98, 71 could have been decoupled without loss of implementation efficiency.
Wikipedia
The Message Understanding Conferences (MUC) for computing and computer science were initiated and financed by DARPA (Defense Advanced Research Projects Agency) to encourage the development of new and better methods of information extraction. The character of this competition – many concurrent research teams competing against one another – required the development of standards for evaluation, e.g. the adoption of metrics like precision and recall. Topics and exercises Only for the first conference (MUC-1) could the participants choose the output format for the extracted information. From the second conference onward, the output format by which the participants' systems would be evaluated was prescribed. For each topic, fields were given that had to be filled with information from the text. Typical fields were, for example, the cause, the agent, the time and place of an event, the consequences, etc. The number of fields increased from conference to conference. At the sixth conference (MUC-6) the tasks of named entity recognition and coreference resolution were added. For the named entity task, all phrases in the text were to be marked as person, location, organization, time or quantity. The topics and text sources that were processed show a continuous move from military to civilian themes, mirroring the change in business interest in information extraction taking place at the time. Literature Ralph Grishman, Beth Sundheim: Message Understanding Conference - 6: A Brief History. In: Proceedings of the 16th International Conference on Computational Linguistics (COLING), I, Copenhagen, 1996, 466–471. See also DARPA TIPSTER Program External links MUC-7 MUC-6 SAIC Information Extraction
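The evaluation metrics that MUC helped standardise are straightforward to compute once a system's output has been aligned with a hand-annotated answer key: precision is the fraction of returned answers that are correct, recall is the fraction of correct answers that are returned, and the F-measure is their harmonic mean. A minimal sketch with invented counts:

```cpp
// Sketch: precision, recall and F1 from true-positive, false-positive and
// false-negative counts, as used in MUC-style information-extraction scoring.
#include <cstdio>

struct Scores { double precision, recall, f1; };

Scores score(int true_pos, int false_pos, int false_neg) {
    Scores s{};
    s.precision = true_pos / double(true_pos + false_pos);
    s.recall    = true_pos / double(true_pos + false_neg);
    s.f1        = 2.0 * s.precision * s.recall / (s.precision + s.recall);
    return s;
}

int main() {
    // e.g. 87 correctly extracted fields, 13 spurious, 22 missed (invented)
    Scores s = score(87, 13, 22);
    std::printf("precision = %.3f, recall = %.3f, F1 = %.3f\n",
                s.precision, s.recall, s.f1);
    return 0;
}
```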
Wikipedia
This article contains economic statistics of the country Singapore. The GDP, GDP Per Capita, GNI Per Capita, Total Trade, Total Imports, Total Exports, Foreign Reserves, Current Account Balance, Average Exchange Rate, Operating Revenue and Total Expenditure are mentioned in the table below for years 1965 through 2018. 1965 to 2014 2014 to 2018 See also Economy of Singapore
Wikipedia
An electromagnetic pulse (EMP), also referred to as a transient electromagnetic disturbance (TED), is a brief burst of electromagnetic energy. The origin of an EMP can be natural or artificial, and can occur as an electromagnetic field, as an electric field, as a magnetic field, or as a conducted electric current. The electromagnetic interference caused by an EMP can disrupt communications and damage electronic equipment. An EMP such as a lightning strike can physically damage objects such as buildings and aircraft. The management of EMP effects is a branch of electromagnetic compatibility (EMC) engineering. The first recorded damage from an electromagnetic pulse came with the solar storm of August 1859, or the Carrington Event. In modern warfare, weapons delivering a high energy EMP are designed to disrupt communications equipment, computers needed to operate modern warplanes, or even put the entire electrical network of a target country out of commission. General characteristics An electromagnetic pulse is a short surge of electromagnetic energy. Its short duration means that it will be spread over a range of frequencies. Pulses are typically characterized by: The mode of energy transfer (radiated, electric, magnetic or conducted). The range or spectrum of frequencies present. Pulse waveform: shape, duration and amplitude. The frequency spectrum and the pulse waveform are interrelated via the Fourier transform which describes how component waveforms may sum to the observed frequency spectrum. Types of energy EMP energy may be transferred in any of four forms: Electric field Magnetic field Electromagnetic radiation Electrical conduction According to Maxwell's equations, a pulse of electric energy will always be accompanied by a pulse of magnetic energy. In a typical pulse, either the electric or the magnetic form will dominate. It can be shown that the non-linear Maxwell's equations can have time-dependent self-similar electromagnetic shock wave solutions where the electric and the magnetic field components have a discontinuity. In general, only radiation acts over long distances, with the magnetic and electric fields acting over short distances. There are a few exceptions, such as a solar magnetic flare. Frequency ranges A pulse of electromagnetic energy typically comprises many frequencies from very low to some upper limit depending on the source. The range defined as EMP, sometimes referred to as "DC [direct current] to daylight", excludes the highest frequencies comprising the optical (infrared, visible, ultraviolet) and ionizing (X and gamma rays) ranges. Some types of EMP events can leave an optical trail, such as lightning and sparks, but these are side effects of the current flow through the air and are not part of the EMP itself. Pulse waveforms The waveform of a pulse describes how its instantaneous amplitude (field strength or current) changes over time. Real pulses tend to be quite complicated, so simplified models are often used. Such a model is typically described either in a diagram or as a mathematical equation. Most electromagnetic pulses have a very sharp leading edge, building up quickly to their maximum level. The classic model is a double-exponential curve which climbs steeply, quickly reaches a peak and then decays more slowly. However, pulses from a controlled switching circuit often approximate the form of a rectangular or "square" pulse. EMP events usually induce a corresponding signal in the surrounding environment or material. 
Coupling usually occurs most strongly over a relatively narrow frequency band, leading to a characteristic damped sine wave. Visually it is shown as a high-frequency sine wave growing and decaying within the longer-lived envelope of the double-exponential curve. A damped sine wave typically has much lower energy and a narrower frequency spread than the original pulse, due to the transfer characteristic of the coupling mode. In practice, EMP test equipment often injects these damped sine waves directly rather than attempting to recreate the high-energy threat pulses. In a pulse train, such as from a digital clock circuit, the waveform is repeated at regular intervals. A single complete pulse cycle is sufficient to characterise such a regular, repetitive train. Types An EMP arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the surrounding environment. Some types are generated as repetitive and regular pulse trains. Different types of EMP arise from natural, man-made, and weapons effects. Types of natural EMP events include: Lightning electromagnetic pulse (LEMP). The discharge is typically an initial current flow of tens of kiloamperes, followed by a train of pulses of decreasing energy. Electrostatic discharge (ESD), as a result of two charged objects coming into proximity or even contact. Meteoric EMP. The discharge of electromagnetic energy resulting from either the impact of a meteoroid with a spacecraft or the explosive breakup of a meteoroid passing through the Earth's atmosphere. Coronal mass ejection (CME), sometimes referred to as a solar EMP. A burst of plasma and accompanying magnetic field, ejected from the solar corona and released into the solar wind. Types of (civil) man-made EMP events include: Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train). Electric motors can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates. Gasoline engine ignition systems can create a train of pulses as the spark plugs are energized or fired. Continual switching actions of digital electronic circuitry. Power line surges. These can be up to several kilovolts, enough to damage electronic equipment that is insufficiently protected. Types of military EMP include: Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion. A variant of this is the high-altitude nuclear EMP (HEMP), which produces a secondary pulse due to particle interactions with the Earth's atmosphere and magnetic field. Non-nuclear electromagnetic pulse (NNEMP) weapons. Lightning electromagnetic pulse (LEMP) Lightning is unusual in that it typically has a preliminary "leader" discharge of low energy building up to the main pulse, which in turn may be followed at intervals by several smaller bursts. Electrostatic discharge (ESD) ESD events are characterized by high voltages of many kV but small currents, and sometimes cause visible sparks. ESD is treated as a small, localized phenomenon, although technically a lightning flash is a very large ESD event. ESD can also be man-made, as in the shock received from a Van de Graaff generator. An ESD event can damage electronic circuitry by injecting a high-voltage pulse, besides giving people an unpleasant shock. Such an ESD event can also create sparks, which may in turn ignite fires or fuel-vapour explosions.
For this reason, before refueling an aircraft or exposing any fuel vapor to the air, the fuel nozzle is first connected to the aircraft to safely discharge any static. Switching pulses The switching action of an electrical circuit creates a sharp change in the flow of electricity. This sharp change is a form of EMP. Simple electrical sources include inductive loads such as relays, solenoids, and brush contacts in electric motors. These typically send a pulse down any electrical connections present, as well as radiating a pulse of energy. The amplitude is usually small and the signal may be treated as "noise" or "interference". The switching off or "opening" of a circuit causes an abrupt change in the current flowing. This can in turn cause a large pulse in the electric field across the open contacts, causing arcing and damage. It is often necessary to incorporate design features to limit such effects. Electronic devices such as vacuum tubes or valves, transistors, and diodes can also switch on and off very quickly, causing similar issues. One-off pulses may be caused by solid-state switches and other devices used only occasionally. However, the many millions of transistors in a modern computer may switch repeatedly at frequencies above 1 GHz, causing interference that appears to be continuous. Nuclear electromagnetic pulse (NEMP) A nuclear electromagnetic pulse is the abrupt pulse of electromagnetic radiation resulting from a nuclear explosion. The resulting rapidly changing electric fields and magnetic fields may couple with electrical/electronic systems to produce damaging current and voltage surges. The intense gamma radiation emitted can also ionize the surrounding air, creating a secondary EMP as the atoms of air first lose their electrons and then regain them. NEMP weapons are designed to maximize such EMP effects as the primary damage mechanism, and some are capable of destroying susceptible electronic equipment over a wide area. A high-altitude electromagnetic pulse (HEMP) weapon is a NEMP warhead designed to be detonated far above the Earth's surface. The explosion releases a blast of gamma rays into the mid-stratosphere, which ionizes as a secondary effect and the resultant energetic free electrons interact with the Earth's magnetic field to produce a much stronger EMP than is normally produced in the denser air at lower altitudes. Non-nuclear electromagnetic pulse (NNEMP) Non-nuclear electromagnetic pulse (NNEMP) is a weapon-generated electromagnetic pulse without use of nuclear technology. Devices that can achieve this objective include a large low-inductance capacitor bank discharged into a single-loop antenna, a microwave generator, and an explosively pumped flux compression generator. To achieve the frequency characteristics of the pulse needed for optimal coupling into the target, wave-shaping circuits or microwave generators are added between the pulse source and the antenna. Vircators are vacuum tubes that are particularly suitable for microwave conversion of high-energy pulses. NNEMP generators can be carried as a payload of bombs, cruise missiles (such as the CHAMP missile) and drones, with diminished mechanical, thermal and ionizing radiation effects, but without the consequences of deploying nuclear weapons. The range of NNEMP weapons is much less than nuclear EMP. Nearly all NNEMP devices used as weapons require chemical explosives as their initial energy source, producing only one millionth the energy of nuclear explosives of similar weight. 
The electromagnetic pulse from NNEMP weapons must come from within the weapon, while nuclear weapons generate EMP as a secondary effect. These facts limit the range of NNEMP weapons, but allow finer target discrimination. The effect of small e-bombs has proven to be sufficient for certain terrorist or military operations. Examples of such operations include the destruction of electronic control systems critical to the operation of many ground vehicles and aircraft. The concept of the explosively pumped flux compression generator for generating a non-nuclear electromagnetic pulse was conceived as early as 1951 by Andrei Sakharov in the Soviet Union, but nations kept work on non-nuclear EMP classified until similar ideas emerged in other nations. Effects Minor EMP events, and especially pulse trains, cause low levels of electrical noise or interference which can affect the operation of susceptible devices. For example, a common problem in the mid-twentieth century was interference emitted by the ignition systems of gasoline engines, which caused radio sets to crackle and TV sets to show stripes on the screen. CISPR 25 was established to set threshold standards that vehicles must meet for electromagnetic interference (EMI) emissions. At a high voltage level an EMP can induce a spark, for example from an electrostatic discharge when fuelling a gasoline-engined vehicle. Such sparks have been known to cause fuel-air explosions and precautions must be taken to prevent them. A large and energetic EMP can induce high currents and voltages in the victim unit, temporarily disrupting its function or even permanently damaging it. A powerful EMP can also directly affect magnetic materials and corrupt the data stored on media such as magnetic tape and computer hard drives. Hard drives are usually shielded by heavy metal casings. Some IT asset disposal service providers and computer recyclers use a controlled EMP to wipe such magnetic media. A very large EMP event, such as a lightning strike or an air-burst nuclear weapon, is also capable of damaging objects such as trees, buildings and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating. Most engineered structures and systems require some form of protection against lightning to be designed in. A good means of protection is a Faraday shield around the items to be protected. Control Like any electromagnetic interference, the threat from EMP is subject to control measures. This is true whether the threat is natural or man-made. Therefore, most control measures focus on the susceptibility of equipment to EMP effects, and on hardening or protecting it from harm. Man-made sources, other than weapons, are also subject to control measures in order to limit the amount of pulse energy emitted. The discipline of ensuring correct equipment operation in the presence of EMP and other RF threats is known as electromagnetic compatibility (EMC). Test simulation To test the effects of EMP on engineered systems and equipment, an EMP simulator may be used. Induced pulse simulation Induced pulses are of much lower energy than threat pulses and so are more practicable to create, but they are less predictable. A common test technique is to use a current clamp in reverse, to inject a range of damped sine wave signals into a cable connected to the equipment under test (a numerical sketch of both pulse models is given at the end of this article).
The damped sine wave generator is able to reproduce the range of induced signals likely to occur. Threat pulse simulation Sometimes the threat pulse itself is simulated in a repeatable way. The pulse may be reproduced at low energy in order to characterise the subject's response prior to damped sine wave injection, or at high energy to recreate the actual threat conditions. A small-scale ESD simulator may be hand-held. Bench- or room-sized simulators come in a range of designs, depending on the type and level of threat to be generated. At the top end of the scale, large outdoor test facilities incorporating high-energy EMP simulators have been built by several countries. The largest facilities are able to test whole vehicles including ships and aircraft for their susceptibility to EMP. Nearly all of these large EMP simulators used a specialized version of a Marx generator. Examples include the huge wooden-structured ATLAS-I simulator (also known as TRESTLE) at Sandia National Labs, New Mexico, which was at one time the world's largest EMP simulator. Papers on this and other large EMP simulators used by the United States during the latter part of the Cold War, along with more general information about electromagnetic pulses, are now in the care of the SUMMA Foundation, which is hosted at the University of New Mexico. The US Navy also has a large facility called the Electro Magnetic Pulse Radiation Environmental Simulator for Ships I (EMPRESS I). Safety High-level EMP signals can pose a threat to human safety. In such circumstances, direct contact with a live electrical conductor should be avoided. Where this occurs, such as when touching a Van de Graaff generator or other highly charged object, care must be taken to release the object and then discharge the body through a high resistance, in order to avoid the risk of a harmful shock pulse when stepping away. Very high electric field strengths can cause breakdown of the air and a potentially lethal arc current similar to lightning to flow, but electric field strengths of up to 200 kV/m are regarded as safe. According to reporting by Edd Gent, a 2019 report by the Electric Power Research Institute, which is funded by utility companies, found that a large EMP attack would probably cause regional blackouts but not a nationwide grid failure and that recovery times would be similar to those of other large-scale outages. It is not known how long these electrical blackouts would last, or what extent of damage would occur across the country. It is possible that neighboring countries of the U.S. could also be affected by such an attack, depending on the area targeted. According to an article by Naureen Malik, with North Korea's increasingly successful missile and warhead tests in mind, Congress moved to renew funding for the Commission to Assess the Threat to the U.S. from Electromagnetic Pulse Attack as part of the National Defense Authorization Act. According to reporting by Yoshida Reiji, in a 2016 article for the Tokyo-based nonprofit organization Center for Information and Security Trade Control, Onizuka warned that a high-altitude EMP attack would damage or destroy Japan's power, communications and transport systems as well as disable banks, hospitals and nuclear power plants. In popular culture By 1981, a number of articles on electromagnetic pulse in the popular press had spread knowledge of the EMP phenomenon into the popular culture. EMP has been subsequently used in a wide variety of fiction and other aspects of popular culture.
Popular media often depict EMP effects incorrectly, causing misunderstandings among the public and even professionals. Official efforts have been made in the U.S. to remedy these misconceptions. The novel One Second After by William R. Forstchen and the follow-up books One Year After, The Final Day and Five Years After portray the story of a fictional character named John Matherson and his community in Black Mountain, North Carolina, after the US loses a war in which an EMP attack "sends our nation [the US] back to the Dark Ages". See also References Citations Sources Katayev, I.G. (1966). Electromagnetic Shock Waves. Iliffe Books Ltd., Dorset House, Stanford Street, London, England. External links TRESTLE: Landmark of the Cold War, a short documentary film on the SUMMA Foundation website
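The two simplified pulse models used throughout the article, the double-exponential threat waveform and the damped sine wave that coupling tends to produce and that test equipment injects directly, are easy to tabulate numerically. The parameter values in the sketch below are generic illustrations chosen only to give a fast rise and a slower decay on a nanosecond scale; they are not taken from any particular EMP standard.

```cpp
// Sketch: sample a double-exponential pulse and a damped sine wave, the two
// simplified EMP waveform models discussed above.  Parameters are illustrative.
#include <cmath>
#include <cstdio>

// Double exponential: fast rise set by beta, slower decay set by alpha.
double double_exponential(double t, double e0, double alpha, double beta) {
    return t < 0 ? 0.0 : e0 * (std::exp(-alpha * t) - std::exp(-beta * t));
}

// Damped sine: oscillation at frequency f inside a decaying envelope.
double damped_sine(double t, double a0, double f, double tau) {
    const double pi = 3.14159265358979323846;
    return t < 0 ? 0.0 : a0 * std::exp(-t / tau) * std::sin(2.0 * pi * f * t);
}

int main() {
    for (int i = 0; i <= 20; ++i) {
        double t = i * 10e-9;   // 0 .. 200 ns in 10 ns steps
        std::printf("t = %5.0f ns  double-exp = %6.3f   damped sine = %6.3f\n",
                    t * 1e9,
                    double_exponential(t, 1.3, 4.0e7, 6.0e8),
                    damped_sine(t, 1.0, 1.0e8, 5.0e-8));
    }
    return 0;
}
```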
Wikipedia
In mathematical set theory, the multiverse view is that there are many models of set theory, but no "absolute", "canonical" or "true" model. The various models are all equally valid or true, though some may be more useful or attractive than others. The opposite view is the "universe" view of set theory in which all sets are contained in some single ultimate model. The collection of countable transitive models of ZFC (in some universe) is called the hyperverse and is very similar to the "multiverse". A typical difference between the universe and multiverse views is the attitude to the continuum hypothesis. In the universe view the continuum hypothesis is a meaningful question that is either true or false though we have not yet been able to decide which. In the multiverse view it is meaningless to ask whether the continuum hypothesis is true or false before selecting a model of set theory. Another difference is that the statement "For every transitive model of ZFC there is a larger model of ZFC in which it is countable" is true in some versions of the multiverse view of mathematics but is false in the universe view. References Antos, Carolin; Friedman, Sy-David; Honzik, Radek; Ternullo, Claudio (2015), "Multiverse conceptions in set theory", Synthese, 192 (8): 2463–2488, doi:10.1007/s11229-015-0819-9, MR 3400617 Hamkins, J. D. (2012), "The set-theoretic multiverse", Rev. Symb. Log., 5 (3): 416–449, arXiv:1108.4223, Bibcode:2011arXiv1108.4223H, doi:10.1017/S1755020311000359, MR 2970696
Wikipedia
Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture. Relation to other technologies Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET. Design Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them. Developers The specification is now being developed by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia, Software AG within the OASIS Member Section Open CSA since April 2007. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal group of actors of the industry. Implementations The following SDO products are available: Rogue Wave Software HydraSDO Xcalia (for Java and .Net) Oracle (Data Service Integrator) IBM (Virtual XML Garden) IBM (WebSphere Process Server) There are open source implementations of SDO from: The Eclipse Persistence Services Project (EclipseLink) The Apache Tuscany project for Java and C++ The fcl-sdo library included with FreePascal References External links Specification versions and history can be found on Latest materials at OASIS Open CSA Service Data Objects SDO Specifications at OpenSOA Introducing Service Data Objects for PHP Using PHP's SDO and SCA extensions
Wikipedia
A magnetohydrodynamic converter (MHD converter) is an electromagnetic machine with no moving parts based on magnetohydrodynamics, the study of the kinetics of electrically conductive fluids (liquid or ionized gas) in the presence of electromagnetic fields. Such converters act on the fluid using the Lorentz force to operate in two possible ways: either as an electric generator called an MHD generator, extracting energy from a fluid in motion; or as an electric motor called an MHD accelerator or magnetohydrodynamic drive, putting a fluid in motion by injecting energy. MHD converters are indeed reversible, like many electromagnetic devices. Michael Faraday first attempted to test an MHD converter in 1832. MHD converters involving plasmas were studied intensively in the 1960s and 1970s, with substantial government funding and dedicated international conferences. One major conceptual application was the use of MHD converters on the hot exhaust gas in a coal-fired power plant, where they could extract some of the energy with very high efficiency and then pass the gas to a conventional steam turbine. Research largely stopped after it was concluded that the electrothermal instability would severely limit the efficiency of such converters when intense magnetic fields are used, although solutions may exist. MHD power generation A magnetohydrodynamic generator is an MHD converter that transforms the kinetic energy of an electrically conductive fluid, in motion with respect to a steady magnetic field, into electricity. MHD power generation was tested extensively in the 1960s with liquid metals and plasmas as working fluids. In a typical arrangement, a plasma flows at high velocity through a channel whose walls are fitted with electrodes. Electromagnets create a uniform transverse magnetic field within the cavity of the channel. The Lorentz force then acts upon the trajectory of the incoming electrons and positive ions, separating the opposite charge carriers according to their sign. As negative and positive charges are spatially separated within the chamber, an electric potential difference can be retrieved across the electrodes. While work is extracted from the kinetic energy of the incoming high-velocity plasma, the fluid slows down during the process. MHD propulsion A magnetohydrodynamic accelerator is an MHD converter that imparts motion to an electrically conductive fluid initially at rest, using a crossed electric current and magnetic field, both applied within the fluid. MHD propulsion has been mostly tested with models of ships and submarines in seawater. Studies have also been ongoing since the early 1960s on aerospace applications of MHD to aircraft propulsion and flow control to enable hypersonic flight: action on the boundary layer to prevent laminar flow from becoming turbulent, shock wave mitigation or cancellation for thermal control and reduction of the wave drag and form drag, inlet flow control and airflow velocity reduction with an MHD generator section ahead of a scramjet or turbojet to extend their regimes at higher Mach numbers, combined with an MHD accelerator in the exhaust nozzle fed by the MHD generator through a bypass system. Research on various designs is also being conducted on electromagnetic plasma propulsion for space exploration. In an MHD accelerator, the Lorentz force accelerates all charge carriers in the same direction whatever their sign, as well as neutral atoms and molecules of the fluid through collisions. The fluid is ejected toward the rear and as a reaction, the vehicle accelerates forward.
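The basic scaling of a Faraday-type MHD generator channel follows directly from the Lorentz force: a conducting fluid moving at velocity u across a magnetic flux density B experiences a motional field u × B, so the open-circuit voltage across a channel of electrode spacing d is roughly uBd, and the electrical power extracted per unit volume is σu²B²k(1 − k), where σ is the fluid conductivity and k is the load factor (load voltage divided by open-circuit voltage). The numbers in the sketch below are order-of-magnitude assumptions for a seeded combustion plasma, not a specific design.

```cpp
// Sketch: order-of-magnitude figures for a Faraday MHD generator channel.
// All input values are illustrative assumptions, not a specific design.
#include <cstdio>

int main() {
    const double sigma = 10.0;    // electrical conductivity of seeded plasma, S/m
    const double u     = 1000.0;  // flow velocity, m/s
    const double B     = 5.0;     // magnetic flux density, T
    const double d     = 0.5;     // electrode spacing (channel width), m
    const double k     = 0.5;     // load factor (load voltage / open-circuit voltage)

    const double open_circuit_voltage = u * B * d;                        // ~ u x B over the gap
    const double power_density = sigma * u * u * B * B * k * (1.0 - k);   // W per cubic metre

    std::printf("open-circuit voltage ~ %.0f V\n", open_circuit_voltage);
    std::printf("power density        ~ %.2f MW/m^3\n", power_density / 1e6);
    return 0;
}
```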
See also Plasma (physics) Lorentz force Electrothermal instability Wingless Electromagnetic Air Vehicle References Further reading Sutton, George W.; Sherman, Arthur (July 2006). Engineering Magnetohydrodynamics. Dover Civil and Mechanical Engineering. Dover Publications. ISBN 978-0486450322. Weier, Tom; Shatrov, Victor; Gerbeth, Gunter (2007). "Flow Control and Propulsion in Poor Conductors". In Molokov, Sergei S.; Moreau, R.; Moffatt, H. Keith (eds.). Magnetohydrodynamics: Historical Evolution and Trends. Springer Science+Business Media. pp. 295–312. doi:10.1007/978-1-4020-4833-3. ISBN 978-1-4020-4832-6.
Wikipedia
Tractable is a technology company specializing in the development of Artificial Intelligence (AI) to assess damage to property and vehicles. The AI allows users to appraise damage digitally. Technology Tractable's technology uses computer vision and deep learning to automate the appraisal of visual damage in accident and disaster recovery, for example damage to a vehicle. Drivers can be directed to use the application by their insurer after an accident, with the aim of settling their claim more quickly. The AI evaluates the damage from images and therefore does not assess what is not visible (such as interior damage to a vehicle or property). History Alexandre Dalyac and Razvan Ranca founded Tractable in 2014, and Adrien Cohen joined as co-founder in 2015. The company employs more than 300 staff members, largely in the United Kingdom. Tractable was named one of the 100 leading AI companies in the world in 2020 and 2021 by CB Insights. It won the Best Technology Award in the 2020 British Insurance Awards. In June 2021, Tractable announced a venture round that valued the company at $1 billion, making it the UK's 100th billion-dollar tech company, or unicorn. In July 2023, the company received a $65 million investment from SoftBank Group, through its Vision Fund 2.
Wikipedia
EpiData is a group of applications used in combination to create documented data structures and to analyze quantitative data. Overview The EpiData Association, which develops the software, was founded in 1999 and is based in Denmark. EpiData was developed in Pascal and uses open standards such as HTML where possible. EpiData is widely used by organizations and individuals to create and analyze large amounts of data. The World Health Organization (WHO) uses EpiData in its STEPS method of collecting epidemiological, medical, and public health data, for biostatistics, and for other quantitative projects. Epicentre, the research wing of Médecins Sans Frontières, uses EpiData to manage data from its international research studies and field epidemiology studies. E.g.: Piola P, Fogg C et al.: Supervised versus unsupervised intake of six-dose artemether-lumefantrine for treatment of acute, uncomplicated Plasmodium falciparum malaria in Mbarara, Uganda: a randomised trial. Lancet. 2005 Apr 23–29;365(9469):1467-73 'PMID 15850630'. Other examples: 'PMID 16765397', 'PMID 15569777' or 'PMID 17160135'. EpiData has two parts: EpiData Entry – used for simple or programmed data entry and data documentation; it handles simple forms or related systems. EpiData Analysis – performs basic statistical analysis, graphing, and comprehensive data management, such as recoding data and labeling values and variables. This application can create control charts, such as Pareto charts or p-charts, and offers many other methods to visualize and describe statistical data. The software is free; development is funded by governmental and non-governmental organizations such as WHO. See also Clinical surveillance Disease surveillance Epidemiological methods Control chart References External links EpiData official site EpiData Wiki EpiData-list Archived 2021-07-19 at the Wayback Machine – mailing list for EpiData World Health Organization STEPS approach to surveillance Médecins Sans Frontières Epicentre
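For the control charts mentioned in the description of EpiData Analysis, the calculation behind a p-chart is simple enough to show directly. The sketch below is plain Python rather than EpiData's own command syntax, and the sample counts are made-up values used only to illustrate the arithmetic.

```python
# p-chart (proportion control chart) limits: centre line p-bar with 3-sigma bounds.
import math

defectives  = [4, 6, 3, 7, 5, 2, 8, 4]   # nonconforming records per batch (assumed)
sample_size = 100                        # records inspected per batch (assumed)

p_bar = sum(defectives) / (len(defectives) * sample_size)   # overall proportion
sigma_p = math.sqrt(p_bar * (1 - p_bar) / sample_size)      # std. error per batch

ucl = p_bar + 3 * sigma_p                # upper control limit
lcl = max(0.0, p_bar - 3 * sigma_p)      # lower control limit (floored at zero)

print(f"centre line = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
for i, d in enumerate(defectives, start=1):
    p_i = d / sample_size
    status = "out of control" if (p_i > ucl or p_i < lcl) else "in control"
    print(f"batch {i}: p = {p_i:.2f} -> {status}")
```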
Wikipedia