In probability theory, Kolmogorov equations characterize continuous-time Markov processes. In particular, they describe how the probability of a continuous-time Markov process in a certain state changes over time. There are four distinct equations: the Kolmogorov forward equation for continuous processes, now understood to be identical to the Fokker–Planck equation, the Kolmogorov forward equation for jump processes, and two Kolmogorov backward equations for processes with and without discontinuous jumps. == Diffusion processes vs. jump processes == Writing in 1931, Andrei Kolmogorov started from the theory of discrete time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous time Markov processes by extending this equation. He found that there are two kinds of continuous time Markov processes, depending on the assumed behavior over small intervals of time: If you assume that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical", then you are led to what are called jump processes. The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small". For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all). == History == The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work. William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of the Kolmogorov's pair, in both jump and diffusion processes. Much later, in 1956, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations". Other authors, such as Motoo Kimura, referred to the diffusion (Fokker–Planck) equation as Kolmogorov forward equation, a name that has persisted. == The modern view == In the context of a continuous-time Markov process with jumps, see Kolmogorov equations (Markov jump process). In particular, in natural sciences the forward equation is also known as master equation. In the context of a diffusion process, for the backward Kolmogorov equations see Kolmogorov backward equations (diffusion). The forward Kolmogorov equation is also known as Fokker–Planck equation. == Continuous-time Markov chains == The original derivation of the equations by Kolmogorov starts with the Chapman–Kolmogorov equation (Kolmogorov called it fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space. In this formulation, it is assumed that the probabilities P ( x , s ; y , t ) {\displaystyle P(x,s;y,t)} are continuous and differentiable functions of t > s {\displaystyle t>s} , where x , y ∈ Ω {\displaystyle x,y\in \Omega } (the state space) and t > s , t , s ∈ R ≥ 0 {\displaystyle t>s,t,s\in \mathbb {R} _{\geq 0}} are the final and initial times, respectively. Also, adequate limit properties for the derivatives are assumed. Feller derives the equations under slightly different conditions, starting with the concept of purely discontinuous Markov process and then formulating them for more general state spaces. 
Feller proves the existence of solutions of probabilistic character to the Kolmogorov forward equations and Kolmogorov backward equations under natural conditions. For the case of a countable state space we put i , j {\displaystyle i,j} in place of x , y {\displaystyle x,y} . The Kolmogorov forward equations read ∂ P i j ∂ t ( s ; t ) = ∑ k P i k ( s ; t ) A k j ( t ) {\displaystyle {\frac {\partial P_{ij}}{\partial t}}(s;t)=\sum _{k}P_{ik}(s;t)A_{kj}(t)} , where A ( t ) {\displaystyle A(t)} is the transition rate matrix (also known as the generator matrix), while the Kolmogorov backward equations are ∂ P i j ∂ s ( s ; t ) = − ∑ k P k j ( s ; t ) A i k ( s ) {\displaystyle {\frac {\partial P_{ij}}{\partial s}}(s;t)=-\sum _{k}P_{kj}(s;t)A_{ik}(s)} The functions P i j ( s ; t ) {\displaystyle P_{ij}(s;t)} are continuous and differentiable in both time arguments. They represent the probability that the system that was in state i {\displaystyle i} at time s {\displaystyle s} jumps to state j {\displaystyle j} at some later time t > s {\displaystyle t>s} . The continuous quantities A i j ( t ) {\displaystyle A_{ij}(t)} satisfy A i j ( t ) = [ ∂ P i j ∂ u ( t ; u ) ] u = t , A j k ( t ) ≥ 0 , j ≠ k , ∑ k A j k ( t ) = 0. {\displaystyle A_{ij}(t)=\left[{\frac {\partial P_{ij}}{\partial u}}(t;u)\right]_{u=t},\quad A_{jk}(t)\geq 0,\ j\neq k,\quad \sum _{k}A_{jk}(t)=0.} === Relation with the generating function === Still in the discrete state case, letting s = 0 {\displaystyle s=0} and assuming that the system initially is found in state i {\displaystyle i} , the Kolmogorov forward equations describe an initial-value problem for finding the probabilities of the process, given the quantities A j k ( t ) {\displaystyle A_{jk}(t)} . We write p k ( t ) = P i k ( 0 ; t ) {\displaystyle p_{k}(t)=P_{ik}(0;t)} where ∑ k p k ( t ) = 1 {\displaystyle \sum _{k}p_{k}(t)=1} , then d p k d t ( t ) = ∑ j A j k ( t ) p j ( t ) ; p k ( 0 ) = δ i k , k = 0 , 1 , … . {\displaystyle {\frac {dp_{k}}{dt}}(t)=\sum _{j}A_{jk}(t)p_{j}(t);\quad p_{k}(0)=\delta _{ik},\qquad k=0,1,\dots .} For the case of a pure death process with constant rates the only nonzero coefficients are A j , j − 1 = μ j , j ≥ 1 {\displaystyle A_{j,j-1}=\mu _{j},\ j\geq 1} . Letting Ψ ( x , t ) = ∑ k x k p k ( t ) , {\displaystyle \Psi (x,t)=\sum _{k}x^{k}p_{k}(t),\quad } the system of equations can in this case be recast as a partial differential equation for Ψ ( x , t ) {\displaystyle {\Psi }(x,t)} with initial condition Ψ ( x , 0 ) = x i {\displaystyle \Psi (x,0)=x^{i}} . After some manipulations, the system of equations reads, ∂ Ψ ∂ t ( x , t ) = μ ( 1 − x ) ∂ Ψ ∂ x ( x , t ) ; Ψ ( x , 0 ) = x i , Ψ ( 1 , t ) = 1. {\displaystyle {\frac {\partial \Psi }{\partial t}}(x,t)=\mu (1-x){\frac {\partial {\Psi }}{\partial x}}(x,t);\qquad \Psi (x,0)=x^{i},\quad \Psi (1,t)=1.} == An example from biology == One example from biology is given below: p n ′ ( t ) = ( n − 1 ) β p n − 1 ( t ) − n β p n ( t ) {\displaystyle p_{n}'(t)=(n-1)\beta p_{n-1}(t)-n\beta p_{n}(t)} This equation is applied to model population growth with birth. Where n {\displaystyle n} is the population index, with reference the initial population, β {\displaystyle \beta } is the birth rate, and finally p n ( t ) = Pr ( N ( t ) = n ) {\displaystyle p_{n}(t)=\Pr(N(t)=n)} , i.e. the probability of achieving a certain population size. 
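A minimal numerical sketch of this birth-process example (an illustration added here, not part of the article): n indexes the population size, the process starts from an assumed initial size i0 = 1, and β is the per-capita birth rate. The infinite system of forward equations is truncated at a maximum size and integrated with SciPy; total probability and the exponentially growing mean serve as sanity checks.

```python
# Illustrative sketch: truncate and integrate the forward equations of the
# linear birth process p_n'(t) = (n-1)*beta*p_{n-1}(t) - n*beta*p_n(t).
# beta, the initial size i0 and the truncation N_max are arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp

beta, i0, N_max, T = 0.8, 1, 200, 2.0

def rhs(t, p):
    dp = np.zeros_like(p)
    for n in range(1, N_max + 1):
        dp[n] = (n - 1) * beta * p[n - 1] - n * beta * p[n]
    return dp

p0 = np.zeros(N_max + 1)
p0[i0] = 1.0                                # p_n(0) = delta_{n, i0}

sol = solve_ivp(rhs, (0.0, T), p0, t_eval=[T], rtol=1e-8, atol=1e-10)
p = sol.y[:, -1]

print(p[:5])                                # p_0(T) ... p_4(T)
print(p.sum())                              # ~1 if the truncation is large enough
print((np.arange(N_max + 1) * p).sum())     # mean population, ~ i0 * exp(beta*T)
```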
The analytical solution is: p n ( t ) = ( n − 1 ) β e − n β t ∫ 0 t p n − 1 ( s ) e n β s d s {\displaystyle p_{n}(t)=(n-1)\beta e^{-n\beta t}\int _{0}^{t}\!p_{n-1}(s)\,e^{n\beta s}\mathrm {d} s} This is a formula for the probability p n ( t ) {\displaystyle p_{n}(t)} in terms of the preceding ones, i.e. p n − 1 ( t ) {\displaystyle p_{n-1}(t)} . == See also == Feynman-Kac formula Fokker-Planck equation Kolmogorov backward equation == References ==
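As a complementary illustration (not from the article), the finite-state equations above can be checked numerically for a time-homogeneous chain, where the transition probabilities are P(s;t) = exp(A(t − s)); the 3-state rate matrix below is arbitrary.

```python
# Illustrative only: a 3-state, time-homogeneous jump process with an arbitrary
# generator (transition rate matrix) A; rows sum to zero, off-diagonal entries >= 0.
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])

def P(t):
    """P_ij(0; t) = [exp(A t)]_ij for the time-homogeneous case."""
    return expm(A * t)

t, dt = 0.7, 1e-6
Pt = P(t)
print(Pt.sum(axis=1))                              # each row sums to 1

dPdt = (P(t + dt) - Pt) / dt                       # numerical derivative in t
print(np.allclose(dPdt, Pt @ A, atol=1e-4))        # forward equation: dP/dt = P A
print(np.allclose(dPdt, A @ Pt, atol=1e-4))        # backward equation: dP/ds = -A P, so dP/dt = A P
```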
Wikipedia/Kolmogorov_equations
Mechanistic models for niche apportionment are biological models used to explain relative species abundance distributions. These niche apportionment models describe how species break up the resource pool in multi-dimensional space, determining the distribution of abundances of individuals among species. The relative abundances of species are usually expressed as a Whittaker plot, or rank abundance plot, where species are ranked by number of individuals on the x-axis, plotted against the log relative abundance of each species on the y-axis. The relative abundance can be measured as the relative number of individuals within species or the relative biomass of individuals within species. == History == Niche apportionment models were developed because ecologists sought biological explanations for relative species abundance distributions. MacArthur (1957, 1961) was one of the earliest to express dissatisfaction with purely statistical models, presenting instead three mechanistic niche apportionment models. MacArthur believed that ecological niches within a resource pool could be broken up like a stick, with each piece of the stick representing niches occupied in the community. With contributions from Sugihara (1980), Tokeshi (1990, 1993, 1996) expanded upon the broken stick model, generating seven mechanistic niche apportionment models. These mechanistic models provide a useful starting point for describing the species composition of communities. == Description == A niche apportionment model can be used in situations where one resource pool is either sequentially or simultaneously broken up into smaller niches by colonizing species or by speciation (clarification on resource use: species within a guild use the same resources, while species within a community may not). These models describe how species that draw from the same resource pool (e.g. a guild (ecology)) partition their niche. The resource pool is broken either sequentially or simultaneously, and the two components of the process of fragmentation of the niche include which fragment is chosen and the size of the resulting fragment (Figure 2). Niche apportionment models have been used in the primary literature to explain and describe changes in the relative abundance distributions of a diverse array of taxa including freshwater insects, fish, bryophytes, beetles, hymenopteran parasites, plankton assemblages and salt marsh grass. == Assumptions == The mechanistic models that describe these plots work under the assumption that rank abundance plots are based on a rigorous estimate of the abundances of individuals within species and that these measures represent the actual species abundance distribution. Furthermore, whether using the number of individuals as the abundance measure or the biomass of individuals, these models assume that this quantity is directly proportional to the size of the niche occupied by an organism. One suggestion is that abundance measured as the number of individuals may exhibit lower variances than those using biomass. Thus, some studies using abundance as a proxy for niche allocation may overestimate the evenness of a community. This happens because the relationship between body size, abundance (ecology), and resource use is not clearly defined. Often studies fail to incorporate size structure or biomass estimates into measures of actual abundance, and these measures can create a higher variance around the niche apportionment models than abundance measured strictly as the number of individuals.
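As a simple illustration of how a rank abundance (Whittaker) plot is assembled, the sketch below uses invented species counts: abundances are converted to relative abundances, ranked from most to least abundant, and the log relative abundance is reported for each rank.

```python
# Invented example counts; in practice these would be field abundance or biomass data.
import math

counts = {"sp_A": 120, "sp_B": 45, "sp_C": 12, "sp_D": 5, "sp_E": 2}

total = sum(counts.values())
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

for rank, (species, n) in enumerate(ranked, start=1):
    rel = n / total                       # relative abundance
    print(rank, species, round(rel, 3), round(math.log10(rel), 2))
# Plotting rank (x-axis) against log10 relative abundance (y-axis) gives the Whittaker plot.
```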
== Tokeshi's mechanistic models of niche apportionment == Seven mechanistic models that describe niche apportionment are described below. The models are presented in order of increasing evenness, from the least even, the Dominance Pre-emption model, to the most even, the Dominance Decay and MacArthur Fraction models. === Dominance preemption === This model describes a situation where after initial colonization (or speciation) each new species pre-empts more than 50% of the smallest remaining niche. In the Dominance pre-emption model of niche apportionment each colonizing species occupies a random portion between 50 and 100% of the smallest remaining niche, making this model stochastic in nature. A closely related model, the Geometric Series, is a deterministic version of the Dominance pre-emption model, wherein the percentage of remaining niche space that the new species occupies (k) is always the same. In fact, the dominance pre-emption and geometric series models are conceptually similar and will produce the same relative abundance distribution when the proportion of the smaller niche filled is always 0.75. The dominance pre-emption model is the best fit to the relative abundance distributions of some stream fish communities in Texas, including some taxonomic groupings, and specific functional groupings. For the geometric series, the expected relative abundance of the i-th ranked species is P i = k ( 1 − k ) i − 1 {\displaystyle P_{i}=k(1-k)^{i-1}} (here k = 0.75). === Random assortment === In the random assortment model the resource pool is divided at random among simultaneously or sequentially colonizing species. This pattern could arise because the abundance measure does not scale with the amount of niche occupied by a species or because temporal variation in species abundance or niche breadth causes discontinuity in niche apportionment over time and thus species appear to have no relationship between extent of occupancy and their niche. Tokeshi (1993) explained that this model, in many ways, is similar to Caswell's neutral theory of biodiversity, mainly because species appear to act independently of each other. === Random fraction === The random fraction model describes a process where niche size is chosen at random by sequentially colonizing species. The initial species chooses a random portion of the total niche and subsequent colonizing species also choose a random portion of the total niche and divide it randomly until all species have colonized. Tokeshi (1990) found this model to be compatible with some epiphytic chironomid communities, and more recently it has been used to explain the relative abundance distributions of phytoplankton communities, salt meadow vegetation, some communities of insects in the order Diptera, some ground beetle communities, functional and taxonomic groupings of stream fish in Texas bio-regions, and ichneumonid parasitoids. A similar model was developed by Sugihara in an attempt to provide a biological explanation for the log normal distribution of Preston (1948). Sugihara's (1980) Fixed Division Model was similar to the random fraction model, but its randomness is drawn from a triangular distribution with a mean of 0.75 rather than a normal distribution with a mean of 0.5 used in the random fraction. Sugihara used a triangular distribution to draw the random variables because the randomness of some natural populations matches a triangular distribution with a mean of 0.75.
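To make two of these models concrete, the following rough sketch simulates the geometric series and the random fraction model; the community size and the uniform break point used for the random fraction are simplifying assumptions, not prescriptions from Tokeshi.

```python
# Rough illustration of two of the models above. The uniform break point in
# random_fraction and the community size S are simplifying assumptions.
import random

def geometric_series(S, k=0.75):
    """Deterministic geometric series: P_i = k*(1-k)**(i-1), renormalised over S species."""
    p = [k * (1 - k) ** (i - 1) for i in range(1, S + 1)]
    total = sum(p)
    return [x / total for x in p]

def random_fraction(S):
    """Sequentially pick an existing niche uniformly at random and split it in two."""
    niches = [1.0]
    while len(niches) < S:
        piece = niches.pop(random.randrange(len(niches)))
        cut = random.random()                  # assumed uniform break point
        niches.extend([piece * cut, piece * (1.0 - cut)])
    return sorted(niches, reverse=True)

random.seed(1)
print(geometric_series(10))    # fixed, strongly uneven rank-abundance distribution
print(random_fraction(10))     # usually more even, and different on every run
```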
=== Power fraction === This model can explain a relative abundance distribution when the probability of colonizing an existing niche in a resource pool is positively related to the size of that niche (measured as abundance, biomass, etc.). The probability with which a portion of the niche is colonized depends on the relative sizes of the established niches, and is scaled by an exponent k. k can take a value between 0 and 1, and if k>0 there is always a slightly higher probability that the larger niche will be colonized. This model is touted as being more biologically realistic because one can imagine many cases where the niche with the larger proportion of resources is more likely to be invaded, because that niche has more resource space and thus more opportunity for acquisition. The random fraction model of niche apportionment is one extreme of the power fraction model, where k=0; the other extreme, when k=1, resembles the MacArthur Fraction model, where the probability of colonization is directly proportional to niche size. === MacArthur fraction === This model requires that the initial niche is broken at random and the successive niches are chosen with a probability proportional to their size. In this model the largest niche always has a greater probability of being broken relative to the smaller niches in the resource pool. This model can lead to a more even distribution, where larger niches are more likely to be broken, facilitating co-existence between species in niches of similar size. The basis for the MacArthur Fraction model is the Broken Stick model, developed by MacArthur (1957). These models produce similar results, but one of the main conceptual differences is that niches are filled simultaneously in the Broken Stick model rather than sequentially as in the MacArthur Fraction. Tokeshi (1993) argues that sequentially invading a resource pool is more biologically realistic than simultaneously breaking the niche space. When the abundances of fish from all bio-regions in Texas were combined, the distribution resembled the broken stick model of niche apportionment, suggesting a relatively even distribution of freshwater fish species in Texas. === Dominance decay === This model can be thought of as the inverse of the Dominance pre-emption model. First, the initial resource pool is colonized randomly; subsequent colonizers then always colonize the largest niche, whether or not it is already colonized. This model generates the most even community relative to the niche apportionment models described above, because the largest niche is always broken into two smaller fragments that are more likely to be comparable in size to the smaller niche that was not broken. Communities of this “level” of evenness seem to be rare in natural systems. However, one such community is the relative abundance distribution of filter feeders at one site within the River Danube in Austria. === Composite === A composite model exists when a combination of niche apportionment models acts in different portions of the resource pool. Fesl (2002) shows how a composite model might appear in a study of freshwater Diptera, in that different niche apportionment models fit different functional groups of the data.
In another example, Higgins and Strauss (2008), modeling fish assemblages, found that fish communities from different habitats and with different species compositions conform to different niche apportionment models; thus the entire species assemblage was a combination of models acting in different regions of the species' range. == Fitting mechanistic models of niche apportionment to empirical data == Mechanistic models of niche apportionment are intended to describe communities. Researchers have used these models in many ways to investigate temporal and geographic trends in species abundance. For many years the fit of niche apportionment models was assessed by eye, with graphs of the models compared with empirical data. More recently, statistical tests of the fit of niche apportionment models to empirical data have been developed. The more recent method (Etienne and Olff 2005) uses a Bayesian simulation of the models to test their fit to empirical data. The earlier method, which is still commonly used, simulates the expected relative abundances, from a normal distribution, of each model given the same number of species as the empirical data. Each model is simulated multiple times, and the mean and standard deviation can be calculated to assign confidence intervals around each relative abundance distribution. The confidence interval around each rank can then be tested against empirical data for each model to determine model fit. The confidence intervals are calculated as follows (for more information on the simulation of niche apportionment models, see the website [1], which explains the program Power Niche): R ( x i ) = μ i ± r σ i n {\displaystyle R(x_{i})=\mu _{i}\pm {\frac {r\sigma _{i}}{\sqrt {n}}}} where r = confidence limit of the simulated data, σ = standard deviation of the simulated data, and n = number of replicates of the empirical sample. == References ==
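A rough sketch of the simulation-based fitting procedure described above (the replicate count, the confidence multiplier r = 1.96, and the choice of the random fraction model are illustrative assumptions): the model is simulated repeatedly and the per-rank mean and standard deviation give the confidence band of the formula given in this section.

```python
# Illustrative Monte Carlo confidence band for one niche apportionment model
# (random fraction). Replicate count and r = 1.96 are assumed choices.
import random
import statistics

def random_fraction(S):
    niches = [1.0]
    while len(niches) < S:
        piece = niches.pop(random.randrange(len(niches)))
        cut = random.random()
        niches.extend([piece * cut, piece * (1.0 - cut)])
    return sorted(niches, reverse=True)

random.seed(0)
S, n_reps, r = 10, 1000, 1.96
sims = [random_fraction(S) for _ in range(n_reps)]

for rank in range(S):
    values = [sim[rank] for sim in sims]
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    half_width = r * sigma / n_reps**0.5     # R(x_i) = mu_i +/- r*sigma_i/sqrt(n)
    print(rank + 1, round(mu, 4), round(mu - half_width, 4), round(mu + half_width, 4))
# An empirical rank-abundance distribution falling inside these bands at every rank
# would be judged consistent with the model.
```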
Wikipedia/Niche_apportionment_models
The random generalized Lotka–Volterra model (rGLV) is an ecological model and random set of coupled ordinary differential equations where the parameters of the generalized Lotka–Volterra equation are sampled from a probability distribution, analogously to quenched disorder. The rGLV models dynamics of a community of species in which each species' abundance grows towards a carrying capacity but is depleted due to competition from the presence of other species. It is often analyzed in the many-species limit using tools from statistical physics, in particular from spin glass theory. The rGLV has been used as a tool to analyze emergent macroscopic behavior in microbial communities with dense, strong interspecies interactions. The model has served as a context for theoretical investigations studying diversity-stability relations in community ecology and properties of static and dynamic coexistence. Dynamical behavior in the rGLV has been mapped experimentally in community microcosms. The rGLV model has also served as an object of interest for the spin glass and disordered systems physics community to develop new techniques and numerical methods. == Definition == The random generalized Lotka–Volterra model is written as the system of coupled ordinary differential equations, d N i d t = r i K i N i ( K i − N i − ∑ j ( ≠ i ) α i j N j ) , i = 1 , … , S , {\displaystyle {\frac {\mathrm {d} N_{i}}{\mathrm {d} t}}={\frac {r_{i}}{K_{i}}}N_{i}\left(K_{i}-N_{i}-\sum _{j(\neq i)}\alpha _{ij}N_{j}\right),\qquad i=1,\dots ,S,} where N i {\displaystyle N_{i}} is the abundance of species i {\displaystyle i} , S {\displaystyle S} is the number of species, K i {\displaystyle K_{i}} is the carrying capacity of species i {\displaystyle i} in the absence of interactions, r i {\displaystyle r_{i}} sets a timescale, and α {\displaystyle \alpha } is a random matrix whose entries are random variables with mean ⟨ α i j ⟩ = μ α / S {\displaystyle \langle \alpha _{ij}\rangle =\mu _{\alpha }/S} , variance v a r ( α i j ) = σ α 2 / S {\displaystyle \mathrm {var} (\alpha _{ij})=\sigma _{\alpha }^{2}/S} , and correlations c o r r ( α i j , α j i ) = γ {\displaystyle \mathrm {corr} (\alpha _{ij},\alpha _{ji})=\gamma } for i ≠ j {\displaystyle i\neq j} where − 1 ≤ γ ≤ 1 {\displaystyle -1\leq \gamma \leq 1} . The interaction matrix, α {\displaystyle \alpha } , may be parameterized as, α i j = μ α S + σ α S a i j , {\displaystyle \alpha _{ij}={\frac {\mu _{\alpha }}{S}}+{\frac {\sigma _{\alpha }}{\sqrt {S}}}a_{ij},} where a i j {\displaystyle a_{ij}} are standard random variables (i.e., zero mean and unit variance) with ⟨ a i j a j i ⟩ = γ {\displaystyle \langle a_{ij}a_{ji}\rangle =\gamma } for i ≠ j {\displaystyle i\neq j} . The matrix entries may have any distribution with common finite first and second moments and will yield identical results in the large S {\displaystyle S} limit due to the central limit theorem. The carrying capacities may also be treated as random variables with ⟨ K i ⟩ = K , var ⁡ ( K i ) = σ K 2 . {\displaystyle \langle K_{i}\rangle =K,\,\operatorname {var} (K_{i})=\sigma _{K}^{2}.} Analyses by statistical physics-inspired methods have revealed phase transitions between different qualitative behaviors of the model in the many-species limit. In some cases, this may include transitions between the existence of a unique globally-attractive fixed point and chaotic, persistent fluctuations. 
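A minimal simulation sketch (parameter values, and the simplification r_i = K_i = 1, are assumptions for illustration, not taken from the references): the interaction matrix is sampled with the prescribed mean, variance and pairwise correlation, and the system is integrated with SciPy.

```python
# Minimal sketch with assumed parameters: sample alpha_ij = mu/S + sigma/sqrt(S)*a_ij
# with corr(a_ij, a_ji) = gamma, then integrate the rGLV with r_i = K_i = 1.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, mu, sigma, gamma = 200, 4.0, 0.5, 0.0    # illustrative values (competitive interactions)

g1 = rng.standard_normal((S, S))
g2 = rng.standard_normal((S, S))
upper = np.triu(g1, 1)                       # a_ij for i < j
lower = (gamma * np.triu(g1, 1) + np.sqrt(1.0 - gamma**2) * np.triu(g2, 1)).T
a = upper + lower                            # unit variance, <a_ij a_ji> = gamma
alpha = mu / S + sigma / np.sqrt(S) * a
np.fill_diagonal(alpha, 0.0)                 # self-regulation is the explicit -N_i term

def rglv(t, N):
    return N * (1.0 - N - alpha @ N)         # r_i = K_i = 1

N0 = rng.uniform(0.1, 1.0, S)
sol = solve_ivp(rglv, (0.0, 200.0), N0, rtol=1e-6, atol=1e-9)
Nf = sol.y[:, -1]
print("fraction of surviving species:", np.mean(Nf > 1e-6))
print("mean abundance:", Nf.mean())
```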
== Steady-state abundances in the thermodynamic limit == In the thermodynamic limit (i.e., the community has a very large number of species) where a unique globally-attractive fixed point exists, the distribution of species abundances can be computed using the cavity method while assuming the system is self-averaging. The self-averaging assumption means that the distribution of any one species' abundance between samplings of model parameters matches the distribution of species abundances within a single sampling of model parameters. In the cavity method, an additional mean-field species i = 0 {\displaystyle i=0} is introduced and the response of the system is approximated linearly. The cavity calculation yields a self-consistent equation describing the distribution of species abundances as a mean-field random variable, N 0 {\displaystyle N_{0}} . When σ K = 0 {\displaystyle \sigma _{K}=0} , the mean-field equation is, 0 = N 0 ( K − μ α m − N 0 + q ( μ α 2 + γ σ α 2 ) Z + σ α 2 γ χ N 0 ) , {\displaystyle 0=N_{0}\left(K-\mu _{\alpha }m-N_{0}+{\sqrt {q\left(\mu _{\alpha }^{2}+\gamma \sigma _{\alpha }^{2}\right)}}Z+\sigma _{\alpha }^{2}\gamma \chi N_{0}\right),} where m = ⟨ N 0 ⟩ , q = ⟨ N 0 2 ⟩ , χ = ⟨ ∂ N 0 / ∂ K 0 ⟩ {\displaystyle m=\langle N_{0}\rangle ,\,q=\langle N_{0}^{2}\rangle ,\,\chi =\langle \partial N_{0}/\partial K_{0}\rangle } , and Z ∼ N ( 0 , 1 ) {\displaystyle Z\sim {\mathcal {N}}(0,1)} is a standard normal random variable. Only ecologically uninvadable solutions are taken (i.e., the largest solution for N 0 {\displaystyle N_{0}} in the quadratic equation is selected). The relevant susceptibility and moments of N 0 {\displaystyle N_{0}} , which has a truncated normal distribution, are determined self-consistently. == Dynamical phases == In the thermodynamic limit where there is an asymptotically large number of species (i.e., S → ∞ {\displaystyle S\to \infty } ), there are three distinct phases: one in which there is a unique fixed point (UFP), another with a multiple attractors (MA), and a third with unbounded growth. In the MA phase, depending on whether species abundances are replenished at a small rate, may approach arbitrarily small population sizes, or are removed from the community when the population falls below some cutoff, the resulting dynamics may be chaotic with persistent fluctuations or approach an initial conditions-dependent steady state. The transition from the UFP to MA phase is signaled by the cavity solution becoming unstable to disordered perturbations. When σ K = 0 {\displaystyle \sigma _{K}=0} , the phase transition boundary occurs when the parameters satisfy, σ α = 2 1 + γ . {\displaystyle \sigma _{\alpha }={\frac {\sqrt {2}}{1+\gamma }}.} In the σ K > 0 {\displaystyle \sigma _{K}>0} case, the phase boundary can still be calculated analytically, but no closed-form solution has been found; numerical methods are necessary to solve the self-consistent equations determining the phase boundary. The transition to the unbounded growth phase is signaled by the divergence of ⟨ N 0 ⟩ {\displaystyle \langle N_{0}\rangle } as computed in the cavity calculation. == Dynamical mean-field theory == The cavity method can also be used to derive a dynamical mean-field theory model for the dynamics. 
The cavity calculation yields a self-consistent equation describing the dynamics as a Gaussian process defined by the self-consistent equation (for σ K = 0 {\displaystyle \sigma _{K}=0} ), d N 0 d t = N 0 ( t ) [ K 0 − N 0 ( t ) − μ α m ( t ) − σ α η ( t ) + γ σ α 2 ∫ 0 t d t ′ χ ( t , t ′ ) N 0 ( t ′ ) ] , {\displaystyle {\frac {\mathrm {d} N_{0}}{\mathrm {d} t}}=N_{0}(t)\left[K_{0}-N_{0}(t)-\mu _{\alpha }m(t)-\sigma _{\alpha }\eta (t)+\gamma \sigma _{\alpha }^{2}\int _{0}^{t}\mathrm {d} t'\,\chi (t,t')N_{0}(t')\right],} where m ( t ) = ⟨ N 0 ( t ) ⟩ {\displaystyle m(t)=\langle N_{0}(t)\rangle } , η {\displaystyle \eta } is a zero-mean Gaussian process with autocorrelation ⟨ η ( t ) η ( t ′ ) ⟩ = ⟨ N 0 ( t ) N 0 ( t ′ ) ⟩ {\displaystyle \langle \eta (t)\eta (t')\rangle =\langle N_{0}(t)N_{0}(t')\rangle } , and χ ( t , t ′ ) = ⟨ δ N 0 ( t ) / δ K 0 ( t ′ ) | K 0 ( t ′ ) = K 0 ⟩ {\displaystyle \chi (t,t')=\langle \left.\delta N_{0}(t)/\delta K_{0}(t')\right|_{K_{0}(t')=K_{0}}\rangle } is the dynamical susceptibility defined in terms of a functional derivative of the dynamics with respect to a time-dependent perturbation of the carrying capacity. Using dynamical mean-field theory, it has been shown that at long times, the dynamics exhibit aging in which the characteristic time scale defining the decay of correlations increases linearly in the duration of the dynamics. That is, C N ( t , t + t τ ) → f ( τ ) {\displaystyle C_{N}(t,t+t\tau )\to f(\tau )} when t {\displaystyle t} is large, where C N ( t , t ′ ) = ⟨ N ( t ) N ( t ′ ) ⟩ {\displaystyle C_{N}(t,t')=\langle N(t)N(t')\rangle } is the autocorrelation function of the dynamics and f ( τ ) {\displaystyle f(\tau )} is a common scaling collapse function. When a small immigration rate λ ≪ 1 {\displaystyle \lambda \ll 1} is added (i.e., a small constant is added to the right-hand side of the equations of motion) the dynamics reach a time transitionally invariant state. In this case, the dynamics exhibit jumps between O ( 1 ) {\displaystyle O(1)} and O ( λ ) {\displaystyle O(\lambda )} abundances. == Related articles == Generalized Lotka–Volterra equation Competitive Lotka–Volterra equations Lotka–Volterra equations Consumer-resource model Theoretical ecology Random dynamical system Spin glass Cavity method Dynamical mean-field theory Quenched disorder Community (ecology) Ecological stability == References == == Further reading == Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/ Bunin, Guy (2017-04-28). "Ecological communities with Lotka-Volterra dynamics". Physical Review E. 95 (4): 042414. Bibcode:2017PhRvE..95d2414B. doi:10.1103/PhysRevE.95.042414. PMID 28505745.
Wikipedia/Random_generalized_Lotka–Volterra_model
Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples are: the nutrient exchange between vascular plants and mycorrhizal fungi, the fertilization of flowering plants by pollinators, the ways plants use fruits and edible seeds to encourage animal aid in seed dispersal, and the way corals become photosynthetic with the help of the microorganism zooxanthellae. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, and with parasitism, in which one species benefits at the expense of the other. However, mutualism may evolve from interactions that began with imbalanced benefits, such as parasitism. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Despite a different definition between mutualism and symbiosis, they have been largely used interchangeably in the past, and confusion on their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as: about 80% of land plants species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements. estimates of tropical rainforest plants with seed dispersal mutualisms with animals range at least from 70% to 93.5%. In addition, mutualism is thought to have driven the evolution of much of the biological diversity we see, such as flower forms (important for pollination mutualisms) and co-evolution between groups of species. A prominent example of pollination mutualism is with bees and flowering plants. Bees use these plants as their food source with pollen and nectar. In turn, they transfer pollen to other nearby flowers, inadvertently allowing for cross-pollination. Cross-pollination has become essential in plant reproduction and fruit/seed production. The bees get their nutrients from the plants, and allow for successful fertilization of plants, demonstrating a mutualistic relationship between two seemingly-unlike species. Mutualism has also been linked to major evolutionary events, such as the evolution of the eukaryotic cell (symbiogenesis) and the colonization of land by plants in association with mycorrhizal fungi. == Types == === Resource-resource relationships === Mutualistic relationships can be thought of as a form of "biological barter" in mycorrhizal associations between plant roots and fungi, with the plant providing carbohydrates to the fungus in return for primarily phosphate but also nitrogenous compounds. Other examples include rhizobia bacteria that fix nitrogen for leguminous plants (family Fabaceae) in return for energy-containing carbohydrates. 
Metabolite exchange between multiple mutualistic species of bacteria has also been observed in a process known as cross-feeding. === Service-resource relationships === Service-resource relationships are common. Three important types are pollination, cleaning symbiosis, and zoochory. In pollination, a plant trades food resources in the form of nectar or pollen for the service of pollen dispersal. However, daciniphilous Bulbophyllum orchid species trade sex pheromone precursor or booster components via floral synomones/attractants in a true mutualistic interactions with males of Dacini fruit flies (Diptera: Tephritidae: Dacinae). Phagophiles feed (resource) on ectoparasites, thereby providing anti-pest service, as in cleaning symbiosis. Elacatinus and Gobiosoma, genera of gobies, feed on ectoparasites of their clients while cleaning them. Zoochory is the dispersal of the seeds of plants by animals. This is similar to pollination in that the plant produces food resources (for example, fleshy fruit, overabundance of seeds) for animals that disperse the seeds (service). Plants may advertise these resources using colour and a variety of other fruit characteristics, e.g., scent. Fruit of the aardvark cucumber (Cucumis humifructus) is buried so deeply that the plant is solely reliant upon the aardvark's keen sense of smell to detect its ripened fruit, extract, consume and then scatter its seeds; C. humifructus's geographical range is thus restricted to that of the aardvark. Another type is ant protection of aphids, where the aphids trade sugar-rich honeydew (a by-product of their mode of feeding on plant sap) in return for defense against predators such as ladybugs. === Service-service relationships === Strict service-service interactions are very rare, for reasons that are far from clear. One example is the relationship between sea anemones and anemone fish in the family Pomacentridae: the anemones provide the fish with protection from predators (which cannot tolerate the stings of the anemone's tentacles) and the fish defend the anemones against butterflyfish (family Chaetodontidae), which eat anemones. However, in common with many mutualisms, there is more than one aspect to it: in the anemonefish-anemone mutualism, waste ammonia from the fish feeds the symbiotic algae that are found in the anemone's tentacles. Therefore, what appears to be a service-service mutualism in fact has a service-resource component. A second example is that of the relationship between some ants in the genus Pseudomyrmex and trees in the genus Acacia, such as the whistling thorn and bullhorn acacia. The ants nest inside the plant's thorns. In exchange for shelter, the ants protect acacias from attack by herbivores (which they frequently eat when those are small enough, introducing a resource component to this service-service relationship) and competition from other plants by trimming back vegetation that would shade the acacia. In addition, another service-resource component is present, as the ants regularly feed on lipid-rich food-bodies called Beltian bodies that are on the Acacia plant. In the neotropics, the ant Myrmelachista schumanni makes its nest in special cavities in Duroia hirsuta. Plants in the vicinity that belong to other species are killed with formic acid. This selective gardening can be so aggressive that small areas of the rainforest are dominated by Duroia hirsute. These peculiar patches are known by local people as "devil's gardens". 
In some of these relationships, the cost of the ant's protection can be quite expensive. Cordia sp. trees in the Amazon rainforest have a kind of partnership with Allomerus sp. ants, which make their nests in modified leaves. To increase the amount of living space available, the ants will destroy the tree's flower buds. The flowers die and leaves develop instead, providing the ants with more dwellings. Another type of Allomerus sp. ant lives with the Hirtella sp. tree in the same forests, but in this relationship, the tree has turned the tables on the ants. When the tree is ready to produce flowers, the ant abodes on certain branches begin to wither and shrink, forcing the occupants to flee, leaving the tree's flowers to develop free from ant attack. The term "species group" can be used to describe the manner in which individual organisms group together. In this non-taxonomic context one can refer to "same-species groups" and "mixed-species groups." While same-species groups are the norm, examples of mixed-species groups abound. For example, zebra (Equus burchelli) and wildebeest (Connochaetes taurinus) can remain in association during periods of long distance migration across the Serengeti as a strategy for thwarting predators. Cercopithecus mitis and Cercopithecus ascanius, species of monkey in the Kakamega Forest of Kenya, can stay in close proximity and travel along exactly the same routes through the forest for periods of up to 12 hours. These mixed-species groups cannot be explained by the coincidence of sharing the same habitat. Rather, they are created by the active behavioural choice of at least one of the species in question. === Protocooperation === Protocooperation is a form of mutualism, but the cooperating species do not depend on each other for survival. The term, initially used for intraspecific interactions, was popularized by Eugene Odum (1953), although it is now rarely used. == Evolution == Mutualistic symbiosis can sometimes evolve from parasitism or commensalism. Symbiogenesis, a leading theory on the evolution of Eukaryotes states the origin of the mitochondria and cell nucleus emerged from a parasitic relationship of ancient Archaea and Bacteria. Fungi's relationship to plants in the form of mycelium evolved from parasitism and commensalism. Under certain conditions species of fungi previously in a state of mutualism can turn parasitic on weak or dying plants. Likewise the symbiotic relationship of clown fish and sea anemones emerged from a commensalist relationship. Once a mutualistic relationship emerges both symbionts are pushed towards co-evolution with each other. == Mathematical modeling == Mathematical treatments of mutualisms, like the study of mutualisms in general, have lagged behind those for predation, or predator-prey, consumer-resource, interactions. In models of mutualisms, the terms "type I" and "type II" functional responses refer to the linear and saturating relationships, respectively, between the benefit provided to an individual of species 1 (dependent variable) and the density of species 2 (independent variable). === Type I functional response === One of the simplest frameworks for modeling species interactions is the Lotka–Volterra equations. 
In this model, the changes in population densities of the two mutualists are quantified as: d N 1 d t = r 1 N 1 − α 11 N 1 2 + β 12 N 1 N 2 d N 2 d t = r 2 N 2 − α 22 N 2 2 + β 21 N 1 N 2 {\displaystyle {\begin{aligned}{\frac {dN_{1}}{dt}}&=r_{1}N_{1}-\alpha _{11}N_{1}^{2}+\beta _{12}N_{1}N_{2}\\[8pt]{\frac {dN_{2}}{dt}}&=r_{2}N_{2}-\alpha _{22}N_{2}^{2}+\beta _{21}N_{1}N_{2}\end{aligned}}} where N i {\displaystyle N_{i}} = the population density of species i. r i {\displaystyle r_{i}} = the intrinsic growth rate of the population of species i. α i i {\displaystyle \alpha _{ii}} = the negative effect of within-species crowding on species i. β i j {\displaystyle \beta _{ij}} = the beneficial effect of the density of species j on species i. Mutualism is in essence the logistic growth equation modified for mutualistic interaction. The mutualistic interaction term represents the increase in population growth of one species as a result of the presence of greater numbers of another species. As the mutualistic interactive term β is always positive, this simple model may lead to unrealistic unbounded growth. So it may be more realistic to include a further term in the formula, representing a saturation mechanism, to avoid this occurring. === Type II functional response === In 1989, David Hamilton Wright modified the above Lotka–Volterra equations by adding a new term, βM/K, to represent a mutualistic relationship. Wright also considered the concept of saturation, which means that with higher densities, there is a decrease in the benefits of further increases of the mutualist population. Without saturation, depending on the size of parameter α, species densities would increase indefinitely. Because that is not possible due to environmental constraints and carrying capacity, a model that includes saturation would be more accurate. Wright's mathematical theory is based on the premise of a simple two-species mutualism model in which the benefits of mutualism become saturated due to limits posed by handling time. Wright defines handling time as the time needed to process a food item, from the initial interaction to the start of a search for new food items and assumes that processing of food and searching for food are mutually exclusive. Mutualists that display foraging behavior are exposed to the restrictions on handling time. Mutualism can be associated with symbiosis. Handling time interactions In 1959, C. S. Holling performed his classic disc experiment that assumed that the number of food items captured is proportional to the allotted searching time; and that there is a handling time variable that exists separately from the notion of search time. He then developed an equation for the Type II functional response, which showed that the feeding rate is equivalent to a x 1 + a x T H {\displaystyle {\cfrac {ax}{1+axT_{H}}}} where a = the instantaneous discovery rate x = food item density TH = handling time The equation that incorporates Type II functional response and mutualism is: d N d t = N [ r ( 1 − c N ) + b a M 1 + a T H M ] {\displaystyle {\frac {dN}{dt}}=N\left[r(1-cN)+{\cfrac {baM}{1+aT_{H}M}}\right]} where N and M = densities of the two mutualists r = intrinsic rate of increase of N c = coefficient measuring negative intraspecific interaction. This is equivalent to inverse of the carrying capacity, 1/K, of N, in the logistic equation. 
a = instantaneous discovery rate b = coefficient converting encounters with M to new units of N or, equivalently, d N d t = N [ r ( 1 − c N ) + β M / ( X + M ) ] {\displaystyle {\frac {dN}{dt}}=N[r(1-cN)+\beta M/(X+M)]} where X = 1/aTH β = b/TH This model is most effectively applied to free-living species that encounter a number of individuals of the mutualist part in the course of their existences. Wright notes that models of biological mutualism tend to be similar qualitatively, in that the featured isoclines generally have a positive decreasing slope, and by and large similar isocline diagrams. Mutualistic interactions are best visualized as positively sloped isoclines, which can be explained by the fact that the saturation of benefits accorded to mutualism or restrictions posed by outside factors contribute to a decreasing slope. The type II functional response is visualized as the graph of b a M 1 + a T H M {\displaystyle {\cfrac {baM}{1+aT_{H}M}}} vs. M. == Structure of networks == Mutualistic networks made up out of the interaction between plants and pollinators were found to have a similar structure in very different ecosystems on different continents, consisting of entirely different species. The structure of these mutualistic networks may have large consequences for the way in which pollinator communities respond to increasingly harsh conditions and on the community carrying capacity. Mathematical models that examine the consequences of this network structure for the stability of pollinator communities suggest that the specific way in which plant-pollinator networks are organized minimizes competition between pollinators, reduce the spread of indirect effects and thus enhance ecosystem stability and may even lead to strong indirect facilitation between pollinators when conditions are harsh. This means that pollinator species together can survive under harsh conditions. But it also means that pollinator species collapse simultaneously when conditions pass a critical point. This simultaneous collapse occurs, because pollinator species depend on each other when surviving under difficult conditions. Such a community-wide collapse, involving many pollinator species, can occur suddenly when increasingly harsh conditions pass a critical point and recovery from such a collapse might not be easy. The improvement in conditions needed for pollinators to recover could be substantially larger than the improvement needed to return to conditions at which the pollinator community collapsed. == Humans == Humans are involved in mutualisms with other species: their gut flora is essential for efficient digestion. Infestations of head lice might have been beneficial for humans by fostering an immune response that helps to reduce the threat of body louse borne lethal diseases. Some relationships between humans and domesticated animals and plants are to different degrees mutualistic. For example, domesticated cereals that provide food for humans have lost the ability to spread seeds by shattering, a strategy that wild grains use to spread their seeds. In traditional agriculture, some plants have mutualistic relationships as companion plants, providing each other with shelter, soil fertility or natural pest control. For example, beans may grow up cornstalks as a trellis, while fixing nitrogen in the soil for the corn, a phenomenon that is used in Three Sisters farming. 
One researcher has proposed that the key advantage Homo sapiens had over Neanderthals in competing over similar habitats was the former's mutualism with dogs. === Intestinal microbiota === The microbiota in the human intestine coevolved with the human species, and this relationship is considered to be a mutualism that is beneficial both to the human host and the bacteria in the gut population. The mucous layer of the intestine contains commensal bacteria that produce bacteriocins, modify the pH of the intestinal contents, and compete for nutrition to inhibit colonization by pathogens. The gut microbiota, containing trillions of microorganisms, possesses the metabolic capacity to produce and regulate multiple compounds that reach the circulation and act to influence the function of distal organs and systems. Breakdown of the protective mucosal barrier of the gut can contribute to the development of colon cancer. == Evolution of mutualism == === Evolution by type === Every generation of every organism needs nutrients – and similar nutrients – more than they need particular defensive characteristics, as the fitness benefit of these vary heavily especially by environment. This may be the reason that hosts are more likely to evolve to become dependent on vertically transmitted bacterial mutualists which provide nutrients than those providing defensive benefits. This pattern is generalized beyond bacteria by Yamada et al. 2015's demonstration that undernourished Drosophila are heavily dependent on their fungal symbiont Issatchenkia orientalis for amino acids. === Mutualism breakdown === Mutualisms are not static, and can be lost by evolution. Sachs and Simms (2006) suggest that this can occur via four main pathways: One mutualist shifts to parasitism, and no longer benefits its partner, such as headlice One partner abandons the mutualism and lives autonomously One partner may go extinct A partner may be switched to another species There are many examples of mutualism breakdown. For example, plant lineages inhabiting nutrient-rich environments have evolutionarily abandoned mycorrhizal mutualisms many times independently. Evolutionarily, headlice may have been mutualistic as they allow for early immunity to various body-louse borne disease; however, as these diseases became eradicated, the relationship has become less mutualistic and more parasitic. == Measuring and defining mutualism == Measuring the exact fitness benefit to the individuals in a mutualistic relationship is not always straightforward, particularly when the individuals can receive benefits from a variety of species, for example most plant-pollinator mutualisms. It is therefore common to categorise mutualisms according to the closeness of the association, using terms such as obligate and facultative. Defining "closeness", however, is also problematic. It can refer to mutual dependency (the species cannot live without one another) or the biological intimacy of the relationship in relation to physical closeness (e.g., one species living within the tissues of the other species). == See also == Arbuscular mycorrhiza Co-adaptation Coevolution Ecological facilitation Frugivore Greater honeyguide – has a mutualism with humans Interspecies communication Müllerian mimicry Mutualisms and conservation Mutual Aid: A Factor of Evolution Symbiogenesis Plant–animal interaction == References == == Further references == == Further reading == Boucher, D. G.; James, S.; Keeler, K. (1984). "The ecology of mutualism". 
Annual Review of Ecology and Systematics. 13: 315–347. doi:10.1146/annurev.es.13.110182.001531. Boucher, D.H. (1985). The Biology of Mutualism : Ecology and Evolution. Oxford University Press. ISBN 0-7099-3238-3. OCLC 11971241.
Wikipedia/Mutualism_and_the_Lotka–Volterra_equation
A functional response in ecology is the intake rate of a consumer as a function of food density (the amount of food available in a given ecotope). It is associated with the numerical response, which is the reproduction rate of a consumer as a function of food density. Following C. S. Holling, functional responses are generally classified into three types, which are called Holling's type I, II, and III. These were formulated using laboratory experiments in which participants collected disks from boards of increasing disk density. Thus, the resulting formulae are often referred to as Holling's Disk Equations. == Type I == The type I functional response assumes a linear increase in intake rate with food density, either for all food densities, or only for food densities up to a maximum, beyond which the intake rate is constant. The linear increase assumes that the time needed by the consumer to process a food item is negligible, or that consuming food does not interfere with searching for food. A functional response of type I is used in the Lotka–Volterra predator–prey model. It was the first kind of functional response described and is also the simplest of the three functional responses currently detailed. == Type II == The type II functional response is characterized by a decelerating intake rate, which follows from the assumption that the consumer is limited by its capacity to process food. The type II functional response is often modelled by a rectangular hyperbola, for instance by Holling's disc equation, which assumes that processing of food and searching for food are mutually exclusive behaviours. The equation is f ( R ) = a R 1 + a h R {\displaystyle {\begin{aligned}f(R)&={\frac {aR}{1+ahR}}\end{aligned}}} where f denotes intake rate and R denotes food (or resource) density. The rate at which the consumer encounters food items per unit of food density is called the attack rate, a. The average time spent on processing a food item is called the handling time, h. Similar equations are the Monod equation for the growth of microorganisms and the Michaelis–Menten equation for the rate of enzymatic reactions. In an example with wolves and caribou, as the number of caribou increases while the number of wolves is held constant, the number of caribou kills increases and then levels off. This is because the proportion of caribou killed per wolf decreases as caribou density increases: the higher the density of caribou, the smaller the proportion of caribou killed per wolf. Explained slightly differently, at very high caribou densities, wolves need very little time to find prey and spend almost all their time handling prey and very little time searching. Wolves are then satiated and the total number of caribou kills reaches a plateau. == Type III == The type III functional response is similar to type II in that saturation occurs at high levels of prey density. At low prey densities, however, the number of prey consumed by predators increases super-linearly with prey density: f ( R ) = a R k 1 + a h R k , k > 1 {\displaystyle {\begin{aligned}f(R)&={\frac {aR^{k}}{1+ahR^{k}}},\;\;\;\;\;\;\;k>1\end{aligned}}} This accelerating function was originally formulated in analogy with the kinetics of an enzyme with two binding sites, for k = 2. More generally, if a prey type is only accepted after every k encounters and rejected the k-1 times in between, which mimics learning, the general form above is found.
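For illustration (the parameter values are arbitrary, not Holling's data), the three response types can be tabulated directly from the formulas above; the sketch below uses NumPy.

```python
# Illustrative values only: tabulate Holling type I, II and III intake rates.
import numpy as np

a, h, k = 1.0, 0.1, 2                    # attack rate, handling time, type III exponent
R = np.linspace(0.0, 50.0, 6)            # food (resource) densities

type_I   = a * R                          # linear (no handling-time limit)
type_II  = a * R / (1.0 + a * h * R)      # Holling's disc equation
type_III = a * R**k / (1.0 + a * h * R**k)

for r, f1, f2, f3 in zip(R, type_I, type_II, type_III):
    print(f"R={r:5.1f}   I={f1:6.2f}   II={f2:5.2f}   III={f3:5.2f}")
# The two saturating forms both approach the maximum intake rate 1/h = 10.
```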
Learning time is defined as the natural improvement of a predator's searching and attacking efficiency, or the natural improvement in its handling efficiency, as prey density increases. Imagine a prey density so small that the chance of a predator encountering that prey is extremely low. Because the predator finds prey so infrequently, it has not had enough experience to develop the best ways to capture and subdue that species of prey. Holling identified this mechanism in shrews and deer mice feeding on sawflies. At low numbers of sawfly cocoons per acre, deer mice especially experienced exponential growth in the number of cocoons consumed per individual as the density of cocoons increased. The characteristic saturation point of the type III functional response was also observed in the deer mice: at a certain density of cocoons per acre, the consumption rate of the deer mice levelled off even as the cocoon density continued to increase. Prey switching involves two or more prey species and one predator species. When all prey species are at equal densities, the predator will select between prey species indiscriminately. However, if the density of one of the prey species decreases, then the predator will start selecting the other, more common prey species with a higher frequency, because it can increase the efficiency with which it captures the more abundant prey through learning. Murdoch demonstrated this effect with guppies preying on tubificids and fruit flies. As fruit fly numbers decreased, guppies switched from feeding on the fruit flies at the water's surface to feeding on the more abundant tubificids along the bed. If predators learn while foraging, but do not reject prey before they accept one, the functional response becomes a function of the density of all prey types. This describes predators that feed on multiple prey and dynamically switch from one prey type to another. This behaviour can lead to either a type II or a type III functional response. If the density of one prey type is approximately constant, as is often the case in experiments, a type III functional response is found. When the prey densities change in approximate proportion to each other, as is the case in most natural situations, a type II functional response is typically found. This explains why the type III functional response has been found in many experiments in which prey densities are artificially manipulated, but is rare in nature. == See also == Carnivore Ecosystem model Herbivore Lotka–Volterra equations Predator satiation == References ==
Wikipedia/Functional_response
Patch dynamics is an ecological perspective holding that the structure, function, and dynamics of ecological systems can be understood through studying their interactive patches. Patch dynamics, as a term, may also refer to the spatiotemporal changes within and among patches that make up a landscape. Patch dynamics is ubiquitous in terrestrial and aquatic systems across organizational levels and spatial scales. From a patch dynamics perspective, populations, communities, ecosystems, and landscapes may all be studied effectively as mosaics of patches that differ in size, shape, composition, history, and boundary characteristics. The idea of patch dynamics dates back to the 1940s, when plant ecologists studied the structure and dynamics of vegetation in terms of the interactive patches that it comprises. A mathematical theory of patch dynamics was developed by Simon Levin and Robert Paine in the 1970s, originally to describe the pattern and dynamics of an intertidal community as a patch mosaic created and maintained by tidal disturbances. Patch dynamics became a dominant theme in ecology between the late 1970s and the 1990s. Patch dynamics is a conceptual approach to ecosystem and habitat analysis that emphasizes the dynamics of heterogeneity within a system (i.e. that each area of an ecosystem is made up of a mosaic of small 'sub-ecosystems'). Diverse patches of habitat created by natural disturbance regimes are seen as critical to the maintenance of this diversity (ecology). A habitat patch is any discrete area with a definite shape and spatial configuration, used by a species for breeding or for obtaining other resources. Mosaics are the patterns within landscapes that are composed of smaller elements, such as individual forest stands, shrubland patches, highways, farms, or towns. == Patches and mosaics == Historically, due to the short time scale of human observation, mosaic landscapes were perceived to be static patterns of human population mosaics. This focus centered on the idea that the status of a particular population, community, or ecosystem could be understood by studying a particular patch within a mosaic. However, this perception ignored the conditions that interact with, and connect, patches. In 1979, Bormann and Likens coined the phrase shifting mosaic to describe the theory that landscapes change and fluctuate, and are in fact dynamic. This is related to the battle of cells that occurs in a Petri dish. Patch dynamics refers to the concept that landscapes are dynamic. There are three states that a patch can exist in: potential, active, and degraded. Patches in the potential state are transformed into active patches through colonization of the patch by dispersing species arriving from other active or degrading patches. Patches are transformed from the active state to the degraded state when the patch is abandoned, and patches change from degraded to active through a process of recovery. Logging, fire, farming, and reforestation can all contribute to the process of colonization, and can effectively change the shape of the patch. Patch dynamics also refers to changes in the structure, function, and composition of individual patches that can, for example, affect the rate of nutrient cycling. Patches are also linked: although patches may be separated in space, migration can occur from one patch to another. This migration maintains the population of some patches, and can be the mechanism by which some plant species spread.
This implies that ecological systems within landscapes are open, rather than closed and isolated (Pickett, 2006). == Conservation efforts == Recognizing the patch dynamics within a system is needed for ecological conservation efforts to succeed. Successful conservation includes understanding how patches change and predicting how they will be affected by external forces. These external forces include natural processes, such as disturbance and succession, as well as the effects of human activities, such as land use and restoration. In a sense, conservation is the active maintenance of patch dynamics (Pickett, 2006). The analysis of patch dynamics could be used to predict changes in the biodiversity of an ecosystem. When patches of species can be tracked, it has been shown that fluctuations in the biggest patch (the most dominant species) can be used as an early warning of a biodiversity collapse. That means that if external conditions, like climate change and habitat fragmentation, change the internal dynamics of patches, a sharp reduction in biodiversity can be detected before it occurs. == See also == Conservation biology Edge effect Forest dynamics Habitat conservation Habitat corridor Habitat fragmentation Island biogeography Landscape ecology Spatial ecology == References == == Further reading == Forman, R.T.T. 1995. Land Mosaics: The Ecology of Landscapes and Regions. Cambridge University Press, Cambridge, UK. Groom, Martha J., Meffe, Gary K., Carroll, Ronald. 2006. Principles of Conservation Biology, Third Edition. Mosaics and Patch Dynamics by Steward T.A. Pickett Levin, S. A., and R. T. Paine. 1974. Disturbance, patch formation and community structure. Proceedings of the National Academy of Sciences (USA) 71:2744-2747. Levin, S. A., T. M. Powell, and J. H. Steele, editors. 1993. Patch Dynamics. Springer-Verlag, Berlin. Wu, J. G., and O. L. Loucks. 1995. From balance of nature to hierarchical patch dynamics: A paradigm shift in ecology. Quarterly Review of Biology 70:439-466.
Wikipedia/Patch_dynamics
Source–sink dynamics is a theoretical model used by ecologists to describe how variation in habitat quality may affect the population growth or decline of organisms. Since quality is likely to vary among patches of habitat, it is important to consider how a low quality patch might affect a population. In this model, organisms occupy two patches of habitat. One patch, the source, is a high quality habitat that on average allows the population to increase. The second patch, the sink, is a very low quality habitat that, on its own, would not be able to support a population. However, if the excess of individuals produced in the source frequently moves to the sink, the sink population can persist indefinitely. Organisms are generally assumed to be able to distinguish between high and low quality habitat, and to prefer high quality habitat. However, ecological trap theory describes the reasons why organisms may actually prefer sink patches over source patches. Finally, the source–sink model implies that some habitat patches may be more important to the long-term survival of the population, and considering the presence of source–sink dynamics will help inform conservation decisions. == Theory development == Although the seeds of a source–sink model had been planted earlier, Pulliam is often recognized as the first to present a fully developed source–sink model. He defined source and sink patches in terms of their demographic parameters, or BIDE rates (birth, immigration, death, and emigration rates). In the source patch, birth rates were greater than death rates, causing the population to grow. The excess individuals were expected to leave the patch, so that emigration rates were greater than immigration rates. In other words, sources were a net exporter of individuals. In contrast, in a sink patch, death rates were greater than birth rates, resulting in a population decline toward extinction unless enough individuals emigrated from the source patch. Immigration rates were expected to be greater than emigration rates, so that sinks were a net importer of individuals. As a result, there would be a net flow of individuals from the source to the sink (see Table 1). Pulliam's work was followed by many others who developed and tested the source–sink model. Watkinson and Sutherland presented a phenomenon in which high immigration rates could cause a patch to appear to be a sink by raising the patch's population above its carrying capacity (the number of individuals it can support). However, in the absence of immigration, the patches are able to support a smaller population. Since true sinks cannot support any population, the authors called these patches "pseudo-sinks". Definitively distinguishing between true sinks and pseudo-sinks requires cutting off immigration to the patch in question and determining whether the patch is still able to maintain a population. Thomas et al. were able to do just that, taking advantage of an unseasonable frost that killed off the host plants for a source population of Edith's checkerspot butterfly (Euphydryas editha). Without the host plants, the supply of immigrants to other nearby patches was cut off. Although these patches had appeared to be sinks, they did not become extinct without the constant supply of immigrants. They were capable of sustaining a smaller population, suggesting that they were in fact pseudo-sinks. 
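The basic BIDE bookkeeping behind these classifications can be illustrated with a minimal two-patch simulation: a source whose births exceed deaths exports its surplus, while a sink whose deaths exceed births persists only through immigration. This is only a sketch; the update rule and every rate value below are illustrative assumptions, not parameters taken from Pulliam or from any of the studies discussed here.

def simulate(years=50, source_K=100.0, b_source=0.6, d_source=0.4,
             b_sink=0.3, d_sink=0.5, n_source=50.0, n_sink=10.0):
    """Two-patch source-sink sketch: each year the source's surplus above its
    carrying capacity emigrates to the sink (all rates are illustrative)."""
    history = []
    for _ in range(years):
        # Within-patch demography: births minus deaths.
        n_source *= 1.0 + b_source - d_source
        n_sink *= 1.0 + b_sink - d_sink
        # Individuals in excess of the source's carrying capacity emigrate.
        emigrants = max(n_source - source_K, 0.0)
        n_source -= emigrants
        n_sink += emigrants
        history.append((n_source, n_sink))
    return history

print(simulate()[-1])
# The sink settles at a positive size even though its deaths exceed its births,
# because it is continually topped up by emigrants from the source.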
Watkinson and Sutherland's caution about identifying pseudo-sinks was followed by Dias, who argued that differentiating between sources and sinks themselves may be difficult. She asserted that a long-term study of the demographic parameters of the populations in each patch is necessary. Otherwise, temporary variations in those parameters, perhaps due to climate fluctuations or natural disasters, may result in a misclassification of the patches. For example, Johnson described periodic flooding of a river in Costa Rica which completely inundated patches of the host plant for a rolled-leaf beetle (Cephaloleia fenestrata). During the floods, these patches became sinks, but at other times they were no different from other patches. If researchers had not considered what happened during the floods, they would not have understood the full complexity of the system. Dias also argued that an inversion between source and sink habitat is possible so that the sinks may actually become the sources. Because reproduction in source patches is much higher than in sink patches, natural selection is generally expected to favor adaptations to the source habitat. However, if the proportion of source to sink habitat changes so that sink habitat becomes much more available, organisms may begin to adapt to it instead. Once adapted, the sink may become a source habitat. This is believed to have occurred for the blue tit (Parus caeruleus) 7500 years ago as forest composition on Corsica changed, but few modern examples are known. Boughton described a source—pseudo-sink inversion in butterfly populations of E. editha. Following the frost, the butterflies had difficulty recolonizing the former source patches. Boughton found that the host plants in the former sources senesced much earlier than in the former pseudo-sink patches. As a result, immigrants regularly arrived too late to successfully reproduce. He found that the former pseudo-sinks had become sources, and the former sources had become true sinks. One of the most recent additions to the source–sink literature is by Tittler et al., who examined wood thrush (Hylocichla mustelina) survey data for evidence of source and sink populations on a large scale. The authors reasoned that emigrants from sources would likely be the juveniles produced in one year dispersing to reproduce in sinks in the next year, producing a one-year time lag between population changes in the source and in the sink. Using data from the Breeding Bird Survey, an annual survey of North American birds, they looked for relationships between survey sites showing such a one-year time lag. They found several pairs of sites showing significant relationships 60–80 km apart. Several appeared to be sources to more than one sink, and several sinks appeared to receive individuals from more than one source. In addition, some sites appeared to be a sink to one site and a source to another (see Figure 1). The authors concluded that source–sink dynamics may occur on continental scales. One of the more confusing issues involves identifying sources and sinks in the field. Runge et al. point out that in general researchers need to estimate per capita reproduction, probability of survival, and probability of emigration to differentiate source and sink habitats. If emigration is ignored, then individuals that emigrate may be treated as mortalities, thus causing sources to be classified as sinks. 
This issue is important if the source–sink concept is viewed in terms of habitat quality (as it is in Table 1) because classifying high-quality habitat as low-quality may lead to mistakes in ecological management. Runge et al. showed how to integrate the theory of source–sink dynamics with population projection matrices and ecological statistics in order to differentiate sources and sinks. == Modes of dispersal == Why would individuals ever leave high quality source habitat for a low quality sink habitat? This question is central to source–sink theory. Ultimately, it depends on the organisms and the way they move and distribute themselves between habitat patches. For example, plants disperse passively, relying on other agents such as wind or water currents to move seeds to another patch. Passive dispersal can result in source–sink dynamics whenever the seeds land in a patch that cannot support the plant's growth or reproduction. Winds may continually deposit seeds there, maintaining a population even though the plants themselves do not successfully reproduce. Another good example for this case are soil protists. Soil protists also disperse passively, relying mainly on wind to colonize other sites. As a result, source–sink dynamics can arise simply because external agents dispersed protist propagules (e.g., cysts, spores), forcing individuals to grow in a poor habitat. In contrast, many organisms that disperse actively should have no reason to remain in a sink patch, provided the organisms are able to recognize it as a poor quality patch (see discussion of ecological traps). The reasoning behind this argument is that organisms are often expected to behave according to the "ideal free distribution", which describes a population in which individuals distribute themselves evenly among habitat patches according to how many individuals the patch can support. When there are patches of varying quality available, the ideal free distribution predicts a pattern of "balanced dispersal". In this model, when the preferred habitat patch becomes crowded enough that the average fitness (survival rate or reproductive success) of the individuals in the patch drops below the average fitness in a second, lower quality patch, individuals are expected to move to the second patch. However, as soon as the second patch becomes sufficiently crowded, individuals are expected to move back to the first patch. Eventually, the patches should become balanced so that the average fitness of the individuals in each patch and the rates of dispersal between the two patches are even. In this balanced dispersal model, the probability of leaving a patch is inversely proportional to the carrying capacity of the patch. In this case, individuals should not remain in sink habitat for very long, where the carrying capacity is zero and the probability of leaving is therefore very high. An alternative to the ideal free distribution and balanced dispersal models is when fitness can vary among potential breeding sites within habitat patches and individuals must select the best available site. This alternative has been called the "ideal preemptive distribution", because a breeding site can be preempted if it has already been occupied. For example, the dominant, older individuals in a population may occupy all of the best territories in the source so that the next best territory available may be in the sink. 
As the subordinate, younger individuals age, they may be able to take over territories in the source, but new subordinate juveniles from the source will have to move to the sink. Pulliam argued that such a pattern of dispersal can maintain a large sink population indefinitely. Furthermore, if good breeding sites in the source are rare and poor breeding sites in the sink are common, it is even possible that the majority of the population resides in the sink. == Importance in ecology == The source–sink model of population dynamics has made contributions to many areas in ecology. For example, a species' niche was originally described as the environmental factors required by a species to carry out its life history, and a species was expected to be found only in areas that met these niche requirements. This concept of a niche was later termed the "fundamental niche", and described as all of the places a species could successfully occupy. In contrast, the "realized niche" was described as all of the places a species actually did occupy, and was expected to be less than the extent of the fundamental niche as a result of competition with other species. However, the source–sink model demonstrated that the majority of a population could occupy a sink which, by definition, did not meet the niche requirements of the species, and was therefore outside the fundamental niche (see Figure 2). In this case, the realized niche was actually larger than the fundamental niche, and ideas about how to define a species' niche had to change. Source–sink dynamics has also been incorporated into studies of metapopulations, a group of populations residing in patches of habitat. Though some patches may go extinct, the regional persistence of the metapopulation depends on the ability of patches to be re-colonized. As long as there are source patches present for successful reproduction, sink patches may allow the total number of individuals in the metapopulation to grow beyond what the source could support, providing a reserve of individuals available for re-colonization. Source–sink dynamics also has implications for studies of the coexistence of species within habitat patches. Because a patch that is a source for one species may be a sink for another, coexistence may actually depend on immigration from a second patch rather than the interactions between the two species. Similarly, source–sink dynamics may influence the regional coexistence and demographics of species within a metacommunity, a group of communities connected by the dispersal of potentially interacting species. Finally, the source–sink model has greatly influenced ecological trap theory, a model in which organisms prefer sink habitat over source habitat. Besides acting as ecological traps, sink habitats may also vary in their response to major disturbances, and colonization of sink habitat may allow a species to survive even if the population in the source habitat goes extinct after a catastrophic event, which may substantially increase metapopulation stability. == Conservation == Land managers and conservationists have become increasingly interested in preserving and restoring high quality habitat, particularly where rare, threatened, or endangered species are concerned. As a result, it is important to understand how to identify or create high quality habitat, and how populations respond to habitat loss or change. Because a large proportion of a species' population could exist in sink habitat, conservation efforts may misinterpret the species' habitat requirements. 
Similarly, without considering the presence of a trap, conservationists might mistakenly preserve trap habitat under the assumption that an organism's preferred habitat was also good quality habitat. Simultaneously, source habitat may be ignored or even destroyed if only a small proportion of the population resides there. Degradation or destruction of the source habitat will, in turn, impact the sink or trap populations, potentially over large distances. Finally, efforts to restore degraded habitat may unintentionally create an ecological trap by giving a site the appearance of quality habitat, but which has not yet developed all of the functional elements necessary for an organism's survival and reproduction. For an already threatened species, such mistakes might result in a rapid population decline toward extinction. In considering where to place reserves, protecting source habitat is often assumed to be the goal, although if the cause of a sink is human activity, simply designating an area as a reserve has the potential to convert current sink patches to source patches (e.g. no-take zones). Either way, determining which areas are sources or sinks for any one species may be very difficult, and an area that is a source for one species may be unimportant to others. Finally, areas that are sources or sinks currently may not be in the future as habitats are continually altered by human activity or climate change. Few areas can be expected to be universal sources, or universal sinks. While the presence of source, sink, or trap patches must be considered for short-term population survival, especially for very small populations, long-term survival may depend on the creation of networks of reserves that incorporate a variety of habitats and allow populations to interact. == See also == Conservation biology Ecological trap Ecology Landscape ecology List of ecology topics Metapopulation Perceptual trap Population dynamics Population ecology Population viability analysis Refuge (ecology) == References == == Further reading ==
Wikipedia/Source–sink_dynamics
Any action or influence that species have on each other is considered a biological interaction. These interactions between species can be considered in several ways. One such way is to depict interactions in the form of a network, which identifies the members and the patterns that connect them. Species interactions are considered primarily in terms of trophic interactions, which depict which species feed on others. Currently, ecological networks that integrate non-trophic interactions are being built. The types of interactions they can contain can be classified into six categories: mutualism, commensalism, neutralism, amensalism, antagonism, and competition. Observing and estimating the fitness costs and benefits of species interactions can be very problematic. The way interactions are interpreted can profoundly affect the ensuing conclusions. == Interaction characteristics == Characterization of interactions can be made according to various measures, or any combination of them. Prevalence Prevalence identifies the proportion of the population affected by a given interaction, and thus quantifies whether it is relatively rare or common. Generally, only common interactions are considered. Negative/Positive Whether the interaction is beneficial or harmful to the species involved determines the sign of the interaction, and what type of interaction it is classified as. To establish whether interactions are harmful or beneficial, careful observational and/or experimental studies can be conducted in an attempt to establish the cost/benefit balance experienced by the members. Strength The sign of an interaction does not capture the impact on fitness of that interaction. One example of this is antagonism, in which predators may have a much stronger impact on their prey species (death) than parasites do (reduction in fitness). Similarly, positive interactions can produce anything from a negligible change in fitness to a life or death impact. Relationship in space and time The relationship in space and time is not currently considered within a network structure, though it has been observed by naturalists for centuries. It would be highly informative to include geographical proximity, duration, and seasonal patterns of interactions in network analysis. == Importance of interactions == In the same way that a trophic cascade can occur, it is expected that 'interaction cascades' take place. Thus, it should be possible to construct 'effect' networks which parallel in many ways the energy or matter networks common in the literature. By assessing the network topology and constructing models, we might better understand how interacting species affect each other and how these effects spread through the network. In certain instances, it has been shown that indirect trophic effects tend to dominate direct ones (Patten, 1995)—perhaps this pattern will also emerge in non-trophic interactions. == Keystone species == By analyzing network structures, one can determine keystone species that are of particular importance. A different class of keystone species is what are termed 'ecosystem engineers'. Certain organisms alter the environment so drastically that it affects many interactions that take place within a habitat. This term is used for organisms that "directly or indirectly modulate availability of resources (other than themselves) to other species, by causing physical state changes in biotic or abiotic materials". Beavers are an example of such engineers. 
Other examples include earthworms, trees, coral reefs, and planktonic organisms. Such 'network engineers' can be seen as "interaction modifiers", meaning that a change in their population density affects the interactions between two or more other species. == Interesting examples == Certain interactions may be particularly problematic to understand. These may include Wolbachia Beneficial endosymbionts Vectors Viruses == Criticisms == Can the complexities of biology ever be captured in schematics? How do we accurately detect and evaluate non-visible interactions? How much predictive power do these networks have for population dynamics? == References ==
Wikipedia/Non-trophic_networks
The generalized Lotka–Volterra equations are a set of equations which are more general than either the competitive or predator–prey examples of Lotka–Volterra types. They can be used to model direct competition and trophic relationships between an arbitrary number of species. Their dynamics can be analysed analytically to some extent. This makes them useful as a theoretical tool for modeling food webs. However, they lack features of other ecological models such as predator preference and nonlinear functional responses, and they cannot be used to model mutualism without allowing indefinite population growth. The generalised Lotka-Volterra equations model the dynamics of the populations x 1 , x 2 , … {\displaystyle x_{1},x_{2},\dots } of n {\displaystyle n} biological species. Together, these populations can be considered as a vector x {\displaystyle \mathbf {x} } . They are a set of ordinary differential equations given by d x i d t = x i f i ( x ) , {\displaystyle {\frac {dx_{i}}{dt}}=x_{i}f_{i}(\mathbf {x} ),} where the vector f {\displaystyle \mathbf {f} } is given by f = r + A x , {\displaystyle \mathbf {f} =\mathbf {r} +A\mathbf {x} ,} where r {\displaystyle \mathbf {r} } is a vector and A {\displaystyle A} is a matrix known as the interaction matrix. == Meaning of parameters == The generalised Lotka-Volterra equations can represent competition and predation, depending on the values of the parameters, as described below. "Generalized" means that all the combinations of pairs of signs for both species (−/−,−/+,+/-, +/+) are possible. They are less suitable for describing mutualism. The values of r {\displaystyle \mathbf {r} } are the intrinsic birth or death rates of the species. A positive value for r i {\displaystyle r_{i}} means that species i is able to reproduce in the absence of any other species (for instance, because it is a plant that is wind pollinated), whereas a negative value means that its population will decline unless the appropriate other species are present (e.g. a herbivore that cannot survive without plants to eat, or a predator that cannot persist without its prey). The values of the elements of the interaction matrix A {\displaystyle A} represent the relationships between the species. The value of a i j {\displaystyle a_{ij}} represents the effect that species j has upon species i. The effect is proportional to the populations of both species, as well as to the value of a i j {\displaystyle a_{ij}} . Thus, if both a i j {\displaystyle a_{ij}} and a j i {\displaystyle a_{ji}} are negative then the two species are said to be in direct competition with one another, since they each have a direct negative effect on the other's population. If a i j {\displaystyle a_{ij}} is positive but a j i {\displaystyle a_{ji}} is negative then species i is considered to be a predator (or parasite) on species j, since i's population grows at j's expense. Positive values for both a i j {\displaystyle a_{ij}} and a j i {\displaystyle a_{ji}} would be considered mutualism. However, this is not often used in practice, because it can make it possible for both species' populations to grow indefinitely. Indirect negative and positive effects are also possible. For example, if two predators eat the same prey then they compete indirectly, even though they might not have a direct competition term in the community matrix. The diagonal terms a i i {\displaystyle a_{ii}} are usually taken to be negative (i.e. species i's population has a negative effect on itself). 
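For a concrete feel of the dynamics, the equations dx_i/dt = x_i (r_i + Σ_j a_ij x_j) can be integrated with a simple forward Euler step. The two-species parameter values below (a prey with positive intrinsic growth, a predator with negative intrinsic growth, both with negative diagonal terms) are arbitrary illustrative assumptions, not taken from any particular system.

import numpy as np

# Generalized Lotka-Volterra sketch: dx_i/dt = x_i * (r_i + sum_j A[i, j] * x_j).
# Species 0 is a self-limited prey (r > 0), species 1 a predator (r < 0).
# All parameter values are illustrative assumptions.
r = np.array([1.0, -0.5])            # intrinsic growth/decline rates
A = np.array([[-0.2, -0.4],          # negative diagonal entries: self-limitation
              [ 0.3, -0.1]])         # A[1, 0] > 0: the predator gains from the prey

x = np.array([5.0, 2.0])             # initial populations
dt = 0.01
for _ in range(50_000):              # integrate 500 time units with forward Euler
    x = x + dt * x * (r + A @ x)

print("populations after 500 time units:", x)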
This self-limitation prevents populations from growing indefinitely. == Dynamics and solutions == The generalised Lotka-Volterra equations are capable of a wide variety of dynamics, including limit cycles and chaos as well as point attractors (see Hofbauer and Sigmund). As with any set of ODEs, fixed points can be found by setting d x i / d t {\displaystyle dx_{i}/dt} to 0 for all i, which gives, if no species is extinct, i.e., if x i ≠ 0 {\displaystyle x_{i}\neq 0} for all i {\displaystyle i} , x = − A − 1 r . {\displaystyle \mathbf {x} =-A^{-1}\mathbf {r} .} This may or may not have positive values for all the x i {\displaystyle x_{i}} ; if it does not, then there is no stable attractor for which the populations of all species are positive. If there is a fixed point with all positive populations the Jacobian matrix in a neighbourhood of the fixed point x {\displaystyle \mathbf {x} } is given by diag ⁡ ( x ) A {\displaystyle \operatorname {diag} (\mathbf {x} )A} . This matrix is known as the community matrix and its eigenvalues determine the stability of the fixed point x {\displaystyle \mathbf {x} } . The fixed point may or may not be stable. If the fixed point is unstable then there may or may not be a periodic or chaotic attractor for which all the populations remain positive. In either case there can also be attractors for which some of the populations are zero and others are positive. x = ( 0 , 0 , … 0 ) {\displaystyle \mathbf {x} =(0,0,\dots 0)} is always a fixed point, corresponding to the absence of all species. For n = 2 {\displaystyle n=2} species, a complete classification of this dynamics, for all sign patterns of above coefficients, is available, which is based upon equivalence to the 3-type replicator equation. == Applications for single trophic communities == In the case of a single trophic community, the trophic level below the one of the community (e.g. plants for a community of herbivore species), corresponding to the food required for individuals of a species i to thrive, is modeled through a parameter Ki known as the carrying capacity. E.g. suppose a mixture of crops involving S species. In this case a i j {\displaystyle a_{ij}} can be thus written in terms of a non-dimensional interaction coefficient a ^ i j {\displaystyle {\hat {a}}_{ij}} : a ^ i j = a i j K i / r i {\displaystyle {\hat {a}}_{ij}=a_{ij}K_{i}/r_{i}} . === Quantitative prediction of species yields from monoculture and biculture experiments === A straightforward procedure to get the set of model parameters { K i , a ^ i j } {\displaystyle \{K_{i},{\hat {a}}_{ij}\}} is to perform, until the equilibrium state is attained: a) the S single species or monoculture experiments, and from each of them to estimate the carrying capacities as the yield of the species i in monoculture K i = m i e x {\displaystyle K_{i}=m_{i}^{ex}} (the superscript ‘ex’ is to emphasize that this is an experimentally measured quantity a); b) the S´(S-1)/2 pairwise experiments producing the biculture yields, x i ( j ) e x {\displaystyle x_{i(j)}^{ex}} and x j ( i ) e x {\displaystyle x_{j(i)}^{ex}} (the subscripts i(j) and j(i) stand for the yield of species i in presence of species j and vice versa). We then can obtain a ^ i j {\displaystyle {\hat {a}}_{ij}} and a ^ j i {\displaystyle {\hat {a}}_{ji}} , as: a ^ i j = ( x i ( j ) e x − m i e x ) / x j ( i ) e x , a ^ j i = ( x j ( i ) e x − m j e x ) / x i ( j ) e x . 
{\displaystyle {\hat {a}}_{ij}=(x_{i(j)}^{ex}-m_{i}^{ex})/x_{j(i)}^{ex},{\hat {a}}_{ji}=(x_{j(i)}^{ex}-m_{j}^{ex})/x_{i(j)}^{ex}.} Using this procedure it was observed that the Generalized Lotka–Volterra equations can predict with reasonable accuracy most of the species yields in mixtures of S > 2 species for the majority of a set of 33 experimental treatments across different taxa (algae, plants, protozoa, etc.). === Early warnings of species crashes === The vulnerability of species richness to several factors, such as climate change, habitat fragmentation, and resource exploitation, poses a challenge to conservation biologists and agencies working to sustain ecosystem services. Hence, there is a clear need for early warning indicators of species loss generated from empirical data. A recently proposed early warning indicator of such population crashes uses effective estimation of the Lotka-Volterra interaction coefficients a ^ i j {\displaystyle {\hat {a}}_{ij}} . The idea is that such coefficients can be obtained from spatial distributions of individuals of the different species through Maximum Entropy. This method was tested against the data collected for trees by the Barro Colorado Island Research Station, comprising eight censuses performed every 5 years from 1981 to 2015. The main finding was that for those tree species that suffered steep population declines (of at least 50%) across the eight tree censuses, the drop in a ^ i i {\displaystyle {\hat {a}}_{ii}} is always steeper and occurs before the drop in the corresponding species abundance Ni. Indeed, such sharp declines in a ^ i i {\displaystyle {\hat {a}}_{ii}} occur between 5 and 15 years in advance of comparable declines in Ni, and thus serve as early warnings of impending population crashes. == See also == Competitive Lotka–Volterra equations, based on a sigmoidal population curve (i.e., it has a carrying capacity) Predator–prey Lotka–Volterra equations, based on exponential population growth (i.e., no limits on reproduction ability) Random generalized Lotka–Volterra model Consumer-resource model Community matrix Replicator equation Volterra lattice == References ==
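As a worked illustration of the monoculture/biculture calibration procedure described above, the following sketch computes carrying capacities and non-dimensional interaction coefficients for two species. Every yield value is hypothetical and chosen only to make the arithmetic visible; no experimental data set is being reproduced.

# Calibrating a two-species GLV model from monoculture and biculture yields.
# All yield values are hypothetical illustrations.

m = {1: 120.0, 2: 80.0}            # monoculture yields m_i^ex, i.e. the carrying capacities K_i
x = {(1, 2): 90.0, (2, 1): 50.0}   # x[(i, j)]: yield of species i grown together with species j

# Non-dimensional coefficients: a_hat_ij = (x_i(j) - m_i) / x_j(i).
a_hat = {
    (1, 2): (x[(1, 2)] - m[1]) / x[(2, 1)],
    (2, 1): (x[(2, 1)] - m[2]) / x[(1, 2)],
}

print(a_hat)   # both values are negative here, i.e. each species suppresses the other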
Wikipedia/Generalized_Lotka–Volterra_equation
Action assembly theory is a communication theory that emphasizes psychological and social influences on human action. The goal is to examine and describe the links between cognition and behavior – how an individual's thoughts get transformed into action. It was developed by John Greene. == Definition == Action assembly theory describes the production of behavior in two essential processes: the retrieval of procedural elements from long-term memory, and the organization of these elements to form an output representation of the action to be taken. The theory seeks to explain message behavior, both verbal and nonverbal. == Retrieval of procedural elements == The idea of the procedural record is at the center of action assembly. A procedural record is a personal nugget of truth about past behavior stockpiled for future use, part of an individual's memory system in which information about how to execute various behaviors is stored. Procedural records contain information about actions, outcomes, and situations; for example, traveling at excessive speed (action) in a zone that specifies a low speed limit (situation) can result in the issuance of a ticket (outcome). Procedural records have different levels of strength. Some are mere scratches that barely leave a trace in the mind, while others are well worn into long-term memory. A central aspect of action assembly theory is specifying the processes that link procedural records to behavioral representations. The activation process is the process used to select particular procedural records. For example, if a parent disciplined a child for stealing, all procedural records relevant to this goal and situation would be activated. In turn, if a common disciplinary tactic was to take away toys and play items, a procedural record of that would be activated quickly. == Organization of procedural elements == It is also important to consider the process of assembly, which organizes records into a behavioral representation. Assembly is considered a top-down process that begins with a general strategy and moves to more specific ideas about communicating the specific message. Action assembly theory has been useful for topics such as speech onset latency and hesitations during speaking, which are assumed to be indicators of cognitive processing. Another use is the study of planning – individuals who plan more effectively are more fluent than those who do not, because planning reduces the cognitive load at the time of message production. 
== Use == When an interaction situation has multiple goals, the theory predicts increased demands on an individual's information-processing capacity. Assembly of goals may be difficult because a specific goal may be incompatible with behaviors associated with the other goals. In turn, messages serving multiple goals involve more speech hesitations and latencies. Action assembly theory can provide a clear opportunity to plan or assemble goals more carefully in order to mitigate this effect. == References == == Further reading == Athay, M., & Darley, J.M. (1981). Toward an interaction-centered theory of personality. In N. Cantor & J.F. Kihlstrom (Eds.), Personality, cognition, and social interaction (pp. 281–308). Hillsdale, NJ: Lawrence Erlbaum. Coulmas, F. (1981). Introduction: Conversational routine. In F. Coulmas (Ed.). Conversational routine: Explorations in standardized communication situations and prepatterned speech (pp. 1–17). The Hague: Mouton. Greene, J. (1984). A cognitive approach to human communication theory: An action assembly theory. Communication Monographs, 51, 289–306. Greene, J. (1989). The stability of non-verbal behavior: An action production approach to cross-situation consistency and discriminativeness. Journal of Language and Social Psychology, 8, 193–200. Greene, J. (1984). Evaluating cognitive explanations of communication phenomena. Quarterly Journal of Speech, 70, 241–254. Miller, K. (2005). Communication theories, perspectives, processes, and contexts. New York, NY: McGraw Hill. Norman, D.A. (1980). Twelve issues for cognitive science. Cognitive Science, 4, 1–32. Schmidt, R.A. (1975). A schema theory of discrete motor skill learning. Psychological Review, 82, 225–260.
Wikipedia/Action_assembly_theory
Networks in labor economics refers to the effect social networks have on jobseekers obtaining employment. Research suggests that around half of the employed workforce found their jobs through social contacts. It is believed that social networks not only contribute to the efficiency of job searching but can also explain, at least partly, wage differences and other inequalities in the workforce. Various models are used to quantify this effect, all having their own strengths and weaknesses. Models generally have to simplify the complex nature of social networks. == The model of Calvo-Armengol and Jackson == Economic models of the role of social networks in job searching often use exogenous job networks. Using this framework, Calvo-Armengol and Jackson were able to point out some network-related labor market issues. === The model === In their basic model, in which they attempt to formalize the transmission of job information among individuals, agents are either employed, earning a non-zero wage, or unemployed with zero wage. Agents can get information about a job, and when they do so, they can decide whether to keep that information for themselves or pass it to their contacts. In the other phase, employed agents can lose their job with a given probability. === Implications === An important implication of their model is that if someone who is employed receives information about a job, she will pass it to her unemployed acquaintances, who will then become employed. Therefore, there is a positive correlation between the labor outcomes of an individual and those of her contacts. On the other hand, the model can also give an explanation for long-term unemployment: if someone's acquaintances are unemployed as well, she has less chance of hearing about job opportunities. They also conclude that different initial wages and employment levels can cause different drop-out rates from the labor market, and thus can explain the existence of wage inequalities across social groups. Calvo-Armengol and Jackson prove that position in the network and the structure of the network affect the probability of being unemployed as well. == Referral based job search == The effectiveness of job searching with personal contacts is a consequence not only of the individuals' behavior but of the employers' behavior as well. Employers often choose to hire acquaintances of their current employees instead of using a bigger pool of applicants. This is due to information asymmetry: employers hardly know anything about the productivity of an applicant, and revealing it would be rather time-consuming and expensive. However, current employees might be aware of both their contacts' unobserved characteristics and the specific expectations of employers, so they can reduce this imbalance. Another benefit for the firm is that, due to the personal bond, present employees are motivated to choose a candidate who will perform well, since after the recommendation, their reputation is also at stake. Dustmann, Glitz and Schönberg showed that using personal connections in job search increases the initial wage and decreases the probability of leaving the firm. Referral-based job networks can function even if there is no direct link between the referee and the potential worker. In the model of Finneran and Kelly, there is a hierarchical network in which workers have the opportunity to refer their acquaintances if their employer is hiring. Workers are referred for a job with a probability that increases with their ability and productivity. 
In a hierarchical model like this, workers at lower levels, far from the information, may never get an offer. However, the authors have shown that there is a threshold of this referral probability above which even skilled workers who are low in the hierarchy can be referred. Thus there is a critical density of referral linkages below which no qualified workers can be referred; if the density of these linkages is high enough, however, all qualified workers will be matched with a job, regardless of their position in the network. == References ==
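The information-passing mechanism described in the Calvo-Armengol and Jackson section above, and the resulting correlation in employment among connected agents, can be explored with a very simple agent-based sketch. The network, the update rules, and all probability values below are a loose illustrative reading of that setup, not a faithful reproduction of the published model.

import random

# Toy job-information process: each period an agent hears of a vacancy with
# probability p_hear; if already employed, the agent passes the offer to a
# randomly chosen unemployed neighbour; employed agents lose their job with
# probability p_break. All parameters and the network are illustrative.
random.seed(0)

neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}  # a triangle and a pair
employed = {i: False for i in neighbours}
p_hear, p_break = 0.15, 0.05
periods = 20_000
employed_time = {i: 0 for i in neighbours}

for _ in range(periods):
    for i in neighbours:
        if random.random() < p_hear:
            if not employed[i]:
                employed[i] = True                         # take the job directly
            else:
                idle = [j for j in neighbours[i] if not employed[j]]
                if idle:
                    employed[random.choice(idle)] = True   # refer a contact
    for i in neighbours:
        if employed[i] and random.random() < p_break:
            employed[i] = False
        employed_time[i] += employed[i]

print({i: round(employed_time[i] / periods, 3) for i in neighbours})
# Agents in the better-connected triangle tend to show higher long-run employment
# rates than the pair, illustrating the positive correlation between a worker's
# labor outcomes and those of her contacts.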
Wikipedia/Networks_in_labor_economics
Giant Global Graph (GGG) is a name coined in 2007 by Tim Berners-Lee to help distinguish between the nature and significance of the content on the existing World Wide Web and that of a promulgated next-generation web, presumptively named Web 3.0. In common usage, "World Wide Web" refers primarily to a web of discrete information objects readable by human beings, with functional linkages provided between them by human-created hyperlinks. Next-generation Web 3.0 information designs go beyond the discrete web pages of previous generations by emphasizing the metadata which describe information objects like web pages and attribute the relationships that conceptually or semantically link the information objects to each other. Additionally, Web 3.0 technologies and designs enable the organization of entirely new kinds of human- and machine-created data objects. An important related concept that overlaps with Giant Global Graph without fully encompassing it is that of the Semantic Web. Social networking services are one of the earliest and best-known examples of this distinction. In a Social Network, the information about relationships between people, and the kinds of data objects those people share, is at least as important as the data objects themselves. In addition, participants in a Social Network create new kinds of data that did not exist on the web before, such as their Likes for other people's comments and content. Currently, these new kinds of data are primarily structured and mediated by the proprietary systems of companies like Facebook. In the ideal future of the decentralized Giant Global Graph or Semantic Web, such information would be structured in such a way that it could be readable by many different systems and dynamically organized into many different user-readable formats. The GGG concept also relates to the Decentralization of Internet Information, whereby properly-formatted semantic web data objects can be organized and their relationships discerned by any computer on the Internet, rather than solely being organized by large centralized systems such as Facebook and Google. For instance, people using the FOAF protocol to organize information on websites or other Internet nodes can define and interact with their social networks without necessarily requiring the intervention of centralized systems like Facebook. Crucially, where the term Web 3.0 refers to a suite of technologies and to a particular phase in the development of the web, the term Giant Global Graph is intended to refer more generally to the total environment of information that will be generated and sustained through the implementation of these technologies. This environment will be qualitatively different from that which existed before the development of these technologies. As of 2017, anticipated progress toward a pervasive semantic web has been side-tracked by the widespread application of machine learning technologies to process existing, unstructured data and content, and it is no longer clear whether a Web 3.0 epoch will materialize as originally envisioned. == History == The term Giant Global Graph was notably first used by the inventor of the World Wide Web, Tim Berners-Lee, on his blog. There, Berners-Lee considered the social network itself that exists inside and between social-networking Web sites such as Facebook, and suggested that people could use the word "Graph" to distinguish these from the "Web". 
He then noted that, although he had called this graph the Semantic Web, perhaps it should have been called the "Giant Global Graph". "GGG" has since been used several times by Berners-Lee and by others in other blogs. GGG may be described as the content plus pointers of the WWW transitioning to content plus pointers plus relationships plus descriptions. Notably, the Giant Global Graph concept seems to have been a significant input into Facebook's concept and name for their "Open Graph" project and protocol, which is their effort to spread their approach to social networking beyond the bounds of the Facebook website, allowing a broader network or "graph" of connections between Facebook users, and between Facebook users and the Internet data objects which interest them. == See also == peer-to-peer distributed system == References ==
Wikipedia/Giant_Global_Graph
A collaborative innovation network (CoIN) is a collaborative innovation practice that uses internet platforms to promote communication and innovation within self-organizing virtual teams. == Overview == CoINs work across hierarchies and boundaries, where members can exchange ideas and information directly and openly. This collaborative and transparent environment fosters innovation. Peter Gloor describes the phenomenon as "swarm creativity". He says, "CoINs are the best engines to drive innovation." CoINs existed well before the advent of modern communication technology. However, the Internet and instant communication improved productivity and enabled reach on a global scale. Today, they rely on the Internet, e-mail, and other communications vehicles for information sharing. According to Gloor, CoINs have five main characteristics: Dispersed membership: technology allows members to be spread worldwide. Regardless of the location, members share a common goal and are convinced of their cause. Interdependent membership: cooperation between members is critical to achieving a common goal. The work of one member is affected by, and interdependent with, the others' work. No simple chain of command: there is no superior authority. It is a decentralized and self-organized system. Conflicts are solved without the need for a hierarchy or authority. Common goal: members are willing to contribute, work and share freely. They are intrinsically motivated to donate their work, create, and share knowledge in favor of a common goal. Reliance on trust: cooperative behavior and mutual trust are needed to work efficiently within the network. Members act according to an ethical code that states the rules and principles to be followed by all members. Usually, moral codes include regulations related to respect, consistency, reciprocity, and rationality. There are also five essential elements of collaborative innovation networks (which Gloor calls "genetic code"): They are learning networks, and set an informal and flexible environment that facilitates and stimulates collaboration and the exchange of ideas, information, and knowledge. Their members agree on a moral code that guides member conduct and behavior. They are based on trust and self-organization. Members trust each other without needing centralized management, and are brought together by mutual respect and a strong sense of shared beliefs. They make knowledge accessible to everyone. They operate with internal honesty and transparency, which forms a system based on reciprocal trust and mutually established principles. == Examples == CoINs have been developing many disruptive innovations such as the Internet, Linux, the Web and Wikipedia. Students with little or no budget created these inventions in universities or labs. They were not focused on the money but on the sense of accomplishment. Faced with creations like the Internet, large companies such as IBM and Intel have learned to use the principles of open innovation to enhance their research learning curve. They increased or established collaborations with universities, agencies, and small companies to accelerate their processes and launch new services faster. == Collaborative innovation network factors == Asheim and Isaksen (2002) conclude that innovative networks contribute to the optimal allocation of resources and promote knowledge transfer performance. 
However, four factors of collaborative innovation networks affect the performance of CoINs differently: Network size is the number of partners such as enterprises, universities, research institutions, intermediaries, and government departments in an innovative network. Previous work reveals that network size has a positive effect on knowledge transfer as it provides the actor (e.g., firm) with two significant substantive benefits: one is the exposure to a more significant amount of external information, knowledge, and ideas and the other is resource sharing between the actor and its contacts such as knowledge sharing, reduction of transaction costs, complementarities, and scale. Network heterogeneity: network heterogeneity refers to differences in the knowledge, technology, ability, and size of members in the network. Firms in a more heterogeneous network are more likely to acquire external knowledge resources. When network heterogeneity is higher, getting complementary resources and accelerating the speed of knowledge transfer is easier. Network tie-strength refers to the nature of a relational contact and includes the degree of intimacy, duration, and frequency; the breadth of topic usually refers to time length, tie depth, emotional intensity, intimacy frequency, and interactive connection. A collaborative, innovative network with a high level of tie-strength can provide firms with practical information and knowledge, reduce risk and uncertainty in the innovation process, and achieve successful knowledge transfer. Network centrality refers to an actor's position in a network. Actors centrally located in a network are in an advantageous position to monitor the flow of information and have the consequent advantage of having large numbers of contacts willing and able to provide them with meaningful opportunities and resources. == Current challenges == Collaborative innovation still needs to be empowered. A more collaborative approach involving stakeholders such as governments, corporations, entrepreneurs, and scholars is critical to tackling today's main challenges. == See also == General theory of collaboration: Collective intelligence • Polytely • Swarm intelligence Open politics • Symbolic interactionism Commons-based peer production Community of practice == References == == Further reading == Peter Gloor and Scott Cooper (2007) Coolhunting: Chasing Down the Next Big Thing. ISBN 0-8144-7386-5 Silvestre, B. S., Dalcol, P. R. T. (2009) Geographical proximity and innovation: Evidence from the Campos Basin oil & gas industrial agglomeration — Brazil. Technovation, Vol. 29 (8), pp. 546–561. Gillett, A.G. and Smith, G., 2015. Creativities, innovation, and networks in garage punk rock: A case study of the Eruptörs. Activate A Journal of Entrepreneurship in the Arts, pp. 9–24. artivate.hida.asu.edu/index.php/artivate/article/download/82/36 == External links == fido ('fearless innovation designed online') - collaborative innovation system Ethical Issues in Collaborative Innovation Networks by Peter A. Gloor, Carey Heckman, & Fillia Makedon. "Advances in Interdisciplinary Studies of Work Teams" "Transforming Government Through Collaborative Innovation" "The Future of Work" "Global University System with Globally Collaborative Innovation Network" "Network Plasticity and Collaborative Innovation" "Performance Based Integrated Innovation Management System" proposal
Wikipedia/Collaborative_innovation_network
Exponential family random graph models (ERGMs) are a set of statistical models used to study the structure and patterns within networks, such as those in social, organizational, or scientific contexts. They analyze how connections (edges) form between individuals or entities (nodes) by modeling the likelihood of network features, like clustering or centrality, across diverse examples including knowledge networks, organizational networks, colleague networks, social media networks, networks of scientific collaboration, and more. Part of the exponential family of distributions, ERGMs help researchers understand and predict network behavior in fields ranging from sociology to data science. == Background == Many metrics exist to describe the structural features of an observed network such as the density, centrality, or assortativity. However, these metrics describe the observed network, which is only one instance of a large number of possible alternative networks. This set of alternative networks may have similar or dissimilar structural features. To support statistical inference on the processes influencing the formation of network structure, a statistical model should consider the set of all possible alternative networks weighted on their similarity to an observed network. However, because network data is inherently relational, it violates the assumptions of independence and identical distribution made by standard statistical models like linear regression. Alternative statistical models should reflect the uncertainty associated with a given observation, permit inference about the relative frequency of network substructures of theoretical interest, disambiguate the influence of confounding processes, efficiently represent complex structures, and link local-level processes to global-level properties. Degree-preserving randomization, for example, is a specific way in which an observed network could be considered in terms of multiple alternative networks. == Definition == The exponential family is a broad family of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks. Formally, a random graph Y ∈ Y {\displaystyle Y\in {\mathcal {Y}}} consists of a set of n {\displaystyle n} nodes and a collection of tie variables { Y i j : i = 1 , … , n ; j = 1 , … , n } {\displaystyle \{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}} , indexed by pairs of nodes i j {\displaystyle ij} , where Y i j = 1 {\displaystyle Y_{ij}=1} if the nodes ( i , j ) {\displaystyle (i,j)} are connected by an edge and Y i j = 0 {\displaystyle Y_{ij}=0} otherwise. A pair of nodes i j {\displaystyle ij} is called a dyad and a dyad is an edge if Y i j = 1 {\displaystyle Y_{ij}=1} . The basic assumption of these models is that the structure in an observed graph y {\displaystyle y} can be explained by a given vector of sufficient statistics s ( y ) {\displaystyle s(y)} which are a function of the observed network and, in some cases, nodal attributes. This way, it is possible to describe any kind of dependence between the dyadic variables: P ( Y = y | θ ) = exp ( θ T s ( y ) ) c ( θ ) , ∀ y ∈ Y {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},\quad \forall y\in {\mathcal {Y}}} where θ {\displaystyle \theta } is a vector of model parameters associated with s ( y ) {\displaystyle s(y)} and c ( θ ) = ∑ y ′ ∈ Y exp ( θ T s ( y ′ ) ) {\displaystyle c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))} is a normalising constant. 
These models represent a probability distribution on each possible network on n {\displaystyle n} nodes. However, the size of the set of possible networks for an undirected network (simple graph) of size n {\displaystyle n} is 2 n ( n − 1 ) / 2 {\displaystyle 2^{n(n-1)/2}} . Because the number of possible networks in the set vastly outnumbers the number of parameters which can constrain the model, the ideal probability distribution is the one which maximizes the Gibbs entropy. == Example == Let V = { 1 , 2 , 3 } {\displaystyle V=\{1,2,3\}} be a set of three nodes and let Y {\displaystyle {\mathcal {Y}}} be the set of all undirected, loopless graphs on V {\displaystyle V} . Loopless implies that for all i = 1 , 2 , 3 {\displaystyle i=1,2,3} it is Y i i = 0 {\displaystyle Y_{ii}=0} and undirected implies that for all i , j = 1 , 2 , 3 {\displaystyle i,j=1,2,3} it is Y i j = Y j i {\displaystyle Y_{ij}=Y_{ji}} , so that there are three binary tie variables ( Y 12 , Y 13 , Y 23 {\displaystyle Y_{12},Y_{13},Y_{23}} ) and 2 3 = 8 {\displaystyle 2^{3}=8} different graphs in this example. Define a two-dimensional vector of statistics by s ( y ) = [ s 1 ( y ) , s 2 ( y ) ] T {\displaystyle s(y)=[s_{1}(y),s_{2}(y)]^{T}} , where s 1 ( y ) = e d g e s ( y ) {\displaystyle s_{1}(y)=edges(y)} is defined to be the number of edges in the graph y {\displaystyle y} and s 2 ( y ) = t r i a n g l e s ( y ) {\displaystyle s_{2}(y)=triangles(y)} is defined to be the number of closed triangles in y {\displaystyle y} . Finally, let the parameter vector be defined by θ = ( θ 1 , θ 2 ) T = ( − ln ⁡ 2 , ln ⁡ 3 ) T {\displaystyle \theta =(\theta _{1},\theta _{2})^{T}=(-\ln 2,\ln 3)^{T}} , so that the probability of every graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} in this example is given by: P ( Y = y | θ ) = exp ⁡ ( − ln ⁡ 2 ⋅ e d g e s ( y ) + ln ⁡ 3 ⋅ t r i a n g l e s ( y ) ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))}{c(\theta )}}} We note that in this example, there are just four graph isomorphism classes: the graph with zero edges, three graphs with exactly one edge, three graphs with exactly two edges, and the graph with three edges. Since isomorphic graphs have the same number of edges and the same number of triangles, they also have the same probability in this example ERGM. For a representative y {\displaystyle y} of each isomorphism class, we first compute the term x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ e d g e s ( y ) + ln ⁡ 3 ⋅ t r i a n g l e s ( y ) ) {\displaystyle x(y)=\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))} , which is proportional to the probability of y {\displaystyle y} (up to the normalizing constant c ( θ ) {\displaystyle c(\theta )} ). If y {\displaystyle y} is the graph with zero edges, then it is e d g e s ( y ) = 0 {\displaystyle edges(y)=0} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 0 + ln ⁡ 3 ⋅ 0 ) = exp ⁡ ( 0 ) = 1. {\displaystyle x(y)=\exp(-\ln 2\cdot 0+\ln 3\cdot 0)=\exp(0)=1.} If y {\displaystyle y} is a graph with exactly one edge, then it is e d g e s ( y ) = 1 {\displaystyle edges(y)=1} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 1 + ln ⁡ 3 ⋅ 0 ) = exp ⁡ ( 0 ) exp ⁡ ( ln ⁡ 2 ) = 1 2 . 
{\displaystyle x(y)=\exp(-\ln 2\cdot 1+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)}}={\frac {1}{2}}.} If y {\displaystyle y} is a graph with exactly two edges, then it is e d g e s ( y ) = 2 {\displaystyle edges(y)=2} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 2 + ln ⁡ 3 ⋅ 0 ) = exp ⁡ ( 0 ) exp ⁡ ( ln ⁡ 2 ) 2 = 1 4 . {\displaystyle x(y)=\exp(-\ln 2\cdot 2+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)^{2}}}={\frac {1}{4}}.} If y {\displaystyle y} is the graph with exactly three edges, then it is e d g e s ( y ) = 3 {\displaystyle edges(y)=3} and t r i a n g l e s ( y ) = 1 {\displaystyle triangles(y)=1} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 3 + ln ⁡ 3 ⋅ 1 ) = exp ⁡ ( ln ⁡ 3 ) exp ⁡ ( ln ⁡ 2 ) 3 = 3 8 . {\displaystyle x(y)=\exp(-\ln 2\cdot 3+\ln 3\cdot 1)={\frac {\exp(\ln 3)}{\exp(\ln 2)^{3}}}={\frac {3}{8}}.} The normalizing constant is computed by summing x ( y ) {\displaystyle x(y)} over all eight different graphs y ∈ Y {\displaystyle y\in {\mathcal {Y}}} . This yields: c ( θ ) = 1 + 3 ⋅ 1 2 + 3 ⋅ 1 4 + 3 8 = 29 8 . {\displaystyle c(\theta )=1+3\cdot {\frac {1}{2}}+3\cdot {\frac {1}{4}}+{\frac {3}{8}}={\frac {29}{8}}.} Finally, the probability of every graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} is given by P ( Y = y | θ ) = x ( y ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {x(y)}{c(\theta )}}} . Explicitly, we get that the graph with zero edges has probability 8 29 {\displaystyle {\frac {8}{29}}} , every graph with exactly one edge has probability 4 29 {\displaystyle {\frac {4}{29}}} , every graph with exactly two edges has probability 2 29 {\displaystyle {\frac {2}{29}}} , and the graph with exactly three edges has probability 3 29 {\displaystyle {\frac {3}{29}}} in this example. Intuitively, the structure of graph probabilities in this ERGM example is consistent with typical patterns of social or other networks. The negative parameter ( θ 1 = − ln ⁡ 2 {\displaystyle \theta _{1}=-\ln 2} ) associated with the number of edges implies that - all other things being equal - networks with fewer edges have a higher probability than networks with more edges. This is consistent with the sparsity that is often found in empirical networks, namely that the empirical number of edges typically grows at a slower rate than the maximally possible number of edges. The positive parameter ( θ 2 = ln ⁡ 3 {\displaystyle \theta _{2}=\ln 3} ) associated with the number of closed triangles implies that - all other things being equal - networks with more triangles have a higher probability than networks with fewer triangles. This is consistent with a tendency for triadic closure that is often found in certain types of social networks. Compare these patterns with the graph probabilities computed above. The addition of every edge divides the probability by two. However, when going from a graph with two edges to the graph with three edges, the number of triangles increases by one - which additionally multiplies the probability by three. We note that the explicit calculation of all graph probabilities is only possible since there are so few different graphs in this example. Since the number of different graphs scales exponentially in the number of tie variables - which in turn scales quadratically in the number of nodes -, computing the normalizing constant is in general computationally intractable, even for a moderate number of nodes.
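The worked example above can be verified by brute force; the following sketch (illustrative Python, not from the original text) enumerates all 2³ = 8 graphs on three nodes and reproduces c(θ) = 29/8 as well as the probabilities 8/29, 4/29, 2/29 and 3/29:

```python
from itertools import product
from math import exp, log

theta_edges, theta_triangles = -log(2), log(3)

weights = {}
for y12, y13, y23 in product((0, 1), repeat=3):        # the three tie variables
    edges = y12 + y13 + y23
    triangles = 1 if edges == 3 else 0                 # only the complete graph contains a triangle
    weights[(y12, y13, y23)] = exp(theta_edges * edges + theta_triangles * triangles)

c = sum(weights.values())
print(c)                                               # 3.625 == 29/8
for y, w in sorted(weights.items()):
    print(y, round(w / c, 4))                          # 0.2759 = 8/29, 0.1379 = 4/29, 0.069 = 2/29, 0.1034 = 3/29
```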
== Sampling from an ERGM == Exact sampling from a given ERGM is computationally intractable in general since computing the normalizing constant requires summation over all y ∈ Y {\displaystyle y\in {\mathcal {Y}}} . Efficient approximate sampling from an ERGM can be done via Markov chains and is applied in current methods to approximate expected values and to estimate ERGM parameters. Informally, given an ERGM on a set of graphs Y {\displaystyle {\mathcal {Y}}} with probability mass function P ( Y = y | θ ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}}} , one selects an initial graph y ( 0 ) ∈ Y {\displaystyle y^{(0)}\in {\mathcal {Y}}} (which might be arbitrarily or randomly chosen, or might represent an observed network) and implicitly defines transition probabilities (or jump probabilities) π ( y , y ′ ) = P ( Y ( t + 1 ) = y ′ | Y ( t ) = y ) {\displaystyle \pi (y,y')=P(Y^{(t+1)}=y'|Y^{(t)}=y)} , which are the conditional probabilities that the Markov chain is on graph y ′ {\displaystyle y'} after Step t + 1 {\displaystyle t+1} , given that it is on graph y {\displaystyle y} after Step t {\displaystyle t} . The transition probabilities do not depend on the graphs in earlier steps ( y ( 0 ) , … , y ( t − 1 ) {\displaystyle y^{(0)},\dots ,y^{(t-1)}} ), which is a defining property of Markov chains, and they do not depend on t {\displaystyle t} , that is, the Markov chain is time-homogeneous. The goal is to define the transition probabilities such that for all y ∈ Y {\displaystyle y\in {\mathcal {Y}}} it is lim t → ∞ P ( Y ( t ) = y ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) , {\displaystyle \lim _{t\to \infty }P(Y^{(t)}=y)={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},} independent of the initial graph y ( 0 ) {\displaystyle y^{(0)}} . If this is achieved, one can run the Markov chain for a large number of steps and then return the current graph as a random sample from the given ERGM. The probability of returning a graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} after a finite but large number of update steps is approximately the probability defined by the ERGM. Current methods for sampling from ERGMs with Markov chains usually define an update step by two sub-steps: first, to randomly select a candidate y ′ {\displaystyle y'} in a neighborhood of the current graph y {\displaystyle y} and, second, to accept y ′ {\displaystyle y'} with a probability that depends on the probability ratio of the current graph y {\displaystyle y} and the candidate y ′ {\displaystyle y'} . (If the candidate is not accepted, the Markov chain remains on the current graph y {\displaystyle y} .) If the set of graphs Y {\displaystyle {\mathcal {Y}}} is unconstrained (i.e., contains any combination of values on the binary tie variables), a simple method for candidate selection is to choose one tie variable y i j {\displaystyle y_{ij}} uniformly at random and to define the candidate by flipping this single variable (i.e., to set y i j ′ = 1 − y i j {\displaystyle y'_{ij}=1-y_{ij}} ; all other variables take the same value as in y {\displaystyle y} ). A common way to define the acceptance probability is to accept y ′ {\displaystyle y'} with the conditional probability P ( Y = y ′ | Y = y ′ ∨ Y = y ) = P ( Y = y ′ ) P ( Y = y ′ ) + P ( Y = y ) , {\displaystyle P(Y=y'|Y=y'\vee Y=y)={\frac {P(Y=y')}{P(Y=y')+P(Y=y)}},} where the graph probabilities are defined by the ERGM.
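A minimal sketch of such a tie-flip sampler is shown below (illustrative Python, assuming NumPy; the edge and triangle statistics of the earlier example are reused, and all names are hypothetical). It selects one dyad uniformly at random, flips it, and accepts the flip with the probability P(Y=y′)/(P(Y=y′)+P(Y=y)) given above:

```python
import random
import numpy as np

def stats(adj):
    """Sufficient statistics s(y) = (edges, triangles) of a 0/1 adjacency matrix."""
    edges = int(np.triu(adj, k=1).sum())
    triangles = int(np.trace(np.linalg.matrix_power(adj, 3)) // 6)
    return np.array([edges, triangles], dtype=float)

def sample_ergm(n, theta, steps, seed=0):
    """Approximate sample from an ERGM on unconstrained simple graphs with n nodes."""
    rng = random.Random(seed)
    y = np.zeros((n, n), dtype=int)                    # arbitrary initial graph: no edges
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)                 # choose one tie variable uniformly at random
        y_prop = y.copy()
        y_prop[i, j] = y_prop[j, i] = 1 - y[i, j]      # candidate: flip that single tie
        # Ratio P(y')/P(y) = exp(theta^T (s(y') - s(y))); the constant c(theta) cancels.
        # (Practical implementations compute the change in statistics for one flip
        # directly instead of recomputing s(y) from scratch at every step.)
        ratio = float(np.exp(theta @ (stats(y_prop) - stats(y))))
        if rng.random() < ratio / (1.0 + ratio):       # accept with P(y') / (P(y') + P(y))
            y = y_prop
    return y

theta = np.array([-np.log(2), np.log(3)])              # parameters from the example above
print(stats(sample_ergm(6, theta, steps=5000)))
```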
Crucially, the normalizing constant c ( θ ) {\displaystyle c(\theta )} cancels out in this fraction, so that the acceptance probabilities can be computed efficiently. == See also == Autologistic actor attribute models == References == == Further reading == Byshkin, M.; Stivala, A.; Mira, A.; Robins, G.; Lomi, A. (2018). "Fast Maximum Likelihood Estimation via Equilibrium Expectation for Large Network Data". Scientific Reports. 8 (1): 11509. arXiv:1802.10311. Bibcode:2018NatSR...811509B. doi:10.1038/s41598-018-29725-8. PMC 6068132. PMID 30065311. Caimo, A.; Friel, N (2011). "Bayesian inference for exponential random graph models". Social Networks. 33: 41–55. arXiv:1007.5192. doi:10.1016/j.socnet.2010.09.004. Erdős, P.; Rényi, A (1959). "On random graphs". Publicationes Mathematicae. 6: 290–297. Fienberg, S. E.; Wasserman, S. (1981). "Discussion of An Exponential Family of Probability Distributions for Directed Graphs by Holland and Leinhardt". Journal of the American Statistical Association. 76 (373): 54–57. doi:10.1080/01621459.1981.10477600. Frank, O.; Strauss, D (1986). "Markov Graphs". Journal of the American Statistical Association. 81 (395): 832–842. doi:10.2307/2289017. JSTOR 2289017. Handcock, M. S.; Hunter, D. R.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data". Journal of Statistical Software. 24 (1): 1–11. doi:10.18637/jss.v024.i01. PMC 2447931. PMID 18618019. Harris, Jenine K (2014). An introduction to exponential random graph modeling. ISBN 9781452220802. OCLC 870698788. Hunter, D. R.; Goodreau, S. M.; Handcock, M. S. (2008). "Goodness of Fit of Social Network Models". Journal of the American Statistical Association. 103 (481): 248–258. CiteSeerX 10.1.1.206.396. doi:10.1198/016214507000000446. Hunter, D. R; Handcock, M. S. (2006). "Inference in curved exponential family models for networks". Journal of Computational and Graphical Statistics. 15 (3): 565–583. CiteSeerX 10.1.1.205.9670. doi:10.1198/106186006X133069. Hunter, D. R.; Handcock, M. S.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks". Journal of Statistical Software. 24 (3): 1–29. doi:10.18637/jss.v024.i03. PMC 2743438. Jin, I.H.; Liang, F. (2012). "Fitting social networks models using varying truncation stochastic approximation MCMC algorithm". Journal of Computational and Graphical Statistics. 22 (4): 927–952. doi:10.1080/10618600.2012.680851. Koskinen, J. H.; Robins, G. L.; Pattison, P. E. (2010). "Analysing exponential random graph (p-star) models with missing data using Bayesian data augmentation". Statistical Methodology. 7 (3): 366–384. doi:10.1016/j.stamet.2009.09.007. Morris, M.; Handcock, M. S.; Hunter, D. R. (2008). "Specification of Exponential-Family Random Graph Models: Terms and Computational Aspects". Journal of Statistical Software. 24 (4): 1548–7660. doi:10.18637/jss.v024.i04. PMC 2481518. PMID 18650964. Rinaldo, A.; Fienberg, S. E.; Zhou, Y. (2009). "On the geometry of descrete exponential random families with application to exponential random graph models". Electronic Journal of Statistics. 3: 446–484. arXiv:0901.0026. doi:10.1214/08-EJS350. Robins, G.; Snijders, T.; Wang, P.; Handcock, M.; Pattison, P (2007). "Recent developments in exponential random graph (p*) models for social networks" (PDF). Social Networks. 29 (2): 192–215. doi:10.1016/j.socnet.2006.08.003. 
hdl:11370/abee7276-394e-4051-a180-7b2ff57d42f5. Schweinberger, Michael (2011). "Instability, sensitivity, and degeneracy of discrete exponential families". Journal of the American Statistical Association. 106 (496): 1361–1370. doi:10.1198/jasa.2011.tm10747. PMC 3405854. PMID 22844170. Schweinberger, Michael; Handcock, Mark (2015). "Local dependence in random graph models: characterization, properties and statistical inference". Journal of the Royal Statistical Society, Series B. 77 (3): 647–676. doi:10.1111/rssb.12081. PMC 4637985. PMID 26560142. Schweinberger, Michael; Stewart, Jonathan (2020). "Concentration and consistency results for canonical and curved exponential-family models of random graphs". The Annals of Statistics. 48 (1): 374–396. arXiv:1702.01812. doi:10.1214/19-AOS1810. Snijders, T. A. B. (2002). "Markov chain Monte Carlo estimation of exponential random graph models" (PDF). Journal of Social Structure. 3. Snijders, T. A. B.; Pattison, P. E.; Robins, G. L.; Handcock, M. S. (2006). "New specifications for exponential random graph models". Sociological Methodology. 36: 99–153. CiteSeerX 10.1.1.62.7975. doi:10.1111/j.1467-9531.2006.00176.x. Strauss, D; Ikeda, M (1990). "Pseudolikelihood estimation for social networks". Journal of the American Statistical Association. 5 (409): 204–212. doi:10.2307/2289546. JSTOR 2289546. van Duijn, M. A.; Snijders, T. A. B.; Zijlstra, B. H. (2004). "p2: a random effects model with covariates for directed graphs". Statistica Neerlandica. 58 (2): 234–254. doi:10.1046/j.0039-0402.2003.00258.x. van Duijn, M. A. J.; Gile, K. J.; Handcock, M. S. (2009). "A framework for the comparison of maximum pseudo-likelihood and maximum likelihood estimation of exponential family random graph models". Social Networks. 31 (1): 52–62. doi:10.1016/j.socnet.2008.10.003. PMC 3500576. PMID 23170041.
Wikipedia/Exponential_random_graph_model
A professional network service (or, in an Internet context, simply a professional network) is a type of social network service that focuses on interactions and relationships for business opportunities and career growth, with less emphasis on activities in personal life. A professional network service is used by working individuals, job-seekers, and businesses to establish and maintain professional contacts, to find work or hire employees, share professional achievements, sell or promote services, and stay up-to-date with industry news and trends. According to LinkedIn managing director Clifford Rosenberg in an interview with AAP in 2010, "[t]his is a call to action for professionals to re-address their use of social networks and begin to reap as many rewards from networking professionally as they do personally." Businesses depend largely on resources and information from outside the company; to get what they need, they must reach out and network professionally with others, such as employees or clients, as well as with potential opportunities. "Nardi, Whittaker, and Schwarz (2002) point out three main tasks that they believe networkers need to attend to keep a successful professional (intentional) network: building a network, maintaining the network, and activating selected contacts. They stress that networkers need to continue to add new contacts to their network to access as many resources as possible and to maintain their network by staying in touch with their contacts. This is so that the contacts are easy to activate when the networker has work that needs to be done." By using a professional network service, businesses can keep all of their networks up to date and in order, and can more easily work out the most efficient way to get in touch with each contact. A service that can do all of this helps relieve some of the stress of getting things done. Not all professional network services are online sites that help promote a business. Some services connect the user to promotional channels other than online sites, such as phone or Internet companies that provide services, or companies designed specifically to do all of the promoting for a business, both online and in person. == History == In 1997, professional network services started up throughout the world, and they continue to grow. The first recognizable site to combine all features, such as creating profiles, adding friends, and searching for friends, was SixDegrees.com. According to Boyd and Ellison's article, "Social Network Sites: Definition, History, and Scholarship", from 1997 to 2001, several community tools began supporting various combinations of profiles and publicly articulated Friends. Boyd and Ellison go on to say that the next wave began with Ryze.com in 2001. It was introduced as a new way "to help people leverage their business networks". == Inside the works == A great deal of work goes into a professional network service, from the hours invested and the types of people it serves to its business model, including the professional interaction it supports and the multiple services it deals with. === Types of services === Some professional network services not only help promote the business but can also help in connecting to other people. Those services may include a specific phone and/or Internet company or a company that helps to connect with other businesses.
According to the Society for New Communications Research (SNCR), there are at least nine online professional networks that are being used. === Professional interaction === Kaplan and Haenlein elaborate on five key considerations for companies when utilizing media. These include the importance of careful selection, the option to choose existing applications or develop custom ones, ensuring alignment with organizational activities, integrating a comprehensive media plan, and providing accessibility to all stakeholders. ==== Choose carefully ==== "Choosing the right medium for any given purpose depends on the target group to be reached and the message to be communicated. On one hand, each Social Media application usually attracts a certain group of people, and firms should be active wherever their customers are present. On the other hand, there may be situations whereby certain features are necessary to ensure effective communication, and these features are only offered by one specific application." ==== Ensure activity alignment ==== "Sometimes you may decide to rely on various Social Media, or a set of different applications within the same group, to have the largest possible reach." "Using different contact channels can be a worthwhile and profitable strategy." According to the Society for New Communications Research at Harvard University, "the average professional belongs to 3–5 online networks for business use, and LinkedIn, Facebook, and Twitter are among the top used." ==== Integrate a media plan ==== Social media and traditional media are "both part of the same: your corporate image" in the customers' eyes. ==== Allow access to all ==== "...once the firm has decided to utilize Social Media applications, it is worth checking that all employees may access them." According to the SNCR, "the convergence of Internet, mobile, and social media has taken significant shape as professionals rely on anywhere access to information, relationships, and networks." ==== Online usage ==== "Half of the respondents report participating in 3 to 5 online professional networks. Another three in ten participate in 6 or more professional networks." "Popular social networks are now being used frequently as Professional Communities. More than nine in ten respondents indicated that they use LinkedIn and half reported using Facebook. Twitter and blogs were frequently listed as 'professional networks'." === Business model === According to Michael Rappa's article "Business models on the Web", "a business model is the method of doing business by which a company can sustain itself – that is, generate revenue. The business model spells out how a company makes money by specifying where it is positioned in the value chain." Rappa mentions that there are at least nine basic categories into which business models can be separated. Those categories are brokerage, advertising, infomediary, merchant, manufacturer, affiliate, community, subscription, and utility. "...a firm may combine several different models as part of its overall Internet business strategy." At first, Flickr started as a way to mainstream public relations. == Social impact == When it comes to the social impact that professional network services have on today's society, they have proved to increase activity. According to the SNCR, "[t]hree quarters of respondents rely on professional networks to support business decisions. Reliance has increased for essentially all respondents over the past three years.
Younger (20–35) and older professionals (55+) are more active users of social tools than middle-aged professionals. More people are collaborating outside their company wall than within their organizational intranet." == Limitations == Since the internet and social media are a part of this "world where consumers can speak so freely with each other and businesses have increasingly less control over the information available about them in cyberspace", most firms and businesses are uncomfortable with all the freedom. According to Kaplan and Haenlein's article, "Users of the world, unite! The challenges and opportunities of Social Media", businesses are pushed aside and are only able to sit back and watch as their customers publicly post comments, which may or may not be well-written. == See also == Business networking Career-oriented social networking market Employment website Freelance marketplace Social network service List of social networking websites Social media User-generated content Web 2.0 Social networking sites Smart contract: can be used in employment contracts Virtual worlds == Notes and references ==
Wikipedia/Professional_network_service
A network is an abstract structure capturing only the basics of connection patterns and little else. Because it is a generalized pattern, tools developed for analyzing, modeling and understanding networks can theoretically be implemented across disciplines. As long as a system can be represented by a network, there is an extensive set of tools – mathematical, computational, and statistical – that are well-developed and if understood can be applied to the analysis of the system of interest. Tools that are currently employed in risk assessment are often sufficient, but model complexity and limitations of computational power can keep risk assessors from involving more causal connections and accounting for more Black Swan event outcomes. By applying network theory tools to risk assessment, computational limitations may be overcome, resulting in broader coverage of events with a narrowed range of uncertainties. Decision-making processes are not incorporated into routine risk assessments; however, they play a critical role in such processes. It is therefore very important for risk assessors to minimize confirmation bias by carrying out their analysis and publishing their results with minimal involvement of external factors such as politics, media, and advocates. In reality, however, it is nearly impossible to break the iron triangle among politicians, scientists (in this case, risk assessors), and advocates and media. Risk assessors need to be sensitive to the difference between risk studies and risk perceptions. One way to bring the two closer is to provide decision-makers with data they can easily rely on and understand. Employing networks in the risk analysis process can visualize causal relationships and identify heavily weighted or important contributors to the probability of the critical event. Bow-tie diagrams, cause-and-effect diagrams, Bayesian networks (a directed acyclic network) and fault trees are a few examples of how network theories can be applied in risk assessment. In epidemiological risk assessments (Figures 7 and 9), once a network model is constructed, we can visually identify, then quantify and evaluate, the potential exposure or infection risk of people related to the well-connected patients (Patients 1, 6, 35, 130 and 127 in Figure 7) or high-traffic places (Hotel M in Figure 9). In ecological risk assessments (Figure 8), through a network model we can identify the keystone species and determine how widely the impacts of the potential hazards being investigated will extend. == Risk assessment key components == Risk assessment is a method for dealing with uncertainty. For it to be beneficial to the overall risk management and decision making process, it must be able to capture extreme and catastrophic events. Risk assessment involves two parts: risk analysis and risk evaluation, although the term "risk assessment" is sometimes used interchangeably with "risk analysis". In general, risk assessment can be divided into these steps: Plan and prepare the risk analysis. Define and delimit the system and the scope of the analysis. Identify hazards and potential hazardous events. Determine causes and frequency of each hazardous event. Identify accident scenarios (i.e. event sequences) that may be initiated by each hazardous event. Select relevant and typical accident scenarios. Determine the consequences of each accident scenario. Determine the frequency of each accident scenario. Assess the uncertainty. Establish and describe the risk picture. Report the analysis.
Evaluate the risk against risk acceptance criteria. Suggest and evaluate potential risk-reducing measures. Naturally, the number of steps required varies with each assessment. It depends on the scope of the analysis and the complexity of the study object. Because there are always varying degrees of uncertainty involved in any risk analysis process, sensitivity and uncertainty analyses are usually carried out to mitigate the level of uncertainty and therefore improve the overall risk assessment result. == Network theory key components == A network is a simplified representation that reduces a system to an abstract structure. Simply put, it is a collection of points linked together by lines. Each point is known as a "vertex" (plural: "vertices") or "node", and each line as an "edge" or "link". Network modeling and analysis have already been applied in many areas, including computer, physical, biological, ecological, logistical and social science. Through the study of these models, we gain insights into the nature of individual components (i.e. vertices), connections or interactions between those components (i.e. edges), as well as the pattern of connections (i.e. the network). Undoubtedly, modifications of the structure (or pattern) of any given network can have a big effect on the behavior of the system it depicts. For example, connections in a social network affect how people communicate, exchange news, travel, and, less obviously, spread diseases. In order to gain a better understanding of how each of these systems functions, some knowledge of the structure of the network is necessary. === Basic terminology === Small-World Effect The small-world effect is one of the most remarkable network phenomena. It describes the finding that in many (perhaps most) networks the mean path distances between vertices are surprisingly small. It has many implications in various areas of network studies. For instance, in a social network, one can consider how fast a rumor (or a contagious disease) spreads in a community. From a mathematical point of view, since path lengths in networks typically scale as log n (where n = number of network vertices), it is only logical that they remain small even in large complex networks. Another idea that comes along with the small-world effect is called funneling. It was derived from a social network experiment conducted by the experimental psychologist Stanley Milgram in the 1960s. In that experiment he concluded, along with the small-world effect phenomenon, that in any given social network, there were always a few individuals who were especially well connected. These few individuals were therefore responsible for the connection between any member and the rest of the world. Degree, Hubs, and Paths The degree of a vertex is the number of edges connected to it. For example, in Figure 4, vertex 3 has a degree of five. Hubs are vertices in a network with a relatively higher degree. Vertex 3 again is a good example. In a social network, hubs can mean individuals with many acquaintances. In risk assessment, a hub can mean a hazardous event with multiple triggers (or the causal part of a bow-tie diagram). A path in a network is a route from one vertex to another across the network. From the same figure, an example of a path from vertex 1 to 6 is 1→5→3→6. Centrality Centrality is a measure of how important (or central) certain vertices are in a network. It can be measured by counting the number of edges connected to a vertex (i.e. its degree).
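These quantities are straightforward to compute with standard tools; the short sketch below assumes the networkx library and a hypothetical edge list chosen to match the description of Figure 4 (which is not reproduced here): vertex 3 is connected to the five other vertices, and vertices 1 and 5 are also joined, so that 1→5→3→6 is one possible path.

```python
import networkx as nx

# Hypothetical reconstruction of the six-vertex example graph described above.
G = nx.Graph([(3, 1), (3, 2), (3, 4), (3, 5), (3, 6), (1, 5)])

print(G.degree[3])                # 5 -> vertex 3 is the hub of this graph
print(dict(G.degree()))           # degree of every vertex
print(nx.shortest_path(G, 1, 6))  # [1, 3, 6], a shortest route from vertex 1 to vertex 6
```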
The vertices with the highest degree therefore have a high degree centrality. Degree centrality can have many implications. In a social network, a person with high degree centrality may have more influence over others, more access to information, or more opportunities than those with fewer connections. In a citation network, a paper with high degree centrality may be more influential and thus have a greater impact on its respective area of research. Eigenvector centrality is an extension of the concept of degree centrality, based on the fact that in many networks not all vertices have the same weight or importance. A vertex's importance in its network increases if it has more connections to important vertices. Eigenvector centrality, therefore, can be viewed as a centrality scoring system that accounts not just for a vertex itself but for its neighboring vertices as well. Components Components are subgroups, or subsets of vertices, in a disconnected network. A disconnected network is one in which there is at least one pair of vertices with no path connecting them at all. The opposite is known as a connected network, in which all vertices are connected by at least one path. One can therefore say a connected network has only one component. Directed Networks Directed networks are networks in which each edge has a direction from one vertex to another. The edges are therefore known as directed edges. An example of such a network is a link in the reference section of this page, which leads you to another page but not the other way around. In terms of a food web, a prey species eaten by a predator is another example. Directed networks can be cyclic or acyclic. A cyclic directed network is one with a closed loop of edges. An acyclic directed network does not contain such a loop. Since a self-edge – an edge connecting a vertex to itself – is considered a cycle, it is therefore absent from any acyclic network. A Bayesian network is an example of an acyclic directed network. Weighted Network In reality, not all edges share the same importance or weight (connections in a social network and keystone species in a food web, for example). A weighted network adds such an element to its connections. It is widely used in genomic and systems biology applications. Trees Trees are undirected networks with no closed loops. A tree can be part of a network but isolated as a separate component. If all parts of a network are trees, such a network is called a forest. An administrative body can sometimes be viewed as a forest. == Other Examples of Network Theory Application == === Social network === Early social network studies can be traced back to the end of the nineteenth century. However, well-documented studies and the foundation of this field are usually attributed to the psychiatrist Jacob Moreno. He published a book entitled Who Shall Survive? in 1934, which laid out the foundation for sociometry (later known as social network analysis). Another famous contributor to the early development of social network analysis is the experimental psychologist Stanley Milgram. His "small-world" experiments gave rise to concepts such as six degrees of separation and well-connected acquaintances (also known as "sociometric superstars"). This experiment was recently repeated by Dodds et al. by means of email messages, and the basic results were similar to Milgram's.
The estimated true average path length (that is, the number of edges the email message has to pass from one unique individual to the intended targets in different countries) for the experiment was around five to seven, which does not deviate much from the original six degrees of separation. === Food web === A food web, or food chain, is an example of a directed network which describes the prey-predator relationship in a given ecosystem. Vertices in this type of network represent species, and the edges the prey-predator relationship. A collection of species may be represented by a single vertex if all members in that collection prey upon and are preyed on by the same organisms. A food web is often acyclic, with a few exceptions such as adults preying on juveniles and parasitism. Note: In the food web main article, a food web was depicted as cyclic. That is based on the flow of the carbon and energy sources in a given ecosystem. The food web described here is based solely on prey-predator roles; organisms active in the carbon and nitrogen cycles (such as decomposers and fixers) are not considered in this description. === Epidemiology === Epidemiology is closely related to social networks. Contagious diseases can spread through connection networks such as workplaces, transportation, intimate body contact and water systems (see Figures 7 and 9). Though they exist only virtually, computer viruses spreading across internet networks are not much different from their physical counterparts. Therefore, understanding each of these network patterns can no doubt aid us in predicting the outcomes of epidemics more precisely and in preparing better disease prevention protocols. The simplest model of infection is presented as an SI (susceptible - infected) model. Most diseases, however, do not behave in such a simple manner. Therefore, many modifications to this model have been made, such as the SIR (susceptible – infected – recovered), the SIS (the second S denotes reinfection) and SIRS models. The idea of latency is taken into account in models such as SEIR (where E stands for exposed). The SIR model is also known as the Reed-Frost model. To factor these into an outbreak network model, one must consider the degree distributions of vertices in the giant component of the network (outbreaks in small components are isolated and die out quickly, which does not allow the outbreaks to become epidemics). Theoretically, weighted networks can provide more accurate information on the exposure probability of vertices, but more proof is needed. Pastor-Satorras et al. pioneered much work in this area, which began with the simplest form (the SI model) and applied it to networks drawn from the configuration model. The biology of how an infection causes disease in an individual is complicated and is another type of disease pattern specialists are interested in (a process known as pathogenesis, which involves the immunology of the host and virulence factors of the pathogen). == Notes == == References == Dolgoarshinnykh, Regina. "Criticality in Epidemic Models". Columbia University, New York. Criticality in Epidemic Models Legrain, Amaury, and Tom Auwers. The Principal-agent Model and the Network Theory as Framework for Administrative Procedures: Social Security in Belgium. EGPA Conference "Public Manager under Pressure: between Politics, Professionalism and Civil Society" (2006): 1-40 Martinez, Neo, and Dunne, Jennifer. "Foodwebs.org". Pacific Ecoinformatics and Computational Ecology Lab., 2011. foodwebs.org Meyers, Lauren A., M.E.J.
Newman, and Stephanie Schrag. Applying Network Theory to Epidemics: Control Measures for Mycoplasma Pneumoniae Outbreaks. Emerging Infectious Diseases 9.2 (2003): 204-10 National Research Council (NRC). Risk Assessment in the Federal Government: Understanding the Process. Washington D.C.: National Academy Press, 1983. National Research Council (NRC). Understanding Risk: Informing Decisions in a Democratic Society. Washington D.C.: National Academy Press, 1996. Newman, Mark E. J. Networks: an Introduction. Oxford: Oxford UP, 2010, ISBN 978-0199206650 . Pielke Jr., Roger A. Policy, Politics and Perspective. Nature 416 (2002): 367-68. Rausand, Marvin. Risk Assessment: Theory, Methods, and Applications. Hoboken, NJ: John Wiley & Sons, 2011. Rothman, Kenneth J., Sander Greenland, and Timothy L. Lash. Modern Epidemiology. 3rd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins, 2008. Rowland, Todd and Weisstein, Eric W. "Causal Network." From MathWorld—A Wolfram Web Resource. Causal Network Slovic, Paul. Perception of Risk. Science 236 (1987): 280-85. Taleb, Nassim N. Errors, Robustness, and the Fourth Quadrant. International Journal of Forecasting 25.4 (2009): 744-59 Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002.
Wikipedia/Network_theory_in_risk_assessment
In communication networks, a cognitive network (CN) is a new type of data network that makes use of cutting-edge technology from several research areas (i.e. machine learning, knowledge representation, computer networks, network management) to solve some problems current networks are faced with. A cognitive network is different from cognitive radio (CR) as it covers all the layers of the OSI model (not only layers 1 and 2 as with CR). == History == The first definition of the cognitive network was provided by Theo Kanter in his doctoral research at KTH, The Royal Institute of Technology, Stockholm, including a presentation in June 1998 of the cognitive network as the network with memory. Kanter was a student of Chip Maguire, who was also advising Joe Mitola, the originator of cognitive radio. Mitola focused on cognition in the nodes, while Kanter focused on cognition in the network. Mitola's Licentiate thesis, published in August 1999, includes the following quote: "Over time, the [Radio Knowledge Representation Language] RKRL-empowered network can learn to distinguish a feature of the natural environment that does not match the models. It could declare the errors to a cognitive network." This is the earliest publication of the cognitive network concept, since Kanter published slightly later. IBM's autonomic networks challenge of 2001 instigated the introduction of a cognition cycle into networks. Cognitive radio, Kanter's cognitive networks, and IBM's autonomic networks provided the foundation for the parallel evolution of cognitive wireless networks and other cognitive networks. In 2004, Petri Mahonen, currently at RWTH Aachen and a member of Mitola's doctoral committee, organized the first international workshop on cognitive wireless networks at Dagstuhl, Germany. In addition, the EU's E2R and E3 programs developed cognitive network theory under the rubric of self* – self-organizing networks, self-aware networks, and so forth. One of the attempts to define the concept of the cognitive network was made in 2005 by Thomas et al. and is based on an older idea of the Knowledge Plane described by Clark et al. in 2003. B.S. Manoj et al. proposed a Cognitive Complete Knowledge Network System in 2008. Since then, several research activities in the area have emerged. A survey and an edited book reveal some of these efforts. The Knowledge Plane is "a pervasive system within the network that builds and maintains high level models of what the network is supposed to do, in order to provide services and advice to other elements of the network". The concept of a large-scale cognitive network was further developed in 2008 by Song, where such a Knowledge Plane is clearly defined for large-scale wireless networks as the knowledge about the availability of radio spectrum and wireless stations. == Definition == Thomas et al. define the CN as a network with a cognitive process that can perceive current network conditions, plan, decide, act on those conditions, and learn from the consequences of its actions, all while following end-to-end goals. This loop, the cognition loop, senses the environment, plans actions according to input from sensors and network policies, decides which scenario best fits its end-to-end purpose using a reasoning engine, and finally acts on the chosen scenario as discussed in the previous section. The system learns from the past (situations, plans, decisions, actions) and uses this knowledge to improve the decisions in the future.
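As a rough, purely illustrative sketch (not taken from Thomas et al. or any cited framework; all names are hypothetical), the cognition loop described above could be organized as follows:

```python
class CognitionLoop:
    """Illustrative sense-plan-decide-act-learn cycle; not a reference implementation."""

    def __init__(self, score_against_goals):
        self.score = score_against_goals   # how well a candidate action serves the end-to-end goals
        self.history = []                  # remembered (situation, action, outcome) triples

    def step(self, observe, propose_actions, act):
        situation = observe()                               # perceive current network conditions
        candidates = propose_actions(situation)             # plan according to sensors and policies
        chosen = max(candidates, key=self.score)            # decide: best fit to the end-to-end goals
        outcome = act(chosen)                               # act on the chosen scenario
        self.history.append((situation, chosen, outcome))   # learn from the consequences
        return outcome
```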
This definition of CN does not explicitly mention the knowledge of the network; it only describes the cognitive loop and adds end-to-end goals that would distinguish it from CR or so-called cognitive layers. This definition of CN seems to be incomplete, since it lacks knowledge, which is an important component of a cognitive system as discussed in the literature. Balamuralidhar and Prasad give an interesting view of the role of ontological knowledge representation: “The persistent nature of this ontology enables proactiveness and robustness to ‘ignorable events’ while the unitary nature enables end-to-end adaptations.” In another view, the CN is seen as a communication network augmented by a knowledge plane that can span vertically over layers (making use of cross-layer design) and/or horizontally across technologies and nodes (covering a heterogeneous environment). The knowledge plane needs at least two elements: (1) a representation of relevant knowledge about the scope (device, homogeneous network, heterogeneous network, etc.); (2) a cognition loop which uses artificial intelligence techniques inside its states (learning techniques, decision making techniques, etc.). Furthermore, a detailed cross-layer network architecture has been proposed for CNs, in which the CN is interpreted as a network that can utilize both radio spectrum and wireless station resources opportunistically, based upon the knowledge of such resource availability. Since CR has been developed as a radio transceiver that can utilize spectrum channels opportunistically (dynamic spectrum access), the CN is therefore a network that can opportunistically organize CRs. == Network architecture == The proposed cross-layer network architecture of the CN is also named Embedded Wireless Interconnection (EWI), as opposed to the Open System Interconnection (OSI) protocol stack. The CN architecture is based on a new definition of wireless linkage. The new abstract wireless links are redefined as arbitrary mutual co-operations among a set of neighboring (proximity) wireless nodes. In comparison, traditional wireless networking relies on point-to-point "virtual wired-links" with a predetermined pair of wireless nodes and allotted spectrum. This network architecture also has the following three primary principles: Functional Linkage Abstraction: Based on the definition of abstract wireless linkage, wireless link modules are implemented in individual wireless nodes, which can set up different types of abstract wireless links. According to the functional abstractions, categories of wireless link modules can include: broadcast, unicast, multicast, and data aggregation, etc. Therefore, network functionality can be integrated into the design of wireless link modules. This also results in two hierarchical layers as the architectural basis: the system layer and the wireless link layer. The bottom wireless link layer supplies a library of wireless link modules to the upper system layer; the system layer organizes the wireless link modules to achieve effective application programming. Opportunistic Wireless Links: In realizing the cognitive wireless networking concept, both the occupied spectrum and the participating nodes of an abstract wireless link are opportunistically determined by their instantaneous availabilities. This principle decides the design of wireless link modules in the wireless link layer.
The system performance can improve with larger network scale, since higher network density introduces extra diversity in the opportunistic formation of any abstract wireless links. Global QoS Decoupling: Global application or network QoS (Quality of Service) is decoupled into local requirements of co-operations in neighboring wireless nodes, i.e., wireless link QoS. More specifically, by decoupling global application-level QoS, it allows the system layer to better organize the wireless link modules that are provided by the wireless link layer. For example, by decoupling global network-level QoS, such as throughput, end-to-end delay, and delay jitter, the wireless link module design can achieve the global QoS requirements. Based on the provided wireless link modules, the complexity at individual nodes can be independent of the network scale. Wireless link modules provide system designers with reusable open network abstractions, where the modules can be individually updated, or new modules may be added into the wireless link layer. High modularity and flexibility could be essential for middleware or application developments. EWI is also an organizing-style architecture, where the system layer organizes the wireless link modules (at the wireless link layer); and peer wireless link modules can exchange module management information by padding packet headers to the system-layer information units. Five types of wireless link modules were proposed, including broadcast, peer-to-peer unicast, multicast, to-sink unicast, and data aggregation, respectively. Other arbitrary types of modules may be added, establishing other types of abstract wireless links without limitation. For example, the broadcast module simply disseminates data packets to surrounding nodes. The peer-to-peer unicast module can deliver data packets from source to destination over multiple wireless hops. The multicast module sends data packets to multiple destinations, as compared to peer-to-peer unicast. The to-sink unicast module can be especially useful in wireless sensor networks, which utilizes higher capabilities of data collectors (or sinks), so as to achieve better data delivery. The data-aggregation module opportunistically collects and aggregates the context related data from a set of proximity wireless nodes. Two service access points (SAP) are defined on the interface between the system layer and the wireless link layer, which are WL_SAP (Wireless Link SAP) and WLME_SAP (Wireless Link Management Entity SAP), respectively. WL_SAP is used for the data plane, whereas WLME_SAP is used for the management plane. The SAPs are utilized by the system layer in controlling the QoS of wireless link modules. == See also == Cross-layer optimization End-to-end principle Opportunistic Mesh == References == == Sources == Kanter, Theo (2001), "Adaptive Personal Mobile Communication, Service Architecture and Protocols.", Trita-It. Avh. (Ph.D. Dissertation), Kista, Sweden: KTH Royal Institute of Technology, ISSN 1403-5286 Clark, David D.; Partridge, Craig; Ramming, J. Christopher; Wroclawski, John T. (2003), "A knowledge plane for the internet", Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications - SIGCOMM '03, p. 3, doi:10.1145/863955.863957, ISBN 1581137354, S2CID 207627798 Mitola, Joseph (2000), "Cognitive Radio – An Integrated Agent Architecture for Software Defined Radio", Trita-It. Avh. (Ph.D. 
Dissertation), Kista, Sweden: KTH Royal Institute of Technology, ISSN 1403-5286 Thomas, R.W.; Dasilva, L.A.; MacKenzie, A.B. (2005), "Cognitive networks", First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2005. DySPAN 2005, pp. 352–360, doi:10.1109/DYSPAN.2005.1542652, ISBN 1-4244-0013-9 Manoj, B.; Rao, Ramesh; Zorzi, Michele (2008), "CogNet: A cognitive complete knowledge network system", IEEE Wireless Communications, 15 (6): 81–88, doi:10.1109/MWC.2008.4749751, S2CID 1511248 == External links == IEEE Technical Committee on Cognitive Networks
Wikipedia/Cognitive_network
A service network is a structure that brings together several entities to deliver a particular service. For instance, one organisation (the buyer) may sub-contract another organisation (the supplier) to deliver after-sales services to a third party (the customer). The buyer may use more than one supplier. Likewise, the supplier may participate in other networks. The rationale for a service network is that each organisation is focusing on what they do best. A service network can also be defined as a collection of people and information brought together on the internet to provide a specific service or achieve a common business objective. It is an evolving extension of service systems and applies Enterprise 2.0 technologies, also known as enterprise social software, to enable corporations to leverage the advances of the consumer internet for the benefit of business. In this case, the service network is designed to benefit from the wisdom of crowds and a human's natural tendency and desire to share information, collaborate, and self organize into communities of common interests and objectives. In business, the value of collaboration is clearly recognized, but the ability is often hampered by rigid organizational boundaries and fragmented information systems. A service network enables businesses to realize the benefits of mass collaboration despite the constraints of modern organizational structures and systems. == History == The world's economy is shifting rapidly from agriculture and manufacturing to services. When the United States declared independence, 90% of the world's economy was on the farm. Today, the services sector accounts for approximately 80% of the U.S. economy. But unlike traditional disciplines like computer science and engineering, innovation and investment directed towards service innovation had historically not kept pace with its growth. However, in 2007, momentum and investment in service innovation grew dramatically and the creation and evolution of service networks began in earnest along with many other service initiatives. == Investments in service innovation == The term service network is increasingly being used within the context of service innovation initiatives that span academia, business, and government. Some examples include: The University of Cambridge and IBM Corporation use the term service network in their discussion paper, "Succeeding through Service Innovation" and describe it within the context of service systems networks. Ingres Corporation uses the term service network as a new paradigm in software service to enable Enterprise 2.0 IT service management. Openwater Corporation uses the term service network to help describe and brand their product offerings and solutions. Investments in service innovation include, but are not limited to, service networks. Business Week magazine, in an article dated, March 29, 2007, cited Service Innovation as the Next Big Thing. IBM is investing heavily in service science, management and engineering (SSME) as a means to bring academia, industry, and governments to become more focused and systematic about innovation in the services sector. Universities are beginning to create degree programs around Service Science. Missouri State University and IBM announced on September 19, 2007, the first Bachelor of Science (BS) degree in IT Service Management in the U.S. High Tech software companies are beginning to roll out next generation service platforms using service networks. 
Several service consortiums and communities to help drive service innovation across the high technology industry continue to grow. These include the Consortium for Service Innovation as well as the Service, Research & Innovation Community. == Delivery and usage == Service networks are typically delivered as an online or hosted solution, also referred to as software as a service (SaaS) solutions. == Adversarial service networks == It is possible for participants to have adversarial relationships with other members of the service network . For instance, manufacturers may attempt to disintermediate service firms when it is more profitable for the manufacturer to replace a whole product rather than repair it. One example in aviation is how manufacturers of airframes and components attempt to sign service contracts with airlines, capturing in the process the aftersales service market previously operated by maintenance and repair service firms. The result is a network with internal adversarial dynamics. == See also == Enterprise 2.0 Service system == References == == Other sources == Andrew McAfee.Enterprise 2.0: The Dawn of Emergent Collaboration (http://www.wikiservice.at/upload/ChristopheDucamp/McAfeeEntrepriseDeux.pdf) MIT Sloan Management Review Spring 2006, Vol.47 No.3 Don Tapscott. Wikinomics (How Mass Collaboration Changes Everything) Penguin Books Ltd, First Published in 2006 by Portfolio, a member of Penguin Group (USA) Inc. Consortium for Service Innovation. The Adaptive Organization Operational Model (http://www.serviceinnovation.org/included/docs/library/programs/ao_opmodel_v1.4.pdf) == External links == IBM SSME Website UC Berkeley (USA) The Information and Service Design Program (service research and instructional program) Center for Service Systems Research
Wikipedia/Service_network
In mathematics, random graph is the general term to refer to probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them. The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Its practical applications are found in all areas in which complex networks need to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph. == Models == A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise. Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert but often called the Erdős–Rényi model, denoted G(n,p). In it, every possible edge occurs independently with probability 0 < p < 1. The probability of obtaining any one particular random graph with m edges is p m ( 1 − p ) N − m {\displaystyle p^{m}(1-p)^{N-m}} with the notation N = ( n 2 ) {\displaystyle N={\tbinom {n}{2}}} . A closely related model, also called the Erdős–Rényi model and denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, G(n,M) has ( N M ) {\displaystyle {\tbinom {N}{M}}} elements and every element occurs with probability 1 / ( N M ) {\displaystyle 1/{\tbinom {N}{M}}} . The G(n,M) model can be viewed as a snapshot at a particular time (M) of the random graph process G ~ n {\displaystyle {\tilde {G}}_{n}} , a stochastic process that starts with n vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges. If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 < p < 1, then we get an object G called an infinite random graph. Except in the trivial cases when p is 0 or 1, such a G almost surely has the following property: Given any n + m elements a 1 , … , a n , b 1 , … , b m ∈ V {\displaystyle a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{m}\in V} , there is a vertex c in V that is adjacent to each of a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} and is not adjacent to any of b 1 , … , b m {\displaystyle b_{1},\ldots ,b_{m}} . It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs, of which there are many (nonisomorphic) graphs satisfying the above property. Another model, which generalizes Gilbert's random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge uv between any vertices u and v is some function of the dot product u • v of their respective vectors. 
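As a concrete illustration (a minimal sketch, not drawn from the article itself), a graph can be sampled from G(n, p) by flipping an independent biased coin for each of the N = n(n − 1)/2 possible edges, and the probability of the particular outcome is then the expression given above:

```python
import random
from itertools import combinations

def gnp(n, p, seed=None):
    """Draw one labelled graph from the Gilbert / Erdős–Rényi G(n, p) model."""
    rng = random.Random(seed)
    return [(i, j) for i, j in combinations(range(n), 2) if rng.random() < p]

n, p = 10, 0.3
edges = gnp(n, p, seed=1)
m, N = len(edges), n * (n - 1) // 2
print(m, edges)
print(p**m * (1 - p)**(N - m))   # probability of obtaining exactly this labelled graph
```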
The network probability matrix models random graphs through edge probabilities, which represent the probability p i , j {\displaystyle p_{i,j}} that a given edge e i , j {\displaystyle e_{i,j}} exists for a specified time period. This model is extensible to directed and undirected, weighted and unweighted, and static or dynamic graph structures. For M ≃ pN, where N is the maximal number of edges possible, the two most widely used models, G(n,M) and G(n,p), are almost interchangeable. Random regular graphs form a special case, with properties that may differ from random graphs in general. Once we have a model of random graphs, every function on graphs becomes a random variable. The study of such a model aims to determine whether, or at least to estimate the probability that, a given property occurs. == Terminology == The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that the error probabilities tend to zero. == Properties == The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value of n {\displaystyle n} and p {\displaystyle p} what the probability is that G ( n , p ) {\displaystyle G(n,p)} is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs—the values that various probabilities converge to as n {\displaystyle n} grows very large. Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones. Percolation is related to the robustness of the graph (also called a network). Given a random graph of n {\displaystyle n} nodes and an average degree ⟨ k ⟩ {\displaystyle \langle k\rangle } , we next remove randomly a fraction 1 − p {\displaystyle 1-p} of nodes and leave only a fraction p {\displaystyle p} . There exists a critical percolation threshold p c = 1 ⟨ k ⟩ {\displaystyle p_{c}={\tfrac {1}{\langle k\rangle }}} below which the network becomes fragmented, while above p c {\displaystyle p_{c}} a giant connected component exists. Localized percolation refers to removing a node, its neighbors, next nearest neighbors, etc., until a fraction 1 − p {\displaystyle 1-p} of nodes from the network is removed. It was shown that for a random graph with a Poisson distribution of degrees, p c = 1 ⟨ k ⟩ {\displaystyle p_{c}={\tfrac {1}{\langle k\rangle }}} exactly as for random removal. Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs. In random regular graphs, G ( n , r − r e g ) {\displaystyle G(n,r-reg)} are the set of r {\displaystyle r} -regular graphs with r = r ( n ) {\displaystyle r=r(n)} such that n {\displaystyle n} and m {\displaystyle m} are natural numbers, 3 ≤ r < n {\displaystyle 3\leq r<n} , and r n = 2 m {\displaystyle rn=2m} is even. The degree sequence of a graph G {\displaystyle G} in G n {\displaystyle G^{n}} depends only on the number of edges in the sets V n ( 2 ) = { i j : 1 ≤ j ≤ n , i ≠ j } ⊂ V ( 2 ) , i = 1 , ⋯ , n . 
{\displaystyle V_{n}^{(2)}=\left\{ij\ :\ 1\leq j\leq n,i\neq j\right\}\subset V^{(2)},\qquad i=1,\cdots ,n.} If the number of edges M {\displaystyle M} in a random graph G M {\displaystyle G_{M}} is large enough to ensure that almost every G M {\displaystyle G_{M}} has minimum degree at least 1, then almost every G M {\displaystyle G_{M}} is connected and, if n {\displaystyle n} is even, almost every G M {\displaystyle G_{M}} has a perfect matching. In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected. In almost every graph process on an even number of vertices, the edge that raises the minimum degree to 1, or a random graph with slightly more than n 4 log ⁡ ( n ) {\displaystyle {\tfrac {n}{4}}\log(n)} edges, ensures with probability close to 1 that the graph has a complete matching, with the exception of at most one vertex. For some constant c {\displaystyle c} , almost every labeled graph with n {\displaystyle n} vertices and at least c n log ⁡ ( n ) {\displaystyle cn\log(n)} edges is Hamiltonian. With the probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian. Properties of random graphs may change or remain invariant under graph transformations. A. Mashaghi et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. == Colouring == Given a random graph G of order n with the vertex set V(G) = {1, ..., n}, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, etc.). The number of proper colorings of random graphs given a number q of colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters n and the number of edges m or the connection probability p has been studied empirically using an algorithm based on symbolic pattern matching. == Random trees == A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order n and size M(n), the distribution of the number of tree components of order k is asymptotically Poisson. Types of random trees include uniform spanning tree, random minimum spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest. == Conditional random graphs == Consider a given random graph model defined on the probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} and let P ( G ) : Ω → R m {\displaystyle {\mathcal {P}}(G):\Omega \rightarrow R^{m}} be a real-valued function which assigns to each graph in Ω {\displaystyle \Omega } a vector of m properties. For a fixed p ∈ R m {\displaystyle \mathbf {p} \in R^{m}} , conditional random graphs are models in which the probability measure P {\displaystyle P} assigns zero probability to all graphs such that P ( G ) ≠ p {\displaystyle {\mathcal {P}}(G)\neq \mathbf {p} } . Special cases are conditionally uniform random graphs, where P {\displaystyle P} assigns equal probability to all the graphs having specified properties. 
They can be seen as a generalization of the Erdős–Rényi model G(n,M), when the conditioning information is not necessarily the number of edges M, but whatever other arbitrary graph property P ( G ) {\displaystyle {\mathcal {P}}(G)} . In this case, very few analytical results are available, and simulation is required to obtain empirical distributions of average properties. == History == The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was used to compare the fraction of reciprocated links in their network data with the random model. Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices. The Erdős–Rényi model of random graphs was first defined by Paul Erdős and Alfréd Rényi in their 1959 paper "On Random Graphs" and independently by Gilbert in his paper "Random graphs". == See also == Bose–Einstein condensation: a network theory approach – model in network science Cavity method – Mathematical method in statistical physics Complex networks – Network with non-trivial topological features Dual-phase evolution – Process that drives self-organization within complex adaptive systems Erdős–Rényi model – Two closely related models for generating random graphs Exponential random graph model – statistical models for network analysis Graph theory – Area of discrete mathematics Interdependent networks – Subfield of network science Network science – Academic field Percolation – Filtration of fluids through porous materials Percolation theory – Mathematical theory on behavior of connected clusters in a random graph Random graph theory of gelation – Mathematical theory for sol–gel processes Regular graph – Graph where each vertex has the same number of neighbors Scale free network – Network whose degree distribution follows a power law Semilinear response – Extension of linear response theory in mesoscopic regimes Stochastic block model – Concept in network science Lancichinetti–Fortunato–Radicchi benchmark – Algorithm == References ==
Wikipedia/Random_graphs
In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A flow network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. As such, efficient algorithms for solving network flows can also be applied to solve problems that can be reduced to a flow network, including survey design, airline scheduling, image segmentation, and the matching problem. == Definition == A network is a directed graph G = (V, E) with a non-negative capacity function c for each edge, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if (u, v) ∈ E, then (v, u) is also a member of E. Additionally, if (v, u) ∉ E then we may add (v, u) to E and then set the c(v, u) = 0. If two nodes in G are distinguished – one as the source s and the other as the sink t – then (G, c, s, t) is called a flow network. == Flows == Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as what is the maximum number of units that can be transferred from the source node s to the sink node t? The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. The excess function xf : V → ℝ represents the net flow entering a given node u (i.e. the sum of the flows entering u) and is defined by x f ( u ) = ∑ w ∈ V f ( w , u ) − ∑ w ∈ V f ( u , w ) . {\displaystyle x_{f}(u)=\sum _{w\in V}f(w,u)-\sum _{w\in V}f(u,w).} A node u is said to be active if xf (u) > 0 (i.e. the node u consumes flow), deficient if xf (u) < 0 (i.e. the node u produces flow), or conserving if xf (u) = 0. In flow networks, the source s is deficient, and the sink t is active. Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. A pseudo-flow is a function f of each edge in the network that satisfies the following two constraints for all nodes u and v: Skew symmetry constraint: The flow on an arc from u to v is equivalent to the negation of the flow on the arc from v to u, that is: f (u, v) = −f (v, u). The sign of the flow indicates the flow's direction. Capacity constraint: An arc's flow cannot exceed its capacity, that is: f (u, v) ≤ c(u, v). A pre-flow is a pseudo-flow that, for all v ∈ V \{s}, satisfies the additional constraint: Non-deficient flows: The net flow entering the node v is non-negative, except for the source, which "produces" flow. That is: xf (v) ≥ 0 for all v ∈ V \{s}. A feasible flow, or just a flow, is a pseudo-flow that, for all v ∈ V \{s, t}, satisfies the additional constraint: Flow conservation constraint: The total net flow entering a node v is zero for all nodes in the network except the source s and the sink t, that is: xf (v) = 0 for all v ∈ V \{s, t}. 
In other words, for all nodes in the network except the source s and the sink t, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e. ∑ ( u , v ) ∈ E f ( u , v ) = ∑ ( v , z ) ∈ E f ( v , z ) {\displaystyle \sum _{(u,v)\in E}f(u,v)=\sum _{(v,z)\in E}f(v,z)} , for each vertex v ∈ V \{s, t}). The value |f| of a feasible flow f for a network, is the net flow into the sink t of the flow network, that is: |f| = xf (t). Note, the flow value in a network is also equal to the total outgoing flow of source s, that is: |f| = −xf (s). Also, if we define A as a set of nodes in G such that s ∈ A and t ∉ A, the flow value is equal to the total net flow going out of A (i.e. |f| = f out(A) − f in(A)). The flow value in a network is the total amount of flow from s to t. == Concepts useful to flow problems == === Flow decomposition === Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters. === Adding arcs and flows === We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc: Given any two nodes u and v, having two arcs from u to v with capacities c1(u,v) and c2(u,v) respectively is equivalent to considering only a single arc from u to v with a capacity equal to c1(u,v)+c2(u,v). Given any two nodes u and v, having two arcs from u to v with pseudo-flows f1(u,v) and f2(u,v) respectively is equivalent to considering only a single arc from u to v with a pseudo-flow equal to f1(u,v)+f2(u,v). Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero. === Residuals === The residual capacity of an arc e with respect to a pseudo-flow f is denoted cf, and it is the difference between the arc's capacity and its flow. That is, cf (e) = c(e) − f(e). From this we can construct a residual network, denoted Gf (V, Ef), with a capacity function cf which models the amount of available capacity on the set of arcs in G = (V, E). More specifically, capacity function cf of each arc (u, v) in the residual network represents the amount of flow which can be transferred from u to v given the current state of the flow within the network. This concept is used in Ford–Fulkerson algorithm which computes the maximum flow in a flow network. Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network. Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v. === Augmenting paths === An augmenting path is a path (u1, u2, ..., uk) in the residual network, where u1 = s, uk = t, and for all ui, ui + 1 (cf (ui, ui + 1) > 0) (1 ≤ i < k). More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network Gf. 
The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow. The term "augmenting the flow" for an augmenting path means updating the flow f of each arc in this augmenting path to equal the capacity c of the bottleneck. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. === Multiple sources and/or sinks === Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink. == Example == In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity is denoted f / c {\displaystyle f/c} . Notice how the network upholds the capacity constraint and flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. By the skew symmetry constraint, from c to a is -2 because the flow from a to c is 2. In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge ( d , c ) {\displaystyle (d,c)} . This network is not at maximum flow. There is available capacity along the paths ( s , a , c , t ) {\displaystyle (s,a,c,t)} , ( s , a , b , d , t ) {\displaystyle (s,a,b,d,t)} and ( s , a , b , d , c , t ) {\displaystyle (s,a,b,d,c,t)} , which are then the augmenting paths. The bottleneck of the ( s , a , c , t ) {\displaystyle (s,a,c,t)} path is equal to min ( c ( s , a ) − f ( s , a ) , c ( a , c ) − f ( a , c ) , c ( c , t ) − f ( c , t ) ) {\displaystyle \min(c(s,a)-f(s,a),c(a,c)-f(a,c),c(c,t)-f(c,t))} = min ( c f ( s , a ) , c f ( a , c ) , c f ( c , t ) ) {\displaystyle =\min(c_{f}(s,a),c_{f}(a,c),c_{f}(c,t))} = min ( 5 − 3 , 3 − 2 , 2 − 1 ) {\displaystyle =\min(5-3,3-2,2-1)} = min ( 2 , 1 , 1 ) = 1 {\displaystyle =\min(2,1,1)=1} . == Applications == Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. 
Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law. Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time. == Classifying flow problems == The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another. In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the same transportation network. In a minimum cost flow problem, each edge u , v {\displaystyle u,v} has a given cost k ( u , v ) {\displaystyle k(u,v)} , and the cost of sending the flow f ( u , v ) {\displaystyle f(u,v)} across the edge is f ( u , v ) ⋅ k ( u , v ) {\displaystyle f(u,v)\cdot k(u,v)} . The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. In a circulation problem, you have a lower bound ℓ ( u , v ) {\displaystyle \ell (u,v)} on the edges, in addition to the upper bound c ( u , v ) {\displaystyle c(u,v)} . Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with ℓ ( t , s ) {\displaystyle \ell (t,s)} and c ( t , s ) {\displaystyle c(t,s)} . The flow circulates through the network, hence the name of the problem. In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head. In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks. 
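As a concrete illustration of the augmenting-path approach to the maximum flow problem described above, here is a minimal Edmonds–Karp-style sketch in Python (an illustrative implementation written for this article, not taken from any particular library): it repeatedly finds a shortest augmenting path in the residual network by breadth-first search, augments by the bottleneck value, and stops when no augmenting path remains.

from collections import deque

def max_flow(capacity, s, t):
    # capacity: dict mapping (u, v) -> arc capacity; missing pairs mean 0.
    # Residual capacities start equal to the original capacities.
    residual = dict(capacity)
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    flow_value = 0
    while True:
        # Breadth-first search for an augmenting path s -> t in the residual network.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left: maximum flow reached
            return flow_value, residual
        # Bottleneck = minimum residual capacity along the path found.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[(u, v)] for u, v in path)
        # Augment: push the bottleneck along the path, crediting reverse arcs.
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] = residual.get((v, u), 0) + bottleneck
        flow_value += bottleneck

# Hypothetical example with source 's', sink 't' and two intermediate nodes:
caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(caps, 's', 't')[0])   # prints 5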
== See also == Braess's paradox Centrality Ford–Fulkerson algorithm Edmonds–Karp algorithm Dinic's algorithm Traffic flow (computer networking) Flow graph (disambiguation) Max-flow min-cut theorem Oriented matroid Shortest path problem Nowhere-zero flow == References == == Further reading == George T. Heineman; Gary Pollice; Stanley Selkow (2008). "Chapter 8: Network Flow Algorithms". Algorithms in a Nutshell. O'Reilly Media. pp. 226–250. ISBN 978-0-596-51624-6. Ravindra K. Ahuja; Thomas L. Magnanti; James B. Orlin (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 0-13-617549-X. Bollobás, Béla (1979). Graph Theory: An Introductory Course. Heidelberg: Springer-Verlag. ISBN 3-540-90399-2. Chartrand, Gary; Oellermann, Ortrud R. (1993). Applied and Algorithmic Graph Theory. New York: McGraw-Hill. ISBN 0-07-557101-3. Even, Shimon (1979). Graph Algorithms. Rockville, Maryland: Computer Science Press. ISBN 0-914894-21-8. Gibbons, Alan (1985). Algorithmic Graph Theory. Cambridge: Cambridge University Press. ISBN 0-521-28881-9. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001) [1990]. "26". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 696–697. ISBN 0-262-03293-7. == External links == Maximum Flow Problem Real graph instances Lemon C++ library with several maximum flow and minimum cost circulation algorithms QuickGraph Archived 2018-01-21 at the Wayback Machine, graph data structures and algorithms for .Net
Wikipedia/Random_networks
The field of complex networks has emerged as an important area of science to generate novel insights into the nature of complex systems. The application of network theory to climate science is a young and emerging field. To identify and analyze patterns in global climate, scientists model climate data as complex networks. Unlike most real-world networks where nodes and edges are well defined, in climate networks, nodes are identified as the sites in a spatial grid of the underlying global climate data set, which can be represented at various resolutions. Two nodes are connected by an edge depending on the degree of statistical similarity (that may be related to dependence) between the corresponding pairs of time-series taken from climate records. The climate network approach enables novel insights into the dynamics of the climate system over different spatial and temporal scales. == Construction of climate networks == Depending upon the choice of nodes and/or edges, climate networks may take many different forms, shapes, sizes and complexities. Tsonis et al. introduced the field of complex networks to climate. In their model, the nodes for the network were constituted by a single variable (500 hPa) from NCEP/NCAR Reanalysis datasets. To define the edges between nodes, the correlation coefficient at zero time lag between all possible pairs of nodes was estimated. A pair of nodes was considered to be connected if their correlation coefficient was above a threshold of 0.5. Steinhaeuser and team introduced the novel technique of multivariate networks in climate by constructing networks from several climate variables separately and capturing their interaction in a multivariate predictive model. It was demonstrated in their studies that, in the context of climate, extracting predictors based on cluster attributes yields informative precursors that improve predictive skill. Kawale et al. presented a graph-based approach to find dipoles in pressure data. Given the importance of teleconnection, this methodology has potential to provide significant insights. Imme et al. introduced a new type of network construction in climate based on a temporal probabilistic graphical model, which provides an alternative viewpoint by focusing on information flow within the network over time. Agarwal et al. proposed advanced linear and nonlinear methods to construct and investigate climate networks at different timescales. Climate networks constructed using SST datasets at different timescales suggested that multi-scale analysis of climatic processes holds the promise of better understanding system dynamics that may be missed when processes are analyzed at one timescale only. == Applications of climate networks == Climate networks enable insights into the dynamics of the climate system over many spatial scales. The local degree centrality and related measures have been used to identify super-nodes and to associate them with known dynamical interrelations in the atmosphere, called teleconnection patterns. It was observed that climate networks possess “small world” properties owing to the long-range spatial connections. Steinhaeuser et al. applied complex networks to explore the multivariate and multi-scale dependence in climate data. Findings of the group suggested a close similarity of observed dependence patterns in multiple variables over multiple time and spatial scales. Tsonis and Roeber investigated the coupling architecture of the climate network. It was found that the overall network emerges from intertwined subnetworks. 
One subnetwork operates at higher latitudes and the other operates in the tropics, while the equatorial subnetwork acts as an agent linking the two hemispheres. Although both subnetworks possess the small-world property, the two subnetworks differ significantly from each other in terms of network properties such as the degree distribution. Donges et al. applied climate networks for physics and nonlinear dynamical interpretations in climate. The team used a measure of node centrality, betweenness centrality (BC), to demonstrate the wave-like structures in the BC fields of climate networks constructed from monthly averaged reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature (SAT) data. == Teleconnection path == Teleconnections are spatial patterns in the atmosphere that link weather and climate anomalies over large distances across the globe. Teleconnections have the characteristics that they are persistent, lasting for 1 to 2 weeks, and often much longer, and they are recurrent, as similar patterns tend to occur repeatedly. The presence of teleconnections is associated with changes in temperature, wind, precipitation, and other atmospheric variables of greatest societal interest. == Computational issues and challenges == There are numerous computational challenges that arise at various stages of the network construction and analysis process in the field of climate networks: Calculating the pair-wise correlations between all grid points is a non-trivial task. Computational demands of network construction, which depend upon the resolution of the spatial grid. Generation of predictive models from the data poses additional challenges. Inclusion of lag and lead effects over space and time is a non-trivial task. == See also == Community structure Network theory Network science Teleconnection Climatology == References ==
Wikipedia/Climate_as_complex_networks
Network homophily refers to the theory in network science which states that, based on node attributes, similar nodes may be more likely to attach to each other than dissimilar ones. The hypothesis is linked to the model of preferential attachment and draws from the phenomenon of homophily in the social sciences; much of the scientific analysis of the creation of social ties based on similarity comes from network science. In fact, empirical research seems to indicate the frequent occurrence of homophily in real networks. Homophily in social relations may lead to a commensurate distance in networks, producing the clusters that have been observed in social networking services. Homophily is a key topic in network science as it can determine the speed of the diffusion of information and ideas. == Node attributes and homophily == The existence of network homophily may necessitate a closer examination of node attributes as opposed to other theories on network evolution which focus on network properties. It is often assumed that nodes are identical and the evolution of networks is determined by the characteristics of the broader network such as the degree. Degree heterogeneity is also observed as a prevalent phenomenon (with a large number of nodes having a small number of links and a few of them having many). It may be linked to homophily as the two seem to show similar characteristics in networks. A large number of excess links caused by degree heterogeneity might be confused with homophily. == Influence on network evolution == Kim and Altmann (2017) find that homophily may affect the evolution of the degree distribution of scale-free networks. More specifically, homophily may cause a bias towards convexity instead of the often hypothesised concave shape of networks. Thus, homophily can significantly (and uniformly) affect the emergence of scale-free networks influenced by preferential attachment, regardless of the type of seed networks observed (e.g. whether they are centralized or decentralized). The size of clusters might, however, affect the magnitude of relative homophily. A higher level of homophily can be associated with a more convex cumulative degree distribution instead of a concave one. Although not as salient, the link density of the network might also lead to short-term, localized deviations in the shape of the distribution. In the development of the shape of the cumulative degree distribution curve, the effects of the link structure of existing nodes (among themselves and with new nodes) and homophily work against each other, with the former leading to concavity and homophily causing convexity. Consequently, there is a level of homophily such that the two effects cancel each other out and the cumulative degree distribution reaches a linear shape in a log-log scale. The large variety of shapes observed in empirical studies of real complex networks may be explained by these phenomena. A low level of homophily could then be linked to a convex shape of cumulative degree distributions which have been observed in networks of Facebook wall posts, Flickr users, and message boards, while linear shapes have been noted in the networks of software class dependency, Yahoo adverts, and YouTube users. Compared to these two shapes, convexity seems to be much less prevalent with examples in Google Plus and Filmtipset networks. 
This can be explained by the argument that high levels of homophily may significantly decrease the viability of networks, hence making convexity less frequent in complex networks. === Long-run convergence === In the long run, networks tend to converge in the case of unbiased network-based search. Nevertheless, younger nodes might show some bias in their connections. Bias may arise during network development through random meetings and network-based search, which are the two main processes through which new agents connect to established nodes. Bramoullé et al. (2012) illustrate this by conducting a study on the citation network of physics journals from the American Physical Society (APS) between 1985 and 2003. The two stages of the network development process for new nodes in this context are the random but potentially type-biased finding of an article or reference by authors, and the discovery of references through citations in popular articles. Because similar articles are likely to cite similar references, bias may arise in the formation of connections. Convergence is explained by three models of integration: weak integration, long-run integration, and partial integration. Weak integration states that well-established nodes have a higher tendency to create new connections than young nodes, regardless of the type of the node. Thus bias in link probabilities is eliminated over time as nodes age. Long-run integration states that the types of neighbouring nodes of any node will converge to the global distribution of types of the network as a whole, which eliminates biases among neighbouring nodes. Partial integration causes the distribution of types in neighbouring nodes to converge monotonically to the global distribution with time, albeit with some bias in the limit. Homophily leads new nodes to connect to similar nodes with a higher probability, but this bias is less apparent between second-degree nodes than between first-degree nodes of any given node. With time, the connections created by network-based search become more and more prevalent (with the increase in the number of neighbours), and because second-degree connections contain more and more randomly found nodes, the connections of older nodes become more diverse and less influenced by homophily. Thus the citations of an older article are likely to come from a larger variety of subjects and scientific research fields. == References ==
Wikipedia/Network_homophily
In network science, a gradient network is a directed subnetwork of an undirected "substrate" network where each node has an associated scalar potential and one out-link that points to the node with the smallest (or largest) potential in its neighborhood, defined as the union of itself and its neighbors on the substrate network. == Definition == Transport takes place on a fixed network G = G ( V , E ) {\displaystyle G=G(V,E)} called the substrate graph. It has N nodes, V = { 0 , 1 , . . . , N − 1 } {\displaystyle V=\{0,1,...,N-1\}} and the set of edges E = { ( i , j ) | i , j ∈ V } {\displaystyle E=\{(i,j)|i,j\in V\}} . Given a node i, we can define its set of neighbors in G by Si(1) = {j ∈ V | (i,j)∈ E}. Let us also consider a scalar field, h = {h0, .., hN−1} defined on the set of nodes V, so that every node i has a scalar value hi associated to it. The gradient ∇hi at node i is the directed edge ∇hi = (i, μ(i)) from i to μ(i), where μ(i) ∈ Si(1) ∪ {i} and hμ(i) has the maximum value in { h j | j ∈ S i ( 1 ) ∪ { i } } {\displaystyle \{h_{j}|j\in S_{i}^{(1)}\cup \{i\}\}} . The gradient network is ∇ G = ( V , F ) {\displaystyle \nabla G=(V,F)} , where F is the set of gradient edges on G. In general, the scalar field depends on time, due to the flow, external sources and sinks on the network. Therefore, the gradient network ∇ G {\displaystyle \nabla G} will be dynamic. == Motivation and history == The concept of a gradient network was first introduced by Toroczkai and Bassler (2004). Generally, real-world networks (such as citation graphs, the Internet, cellular metabolic networks, the worldwide airport network), which often evolve to transport entities such as information, cars, power, water, forces, and so on, are not globally designed; instead, they evolve and grow through local changes. For example, if a router on the Internet is frequently congested and packets are lost or delayed due to that, it will be replaced by several interconnected new routers. Moreover, this flow is often generated or influenced by local gradients of a scalar. For example, electric current is driven by a gradient of electric potential. In information networks, properties of nodes will generate a bias in the way information is transmitted from a node to its neighbors. This idea motivated the approach to study the flow efficiency of a network by using gradient networks, where the flow is driven by gradients of a scalar field distributed on the network. Recent research investigates the connection between network topology and the flow efficiency of transport on it. == In-degree distribution of gradient networks == In a gradient network, the in-degree of a node i, ki (in) is the number of gradient edges pointing into i, and the in-degree distribution is R ( l ) = P { k i ( i n ) = l } {\displaystyle R(l)=P\{k_{i}^{(in)}=l\}} . When the substrate G is a random graph in which each pair of nodes is connected with probability P (i.e. an Erdős–Rényi random graph) and the scalars hi are i.i.d. (independent and identically distributed), an exact expression for R(l) can be derived. In the limit N → ∞ {\displaystyle N\to \infty } and P → 0 {\displaystyle P\to 0} , the in-degree distribution approaches a power law, which shows that in this limit the gradient network of a random network is scale-free. Furthermore, if the substrate network G is scale-free, as in the Barabási–Albert model, then the gradient network also follows a power law with the same exponent as that of G. 
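The construction just described is easy to state algorithmically: for each node i, look at i together with its neighbors on the substrate and draw one directed edge from i to whichever of these nodes carries the largest scalar. Below is a small illustrative Python sketch (function names are our own), using an Erdős–Rényi substrate and i.i.d. uniform scalars as assumed above, and then measuring the empirical in-degree distribution R(l) of the resulting gradient network.

import random
from collections import Counter

def gradient_network(neighbors, h):
    # neighbors: dict node -> set of neighbor nodes on the substrate G
    # h: dict node -> scalar potential h_i
    # Each node i gets one out-link to the node with the largest h in
    # S_i^(1) ∪ {i} (a self-loop if i itself carries the maximum).
    return {i: max(nbrs | {i}, key=lambda j: h[j]) for i, nbrs in neighbors.items()}

def er_substrate(n, p, rng=random):
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

n, p = 2000, 0.01
nbrs = er_substrate(n, p)
h = {i: random.random() for i in range(n)}      # i.i.d. uniform scalars
grad = gradient_network(nbrs, h)

# Empirical in-degree distribution R(l) of the gradient network
indeg = Counter(grad.values())
R = Counter(indeg[i] for i in range(n))
for l in sorted(R):
    print(l, R[l] / n)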
== The congestion on networks == The fact that the topology of the substrate network influences the level of network congestion can be illustrated by a simple example: if the network has a star-like structure, then at the central node the flow would become congested, because the central node must handle all the flow from the other nodes. However, if the network has a ring-like structure, since every node takes the same role, there is no flow congestion. Under the assumption that the flow is generated by gradients in the network, flow efficiency on networks can be characterized through the jamming factor (or congestion factor), defined as follows: J = 1 − ⟨ ⟨ N receive N send ⟩ h ⟩ network = R ( 0 ) {\displaystyle J=1-\langle \langle {\frac {N_{\text{receive}}}{N_{\text{send}}}}\rangle _{h}\rangle _{\text{network}}=R(0)} where Nreceive is the number of nodes that receive gradient flow and Nsend is the number of nodes that send gradient flow. The value of J is between 0 and 1; J = 0 {\displaystyle J=0} means no congestion, and J = 1 {\displaystyle J=1} corresponds to maximal congestion. In the limit N → ∞ {\displaystyle N\to \infty } , for an Erdős–Rényi random graph, the congestion factor becomes J ( N , P ) = 1 − ln ⁡ N N ln ⁡ ( 1 1 − P ) [ 1 + O ( 1 N ) ] → 1. {\displaystyle J(N,P)=1-{\frac {\ln N}{N\ln({\frac {1}{1-P}})}}\left[1+O({\frac {1}{N}})\right]\rightarrow 1.} This result shows that random networks are maximally congested in that limit. By contrast, for a scale-free network, J is a constant for any N, which means that scale-free networks are not prone to maximal jamming. == Approaches to control congestion == One problem in communication networks is understanding how to control congestion and maintain normal and efficient network function. Zonghua Liu et al. (2006) showed that congestion is more likely to occur at nodes with high degrees in networks, and that an efficient approach of selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of nodes performs just as well as enhancing the capability of all nodes. Ana L Pastore y Piontti et al. (2008) showed that relaxational dynamics can reduce network congestion. Pan et al. (2011) studied jamming properties in a scheme where edges are given weights of a power of the scalar difference between node potentials. Niu and Pan (2016) showed that congestion can be reduced by introducing a correlation between the gradient field and the local network topology. == See also == Network dynamics Network topology Quantum complex network == References ==
Wikipedia/Gradient_network
A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals. Multiple nodes may cooperate to pass the message from an originating node to the destination node, via multiple network hops. For this routing function, each node in the network is assigned a network address for identifying and locating it on the network. The collection of addresses in the network is called the address space of the network. Examples of telecommunications networks include computer networks, the Internet, the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, and the wireless radio networks of cell phone telecommunication providers. == Network structure == In general, every telecommunications network conceptually consists of three parts, or planes (so-called because they can be thought of as being, and often are, separate overlay networks): The data plane (also user plane, bearer plane, or forwarding plane) carries the network's users' traffic, the actual payload. The control plane carries control information (also known as signaling). The management plane carries the operations, administration and management traffic required for network management. The management plane is sometimes considered a part of the control plane. == Data networks == Data networks are used extensively throughout the world for communication between individuals and organizations. Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations. Terminals attached to IP networks like the Internet are addressed using IP addresses. Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network. There are many different network structures that IP can be used across to efficiently route messages, for example: Wide area networks (WAN) Metropolitan area networks (MAN) Local area networks (LAN) There are three features that differentiate MANs from LANs or WANs: The size of the network's area is between that of LANs and WANs. The MAN will have a physical area between 5 and 50 km in diameter. MANs do not generally belong to a single organization. The equipment that interconnects the network, the links, and the MAN itself are often owned by an association or a network provider that provides or leases the service to others. A MAN is a means for sharing resources at high speeds within the network. It often provides connections to WAN networks for access to resources outside the scope of the MAN. Data center networks also rely highly on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, and provide low latency and high bandwidth. Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth and latency. 
== Capacity and speed == In analogy to the improvements in the speed and capacity of digital computers, provided by advances in semiconductor technology and expressed in the doubling of transistor density roughly every two years, which is described empirically by Moore's law, the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law, proposed by and named after Phil Edholm in 2004. This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s. The trend is evident in the Internet, cellular (mobile), wireless and wired local area networks (LANs), and personal area networks. This development is a consequence of rapid advances in metal–oxide–semiconductor technology. == See also == Transcoder free operation == References ==
Wikipedia/Telecommunication_network
The stretched exponential function f β ( t ) = e − t β {\displaystyle f_{\beta }(t)=e^{-t^{\beta }}} is obtained by inserting a fractional power law into the exponential function. In most applications, it is meaningful only for arguments t between 0 and +∞. With β = 1, the usual exponential function is recovered. With a stretching exponent β between 0 and 1, the graph of log f versus t is characteristically stretched, hence the name of the function. The compressed exponential function (with β > 1) has less practical importance, with the notable exceptions of β = 2, which gives the normal distribution, and of compressed exponential relaxation in the dynamics of amorphous solids. In mathematics, the stretched exponential is also known as the complementary cumulative Weibull distribution. The stretched exponential is also the characteristic function, basically the Fourier transform, of the Lévy symmetric alpha-stable distribution. In physics, the stretched exponential function is often used as a phenomenological description of relaxation in disordered systems. It was first introduced by Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor; thus it is also known as the Kohlrausch function. In 1970, G. Williams and D.C. Watts used the Fourier transform of the stretched exponential to describe dielectric spectra of polymers; in this context, the stretched exponential or its Fourier transform are also called the Kohlrausch–Williams–Watts (KWW) function. The Kohlrausch–Williams–Watts (KWW) function corresponds to the time domain charge response of the main dielectric models, such as the Cole–Cole equation, the Cole–Davidson equation, and the Havriliak–Negami relaxation, for small time arguments. In phenomenological applications, it is often not clear whether the stretched exponential function should be used to describe the differential or the integral distribution function—or neither. In each case, one gets the same asymptotic decay, but a different power law prefactor, which makes fits more ambiguous than for simple exponentials. In a few cases, it can be shown that the asymptotic decay is a stretched exponential, but the prefactor is usually an unrelated power. == Mathematical properties == === Moments === Following the usual physical interpretation, we interpret the function argument t as time, and fβ(t) is the differential distribution. The area under the curve can thus be interpreted as a mean relaxation time. One finds ⟨ τ ⟩ ≡ ∫ 0 ∞ d t e − ( t / τ K ) β = τ K β Γ ( 1 β ) {\displaystyle \langle \tau \rangle \equiv \int _{0}^{\infty }dt\,e^{-(t/\tau _{K})^{\beta }}={\tau _{K} \over \beta }\Gamma {\left({\frac {1}{\beta }}\right)}} where Γ is the gamma function. For exponential decay, ⟨τ⟩ = τK is recovered. The higher moments of the stretched exponential function are ⟨ τ n ⟩ ≡ ∫ 0 ∞ d t t n − 1 e − ( t / τ K ) β = τ K n β Γ ( n β ) . {\displaystyle \langle \tau ^{n}\rangle \equiv \int _{0}^{\infty }dt\,t^{n-1}\,e^{-(t/\tau _{K})^{\beta }}={{\tau _{K}}^{n} \over \beta }\Gamma {\left({\frac {n}{\beta }}\right)}.} === Distribution function === In physics, attempts have been made to explain stretched exponential behaviour as a linear superposition of simple exponential decays. This requires a nontrivial distribution of relaxation times, ρ(u), which is implicitly defined by e − t β = ∫ 0 ∞ d u ρ ( u ) e − t / u . {\displaystyle e^{-t^{\beta }}=\int _{0}^{\infty }du\,\rho (u)\,e^{-t/u}.} Alternatively, a distribution G = u ρ ( u ) {\displaystyle G=u\rho (u)} is used. 
ρ can be computed from the series expansion: ρ ( u ) = − 1 π u ∑ k = 0 ∞ ( − 1 ) k k ! sin ⁡ ( π β k ) Γ ( β k + 1 ) u β k {\displaystyle \rho (u)=-{1 \over \pi u}\sum _{k=0}^{\infty }{(-1)^{k} \over k!}\sin(\pi \beta k)\Gamma (\beta k+1)u^{\beta k}} For rational values of β, ρ(u) can be calculated in terms of elementary functions, but the expression is in general too complex to be useful except for the case β = 1/2, where G ( u ) = u ρ ( u ) = 1 2 π u e − u / 4 {\displaystyle G(u)=u\rho (u)={1 \over 2{\sqrt {\pi }}}{\sqrt {u}}e^{-u/4}} Figure 2 shows the same results plotted in both a linear and a log representation. The curves converge to a Dirac delta function peaked at u = 1 as β approaches 1, corresponding to the simple exponential function. The moments of the original function can be expressed as ⟨ τ n ⟩ = Γ ( n ) ∫ 0 ∞ d τ τ n ρ ( τ ) . {\displaystyle \langle \tau ^{n}\rangle =\Gamma (n)\int _{0}^{\infty }d\tau \,\tau ^{n}\,\rho (\tau ).} The first logarithmic moment of the distribution of simple-exponential relaxation times is ⟨ ln ⁡ τ ⟩ = ( 1 − 1 β ) E u + ln ⁡ τ K {\displaystyle \langle \ln \tau \rangle =\left(1-{1 \over \beta }\right){\rm {Eu}}+\ln \tau _{K}} where Eu is the Euler constant. == Fourier transform == To describe results from spectroscopy or inelastic scattering, the sine or cosine Fourier transform of the stretched exponential is needed. It must be calculated either by numeric integration or from a series expansion. The series here as well as the one for the distribution function are special cases of the Fox–Wright function. For practical purposes, the Fourier transform may be approximated by the Havriliak–Negami function, though nowadays the numeric computation can be done so efficiently that there is no longer any reason not to use the Kohlrausch–Williams–Watts function in the frequency domain. == History and further applications == As noted in the introduction, the stretched exponential was introduced by the German physicist Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor (Leyden jar) that used glass as the dielectric medium. The next documented usage was by Friedrich Kohlrausch, son of Rudolf, to describe torsional relaxation. A. Werner used it in 1907 to describe complex luminescence decays; Theodor Förster in 1949 as the fluorescence decay law of electronic energy donors. Outside condensed matter physics, the stretched exponential has been used to describe the removal rates of small, stray bodies in the solar system, the diffusion-weighted MRI signal in the brain, and the production from unconventional gas wells. === In probability === If the integrated distribution is a stretched exponential, the normalized probability density function is given by p ( τ ∣ λ , β ) d τ = λ Γ ( 1 + β − 1 ) e − ( τ λ ) β d τ {\displaystyle p(\tau \mid \lambda ,\beta )~d\tau ={\frac {\lambda }{\Gamma (1+\beta ^{-1})}}~e^{-(\tau \lambda )^{\beta }}~d\tau } Note that, confusingly, some authors have been known to use the name "stretched exponential" to refer to the Weibull distribution. === Modified functions === A modified stretched exponential function f β ( t ) = e − t β ( t ) {\displaystyle f_{\beta }(t)=e^{-t^{\beta (t)}}} with a slowly t-dependent exponent β has been used for biological survival curves. 
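The moment formulas given earlier are easy to check numerically. The short Python sketch below (standard library only; step size and cutoff are arbitrary choices for illustration) compares the closed-form mean relaxation time ⟨τ⟩ = (τK/β) Γ(1/β) with a direct numerical integration of the stretched exponential.

import math

def mean_relaxation_time(beta, tau_k=1.0):
    # Closed form: <tau> = (tau_K / beta) * Gamma(1 / beta)
    return (tau_k / beta) * math.gamma(1.0 / beta)

def mean_relaxation_time_numeric(beta, tau_k=1.0, dt=1e-4, t_max=200.0):
    # Trapezoidal integration of exp(-(t / tau_K)^beta) from 0 to t_max;
    # t_max and dt are chosen large/small enough for beta around 0.5 to 2.
    n = int(t_max / dt)
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-((t / tau_k) ** beta))
    return total * dt

for beta in (0.5, 1.0, 2.0):
    print(beta, mean_relaxation_time(beta), mean_relaxation_time_numeric(beta))
# For beta = 1 both give tau_K; for beta = 0.5 the closed form gives
# 2 * Gamma(2) = 2, and for beta = 2 it gives 0.5 * Gamma(0.5) ≈ 0.886.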
=== Wireless Communications === In wireless communications, a scaled version of the stretched exponential function has been shown to appear in the Laplace Transform for the interference power I {\displaystyle I} when the transmitters' locations are modeled as a 2D Poisson Point Process with no exclusion region around the receiver. The Laplace transform can be written for arbitrary fading distribution as follows: L I ( s ) = exp ⁡ ( − π λ E [ g 2 η ] Γ ( 1 − 2 η ) s 2 η ) = exp ⁡ ( − t s β ) {\displaystyle L_{I}(s)=\exp \left(-\pi \lambda \mathbb {E} {\left[g^{\frac {2}{\eta }}\right]}\Gamma {\left(1-{\frac {2}{\eta }}\right)}s^{\frac {2}{\eta }}\right)=\exp \left(-ts^{\beta }\right)} where g {\displaystyle g} is the power of the fading, η {\displaystyle \eta } is the path loss exponent, λ {\displaystyle \lambda } is the density of the 2D Poisson Point Process, Γ ( ⋅ ) {\displaystyle \Gamma (\cdot )} is the Gamma function, and E [ x ] {\displaystyle \mathbb {E} [x]} is the expectation of the variable x {\displaystyle x} . The same reference also shows how to obtain the inverse Laplace Transform for the stretched exponential exp ⁡ ( − s β ) {\displaystyle \exp \left(-s^{\beta }\right)} for higher order integer β = β q β b {\displaystyle \beta =\beta _{q}\beta _{b}} from lower order integers β a {\displaystyle \beta _{a}} and β b {\displaystyle \beta _{b}} . == Internet Streaming == The stretched exponential has been used to characterize Internet media accessing patterns, such as YouTube and other stable streaming media sites. The commonly agreed power-law accessing patterns of Web workloads mainly reflect text-based content Web workloads, such as daily updated news sites. == References == == External links == J. Wuttke: libkww C library to compute the Fourier transform of the stretched exponential function
Wikipedia/Stretched_exponential_function
Organizational network analysis (ONA) is a method for studying communication and socio-technical networks within a formal organization. This technique creates statistical and graphical models of the people, tasks, groups, knowledge and resources of organizational systems. It is based on social network theory and more specifically, dynamic network analysis. == Applications == ONA can be used in a variety of ways by managers, consultants, and executives. === Network visualizations === There are several tools that allow managers to visually depict their employee networks. Most of the tools are built specifically for researchers and academics who study network theory, but are relatively inexpensive to use, as long as the leaders are well-versed in how to capture the information, feed it into the tool in the correct formats, and understand how to "read" and translate the network graphs into business decisions. === Innovation gauge === Several recent studies have highlighted that 'Psychological Safety' is a marker of an innovative team. This was first studied and published by Google in its Project Aristotle work, and has been highlighted in The New York Times and other research publications. Amy Edmondson is the preeminent scholar and researcher in this field who has worked across various industries to identify the benefits and even the characteristics of 'Psychological Safety' in teams. ONA is now increasingly being used in this context to analyze the relationships developed within a given team, and to understand how that team works as a unit to create this psychological safety for its members. This technique is more thorough than traditional surveys. === Employee engagement === Engagement surveys and other such culture surveys have become a mainstay of the workplace. However, one of the largest complaints about such surveys is that once managers see the results, often the aggregated sentiments of their employees, they are unsure of next steps and actions. Organizational network analysis, when combined with such engagement surveys, however, changes the way that leaders use and leverage these results. Because ONA allows managers to see the context behind the sentiments, they can actually understand how to correct or sustain these results. For example, if a company's engagement survey said 30% of the employees felt they were inadequately trained for their jobs, a manager might be inclined either to do nothing or to invest more in comprehensive training programs. However, doing an ONA alongside this might reveal to managers that employees are unhappy with training because they have limited access to institutional knowledge at the company. Then, instead of a training program, managers might simply work on ensuring that their top knowledge hubs share their knowledge broadly, achieving a longer-lasting, more sustainable improvement in the team's level of information and training. == References ==
Wikipedia/Organizational_network_analysis
Network medicine is the application of network science towards identifying, preventing, and treating diseases. This field focuses on using network topology and network dynamics towards identifying diseases and developing medical drugs. Biological networks, such as protein-protein interactions and metabolic pathways, are utilized by network medicine. Disease networks, which map relationships between diseases and biological factors, also play an important role in the field. Epidemiology is extensively studied using network science as well; social networks and transportation networks are used to model the spreading of disease across populations. Network medicine is a medically focused area of systems biology. == Background == The term "network medicine" was introduced by Albert-László Barabási in the article "Network Medicine – From Obesity to the 'Diseasome'", published in The New England Journal of Medicine, in 2007. Barabási states that biological systems, similarly to social and technological systems, contain many components that are connected in complicated relationships but are organized by simple principles. Relying on the tools and principles of network theory, the organizing principles can be analyzed by representing systems as complex networks, which are collections of nodes linked together by a particular biological or molecular relationship. For networks pertaining to medicine, nodes represent biological factors (biomolecules, diseases, phenotypes, etc.) and links (edges) represent their relationships (physical interactions, shared metabolic pathway, shared gene, shared trait, etc.). Barabási suggested that understanding human disease requires us to focus on three key networks: the metabolic network, the disease network, and the social network. Network medicine is based on the idea that understanding the complexity of gene regulation, metabolic reactions, and protein-protein interactions, and representing these as complex networks, will shed light on the causes and mechanisms of diseases. It is possible, for example, to infer a bipartite graph representing the connections of diseases to their associated genes using the OMIM database. The projection of the diseases, called the human disease network (HDN), is a network of diseases connected to each other if they share a common gene. Using the HDN, diseases can be classified and analyzed through the genetic relationships between them. Network medicine has proven to be a valuable tool in analyzing big biomedical data. == Research areas == === Interactome === The whole set of molecular interactions in the human cell, also known as the interactome, can be used for disease identification and prevention. These networks have been technically classified as scale-free, disassortative, small-world networks with high betweenness centrality. Protein-protein interactions have been mapped, using proteins as nodes and their mutual interactions as links. These maps utilize databases such as BioGRID and the Human Protein Reference Database. The metabolic network encompasses the biochemical reactions in metabolic pathways, connecting two metabolites if they are in the same pathway. Researchers have used databases such as KEGG to map these networks. Other networks include cell signaling networks, gene regulatory networks, and RNA networks. Using interactome networks, one can discover and classify diseases, as well as develop treatments through knowledge of their associations and their role in the networks. 
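The gene-based human disease network mentioned in the Background above is obtained by a simple bipartite projection: start from a disease–gene association table and connect two diseases whenever they share at least one gene. A minimal Python sketch of this projection (the toy association data below are purely hypothetical and are not taken from OMIM):

from itertools import combinations

# Hypothetical disease -> associated genes mapping (illustration only).
disease_genes = {
    "disease_A": {"G1", "G2", "G3"},
    "disease_B": {"G3", "G4"},
    "disease_C": {"G5"},
    "disease_D": {"G2", "G5"},
}

def project_hdn(disease_genes):
    # Human disease network: link two diseases if they share >= 1 gene;
    # the edge weight records how many genes they share.
    hdn = {}
    for d1, d2 in combinations(sorted(disease_genes), 2):
        shared = disease_genes[d1] & disease_genes[d2]
        if shared:
            hdn[(d1, d2)] = len(shared)
    return hdn

print(project_hdn(disease_genes))
# {('disease_A', 'disease_B'): 1, ('disease_A', 'disease_D'): 1, ('disease_C', 'disease_D'): 1}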
One observation is that diseases can be classified not by their principal phenotypes (pathophenotype) but by their disease module, which is a neighborhood or group of components in the interactome that, if disrupted, results in a specific pathophenotype. Disease modules can be used in a variety of ways, such as predicting disease genes that have not been discovered yet. Therefore, network medicine looks to identify the disease module for a specific pathophenotype using clustering algorithms. === Diseasome === Human disease networks, also called the diseasome, are networks in which the nodes are diseases and the links represent the strength of correlation between them. This correlation is commonly quantified based on associated cellular components that two diseases share. The first-published human disease network (HDN) looked at genes, finding that many of the disease associated genes are non-essential genes, as these are the genes that do not completely disrupt the network and are able to be passed down through generations. Metabolic disease networks (MDN), in which two diseases are connected by a shared metabolite or metabolic pathway, have also been extensively studied and are especially relevant in the case of metabolic disorders. Three representations of the diseasome are: Shared gene formalism states that if a gene is linked to two different disease phenotypes, then the two diseases likely have a common genetic origin (genetic disorders). Shared metabolic pathway formalism states that if a metabolic pathway is linked to two different diseases, then the two diseases likely have a shared metabolic origin (metabolic disorders). Disease comorbidity formalism uses phenotypic disease networks (PDN), where two diseases are linked if the observed comorbidity between their phenotypes exceeds a predefined threshold. This does not look at the mechanism of action of diseases, but captures disease progression and how highly connected diseases correlate to higher mortality rates. Some disease networks connect diseases to associated factors outside the human cell. Networks of environmental and genetic etiological factors linked with shared diseases, called the "etiome", can also be used to assess the clustering of environmental factors in these networks and understand the role of the environment on the interactome. The human symptom-disease network (HSDN), published in June 2014, showed that the symptoms of disease and disease associated cellular components were strongly correlated and that diseases of the same categories tend to form highly connected communities, with respect to their symptoms. === Pharmacology === Network pharmacology is a developing field based in systems pharmacology that looks at the effect of drugs on both the interactome and the diseasome. The topology of a biochemical reaction network determines the shape of the drug dose-response curve as well as the type of drug-drug interactions, and thus can help in designing efficient and safe therapeutic strategies. In addition, the drug-target network (DTN) can play an important role in understanding the mechanisms of action of approved and experimental drugs. The network theory view of pharmaceuticals is based on the effect of the drug in the interactome, especially the region that the drug target occupies. Combination therapy for a complex disease (polypharmacology) is suggested in this field since one active pharmaceutical ingredient (API) aimed at one target may not affect the entire disease module.
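The rationale for combination therapy can be illustrated with a toy sketch. The drug-target sets and the disease module below are invented, and the greedy covering loop is only a simple illustrative heuristic, not a method from the network pharmacology literature; it merely shows why a single API aimed at one target rarely reaches the whole module:

# Toy illustration of the polypharmacology rationale: a single drug hits
# only part of a disease module, so several drugs are combined until the
# module is covered. Drug-target data and the module are hypothetical.
disease_module = {"P1", "P2", "P3", "P4", "P5"}
drug_targets = {
    "drug_X": {"P1", "P2"},
    "drug_Y": {"P3"},
    "drug_Z": {"P4", "P5", "P9"},
}

uncovered = set(disease_module)
combination = []
while uncovered:
    # pick the drug whose targets cover the most still-uncovered module proteins
    best = max(drug_targets, key=lambda d: len(drug_targets[d] & uncovered))
    if not drug_targets[best] & uncovered:
        break  # remaining proteins are not targeted by any available drug
    combination.append(best)
    uncovered -= drug_targets[best]

print(combination)  # ['drug_X', 'drug_Z', 'drug_Y'] covers the whole module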
The concept of disease modules can be used to aid in drug discovery, drug design, and the development of biomarkers for disease detection. There are a variety of ways of identifying drugs using network pharmacology; a simple example of this is the "guilt by association" method. This states that if two diseases are treated by the same drug, a drug that treats one disease may treat the other. Drug repurposing, drug-drug interactions and drug side-effects have also been studied in this field. The next iteration of network pharmacology used entirely different disease definitions, defined as dysfunction in signaling modules derived from protein-protein interaction modules. The latter as well as the interactome had many conceptual shortcomings, e.g., each protein appears only once in the interactome, whereas in reality, one protein can occur in different contexts and different cellular locations. Such signaling modules are therapeutically best targeted at several sites, which is now the new and clinically applied definition of network pharmacology. To achieve higher than current precision, patients must not be selected solely on the basis of descriptive phenotypes but also based on diagnostics that detect the module dysregulation. Moreover, such mechanism-based network pharmacology has the advantage that the drugs used within one module are highly synergistic, which allows for reducing the doses of each drug, which then reduces the potential of these drugs acting on other proteins outside the module and hence the chance of unwanted side effects. === Network epidemics === The field of network epidemics has been built by applying network science to existing epidemic models, as many transportation networks and social networks play a role in the spread of disease. Social networks have been used to assess the role of social ties in the spread of obesity in populations. Epidemic models and concepts, such as spreading and contact tracing, have been adapted for use in network analysis. These models can be used in public health policies, in order to implement strategies such as targeted immunization, and have recently been used to model the spread of the Ebola virus epidemic in West Africa across countries and continents. === Drug prescription networks (DPNs) === Recently, some researchers have tended to represent medication use in the form of networks. The nodes in these networks represent medications and the edges represent some sort of relationship between these medications. Cavallo et al. (2013) described the topology of a co-prescription network to demonstrate which drug classes are most co-prescribed. Bazzoni et al. (2015) concluded that the DPNs of co-prescribed medications are dense, highly clustered, modular and assortative. Askar et al. (2021) created a network of severe drug-drug interactions (DDIs), showing that it consisted of many clusters. === Other networks === The development of organs and other biological systems can be modelled as network structures where the clinical (e.g., radiographic, functional) characteristics can be represented as nodes and the relationships between these characteristics are represented as the links among such nodes. Therefore, it is possible to use networks to model how organ systems dynamically interact. == Educational and clinical implementation == The Channing Division of Network Medicine at Brigham and Women's Hospital was created in 2012 to study, reclassify, and develop treatments for complex diseases using network science and systems biology.
It currently involves more than 80 Harvard Medical School (HMS) faculty and focuses on three areas: Chronic Disease Epidemiology uses genomics and metabolomics in large, long-term epidemiology studies, such as the Nurses' Health Study. Systems Genetics & Genomics focuses on complex respiratory diseases, specifically COPD and asthma, in smaller population studies. Systems Pathology uses multidisciplinary approaches, including control theory, dynamical systems, and combinatorial optimization, to understand complex diseases and guide biomarker design. Massachusetts Institute of Technology offers an undergraduate course called "Network Medicine: Using Systems Biology and Signaling Networks to Create Novel Cancer Therapeutics". Also, Harvard Catalyst (The Harvard Clinical and Translational Science Center) offers a three-day course entitled "Introduction to Network Medicine", open to clinical and science professionals with doctorate degrees. Current worldwide efforts in network medicine are coordinated by the Network Medicine Institute and Global Alliance, representing 33 leading universities and institutions around the world committed to improving global health. == See also == == References ==
Wikipedia/Network_medicine
Policy network analysis is a field of research in political science focusing on the links and interdependence between sections of government and other societal actors, aiming to understand the policy-making process and public policy outcomes. == Definition of policy networks == Although the number of definitions is almost as large as the number of approaches to analysis, Rhodes: 426 aims to offer a minimally exclusive starting point: "Policy networks are sets of formal institutional and informal linkages between governmental and other actors structured around shared if endlessly negotiated beliefs and interests in public policy making and implementation." == Possible typologies of policy networks == As Thatcher: 391 notes, policy network approaches initially aimed to model specific forms of state-interest group relations, without giving exhaustive typologies. === Policy communities vs. Issue networks === The most widely used paradigm of the 1970s and 1980s only analyzed two specific types of policy networks: policy communities and issue networks. Justifications of the usage of these concepts were deduced from empirical case studies. Policy communities refer to relatively slowly changing networks that define the context of policy-making in specific policy segments. The network links are generally perceived as the relational ties between bureaucrats, politicians and interest groups. The main characteristic of policy communities – compared to issue networks – is that the boundaries of the networks are more stable and more clearly defined. This concept was studied in the context of policy-making in the United Kingdom. In contrast, issue networks – a concept established in the literature on United States government – refer to a looser system, where a relatively large number of stakeholders are involved. Non-government actors in these networks usually include not only interest group representatives but also professional or academic experts. An important characteristic of issue networks is that membership is constantly changing, interdependence is often asymmetric and – compared to policy communities – it is harder to identify dominant actors. === Other possible typologies === New typological approaches appeared in the late 1980s and early 1990s with the aim of grouping policy networks into a system of mutually exclusive and collectively exhaustive categories. One possible logic of typology is based on the degree of integration, membership size and distribution of resources in the network. This categorization – perhaps most importantly represented by R. A. W. Rhodes – allows the combination of policy communities and issue networks with categories like professional network, intragovernmental network and producer network. Other approaches identify categories based on distinct patterns of state-interest group relations. Patterns include corporatism and pluralism, iron triangles, subgovernment and clientelism, while the differentiation is based on membership, stability and sectorality. == Roles of policy network analysis == As the field of policy network analysis has grown since the late 20th century, scholars have developed competing descriptive, theoretical and prescriptive accounts. Each type gives different specific content for the term policy network and uses different research methodologies. === Descriptive usage === For several authors, policy networks describe specific forms of government policy-making.
The three most important forms are interest intermediation, interorganizational analysis, and governance. ==== Interest intermediation ==== In an approach developed from the literature on US pluralism, policy networks are often analyzed in order to identify the most important actors influencing governmental decision-making. From this perspective, a network-based assessment is useful to describe power positions, the structure of oligopoly in political markets, and the institutions of interest negotiation. ==== Interorganizational analysis ==== Another branch of descriptive literature, which emerged from the study of European politics, aims to understand the interdependency in decision-making between formal political institutions and the corresponding organizational structures. This viewpoint emphasizes the importance of overlapping organizational responsibilities and the distribution of power in shaping specific policy outcomes. ==== Governance ==== A third direction of descriptive scholarship is to describe general patterns of policy-making – the formal institutions of power-sharing between government, independent state bodies and the representatives of employer and labor interests. === Theoretical usage === The two most important theoretical approaches aiming to understand and explain actors' behavior in policy networks are the following: power dependence and rational choice. ==== Power dependence ==== In power dependence models, policy networks are understood as mechanisms for exchanging resources between organizations in the network. The dynamic of exchange is determined by the comparative value of resources (e.g. legal, political or financial in nature) and individual capacities to deploy them in order to create better bargaining positions and achieve higher degrees of autonomy. ==== Rational Choice ==== In policy network analysis, theorists complement standard rational choice arguments with the insights of new institutionalism. This "actor-centered institutionalism" is used to describe policy networks as structural arrangements between relatively stable sets of public and private players. Rational choice theorists identify links between network actors as channels for exchanging multiple goods (e.g. knowledge, resources and information). === Prescriptive usage === The prescriptive literature on policy networks focuses on the phenomenon's role in constraining or enabling certain governmental action. From this viewpoint, networks are seen as central elements of the realm of policy-making, at least partially defining the desirability of the status quo – and thus a possible target of reform initiatives. The three most common network management approaches are the following: instrumental (a focus on altering dependency relations), institutional (a focus on rules, incentives and culture) and interactive (a focus on communication and negotiation). == New directions and debates == As Rhodes points out, there is a long-lasting debate in the field about general theories predicting the emergence of specific networks and corresponding policy outcomes depending on specific conditions. No theories have succeeded in achieving this level of generality yet and some scholars doubt they ever will. Other debates are focusing on describing and theorizing change in policy networks. While some political scientists state that this might not be possible, other scholars have made efforts towards the understanding of policy network dynamics.
One example is the advocacy coalition framework, which aims to analyze the effect of commonly represented beliefs (in coalitions) on policy outcomes. == See also == Political Science Political Economy Advocacy Group Rational Choice Theory Issue networks Network science == Further reading == Sabatier, Paul A. (June 1987). "Knowledge, policy-oriented learning, and policy change: an advocacy coalition framework". Science Communication. 8 (4): 649–692. doi:10.1177/0164025987008004005. S2CID 144775441. Sabatier, Paul A.; Jenkins-Smith, Hank C., eds. (1993). Policy change and learning: an advocacy coalition approach. Boulder, Colorado: Westview Press. ISBN 9780813316499. == References ==
Wikipedia/Policy_network_analysis
Hyperlink-Induced Topic Search (HITS; also known as hubs and authorities) is a link analysis algorithm that rates Web pages, developed by Jon Kleinberg. The idea behind Hubs and Authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming; that is, certain web pages, known as hubs, served as large directories that were not actually authoritative in the information that they held, but were used as compilations of a broad catalog of information that led users directly to other authoritative pages. In other words, a good hub represents a page that pointed to many other pages, while a good authority represents a page that is linked to by many different hubs. The scheme therefore assigns two scores for each page: its authority, which estimates the value of the content of the page, and its hub value, which estimates the value of its links to other pages. == History == === In journals === Many methods have been used to rank the importance of scientific journals. One such method is Garfield's impact factor. Journals such as Science and Nature are filled with numerous citations, making these magazines have very high impact factors. Thus, when comparing two more obscure journals which have received roughly the same number of citations, but one of them has received many citations from Science and Nature, the latter should be ranked higher. In other words, it is better to receive citations from an important journal than from an unimportant one. === On the Web === This phenomenon also occurs on the Internet. Counting the number of links to a page can give us a general estimate of its prominence on the Web, but a page with very few incoming links may also be prominent, if two of these links come from the home pages of sites like Yahoo!, Google, or MSN. Because these sites are of very high importance but are also search engines, a page can be ranked much higher than its actual relevance. == Algorithm == === Steps === In the HITS algorithm, the first step is to retrieve the most relevant pages to the search query. This set is called the root set and can be obtained by taking the top pages returned by a text-based search algorithm. A base set is generated by augmenting the root set with all the web pages that are linked from it and some of the pages that link to it. The web pages in the base set and all hyperlinks among those pages form a focused subgraph. The HITS computation is performed only on this focused subgraph. According to Kleinberg the reason for constructing a base set is to ensure that most (or many) of the strongest authorities are included. Authority and hub values are defined in terms of one another in a mutual recursion. An authority value is computed as the sum of the scaled hub values that point to that page. A hub value is the sum of the scaled authority values of the pages it points to. Some implementations also consider the relevance of the linked pages. The algorithm performs a series of iterations, each consisting of two basic steps: Authority update: Update each node's authority score to be equal to the sum of the hub scores of each node that points to it. That is, a node is given a high authority score by being linked from pages that are recognized as Hubs for information. Hub update: Update each node's hub score to be equal to the sum of the authority scores of each node that it points to. That is, a node is given a high hub score by linking to nodes that are considered to be authorities on the subject.
The Hub score and Authority score for a node are calculated with the following algorithm: Start with each node having a hub score and authority score of 1. Run the authority update rule Run the hub update rule Normalize the values by dividing each Hub score by the square root of the sum of the squares of all Hub scores, and dividing each Authority score by the square root of the sum of the squares of all Authority scores. Repeat from the second step as necessary. === Comparison to PageRank === HITS, like Page and Brin's PageRank, is an iterative algorithm based on the linkage of the documents on the web. However, it has some major differences: It is processed on a small subset of ‘relevant’ documents (a 'focused subgraph' or base set), instead of the set of all documents as was the case with PageRank. It is query-dependent: the same page can receive a different hub/authority score given a different base set, which appears for a different query; It must, as a corollary, be executed at query time, not at indexing time, with the associated drop in performance that accompanies query-time processing. It computes two scores per document (hub and authority) as opposed to a single score; It is not commonly used by search engines (though a similar algorithm was said to be used by Teoma, which was acquired by Ask Jeeves/Ask.com). == In detail == To begin the ranking, we let a u t h ( p ) = 1 {\displaystyle \mathrm {auth} (p)=1} and h u b ( p ) = 1 {\displaystyle \mathrm {hub} (p)=1} for each page p {\displaystyle p} . We consider two types of updates: Authority Update Rule and Hub Update Rule. In order to calculate the hub/authority scores of each node, repeated iterations of the Authority Update Rule and the Hub Update Rule are applied. A k-step application of the Hub-Authority algorithm entails applying the Authority Update Rule followed by the Hub Update Rule k times. === Authority update rule === For each p {\displaystyle p} , we update a u t h ( p ) {\displaystyle \mathrm {auth} (p)} to a u t h ( p ) = ∑ q ∈ P t o h u b ( q ) {\displaystyle \mathrm {auth} (p)=\displaystyle \sum \nolimits _{q\in P_{\mathrm {to} }}\mathrm {hub} (q)} where P t o {\displaystyle P_{\mathrm {to} }} is all pages which link to page p {\displaystyle p} . That is, a page's authority score is the sum of all the hub scores of pages that point to it. === Hub update rule === For each p {\displaystyle p} , we update h u b ( p ) {\displaystyle \mathrm {hub} (p)} to h u b ( p ) = ∑ q ∈ P f r o m a u t h ( q ) {\displaystyle \mathrm {hub} (p)=\displaystyle \sum \nolimits _{q\in P_{\mathrm {from} }}\mathrm {auth} (q)} where P f r o m {\displaystyle P_{\mathrm {from} }} is all pages which page p {\displaystyle p} links to. That is, a page's hub score is the sum of all the authority scores of pages it points to. === Normalization === The final hub-authority scores of nodes are determined after infinite repetitions of the algorithm. As directly and iteratively applying the Hub Update Rule and Authority Update Rule leads to diverging values, it is necessary to normalize the matrix after every iteration. Thus the values obtained from this process will eventually converge.
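The two update rules can also be written compactly in matrix form: with adjacency matrix A, where A[i][j] = 1 if page i links to page j, the authority vector is updated as a = A^T h and the hub vector as h = A a, each followed by normalization. A minimal sketch on an assumed toy graph (the link structure below is invented for illustration):

# Matrix-form HITS on a small assumed link graph.
# adjacency[i][j] = 1 means page i links to page j.
import numpy as np

adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])

hub = np.ones(4)
auth = np.ones(4)
for _ in range(50):                  # k iterations
    auth = adjacency.T @ hub         # authority update rule
    auth /= np.linalg.norm(auth)     # normalize by the Euclidean norm
    hub = adjacency @ auth           # hub update rule
    hub /= np.linalg.norm(hub)

print(np.round(auth, 3), np.round(hub, 3))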
== Pseudocode ==
G := set of pages
for each page p in G do
    p.auth = 1 // p.auth is the authority score of the page p
    p.hub = 1 // p.hub is the hub score of the page p
for step from 1 to k do // run the algorithm for k steps
    norm = 0
    for each page p in G do // update all authority values first
        p.auth = 0
        for each page q in p.incomingNeighbors do // p.incomingNeighbors is the set of pages that link to p
            p.auth += q.hub
        norm += square(p.auth) // calculate the sum of the squared auth values to normalise
    norm = sqrt(norm)
    for each page p in G do // update the auth scores
        p.auth = p.auth / norm // normalise the auth values
    norm = 0
    for each page p in G do // then update all hub values
        p.hub = 0
        for each page r in p.outgoingNeighbors do // p.outgoingNeighbors is the set of pages that p links to
            p.hub += r.auth
        norm += square(p.hub) // calculate the sum of the squared hub values to normalise
    norm = sqrt(norm)
    for each page p in G do // then update all hub values
        p.hub = p.hub / norm // normalise the hub values
The hub and authority values converge in the pseudocode above. The code below does not converge, because it is necessary to limit the number of steps that the algorithm runs for. One way to get around this, however, would be to normalize the hub and authority values after each "step" by dividing each authority value by the square root of the sum of the squares of all authority values, and dividing each hub value by the square root of the sum of the squares of all hub values. This is what the pseudocode above does. == Non-converging pseudocode ==
G := set of pages
for each page p in G do
    p.auth = 1 // p.auth is the authority score of the page p
    p.hub = 1 // p.hub is the hub score of the page p
function HubsAndAuthorities(G)
    for step from 1 to k do // run the algorithm for k steps
        for each page p in G do // update all authority values first
            p.auth = 0
            for each page q in p.incomingNeighbors do // p.incomingNeighbors is the set of pages that link to p
                p.auth += q.hub
        for each page p in G do // then update all hub values
            p.hub = 0
            for each page r in p.outgoingNeighbors do // p.outgoingNeighbors is the set of pages that p links to
                p.hub += r.auth
== See also == PageRank == References == Kleinberg, Jon (1999). "Authoritative sources in a hyperlinked environment" (PDF). Journal of the ACM. 46 (5): 604–632. CiteSeerX 10.1.1.54.8485. doi:10.1145/324133.324140. S2CID 221584113. Li, L.; Shang, Y.; Zhang, W. (2002). "Improvement of HITS-based Algorithms on Web Documents". Proceedings of the 11th International World Wide Web Conference (WWW 2002). Honolulu, HI. ISBN 978-1-880672-20-4. Archived from the original on 2005-04-03. Retrieved 2005-06-03. == External links == U.S. patent 6,112,202 Create a data search engine from a relational database Search engine in C# based on HITS
Wikipedia/HITS_algorithm
Personal knowledge networks (PKN) are methods for organizations to identify, capture, evaluate, retrieve, verify and share information. This method was primarily conceived by researchers to facilitate the sharing of personal, informal knowledge between organizations. Various technologies and behaviors support personal knowledge networking, including wikis and Really Simple Syndication (RSS). Researchers propose that knowledge management (KM) can occur with little explicit governance. This trend is referred to as "grassroots KM" as opposed to traditional, top-down enterprise KM. == Origin == In an organization, individuals often know each other and interact beyond their official duties, leading to knowledge flows and learning. Drawbacks of Traditional Knowledge Management Traditional Knowledge Management focuses more on technology than on social interaction. Organizations should first look at their internal culture, as it significantly affects the social interaction among the members involved. Technical Support from Social Network Social software provides an answer to this problem. It is a means of giving people what they want in terms of their traditional knowledge management activities, in a way that also benefits the firm. == Comparison between KM and PKN == === Structural Aspect === Content-Centric vs User-Centric A content-based process is regarded as a major factor in the incompatibility of traditional Knowledge Management with current needs. In contrast, a user-based process focuses on each individual in a learning process, shifting the driving force of knowledge from an organization's content database to the learners themselves. Furthermore, knowledge can only be evaluated or managed by individuals, emphasizing its unique nature. Centralized vs Distributed In the PKN model, learning takes into account the naturally distributed format of knowledge. In comparison, the centralized approach has been proven to perform well in guiding an organized and structured learning session. However, such well-structured guidance can hardly satisfy the varied and timely requirements of today's users. Top-Down vs Bottom-Up Top-down models and hierarchically controlled structures are the enemies of innovation. In general, learners and knowledge workers love to learn, but they hate not being given the freedom to decide how they learn and work (Cross, 2003). Given this, a better approach is to let such structures develop and emerge naturally in a free-form way, which can be abstracted as a bottom-up structure. Enforcement vs Voluntary Traditional KM mainly adopts a pushing model that passively provides content to users and expects the learning process to happen. This model is not sufficient to improve learners' motivation. Considering the dynamic and flexible nature of the learning process, LM and KM approaches require a shift in emphasis from a knowledge-push to a knowledge-pull model. PKN provides a more attractive platform where users can locate content according to their needs from information repositories. === Application Aspect === Personal knowledge search tools instead of searching on the corporate intranet "Blogging" instead of the enterprise's Web content management == References ==
Wikipedia/Personal_knowledge_networking
Quantum complex networks are complex networks whose nodes are quantum computing devices. Quantum mechanics has been used to create secure quantum communications channels that are protected from hacking. Quantum communications offer the potential for secure enterprise-scale solutions. == Motivation == In theory, it is possible to take advantage of quantum mechanics to create secure communications using features such as quantum key distribution, an application of quantum cryptography that enables secure communications, and quantum teleportation, which can transfer data at a higher rate than classical channels. == History == The first successful quantum teleportation experiments took place in 1998, and prototypical quantum communication networks arrived in 2004. Large scale communication networks tend to have non-trivial topologies and characteristics, such as the small world effect, community structure, or scale-free degree distributions. == Concepts == === Qubits === In quantum information theory, qubits are analogous to bits in classical systems. A qubit is a quantum object that, when measured, can be found to be in one of only two states, and that is used to transmit information. Photon polarization or nuclear spin are examples of binary phenomena that can be used as qubits. === Entanglement === Quantum entanglement is a physical phenomenon characterized by correlation between the quantum states of two or more physically separate qubits. Maximally entangled states are those that maximize the entropy of entanglement. In the context of quantum communication, entangled qubits are used as a quantum channel. === Bell measurement === Bell measurement is a kind of joint quantum-mechanical measurement of two qubits such that, after the measurement, the two qubits are maximally entangled. === Entanglement swapping === Entanglement swapping is a strategy used in the study of quantum networks that allows connections in the network to change. For example, consider 4 qubits, A, B, C and D, such that qubits C and D belong to the same station, while A and C belong to two different stations, and where qubit A is entangled with qubit C and qubit B is entangled with qubit D. Performing a Bell measurement on qubits C and D entangles not only qubits C and D but also qubits A and B, despite the fact that these two qubits never interact directly with each other. Following this process, the entanglement between qubits A and C, and between qubits B and D, is lost. This strategy can be used to define network topology. == Network structure == While models for quantum complex networks are not of identical structure, usually a node represents a set of qubits in the same station (where operations like Bell measurements and entanglement swapping can be applied) and an edge between node i {\displaystyle i} and j {\displaystyle j} means that a qubit in node i {\displaystyle i} is entangled to a qubit in node j {\displaystyle j} , although those two qubits are in different places and so cannot physically interact. Quantum networks where the links are interaction terms instead of entanglement are also of interest. === Notation === Each node in the network contains a set of qubits in different states. To represent the quantum state of these qubits, it is convenient to use Dirac notation and represent the two possible states of each qubit as | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } .
In this notation, two particles are entangled if the joint wave function, | ψ i j ⟩ {\displaystyle |\psi _{ij}\rangle } , cannot be decomposed as | ψ i j ⟩ = | ϕ ⟩ i ⊗ | ϕ ⟩ j , {\displaystyle |\psi _{ij}\rangle =|\phi \rangle _{i}\otimes |\phi \rangle _{j},} where | ϕ ⟩ i {\displaystyle |\phi \rangle _{i}} represents the quantum state of the qubit at node i and | ϕ ⟩ j {\displaystyle |\phi \rangle _{j}} represents the quantum state of the qubit at node j. Another important concept is maximally entangled states. The four states (the Bell states) that maximize the entropy of entanglement between two qubits can be written as follows: | Φ i j + ⟩ = 1 2 ( | 0 ⟩ i ⊗ | 0 ⟩ j + | 1 ⟩ i ⊗ | 1 ⟩ j ) , {\displaystyle |\Phi _{ij}^{+}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{i}\otimes |0\rangle _{j}+|1\rangle _{i}\otimes |1\rangle _{j}),} | Φ i j − ⟩ = 1 2 ( | 0 ⟩ i ⊗ | 0 ⟩ j − | 1 ⟩ i ⊗ | 1 ⟩ j ) , {\displaystyle |\Phi _{ij}^{-}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{i}\otimes |0\rangle _{j}-|1\rangle _{i}\otimes |1\rangle _{j}),} | Ψ i j + ⟩ = 1 2 ( | 0 ⟩ i ⊗ | 1 ⟩ j + | 1 ⟩ i ⊗ | 0 ⟩ j ) , {\displaystyle |\Psi _{ij}^{+}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{i}\otimes |1\rangle _{j}+|1\rangle _{i}\otimes |0\rangle _{j}),} | Ψ i j − ⟩ = 1 2 ( | 0 ⟩ i ⊗ | 1 ⟩ j − | 1 ⟩ i ⊗ | 0 ⟩ j ) . {\displaystyle |\Psi _{ij}^{-}\rangle ={\frac {1}{\sqrt {2}}}(|0\rangle _{i}\otimes |1\rangle _{j}-|1\rangle _{i}\otimes |0\rangle _{j}).} == Models == === Quantum random networks === The quantum random network model proposed by Perseguers et al. (2009) can be thought of as a quantum version of the Erdős–Rényi model. In this model, each node contains N − 1 {\displaystyle N-1} qubits, one for each other node. The degree of entanglement between a pair of nodes, represented by p {\displaystyle p} , plays a similar role to the parameter p {\displaystyle p} in the Erdős–Rényi model in which two nodes form a connection with probability p {\displaystyle p} , whereas in the context of quantum random networks, p {\displaystyle p} refers to the probability of converting an entangled pair of qubits to a maximally entangled state using only local operations and classical communication. Using Dirac notation, a pair of entangled qubits connecting the nodes i {\displaystyle i} and j {\displaystyle j} is represented as | ψ i j ⟩ = 1 − p / 2 | 0 ⟩ i ⊗ | 0 ⟩ j + p / 2 | 1 ⟩ i ⊗ | 1 ⟩ j , {\displaystyle |\psi _{ij}\rangle ={\sqrt {1-p/2}}|0\rangle _{i}\otimes |0\rangle _{j}+{\sqrt {p/2}}|1\rangle _{i}\otimes |1\rangle _{j},} For p = 0 {\displaystyle p=0} , the two qubits are not entangled: | ψ i j ⟩ = | 0 ⟩ i ⊗ | 0 ⟩ j , {\displaystyle |\psi _{ij}\rangle =|0\rangle _{i}\otimes |0\rangle _{j},} and for p = 1 {\displaystyle p=1} , we obtain the maximally entangled state: | ψ i j ⟩ = 1 / 2 ( | 0 ⟩ i ⊗ | 0 ⟩ j + | 1 ⟩ i ⊗ | 1 ⟩ j ) {\displaystyle |\psi _{ij}\rangle ={\sqrt {1/2}}(|0\rangle _{i}\otimes |0\rangle _{j}+|1\rangle _{i}\otimes |1\rangle _{j})} . For intermediate values of p {\displaystyle p} , 0 < p < 1 {\displaystyle 0<p<1} , any entangled state is, with probability p {\displaystyle p} , successfully converted to the maximally entangled state using LOCC operations. One feature that distinguishes this model from its classical analogue is the fact that, in quantum random networks, links are only truly established after they are measured, and it is possible to exploit this fact to shape the final state of the network. For an initial quantum complex network with an infinite number of nodes, Perseguers et al. 
showed that the right measurements and entanglement swapping make it possible to collapse the initial network to a network containing any finite subgraph, provided that p {\displaystyle p} scales with N {\displaystyle N} as p ∼ N Z {\textstyle p\sim N^{Z}} , where Z ≥ − 2 {\displaystyle Z\geq -2} . This result is contrary to classical graph theory, where the type of subgraphs contained in a network is bounded by the value of Z {\displaystyle Z} . === Entanglement percolation === Entanglement percolation models attempt to determine whether a quantum network is capable of establishing a connection between two arbitrary nodes through entanglement, and to find the best strategies to create such connections. A model proposed by Cirac et al. (2007), and applied to complex networks by Cuquet et al. (2009), considers nodes distributed in a lattice or in a complex network, where each pair of neighbors shares two pairs of entangled qubits that can be converted to a maximally entangled qubit pair with probability p {\displaystyle p} . We can think of maximally entangled qubits as the true links between nodes. In classical percolation theory, with a probability p {\displaystyle p} that two nodes are connected, p {\displaystyle p} has a critical value (denoted by p c {\displaystyle p_{c}} ), so that if p > p c {\displaystyle p>p_{c}} a path between two randomly selected nodes exists with a finite probability, and for p < p c {\displaystyle p<p_{c}} the probability of such a path existing is asymptotically zero. p c {\displaystyle p_{c}} depends only on the network topology. A similar phenomenon was found in the model proposed by Cirac et al. (2007), where the probability of forming a maximally entangled state between two randomly selected nodes is zero if p < p c {\displaystyle p<p_{c}} and finite if p > p c {\displaystyle p>p_{c}} . The main difference between classical and entangled percolation is that, in quantum networks, it is possible to change the links in the network, in a way that changes the effective topology of the network. As a result, p c {\displaystyle p_{c}} depends on the strategy used to convert partially entangled qubits to maximally entangled qubits. With a naïve approach, p c {\displaystyle p_{c}} for a quantum network is equal to p c {\displaystyle p_{c}} for a classical network with the same topology. Nevertheless, it was shown that it is possible to take advantage of entanglement swapping to lower p c {\displaystyle p_{c}} both in regular lattices and complex networks. == See also == Erdős–Rényi model Gradient network Network dynamics Network topology Quantum key distribution Quantum teleportation == References == == External links == LOCC operations
Wikipedia/Quantum_complex_network
In mathematics and social science, a collaboration graph is a graph modeling some social network where the vertices represent participants of that network (usually individual people) and where two distinct participants are joined by an edge whenever there is a collaborative relationship between them of a particular kind. Collaboration graphs are used to measure the closeness of collaborative relationships between the participants of the network. == Types considered in the literature == The most well-studied collaboration graphs include: Collaboration graph of mathematicians, also known as the Erdős collaboration graph, where two mathematicians are joined by an edge whenever they co-authored a paper together (with possibly other co-authors present). Collaboration graph of movie actors, also known as the Hollywood graph or co-stardom network, where two movie actors are joined by an edge whenever they appeared in a movie together. Collaboration graphs in other social networks, such as sports, including the "NBA graph" whose vertices are players and where two players are joined by an edge if they have ever played together on the same team. Co-authorship graphs in published articles, where individual nodes may be assigned either at the level of the author, institution, or country. These types of graphs are useful in establishing and evaluating research networks. == Features == By construction, the collaboration graph is a simple graph, since it has no loop-edges and no multiple edges. The collaboration graph need not be connected. Thus each person who never co-authored a joint paper represents an isolated vertex in the collaboration graph of mathematicians. Both the collaboration graphs of mathematicians and of movie actors were shown to have "small world topology": they have a very large number of vertices, most of small degree, that are highly clustered, and a "giant" connected component with small average distances between vertices. == Collaboration distance == The distance between two people/nodes in a collaboration graph is called the collaboration distance. Thus the collaboration distance between two distinct nodes is equal to the smallest number of edges in an edge-path connecting them. If no path connecting two nodes in a collaboration graph exists, the collaboration distance between them is said to be infinite. The collaboration distance may be used, for instance, for evaluating the citations of an author, a group of authors or a journal. In the collaboration graph of mathematicians, the collaboration distance from a particular person to Paul Erdős is called the Erdős number of that person. MathSciNet has a free online tool for computing the collaboration distance between any two mathematicians as well as the Erdős number of a mathematician. This tool also shows the actual chain of co-authors that realizes the collaboration distance. For the Hollywood graph, an analog of the Erdős number, called the Bacon number, has also been considered, which measures the collaboration distance to Kevin Bacon. == Generalizations == Some generalizations of the collaboration graph of mathematicians have also been considered. There is a hypergraph version, where individual mathematicians are vertices and where a group of mathematicians (not necessarily just two) constitutes a hyperedge if there is a paper of which they were all co-authors.
A multigraph version of a collaboration graph has also been considered, where two mathematicians are joined by k {\displaystyle k} edges if they co-authored exactly k {\displaystyle k} papers together. Another variation is a weighted collaboration graph with rational weights, where two mathematicians are joined by an edge with weight 1 k {\displaystyle {\tfrac {1}{k}}} whenever they co-authored exactly k {\displaystyle k} papers together. This model naturally leads to the notion of a "rational Erdős number". == See also == Graph theory – Area of discrete mathematics == References == == External links == Collaboration distance calculator of the American Mathematical Society Collaboration graph of the University of Georgia Mathematics Department Collaboration graph of the University of Oakland Mathematics and Statistics Department
Wikipedia/Collaboration_graph
Exponential family random graph models (ERGMs) are a set of statistical models used to study the structure and patterns within networks, such as those in social, organizational, or scientific contexts. They analyze how connections (edges) form between individuals or entities (nodes) by modeling the likelihood of network features, like clustering or centrality, across diverse examples including knowledge networks, organizational networks, colleague networks, social media networks, networks of scientific collaboration, and more. Part of the exponential family of distributions, ERGMs help researchers understand and predict network behavior in fields ranging from sociology to data science. == Background == Many metrics exist to describe the structural features of an observed network such as the density, centrality, or assortativity. However, these metrics describe the observed network, which is only one instance of a large number of possible alternative networks. This set of alternative networks may have similar or dissimilar structural features. To support statistical inference on the processes influencing the formation of network structure, a statistical model should consider the set of all possible alternative networks weighted on their similarity to an observed network. However, because network data is inherently relational, it violates the assumptions of independence and identical distribution of standard statistical models like linear regression. Alternative statistical models should reflect the uncertainty associated with a given observation, permit inference about the relative frequency of network substructures of theoretical interest, disambiguate the influence of confounding processes, efficiently represent complex structures, and link local-level processes to global-level properties. Degree-preserving randomization, for example, is a specific way in which an observed network could be considered in terms of multiple alternative networks. == Definition == The exponential family is a broad family of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks. Formally, a random graph Y ∈ Y {\displaystyle Y\in {\mathcal {Y}}} consists of a set of n {\displaystyle n} nodes and a collection of tie variables { Y i j : i = 1 , … , n ; j = 1 , … , n } {\displaystyle \{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}} , indexed by pairs of nodes i j {\displaystyle ij} , where Y i j = 1 {\displaystyle Y_{ij}=1} if the nodes ( i , j ) {\displaystyle (i,j)} are connected by an edge and Y i j = 0 {\displaystyle Y_{ij}=0} otherwise. A pair of nodes i j {\displaystyle ij} is called a dyad and a dyad is an edge if Y i j = 1 {\displaystyle Y_{ij}=1} . The basic assumption of these models is that the structure in an observed graph y {\displaystyle y} can be explained by a given vector of sufficient statistics s ( y ) {\displaystyle s(y)} which are a function of the observed network and, in some cases, nodal attributes. This way, it is possible to describe any kind of dependence between the dyadic variables: P ( Y = y | θ ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) , ∀ y ∈ Y {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},\quad \forall y\in {\mathcal {Y}}} where θ {\displaystyle \theta } is a vector of model parameters associated with s ( y ) {\displaystyle s(y)} and c ( θ ) = ∑ y ′ ∈ Y exp ⁡ ( θ T s ( y ′ ) ) {\displaystyle c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))} is a normalising constant.
These models represent a probability distribution on each possible network on n {\displaystyle n} nodes. However, the size of the set of possible networks for an undirected network (simple graph) of size n {\displaystyle n} is 2 n ( n − 1 ) / 2 {\displaystyle 2^{n(n-1)/2}} . Because the number of possible networks in the set vastly outnumbers the number of parameters which can constrain the model, the ideal probability distribution is the one which maximizes the Gibbs entropy. == Example == Let V = { 1 , 2 , 3 } {\displaystyle V=\{1,2,3\}} be a set of three nodes and let Y {\displaystyle {\mathcal {Y}}} be the set of all undirected, loopless graphs on V {\displaystyle V} . Loopless implies that for all i = 1 , 2 , 3 {\displaystyle i=1,2,3} it is Y i i = 0 {\displaystyle Y_{ii}=0} and undirected implies that for all i , j = 1 , 2 , 3 {\displaystyle i,j=1,2,3} it is Y i j = Y j i {\displaystyle Y_{ij}=Y_{ji}} , so that there are three binary tie variables ( Y 12 , Y 13 , Y 23 {\displaystyle Y_{12},Y_{13},Y_{23}} ) and 2 3 = 8 {\displaystyle 2^{3}=8} different graphs in this example. Define a two-dimensional vector of statistics by s ( y ) = [ s 1 ( y ) , s 2 ( y ) ] T {\displaystyle s(y)=[s_{1}(y),s_{2}(y)]^{T}} , where s 1 ( y ) = e d g e s ( y ) {\displaystyle s_{1}(y)=edges(y)} is defined to be the number of edges in the graph y {\displaystyle y} and s 2 ( y ) = t r i a n g l e s ( y ) {\displaystyle s_{2}(y)=triangles(y)} is defined to be the number of closed triangles in y {\displaystyle y} . Finally, let the parameter vector be defined by θ = ( θ 1 , θ 2 ) T = ( − ln ⁡ 2 , ln ⁡ 3 ) T {\displaystyle \theta =(\theta _{1},\theta _{2})^{T}=(-\ln 2,\ln 3)^{T}} , so that the probability of every graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} in this example is given by: P ( Y = y | θ ) = exp ⁡ ( − ln ⁡ 2 ⋅ e d g e s ( y ) + ln ⁡ 3 ⋅ t r i a n g l e s ( y ) ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))}{c(\theta )}}} We note that in this example, there are just four graph isomorphism classes: the graph with zero edges, three graphs with exactly one edge, three graphs with exactly two edges, and the graph with three edges. Since isomorphic graphs have the same number of edges and the same number of triangles, they also have the same probability in this example ERGM. For a representative y {\displaystyle y} of each isomorphism class, we first compute the term x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ e d g e s ( y ) + ln ⁡ 3 ⋅ t r i a n g l e s ( y ) ) {\displaystyle x(y)=\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))} , which is proportional to the probability of y {\displaystyle y} (up to the normalizing constant c ( θ ) {\displaystyle c(\theta )} ). If y {\displaystyle y} is the graph with zero edges, then it is e d g e s ( y ) = 0 {\displaystyle edges(y)=0} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 0 + ln ⁡ 3 ⋅ 0 ) = exp ⁡ ( 0 ) = 1. {\displaystyle x(y)=\exp(-\ln 2\cdot 0+\ln 3\cdot 0)=\exp(0)=1.} If y {\displaystyle y} is a graph with exactly one edge, then it is e d g e s ( y ) = 1 {\displaystyle edges(y)=1} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ⁡ ( − ln ⁡ 2 ⋅ 1 + ln ⁡ 3 ⋅ 0 ) = exp ⁡ ( 0 ) exp ⁡ ( ln ⁡ 2 ) = 1 2 . 
{\displaystyle x(y)=\exp(-\ln 2\cdot 1+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)}}={\frac {1}{2}}.} If y {\displaystyle y} is a graph with exactly two edges, then it is e d g e s ( y ) = 2 {\displaystyle edges(y)=2} and t r i a n g l e s ( y ) = 0 {\displaystyle triangles(y)=0} , so that x ( y ) = exp ( − ln 2 ⋅ 2 + ln 3 ⋅ 0 ) = exp ( 0 ) exp ( ln 2 ) 2 = 1 4 . {\displaystyle x(y)=\exp(-\ln 2\cdot 2+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)^{2}}}={\frac {1}{4}}.} If y {\displaystyle y} is the graph with exactly three edges, then it is e d g e s ( y ) = 3 {\displaystyle edges(y)=3} and t r i a n g l e s ( y ) = 1 {\displaystyle triangles(y)=1} , so that x ( y ) = exp ( − ln 2 ⋅ 3 + ln 3 ⋅ 1 ) = exp ( ln 3 ) exp ( ln 2 ) 3 = 3 8 . {\displaystyle x(y)=\exp(-\ln 2\cdot 3+\ln 3\cdot 1)={\frac {\exp(\ln 3)}{\exp(\ln 2)^{3}}}={\frac {3}{8}}.} The normalizing constant is computed by summing x ( y ) {\displaystyle x(y)} over all eight different graphs y ∈ Y {\displaystyle y\in {\mathcal {Y}}} . This yields: c ( θ ) = 1 + 3 ⋅ 1 2 + 3 ⋅ 1 4 + 3 8 = 29 8 . {\displaystyle c(\theta )=1+3\cdot {\frac {1}{2}}+3\cdot {\frac {1}{4}}+{\frac {3}{8}}={\frac {29}{8}}.} Finally, the probability of every graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} is given by P ( Y = y | θ ) = x ( y ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {x(y)}{c(\theta )}}} . Explicitly, we get that the graph with zero edges has probability 8 29 {\displaystyle {\frac {8}{29}}} , every graph with exactly one edge has probability 4 29 {\displaystyle {\frac {4}{29}}} , every graph with exactly two edges has probability 2 29 {\displaystyle {\frac {2}{29}}} , and the graph with exactly three edges has probability 3 29 {\displaystyle {\frac {3}{29}}} in this example. Intuitively, the structure of graph probabilities in this ERGM example is consistent with typical patterns of social or other networks. The negative parameter ( θ 1 = − ln 2 {\displaystyle \theta _{1}=-\ln 2} ) associated with the number of edges implies that - all other things being equal - networks with fewer edges have a higher probability than networks with more edges. This is consistent with the sparsity that is often found in empirical networks, namely that the empirical number of edges typically grows at a slower rate than the maximally possible number of edges. The positive parameter ( θ 2 = ln 3 {\displaystyle \theta _{2}=\ln 3} ) associated with the number of closed triangles implies that - all other things being equal - networks with more triangles have a higher probability than networks with fewer triangles. This is consistent with a tendency for triadic closure that is often found in certain types of social networks. Compare these patterns with the graph probabilities computed above. The addition of every edge divides the probability by two. However, when going from a graph with two edges to the graph with three edges, the number of triangles increases by one - which additionally multiplies the probability by three. We note that the explicit calculation of all graph probabilities is only possible since there are so few different graphs in this example. Since the number of different graphs scales exponentially in the number of tie variables - which in turn scales quadratically in the number of nodes - computing the normalizing constant is in general computationally intractable, already for a moderate number of nodes.
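Because this example has only eight graphs, the probabilities above can be checked by brute-force enumeration; the sketch below is simply a transcription of the calculation just given and reproduces 8/29, 4/29, 2/29 and 3/29:

# Brute-force check of the three-node ERGM example: enumerate all 8
# loopless undirected graphs on three nodes, compute exp(theta . s(y))
# and normalize. Uses theta = (-ln 2, ln 3) with edge and triangle counts.
from itertools import product
from math import exp, log

theta_edges, theta_triangles = -log(2), log(3)

weights = {}
for y12, y13, y23 in product((0, 1), repeat=3):
    edges = y12 + y13 + y23
    triangles = 1 if edges == 3 else 0      # only the complete graph has a triangle
    weights[(y12, y13, y23)] = exp(theta_edges * edges + theta_triangles * triangles)

c = sum(weights.values())                    # normalizing constant, equals 29/8
for graph, w in sorted(weights.items()):
    print(graph, w / c)                      # 8/29, 4/29, 2/29 or 3/29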
== Sampling from an ERGM == Exact sampling from a given ERGM is computationally intractable in general since computing the normalizing constant requires summation over all y ∈ Y {\displaystyle y\in {\mathcal {Y}}} . Efficient approximate sampling from an ERGM can be done via Markov chains and is applied in current methods to approximate expected values and to estimate ERGM parameters. Informally, given an ERGM on a set of graphs Y {\displaystyle {\mathcal {Y}}} with probability mass function P ( Y = y | θ ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}}} , one selects an initial graph y ( 0 ) ∈ Y {\displaystyle y^{(0)}\in {\mathcal {Y}}} (which might be arbitrarily, or randomly, chosen or might represent an observed network) and implicitly defines transition probabilities (or jump probabilities) π ( y , y ′ ) = P ( Y ( t + 1 ) = y ′ | Y ( t ) = y ) {\displaystyle \pi (y,y')=P(Y^{(t+1)}=y'|Y^{(t)}=y)} , which are the conditional probabilities that the Markov chain is on graph y ′ {\displaystyle y'} after Step t + 1 {\displaystyle t+1} , given that it is on graph y {\displaystyle y} after Step t {\displaystyle t} . The transition probabilities do not depend on the graphs in earlier steps ( y ( 0 ) , … , y ( t − 1 ) {\displaystyle y^{(0)},\dots ,y^{(t-1)}} ), which is a defining property of Markov chains, and they do not depend on t {\displaystyle t} , that is, the Markov chain is time-homogeneous. The goal is to define the transition probabilities such that for all y ∈ Y {\displaystyle y\in {\mathcal {Y}}} it is lim t → ∞ P ( Y ( t ) = y ) = exp ⁡ ( θ T s ( y ) ) c ( θ ) , {\displaystyle \lim _{t\to \infty }P(Y^{(t)}=y)={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},} independent of the initial graph y ( 0 ) {\displaystyle y^{(0)}} . If this is achieved, one can run the Markov chain for a large number of steps and then return the current graph as a random sample from the given ERGM. The probability of returning a graph y ∈ Y {\displaystyle y\in {\mathcal {Y}}} after a finite but large number of update steps is approximately the probability defined by the ERGM. Current methods for sampling from ERGMs with Markov chains usually define an update step by two sub-steps: first, randomly selecting a candidate y ′ {\displaystyle y'} in a neighborhood of the current graph y {\displaystyle y} and, second, accepting y ′ {\displaystyle y'} with a probability that depends on the probability ratio of the current graph y {\displaystyle y} and the candidate y ′ {\displaystyle y'} . (If the candidate is not accepted, the Markov chain remains on the current graph y {\displaystyle y} .) If the set of graphs Y {\displaystyle {\mathcal {Y}}} is unconstrained (i.e., contains any combination of values on the binary tie variables), a simple method for candidate selection is to choose one tie variable y i j {\displaystyle y_{ij}} uniformly at random and to define the candidate by flipping this single variable (i.e., to set y i j ′ = 1 − y i j {\displaystyle y'_{ij}=1-y_{ij}} ; all other variables take the same value as in y {\displaystyle y} ). A common way to define the acceptance probability is to accept y ′ {\displaystyle y'} with the conditional probability P ( Y = y ′ | Y = y ′ ∨ Y = y ) = P ( Y = y ′ ) P ( Y = y ′ ) + P ( Y = y ) , {\displaystyle P(Y=y'|Y=y'\vee Y=y)={\frac {P(Y=y')}{P(Y=y')+P(Y=y)}},} where the graph probabilities are defined by the ERGM.
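A minimal sketch of such a tie-flip sampler, reusing the edge and triangle statistics of the three-node example above (larger node sets work the same way; only the unnormalized weights exp(theta^T s(y)) are needed in the acceptance rule):

# Sketch of Markov chain sampling from an ERGM with edge and triangle
# statistics. Candidate graphs flip one uniformly chosen tie variable;
# acceptance uses the conditional probability P(y') / (P(y') + P(y)).
import random
from itertools import combinations
from math import exp, log

n = 3
theta = (-log(2), log(3))                      # (edges, triangles), as in the example
pairs = list(combinations(range(n), 2))

def stats(y):
    edges = sum(y[p] for p in pairs)
    triangles = sum(y[(i, j)] * y[(i, k)] * y[(j, k)]
                    for i, j, k in combinations(range(n), 3))
    return edges, triangles

def weight(y):                                  # exp(theta . s(y)), unnormalized
    e, t = stats(y)
    return exp(theta[0] * e + theta[1] * t)

y = {p: 0 for p in pairs}                       # initial graph: no edges
counts = {}
for step in range(100000):
    ij = random.choice(pairs)                   # choose one tie variable
    candidate = dict(y)
    candidate[ij] = 1 - y[ij]                   # flip it
    w_old, w_new = weight(y), weight(candidate)
    if random.random() < w_new / (w_new + w_old):
        y = candidate
    key = tuple(y[p] for p in pairs)
    counts[key] = counts.get(key, 0) + 1

# Relative frequencies should approach 8/29, 4/29, 2/29 and 3/29.
for key in sorted(counts):
    print(key, counts[key] / 100000)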
Crucially, the normalizing constant c ( θ ) {\displaystyle c(\theta )} cancels out in this fraction, so that the acceptance probabilities can be computed efficiently. == See also == Autologistic actor attribute models == References == == Further reading == Byshkin, M.; Stivala, A.; Mira, A.; Robins, G.; Lomi, A. (2018). "Fast Maximum Likelihood Estimation via Equilibrium Expectation for Large Network Data". Scientific Reports. 8 (1): 11509. arXiv:1802.10311. Bibcode:2018NatSR...811509B. doi:10.1038/s41598-018-29725-8. PMC 6068132. PMID 30065311. Caimo, A.; Friel, N (2011). "Bayesian inference for exponential random graph models". Social Networks. 33: 41–55. arXiv:1007.5192. doi:10.1016/j.socnet.2010.09.004. Erdős, P.; Rényi, A (1959). "On random graphs". Publicationes Mathematicae. 6: 290–297. Fienberg, S. E.; Wasserman, S. (1981). "Discussion of An Exponential Family of Probability Distributions for Directed Graphs by Holland and Leinhardt". Journal of the American Statistical Association. 76 (373): 54–57. doi:10.1080/01621459.1981.10477600. Frank, O.; Strauss, D (1986). "Markov Graphs". Journal of the American Statistical Association. 81 (395): 832–842. doi:10.2307/2289017. JSTOR 2289017. Handcock, M. S.; Hunter, D. R.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data". Journal of Statistical Software. 24 (1): 1–11. doi:10.18637/jss.v024.i01. PMC 2447931. PMID 18618019. Harris, Jenine K (2014). An introduction to exponential random graph modeling. ISBN 9781452220802. OCLC 870698788. Hunter, D. R.; Goodreau, S. M.; Handcock, M. S. (2008). "Goodness of Fit of Social Network Models". Journal of the American Statistical Association. 103 (481): 248–258. CiteSeerX 10.1.1.206.396. doi:10.1198/016214507000000446. Hunter, D. R; Handcock, M. S. (2006). "Inference in curved exponential family models for networks". Journal of Computational and Graphical Statistics. 15 (3): 565–583. CiteSeerX 10.1.1.205.9670. doi:10.1198/106186006X133069. Hunter, D. R.; Handcock, M. S.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks". Journal of Statistical Software. 24 (3): 1–29. doi:10.18637/jss.v024.i03. PMC 2743438. Jin, I.H.; Liang, F. (2012). "Fitting social networks models using varying truncation stochastic approximation MCMC algorithm". Journal of Computational and Graphical Statistics. 22 (4): 927–952. doi:10.1080/10618600.2012.680851. Koskinen, J. H.; Robins, G. L.; Pattison, P. E. (2010). "Analysing exponential random graph (p-star) models with missing data using Bayesian data augmentation". Statistical Methodology. 7 (3): 366–384. doi:10.1016/j.stamet.2009.09.007. Morris, M.; Handcock, M. S.; Hunter, D. R. (2008). "Specification of Exponential-Family Random Graph Models: Terms and Computational Aspects". Journal of Statistical Software. 24 (4): 1548–7660. doi:10.18637/jss.v024.i04. PMC 2481518. PMID 18650964. Rinaldo, A.; Fienberg, S. E.; Zhou, Y. (2009). "On the geometry of descrete exponential random families with application to exponential random graph models". Electronic Journal of Statistics. 3: 446–484. arXiv:0901.0026. doi:10.1214/08-EJS350. Robins, G.; Snijders, T.; Wang, P.; Handcock, M.; Pattison, P (2007). "Recent developments in exponential random graph (p*) models for social networks" (PDF). Social Networks. 29 (2): 192–215. doi:10.1016/j.socnet.2006.08.003. 
hdl:11370/abee7276-394e-4051-a180-7b2ff57d42f5. Schweinberger, Michael (2011). "Instability, sensitivity, and degeneracy of discrete exponential families". Journal of the American Statistical Association. 106 (496): 1361–1370. doi:10.1198/jasa.2011.tm10747. PMC 3405854. PMID 22844170. Schweinberger, Michael; Handcock, Mark (2015). "Local dependence in random graph models: characterization, properties and statistical inference". Journal of the Royal Statistical Society, Series B. 77 (3): 647–676. doi:10.1111/rssb.12081. PMC 4637985. PMID 26560142. Schweinberger, Michael; Stewart, Jonathan (2020). "Concentration and consistency results for canonical and curved exponential-family models of random graphs". The Annals of Statistics. 48 (1): 374–396. arXiv:1702.01812. doi:10.1214/19-AOS1810. Snijders, T. A. B. (2002). "Markov chain Monte Carlo estimation of exponential random graph models" (PDF). Journal of Social Structure. 3. Snijders, T. A. B.; Pattison, P. E.; Robins, G. L.; Handcock, M. S. (2006). "New specifications for exponential random graph models". Sociological Methodology. 36: 99–153. CiteSeerX 10.1.1.62.7975. doi:10.1111/j.1467-9531.2006.00176.x. Strauss, D; Ikeda, M (1990). "Pseudolikelihood estimation for social networks". Journal of the American Statistical Association. 5 (409): 204–212. doi:10.2307/2289546. JSTOR 2289546. van Duijn, M. A.; Snijders, T. A. B.; Zijlstra, B. H. (2004). "p2: a random effects model with covariates for directed graphs". Statistica Neerlandica. 58 (2): 234–254. doi:10.1046/j.0039-0402.2003.00258.x. van Duijn, M. A. J.; Gile, K. J.; Handcock, M. S. (2009). "A framework for the comparison of maximum pseudo-likelihood and maximum likelihood estimation of exponential family random graph models". Social Networks. 31 (1): 52–62. doi:10.1016/j.socnet.2008.10.003. PMC 3500576. PMID 23170041.
Wikipedia/Exponential_family_random_graph_models
The network probability matrix describes the probability structure of a network based on the historical presence or absence of edges in the network. For example, individuals in a social network are not connected to other individuals with uniform random probability. The probability structure is much more complex. Intuitively, there are some people with whom a person will communicate or be connected more closely than with others. For this reason, real-world networks tend to have clusters or cliques of nodes that are more closely related than others (Albert and Barabási, 2002; Carley [year]; Newman 2003). This can be simulated by varying the probabilities that certain nodes will communicate; a minimal simulation sketch is given at the end of this entry, after the external links. The network probability matrix was originally proposed by Ian McCulloh. == References == McCulloh, I., Lospinoso, J. & Carley, K.M. (2007). Probability Mechanics in Communications Networks. In Proceedings of the 12th International Conference on Applied Mathematics of the World Science Engineering Academy and Society, Cairo, Egypt. 30–31 December 2007. "Understanding Network Science," (Archived article) https://wayback-beta.archive.org/web/20080830045705/http://zangani.com/blog/2007-1030-networkingscience Linked: The New Science of Networks, A.-L. Barabási (Perseus Publishing, Cambridge, 2002). Network Science, The National Academies Press (2005). ISBN 0-309-10026-7 == External links == Center for Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University U.S. Military Academy Network Science Center The Center for Interdisciplinary Research on Complex Systems at Northeastern University
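As the concrete illustration referenced above (an illustrative sketch only; the probability values and the function name are made up, not taken from the cited sources), a network probability matrix can be read as one Bernoulli edge probability per pair of nodes, so a simulated network is drawn by flipping one biased coin per dyad:

```python
import numpy as np

# Hypothetical edge-probability matrix for 4 actors: entry P[i, j] gives the
# probability that an edge between i and j is present, estimated e.g. from the
# historical frequency of that edge.  Values are made up for illustration.
P = np.array([
    [0.0, 0.9, 0.8, 0.1],
    [0.9, 0.0, 0.7, 0.1],
    [0.8, 0.7, 0.0, 0.2],
    [0.1, 0.1, 0.2, 0.0],
])

def sample_network(P, rng=None):
    """Draw one undirected network: each dyad (i, j) is an independent Bernoulli(P[i, j])."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.random() < P[i, j]
    return A

A = sample_network(P, rng=0)
```

With these made-up values, nodes 0–2 form the kind of closely connected cluster described above, while node 3 is only loosely attached.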
Wikipedia/Network_probability_matrix
Arabidopsis thaliana, the thale cress, mouse-ear cress or arabidopsis, is a small plant from the mustard family (Brassicaceae), native to Eurasia and Africa. Commonly found along the shoulders of roads and in disturbed land, it is generally considered a weed. A winter annual with a relatively short lifecycle, A. thaliana is a popular model organism in plant biology and genetics. For a complex multicellular eukaryote, A. thaliana has a relatively small genome of around 135 megabase pairs. It was the first plant to have its genome sequenced, and is an important tool for understanding the molecular biology of many plant traits, including flower development and light sensing. == Description == Arabidopsis thaliana is an annual (rarely biennial) plant, usually growing to 20–25 cm tall. The leaves form a rosette at the base of the plant, with a few leaves also on the flowering stem. The basal leaves are green to slightly purplish in color, 1.5–5 cm long, and 2–10 mm broad, with an entire to coarsely serrated margin; the stem leaves are smaller and unstalked, usually with an entire margin. Leaves are covered with small, unicellular hairs called trichomes. The flowers are 3 mm in diameter, arranged in a corymb; their structure is that of the typical Brassicaceae. The fruit is a silique 5–20 mm long, containing 20–30 seeds. Roots are simple in structure, with a single primary root that grows vertically downward, later producing smaller lateral roots. These roots form interactions with rhizosphere bacteria such as Bacillus megaterium. A. thaliana can complete its entire lifecycle in six weeks. The central stem that produces flowers grows after about 3 weeks, and the flowers naturally self-pollinate. In the lab, A. thaliana may be grown in Petri plates, pots, or hydroponics, under fluorescent lights or in a greenhouse. == Taxonomy == The plant was first described in 1577 in the Harz Mountains by Johannes Thal (1542–1583), a physician from Nordhausen, Thüringen, Germany, who called it Pilosella siliquosa. In 1753, Carl Linnaeus renamed the plant Arabis thaliana in honor of Thal. In 1842, German botanist Gustav Heynhold erected the new genus Arabidopsis and placed the plant in that genus. The generic name, Arabidopsis, comes from Greek, meaning "resembling Arabis" (the genus in which Linnaeus had initially placed it). Thousands of natural inbred accessions of A. thaliana have been collected from throughout its natural and introduced range. These accessions exhibit considerable genetic and phenotypic variation, which can be used to study the adaptation of this species to different environments. == Distribution and habitat == A. thaliana is native to Europe, Asia, and Africa, and its geographic distribution is rather continuous from the Mediterranean to Scandinavia and Spain to Greece. It also appears to be native in tropical alpine ecosystems in Africa and perhaps South Africa. It has been introduced and naturalized worldwide, including in North America around the 17th century. A. thaliana readily grows and often pioneers rocky, sandy, and calcareous soils. It is generally considered a weed, due to its widespread distribution in agricultural fields, roadsides, railway lines, waste ground, and other disturbed habitats, but due to its limited competitive ability and small size, it is not categorized as a noxious weed. Like most Brassicaceae species, A. thaliana is edible by humans in a salad or cooked, but it does not enjoy widespread use as a spring vegetable. 
== Use as a model organism == Botanists and biologists began to research A. thaliana in the early 1900s, and the first systematic description of mutants was done around 1945. A. thaliana is now widely used for studying plant sciences, including genetics, evolution, population genetics, and plant development. Although A. thaliana the plant has little direct significance for agriculture, A. thaliana the model organism has revolutionized our understanding of the genetic, cellular, and molecular biology of flowering plants. The first mutant in A. thaliana was documented in 1873 by Alexander Braun, describing a double flower phenotype (the mutated gene was likely Agamous, cloned and characterized in 1990). Friedrich Laibach (who had published the chromosome number in 1907) did not propose A. thaliana as a model organism, though, until 1943. His student, Erna Reinholz, published her thesis on A. thaliana in 1945, describing the first collection of A. thaliana mutants that they generated using X-ray mutagenesis. Laibach continued his important contributions to A. thaliana research by collecting a large number of accessions (often questionably referred to as "ecotypes"). With the help of Albert Kranz, these were organised into a large collection of 750 natural accessions of A. thaliana from around the world. In the 1950s and 1960s, John Langridge and George Rédei played an important role in establishing A. thaliana as a useful organism for biological laboratory experiments. Rédei wrote several scholarly reviews instrumental in introducing the model to the scientific community. The start of the A. thaliana research community dates to a newsletter called Arabidopsis Information Service, established in 1964. The first International Arabidopsis Conference was held in 1965, in Göttingen, Germany. In the 1980s, A. thaliana started to become widely used in plant research laboratories around the world. It was one of several candidates that included maize, petunia, and tobacco. The latter two were attractive, since they were easily transformable with the then-current technologies, while maize was a well-established genetic model for plant biology. The breakthrough year for A. thaliana as a model plant was 1986, in which T-DNA-mediated transformation and the first cloned A. thaliana gene were described. === Genomics === ==== Nuclear genome ==== Due to the small size of its genome, and because it is diploid, Arabidopsis thaliana is useful for genetic mapping and sequencing — with about 157 megabase pairs and five chromosomes, A. thaliana has one of the smallest genomes among plants. It was long thought to have the smallest genome of all flowering plants, but that title is now considered to belong to plants in the genus Genlisea, order Lamiales, with Genlisea tuberosa, a carnivorous plant, showing a genome size of approximately 61 Mbp. It was the first plant genome to be sequenced, completed in 2000 by the Arabidopsis Genome Initiative. The most up-to-date version of the A. thaliana genome is maintained by the Arabidopsis Information Resource. The genome encodes ~27,600 protein-coding genes and about 6,500 non-coding genes. However, the Uniprot database lists 39,342 proteins in their Arabidopsis reference proteome. Among the 27,600 protein-coding genes 25,402 (91.8%) are now annotated with "meaningful" product names, although a large fraction of these proteins is likely only poorly understood and only known in general terms (e.g. as "DNA-binding protein without known specificity"). 
Uniprot lists more than 3,000 proteins as "uncharacterized" as part of the reference proteome. ==== Chloroplast genome ==== The plastome of A. thaliana is a 154,478 base-pair-long DNA molecule, a size typically encountered in most flowering plants (see the list of sequenced plastomes). It comprises 136 genes coding for small subunit ribosomal proteins (rps), large subunit ribosomal proteins (rpl), hypothetical chloroplast open reading frame proteins (ycf), proteins involved in photosynthetic reactions or in other functions, ribosomal RNAs (rrn), and transfer RNAs (trn). ==== Mitochondrial genome ==== The mitochondrial genome of A. thaliana is 367,808 base pairs long and contains 57 genes. There are many repeated regions in the Arabidopsis mitochondrial genome. The largest repeats recombine regularly and isomerize the genome. Like most plant mitochondrial genomes, the Arabidopsis mitochondrial genome exists as a complex arrangement of overlapping branched and linear molecules in vivo. === Genetics === Genetic transformation of A. thaliana is routine, using Agrobacterium tumefaciens to transfer DNA into the plant genome. The current protocol, termed "floral dip", involves simply dipping flowers into a solution containing Agrobacterium carrying a plasmid of interest and a detergent. This method avoids the need for tissue culture or plant regeneration. The A. thaliana gene knockout collections are a unique resource for plant biology made possible by the availability of high-throughput transformation and funding for genomics resources. The site of T-DNA insertions has been determined for over 300,000 independent transgenic lines, with the information and seeds accessible through online T-DNA databases. Through these collections, insertional mutants are available for most genes in A. thaliana. Characterized accessions and mutant lines of A. thaliana serve as experimental material in laboratory studies. The most commonly used background lines are Ler (Landsberg erecta) and Col, or Columbia. Other background lines less often cited in the scientific literature are Ws (Wassilewskija), C24, Cvi (Cape Verde Islands), Nossen, and others. Sets of closely related accessions named Col-0, Col-1, etc., have been obtained and characterized; in general, mutant lines are available through stock centers, of which the best-known are the Nottingham Arabidopsis Stock Centre (NASC) and the Arabidopsis Biological Resource Center (ABRC) in Ohio, USA. The Col-0 accession was selected by Rédei from within a (nonirradiated) population of seeds designated 'Landsberg' which he received from Laibach. Columbia (named for the location of Rédei's former institution, University of Missouri-Columbia) was the reference accession sequenced in the Arabidopsis Genome Initiative. The Ler (Landsberg erecta) line was selected by Rédei (because of its short stature) from a Landsberg population he had mutagenized with X-rays. As the Ler collection of mutants is derived from this initial line, Ler-0 does not correspond to the Landsberg accessions, which are designated La-0, La-1, etc. Trichome formation is initiated by the GLABROUS1 protein. Knockouts of the corresponding gene lead to glabrous plants. This phenotype has already been used in gene editing experiments and might be of interest as a visual marker for plant research to improve gene editing methods such as CRISPR/Cas9.
==== Non-Mendelian inheritance controversy ==== In 2005, scientists at Purdue University proposed that A. thaliana possessed an alternative to previously known mechanisms of DNA repair, producing an unusual pattern of inheritance, but the phenomenon observed (reversion of mutant copies of the HOTHEAD gene to a wild-type state) was later suggested to be an artifact because the mutants show increased outcrossing due to organ fusion. === Lifecycle === The plant's small size and rapid lifecycle are also advantageous for research. Having specialized as a spring ephemeral, it has been used to found several laboratory strains that take about 6 weeks from germination to mature seed. The small size of the plant is convenient for cultivation in a small space, and it produces many seeds. Further, the selfing nature of this plant assists genetic experiments. Also, as an individual plant can produce several thousand seeds, each of the above criteria leads to A. thaliana being valued as a genetic model organism. === Cellular biology === Arabidopsis is often the model for study of SNAREs in plants. This has shown SNAREs to be heavily involved in vesicle trafficking. Zheng et al. 1999 found an Arabidopsis SNARE called AtVTI1a is probably essential to Golgi-vacuole trafficking. This is still a wide open field and plant SNAREs' role in trafficking remains understudied. === DNA repair === The DNA of plants is vulnerable to ultraviolet light, and DNA repair mechanisms have evolved to avoid or repair genome damage caused by UV. Kaiser et al. showed that in A. thaliana cyclobutane pyrimidine dimers (CPDs) induced by UV light can be repaired by expression of CPD photolyase. === Germination in lunar regolith === On May 12, 2022, NASA announced that specimens of Arabidopsis thaliana had been successfully germinated and grown in samples of lunar regolith. While the plants successfully germinated and grew into seedlings, they were not as robust as specimens that had been grown in volcanic ash as a control group, although the experiments also found some variation in the plants grown in regolith based on the location the samples were taken from, as A. thaliana grown in regolith gathered during Apollo 12 & Apollo 17 were more robust than those grown in samples taken during Apollo 11. == Development == === Flower development === A. thaliana has been extensively studied as a model for flower development. The developing flower has four basic organs - sepals, petals, stamens, and carpels (which go on to form pistils). These organs are arranged in a series of whorls, four sepals on the outer whorl, followed by four petals inside this, six stamens, and a central carpel region. Homeotic mutations in A. thaliana result in the change of one organ to another—in the case of the agamous mutation, for example, stamens become petals and carpels are replaced with a new flower, resulting in a recursively repeated sepal-petal-petal pattern. Observations of homeotic mutations led to the formulation of the ABC model of flower development by E. Coen and E. Meyerowitz. According to this model, floral organ identity genes are divided into three classes - class A genes (which affect sepals and petals), class B genes (which affect petals and stamens), and class C genes (which affect stamens and carpels). These genes code for transcription factors that combine to cause tissue specification in their respective regions during development. Although developed through study of A. 
thaliana flowers, this model is generally applicable to other flowering plants. === Leaf development === Studies of A. thaliana have provided considerable insights with regard to the genetics of leaf morphogenesis, particularly in dicotyledon-type plants. Much of the understanding has come from analyzing mutants in leaf development, some of which were identified in the 1960s, but were not analysed with genetic and molecular techniques until the mid-1990s. A. thaliana leaves are well suited to studies of leaf development because they are relatively simple and stable. Using A. thaliana, the genetics behind leaf shape development have become clearer and have been broken down into three stages: the initiation of the leaf primordium, the establishment of dorsiventrality, and the development of a marginal meristem. Leaf primordia are initiated by the suppression of the genes and proteins of the class I KNOX family (such as SHOOT APICAL MERISTEMLESS). These class I KNOX proteins directly suppress gibberellin biosynthesis in the leaf primordium. Many genetic factors were found to be involved in the suppression of these class I KNOX genes in leaf primordia (such as ASYMMETRIC LEAVES1, BLADE-ON-PETIOLE1, SAWTOOTH1, etc.). Thus, with this suppression, the levels of gibberellin increase and the leaf primordium initiates growth. The establishment of leaf dorsiventrality is important since the dorsal (adaxial) surface of the leaf is different from the ventral (abaxial) surface. === Microscopy === A. thaliana is well suited for light microscopy analysis. Young seedlings on the whole, and their roots in particular, are relatively translucent. This, together with their small size, facilitates live cell imaging using both fluorescence and confocal laser scanning microscopy. By wet-mounting seedlings in water or in culture media, plants may be imaged uninvasively, obviating the need for fixation and sectioning and allowing time-lapse measurements. Fluorescent protein constructs can be introduced through transformation. The developmental stage of each cell can be inferred from its location in the plant or by using fluorescent protein markers, allowing detailed developmental analysis. == Physiology == === Light sensing, light emission, and circadian biology === The photoreceptors phytochromes A, B, C, D, and E mediate red light-based phototropic response. Understanding the function of these receptors has helped plant biologists understand the signaling cascades that regulate photoperiodism, germination, de-etiolation, and shade avoidance in plants. The genes FCA, fy, fpa, LUMINIDEPENDENS (ld), fly, fve and FLOWERING LOCUS C (FLC) are involved in photoperiod triggering of flowering and vernalization. Specifically, Lee et al. 1994 find that ld produces a homeodomain and Blazquez et al. 2001 that fve produces a WD40 repeat. The UVR8 protein detects UV-B light and mediates the response to this DNA-damaging wavelength. A. thaliana was used extensively in the study of the genetic basis of phototropism, chloroplast alignment, stomatal aperture, and other blue light-influenced processes. These traits respond to blue light, which is perceived by the phototropin light receptors. Arabidopsis has also been important in understanding the functions of another blue light receptor, cryptochrome, which is especially important for light entrainment to control the plants' circadian rhythms. When the onset of darkness is unusually early, A. thaliana reduces its rate of starch metabolism by an amount that effectively requires dividing the remaining starch reserves by the expected length of the night.
Light responses were even found in roots, previously thought to be largely insensitive to light. While the gravitropic response of A. thaliana root organs is their predominant tropic response, specimens treated with mutagens and selected for the absence of gravitropic action showed negative phototropic response to blue or white light, and positive response to red light, indicating that the roots also show positive phototropism. In 2000, Dr. Janet Braam of Rice University genetically engineered A. thaliana to glow in the dark when touched. The effect was visible to ultrasensitive cameras. Multiple efforts, including the Glowing Plant project, have sought to use A. thaliana to increase plant luminescence intensity towards commercially viable levels. === Thigmomorphogenesis (Touch response) === In 1990, Janet Braam and Ronald W. Davis determined that A. thaliana exhibits thigmomorphogenesis in response to wind, rain and touch. Four or more touch induced genes in A. thaliana were found to be regulated by such stimuli. In 2002, Massimo Pigliucci found that A. thaliana developed different patterns of branching in response to sustained exposure to wind, a display of phenotypic plasticity. === On the Moon === On January 2, 2019, China's Chang'e-4 lander brought A. thaliana to the moon. A small microcosm 'tin' in the lander contained A. thaliana, seeds of potatoes, and silkworm eggs. As plants would support the silkworms with oxygen, and the silkworms would in turn provide the plants with necessary carbon dioxide and nutrients through their waste, researchers will evaluate whether plants successfully perform photosynthesis, and grow and bloom in the lunar environment. === Secondary metabolites === Thalianin is an Arabidopsis root triterpene. Potter et al., 2018 finds synthesis is induced by a combination of at least 2 facts, cell-specific transcription factors (TFs) and the accessibility of the chromatin. == Plant–pathogen interactions == Understanding how plants achieve resistance is important to protect the world's food production, and the agriculture industry. Many model systems have been developed to better understand interactions between plants and bacterial, fungal, oomycete, viral, and nematode pathogens. A. thaliana has been a powerful tool for the study of the subdiscipline of plant pathology, that is, the interaction between plants and disease-causing pathogens. The use of A. thaliana has led to many breakthroughs in the advancement of knowledge of how plants manifest plant disease resistance. The reason most plants are resistant to most pathogens is through nonhost resistance - not all pathogens will infect all plants. An example where A. thaliana was used to determine the genes responsible for nonhost resistance is Blumeria graminis, the causal agent of powdery mildew of grasses. A. thaliana mutants were developed using the mutagen ethyl methanesulfonate and screened to identify mutants with increased infection by B. graminis. The mutants with higher infection rates are referred to as PEN mutants due to the ability of B. graminis to penetrate A. thaliana to begin the disease process. The PEN genes were later mapped to identify the genes responsible for nonhost resistance to B. graminis. In general, when a plant is exposed to a pathogen, or nonpathogenic microbe, an initial response, known as PAMP-triggered immunity (PTI), occurs because the plant detects conserved motifs known as pathogen-associated molecular patterns (PAMPs). 
These PAMPs are detected by specialized receptors in the host known as pattern recognition receptors (PRRs) on the plant cell surface. The best-characterized PRR in A. thaliana is FLS2 (Flagellin-Sensing2), which recognizes bacterial flagellin, a specialized organelle used by microorganisms for the purpose of motility, as well as the ligand flg22, which comprises the 22 amino acids recognized by FLS2. Discovery of FLS2 was facilitated by the identification of an A. thaliana ecotype, Ws-0, that was unable to detect flg22, leading to the identification of the gene encoding FLS2. FLS2 shows striking similarity to rice XA21, the first PRR isolated in 1995. Both flagellin and UV-C act similarly to increase homologous recombination in A. thaliana, as demonstrated by Molinier et al. 2006. Beyond this somatic effect, they found this to extend to subsequent generations of the plant. A second PRR, EF-Tu receptor (EFR), identified in A. thaliana, recognizes the bacterial EF-Tu protein, the prokaryotic elongation factor used in protein synthesis, as well as the laboratory-used ligand elf18. Using Agrobacterium-mediated transformation, a technique that takes advantage of the natural process by which Agrobacterium transfers genes into host plants, the EFR gene was transformed into Nicotiana benthamiana, tobacco plant that does not recognize EF-Tu, thereby permitting recognition of bacterial EF-Tu thereby confirming EFR as the receptor of EF-Tu. Both FLS2 and EFR use similar signal transduction pathways to initiate PTI. A. thaliana has been instrumental in dissecting these pathways to better understand the regulation of immune responses, the most notable one being the mitogen-activated protein kinase (MAP kinase) cascade. Downstream responses of PTI include callose deposition, the oxidative burst, and transcription of defense-related genes. PTI is able to combat pathogens in a nonspecific manner. A stronger and more specific response in plants is that of effector-triggered immunity (ETI), which is dependent upon the recognition of pathogen effectors, proteins secreted by the pathogen that alter functions in the host, by plant resistance genes (R-genes), often described as a gene-for-gene relationship. This recognition may occur directly or indirectly via a guardee protein in a hypothesis known as the guard hypothesis. The first R-gene cloned in A. thaliana was RPS2 (resistance to Pseudomonas syringae 2), which is responsible for recognition of the effector avrRpt2. The bacterial effector avrRpt2 is delivered into A. thaliana via the Type III secretion system of P. syringae pv. tomato strain DC3000. Recognition of avrRpt2 by RPS2 occurs via the guardee protein RIN4, which is cleaved. Recognition of a pathogen effector leads to a dramatic immune response known as the hypersensitive response, in which the infected plant cells undergo cell death to prevent the spread of the pathogen. Systemic acquired resistance (SAR) is another example of resistance that is better understood in plants because of research done in A. thaliana. Benzothiadiazol (BTH), a salicylic acid (SA) analog, has been used historically as an antifungal compound in crop plants. BTH, as well as SA, has been shown to induce SAR in plants. The initiation of the SAR pathway was first demonstrated in A. thaliana in which increased SA levels are recognized by nonexpresser of PR genes 1 (NPR1) due to redox change in the cytosol, resulting in the reduction of NPR1. 
NPR1, which usually exists in a multiplex (oligomeric) state, becomes monomeric (a single unit) upon reduction. When NPR1 becomes monomeric, it translocates to the nucleus, where it interacts with many TGA transcription factors, and is able to induce pathogenesis-related (PR) genes such as PR1. Another example of SAR is the research done with transgenic tobacco plants expressing bacterial salicylate hydroxylase (the nahG gene), which shows that SAR requires the accumulation of SA for its expression. Although not directly immunological, intracellular transport affects susceptibility by incorporating - or being tricked into incorporating - pathogen particles. For example, the Dynamin-related protein 2b/drp2b gene helps to move invaginated material into cells, with some mutants increasing PstDC3000 virulence even further. === Evolutionary aspect of plant-pathogen resistance === Plants are affected by multiple pathogens throughout their lifetimes. In response to the presence of pathogens, plants have evolved receptors on their cell surfaces to detect and respond to pathogens. Arabidopsis thaliana is a model organism used to determine specific defense mechanisms of plant-pathogen resistance. These plants have special receptors on their cell surfaces that allow for detection of pathogens and initiate mechanisms to inhibit pathogen growth. They contain two such receptors, FLS2 (which recognizes bacterial flagellin) and EFR (which recognizes the bacterial EF-Tu protein), which use signal transduction pathways to initiate the disease response pathway. The pathway leads to the recognition of the pathogen, causing the infected cells to undergo cell death to stop the spread of the pathogen. Plants with FLS2 and EFR receptors have been shown to have increased fitness in the population. This has led to the belief that plant-pathogen resistance is an evolutionary mechanism that has built up over generations to respond to dynamic environments, such as increased predation and extreme temperatures. A. thaliana has also been used to study SAR. This pathway uses benzothiadiazol, a chemical inducer, to induce the transcription (mRNA) of SAR genes. The resulting accumulation of these transcripts leads to the expression of pathogenesis-related genes that help inhibit pathogen growth. Plant-pathogen interactions are important for an understanding of how plants have evolved to combat different types of pathogens that may affect them. Variation in resistance of plants across populations is due to variation in environmental factors. Plants that have evolved resistance, whether it be the general variation or the SAR variation, have been able to live longer and hold off necrosis of their tissue (premature death of cells), which leads to better adaptation and fitness for populations that are in rapidly changing environments. In the future, comparisons of the pathosystems of wild populations and their coevolved pathogens with wild-wild hybrids of known parentage may reveal new mechanisms of balancing selection. In life history theory we may find that A. thaliana maintains certain alleles due to pleiotropy between plant-pathogen effects and other traits, as in livestock. Research in A. thaliana suggests that the immunity regulator protein family EDS1 in general co-evolved with the CCHELO family of nucleotide-binding–leucine-rich-repeat receptors (NLRs). Xiao et al. 2005 have shown that the powdery mildew immunity mediated by A. thaliana's RPW8 (which has a CCHELO domain) is dependent on two members of this family: EDS1 itself and PAD4.
RESISTANCE TO PSEUDOMONAS SYRINGAE 5/RPS5 is a disease resistance protein which guards AvrPphB SUSCEPTIBLE 1/PBS1. PBS1, as the name would suggest, is the target of AvrPphB, an effector produced by Pseudomonas syringae pv. phaseolicola. == Other research == Ongoing research on A. thaliana is being performed on the International Space Station by the European Space Agency. The goals are to study the growth and reproduction of plants from seed to seed in microgravity. Plant-on-a-chip devices in which A. thaliana tissues can be cultured in semi-in vitro conditions have been described. Use of these devices may aid understanding of pollen-tube guidance and the mechanism of sexual reproduction in A. thaliana. Researchers at the University of Florida were able to grow the plant in lunar soil originating from the Sea of Tranquillity. === Self-pollination === A. thaliana is a predominantly self-pollinating plant with an outcrossing rate estimated at less than 0.3%. An analysis of the genome-wide pattern of linkage disequilibrium suggested that self-pollination evolved roughly a million years ago or more. Meioses that lead to self-pollination are unlikely to produce significant beneficial genetic variability. However, these meioses can provide the adaptive benefit of recombinational repair of DNA damages during formation of germ cells at each generation. Such a benefit may have been sufficient to allow the long-term persistence of meioses even when followed by self-fertilization. A physical mechanism for self-pollination in A. thaliana is through pre-anthesis autogamy, such that fertilisation takes place largely before flower opening. == Databases and other resources == TAIR and NASC: curated sources for diverse genetic and molecular biology information, links to gene expression databases etc. Arabidopsis Biological Resource Center (seed and DNA stocks) Nottingham Arabidopsis Stock Centre (seed and DNA stocks) Artade database AraDiv: a dataset of functional traits and leaf hyperspectral reflectance of Arabidopsis thaliana: see data repository == See also == Sexual selection in Arabidopsis thaliana A. thaliana responses to salinity BZIP intron plant The Thaliana Bridge, installed in 2021 at Harlow Carr was inspired by the work of the botanical scientist Rachel Leech and represents the sequence of an Arabidopsis thaliana chromosome. Novosphingobium arabidopsis, isolated from the rhizosphere of the plant == References == == External links == Arabidopsis transcriptional regulatory map The Arabidopsis Information Resource (TAIR) Salk Institute Genomic Analysis Laboratory Archived 8 March 2021 at the Wayback Machine What Makes Plants Grow? The Arabidopsis genome knows Featured article in Genome News Network The Arabidopsis book - A comprehensive review published yearly related to research in Arabidopsis A. thaliana protein abundance The Arabidopsis Information Portal (Araport)
Wikipedia/Protein_Interaction_Networks
There is no agreed upon definition of value network. A general definition that subsumes the other definitions is that a value network is a network of roles linked by interactions in which economic entities engage in both tangible and intangible exchanges to achieve economic or social good. This definition is similar to one given by Verna Allee. == Definitions == Different definitions provide different perspectives on the general concept of a value network. === Christensen === Clayton Christensen defines a value network as: "The collection of upstream suppliers, downstream channels to market, and ancillary providers that support a common business model within an industry. When would-be disruptors enter into existing value networks, they must adapt their Business models to conform to the value network and therefore fail at disruption because they become co-opted." === Fjeldstad and Stabell: Value configurations === Fjeldstad and Stabell define a value network as one of three ways by which an organisation generates value. The others are the value shop and value chain. Their value networks consist of the following components: customers, a service that enables interaction among them, an organization to provide the service, and contracts that enable access to the service One example of a value network is that formed by social media users. The company provides a service, users contract with the company, and immediately have access to the value network of other customers. A less obvious example is a car insurance company. The Company provides insurance. Customers can travel and interact in various ways while limiting risk exposure. The insurance policies represent the company's contracts and the internal processes. Fjeldstad & Stabell and Christensen's concepts address how a Company understands itself and its value creation process, but they are not identical. Christensen's value networks address the relation between a Company and its suppliers and the requirements posed by the customers, and how these interact when defining what represents value in the product that is produced. Fjeldstad and Stabell's value networks emphasize that the created value is between interacting customers, as facilitated by value networks. === Normann and Ramirez: Value constellations === Normann and Ramirez argued in 1993 that strategy is not a fixed set of activities along a value chain. Instead the focus should be on the value creating system. All stakeholders are obligated to produce value. Successful companies conceive of strategy as systematic social innovation. === Verna Allee: Value networks === Verna Allee defines value networks as any web of relationships that generates both tangible and intangible value through complex dynamic exchanges between two or more individuals, groups or organizations. Any organization or group of organizations engaged in both tangible and intangible exchanges can be viewed as a value network, whether private industry, government or public sector. Allee developed Value network analysis, a whole systems mapping and analysis approach to understanding tangible and intangible value creation among participants in an enterprise system. Revealing the hidden network patterns behind business processes can provide predictive intelligence for when workflow performance is at risk. She believes value network analysis provides a standard way to define, map and analyse the participants, transactions and tangible and intangible deliverables that together form a value network. 
Allee says value network analysis can lead to profound shifts in perception of problem situations and mobilize collective action to implement change. == Important terms and concepts == === Tangible value === All exchanges of goods, services or revenue, including all transactions involving contracts, invoices, return receipts of orders, requests for proposals, confirmations and payments are considered to be tangible value. Products or services that generate revenue or are expected as part of a service are also included in the tangible value flow of goods, services, and revenue (2). In government agencies, these would be mandated activities. In civil society organizations, these would be formal commitments to provide resources or services. === Intangible value === Two primary sub-categories are included in intangible value: knowledge and benefits. Intangible knowledge exchanges include strategic information, planning knowledge, process knowledge, technical know-how, collaborative design and policy development; which support the product and service tangible value network. Intangible benefits are also considered favors that can be offered from one person to another. Examples include offering political or emotional support to someone. Another example of intangible value is when a research organization asks someone to volunteer their time and expertise to a project in exchange for the intangible benefit of prestige by affiliation (3). All biological organisms, including humans, function in a self-organizing mode internally and externally. That is, the elements in our bodies—down to individual cells and DNA molecules—work together in order to sustain us. However, there is no central "boss" to control this dynamic activity. Our relationships with other individuals also progress through the same circular free flowing process as we search for outcomes that are best for our well-being. Under the right conditions these social exchanges can be extraordinarily altruistic. Conversely, they can also be quite self-centered and even violent. It all depends on the context of the immediate environment and the people involved. === A non-linear approach === Often, value networks are considered to consist of groups of companies working together to produce and transport a product to the customer. Relationships among customers of a single company are examples of how value networks can be found in any organization. Companies can link their customers together by direct methods like the telephone or indirect methods like combining customer's resources together. The purpose of value networks is to create the most benefit for the people involved in the network (5). The intangible value of knowledge within these networks is just as important as a monetary value. In order to succeed knowledge must be shared to create the best situations or opportunities. Value networks are how ideas flow into the market and to the people that need to hear them. Because value networks are instrumental in advancing business and institutional practices a value network analysis can be useful in a wide variety of business situations. Some typical ones are listed below. ==== Relationship management ==== Relationship management typically just focuses on managing information about customers, suppliers, and business partners. A value network approach considers relationships as two-way value-creating interactions, which focus on realizing value as well as providing value. 
==== Business web and ecosystem development ==== Resource deployment, delivery, market innovation, knowledge sharing, and time-to-market advantage are dependent on the quality, coherence, and vitality of the relevant value networks, business webs and business ecosystems. ==== Fast-track complex process redesign ==== Product and service offerings are constantly changing – and so are the processes to innovate, design, manufacture, and deliver them. Multiple, interdependent, and concurrent processes are too complex for traditional process mapping, but can be analyzed very quickly with the value network method. ==== Reconfiguring the organization ==== Mergers, acquisitions, downsizing, expansion to new markets, new product groups, new partners, new roles and functions – anytime relationships change, value interactions and flows change too. ==== Supporting knowledge networks and communities of practice ==== Understanding the transactional dynamics is vital for purposeful networks of all kinds, including networks and communities focused on creating knowledge value. A value network analysis helps communities of practice negotiate for resources and demonstrate their value to different groups within the organization. ==== Develop scorecards, conduct ROI and cost/benefit analyses, and drive decision making ==== Because the value network approach addresses both financial and non-financial assets and exchanges, it expands metrics and indexes beyond the lagging indicators of financial return and operational performance – to also include leading indicators for strategic capability and system optimization. == See also == == References == == External links == NetLab- at the University of Toronto, studies the intersection of social, communication, information and computing networks. Value Network Analysis and Value Conversion of Tangible and Intangible Assets, Verna Allee. CASOS – Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon. Understanding Collaborative Networks: Expanding on the Concept of Value Networks
Wikipedia/Value_network
Network formation is an aspect of network science that seeks to model how a network evolves by identifying which factors affect its structure and how these mechanisms operate. Network formation hypotheses are tested either by using a dynamic model with an increasing network size or by making an agent-based model to determine which network structure is the equilibrium in a fixed-size network. == Dynamic models == A dynamic model, often used by physicists and biologists, begins as a small network or even a single node. The modeler then uses a (usually randomized) rule for how newly arrived nodes form links in order to increase the size of the network. The aim is to determine what the properties of the network will be when it grows in size. In this way, researchers try to reproduce properties common in most real networks, such as the small world network property or the scale-free network property. These properties are common in almost every real network, including the World Wide Web, the metabolic network, or the network of international air routes. The oldest model of this type is the Erdős-Rényi model, in which new nodes randomly choose other nodes to connect to. A second well-known model is the Watts and Strogatz model, which starts from a regular ring lattice and evolves by rewiring links randomly. These models display some realistic network properties, but fail to account for others. One of the most influential models of network formation is the Barabási-Albert model. Here, the network also starts from a small system, and incoming nodes choose their links randomly, but the randomization is not uniform. Instead, nodes which already possess a greater number of links have a higher likelihood of becoming connected to incoming nodes. This mechanism is known as preferential attachment; a minimal simulation sketch of it is given at the end of this entry, after the further reading list. In comparison to previous models, the Barabási-Albert model seems to more accurately reflect phenomena observed in real-world networks. == Agent-based models == The second approach to model network formation is agent- or game theory-based modelling. In these models, a network with a fixed number of nodes or agents is created. Every agent is given a utility function, a representation of its linking preferences, and directed to form links with other nodes based upon it. Usually, forming or maintaining a link has a cost, but having connections to other nodes has benefits. The method tests the hypothesis that, given some initial setting and parameter values, a certain network structure will emerge as an equilibrium of this game. Since the number of nodes is usually fixed, these models can very rarely explain the properties of huge real-world networks; however, they are very useful for examining network formation in smaller groups. Jackson and Wolinsky pioneered these types of models in a 1996 paper, which has since inspired several game-theoretic models. These models were further developed by Jackson and Watts, who put this approach into a dynamic setting to see how the network structure evolves over time. Usually, games with known network structure are widely applicable; however, there are various settings in which players interact without fully knowing who their neighbors are and what the network structure is. These games can be modeled using incomplete information network games. == Growing networks in agent-based setting == There are very few models that try to combine the two approaches.
However, in 2007, Jackson and Rogers modeled a growing network in which new nodes chose their connections partly based on random choices and partly based on maximizing their utility function. With this general framework, modelers can reproduce almost every stylized trait of real-life networks. == References == == Further reading == Barabási and Albert (2002). "Statistical mechanics of complex networks" (PDF). Reviews of Modern Physics. 74 (1): 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. CiteSeerX 10.1.1.242.4753. doi:10.1103/revmodphys.74.47. Archived from the original (PDF) on 2015-08-24.
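As referenced above, the following is a minimal, illustrative Python sketch of preferential attachment in the spirit of the Barabási-Albert model; it is not code from any of the cited papers, and the function name and parameter choices are assumptions made for the example. Each incoming node attaches to m existing nodes chosen with probability proportional to their current degree.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=None):
    """Grow a graph in which each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # A list in which every node appears once per incident edge; sampling
    # uniformly from it is equivalent to degree-proportional sampling.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

edges = preferential_attachment(n_nodes=1000, m=2, seed=1)
```

The resulting degree distribution is heavy-tailed, in line with the scale-free property mentioned above, whereas growth with uniformly random attachment produces a much narrower degree distribution.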
Wikipedia/Network_formation
In mathematics, especially in probability theory and ergodic theory, the invariant sigma-algebra is a sigma-algebra formed by sets which are invariant under a group action or dynamical system. It can be interpreted as of being "indifferent" to the dynamics. The invariant sigma-algebra appears in the study of ergodic systems, as well as in theorems of probability theory such as de Finetti's theorem and the Hewitt-Savage law. == Definition == === Strictly invariant sets === Let ( X , F ) {\displaystyle (X,{\mathcal {F}})} be a measurable space, and let T : ( X , F ) → ( X , F ) {\displaystyle T:(X,{\mathcal {F}})\to (X,{\mathcal {F}})} be a measurable function. A measurable subset S ∈ F {\displaystyle S\in {\mathcal {F}}} is called invariant if and only if T − 1 ( S ) = S {\displaystyle T^{-1}(S)=S} . Equivalently, if for every x ∈ X {\displaystyle x\in X} , we have that x ∈ S {\displaystyle x\in S} if and only if T ( x ) ∈ S {\displaystyle T(x)\in S} . More generally, let M {\displaystyle M} be a group or a monoid, let α : M × X → X {\displaystyle \alpha :M\times X\to X} be a monoid action, and denote the action of m ∈ M {\displaystyle m\in M} on X {\displaystyle X} by α m : X → X {\displaystyle \alpha _{m}:X\to X} . A subset S ⊆ X {\displaystyle S\subseteq X} is α {\displaystyle \alpha } -invariant if for every m ∈ M {\displaystyle m\in M} , α m − 1 ( S ) = S {\displaystyle \alpha _{m}^{-1}(S)=S} . === Almost surely invariant sets === Let ( X , F ) {\displaystyle (X,{\mathcal {F}})} be a measurable space, and let T : ( X , F ) → ( X , F ) {\displaystyle T:(X,{\mathcal {F}})\to (X,{\mathcal {F}})} be a measurable function. A measurable subset (event) S ∈ F {\displaystyle S\in {\mathcal {F}}} is called almost surely invariant if and only if its indicator function 1 S {\displaystyle 1_{S}} is almost surely equal to the indicator function 1 T − 1 ( S ) {\displaystyle 1_{T^{-1}(S)}} . Similarly, given a measure-preserving Markov kernel k : ( X , F , p ) → ( X , F , p ) {\displaystyle k:(X,{\mathcal {F}},p)\to (X,{\mathcal {F}},p)} , we call an event S ∈ F {\displaystyle S\in {\mathcal {F}}} almost surely invariant if and only if k ( S ∣ x ) = 1 S ( x ) {\displaystyle k(S\mid x)=1_{S}(x)} for almost all x ∈ X {\displaystyle x\in X} . As for the case of strictly invariant sets, the definition can be extended to an arbitrary group or monoid action. In many cases, almost surely invariant sets differ by invariant sets only by a null set (see below). === Sigma-algebra structure === Both strictly invariant sets and almost surely invariant sets are closed under taking countable unions and complements, and hence they form sigma-algebras. These sigma-algebras are usually called either the invariant sigma-algebra or the sigma-algebra of invariant events, both in the strict case and in the almost sure case, depending on the author. For the purpose of the article, let's denote by I {\displaystyle {\mathcal {I}}} the sigma-algebra of strictly invariant sets, and by I ~ {\displaystyle {\tilde {\mathcal {I}}}} the sigma-algebra of almost surely invariant sets. == Properties == Given a measure-preserving function T : ( X , A , p ) → ( X , A , p ) {\displaystyle T:(X,{\mathcal {A}},p)\to (X,{\mathcal {A}},p)} , a set A ∈ A {\displaystyle A\in {\mathcal {A}}} is almost surely invariant if and only if there exists a strictly invariant set A ′ ∈ I {\displaystyle A'\in {\mathcal {I}}} such that p ( A △ A ′ ) = 0 {\displaystyle p(A\triangle A')=0} . 
Given measurable functions T : ( X , A ) → ( X , A ) {\displaystyle T:(X,{\mathcal {A}})\to (X,{\mathcal {A}})} and f : ( X , A ) → ( R , B ) {\displaystyle f:(X,{\mathcal {A}})\to (\mathbb {R} ,{\mathcal {B}})} , we have that f {\displaystyle f} is invariant, meaning that f ∘ T = f {\displaystyle f\circ T=f} , if and only if it is I {\displaystyle {\mathcal {I}}} -measurable. The same is true replacing ( R , B ) {\displaystyle (\mathbb {R} ,{\mathcal {B}})} with any measurable space where the sigma-algebra separates points. An invariant measure p {\displaystyle p} is (by definition) ergodic if and only if for every invariant subset A ∈ I {\displaystyle A\in {\mathcal {I}}} , p ( A ) = 0 {\displaystyle p(A)=0} or p ( A ) = 1 {\displaystyle p(A)=1} . == Examples == === Exchangeable sigma-algebra === Given a measurable space ( X , A ) {\displaystyle (X,{\mathcal {A}})} , denote by ( X N , A ⊗ N ) {\displaystyle (X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })} be the countable cartesian power of X {\displaystyle X} , equipped with the product sigma-algebra. We can view X N {\displaystyle X^{\mathbb {N} }} as the space of infinite sequences of elements of X {\displaystyle X} , X N = { ( x 0 , x 1 , x 2 , … ) , x i ∈ X } . {\displaystyle X^{\mathbb {N} }=\{(x_{0},x_{1},x_{2},\dots ),x_{i}\in X\}.} Consider now the group S ∞ {\displaystyle S_{\infty }} of finite permutations of N {\displaystyle \mathbb {N} } , i.e. bijections σ : N → N {\displaystyle \sigma :\mathbb {N} \to \mathbb {N} } such that σ ( n ) ≠ n {\displaystyle \sigma (n)\neq n} only for finitely many n ∈ N {\displaystyle n\in \mathbb {N} } . Each finite permutation σ {\displaystyle \sigma } acts measurably on X N {\displaystyle X^{\mathbb {N} }} by permuting the components, and so we have an action of the countable group S ∞ {\displaystyle S_{\infty }} on X N {\displaystyle X^{\mathbb {N} }} . An invariant event for this sigma-algebra is often called an exchangeable event or symmetric event, and the sigma-algebra of invariant events is often called the exchangeable sigma-algebra. A random variable on X N {\displaystyle X^{\mathbb {N} }} is exchangeable (i.e. permutation-invariant) if and only if it is measurable for the exchangeable sigma-algebra. The exchangeable sigma-algebra plays a role in the Hewitt-Savage zero-one law, which can be equivalently stated by saying that for every probability measure p {\displaystyle p} on ( X , A ) {\displaystyle (X,{\mathcal {A}})} , the product measure p ⊗ N {\displaystyle p^{\otimes \mathbb {N} }} on X N {\displaystyle X^{\mathbb {N} }} assigns to each exchangeable event probability either zero or one. Equivalently, for the measure p ⊗ N {\displaystyle p^{\otimes \mathbb {N} }} , every exchangeable random variable on X N {\displaystyle X^{\mathbb {N} }} is almost surely constant. It also plays a role in the de Finetti theorem. === Shift-invariant sigma-algebra === As in the example above, given a measurable space ( X , A ) {\displaystyle (X,{\mathcal {A}})} , consider the countably infinite cartesian product ( X N , A ⊗ N ) {\displaystyle (X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })} . Consider now the shift map T : X N → X N {\displaystyle T:X^{\mathbb {N} }\to X^{\mathbb {N} }} given by mapping ( x 0 , x 1 , x 2 , … ) ∈ X N {\displaystyle (x_{0},x_{1},x_{2},\dots )\in X^{\mathbb {N} }} to ( x 1 , x 2 , x 3 , … ) ∈ X N {\displaystyle (x_{1},x_{2},x_{3},\dots )\in X^{\mathbb {N} }} . 
An invariant event for this sigma-algebra is called a shift-invariant event, and the resulting sigma-algebra is sometimes called the shift-invariant sigma-algebra. This sigma-algebra is related to the one of tail events, which is given by the following intersection, ⋂ n ∈ N ( ⨂ m ≥ n A m ) , {\displaystyle \bigcap _{n\in \mathbb {N} }\left(\bigotimes _{m\geq n}{\mathcal {A}}_{m}\right),} where A m ⊆ A ⊗ N {\displaystyle {\mathcal {A}}_{m}\subseteq {\mathcal {A}}^{\otimes \mathbb {N} }} is the sigma-algebra induced on X N {\displaystyle X^{\mathbb {N} }} by the projection on the m {\displaystyle m} -th component π m : ( X N , A ⊗ N ) → ( X , A ) {\displaystyle \pi _{m}:(X^{\mathbb {N} },{\mathcal {A}}^{\otimes \mathbb {N} })\to (X,{\mathcal {A}})} . Every shift-invariant event is a tail event, but the converse is not true. == See also == Invariant set De Finetti theorem Hewitt-Savage zero-one law Exchangeable random variables Invariant measure Ergodic system == Citations == == References == Viana, Marcelo; Oliveira, Krerley (2016). Foundations of Ergodic Theory. Cambridge University Press. ISBN 978-1-107-12696-1. Billingsley, Patrick (1995). Probability and Measure. John Wiley & Sons. ISBN 0-471-00710-2. Durrett, Rick (2010). Probability: theory and examples. Cambridge University Press. ISBN 978-0-521-76539-8. Douc, Randal; Moulines, Eric; Priouret, Pierre; Soulier, Philippe (2018). Markov Chains. Springer. doi:10.1007/978-3-319-97704-1. ISBN 978-3-319-97703-4. Klenke, Achim (2020). Probability Theory: A comprehensive course. Universitext. Springer. doi:10.1007/978-1-4471-5361-0. ISBN 978-3-030-56401-8. Hewitt, E.; Savage, L. J. (1955). "Symmetric measures on Cartesian products". Trans. Amer. Math. Soc. 80 (2): 470–501. doi:10.1090/s0002-9947-1955-0076206-8.
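To make the strict-invariance condition T⁻¹(S) = S from the definition above concrete, the brute-force Python sketch below (purely illustrative; the map, the state space, and the function names are made up for the example) enumerates the strictly invariant subsets of a small finite state space and checks that a function f satisfies f∘T = f exactly when it is constant on each minimal invariant set, i.e. when it is measurable with respect to the invariant sigma-algebra.

```python
from itertools import chain, combinations

X = range(6)
T = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3, 5: 5}   # a map on a 6-point state space

def preimage(S):
    return frozenset(x for x in X if T[x] in S)

def subsets(X):
    xs = list(X)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Strictly invariant sets: those S with T^{-1}(S) = S.
invariant = [frozenset(S) for S in subsets(X) if preimage(frozenset(S)) == frozenset(S)]
# For this T they are exactly the unions of the cycles {0,1,2}, {3,4} and {5}.

def is_invariant_function(f):
    return all(f[T[x]] == f[x] for x in X)

def is_measurable_wrt_invariant_algebra(f):
    # f is measurable for the invariant sigma-algebra iff each level set is invariant.
    return all(frozenset(x for x in X if f[x] == v) in invariant for v in set(f.values()))

f = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c'}   # constant on each cycle
g = {0: 'a', 1: 'b', 2: 'a', 3: 'b', 4: 'b', 5: 'c'}   # not constant on {0,1,2}
assert is_invariant_function(f) and is_measurable_wrt_invariant_algebra(f)
assert not is_invariant_function(g) and not is_measurable_wrt_invariant_algebra(g)
```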
Wikipedia/Invariant_sigma-algebra
In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem, and are a special case of conservative systems. They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium. == Definition == A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} with the following structure: X {\displaystyle X} is a set, B {\displaystyle {\mathcal {B}}} is a σ-algebra over X {\displaystyle X} , μ : B → [ 0 , 1 ] {\displaystyle \mu :{\mathcal {B}}\rightarrow [0,1]} is a probability measure, so that μ ( X ) = 1 {\displaystyle \mu (X)=1} , and μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} , T : X → X {\displaystyle T:X\rightarrow X} is a measurable transformation which preserves the measure μ {\displaystyle \mu } , i.e., ∀ A ∈ B μ ( T − 1 ( A ) ) = μ ( A ) {\displaystyle \forall A\in {\mathcal {B}}\;\;\mu (T^{-1}(A))=\mu (A)} . == Discussion == One may ask why the measure preserving transformation is defined in terms of the inverse μ ( T − 1 ( A ) ) = μ ( A ) {\displaystyle \mu (T^{-1}(A))=\mu (A)} instead of the forward transformation μ ( T ( A ) ) = μ ( A ) {\displaystyle \mu (T(A))=\mu (A)} . This can be understood intuitively. Consider the typical measure on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and a map T x = 2 x mod 1 = { 2 x if x < 1 / 2 2 x − 1 if x > 1 / 2 {\displaystyle Tx=2x\mod 1={\begin{cases}2x{\text{ if }}x<1/2\\2x-1{\text{ if }}x>1/2\\\end{cases}}} . This is the Bernoulli map. Now, distribute an even layer of paint on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} , and then map the paint forward. The paint on the [ 0 , 1 / 2 ] {\displaystyle [0,1/2]} half is spread thinly over all of [ 0 , 1 ] {\displaystyle [0,1]} , and the paint on the [ 1 / 2 , 1 ] {\displaystyle [1/2,1]} half as well. The two layers of thin paint, layered together, recreates the exact same paint thickness. More generally, the paint that would arrive at subset A ⊂ [ 0 , 1 ] {\displaystyle A\subset [0,1]} comes from the subset T − 1 ( A ) {\displaystyle T^{-1}(A)} . For the paint thickness to remain unchanged (measure-preserving), the mass of incoming paint should be the same: μ ( A ) = μ ( T − 1 ( A ) ) {\displaystyle \mu (A)=\mu (T^{-1}(A))} . Consider a mapping T {\displaystyle {\mathcal {T}}} of power sets: T : P ( X ) → P ( X ) {\displaystyle {\mathcal {T}}:P(X)\to P(X)} Consider now the special case of maps T {\displaystyle {\mathcal {T}}} which preserve intersections, unions and complements (so that it is a map of Borel sets) and also sends X {\displaystyle X} to X {\displaystyle X} (because we want it to be conservative). Every such conservative, Borel-preserving map can be specified by some surjective map T : X → X {\displaystyle T:X\to X} by writing T ( A ) = T − 1 ( A ) {\displaystyle {\mathcal {T}}(A)=T^{-1}(A)} . Of course, one could also define T ( A ) = T ( A ) {\displaystyle {\mathcal {T}}(A)=T(A)} , but this is not enough to specify all such possible maps T {\displaystyle {\mathcal {T}}} . 
Consider a mapping T {\displaystyle {\mathcal {T}}} of power sets: T : P ( X ) → P ( X ) {\displaystyle {\mathcal {T}}:P(X)\to P(X)} Consider now the special case of maps T {\displaystyle {\mathcal {T}}} which preserve intersections, unions and complements (so that it is a map of Borel sets) and also sends X {\displaystyle X} to X {\displaystyle X} (because we want it to be conservative). Every such conservative, Borel-preserving map can be specified by some surjective map T : X → X {\displaystyle T:X\to X} by writing T ( A ) = T − 1 ( A ) {\displaystyle {\mathcal {T}}(A)=T^{-1}(A)} . Of course, one could also define T ( A ) = T ( A ) {\displaystyle {\mathcal {T}}(A)=T(A)} , but this is not enough to specify all such possible maps T {\displaystyle {\mathcal {T}}} . That is, conservative, Borel-preserving maps T {\displaystyle {\mathcal {T}}} cannot, in general, be written in the form T ( A ) = T ( A ) {\displaystyle {\mathcal {T}}(A)=T(A)} . μ ( T − 1 ( A ) ) {\displaystyle \mu (T^{-1}(A))} has the form of a pushforward, whereas μ ( T ( A ) ) {\displaystyle \mu (T(A))} is generically called a pullback. Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. For example, the transfer operator is defined in terms of the pushforward of the transformation map T {\displaystyle T} ; the measure μ {\displaystyle \mu } can now be understood as an invariant measure; it is just the Frobenius–Perron eigenvector of the transfer operator (recall that the FP eigenvector is the eigenvector associated with the largest eigenvalue of a matrix; in this case, it is the eigenvector whose eigenvalue is one: the invariant measure). There are two classification problems of interest. One, discussed below, fixes ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} and asks about the isomorphism classes of a transformation map T {\displaystyle T} . The other, discussed in transfer operator, fixes ( X , B ) {\displaystyle (X,{\mathcal {B}})} and T {\displaystyle T} , and asks about maps μ {\displaystyle \mu } that are measure-like. Measure-like in the sense that they preserve the Borel properties but are no longer invariant; they are in general dissipative and so give insights into dissipative systems and the route to equilibrium. In terms of physics, the measure-preserving dynamical system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} often describes a physical system that is in equilibrium, for example, thermodynamic equilibrium. One might ask: how did it get that way? Often, the answer is by stirring, mixing, turbulence, thermalization or other such processes. If a transformation map T {\displaystyle T} describes this stirring, mixing, etc., then the system ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} is all that is left, after all of the transient modes have decayed away. The transient modes are precisely those eigenvectors of the transfer operator that have eigenvalue less than one; the invariant measure μ {\displaystyle \mu } is the one mode that does not decay away. The rates of decay of the transient modes are given by (the logarithms of) their eigenvalues; the eigenvalue one corresponds to an infinite half-life.
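The role of the eigenvalue-one mode and of the decaying transient modes is easiest to see in the simplest finite setting, where the transfer operator is just a stochastic matrix acting on probability distributions. The sketch below is an editorial illustration under that simplification (the 3-state matrix is an arbitrary choice, not taken from the article).

```python
# Hedged illustration: for a small Markov chain, the row-stochastic matrix P
# pushes probability distributions forward, its eigenvalue-1 left eigenvector
# is the invariant measure, and the remaining eigenvalues (|lambda| < 1)
# govern how fast transient modes decay toward equilibrium.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],     # hypothetical 3-state "stirring" dynamics
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Left eigenvectors of P = eigenvectors of P.T (pushforward on distributions).
eigvals, eigvecs = np.linalg.eig(P.T)

# The Frobenius-Perron mode: eigenvalue 1, normalized to a probability vector.
k = np.argmin(np.abs(eigvals - 1.0))
invariant = np.real(eigvecs[:, k])
invariant = invariant / invariant.sum()
print("eigenvalues:       ", np.round(np.real(eigvals), 4))
print("invariant measure: ", np.round(invariant, 4))

# Any initial distribution relaxes to the invariant one; the transient modes
# decay at rates set by the logarithms of the subdominant eigenvalues.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P
print("after 200 steps:   ", np.round(mu, 4))
```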
== Informal example == The microcanonical ensemble from physics provides an informal example. Consider, for example, a fluid, gas or plasma in a box of width, length and height w × l × h , {\displaystyle w\times l\times h,} consisting of N {\displaystyle N} atoms. A single atom in that box might be anywhere, having arbitrary velocity; it would be represented by a single point in w × l × h × R 3 . {\displaystyle w\times l\times h\times \mathbb {R} ^{3}.} A given collection of N {\displaystyle N} atoms would then be a single point somewhere in the space ( w × l × h ) N × R 3 N . {\displaystyle (w\times l\times h)^{N}\times \mathbb {R} ^{3N}.} The "ensemble" is the collection of all such points, that is, the collection of all such possible boxes (of which there are an uncountably-infinite number). This ensemble of all-possible-boxes is the space X {\displaystyle X} above. In the case of an ideal gas, the measure μ {\displaystyle \mu } is given by the Maxwell–Boltzmann distribution. It is a product measure, in that if p i ( x , y , z , v x , v y , v z ) d 3 x d 3 p {\displaystyle p_{i}(x,y,z,v_{x},v_{y},v_{z})\,d^{3}x\,d^{3}p} is the probability of atom i {\displaystyle i} having position and velocity x , y , z , v x , v y , v z {\displaystyle x,y,z,v_{x},v_{y},v_{z}} , then, for N {\displaystyle N} atoms, the probability is the product of N {\displaystyle N} of these. This measure is understood to apply to the ensemble. So, for example, one of the possible boxes in the ensemble has all of the atoms on one side of the box. One can compute the likelihood of this, in the Maxwell–Boltzmann measure. It will be enormously tiny, of order O ( 2 − 3 N ) . {\displaystyle {\mathcal {O}}\left(2^{-3N}\right).} Of all possible boxes in the ensemble, this is a ridiculously small fraction. The only reason that this is an "informal example" is because writing down the transition function T {\displaystyle T} is difficult, and, even if written down, it is hard to perform practical computations with it. Difficulties are compounded if there are interactions between the particles themselves, like a van der Waals interaction or some other interaction suitable for a liquid or a plasma; in such cases, the invariant measure is no longer the Maxwell–Boltzmann distribution. The art of physics is finding reasonable approximations. This system does exhibit one key idea from the classification of measure-preserving dynamical systems: two ensembles, having different temperatures, are inequivalent. The entropy for a given canonical ensemble depends on its temperature; as physical systems, it is "obvious" that when the temperatures differ, so do the systems. This holds in general: systems with different entropy are not isomorphic. == Examples == Unlike the informal example above, the examples below are sufficiently well-defined and tractable that explicit, formal computations can be performed. μ could be the normalized angle measure dθ/2π on the unit circle, and T a rotation. See equidistribution theorem; the Bernoulli scheme; the interval exchange transformation; with the definition of an appropriate measure, a subshift of finite type; the base flow of a random dynamical system; the flow of a Hamiltonian vector field on the tangent bundle of a closed connected smooth manifold is measure-preserving (using the measure induced on the Borel sets by the symplectic volume form) by Liouville's theorem (Hamiltonian); for certain maps and Markov processes, the Krylov–Bogolyubov theorem establishes the existence of a suitable measure to form a measure-preserving dynamical system. == Generalization to groups and monoids == The definition of a measure-preserving dynamical system can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group, in which case we have the action of a group upon the given probability space) of transformations T_s : X → X parametrized by s ∈ Z (or R, or N ∪ {0}, or [0, +∞)), where each transformation T_s satisfies the same requirements as T above. In particular, the transformations obey the rules: T 0 = i d X : X → X {\displaystyle T_{0}=\mathrm {id} _{X}:X\rightarrow X} , the identity function on X; T s ∘ T t = T t + s {\displaystyle T_{s}\circ T_{t}=T_{t+s}} , whenever all the terms are well-defined; T s − 1 = T − s {\displaystyle T_{s}^{-1}=T_{-s}} , whenever all the terms are well-defined. The earlier, simpler case fits into this framework by defining T_s = T^s, the s-fold iterate of T, for s ∈ N.
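A concrete toy instance of such a family is the rotation flow T_s(x) = (x + sω) mod 1 on the circle with normalized Lebesgue measure; it obeys the three rules above and preserves the measure for every s. The following sketch (the rotation speed ω and the sample sizes are arbitrary choices, not from the article) checks these properties numerically.

```python
# Sketch (illustrative choices mine): the rotation flow on the circle,
# T_s(x) = (x + s*omega) mod 1, as a one-parameter group of
# measure-preserving transformations.
import numpy as np

OMEGA = np.sqrt(2.0)                      # arbitrary rotation speed

def T(s, x):
    return (x + s * OMEGA) % 1.0

def circle_close(a, b, tol=1e-9):
    """Compare points on the circle, allowing wrap-around at 0 ~ 1."""
    d = np.abs(a - b) % 1.0
    return np.all(np.minimum(d, 1.0 - d) < tol)

x = np.linspace(0.0, 1.0, 7, endpoint=False)
s, t = 0.3, 1.1
assert circle_close(T(0.0, x), x)                  # T_0 = identity
assert circle_close(T(s, T(t, x)), T(s + t, x))    # T_s o T_t = T_{s+t}
assert circle_close(T(-s, T(s, x)), x)             # T_s^{-1} = T_{-s}

# Measure preservation: uniform samples remain uniform under every T_s.
rng = np.random.default_rng(1)
u = rng.random(200_000)
before, _ = np.histogram(u, bins=10, range=(0.0, 1.0), density=True)
after, _ = np.histogram(T(s, u), bins=10, range=(0.0, 1.0), density=True)
print(np.round(before, 2))      # both close to the flat density 1.0
print(np.round(after, 2))
```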
== Homomorphisms == The concepts of a homomorphism and an isomorphism may be defined. Consider two dynamical systems ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} and ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} . Then a mapping φ : X → Y {\displaystyle \varphi :X\to Y} is a homomorphism of dynamical systems if it satisfies the following three properties: The map φ {\displaystyle \varphi \ } is measurable. For each B ∈ B {\displaystyle B\in {\mathcal {B}}} , one has μ ( φ − 1 B ) = ν ( B ) {\displaystyle \mu (\varphi ^{-1}B)=\nu (B)} . For μ {\displaystyle \mu } -almost all x ∈ X {\displaystyle x\in X} , one has φ ( T x ) = S ( φ x ) {\displaystyle \varphi (Tx)=S(\varphi x)} . The system ( Y , B , ν , S ) {\displaystyle (Y,{\mathcal {B}},\nu ,S)} is then called a factor of ( X , A , μ , T ) {\displaystyle (X,{\mathcal {A}},\mu ,T)} . The map φ {\displaystyle \varphi \;} is an isomorphism of dynamical systems if, in addition, there exists another mapping ψ : Y → X {\displaystyle \psi :Y\to X} that is also a homomorphism, which satisfies for μ {\displaystyle \mu } -almost all x ∈ X {\displaystyle x\in X} , one has x = ψ ( φ x ) {\displaystyle x=\psi (\varphi x)} ; for ν {\displaystyle \nu } -almost all y ∈ Y {\displaystyle y\in Y} , one has y = φ ( ψ y ) {\displaystyle y=\varphi (\psi y)} . Hence, one may form a category of dynamical systems and their homomorphisms. == Generic points == A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure. == Symbolic names and generators == Consider a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , and let Q = {Q1, ..., Qk} be a partition of X into k measurable pair-wise disjoint sets. Given a point x ∈ X, clearly x belongs to only one of the Qi. Similarly, the iterated point T n x {\displaystyle T^{n}x} can belong to only one of the parts as well. The symbolic name of x, with regard to the partition Q, is the sequence of integers {an} such that T n x ∈ Q a n . {\displaystyle T^{n}x\in Q_{a_{n}}.} The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name. == Operations on partitions == Given a partition Q = {Q1, ..., Qk} and a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} , define the T-pullback of Q as T − 1 Q = { T − 1 Q 1 , … , T − 1 Q k } . {\displaystyle T^{-1}Q=\{T^{-1}Q_{1},\ldots ,T^{-1}Q_{k}\}.} Further, given two partitions Q = {Q1, ..., Qk} and R = {R1, ..., Rm}, define their refinement as Q ∨ R = { Q i ∩ R j ∣ i = 1 , … , k , j = 1 , … , m , μ ( Q i ∩ R j ) > 0 } . {\displaystyle Q\vee R=\{Q_{i}\cap R_{j}\mid i=1,\ldots ,k,\ j=1,\ldots ,m,\ \mu (Q_{i}\cap R_{j})>0\}.} With these two constructs, the refinement of an iterated pullback is defined as ⋁ n = 0 N T − n Q = { Q i 0 ∩ T − 1 Q i 1 ∩ ⋯ ∩ T − N Q i N where i ℓ = 1 , … , k , ℓ = 0 , … , N , μ ( Q i 0 ∩ T − 1 Q i 1 ∩ ⋯ ∩ T − N Q i N ) > 0 } {\displaystyle {\begin{aligned}\bigvee _{n=0}^{N}T^{-n}Q&=\{Q_{i_{0}}\cap T^{-1}Q_{i_{1}}\cap \cdots \cap T^{-N}Q_{i_{N}}\\&{}\qquad {\mbox{ where }}i_{\ell }=1,\ldots ,k,\ \ell =0,\ldots ,N,\ \\&{}\qquad \qquad \mu \left(Q_{i_{0}}\cap T^{-1}Q_{i_{1}}\cap \cdots \cap T^{-N}Q_{i_{N}}\right)>0\}\\\end{aligned}}} which plays a crucial role in the construction of the measure-theoretic entropy of a dynamical system.
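For the Bernoulli map T(x) = 2x mod 1 with the partition Q₀ = [0, 1/2), Q₁ = [1/2, 1), the symbolic name of a point is exactly its binary expansion, which is why this partition is generating. The sketch below (the sample point and word length are arbitrary choices, not from the article) computes both sequences and shows that they coincide.

```python
# Sketch (example values mine): symbolic names for the doubling map with the
# partition Q_0 = [0, 1/2), Q_1 = [1/2, 1) coincide with binary expansions,
# so this partition is a generating partition.

def doubling(x):
    return (2.0 * x) % 1.0

def symbolic_name(x, n_symbols=16):
    """Sequence a_0, a_1, ... with T^n(x) in Q_{a_n}."""
    name = []
    for _ in range(n_symbols):
        name.append(0 if x < 0.5 else 1)
        x = doubling(x)
    return name

def binary_digits(x, n_digits=16):
    """First n binary digits of x in [0, 1)."""
    digits = []
    for _ in range(n_digits):
        x *= 2.0
        d = int(x)
        digits.append(d)
        x -= d
    return digits

x0 = 0.3141592653589793
print(symbolic_name(x0))
print(binary_digits(x0))      # identical to the symbolic name
```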
== Measure-theoretic entropy == The entropy of a partition Q {\displaystyle {\mathcal {Q}}} is defined as H ( Q ) = − ∑ Q ∈ Q μ ( Q ) log ⁡ μ ( Q ) . {\displaystyle H({\mathcal {Q}})=-\sum _{Q\in {\mathcal {Q}}}\mu (Q)\log \mu (Q).} The measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} with respect to a partition Q = {Q1, ..., Qk} is then defined as h μ ( T , Q ) = lim N → ∞ 1 N H ( ⋁ n = 0 N T − n Q ) . {\displaystyle h_{\mu }(T,{\mathcal {Q}})=\lim _{N\rightarrow \infty }{\frac {1}{N}}H\left(\bigvee _{n=0}^{N}T^{-n}{\mathcal {Q}}\right).} Finally, the Kolmogorov–Sinai metric or measure-theoretic entropy of a dynamical system ( X , B , T , μ ) {\displaystyle (X,{\mathcal {B}},T,\mu )} is defined as h μ ( T ) = sup Q h μ ( T , Q ) . {\displaystyle h_{\mu }(T)=\sup _{\mathcal {Q}}h_{\mu }(T,{\mathcal {Q}}).} where the supremum is taken over all finite measurable partitions. A theorem of Yakov Sinai in 1959 shows that the supremum is actually obtained on partitions that are generators. Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion. That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1]. Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2^n x. If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined. If T {\displaystyle T} is ergodic, piecewise expanding, and Markov on X ⊂ R {\displaystyle X\subset \mathbb {R} } , and μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, then we have the Rokhlin formula: h μ ( T ) = ∫ ln ⁡ | d T / d x | μ ( d x ) {\displaystyle h_{\mu }(T)=\int \ln |dT/dx|\mu (dx)} This allows the calculation of the entropy of many interval maps, such as the logistic map. Ergodic means that T − 1 ( A ) = A {\displaystyle T^{-1}(A)=A} implies A {\displaystyle A} has full measure or zero measure. Piecewise expanding and Markov means that there is a partition of X {\displaystyle X} into finitely many open intervals, such that for some ϵ > 0 {\displaystyle \epsilon >0} , | T ′ | ≥ 1 + ϵ {\displaystyle |T'|\geq 1+\epsilon } on each open interval. Markov means that for each pair I i , I j {\displaystyle I_{i},I_{j}} of those open intervals, either T ( I i ) ∩ I j = ∅ {\displaystyle T(I_{i})\cap I_{j}=\emptyset } or T ( I i ) ∩ I j = I j {\displaystyle T(I_{i})\cap I_{j}=I_{j}} .
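As a numerical illustration of the Rokhlin formula (an editorial example, not from the article): for the logistic map T(x) = 4x(1−x), the invariant density is the arcsine density 1/(π√(x(1−x))), and both the space average of ln|T′| against this density and a Birkhoff time average along a typical orbit should come out close to ln 2 ≈ 0.693.

```python
# Sketch (example and sampling choices mine): the Rokhlin formula
# h_mu(T) = integral of ln|T'| dmu for the logistic map T(x) = 4x(1-x),
# whose invariant density is 1/(pi*sqrt(x*(1-x))). Both estimates below
# should be close to ln 2 ~= 0.6931.
import numpy as np

def T(x):
    return 4.0 * x * (1.0 - x)

def dT(x):
    return 4.0 - 8.0 * x

# (1) Space average: sample the invariant (arcsine) density via
#     x = sin^2(pi*u/2) with u uniform on (0, 1), then average ln|T'(x)|.
rng = np.random.default_rng(0)
u = rng.random(2_000_000)
x = np.sin(0.5 * np.pi * u) ** 2
space_avg = np.mean(np.log(np.abs(dT(x))))

# (2) Time average along an orbit (Birkhoff ergodic theorem).
xn, total, n_iter = 0.1234, 0.0, 200_000
for _ in range(n_iter):
    total += np.log(abs(dT(xn)))
    xn = T(xn)
time_avg = total / n_iter

print("space average:", space_avg)
print("time average :", time_avg)
print("ln 2         :", np.log(2.0))
```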
== Classification and anti-classification theorems == One of the primary activities in the study of measure-preserving systems is their classification according to their properties. That is, let ( X , B , μ ) {\displaystyle (X,{\mathcal {B}},\mu )} be a measure space, and let U {\displaystyle U} be the set of all measure-preserving systems ( X , B , μ , T ) {\displaystyle (X,{\mathcal {B}},\mu ,T)} . An isomorphism S ∼ T {\displaystyle S\sim T} of two transformations S , T {\displaystyle S,T} defines an equivalence relation R ⊂ U × U . {\displaystyle {\mathcal {R}}\subset U\times U.} The goal is then to describe the relation R {\displaystyle {\mathcal {R}}} . A number of classification theorems have been obtained; but quite interestingly, a number of anti-classification theorems have been found as well. The anti-classification theorems state that there are uncountably many isomorphism classes, and that a countable amount of information is not sufficient to classify isomorphisms. The first anti-classification theorem, due to Hjorth, states that if U {\displaystyle U} is endowed with the weak topology, then the set R {\displaystyle {\mathcal {R}}} is not a Borel set. There are a variety of other anti-classification results. For example, replacing isomorphism with Kakutani equivalence, it can be shown that there are uncountably many non-Kakutani-equivalent ergodic measure-preserving transformations of each entropy type. These stand in contrast to the classification theorems. These include: Ergodic measure-preserving transformations with a pure point spectrum have been classified. Bernoulli shifts are classified by their metric entropy. See Ornstein theory for more. == See also == Krylov–Bogolyubov theorem on the existence of invariant measures Poincaré recurrence theorem – Certain dynamical systems will eventually return to (or approximate) their initial state == References == == Further reading == Michael S. Keane, "Ergodic theory and subshifts of finite type", (1991), appearing as Chapter 2 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, Eds. Oxford University Press, Oxford (1991). ISBN 0-19-853390-X (Provides expository introduction, with exercises, and extensive references.) Lai-Sang Young, "Entropy in Dynamical Systems" (pdf; ps), appearing as Chapter 16 in Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds. Princeton University Press, Princeton, NJ (2003). ISBN 0-691-11338-6 T. Schürmann and I. Hoffmann, The entropy of strange billiards inside n-simplexes. J. Phys. A 28(17), page 5033, 1995. PDF-Document (gives a more involved example of measure-preserving dynamical system.)
Wikipedia/Measure-preserving_transformation
In mathematics, the normal form of a dynamical system is a simplified form that can be useful in determining the system's behavior. Normal forms are often used for determining local bifurcations in a system. All systems exhibiting a certain type of bifurcation are locally (around the equilibrium) topologically equivalent to the normal form of the bifurcation. For example, the normal form of a saddle-node bifurcation is d x d t = μ + x 2 {\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=\mu +x^{2}} where μ {\displaystyle \mu } is the bifurcation parameter. The transcritical bifurcation d x d t = r ln ⁡ x + x − 1 {\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=r\ln x+x-1} near x = 1 {\displaystyle x=1} can be converted to the normal form d u d t = R u − u 2 + O ( u 3 ) {\displaystyle {\frac {\mathrm {d} u}{\mathrm {d} t}}=Ru-u^{2}+O(u^{3})} with the transformation u = r 2 ( x − 1 ) , R = r + 1 {\displaystyle u={\frac {r}{2}}(x-1),R=r+1} . See also canonical form for use of the terms canonical form, normal form, or standard form more generally in mathematics. == References == == Further reading == Guckenheimer, John; Holmes, Philip (1983), Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer, Section 3.3, ISBN 0-387-90819-6 Kuznetsov, Yuri A. (1998), Elements of Applied Bifurcation Theory (Second ed.), Springer, Section 2.4, ISBN 0-387-98382-1 Murdock, James (2006). "Normal forms". Scholarpedia. 1 (10): 1902. Bibcode:2006SchpJ...1.1902M. doi:10.4249/scholarpedia.1902. Murdock, James (2003). Normal Forms and Unfoldings for Local Dynamical Systems. Springer. ISBN 978-0-387-21785-7.
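The transformation quoted for the transcritical example can be verified symbolically; the following sketch (an editorial check using SymPy, not part of the article) substitutes x = 1 + 2u/r into dx/dt = r ln x + x − 1 and expands to second order, recovering du/dt = Ru − u² with R = r + 1.

```python
# Sketch (verification mine): check that u = (r/2)(x - 1), R = r + 1 brings
# dx/dt = r*ln(x) + x - 1 into the normal form du/dt = R*u - u**2 + O(u**3).
import sympy as sp

u, r = sp.symbols('u r', positive=True)

x = 1 + 2 * u / r                     # invert u = (r/2)(x - 1)
dxdt = r * sp.log(x) + x - 1          # the original vector field
dudt = (r / 2) * dxdt                 # chain rule: du/dt = (r/2) dx/dt

expansion = sp.series(dudt, u, 0, 3).removeO()
print(sp.simplify(expansion - ((r + 1) * u - u**2)))   # prints 0
```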
Wikipedia/Normal_form_(dynamical_systems)
In mathematics, a multivalued function, multiple-valued function, many-valued function, or multifunction, is a function that has two or more values in its range for at least one point in its domain. It is a set-valued function with additional properties depending on context; some authors do not distinguish between set-valued functions and multifunctions, but English Wikipedia currently does, having a separate article for each. A multivalued function of sets f : X → Y is a subset Γ f ⊆ X × Y . {\displaystyle \Gamma _{f}\ \subseteq \ X\times Y.} Write f(x) for the set of those y ∈ Y with (x,y) ∈ Γf. If f is an ordinary function, it is a multivalued function by taking its graph Γ f = { ( x , f ( x ) ) : x ∈ X } . {\displaystyle \Gamma _{f}\ =\ \{(x,f(x))\ :\ x\in X\}.} They are called single-valued functions to distinguish them. == Motivation == The term multivalued function originated in complex analysis, from analytic continuation. It often occurs that one knows the value of a complex analytic function f ( z ) {\displaystyle f(z)} in some neighbourhood of a point z = a {\displaystyle z=a} . This is the case for functions defined by the implicit function theorem or by a Taylor series around z = a {\displaystyle z=a} . In such a situation, one may extend the domain of the single-valued function f ( z ) {\displaystyle f(z)} along curves in the complex plane starting at a {\displaystyle a} . In doing so, one finds that the value of the extended function at a point z = b {\displaystyle z=b} depends on the chosen curve from a {\displaystyle a} to b {\displaystyle b} ; since none of the new values is more natural than the others, all of them are incorporated into a multivalued function. For example, let f ( z ) = z {\displaystyle f(z)={\sqrt {z}}\,} be the usual square root function on positive real numbers. One may extend its domain to a neighbourhood of z = 1 {\displaystyle z=1} in the complex plane, and then further along curves starting at z = 1 {\displaystyle z=1} , so that the values along a given curve vary continuously from 1 = 1 {\displaystyle {\sqrt {1}}=1} . Extending to negative real numbers, one gets two opposite values for the square root—for example ±i for −1—depending on whether the domain has been extended through the upper or the lower half of the complex plane. This phenomenon is very frequent, occurring for nth roots, logarithms, and inverse trigonometric functions. To define a single-valued function from a complex multivalued function, one may distinguish one of the multiple values as the principal value, producing a single-valued function on the whole plane which is discontinuous along certain boundary curves. Alternatively, dealing with the multivalued function allows having something that is everywhere continuous, at the cost of possible value changes when one follows a closed path (monodromy). These problems are resolved in the theory of Riemann surfaces: to consider a multivalued function f ( z ) {\displaystyle f(z)} as an ordinary function without discarding any values, one multiplies the domain into a many-layered covering space, a manifold which is the Riemann surface associated to f ( z ) {\displaystyle f(z)} . == Inverses of functions == If f : X → Y is an ordinary function, then its inverse is the multivalued function Γ f − 1 ⊆ Y × X {\displaystyle \Gamma _{f^{-1}}\ \subseteq \ Y\times X} defined as Γf, viewed as a subset of X × Y. 
When f is a differentiable function between manifolds, the inverse function theorem gives conditions for this to be single-valued locally in X. For example, the complex logarithm log(z) is the multivalued inverse of the exponential function e^z : C → C×, with graph Γ log ⁡ ( z ) = { ( z , w ) : w = log ⁡ ( z ) } ⊆ C × × C . {\displaystyle \Gamma _{\log(z)}\ =\ \{(z,w)\ :\ w=\log(z)\}\ \subseteq \ \mathbf {C} ^{\times }\times \mathbf {C} .} It is not single-valued: given a single w with w = log(z), we have log ⁡ ( z ) = w + 2 π i Z . {\displaystyle \log(z)\ =\ w\ +\ 2\pi i\mathbf {Z} .} Given any holomorphic function on an open subset of the complex plane C, its analytic continuation is always a multivalued function. == Concrete examples == Every real number greater than zero has two real square roots, so that square root may be considered a multivalued function. For example, we may write 4 = ± 2 = { 2 , − 2 } {\displaystyle {\sqrt {4}}=\pm 2=\{2,-2\}} ; although zero has only one square root, 0 = { 0 } {\displaystyle {\sqrt {0}}=\{0\}} . Note that x {\displaystyle {\sqrt {x}}} usually denotes only the principal square root of x {\displaystyle x} . Each nonzero complex number has two square roots, three cube roots, and in general n nth roots. The only nth root of 0 is 0. The complex logarithm function is multiple-valued. The values assumed by log ⁡ ( a + b i ) {\displaystyle \log(a+bi)} for real numbers a {\displaystyle a} and b {\displaystyle b} are log ⁡ a 2 + b 2 + i arg ⁡ ( a + b i ) + 2 π n i {\displaystyle \log {\sqrt {a^{2}+b^{2}}}+i\arg(a+bi)+2\pi ni} for all integers n {\displaystyle n} . Inverse trigonometric functions are multiple-valued because trigonometric functions are periodic. We have tan ⁡ ( π 4 ) = tan ⁡ ( 5 π 4 ) = tan ⁡ ( − 3 π 4 ) = tan ⁡ ( ( 4 n + 1 ) π 4 ) = ⋯ = 1. {\displaystyle \tan \left({\tfrac {\pi }{4}}\right)=\tan \left({\tfrac {5\pi }{4}}\right)=\tan \left({\tfrac {-3\pi }{4}}\right)=\tan \left({\tfrac {(4n+1)\pi }{4}}\right)=\cdots =1.} As a consequence, arctan(1) is intuitively related to several values: π/4, 5π/4, −3π/4, and so on. We can treat arctan as a single-valued function by restricting the domain of tan x to −π/2 < x < π/2 – a domain over which tan x is monotonically increasing. Thus, the range of arctan(x) becomes −π/2 < y < π/2. These values from a restricted domain are called principal values. The antiderivative can be considered as a multivalued function. The antiderivative of a function is the set of functions whose derivative is that function. The constant of integration follows from the fact that the derivative of a constant function is 0. Inverse hyperbolic functions over the complex domain are multiple-valued because hyperbolic functions are periodic along the imaginary axis. Over the reals, they are single-valued, except for arcosh and arsech. These are all examples of multivalued functions that come about from non-injective functions. Since the original functions do not preserve all the information of their inputs, they are not reversible. Often, the restriction of a multivalued function is a partial inverse of the original function.
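These multiple values are easy to exhibit numerically; the following sketch (the sample point z is an arbitrary choice, not from the article) lists three branches of log z, each of which exponentiates back to z, and the three cube roots of z.

```python
# Sketch (example values mine): cmath.log returns only the principal value,
# but the full multivalued logarithm is log(z) + 2*pi*i*n for every integer n,
# and every nonzero z has exactly n distinct nth roots.
import cmath

z = -1 + 1j

# Principal value versus two other branches of log(z).
principal = cmath.log(z)
for n in (-1, 0, 1):
    w = principal + 2j * cmath.pi * n
    print(w, "-> exp(w) =", cmath.exp(w))     # each exponentiates back to z

# The three cube roots of z: |z|^(1/3) * exp(i*(theta + 2*pi*k)/3), k = 0, 1, 2.
r, theta = abs(z), cmath.phase(z)
for k in range(3):
    w = r ** (1 / 3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3)
    print(w, "-> w**3 =", w ** 3)             # each cube returns (approx.) z
```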
== Branch points == Multivalued functions of a complex variable have branch points. For example, for the nth root and logarithm functions, 0 is a branch point; for the arctangent function, the imaginary units i and −i are branch points. Using the branch points, these functions may be redefined to be single-valued functions, by restricting the range. A suitable interval may be found through use of a branch cut, a kind of curve that connects pairs of branch points, thus reducing the multilayered Riemann surface of the function to a single layer. As in the case with real functions, the restricted range may be called the principal branch of the function. == Applications == In physics, multivalued functions play an increasingly important role. They form the mathematical basis for Dirac's magnetic monopoles, for the theory of defects in crystals and the resulting plasticity of materials, for vortices in superfluids and superconductors, and for phase transitions in these systems, for instance melting and quark confinement. They are the origin of gauge field structures in many branches of physics. == See also == Relation (mathematics) Function (mathematics) Binary relation Set-valued function == Further reading == H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online) H. Kleinert, Gauge Fields in Condensed Matter, Vol. I: Superflow and Vortex Lines, 1–742, Vol. II: Stresses and Defects, 743–1456, World Scientific, Singapore, 1989 (also available online: Vol. I and Vol. II) == References ==
Wikipedia/Single-valued_function
Coherence therapy is a system of psychotherapy based in the theory that symptoms of mood, thought and behavior are produced coherently according to the person's current mental models of reality, most of which are implicit and unconscious. It was created by Bruce Ecker and Laurel Hulley, who first described it in their 1996 book Depth Oriented Brief Therapy. == History == Ecker and Hulley began developing coherence therapy in the late 1980s and early 1990s as they investigated, in their clinical practice of psychotherapy, why certain sessions seemed to produce deep transformations of emotional meaning and unambiguous symptom cessation, while most sessions did not. Studying many such sessions for several years, they concluded that in these sessions, the therapist had desisted from doing anything to oppose or counteract the symptom, and the client had a powerful, felt experience of some previously unrecognized "emotional truth" that was making the symptom necessary to have. Ecker and Hulley began developing a collection of experiential methods to intentionally facilitate this process, adopting some relevant existing clinical techniques. They began teaching the system in 1993 and first published it in their 1996 book Depth Oriented Brief Therapy. In 2005, Ecker and Hulley began calling the system coherence therapy in order for the name to more clearly reflect the central principle of the approach. In 2012, they published with coauthor Robin Ticic the book Unlocking the Emotional Brain, which described how their system's central principle could also be demonstrated in other systems of psychotherapy. == General description == The basis of coherence therapy is the principle of symptom coherence. This is the view that any response of the brain–mind–body system is an expression of coherent personal constructs (or schemas), which are nonverbal, emotional, perceptual and somatic knowings, not verbal-cognitive propositions. A therapy client's presenting symptoms are understood as an activation and enactment of specific constructs. The principle of symptom coherence can be found in varying degrees, explicitly or implicitly, in the writings of a number of historical psychotherapy theorists, including Sigmund Freud (1923), Harry Stack Sullivan (1948), Carl Jung (1964), R. D. Laing (1967), Gregory Bateson (1972), Virginia Satir (1972), Paul Watzlawick (1974), Eugene Gendlin (1982), Vittorio Guidano & Giovanni Liotti (1983), Les Greenberg (1993), Bessel van der Kolk (1994), Robert Kegan & Lisa Lahey (2001), Sue Johnson (2004), and others. The principle of symptom coherence maintains that an individual's seemingly irrational, out-of-control symptoms are (with some exceptions) sensible, cogent, orderly expressions of the person's existing constructions of self and world, rather than a disorder or pathology. Even a person's psychological resistance to change is seen as a result of the coherence of the person's mental constructions. Thus, coherence therapy, like some other postmodern therapies, approaches a person's resistance to change as an ally in psychotherapy and not an enemy. Coherence therapy is considered a type of psychological constructivism. It differs from some other forms of constructivism in that the principle of symptom coherence is fully explicit and operationalized, guiding and informing the entire methodology. The process of coherence therapy is experiential rather than analytic, and in this regard is similar to Gestalt therapy, Focusing or Hakomi. 
The aim is for the client to come into direct, emotional experience of the unconscious personal constructs (akin to complexes or ego-states) which produce an unwanted symptom and to undergo a natural process of revising or dissolving these constructs, thereby eliminating the symptom. Practitioners claim that the entire process often requires a dozen sessions or less, although it can take longer when the meanings and emotions underlying the symptom are particularly complex or intense. == Symptom coherence == Symptom coherence is defined by Ecker and Hulley as follows: A person produces a particular symptom because, despite the suffering it entails, the symptom is compellingly necessary to have, according to at least one unconscious, nonverbal, emotionally potent schema or construction of reality. Each symptom-requiring construction is cogent—a sensible, meaningful, well-knit, well-defined schema that was formed adaptively in response to earlier experiences and is still carried and applied in the present. The person ceases producing the symptom as soon as there no longer exists any construction of reality in which the symptom is necessary to have. There are several forms of symptom coherence. Some symptoms are necessary because they serve a crucial function (such as depression that protects against feeling and expressing anger), while others have no function but are necessary in the sense of being an inevitable effect, or by-product, caused by some other adaptive, coherent but unconscious response (such as depression resulting from isolation, which itself is a strategy for feeling safe). Both functional and functionless symptoms are coherent, according to the client's own material. In other words, the theory states that symptoms are produced by how the individual strives, without conscious awareness, to carry out self-protecting or self-affirming purposes formed in the course of living. This model of symptom production fits into the broader category of psychological constructivism, which views the person as having profound, if unrecognized, agency in shaping experience and behavior. Symptom coherence does not apply to those symptoms that are not directly or indirectly caused by implicit schemas or emotional learnings—for example, hypothyroidism-induced depression, autism, and biochemical addiction. == Hierarchical organization of constructs == As a tool for identifying all of a person's relevant schemas or constructions of reality, Ecker and Hulley defined several logically hierarchical domains or orders of construction (inspired by Gregory Bateson): The first order consists of a person's overt responses: thoughts, feelings, and behaviors. The second order consists of the person's specific meaning of the concrete situation to which they are responding. The third order consists of the person's broad purposes and strategies for construing that specific meaning (teleology). The fourth order consists of the person's general meaning of the nature of self, others, and the world (ontology and primal world beliefs). The fifth order consists of the person's broad purposes and strategies for construing that general meaning. Higher orders (beyond the fifth order) are rarely involved in psychotherapy. A person's first-order symptoms of thought, mood, or behavior follow from a second-order construal of the situation, and that second-order construal is powerfully influenced by the person's third- and fourth-order constructions. 
Hence the third and higher orders constitute what Ecker and Hulley call "the emotional truth of the symptom", which are the meanings and purposes that are intended to be discovered, integrated, and transformed in therapy. == Evidence from neuroscience == In a series of three articles published in the Journal of Constructivist Psychology from 2007 to 2009, Bruce Ecker and Brian Toomey presented evidence that coherence therapy may be one of the systems of psychotherapy which, according to current neuroscience, makes fullest use of the brain's built-in capacities for change. Ecker and Toomey argued that the mechanism of change in coherence therapy correlates with the recently discovered neural process of memory reconsolidation, a process that can "unwire" and delete longstanding emotional conditioning held in implicit memory. They claim that coherence therapy achieves implicit memory deletion and also claim that it aligns with the growing body of evidence supporting memory reconsolidation. Ecker and colleagues claim that: (a) their procedural steps match those identified by neuroscientists for reconsolidation, (b) their procedural steps result in effortless cessation of symptoms, and (c) the emotional experience of the retrieved, symptom-generating emotional schemas can no longer be evoked by cues that formerly evoked it strongly. The process of removing the neural basis of the symptom in coherence therapy (and in similar postmodern therapies) is different from the counteractive strategy of some behavioral therapies. In such behavioral therapies, new preferred behavioral patterns are typically practiced to compete against and hopefully override the unwanted ones; this counteractive process, like the "extinction" of conditioned responses in animals, is known to be inherently unstable and prone to relapse, because the neural circuit of the unwanted pattern continues to exist even when the unwanted pattern is in abeyance. Through reconsolidation, the unwanted neural circuits are "unwired" and cannot relapse. == See also == Client-centered therapy Cognitive therapy § Cognitive model Decisional balance sheet § ABC model Emotionally focused therapy Immunity to change Method of levels Post-rationalist cognitive therapy Schema therapy == Notes == == References == === Psychotherapy literature === === Neuroscience literature === == External links == CoherenceTherapy.org — Coherence Therapy (Depth Oriented Brief Therapy)
Wikipedia/Coherence_therapy
Piaget's theory of cognitive development, or his genetic epistemology, is a comprehensive theory about the nature and development of human intelligence. It was originated by the Swiss developmental psychologist Jean Piaget (1896–1980). The theory deals with the nature of knowledge itself and how humans gradually come to acquire, construct, and use it. Piaget's theory is mainly known as a developmental stage theory. In 1919, while working at the Alfred Binet Laboratory School in Paris, Piaget "was intrigued by the fact that children of different ages made different kinds of mistakes while solving problems". His experience and observations at the Alfred Binet Laboratory were the beginnings of his theory of cognitive development. He believed that children of different ages made different mistakes because of the "quality rather than quantity" of their intelligence. Piaget proposed four stages to describe the development process of children: sensorimotor stage, pre-operational stage, concrete operational stage, and formal operational stage. Each stage describes a specific age group. In each stage, he described how children develop their cognitive skills. For example, he believed that children experience the world through actions, representing things with words, thinking logically, and using reasoning. To Piaget, cognitive development was a progressive reorganisation of mental processes resulting from biological maturation and environmental experience. He believed that children construct an understanding of the world around them, experience discrepancies between what they already know and what they discover in their environment, then adjust their ideas accordingly. Moreover, Piaget claimed that cognitive development is at the centre of the human organism, and language is contingent on knowledge and understanding acquired through cognitive development. Piaget's earlier work received the greatest attention. Child-centred classrooms and "open education" are direct applications of Piaget's views. Despite its huge success, Piaget's theory has some limitations that Piaget recognised himself: for example, the theory supports sharp stages rather than continuous development (horizontal and vertical décalage). == Nature of intelligence: operative and figurative == Piaget argued that reality is a construction. Reality is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manners of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations. For example, there might be changes in shape or form (for instance, liquids are reshaped as they are transferred from one vessel to another, and similarly humans change in their characteristics as they grow older), in size (a toddler does not walk and run without falling, but after 7 yrs of age, the child's sensorimotor anatomy is well developed and now acquires skill faster), or in placement or location in space and time (e.g., various objects or persons might be found at one place at one time and at a different place at another time). Thus, Piaget argued, if human intelligence is to be adaptive, it must have functions to represent both the transformational and the static aspects of reality. 
He proposed that operative intelligence is responsible for the representation and manipulation of the dynamic or transformational aspects of reality, and that figurative intelligence is responsible for the representation of the static aspects of reality. Operative intelligence is the active aspect of intelligence. It involves all actions, overt or covert, undertaken in order to follow, recover, or anticipate the transformations of the objects or persons of interest. Figurative intelligence is the more or less static aspect of intelligence, involving all means of representation used to retain in mind the states (i.e., successive forms, shapes, or locations) that intervene between transformations. That is, it involves perception, imitation, mental imagery, drawing, and language. Therefore, the figurative aspects of intelligence derive their meaning from the operative aspects of intelligence, because states cannot exist independently of the transformations that interconnect them. Piaget stated that the figurative or the representational aspects of intelligence are subservient to its operative and dynamic aspects, and therefore, that understanding essentially derives from the operative aspect of intelligence. At any time, operative intelligence frames how the world is understood and it changes if understanding is not successful. Piaget stated that this process of understanding and change involves two basic functions: assimilation and accommodation. === Assimilation and accommodation === Through his study of the field of education, Piaget focused on two processes, which he named assimilation and accommodation. To Piaget, assimilation meant integrating external elements into structures of lives or environments, or those we could have through experience. Assimilation is how humans perceive and adapt to new information. It is the process of fitting new information into pre-existing cognitive schemas. It is also the process in which new experiences are reinterpreted to fit into, or assimilate with, old ideas, and in which new facts are analyzed accordingly. It occurs when humans are faced with new or unfamiliar information and refer to previously learned information in order to make sense of it. In contrast, accommodation is the process of taking new information in one's environment and altering pre-existing schemas in order to fit in the new information. This happens when the existing schema (knowledge) does not work, and needs to be changed to deal with a new object or situation. Accommodation is imperative because it is how people will continue to interpret new concepts, schemas, frameworks, and more. Various teaching methods have been developed based on Piaget's insights that call for the use of questioning and inquiry-based education to help learners more directly face the sorts of contradictions to their pre-existing schemas that are conducive to learning. Piaget believed that the human brain has been programmed through evolution to seek equilibrium, which he believed ultimately shapes cognitive structures through the internal and external processes of assimilation and accommodation. Piaget's understanding was that assimilation and accommodation cannot exist one without the other. They are two sides of a coin. To assimilate an object into an existing mental schema, one first needs to take into account or accommodate to the particularities of this object to a certain extent. For instance, to recognize (assimilate) an apple as an apple, one must first focus (accommodate) on the contour of this object. 
To do this, one needs to roughly recognize the size of the object. Development increases the balance, or equilibration, between these two functions. When in balance with each other, assimilation and accommodation generate mental schemas of the operative intelligence. When one function dominates over the other, they generate representations which belong to figurative intelligence. === Cognitive equilibration === Piaget agreed with most other developmental psychologists in that there are three very important factors that contribute to development: maturation, experience, and the social environment. But where his theory differs involves his addition of a fourth factor, equilibration, which "refers to the organism's attempt to keep its cognitive schemes in balance". See also Piaget's own account, and Boom's detailed account. Equilibration is the motivational element that guides cognitive development. As humans, we have a biological need to make sense of the things we encounter in every aspect of our world in order to muster a greater understanding of it, and therefore, to flourish in it. This is where the concept of equilibration comes into play. If a child is confronted with information that does not fit into his or her previously held schemes, disequilibrium is said to occur. This, as one would imagine, is unsatisfactory to the child, so he or she will try to fix it. The incongruence will be fixed in one of three ways. The child will either ignore the newly discovered information, assimilate the information into a preexisting scheme, or accommodate the information by modifying an existing scheme. Using any of these methods will return the child to a state of equilibrium; however, depending on the information being presented to the child, that state of equilibrium is not likely to be permanent. For example, let's say Dave, a three-year-old boy who has grown up on a farm and is accustomed to seeing horses regularly, has been brought to the zoo by his parents and sees an elephant for the first time. Immediately he shouts "look mommy, Horsey!" Because Dave does not have a scheme for elephants, he interprets the elephant as being a horse due to its large size, color, tail, and long face. He believes the elephant is a horse until his mother corrects him. The new information Dave has received has put him in a state of disequilibrium. He now has to do one of three things. He can either: (1) turn his head, move towards another section of animals, and ignore this newly presented information; (2) distort the defining characteristics of an elephant so that he can assimilate it into his "Horsey" scheme; or (3) he can modify his preexisting "Animal" schema to accommodate this new information regarding elephants by slightly altering his knowledge of animals as he knows them. With age comes entry into a higher stage of development. With that being said, previously held schemes (and the children that hold them) are more than likely to be confronted with discrepant information the older they get. Silverman and Geiringer propose that one would be more successful in attempting to change a child's mode of thought by exposing that child to concepts that reflect a higher rather than a lower stage of development. Furthermore, children are better influenced by modeled performances that are one stage above their developmental level, as opposed to modeled performances that are either lower or two or more stages above their level. 
== Four stages of development == In his theory of cognitive development, Jean Piaget proposed that humans progress through four developmental stages: the sensorimotor stage, preoperational stage, concrete operational stage, and formal operational stage. === Sensorimotor stage === The first of these, the sensorimotor stage "extends from birth to the acquisition of language". In this stage, infants progressively construct knowledge and understanding of the world by coordinating experiences (such as vision and hearing) from physical interactions with objects (such as grasping, sucking, and stepping). Infants gain knowledge of the world from the physical actions they perform within it. They progress from reflexive, instinctual action at birth to the beginning of symbolic thought toward the end of the stage. Children learn that they are separate from the environment. They can think about aspects of the environment, even though these may be outside the reach of the child's senses. In this stage, according to Piaget, the development of object permanence is one of the most important accomplishments. Object permanence is a child's understanding that an object continues to exist even though they cannot see or hear it. Peek-a-boo is a game in which children who have yet to fully develop object permanence respond to sudden hiding and revealing of a face. By the end of the sensorimotor period, children develop a permanent sense of self and object and will quickly lose interest in Peek-a-boo. Piaget divided the sensorimotor stage into six sub-stages. === Preoperational stage === By observing sequences of play, Piaget was able to demonstrate the second stage of his theory, the pre-operational stage. He said that this stage starts towards the end of the second year. It starts when the child begins to learn to speak and lasts up until the age of seven. During the pre-operational stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children's increase in playing and pretending takes place in this stage. However, the child still has trouble seeing things from different points of view. The children's play is mainly categorized by symbolic play and manipulating symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. Their use of symbols exemplifies the idea of play in the absence of the actual objects involved. The pre-operational stage is sparse and logically inadequate in regard to mental operations. The child is able to form stable concepts as well as magical beliefs (magical thinking). The child, however, is still not able to perform operations, which are tasks that the child can do mentally, rather than physically. Thinking in this stage is still egocentric, meaning the child has difficulty seeing the viewpoint of others. The pre-operational stage is split into two substages: the symbolic function substage, and the intuitive thought substage. The symbolic function substage is when children are able to understand, represent, remember, and picture objects in their mind without having the object in front of them. The intuitive thought substage is when children tend to propose the questions of "why?" and "how come?" This stage is when children want to understand everything. ==== Symbolic function substage ==== At about two to four years of age, children cannot yet manipulate and transform information in a logical way. 
However, they now can think in images and symbols. Other examples of mental abilities are language and pretend play. Symbolic play is when children develop imaginary friends or role-play with friends. Children's play becomes more social and they assign roles to each other. Some examples of symbolic play include playing house, or having a tea party. The type of symbolic play in which children engage is connected with their level of creativity and ability to connect with others. Additionally, the quality of their symbolic play can have consequences on their later development. For example, young children whose symbolic play is of a violent nature tend to exhibit less prosocial behavior and are more likely to display antisocial tendencies in later years. In this stage, there are still limitations, such as egocentrism and precausal thinking. Egocentrism occurs when a child is unable to distinguish between their own perspective and that of another person. Children tend to stick to their own viewpoint, rather than consider the view of others. Indeed, they are not even aware that such a concept as "different viewpoints" exists. Egocentrism can be seen in an experiment performed by Piaget and Swiss developmental psychologist Bärbel Inhelder, known as the three mountain problem. In this experiment, three views of a mountain are shown to the child, who is asked what a traveling doll would see at the various angles. The child will consistently describe what they can see from the position from which they are seated, regardless of the angle from which they are asked to take the doll's perspective. Egocentrism would also cause a child to believe, "I like The Lion Guard, so the high school student next door must like The Lion Guard, too." Similar to preoperational children's egocentric thinking is their structuring of cause-and-effect relationships. Piaget coined the term "precausal thinking" to describe the way in which preoperational children use their own existing ideas or views, like in egocentrism, to explain cause-and-effect relationships. Three main concepts of causality as displayed by children in the preoperational stage include: animism, artificialism, and transductive reasoning. Animism is the belief that inanimate objects are capable of actions and have lifelike qualities. An example could be a child believing that the sidewalk was mad and made them fall down, or that the stars twinkle in the sky because they are happy. Artificialism refers to the belief that environmental characteristics can be attributed to human actions or interventions. For example, a child might say that it is windy outside because someone is blowing very hard, or the clouds are white because someone painted them that color. Finally, precausal thinking is characterized by transductive reasoning. Transductive reasoning is when a child fails to understand the true relationships between cause and effect. Unlike deductive or inductive reasoning (general to specific, or specific to general), transductive reasoning refers to when a child reasons from specific to specific, drawing a relationship between two separate events that are otherwise unrelated. For example, if a child hears a dog bark and then a balloon pop, the child would conclude that because the dog barked, the balloon popped. ==== Intuitive thought substage ==== A main feature of the pre-operational stage of development is primitive reasoning. Between the ages of four and seven, reasoning changes from symbolic thought to intuitive thought. 
This stage is "marked by greater dependence on intuitive thinking rather than just perception." Children begin to have more automatic thoughts that don't require evidence. During this stage there is a heightened sense of curiosity and need to understand how and why things work. Piaget named this substage "intuitive thought" because they are starting to develop more logical thought but cannot explain their reasoning. Thought during this stage is still immature and cognitive errors occur. Children in this stage depend on their own subjective perception of the object or event. This stage is characterized by centration, conservation, irreversibility, class inclusion, and transitive inference. Centration is the act of focusing all attention on one characteristic or dimension of a situation, whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once familiarized with Piaget's most famous experimental task. In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid. When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child was unable to comprehend that the properties of the substances continued to remain the same (conservation). Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events was reversed and the water from the tall beaker was poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks. Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or animals, but struggled when trying to classify them as both, simultaneously. Similar to this is concept relating to intuitive thought, known as "transitive inference". 
Transitive inference is using previous knowledge to determine the missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A" is greater than "B" and "B" is greater than "C". This child may have difficulty understanding that "A" is also greater than "C". === Concrete operational stage === The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (middle childhood and preadolescence) years, and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult-like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking is not yet developed in the child, and children can only solve problems that apply to concrete events or objects. At this stage, the child undergoes a transition and learns rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning. Inductive reasoning involves drawing inferences from observations in order to make a generalization. In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C". However, when asked "is A more than C?", the child might not be able to logically figure the question out mentally. Two other important processes in the concrete operational stage are logic and the elimination of egocentrism. Egocentrism is the inability to consider or understand a perspective other than one's own. It is the phase in which the child's thought and morality are completely self-focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it's under the box even though the child knows it is in the drawer. (See also False-belief task.) Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks. Understanding and knowing how to use full common sense have not yet completely developed. Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. This includes mental reversibility. An example of this is being able to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available, as well as apply all these processes to hypothetical situations. The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem solving ability. 
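As an illustrative aside (a minimal sketch, not drawn from Piaget's own materials; the data and variable names below are hypothetical), the transitive-inference and class-inclusion problems described above can be written out explicitly in a few lines of Python:

# Transitive inference: given "A is greater than B" and "B is greater than C",
# an operational thinker infers that "A is greater than C".
known_facts = [("A", "B"), ("B", "C")]  # each pair (x, y) means "x is greater than y"

def is_greater(x, y, facts):
    # True if x can be inferred to be greater than y by chaining known facts
    # (assumes the facts contain no cycles).
    if (x, y) in facts:
        return True
    return any(a == x and is_greater(b, y, facts) for (a, b) in facts)

print(is_greater("A", "C", known_facts))  # True: the inference preoperational children typically miss

# Class inclusion: eight dogs and three cats -- "are there more dogs or more animals?"
dogs, cats = 8, 3
animals = dogs + cats      # the superordinate class contains both subclasses
print(animals > dogs)      # True: there are more animals (11) than dogs (8)

The sketch only makes the logical structure of the tasks explicit; what Piaget studied was the age at which children can carry out such inferences mentally, not the inferences themselves.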
The logical quality of the adolescent's thought shows in the move away from solving problems in a purely trial-and-error fashion. Adolescents begin to think more as a scientist thinks, devising plans to solve problems and systematically testing opinions. They use hypothetical-deductive reasoning, which means that they develop hypotheses or best guesses, and systematically deduce, or conclude, which is the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs, and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be. Adolescents are also changing cognitively in the way that they think about social matters. One thing that brings about this change is adolescent egocentrism, which heightens self-consciousness and gives adolescents a sense of who they are through feelings of personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: imaginary audience and personal fable. Imaginary audience consists of an adolescent believing that others are watching them and the things they do. Personal fable is not the same thing as imaginary audience, though the two are often confused. Personal fable consists of believing that you are exceptional in some way. These types of social thinking begin in the concrete stage but carry on to the formal operational stage of development. ==== Testing for concrete operations ==== Piagetian tests are well known and widely used to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children. One example of an experiment for testing conservation is the water level task. An experimenter will have two glasses that are the same size, fill them to the same level with liquid, and make sure the child understands that both of the glasses have the same amount of water in them. Then, the experimenter will pour the liquid from one of the glasses into a tall, thin glass. The experimenter will then ask the child if the taller glass has more liquid, less liquid, or the same amount of liquid. The child will then give his or her answer. There are three keys for the experimenter to keep in mind with this experiment. These are justification, number of times asking, and word choice. Justification: After the child has answered the question being posed, the experimenter must ask why the child gave that answer. This is important because the answers they give can help the experimenter to assess the child's developmental age. Number of times asking: Some argue that a child's answers can be influenced by the number of times an experimenter asks them about the amount of water in the glasses. For example, a child is asked about the amount of liquid in the first set of glasses and then asked once again after the water is moved into a different-sized glass. Some children will begin to doubt their original answer and say something they would not otherwise have said. Word choice: The phrasing that the experimenter uses may affect how the child answers. If, in the liquid and glass example, the experimenter asks, "Which of these glasses has more liquid?", the child may think that his or her view that they are the same is wrong, because the adult is implying that one must have more. 
Alternatively, if the experimenter asks, "Are these equal?", then the child is more likely to say that they are, because the experimenter is implying that they are. Classification: As children's experiences and vocabularies grow, they build schemata and are able to organize objects in many different ways. They also understand classification hierarchies and can arrange objects into a variety of classes and subclasses. Identity: One feature of concrete operational thought is the understanding that objects have qualities that do not change even if the object is altered in some way. For instance, the mass of an object does not change when it is rearranged. A piece of chalk is still chalk even when the piece is broken in two. Reversibility: The child learns that some things that have been changed can be returned to their original state. Water can be frozen and then thawed to become liquid again; however, eggs cannot be unscrambled. Children use reversibility a lot in mathematical problems such as: 2 + 3 = 5 and 5 – 3 = 2. Conservation: The ability to understand that the quantity (mass, weight, volume) of something does not change when its appearance changes. Decentration: The ability to focus on more than one feature of a scenario or problem at a time. This also describes the ability to attend to more than one task at a time. Decentration is what allows for conservation to occur. Seriation: Arranging items along a quantitative dimension, such as length or weight, in a methodical way is now demonstrated by the concrete operational child. For example, they can logically arrange a series of different-sized sticks in order by length. Younger children not yet in the concrete stage approach a similar task in a haphazard way. These new cognitive skills increase the child's understanding of the physical world. However, according to Piaget, they still cannot think in abstract ways. Additionally, they do not think in systematic scientific ways. For example, most children under age twelve would not be able to come up with the variables that influence the period that a pendulum takes to complete its arc. Even if they were given weights they could attach to strings in order to do this experiment, they would not be able to draw a clear conclusion. === Formal operational stage === The final stage is known as the formal operational stage (early to middle adolescence, beginning at age 11 and consolidating around ages 14–15): Intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality." At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts. Piaget stated that "hypothetico-deductive reasoning" becomes important during the formal operational stage. This type of thinking involves hypothetical "what-if" situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics. Abstract thought emerges during the formal operational stage. Children tend to think very concretely and specifically in earlier stages, and begin to consider possible outcomes and consequences of actions. Metacognition, the capacity for "thinking about thinking", allows adolescents and adults to reason about their thought processes and monitor them. Problem-solving is demonstrated when children use trial-and-error to solve problems. 
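The pendulum problem mentioned above is a classic illustration of what changes at this stage: the formal operational solver isolates one variable at a time and discovers that, for small swings, the period depends on the length of the string (and on gravity) but not on the weight of the bob. As a hedged aside, the relation used below is standard introductory physics rather than anything stated in Piaget's text, and the Python sketch simply varies one factor at a time in the way a formal operational reasoner is expected to do:

import math

G = 9.81  # gravitational acceleration in m/s^2 (standard value, an assumption of the sketch)

def period(length_m, mass_kg=1.0):
    # Small-angle period of a simple pendulum: T = 2 * pi * sqrt(L / g).
    # Note that the mass argument does not appear in the formula.
    return 2 * math.pi * math.sqrt(length_m / G)

# Vary length while holding mass fixed: the period changes.
for length in (0.25, 0.5, 1.0):
    print(f"length = {length} m -> period = {period(length):.2f} s")

# Vary mass while holding length fixed: the period stays the same.
for mass in (0.1, 1.0, 10.0):
    print(f"mass = {mass} kg -> period = {period(1.0, mass):.2f} s")

Holding everything but one factor constant is exactly the systematic strategy that, as described next, separates formal operational problem solving from the earlier reliance on trial and error.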
The ability to systematically solve a problem in a logical and methodical way emerges. Children in primary school years mostly use inductive reasoning, but adolescents start to use deductive reasoning. Inductive reasoning is when children draw general conclusions from personal experiences and specific facts. Adolescents learn how to use deductive reasoning by applying logic to create specific conclusions from abstract concepts. This capability results from their capacity to think hypothetically. "However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives". ==== Experiments ==== Piaget and his colleagues conducted several experiments to assess formal operational thought. In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children must use formal operational thought to realize that the distance of the weights from the center and the heaviness of the weights both affect the balance. A heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3- to 5-year-olds could not comprehend the concept of balancing at all, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial-and-error. Finally, by ages 13 and 14, in early to middle adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis. === The stages and causation === Piaget sees children's conception of causation as a march from "primitive" conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas" from their own action such that they appear, from the child's point of view, "to have qualities which, in fact, stem from the organism". Consequently, these "subjective conceptions," so prevalent during Piaget's first stage of development, are dashed upon discovering deeper empirical truths. Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that such is the case for his friends, he must separate his self from the object, resulting in a theory that the moon is immobile, or moves independently of other agents. The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural" conceptions of causation and mechanical or "naturalistic" causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move?", "What makes the stars move?", "Why do rivers flow?" 
The nature of all the answers given, Piaget says, is such that these objects must perform their actions to "fulfill their obligations towards men". He calls this "moral explanation". == Postulated physical mechanisms underlying schemes, schemas, and stages == First note the distinction between 'schemes' (analogous to 1D lists of action-instructions, e.g. leading to separate pen-strokes), and figurative 'schemas' (aka 'schemata', akin to 2D drawings/sketches or virtual 3D models); see schema. This distinction (often overlooked by translators) is emphasized by Piaget & Inhelder and others (Appendix pp. 21–22), and also in an earlier (1958) psychology dictionary. In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still-abstract schemes (which he promoted as units of action), though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning. To date, with one exception, it has been impossible to investigate such RNA hypotheses by traditional direct observation and logical deduction. The one exception is that such ultra-micro sites would almost certainly have to use optical communication, and recent studies have demonstrated that nerve-fibres can indeed transmit light/infra-red (in addition to their acknowledged role). However, it accords with the philosophy of science, especially scientific realism, to do indirect investigations of such phenomena which are intrinsically unobservable for practical reasons. The art then is to build up a plausible interdisciplinary case from the indirect evidence (as indeed the child does during concept development) and then retain that model until it is disproved by observable-or-other new evidence which then calls for new accommodation. In that spirit, it might now be said that the RNA/infra-red model is valid (for explaining Piagetian higher intelligence). In any case, the current situation opens the way for more testing and further development in several directions, including the finer points of Piaget's agenda. == Practical applications == Parents can use Piaget's theory in many ways to support their child's growth. Teachers can also use Piaget's theory to help their students. For example, recent studies have shown that children in the same grade and of the same age perform differently on tasks measuring basic addition and subtraction accuracy. Children in the preoperational and concrete operational levels of cognitive development perform arithmetic operations (such as addition and subtraction) with similar accuracy; however, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater precision. Teachers can use Piaget's theory to see where each child in their class stands with each subject by discussing the syllabus with their students and the students' parents. The stage of cognitive growth differs from one person to another. Cognitive development or thinking is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. The easiest way to achieve this balance is to understand new experiences through the lens of preexisting ideas. Infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day's headlines as evidence for their existing worldview. 
However, the application of standardized Piagetian theory and procedures in different societies produced widely varying results, leading some to speculate not only that some cultures produce more cognitive development than others, but also that, without specific kinds of cultural experience and formal schooling, development might cease at a certain level, such as the concrete operational level. One procedure followed methods developed in Geneva (i.e., the water level task). Participants were presented with two beakers of equal circumference and height, filled with equal amounts of water. The water from one beaker was transferred into another that was taller and of smaller circumference. The children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying testing procedures to match the local culture produced a different pattern of results. In the revised procedures, the participants explained in their own language and indicated that, while the water was now "more", the quantity was the same. Piaget's water level task has also been applied to the elderly by Formann, and the results showed an age-associated, non-linear decline of performance. == Relation to psychometric theories of intelligence == Researchers have linked Piaget's theory to Cattell and Horn's theory of fluid and crystallized abilities. Piaget's operative intelligence corresponds to the Cattell-Horn formulation of fluid ability in that both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Piaget's treatment of everyday learning corresponds to the Cattell-Horn formulation of crystallized ability in that both reflect the impress of experience. Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning, much like fluid ability's relation to crystallized intelligence. Piaget's theory also aligns with another psychometric theory, namely the psychometric theory of g, general intelligence. Piaget designed a number of tasks to assess hypotheses arising from his theory. The tasks were not intended to measure individual differences and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. g is thought to underlie performance on the two types of tasks. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests. == Challenges to Piagetian stage theory == Piagetian accounts of development have been challenged on several grounds. First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, or the uneven progression of cognitive development within a specific domain, suggests that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn, with relative ease, concepts and forms of complex reasoning that are supposedly represented only in more advanced stages (Lourenço & Machado, 1996, p. 145). 
More broadly, Piaget's theory is "domain general," predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child's performance, notably how a child can differ in sophistication across several domains. Piaget's theory has also been challenged by research on children's cognitive development, such as studies using the habituation paradigm. Many infants possess "core knowledge" which allows them to have an innate understanding of how things around them work. Infants were found to expect coherence (objects move in one piece), continuity (objects follow continuous paths), and contact (objects do not move without being touched). In an experiment conducted by Renée Baillargeon, three-month-old infants were tested to see if they were surprised when a board fell downward and appeared to pass through a ball hidden behind it. These infants showed surprise and confusion, despite being younger than the eight months proposed by Piaget. Thus, it was found that the way in which children learn about the world is not strictly confined to particular age groups. During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist" and evolutionary psychology ideas. These ideas de-emphasized domain general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences". In this vein, some cognitive developmentalists argued that, rather than being domain general learners, children come equipped with domain specific theories, sometimes referred to as "core knowledge," which allows them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object cannot pass through another object), or in human behavior (for example, a hand repeatedly reaching for an object wants that object, not just a particular path of motion), and this sensitivity becomes the building block from which more elaborate knowledge is constructed. Piaget's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrated that a child goes through several stages of cognitive development and comes to conclusions on their own; however, a child's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider. More recent work from a newer dynamic systems approach has strongly challenged some of the basic presumptions of the "core knowledge" school as well as of Piagetian stage theory. Dynamic systems approaches draw on modern neuroscientific research that was not available to Piaget when he was constructing his theory. This has shed new light on psychological research, in which new techniques such as brain imaging have provided new understanding of cognitive development. One important finding is that domain-specific knowledge is constructed as children develop and integrate knowledge. This enables the domain to improve the accuracy of the knowledge as well as the organization of memories. 
However, this suggests more of a "smooth integration" of learning and development than either Piaget or his neo-nativist critics had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognitive development than Piaget implied. == Post-Piagetian and neo-Piagetian stages == In recent years, several theorists have attempted to address concerns with Piaget's theory by developing new theories and models that can accommodate evidence which violates Piagetian predictions and postulates. The neo-Piagetian theories of cognitive development, advanced by Robbie Case, Andreas Demetriou, Graeme S. Halford, Kurt W. Fischer, Michael Lamport Commons, and Juan Pascual-Leone, attempted to integrate Piaget's theory with cognitive and differential theories of cognitive organization and development. Their aim was to better account for the cognitive factors of development and for intra-individual and inter-individual differences in cognitive development. They suggested that development along Piaget's stages is due to increasing working memory capacity and processing efficiency brought about by "biological maturation". Moreover, Demetriou's theory ascribes an important role to hypercognitive processes of "self-monitoring, self-recording, self-evaluation, and self-regulation", and it recognizes the operation of several relatively autonomous domains of thought (Demetriou, 1998; Demetriou, Mouyi, Spanoudis, 2010; Demetriou, 2003, p. 153). Piaget's theory stops at the formal operational stage, but other researchers have observed that the thinking of adults is more nuanced than formal operational thought. This fifth stage has been named post formal thought or operation, and several post formal stages have been proposed. Michael Commons presented evidence for four post formal stages in the model of hierarchical complexity: systematic, meta-systematic, paradigmatic, and cross-paradigmatic (Commons & Richards, 2003, pp. 206–208; Oliver, 2004, p. 31). There are many theorists, however, who have criticized "post formal thinking," because the concept lacks both theoretical and empirical verification. The term "integrative thinking" has been suggested for use instead. A "sentential" stage, said to occur before the early preoperational stage, has been proposed by Fischer, Biggs and Biggs, Commons, and Richards. Jerome Bruner has expressed views on cognitive development in a "pragmatic orientation" in which humans actively use knowledge for practical applications, such as problem solving and understanding reality. Michael Lamport Commons proposed the model of hierarchical complexity (MHC) in two dimensions: horizontal complexity and vertical complexity (Commons & Richards, 2003, p. 205). Kieran Egan has proposed five stages of understanding. These are "somatic", "mythic", "romantic", "philosophic", and "ironic". These stages are developed through cognitive tools such as "stories", "binary oppositions", "fantasy" and "rhyme, rhythm, and meter" to enhance memorization and develop a long-lasting learning capacity. Lawrence Kohlberg developed three levels of moral development: "Preconventional", "Conventional", and "Postconventional". Each level is composed of two orientation stages, with a total of six orientation stages: (1) "Punishment-Obedience", (2) "Instrumental Relativist", (3) "Good Boy-Nice Girl", (4) "Law and Order", (5) "Social Contract", and (6) "Universal Ethical Principle". 
Andreas Demetriou has also developed neo-Piagetian theories of cognitive development. Jane Loevinger's stages of ego development occur through "an evolution of stages". "First is the Presocial Stage followed by the Symbiotic Stage, Impulsive Stage, Self-Protective Stage, Conformist Stage, Self-Aware Level: Transition from Conformist to Conscientious Stage, Individualistic Level: Transition from Conscientious to the Autonomous Stage, Autonomous Stage, and Integrated Stage". Ken Wilber has incorporated Piaget's theory in his multidisciplinary field of integral theory. In his view, human consciousness is structured in hierarchical order and organized in "holon" chains, or a "great chain of being", which are based on the level of spiritual and psychological development. Oliver Kress published a model that connected Piaget's theory of development and Abraham Maslow's concept of self-actualization. Cheryl Armon has proposed five stages of "the Good Life". These are "Egoistic Hedonism", "Instrumental Hedonism", "Affective/Altruistic Mutuality", "Individuality", and "Autonomy/Community" (Andreoletti & Demick, 2003, p. 284; Armon, 1984, pp. 40–43). Christopher R. Hallpike proposed that human cognitive and moral understanding has evolved over time from a primitive state to its present form. Robert Kegan extended Piaget's developmental model to adults in describing what he called constructive-developmental psychology. == References == == External links ==
Wikipedia/Stage_theory
This stage is "marked by greater dependence on intuitive thinking rather than just perception." Children begin to have more automatic thoughts that don't require evidence. During this stage there is a heightened sense of curiosity and need to understand how and why things work. Piaget named this substage "intuitive thought" because they are starting to develop more logical thought but cannot explain their reasoning. Thought during this stage is still immature and cognitive errors occur. Children in this stage depend on their own subjective perception of the object or event. This stage is characterized by centration, conservation, irreversibility, class inclusion, and transitive inference. Centration is the act of focusing all attention on one characteristic or dimension of a situation, whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once familiarized with Piaget's most famous experimental task. In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid. When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child was unable to comprehend that the properties of the substances continued to remain the same (conservation). Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events was reversed and the water from the tall beaker was poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks. Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or animals, but struggled when trying to classify them as both, simultaneously. Similar to this is concept relating to intuitive thought, known as "transitive inference". 
Transitive inference is using previous knowledge to determine the missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A" is greater than "B" and "B" is greater than "C". This child may have difficulty here understanding that "A" is also greater than "C". === Concrete operational stage === The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (middle childhood and preadolescence) years, and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking is not yet developed in the child, and children can only solve problems that apply to concrete events or objects. At this stage, the children undergo a transition where the child learns rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning. Inductive reasoning involves drawing inferences from observations in order to make a generalization. In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C". However, when asked "is A more than C?", the child might not be able to logically figure the question out mentally. Two other important processes in the concrete operational stage are logic and the elimination of egocentrism. Egocentrism is the inability to consider or understand a perspective other than one's own. It is the phase where the thought and morality of the child is completely self focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it's under the box even though the child knows it is in the drawer. (See also False-belief task.) Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks. Understanding and knowing how to use full common sense has not yet been completely adapted. Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. This includes mental reversibility. An example of this is being able to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available, as well as apply all these processes to hypothetical situations. The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem solving ability. 
The logical quality of the adolescent's thought contrasts with the way younger children are more likely to solve problems in a trial-and-error fashion. Adolescents begin to think more as a scientist thinks, devising plans to solve problems and systematically testing solutions. They use hypothetical-deductive reasoning, which means that they develop hypotheses or best guesses, and systematically deduce, or conclude, which is the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be. Adolescents also are changing cognitively by the way that they think about social matters. One thing that brings about a change is egocentrism. This happens by heightening self-consciousness and giving adolescents an idea of who they are through their personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: imaginary audience and personal fable. Imaginary audience consists of an adolescent believing that others are watching them and the things they do. Personal fable is not the same thing as imaginary audience but is often confused with it. Personal fable consists of believing that you are exceptional in some way. These types of social thinking begin in the concrete stage but carry on to the formal operational stage of development. ==== Testing for concrete operations ==== Piagetian tests are well known and practiced to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children. One example of an experiment for testing conservation is the water level task. An experimenter will have two glasses that are the same size, fill them to the same level with liquid, and make sure the child understands that both of the glasses have the same amount of water in them. Then, the experimenter will pour the liquid from one of the small glasses into a tall, thin glass. The experimenter will then ask the child if the taller glass has more liquid, less liquid, or the same amount of liquid. The child will then give his answer. There are three keys for the experimenter to keep in mind with this experiment. These are justification, number of times asking, and word choice. Justification: After the child has answered the question being posed, the experimenter must ask why the child gave that answer. This is important because the answers they give can help the experimenter to assess the child's developmental age. Number of times asking: Some argue that a child's answers can be influenced by the number of times an experimenter asks them about the amount of water in the glasses. For example, a child is asked about the amount of liquid in the first set of glasses and then asked once again after the water is moved into a different sized glass. Some children will doubt their original answer and say something they would not have said if they did not doubt their first answer. Word choice: The phrasing that the experimenter uses may affect how the child answers. If, in the liquid and glass example, the experimenter asks, "Which of these glasses has more liquid?", the child may think that his thoughts of them being the same are wrong because the adult is saying that one must have more.
Alternatively, if the experimenter asks, "Are these equal?", then the child is more likely to say that they are, because the experimenter is implying that they are. Classification: As children's experiences and vocabularies grow, they build schemata and are able to organize objects in many different ways. They also understand classification hierarchies and can arrange objects into a variety of classes and subclasses. Identity: One feature of concrete operational thought is the understanding that objects have qualities that do not change even if the object is altered in some way. For instance, mass of an object does not change by rearranging it. A piece of chalk is still chalk even when the piece is broken in two. Reversibility: The child learns that some things that have been changed can be returned to their original state. Water can be frozen and then thawed to become liquid again; however, eggs cannot be unscrambled. Children use reversibility a lot in mathematical problems such as: 2 + 3 = 5 and 5 – 3 = 2. Conservation: The ability to understand that the quantity (mass, weight volume) of something doesn't change due to the change of appearance. Decentration: The ability to focus on more than one feature of scenario or problem at a time. This also describes the ability to attend to more than one task at a time. Decentration is what allows for conservation to occur. Seriation: Arranging items along a quantitative dimension, such as length or weight, in a methodical way is now demonstrated by the concrete operational child. For example, they can logically arrange a series of different-sized sticks in order by length. Younger children not yet in the concrete stage approach a similar task in a haphazard way. These new cognitive skills increase the child's understanding of the physical world. However, according to Piaget, they still cannot think in abstract ways. Additionally, they do not think in systematic scientific ways. For example, most children under age twelve would not be able to come up with the variables that influence the period that a pendulum takes to complete its arc. Even if they were given weights they could attach to strings in order to do this experiment, they would not be able to draw a clear conclusion. === Formal operational stage === The final stage is known as the formal operational stage (early to middle adolescence, beginning at age 11 and finalizing around 14–15): Intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality." At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts. Piaget stated that "hypothetico-deductive reasoning" becomes important during the formal operational stage. This type of thinking involves hypothetical "what-if" situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics. Abstract thought emerges during the formal operational stage. Children tend to think very concretely and specifically in earlier stages, and begin to consider possible outcomes and consequences of actions. Metacognition, the capacity for "thinking about thinking" that allows adolescents and adults to reason about their thought processes and monitor them. Problem-solving is demonstrated when children use trial-and-error to solve problems. 
The ability to systematically solve a problem in a logical and methodical way emerges. Children in primary school years mostly use inductive reasoning, but adolescents start to use deductive reasoning. Inductive reasoning is when children draw general conclusions from personal experiences and specific facts. Adolescents learn how to use deductive reasoning by applying logic to create specific conclusions from abstract concepts. This capability results from their capacity to think hypothetically. "However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives". ==== Experiments ==== Piaget and his colleagues conducted several experiments to assess formal operational thought. In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children must use formal operational thought to realize that the distance of the weights from the center and the heaviness of the weights both affected the balance. A heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3- to 5- year olds could not at all comprehend the concept of balancing, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial-and-error. Finally, by age 13 and 14, in early to middle adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis. === The stages and causation === Piaget sees children's conception of causation as a march from "primitive" conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas" from their own action such that they appear, from the child's point of view, "to have qualities which, in fact, stem from the organism". Consequently, these "subjective conceptions," so prevalent during Piaget's first stage of development, are dashed upon discovering deeper empirical truths. Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that such is the case for his friends, he must separate his self from the object, resulting in a theory that the moon is immobile, or moves independently of other agents. The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural" conceptions of causation and mechanical or "naturalistic" causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move?", "What makes the stars move?", "Why do rivers flow?" 
The nature of all the answers given, Piaget says, is such that these objects must perform their actions to "fulfill their obligations towards men". He calls this "moral explanation". == Postulated physical mechanisms underlying schemes, schemas, and stages == First note the distinction between 'schemes' (analogous to 1D lists of action-instructions, e.g. leading to separate pen-strokes), and figurative 'schemas' (aka 'schemata', akin to 2D drawings/sketches or virtual 3D models); see schema. This distinction (often overlooked by translators) is emphasized by Piaget & Inhelder and others (Appendix pp. 21–22), and also in an earlier (1958) psychology dictionary. In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still-abstract schemes (which he promoted as units of action) — though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning. To date, with one exception, it has been impossible to investigate such RNA hypotheses by traditional direct observation and logical deduction. The one exception is that such ultra-micro sites would almost certainly have to use optical communication, and recent studies have demonstrated that nerve-fibres can indeed transmit light/infra-red (in addition to their acknowledged role). However, it accords with the philosophy of science, especially scientific realism, to do indirect investigations of such phenomena which are intrinsically unobservable for practical reasons. The art then is to build up a plausible interdisciplinary case from the indirect evidence (as indeed the child does during concept development) — and then retain that model until it is disproved by observable-or-other new evidence which then calls for new accommodation. In that spirit, it now might be said that the RNA/infra-red model is valid (for explaining Piagetian higher intelligence). Anyhow, the current situation opens the way for more testing, and further development in several directions, including the finer points of Piaget's agenda. == Practical applications == Parents can use Piaget's theory in many ways to support their child's growth. Teachers can also use Piaget's theory to help their students. For example, recent studies have shown that children in the same grade and of the same age perform differently on tasks measuring basic addition and subtraction accuracy. Children in the preoperational and concrete operational levels of cognitive development perform arithmetic operations (such as addition and subtraction) with similar accuracy; however, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater precision. Teachers can use Piaget's theory to see where each child in their class stands with each subject by discussing the syllabus with their students and the students' parents. The stage of cognitive growth differs from one person to another. Cognitive development or thinking is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. To achieve this balance, the easiest way is to understand new experiences through the lens of preexisting ideas. Infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day's headlines as evidence for their existing worldview.
However, the application of standardized Piagetian theory and procedures in different societies established widely varying results, leading some to speculate not only that some cultures produce more cognitive development than others, but also that without specific kinds of cultural experience, including formal schooling, development might cease at a certain level, such as the concrete operational level. A procedure was done following methods developed in Geneva (i.e., the water level task). Participants were presented with two beakers of equal circumference and height, filled with equal amounts of water. The water from one beaker was transferred into another that was taller and of smaller circumference. The children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying testing procedures to match the local culture produced a different pattern of results. In the revised procedures, the participants explained in their own language and indicated that while the water was now "more", the quantity was the same. Piaget's water level task has also been applied to the elderly by Formann, and results showed an age-associated non-linear decline of performance. == Relation to psychometric theories of intelligence == Researchers have linked Piaget's theory to Cattell and Horn's theory of fluid and crystallized abilities. Piaget's operative intelligence corresponds to the Cattell-Horn formulation of fluid ability in that both concern logical thinking and the "eduction of relations" (an expression Cattell used to refer to the inferring of relationships). Piaget's treatment of everyday learning corresponds to the Cattell-Horn formulation of crystallized ability in that both reflect the impress of experience. Piaget's operativity is considered to be prior to, and ultimately provides the foundation for, everyday learning, much like fluid ability's relation to crystallized intelligence. Piaget's theory also aligns with another psychometric theory, namely the psychometric theory of g, general intelligence. Piaget designed a number of tasks to assess hypotheses arising from his theory. The tasks were not intended to measure individual differences, and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. g is thought to underlie performance on the two types of tasks. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests. == Challenges to Piagetian stage theory == Piagetian accounts of development have been challenged on several grounds. First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, or progressive forms of cognitive development in a specific domain, suggests that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn concepts and demonstrate the capability for complex reasoning that are supposedly represented in more advanced stages, with relative ease (Lourenço & Machado, 1996, p. 145).
More broadly, Piaget's theory is "domain general," predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child's performance, notably how a child can differ in sophistication across several domains. Piaget's theory has been challenged through research studies on children's cognitive development, such as those using the habituation paradigm. Many infants possess "core knowledge", which allows them to have an innate understanding of how things around them work. Infants were found to expect coherence (objects move in one piece), continuity (objects follow continuous paths), and contact (objects do not move without being touched). In an experiment conducted by Renée Baillargeon, three-month-old infants were tested to see if they were surprised when a board fell downward and appeared to pass through a ball hidden behind it. These infants were shocked and confused, despite being younger than the eight months proposed by Piaget. Thus, it was found that the way in which children learn about the world is not strictly confined to particular age groups. During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist" and evolutionary psychology ideas. These ideas de-emphasized domain general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences". In this vein, some cognitive developmentalists argued that, rather than being domain general learners, children come equipped with domain specific theories, sometimes referred to as "core knowledge," which allows them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object cannot pass through another object), or in human behavior (for example, a hand repeatedly reaching for an object has that object, not just a particular path of motion), and these regularities become the building blocks from which more elaborate knowledge is constructed. Piaget's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrates that a child goes through several stages of cognitive development and comes to conclusions on their own; however, a child's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider. More recent work from a newer dynamic systems approach has strongly challenged some of the basic presumptions of the "core knowledge" school that Piaget suggested. Dynamic systems approaches harken to modern neuroscientific research that was not available to Piaget when he was constructing his theory. This brought new light into research in psychology in which new techniques such as brain imaging provided new understanding to cognitive development. One important finding is that domain-specific knowledge is constructed as children develop and integrate knowledge. This enables the domain to improve the accuracy of the knowledge as well as the organization of memories.
However, this suggests more of a "smooth integration" of learning and development than either Piaget, or his neo-nativist critics, had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognition development than Piaget implied. == Post-Piagetian and neo-Piagetian stages == In recent years, several theorists attempted to address concerns with Piaget's theory by developing new theories and models that can accommodate evidence which violates Piagetian predictions and postulates. The neo-Piagetian theories of cognitive development, advanced by Robbie Case, Andreas Demetriou, Graeme S. Halford, Kurt W. Fischer, Michael Lamport Commons, and Juan Pascual-Leone, attempted to integrate Piaget's theory with cognitive and differential theories of cognitive organization and development. Their aim was to better account for the cognitive factors of development and for intra-individual and inter-individual differences in cognitive development. They suggested that development along Piaget's stages is due to increasing working memory capacity and processing efficiency by "biological maturation". Moreover, Demetriou's theory ascribes an important role to hypercognitive processes of "self-monitoring, self-recording, self-evaluation, and self-regulation", and it recognizes the operation of several relatively autonomous domains of thought (Demetriou, 1998; Demetriou, Mouyi, Spanoudis, 2010; Demetriou, 2003, p. 153). Piaget's theory stops at the formal operational stage, but other researchers have observed the thinking of adults is more nuanced than formal operational thought. This fifth stage has been named post formal thought or operation. Post formal stages have been proposed. Michael Commons presented evidence for four post formal stages in the model of hierarchical complexity: systematic, meta-systematic, paradigmatic, and cross-paradigmatic (Commons & Richards, 2003, p. 206–208; Oliver, 2004, p. 31). There are many theorists, however, who have criticized "post formal thinking," because the concept lacks both theoretical and empirical verification. The term "integrative thinking" has been suggested for use instead. A "sentential" stage, said to occur before the early preoperational stage, has been proposed by Fischer, Biggs and Biggs, Commons, and Richards. Jerome Bruner has expressed views on cognitive development in a "pragmatic orientation" in which humans actively use knowledge for practical applications, such as problem solving and understanding reality. Michael Lamport Commons proposed the model of hierarchical complexity (MHC) in two dimensions: horizontal complexity and vertical complexity (Commons & Richards, 2003, p. 205). Kieran Egan has proposed five stages of understanding. These are "somatic", "mythic", "romantic", "philosophic", and "ironic". These stages are developed through cognitive tools such as "stories", "binary oppositions", "fantasy" and "rhyme, rhythm, and meter" to enhance memorization to develop a long-lasting learning capacity. Lawrence Kohlberg developed three stages of moral development: "Preconventional", "Conventional" and "Postconventional". Each level is composed of two orientation stages, with a total of six orientation stages: (1) "Punishment-Obedience", (2) "Instrumental Relativist", (3) "Good Boy-Nice Girl", (4) "Law and Order", (5) "Social Contract", and (6) "Universal Ethical Principle". 
Andreas Demetriou has expressed neo-Piagetian theories of cognitive development. Jane Loevinger's stages of ego development occur through "an evolution of stages". "First is the Presocial Stage followed by the Symbiotic Stage, Impulsive Stage, Self-Protective Stage, Conformist Stage, Self-Aware Level: Transition from Conformist to Conscientious Stage, Individualistic Level: Transition from Conscientious to the Autonomous Stage, Conformist Stage, and Integrated Stage". Ken Wilber has incorporated Piaget's theory in his multidisciplinary field of integral theory. Human consciousness is structured in hierarchical order and organized in "holon" chains or a "great chain of being", which are based on the level of spiritual and psychological development. Oliver Kress published a model that connected Piaget's theory of development and Abraham Maslow's concept of self-actualization. Cheryl Armon has proposed five stages of "the Good Life". These are "Egoistic Hedonism", "Instrumental Hedonism", "Affective/Altruistic Mutuality", "Individuality", and "Autonomy/Community" (Andreoletti & Demick, 2003, p. 284) (Armon, 1984, pp. 40–43). Christopher R. Hallpike proposed that human cognitive and moral understanding has evolved over time from its primitive state to its present form. Robert Kegan extended Piaget's developmental model to adults in describing what he called constructive-developmental psychology. == References == == External links ==
Wikipedia/Piaget's_theory_of_cognitive_development
In computer science, purely functional programming usually designates a programming paradigm—a style of building the structure and elements of computer programs—that treats all computation as the evaluation of mathematical functions. Program state and mutable objects are usually modeled with temporal logic, as explicit variables that represent the program state at each step of a program execution: a variable state is passed as an input parameter of a state-transforming function, which returns the updated state as part of its return value. This style handles state changes without losing the referential transparency of the program expressions. Purely functional programming consists of ensuring that functions, inside the functional paradigm, will only depend on their arguments, regardless of any global or local state. A pure functional subroutine only has visibility of changes of state represented by state variables included in its scope. == Difference between pure and impure functional programming == The exact difference between pure and impure functional programming is a matter of controversy. Sabry's proposed definition of purity is that all common evaluation strategies (call-by-name, call-by-value, and call-by-need) produce the same result, ignoring strategies that error or diverge. A program is usually said to be functional when it uses some concepts of functional programming, such as first-class functions and higher-order functions. However, a first-class function need not be purely functional, as it may use techniques from the imperative paradigm, such as arrays or input/output methods that use mutable cells, which update their state as side effects. In fact, the earliest programming languages cited as being functional, IPL and Lisp, are both "impure" functional languages by Sabry's definition. == Properties of purely functional programming == === Strict versus non-strict evaluation === Each evaluation strategy which ends on a purely functional program returns the same result. In particular, it ensures that the programmer does not have to consider in which order programs are evaluated, since eager evaluation will return the same result as lazy evaluation. However, it is still possible that an eager evaluation may not terminate while the lazy evaluation of the same program halts. An advantage of this is that lazy evaluation can be implemented much more easily; as all expressions will return the same result at any moment (regardless of program state), their evaluation can be delayed as much as necessary. === Parallel computing === In a purely functional language, the only dependencies between computations are data dependencies, and computations are deterministic. Therefore, to program in parallel, the programmer need only specify the pieces that should be computed in parallel, and the runtime can handle all other details such as distributing tasks to processors, managing synchronization and communication, and collecting garbage in parallel. This style of programming avoids common issues such as race conditions and deadlocks, but has less control than an imperative language. To ensure a speedup, the granularity of tasks must be carefully chosen to be neither too big nor too small. In theory, it is possible to use runtime profiling and compile-time analysis to judge whether introducing parallelism will speed up the program, and thus automatically parallelize purely functional programs. In practice, this has not been terribly successful, and fully automatic parallelization is not practical. 
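The state-threading style described above can be illustrated with a minimal sketch (written here in Python; the function names are invented for illustration and are not taken from the sources). Instead of mutating a variable, each function receives the current state as an argument and returns the updated state as part of its result, so every call is referentially transparent.

```python
# Minimal sketch of purely functional state handling: the "state" is an
# explicit value that is passed in and returned, never mutated in place.

def increment(counter):
    # Pure function: its result depends only on its argument, so the call
    # is referentially transparent.
    new_counter = counter + 1
    return new_counter, new_counter  # (result, updated state)

def add_twice(counter):
    # The updated state is threaded explicitly from one call to the next.
    _, counter = increment(counter)
    result, counter = increment(counter)
    return result, counter

assert add_twice(0) == (2, 2)  # same inputs always give the same outputs
```

Because the output of increment depends only on its argument, evaluating add_twice(0) eagerly or lazily yields the same result, in line with the evaluation-strategy property discussed above.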
=== Data structures === Purely functional data structures are persistent. Persistency is required for functional programming; without it, the same computation could return different results. Functional programming may use persistent non-purely functional data structures, while those data structures may not be used in purely functional programs. Purely functional data structures are often represented in a different way than their imperative counterparts. For example, arrays with constant-time access and update are a basic component of most imperative languages, and many imperative data structures, such as hash tables and binary heaps, are based on arrays. Arrays can be replaced by maps or random access lists, which admit a purely functional implementation, but have logarithmic access and update time. Therefore, purely functional data structures can be used in languages which are non-functional, but they may not be the most efficient tool available, especially if persistency is not required. In general, conversion of an imperative program to a purely functional one also requires ensuring that the formerly-mutable structures are now explicitly returned from functions that update them, a program structure called store-passing style. == Purely functional language == A purely functional language is a language which only admits purely functional programming. Purely functional programs can, however, be written in languages which are not purely functional. == References ==
Wikipedia/Pure_functional
In computer science, information hiding is the principle of segregation of the design decisions in a computer program that are most likely to change, thus protecting other parts of the program from extensive modification if the design decision is changed. The protection involves providing a stable interface which protects the remainder of the program from the implementation (whose details are likely to change). Written in another way, information hiding is the ability to prevent certain aspects of a class or software component from being accessible to its clients, using either programming language features (like private variables) or an explicit exporting policy. == Overview == The term encapsulation is often used interchangeably with information hiding. Not all agree on the distinctions between the two, though; one may think of information hiding as being the principle and encapsulation being the technique. A software module hides information by encapsulating the information into a module or other construct which presents an interface. A common use of information hiding is to hide the physical storage layout for data so that if it is changed, the change is restricted to a small subset of the total program. For example, if a three-dimensional point (x, y, z) is represented in a program with three floating-point scalar variables and later, the representation is changed to a single array variable of size three, a module designed with information hiding in mind would protect the remainder of the program from such a change. In object-oriented programming, information hiding (by way of nesting of types) reduces software development risk by shifting the code's dependency on an uncertain implementation (design decision) onto a well-defined interface. Clients of the interface perform operations purely through the interface, so, if the implementation changes, the clients do not have to change. == Encapsulation == In his book on object-oriented design, Grady Booch defined encapsulation as "the process of compartmentalizing the elements of an abstraction that constitute its structure and behavior; encapsulation serves to separate the contractual interface of an abstraction and its implementation." The purpose is to achieve the potential for change: the internal mechanisms of the component can be improved without impact on other components, or the component can be replaced with a different one that supports the same public interface. Encapsulation also protects the integrity of the component, by preventing users from setting the internal data of the component into an invalid or inconsistent state. Another benefit of encapsulation is that it reduces system complexity and thus increases robustness, by limiting the interdependencies between software components. In this sense, the idea of encapsulation is more general than how it is applied in object-oriented programming. For example, a relational database is encapsulated in the sense that its only public interface is a query language (such as SQL), which hides all the internal machinery and data structures of the database management system. As such, encapsulation is a core principle of good software architecture, at every level of granularity. Encapsulating software behind an interface allows the construction of objects that mimic the behavior and interactions of objects in the real world. For example, a simple digital alarm clock is a real-world object that a layperson (nonexpert) can use and understand. 
They can understand what the alarm clock does, and how to use it through the provided interface (buttons and screen), without having to understand every part inside of the clock. Similarly, if the clock were replaced by a different model, the layperson could continue to use it in the same way, provided that the interface works the same. In the more concrete setting of an object-oriented programming language, the notion is used to mean either an information hiding mechanism, a bundling mechanism, or the combination of the two. (See Encapsulation (object-oriented programming) for details.) == History == The concept of information hiding was first described by David Parnas in 1972. Before then, modularity was discussed by Richard Gauthier and Stephen Pont in their 1970 book Designing Systems Programs although modular programming itself had been used at many commercial sites for many years previously – especially in I/O sub-systems and software libraries – without acquiring the 'information hiding' tag – but for similar reasons, as well as the more obvious code reuse reason. == Example == Information hiding serves as an effective criterion for dividing any piece of equipment, software, or hardware, into modules of functionality. For instance, a car is a complex piece of equipment. In order to make the design, manufacturing, and maintenance of a car reasonable, the complex piece of equipment is divided into modules with particular interfaces hiding design decisions. By designing a car in this fashion, a car manufacturer can also offer various options while still having a vehicle that is economical to manufacture. For instance, a car manufacturer may have a luxury version of the car as well as a standard version. The luxury version comes with a more powerful engine than the standard version. The engineers designing the two different car engines, one for the luxury version and one for the standard version, provide the same interface for both engines. Both engines fit into the engine bay of the car which is the same between both versions. Both engines fit the same transmission, the same engine mounts, and the same controls. The differences in the engines are that the more powerful luxury version has a larger displacement with a fuel injection system that is programmed to provide the fuel-air mixture that the larger displacement engine requires. In addition to the more powerful engine, the luxury version may also offer other options such as a better radio with CD player, more comfortable seats, a better suspension system with wider tires, and different paint colors. With all of these changes, most of the car is the same between the standard version and the luxury version. The radio with CD player is a module that replaces the standard radio, also a module, in the luxury model. The more comfortable seats are installed into the same seat mounts as the standard types of seats. Whether the seats are leather or plastic, or offer lumbar support or not, does not matter. The engineers design the car by dividing the task up into pieces of work that are assigned to teams. Each team then designs their component to a particular standard or interface which allows the team flexibility in the design of the component while at the same time ensuring that all of the components will fit together. Motor vehicle manufacturers frequently use the same core structure for several different models, in part as a cost-control measure. 
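The engine example above can be sketched in code. The following is only an illustration (the class and method names, and the torque figures, are invented and not taken from the cited sources): both engine variants hide their internal design decisions behind one stable interface, and the rest of the car's control software depends only on that interface.

```python
# Illustrative sketch of information hiding: two engine designs share one
# interface, and client code depends only on that interface.
from abc import ABC, abstractmethod

class Engine(ABC):
    """The stable interface agreed on by both engine teams."""

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def torque(self, throttle: float) -> float: ...

class StandardEngine(Engine):
    def start(self) -> None:
        self._idle_rpm = 700          # internal detail, hidden from clients

    def torque(self, throttle: float) -> float:
        return 150.0 * throttle

class LuxuryEngine(Engine):
    def start(self) -> None:
        self._idle_rpm = 650
        self._prime_injection()       # extra hidden step for the larger engine

    def _prime_injection(self) -> None:
        self._injection_ready = True

    def torque(self, throttle: float) -> float:
        return 220.0 * throttle

def launch(engine: Engine) -> float:
    # Client code is written only against the Engine interface, so it works
    # unchanged whichever engine the car was built with.
    engine.start()
    return engine.torque(0.5)

print(launch(StandardEngine()), launch(LuxuryEngine()))  # 75.0 110.0
```

If either engine's internals change, only that class needs to be touched; launch and every other client remain as they are.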
Such a "platform" also provides an example of information hiding, since the floorplan can be built without knowing whether it is to be used in a sedan or a hatchback. As can be seen by this example, information hiding provides flexibility. This flexibility allows a programmer to modify the functionality of a computer program during normal evolution as the computer program is changed to better fit the needs of users. When a computer program is well designed, decomposing the source code solution into modules using the principle of information hiding, evolutionary changes are much easier because the changes typically are local rather than global changes. Cars provide another example of this in how they interface with drivers. They present a standard interface (pedals, wheel, shifter, signals, gauges, etc.) on which people are trained and licensed. Thus, people only have to learn to drive a car; they don't need to learn a completely different way of driving every time they drive a new model. (Granted, there are manual and automatic transmissions and other such differences, but on the whole, cars maintain a unified interface.) == See also == Implementation inheritance Inheritance semantics Modularity (programming) Opaque data type Virtual inheritance Transparency (human–computer interaction) Scope (programming) Compartmentalization (information security) Law of Demeter == Notes == == References ==
Wikipedia/Visibility_(computer_science)
In mathematics, the qualitative theory of differential equations studies the behavior of differential equations by means other than finding their solutions. It originated from the works of Henri Poincaré and Aleksandr Lyapunov. There are relatively few differential equations that can be solved explicitly, but using tools from analysis and topology, one can "solve" them in the qualitative sense, obtaining information about their properties. It was used by Benjamin Kuipers in the book Qualitative reasoning: modeling and simulation with incomplete knowledge to demonstrate how the theory of differential equations can be applied even in situations where only qualitative knowledge is available. == References == == Further reading == Kuipers, Benjamin. Qualitative reasoning: modeling and simulation with incomplete knowledge. MIT Press, 1994. Viktor Vladimirovich Nemytskii, Vyacheslav Stepanov, Qualitative theory of differential equations, Princeton University Press, Princeton, 1960. === Original references === Henri Poincaré, "Mémoire sur les courbes définies par une équation différentielle", Journal de Mathématiques Pures et Appliquées (1881, in French) Lyapunov, Aleksandr M. (1992). "The general problem of the stability of motion". International Journal of Control. 55 (3): 531–534. doi:10.1080/00207179208934253. ISSN 0020-7179. (Translated from the original Russian into French and then into English; the original is from 1892.)
Wikipedia/Qualitative_theory_of_differential_equations
In mathematics, a group action of a group G {\displaystyle G} on a set S {\displaystyle S} is a group homomorphism from G {\displaystyle G} to some group (under function composition) of functions from S {\displaystyle S} to itself. It is said that G {\displaystyle G} acts on S {\displaystyle S} . Many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles. If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron. A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group GL ⁡ ( n , K ) {\displaystyle \operatorname {GL} (n,K)} , the group of the invertible matrices of dimension n {\displaystyle n} over a field K {\displaystyle K} . The symmetric group S n {\displaystyle S_{n}} acts on any set with n {\displaystyle n} elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality. == Definition == === Left group action === If G {\displaystyle G} is a group with identity element e {\displaystyle e} , and X {\displaystyle X} is a set, then a (left) group action α {\displaystyle \alpha } of G {\displaystyle G} on X is a function α : G × X → X {\displaystyle \alpha :G\times X\to X} that satisfies the following two axioms: for all g and h in G and all x in X {\displaystyle X} . The group G {\displaystyle G} is then said to act on X {\displaystyle X} (from the left). A set X {\displaystyle X} together with an action of G {\displaystyle G} is called a (left) G {\displaystyle G} -set. It can be notationally convenient to curry the action α {\displaystyle \alpha } , so that, instead, one has a collection of transformations αg : X → X, with one transformation αg for each group element g ∈ G. The identity and compatibility relations then read α e ( x ) = x {\displaystyle \alpha _{e}(x)=x} and α g ( α h ( x ) ) = ( α g ∘ α h ) ( x ) = α g h ( x ) {\displaystyle \alpha _{g}(\alpha _{h}(x))=(\alpha _{g}\circ \alpha _{h})(x)=\alpha _{gh}(x)} The second axiom states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as α g ∘ α h = α g h {\displaystyle \alpha _{g}\circ \alpha _{h}=\alpha _{gh}} . With the above understanding, it is very common to avoid writing α {\displaystyle \alpha } entirely, and to replace it with either a dot, or with nothing at all. 
Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then e ⋅ x = x {\displaystyle e{\cdot }x=x} g ⋅ ( h ⋅ x ) = ( g h ) ⋅ x {\displaystyle g{\cdot }(h{\cdot }x)=(gh){\cdot }x} From these two axioms, it follows that for any fixed g in G {\displaystyle G} , the function from X to itself which maps x to g⋅x is a bijection, with inverse bijection the corresponding map for g−1. Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym(X) of all bijections from X to itself. === Right group action === Likewise, a right group action of G {\displaystyle G} on X {\displaystyle X} is a function α : X × G → X , {\displaystyle \alpha :X\times G\to X,} that satisfies the analogous axioms: Identity: α ( x , e ) = x {\displaystyle \alpha (x,e)=x} Compatibility: α ( α ( x , g ) , h ) = α ( x , g h ) {\displaystyle \alpha (\alpha (x,g),h)=\alpha (x,gh)} (with α(x, g) often shortened to xg or x⋅g when the action being considered is clear from context) for all g and h in G and all x in X. The difference between left and right actions is in the order in which a product gh acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula (gh)−1 = h−1g−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group Gop on X. Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively. == Notable properties of actions == Let G be a group acting on a set X. The action is called faithful or effective if g⋅x = x for all x ∈ X implies that g = eG. Equivalently, the homomorphism from G to the group of bijections of X corresponding to the action is injective. The action is called free (or semiregular or fixed-point free) if the statement that g⋅x = x for some x ∈ X already implies that g = eG. In other words, no non-trivial element of G fixes a point of X. This is a much stronger property than faithfulness. For example, the action of any group on itself by left multiplication is free. This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z / 2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case, for example the cyclic group Z / 2^nZ cannot act faithfully on a set of size less than 2^n. In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S5, the icosahedral group A5 × Z / 2Z and the cyclic group Z / 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively. === Transitivity properties === The action of G on X is called transitive if for any two points x, y ∈ X there exists a g ∈ G so that g ⋅ x = y. The action is simply transitive (or sharply transitive, or regular) if it is both transitive and free. This means that given x, y ∈ X there is exactly one g ∈ G such that g ⋅ x = y. 
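A minimal computational sketch of these definitions (the group, set, and helper names below are illustrative choices, not taken from the text): the cyclic group Z/3Z acting on a three-element set by addition modulo 3 satisfies both axioms and is faithful, free, and transitive, hence regular.

```python
from itertools import product

# A left group action: the cyclic group Z/3Z acting on X = {0, 1, 2} by
# g . x = (g + x) mod 3.  Group elements are 0, 1, 2, the group operation
# is addition mod 3, and the identity element is 0.
G = [0, 1, 2]
X = [0, 1, 2]
e = 0
op = lambda g, h: (g + h) % 3            # group multiplication
act = lambda g, x: (g + x) % 3           # the action alpha(g, x)

# The two axioms of a left action: identity and compatibility.
assert all(act(e, x) == x for x in X)
assert all(act(g, act(h, x)) == act(op(g, h), x)
           for g, h, x in product(G, G, X))

# Faithful: only the identity acts as the identity permutation of X.
faithful = all(g == e for g in G if all(act(g, x) == x for x in X))
# Free: no non-identity element fixes any point.
free = all(g == e for g in G for x in X if act(g, x) == x)
# Transitive: every y is reachable from every x.
transitive = all(any(act(g, x) == y for g in G) for x in X for y in X)

print(faithful, free, transitive)        # True True True -- a regular (simply transitive) action
```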
If X is acted upon simply transitively by a group G then it is called a principal homogeneous space for G or a G-torsor. For an integer n ≥ 1, the action is n-transitive if X has at least n elements, and for any pair of n-tuples (x1, ..., xn), (y1, ..., yn) ∈ Xn with pairwise distinct entries (that is xi ≠ xj, yi ≠ yj when i ≠ j) there exists a g ∈ G such that g⋅xi = yi for i = 1, ..., n. In other words, the action on the subset of Xn of tuples without repeated entries is transitive. For n = 2, 3 this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory. An action is sharply n-transitive when the action on tuples without repeated entries in Xn is sharply transitive. ==== Examples ==== The action of the symmetric group of X is transitive, in fact n-transitive for any n up to the cardinality of X. If X has cardinality n, the action of the alternating group is (n − 2)-transitive but not (n − 1)-transitive. The action of the general linear group of a vector space V on the set V ∖ {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of v is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere. === Primitive actions === The action of G on X is called primitive if there is no partition of X preserved by all elements of G apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons). === Topological properties === Assume that X is a topological space and the action of G is by homeomorphisms. The action is wandering if every x ∈ X has a neighbourhood U such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. More generally, a point x ∈ X is called a point of discontinuity for the action of G if there is an open subset U ∋ x such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. The domain of discontinuity of the action is the set of all points of discontinuity. Equivalently it is the largest G-stable open subset Ω ⊂ X such that the action of G on Ω is wandering. In a dynamical context this is also called a wandering set. The action is properly discontinuous if for every compact subset K ⊂ X there are only finitely many g ∈ G such that g⋅K ∩ K ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R2 ∖ {(0, 0)} given by n⋅(x, y) = (2nx, 2−ny) is wandering and free but not properly discontinuous. The action by deck transformations of the fundamental group of a locally simply connected space on a universal cover is wandering and free. Such actions can be characterized by the following property: every x ∈ X has a neighbourhood U such that g⋅U ∩ U = ∅ for every g ∈ G ∖ {eG}. Actions with this property are sometimes called freely discontinuous, and the largest subset on which the action is freely discontinuous is then called the free regular set. An action of a group G on a locally compact space X is called cocompact if there exists a compact subset A ⊂ X such that X = G ⋅ A. For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space X / G. === Actions of topological groups === Now assume G is a topological group and X a topological space on which it acts by homeomorphisms. 
The action is said to be continuous if the map G × X → X is continuous for the product topology. The action is said to be proper if the map G × X → X × X defined by (g, x) ↦ (x, g⋅x) is proper. This means that given compact sets K, K′ the set of g ∈ G such that g⋅K ∩ K′ ≠ ∅ is compact. In particular, this is equivalent to proper discontinuity if G is a discrete group. It is said to be locally free if there exists a neighbourhood U of eG such that g⋅x ≠ x for all x ∈ X and g ∈ U ∖ {eG}. The action is said to be strongly continuous if the orbital map g ↦ g⋅x is continuous for every x ∈ X. Contrary to what the name suggests, this is a weaker property than continuity of the action. If G is a Lie group and X a differentiable manifold, then the subspace of smooth points for the action is the set of points x ∈ X such that the map g ↦ g⋅x is smooth. There is a well-developed theory of Lie group actions, i.e. actions which are smooth on the whole space. === Linear actions === If G acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero G-invariant submodules. It is said to be semisimple if it decomposes as a direct sum of irreducible actions. == Orbits and stabilizers == Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x: G ⋅ x = { g ⋅ x : g ∈ G } . {\displaystyle G{\cdot }x=\{g{\cdot }x:g\in G\}.} The defining properties of a group guarantee that the set of orbits of (points x in) X under the action of G forms a partition of X. The associated equivalence relation is defined by saying x ~ y if and only if there exists a g in G with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements x and y are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y. The group action is transitive if and only if it has exactly one orbit, that is, if there exists x in X with G⋅x = X. This is the case if and only if G⋅x = X for all x in X (given that X is non-empty). The set of all orbits of X under the action of G is written as X / G (or, less frequently, as G \ X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written X_G, by contrast with the invariants (fixed points), denoted X^G: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention. === Invariant subsets === If Y is a subset of X, then G⋅Y denotes the set {g⋅y : g ∈ G and y ∈ Y}. The subset Y is said to be invariant under G if G⋅Y = Y (which is equivalent to G⋅Y ⊆ Y). In that case, G also operates on Y by restricting the action to Y. The subset Y is called fixed under G if g⋅y = y for all g in G and all y in Y. Every subset that is fixed under G is also invariant under G, but not conversely. Every orbit is an invariant subset of X on which G acts transitively. Conversely, any invariant subset of X is a union of orbits. The action of G on X is transitive if and only if all elements are equivalent, meaning that there is only one orbit. A G-invariant element of X is x ∈ X such that g⋅x = x for all g ∈ G. The set of all such x is denoted X^G and called the G-invariants of X. 
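The orbit, stabilizer, and invariant-element definitions above lend themselves to direct computation for finite groups. The following Python sketch (the example permutation and helper names are chosen here for illustration) computes the orbit partition and the G-invariant elements for a small cyclic permutation group, and numerically previews the orbit-stabilizer relation discussed below.

```python
# Orbits, stabilizers, and invariants for the cyclic permutation group
# generated by g0 = (0 1)(2 3 4) acting on X = {0, 1, 2, 3, 4}.
# A permutation is stored as a tuple p with p[i] the image of i.
X = range(5)
g0 = (1, 0, 3, 4, 2)

def compose(g, h):                        # (g o h)(i) = g[h[i]]
    return tuple(g[h[i]] for i in X)

identity = tuple(X)
G = [identity]
g = g0
while g != identity:                      # enumerate the cyclic group generated by g0
    G.append(g)
    g = compose(g0, g)

def orbit(x):
    return {h[x] for h in G}              # G.x = { g.x : g in G }

orbit_partition = {frozenset(orbit(x)) for x in X}
stabilizer = [h for h in G if h[0] == 0]  # stabilizer of the point 0
invariants = [x for x in X if all(h[x] == x for h in G)]

print(orbit_partition)                    # the orbits {0, 1} and {2, 3, 4} -- a partition of X
print(len(G), len(orbit(0)) * len(stabilizer))   # 6 and 2 * 3 (the orbit-stabilizer relation)
print(invariants)                         # [] -- this action has no G-invariant elements
```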
When X is a G-module, XG is the zeroth cohomology group of G with coefficients in X, and the higher cohomology groups are the derived functors of the functor of G-invariants. === Fixed points and stabilizer subgroups === Given g in G and x in X with g⋅x = x, it is said that "x is a fixed point of g" or that "g fixes x". For every x in X, the stabilizer subgroup of G with respect to x (also called the isotropy group or little group) is the set of all elements in G that fix x: G x = { g ∈ G : g ⋅ x = x } . {\displaystyle G_{x}=\{g\in G:g{\cdot }x=x\}.} This is a subgroup of G, though typically not a normal one. The action of G on X is free if and only if all stabilizers are trivial. The kernel N of the homomorphism with the symmetric group, G → Sym(X), is given by the intersection of the stabilizers Gx for all x in X. If N is trivial, the action is said to be faithful (or effective). Let x and y be two elements in X, and let g be a group element such that y = g⋅x. Then the two stabilizer groups Gx and Gy are related by Gy = gGxg−1. Proof: by definition, h ∈ Gy if and only if h⋅(g⋅x) = g⋅x. Applying g−1 to both sides of this equality yields (g−1hg)⋅x = x; that is, g−1hg ∈ Gx. An opposite inclusion follows similarly by taking h ∈ Gx and x = g−1⋅y. The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of G (that is, the set of all conjugates of the subgroup). Let (H) denote the conjugacy class of H. Then the orbit O has type (H) if the stabilizer Gx of some/any x in O belongs to (H). A maximal orbit type is often called a principal orbit type. === Orbit-stabilizer theorem === Orbits and stabilizers are closely related. For a fixed x in X, consider the map f : G → X given by g ↦ g⋅x. By definition the image f(G) of this map is the orbit G⋅x. The condition for two elements to have the same image is f ( g ) = f ( h ) ⟺ g ⋅ x = h ⋅ x ⟺ g − 1 h ⋅ x = x ⟺ g − 1 h ∈ G x ⟺ h ∈ g G x . {\displaystyle f(g)=f(h)\iff g{\cdot }x=h{\cdot }x\iff g^{-1}h{\cdot }x=x\iff g^{-1}h\in G_{x}\iff h\in gG_{x}.} In other words, f(g) = f(h) if and only if g and h lie in the same coset for the stabilizer subgroup Gx. Thus, the fiber f−1({y}) of f over any y in G⋅x is contained in such a coset, and every such coset also occurs as a fiber. Therefore f induces a bijection between the set G / Gx of cosets for the stabilizer subgroup and the orbit G⋅x, which sends gGx ↦ g⋅x. This result is known as the orbit-stabilizer theorem. If G is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives | G ⋅ x | = [ G : G x ] = | G | / | G x | , {\displaystyle |G\cdot x|=[G\,:\,G_{x}]=|G|/|G_{x}|,} in other words the length of the orbit of x times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order. Example: Let G be a group of prime order p acting on a set X with k elements. Since each orbit has either 1 or p elements, there are at least k mod p orbits of length 1 which are G-invariant elements. More specifically, k and the number of G-invariant elements are congruent modulo p. This result is especially useful since it can be employed for counting arguments (typically in situations where X is finite as well). Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let G denote its automorphism group. 
Then G acts on the set of vertices {1, 2, ..., 8}, and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, |G| = |G ⋅ 1| |G1| = 8 |G1|. Applying the theorem now to the stabilizer G1, we can obtain |G1| = |(G1) ⋅ 2| |(G1)2|. Any element of G that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by 2π/3, which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, |(G1) ⋅ 2| = 3. Applying the theorem a third time gives |(G1)2| = |((G1)2) ⋅ 3| |((G1)2)3|. Any element of G that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus |((G1)2) ⋅ 3| = 2. One also sees that ((G1)2)3 consists only of the identity automorphism, as any element of G fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain |G| = 8 ⋅ 3 ⋅ 2 ⋅ 1 = 48. === Burnside's lemma === A result closely related to the orbit-stabilizer theorem is Burnside's lemma: | X / G | = 1 | G | ∑ g ∈ G | X g | , {\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|,} where Xg is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element. Fixing a group G, the set of formal differences of finite G-sets forms a ring called the Burnside ring of G, where addition corresponds to disjoint union, and multiplication to Cartesian product. == Examples == The trivial action of any group G on any set X is defined by g⋅x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X. In every group G, left multiplication is an action of G on G: g⋅x = gx for all g, x in G. This action is free and transitive (regular), and forms the basis of a rapid proof of Cayley's theorem – that every group is isomorphic to a subgroup of the symmetric group of permutations of the set G. In every group G with subgroup H, left multiplication is an action of G on the set of cosets G / H: g⋅aH = gaH for all g, a in G. In particular if H contains no nontrivial normal subgroups of G this induces an isomorphism from G to a subgroup of the permutation group of degree [G : H]. In every group G, conjugation is an action of G on G: g⋅x = gxg−1. An exponential notation is commonly used for the right-action variant: xg = g−1xg; it satisfies (xg)h = xgh. In every group G with subgroup H, conjugation is an action of G on conjugates of H: g⋅K = gKg−1 for all g in G and K conjugates of H. An action of Z on a set X uniquely determines and is determined by an automorphism of X, given by the action of 1. Similarly, an action of Z / 2Z on X is equivalent to the data of an involution of X. The symmetric group Sn and its subgroups act on the set {1, ..., n} by permuting its elements The symmetry group of a polyhedron acts on the set of vertices of that polyhedron. It also acts on the set of faces or the set of edges of the polyhedron. The symmetry group of any geometrical object acts on the set of points of that object. For a coordinate space V over a field F with group of units F*, the mapping F* × V → V given by a × (x1, x2, ..., xn) ↦ (ax1, ax2, ..., axn) is a group action called scalar multiplication. 
The automorphism group of a vector space (or graph, or group, or ring ...) acts on the vector space (or set of vertices of the graph, or group, or ring ...). The general linear group GL(n, K) and its subgroups, particularly its Lie subgroups (including the special linear group SL(n, K), orthogonal group O(n, K), special orthogonal group SO(n, K), and symplectic group Sp(n, K)) are Lie groups that act on the vector space Kn. The group operations are given by multiplying the matrices from the groups with the vectors from Kn. The general linear group GL(n, Z) acts on Zn by natural matrix action. The orbits of its action are classified by the greatest common divisor of coordinates of the vector in Zn. The affine group acts transitively on the points of an affine space, and the subgroup V of the affine group (that is, a vector space) has transitive and free (that is, regular) action on these points; indeed this can be used to give a definition of an affine space. The projective linear group PGL(n + 1, K) and its subgroups, particularly its Lie subgroups, which are Lie groups that act on the projective space Pn(K). This is a quotient of the action of the general linear group on projective space. Particularly notable is PGL(2, K), the symmetries of the projective line, which is sharply 3-transitive, preserving the cross ratio; the Möbius group PGL(2, C) is of particular interest. The isometries of the plane act on the set of 2D images and patterns, such as wallpaper patterns. The definition can be made more precise by specifying what is meant by image or pattern, for example, a function of position with values in a set of colors. Isometries are in fact one example of affine group (action). The sets acted on by a group G comprise the category of G-sets in which the objects are G-sets and the morphisms are G-set homomorphisms: functions f : X → Y such that g⋅(f(x)) = f(g⋅x) for every g in G. The Galois group of a field extension L / K acts on the field L but has only a trivial action on elements of the subfield K. Subgroups of Gal(L / K) correspond to subfields of L that contain K, that is, intermediate field extensions between L and K. The additive group of the real numbers (R, +) acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems) by time translation: if t is in R and x is in the phase space, then x describes a state of the system, and t + x is defined to be the state of the system t seconds later if t is positive or −t seconds ago if t is negative. The additive group of the real numbers (R, +) acts on the set of real functions of a real variable in various ways, with (t⋅f)(x) equal to, for example, f(x + t), f(x) + t, f(xet), f(x)et, f(x + t)et, or f(xet) + t, but not f(xet + t). Given a group action of G on X, we can define an induced action of G on the power set of X, by setting g⋅U = {g⋅u : u ∈ U} for every subset U of X and every g in G. This is useful, for instance, in studying the action of the large Mathieu group on a 24-set and in studying symmetry in certain models of finite geometries. The quaternions with norm 1 (the versors), as a multiplicative group, act on R3: for any such quaternion z = cos α/2 + v sin α/2, the mapping f(x) = zxz* is a counterclockwise rotation through an angle α about an axis given by a unit vector v; z is the same rotation; see quaternions and spatial rotation. This is not a faithful action because the quaternion −1 leaves all points where they were, as does the quaternion 1. 
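A short numerical sketch of the quaternion example (the helper functions are illustrative, not a reference implementation): a unit quaternion z acts on R3 by f(x) = zxz*, and −z induces the same rotation, which is one way to see that the action is not faithful.

```python
import math

# The action of a unit quaternion z = cos(a/2) + v sin(a/2) on R^3 by f(x) = z x z*.
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(z, vec):
    # Embed the vector as a pure quaternion and conjugate by z.
    w, x, y, zz = qmul(qmul(z, (0.0, *vec)), qconj(z))
    return (x, y, zz)

# Rotation by 90 degrees about the z-axis.
a = math.pi / 2
z = (math.cos(a / 2), 0.0, 0.0, math.sin(a / 2))
print(rotate(z, (1.0, 0.0, 0.0)))      # approximately (0, 1, 0)

# -z induces the same rotation, so the action of the unit quaternions is not faithful.
neg_z = tuple(-c for c in z)
print(rotate(neg_z, (1.0, 0.0, 0.0)))  # also approximately (0, 1, 0)
```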
Given left G-sets X, Y, there is a left G-set YX whose elements are G-equivariant maps α : X × G → Y, and with left G-action given by g⋅α = α ∘ (idX × –g) (where "–g" indicates right multiplication by g). This G-set has the property that its fixed points correspond to equivariant maps X → Y; more generally, it is an exponential object in the category of G-sets. == Group actions and groupoids == The notion of group action can be encoded by the action groupoid G′ = G ⋉ X associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components. == Morphisms and isomorphisms between G-sets == If X and Y are two G-sets, a morphism from X to Y is a function f : X → Y such that f(g⋅x) = g⋅f(x) for all g in G and all x in X. Morphisms of G-sets are also called equivariant maps or G-maps. The composition of two morphisms is again a morphism. If a morphism f is bijective, then its inverse is also a morphism. In this case f is called an isomorphism, and the two G-sets X and Y are called isomorphic; for all practical purposes, isomorphic G-sets are indistinguishable. Some example isomorphisms: Every regular G action is isomorphic to the action of G on G given by left multiplication. Every free G action is isomorphic to G × S, where S is some set and G acts on G × S by left multiplication on the first coordinate. (S can be taken to be the set of orbits X / G.) Every transitive G action is isomorphic to left multiplication by G on the set of left cosets of some subgroup H of G. (H can be taken to be the stabilizer group of any element of the original G-set.) With this notion of morphism, the collection of all G-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean). == Variants and generalizations == We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action. Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object X of some category, and then define an action on X as a monoid homomorphism into the monoid of endomorphisms of X. If X has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion. We can view a group G as a category with a single object in which every morphism is invertible. A (left) group action is then nothing but a (covariant) functor from G to the category of sets, and a group representation is a functor from G to the category of vector spaces. A morphism between G-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category. In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category. == Gallery == == See also == Gain graph Group with operators Measurable group action Monoid action Young–Deruyts development == Notes == == Citations == == References == Aschbacher, Michael (2000). Finite Group Theory. 
Cambridge University Press. ISBN 978-0-521-78675-1. MR 1777008. Dummit, David; Richard Foote (2003). Abstract Algebra (3rd ed.). Wiley. ISBN 0-471-43334-9. Eie, Minking; Chang, Shou-Te (2010). A Course on Abstract Algebra. World Scientific. ISBN 978-981-4271-88-2. Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 978-0-521-79540-1, MR 1867354. Rotman, Joseph (1995). An Introduction to the Theory of Groups. Graduate Texts in Mathematics 148 (4th ed.). Springer-Verlag. ISBN 0-387-94285-8. Smith, Jonathan D.H. (2008). Introduction to abstract algebra. Textbooks in mathematics. CRC Press. ISBN 978-1-4200-6371-4. Kapovich, Michael (2009), Hyperbolic manifolds and discrete groups, Modern Birkhäuser Classics, Birkhäuser, pp. xxvii+467, ISBN 978-0-8176-4912-8, Zbl 1180.57001 Maskit, Bernard (1988), Kleinian groups, Grundlehren der Mathematischen Wissenschaften, vol. 287, Springer-Verlag, pp. XIII+326, Zbl 0627.30039 Perrone, Paolo (2024), Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1 Thurston, William (1980), The geometry and topology of three-manifolds, Princeton lecture notes, p. 175, archived from the original on 2020-07-27, retrieved 2016-02-08 Thurston, William P. (1997), Three-dimensional geometry and topology. Vol. 1., Princeton Mathematical Series, vol. 35, Princeton University Press, pp. x+311, Zbl 0873.57001 tom Dieck, Tammo (1987), Transformation groups, de Gruyter Studies in Mathematics, vol. 8, Berlin: Walter de Gruyter & Co., p. 29, doi:10.1515/9783110858372.312, ISBN 978-3-11-009745-0, MR 0889050 == External links == "Action of a group on a manifold", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Group Action". MathWorld.
Wikipedia/Orbit_(group_theory)
In applied mathematics, the phase space method is a technique for constructing and analyzing solutions of dynamical systems, that is, solving time-dependent differential equations. The method consists of first rewriting the equations as a system of differential equations that are first-order in time, by introducing additional variables. The original and the new variables form a vector in the phase space. The solution then becomes a curve in the phase space, parametrized by time. The curve is usually called a trajectory or an orbit. The (vector) differential equation is reformulated as a geometrical description of the curve, that is, as a differential equation in terms of the phase space variables only, without the original time parametrization. Finally, a solution in the phase space is transformed back into the original setting. The phase space method is used widely in physics. It can be applied, for example, to find traveling wave solutions of reaction–diffusion systems. == See also == Reaction–diffusion system Fisher's equation == References ==
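A minimal sketch of the reduction step described above (the pendulum equation, step size, and initial condition are illustrative choices): the second-order equation x″ = −sin x is rewritten as the first-order system x′ = v, v′ = −sin x, and its solution is traced as a curve in the (x, v) phase plane.

```python
import math

# The pendulum equation x'' = -sin(x) rewritten as the first-order system
# x' = v, v' = -sin(x); the solution becomes a curve (x(t), v(t)) in the phase plane.
def trajectory(x, v, dt=1e-3, steps=20000):
    points = [(x, v)]
    for _ in range(steps):
        v = v - dt * math.sin(x)          # v' = -sin(x)  (semi-implicit Euler step)
        x = x + dt * v                    # x' = v
        points.append((x, v))
    return points

orbit = trajectory(1.0, 0.0)              # start at angle 1 rad with zero velocity
print(orbit[0], orbit[-1])                # the trajectory remains close to a closed curve around (0, 0)
```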
Wikipedia/Phase_space_method
In mathematics, a partition of a set is a grouping of its elements into non-empty subsets, in such a way that every element is included in exactly one subset. Every equivalence relation on a set defines a partition of this set, and every partition defines an equivalence relation. A set equipped with an equivalence relation or a partition is sometimes called a setoid, typically in type theory and proof theory. == Definition and notation == A partition of a set X is a set of non-empty subsets of X such that every element x in X is in exactly one of these subsets (i.e., the subsets are nonempty mutually disjoint sets). Equivalently, a family of sets P is a partition of X if and only if all of the following conditions hold: The family P does not contain the empty set (that is ∅ ∉ P {\displaystyle \emptyset \notin P} ). The union of the sets in P is equal to X (that is ⋃ A ∈ P A = X {\displaystyle \textstyle \bigcup _{A\in P}A=X} ). The sets in P are said to exhaust or cover X. See also collectively exhaustive events and cover (topology). The intersection of any two distinct sets in P is empty (that is ( ∀ A , B ∈ P ) A ≠ B ⟹ A ∩ B = ∅ {\displaystyle (\forall A,B\in P)\;A\neq B\implies A\cap B=\emptyset } ). The elements of P are said to be pairwise disjoint or mutually exclusive. See also mutual exclusivity. The sets in P {\displaystyle P} are called the blocks, parts, or cells, of the partition. If a ∈ X {\displaystyle a\in X} then we represent the cell containing a {\displaystyle a} by [ a ] {\displaystyle [a]} . That is to say, [ a ] {\displaystyle [a]} is notation for the cell in P {\displaystyle P} which contains a {\displaystyle a} . Every partition P {\displaystyle P} may be identified with an equivalence relation on X {\displaystyle X} , namely the relation ∼ P {\displaystyle \sim _{\!P}} such that for any a , b ∈ X {\displaystyle a,b\in X} we have a ∼ P b {\displaystyle a\sim _{\!P}b} if and only if a ∈ [ b ] {\displaystyle a\in [b]} (equivalently, if and only if b ∈ [ a ] {\displaystyle b\in [a]} ). The notation ∼ P {\displaystyle \sim _{\!P}} evokes the idea that the equivalence relation may be constructed from the partition. Conversely every equivalence relation may be identified with a partition. This is why it is sometimes said informally that "an equivalence relation is the same as a partition". If P is the partition identified with a given equivalence relation ∼ {\displaystyle \sim } , then some authors write P = X / ∼ {\displaystyle P=X/{\sim }} . This notation is suggestive of the idea that the partition is the set X divided into cells. The notation also evokes the idea that, from the equivalence relation one may construct the partition. The rank of P {\displaystyle P} is | X | − | P | {\displaystyle |X|-|P|} , if X {\displaystyle X} is finite. == Examples == The empty set ∅ {\displaystyle \emptyset } has exactly one partition, namely ∅ {\displaystyle \emptyset } . (Note: this is the partition, not a member of the partition.) For any non-empty set X, P = { X } is a partition of X, called the trivial partition. Particularly, every singleton set {x} has exactly one partition, namely { {x} }. For any non-empty proper subset A of a set U, the set A together with its complement form a partition of U, namely, { A, U ∖ A }. The set {1, 2, 3} has these five partitions (one partition per item): { {1}, {2}, {3} }, sometimes written 1 | 2 | 3. { {1, 2}, {3} }, or 1 2 | 3. { {1, 3}, {2} }, or 1 3 | 2. { {1}, {2, 3} }, or 1 | 2 3. 
{ {1, 2, 3} }, or 123 (in contexts where there will be no confusion with the number). The following are not partitions of {1, 2, 3}: { {}, {1, 3}, {2} } is not a partition (of any set) because one of its elements is the empty set. { {1, 2}, {2, 3} } is not a partition (of any set) because the element 2 is contained in more than one block. { {1}, {2} } is not a partition of {1, 2, 3} because none of its blocks contains 3; however, it is a partition of {1, 2}. == Partitions and equivalence relations == For any equivalence relation on a set X, the set of its equivalence classes is a partition of X. Conversely, from any partition P of X, we can define an equivalence relation on X by setting x ~ y precisely when x and y are in the same part in P. Thus the notions of equivalence relation and partition are essentially equivalent. The axiom of choice guarantees for any partition of a set X the existence of a subset of X containing exactly one element from each part of the partition. This implies that given an equivalence relation on a set one can select a canonical representative element from every equivalence class. == Refinement of partitions == A partition α of a set X is a refinement of a partition ρ of X—and we say that α is finer than ρ and that ρ is coarser than α—if every element of α is a subset of some element of ρ. Informally, this means that α is a further fragmentation of ρ. In that case, it is written that α ≤ ρ. This "finer-than" relation on the set of partitions of X is a partial order (so the notation "≤" is appropriate). Each set of elements has a least upper bound (their "join") and a greatest lower bound (their "meet"), so that it forms a lattice, and more specifically (for partitions of a finite set) it is a geometric and supersolvable lattice. The partition lattice of a 4-element set has 15 elements and is depicted in the Hasse diagram on the left. The meet and join of partitions α and ρ are defined as follows. The meet α ∧ ρ {\displaystyle \alpha \wedge \rho } is the partition whose blocks are the intersections of a block of α and a block of ρ, except for the empty set. In other words, a block of α ∧ ρ {\displaystyle \alpha \wedge \rho } is the intersection of a block of α and a block of ρ that are not disjoint from each other. To define the join α ∨ ρ {\displaystyle \alpha \vee \rho } , form a relation on the blocks A of α and the blocks B of ρ by A ~ B if A and B are not disjoint. Then α ∨ ρ {\displaystyle \alpha \vee \rho } is the partition in which each block C is the union of a family of blocks connected by this relation. Based on the equivalence between geometric lattices and matroids, this lattice of partitions of a finite set corresponds to a matroid in which the base set of the matroid consists of the atoms of the lattice, namely, the partitions with n − 2 {\displaystyle n-2} singleton sets and one two-element set. These atomic partitions correspond one-for-one with the edges of a complete graph. The matroid closure of a set of atomic partitions is the finest common coarsening of them all; in graph-theoretic terms, it is the partition of the vertices of the complete graph into the connected components of the subgraph formed by the given set of edges. In this way, the lattice of partitions corresponds to the lattice of flats of the graphic matroid of the complete graph. Another example illustrates refinement of partitions from the perspective of equivalence relations. 
If D is the set of cards in a standard 52-card deck, the same-color-as relation on D – which can be denoted ~C – has two equivalence classes: the sets {red cards} and {black cards}. The 2-part partition corresponding to ~C has a refinement that yields the same-suit-as relation ~S, which has the four equivalence classes {spades}, {diamonds}, {hearts}, and {clubs}. == Noncrossing partitions == A partition of the set N = {1, 2, ..., n} with corresponding equivalence relation ~ is noncrossing if it has the following property: If four elements a, b, c and d of N having a < b < c < d satisfy a ~ c and b ~ d, then a ~ b ~ c ~ d. The name comes from the following equivalent definition: Imagine the elements 1, 2, ..., n of N drawn as the n vertices of a regular n-gon (in counterclockwise order). A partition can then be visualized by drawing each block as a polygon (whose vertices are the elements of the block). The partition is then noncrossing if and only if these polygons do not intersect. The lattice of noncrossing partitions of a finite set forms a subset of the lattice of all partitions, but not a sublattice, since the join operations of the two lattices do not agree. The noncrossing partition lattice has taken on importance because of its role in free probability theory. == Counting partitions == The total number of partitions of an n-element set is the Bell number Bn. The first several Bell numbers are B0 = 1, B1 = 1, B2 = 2, B3 = 5, B4 = 15, B5 = 52, and B6 = 203 (sequence A000110 in the OEIS). Bell numbers satisfy the recursion B n + 1 = ∑ k = 0 n ( n k ) B k {\displaystyle B_{n+1}=\sum _{k=0}^{n}{n \choose k}B_{k}} and have the exponential generating function ∑ n = 0 ∞ B n n ! z n = e e z − 1 . {\displaystyle \sum _{n=0}^{\infty }{\frac {B_{n}}{n!}}z^{n}=e^{e^{z}-1}.} The Bell numbers may also be computed using the Bell triangle in which the first value in each row is copied from the end of the previous row, and subsequent values are computed by adding two numbers, the number to the left and the number to the above left of the position. The Bell numbers are repeated along both sides of this triangle. The numbers within the triangle count partitions in which a given element is the largest singleton. The number of partitions of an n-element set into exactly k (non-empty) parts is the Stirling number of the second kind S(n, k). The number of noncrossing partitions of an n-element set is the Catalan number C n = 1 n + 1 ( 2 n n ) . {\displaystyle C_{n}={1 \over n+1}{2n \choose n}.} == See also == Exact cover Block design Cluster analysis List of partition topics Lamination (topology) MECE principle Partial equivalence relation Partition algebra Partition refinement Point-finite collection Rhyme schemes by set partition Weak ordering (ordered set partition) == Notes == == References == Brualdi, Richard A. (2004). Introductory Combinatorics (4th ed.). Pearson Prentice Hall. ISBN 0-13-100119-1. Schechter, Eric (1997). Handbook of Analysis and Its Foundations. Academic Press. ISBN 0-12-622760-8.
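A short computational sketch of the counting results in the section above (the function names are illustrative): Bell numbers via the binomial recursion and the Bell triangle, together with the Stirling numbers of the second kind, whose sum over k recovers the corresponding Bell number.

```python
from math import comb

def bell_numbers(n_max):
    B = [1]                                    # B_0 = 1
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

def bell_triangle(rows):
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        row = [prev[-1]]                       # first entry copies the end of the previous row
        for value in prev:
            row.append(row[-1] + value)        # add the number to the left and the one above left
        triangle.append(row)
    return triangle

def stirling2(n, k):                           # partitions of an n-set into exactly k parts
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(bell_numbers(6))                          # [1, 1, 2, 5, 15, 52, 203]
print([row[0] for row in bell_triangle(7)])     # Bell numbers down the left edge: 1, 1, 2, 5, 15, 52, 203
print(sum(stirling2(4, k) for k in range(5)))   # 15 == B_4
```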
Wikipedia/Partition_(set_theory)
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. == Overview == The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because: The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. 
The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood. The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid. The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos. == History == Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of their research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system. In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. 
One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft. == Formal definition == In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function Φ : U ⊆ ( T × X ) → X {\displaystyle \Phi :U\subseteq (T\times X)\to X} with p r o j 2 ( U ) = X {\displaystyle \mathrm {proj} _{2}(U)=X} (where p r o j 2 {\displaystyle \mathrm {proj} _{2}} is the 2nd projection map) and for any x in X: Φ ( 0 , x ) = x {\displaystyle \Phi (0,x)=x} Φ ( t 2 , Φ ( t 1 , x ) ) = Φ ( t 2 + t 1 , x ) , {\displaystyle \Phi (t_{2},\Phi (t_{1},x))=\Phi (t_{2}+t_{1},x),} for t 1 , t 2 + t 1 ∈ I ( x ) {\displaystyle \,t_{1},\,t_{2}+t_{1}\in I(x)} and t 2 ∈ I ( Φ ( t 1 , x ) ) {\displaystyle \ t_{2}\in I(\Phi (t_{1},x))} , where we have defined the set I ( x ) := { t ∈ T : ( t , x ) ∈ U } {\displaystyle I(x):=\{t\in T:(t,x)\in U\}} for any x in X. In particular, in the case that U = T × X {\displaystyle U=T\times X} we have for every x in X that I ( x ) = T {\displaystyle I(x)=T} and thus that Φ defines a monoid action of T on X. The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system. We often write Φ x ( t ) ≡ Φ ( t , x ) {\displaystyle \Phi _{x}(t)\equiv \Phi (t,x)} Φ t ( x ) ≡ Φ ( t , x ) {\displaystyle \Phi ^{t}(x)\equiv \Phi (t,x)} if we take one of the variables as constant. The function Φ x : I ( x ) → X {\displaystyle \Phi _{x}:I(x)\to X} is called the flow through x and its graph is called the trajectory through x. The set γ x ≡ { Φ ( t , x ) : t ∈ I ( x ) } {\displaystyle \gamma _{x}\equiv \{\Phi (t,x):t\in I(x)\}} is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T Φ ( t , x ) ∈ S . {\displaystyle \Phi (t,x)\in S.} Thus, in particular, if S is Φ-invariant, I ( x ) = T {\displaystyle I(x)=T} for all x in S. That is, the flow through x must be defined for all time for every element of S. More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor. === Geometrical definition === In the geometrical definition, a dynamical system is the tuple ⟨ T , M , f ⟩ {\displaystyle \langle {\mathcal {T}},{\mathcal {M}},f\rangle } . T {\displaystyle {\mathcal {T}}} is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M {\displaystyle {\mathcal {M}}} is a manifold, i.e. 
locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f t (with t ∈ T {\displaystyle t\in {\mathcal {T}}} ) such that f t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T {\displaystyle {\mathcal {T}}} into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T {\displaystyle {\mathcal {T}}} . ==== Real dynamical system ==== A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. ==== Discrete dynamical system ==== A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade. ==== Cellular automaton ==== A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice. ==== Multidimensional generalization ==== Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing. ==== Compactification of a dynamical system ==== Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*). In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected. === Measure theoretical definition === A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ − 1 σ ∈ Σ {\displaystyle \Phi ^{-1}\sigma \in \Sigma } . A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ ( Φ − 1 σ ) = μ ( σ ) {\displaystyle \mu (\Phi ^{-1}\sigma )=\mu (\sigma )} . 
Combining the above, a map Φ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system. The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φ n = Φ ∘ Φ ∘ ⋯ ∘ Φ {\displaystyle \Phi ^{n}=\Phi \circ \Phi \circ \dots \circ \Phi } for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated. ==== Relation to geometric definition ==== The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance. Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution. For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems. == Construction of dynamical systems == The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamic system. For example, consider an initial value problem such as the following: x ˙ = v ( t , x ) {\displaystyle {\dot {\boldsymbol {x}}}={\boldsymbol {v}}(t,{\boldsymbol {x}})} x | t = 0 = x 0 {\displaystyle {\boldsymbol {x}}|_{t=0}={\boldsymbol {x}}_{0}} where x ˙ {\displaystyle {\dot {\boldsymbol {x}}}} represents the velocity of the material point x M is a finite dimensional manifold v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM. 
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions. Depending on the properties of this vector field, the mechanical system is called autonomous, when v(t, x) = v(x) homogeneous when v(t, 0) = 0 for all t The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above x ( t ) = Φ ( t , x 0 ) {\displaystyle {\boldsymbol {x}}(t)=\Phi (t,{\boldsymbol {x}}_{0})} The dynamical system is then (T, M, Φ). Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy x ˙ − v ( t , x ) = 0 ⇔ G ( t , Φ ( t , x 0 ) ) = 0 {\displaystyle {\dot {\boldsymbol {x}}}-{\boldsymbol {v}}(t,{\boldsymbol {x}})=0\qquad \Leftrightarrow \qquad {\mathfrak {G}}\left(t,\Phi (t,{\boldsymbol {x}}_{0})\right)=0} where G : ( T × M ) M → C {\displaystyle {\mathfrak {G}}:{{(T\times M)}^{M}}\to \mathbf {C} } is a functional from the set of evolution functions to the field of the complex numbers. This equation is useful when modeling mechanical systems with complicated constraints. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. == Examples == == Linear dynamical systems == Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t). === Flows === For a flow, the vector field v(x) is an affine function of the position in the phase space, that is, x ˙ = v ( x ) = A x + b , {\displaystyle {\dot {x}}=v(x)=Ax+b,} with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b: Φ t ( x 1 ) = x 1 + b t . {\displaystyle \Phi ^{t}(x_{1})=x_{1}+bt.} When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, Φ t ( x 0 ) = e t A x 0 . {\displaystyle \Phi ^{t}(x_{0})=e^{tA}x_{0}.} When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin. The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior. 
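A small numerical sketch of the linear flow just described (the matrix A, the times, and the use of NumPy and SciPy are choices made here for illustration): the flow Φt(x0) = etA x0 is evaluated with a matrix exponential, the flow property Φt+s = Φt ∘ Φs is checked, and the eigenvalues of A confirm convergence to the equilibrium at the origin.

```python
import numpy as np
from scipy.linalg import expm

# The linear flow Phi^t(x0) = exp(tA) x0 for x' = Ax (the case b = 0), with a
# numerical check of the flow property Phi^(t+s) = Phi^t o Phi^s.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])              # eigenvalues have negative real part
x0 = np.array([1.0, 0.0])

def flow(t, x):
    return expm(t * A) @ x

t, s = 0.7, 1.3
print(flow(t + s, x0))
print(flow(t, flow(s, x0)))               # agrees with the previous line (flow property)
print(np.linalg.eigvals(A))               # both eigenvalues lie in the left half-plane,
                                          # so every orbit converges to the equilibrium at 0
```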
=== Maps === A discrete-time, affine dynamical system has the form of a matrix difference equation: x n + 1 = A x n + b , {\displaystyle x_{n+1}=Ax_{n}+b,} with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are given by the linear system A^n x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map. As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point. There are also many other discrete dynamical systems. == Local dynamics == The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter its being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. === Rectification === A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem. The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time around the orbit loops through phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches. === Near periodic orbits === In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points form a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S.
Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0. The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x^2), so a change of coordinates h can only be expected to simplify F to its linear part h − 1 ∘ F ∘ h ( x ) = J ⋅ x . {\displaystyle h^{-1}\circ F\circ h(x)=J\cdot x.} This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (integer multiples of the other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem. === Conjugation results === The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not on the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic. In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. == Bifurcation theory == When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation. Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems. The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
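The eigenvalue criterion for bifurcations can be seen in a one-dimensional family of maps. The sketch below (a standard textbook illustration, not taken from this article's sources) tracks the fixed point x* = 1 − 1/μ of the logistic map F_μ(x) = μx(1 − x) and the eigenvalue DF_μ(x*) = 2 − μ, which crosses the unit circle through −1 at μ = 3, where the fixed point loses stability in a period-doubling bifurcation:

def F(mu, x):
    return mu * x * (1.0 - x)

def DF(mu, x):
    return mu * (1.0 - 2.0 * x)

# The fixed point x* = 1 - 1/mu and the eigenvalue of DF_mu there; the fixed
# point is stable while |DF_mu(x*)| < 1 and the bifurcation occurs at mu = 3.
for mu in (2.5, 2.9, 3.0, 3.1, 3.5):
    x_star = 1.0 - 1.0 / mu
    assert abs(F(mu, x_star) - x_star) < 1e-12       # x* really is a fixed point
    eig = DF(mu, x_star)                             # equals 2 - mu
    print(f"mu = {mu:3.1f}   x* = {x_star:.4f}   DF(x*) = {eig:+.2f}   stable: {abs(eig) < 1.0}")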
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations. == Ergodic systems == In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ t(A) and invariance of the phase space means that v o l ( A ) = v o l ( Φ t ( A ) ) . {\displaystyle \mathrm {vol} (A)=\mathrm {vol} (\Phi ^{t}(A)).} In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure. In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution. For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms. One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω). The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator U t, the transfer operator, ( U t a ) ( x ) = a ( Φ − t ( x ) ) . {\displaystyle (U^{t}a)(x)=a(\Phi ^{-t}(x)).} By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U. The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. 
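A minimal numerical illustration of the ergodic hypothesis (the rotation map and the observable are standard examples chosen here only for illustration) compares the time average of an observable along one orbit with its space average for an irrational rotation of the circle, a measure-preserving map for which the two averages agree:

import numpy as np

# Circle rotation Phi(x) = x + alpha (mod 1); it preserves Lebesgue measure and
# is ergodic for irrational alpha, so time averages match space averages.
alpha = np.sqrt(2.0) - 1.0
a = lambda x: np.sin(2.0 * np.pi * x) ** 2     # a sample observable

x, values = 0.1, []
for _ in range(200_000):
    values.append(a(x))
    x = (x + alpha) % 1.0

time_average = np.mean(values)
space_average = np.mean(a(np.linspace(0.0, 1.0, 200_001)))   # ≈ integral of a over [0, 1]
print(f"time average  ≈ {time_average:.5f}")    # both ≈ 0.5
print(f"space average ≈ {space_average:.5f}")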
An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. == Nonlinear dynamical systems and chaos == Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another with the points that diverge from the orbit (the unstable manifold). This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?" The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear. === Solutions of finite duration === For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus, solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line. As an example, the equation y ′ = − sgn ( y ) √ | y | , y ( 0 ) = 1 {\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1} admits the finite-duration solution y ( t ) = 1 4 ( 1 − t 2 + | 1 − t 2 | ) 2 {\displaystyle y(t)={\frac {1}{4}}\left(1-{\frac {t}{2}}+\left|1-{\frac {t}{2}}\right|\right)^{2}} which is zero for t ≥ 2 {\displaystyle t\geq 2} and is not Lipschitz continuous at its ending time t = 2. {\displaystyle t=2.} == See also == == References == == Further reading == == External links == Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems. Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts. Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G.
Wiens Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science Online books or lecture notes Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level. Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems. Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view. Learning Dynamical Systems. Tutorial on learning dynamical systems. Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl Research groups Dynamical Systems Group Groningen, IWI, University of Groningen. Chaos @ UMD. Concentrates on the applications of dynamical systems. [2], SUNY Stony Brook. Lists of conferences, researchers, and some open problems. Center for Dynamics and Geometry, Penn State. Control and Dynamical Systems, Caltech. Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL). Center for Dynamical Systems, University of Bremen Systems Analysis, Modelling and Prediction Group, University of Oxford Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon Dynamical Systems Archived 2017-06-02 at the Wayback Machine, IMPA, Instituto Nacional de Matemática Pura e Applicada. Nonlinear Dynamics Workgroup Archived 2015-01-21 at the Wayback Machine, Institute of Computer Science, Czech Academy of Sciences. UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia. Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara.
Wikipedia/Evolution_function
Systems medicine is an interdisciplinary field of study that looks at the systems of the human body as part of an integrated whole, incorporating biochemical, physiological, and environmental interactions. Systems medicine draws on systems science and systems biology, and considers complex interactions within the human body in light of a patient's genomics, behavior and environment. The earliest uses of the term systems medicine appeared in 1992, in an article on systems medicine and pharmacology by T. Kamada. An important topic in systems medicine and systems biomedicine is the development of computational models that describe disease progression and the effect of therapeutic interventions. More recent approaches include the redefinition of disease phenotypes based on common mechanisms rather than symptoms. These then provide therapeutic targets, including through network pharmacology and drug repurposing. Since 2018, there has been a dedicated scientific journal, Systems Medicine. == Fundamental schools of systems medicine == Essentially, the issues dealt with by systems medicine can be addressed in two basic ways, molecular (MSM) and organismal systems medicine (OSM): === Molecular systems medicine (MSM) === This approach relies on omics technologies (genomics, proteomics, transcriptomics, phenomics, metabolomics etc.) and tries to understand physiological processes and the evolution of disease in a bottom-up strategy, i.e. by simulating, synthesising and integrating the description of molecular processes to deliver an explanation of an organ system or even the organism as a whole. === Organismal systems medicine (OSM) === This branch of systems medicine, going back to the traditions of Ludwig von Bertalanffy's systems theory and biological cybernetics, is a top-down strategy that starts with the description of large, complex processing structures (i.e. neural networks, feedback loops and other motifs) and tries to find necessary and sufficient conditions for the corresponding functional organisation on a molecular level. A common challenge for both schools is the translation between the molecular and the organismal level. This can be achieved e.g. by affine subspace mapping and sensitivity analysis, but also requires some preparative steps on both ends of the epistemic gap. === Systems Medicine Education === Georgetown University is the first in the nation to launch an MS program in systems medicine. It has developed a rigorous curriculum; the program was developed and is led by Dr. Sona Vasudevan, PhD. == List of research groups == == See also == Biocybernetics Medical cybernetics Systems biology Systems science Systems pharmacology == References ==
Wikipedia/Systems_medicine
In the social sciences, methodological individualism is a method for explaining social phenomena strictly in terms of the decisions of individuals, each being moved by their own personal motivations. In contrast, explanations of social phenomena which assume that cause and effect acts upon whole classes or groups are deemed illusory, and thus rejected according to this approach. Or to put it another way, only group dynamics which can be explained in terms of individual subjective motivations are considered valid. With its bottom-up micro-level approach, methodological individualism is often contrasted with methodological holism, a top-down macro-level approach, and methodological pluralism. == History within the Social Sciences == This framework was introduced as a foundational assumption within the social sciences by Max Weber, and discussed in his book Economy and Society. Within later schools of economic thought, such as the Austrian School, strict adherence to methodological individualism is considered a necessary starting principle. It draws heavily upon assumptions of neoclassical economics, where social behavior is explained in terms of rational actors whose choices are constrained by prices and incomes, and where individuals' subjective preferences are treated as a given. == Criticisms == Economist Mark Blaug has criticized over-reliance on methodological individualism in economics, saying that "it is helpful to note what methodological individualism strictly interpreted [...] would imply for economics. In effect, it would rule out all macroeconomic propositions that cannot be reduced to microeconomic ones [...] this amounts to saying goodbye to almost the whole of received macroeconomics. There must be something wrong with a methodological principle that has such devastating implications". Similarly, the economist Alan Kirman has critiqued general equilibrium theory and modern economics for its "fundamentally individualistic approach to constructing economic models", and showed that an individualist competitive equilibrium is not necessarily stable or unique. However, stability and uniqueness can be achieved if aggregate variables are added, and as a result he argued "the idea that we should start at the level of the isolated individual is one which we may well have to abandon". == See also == Methodological holism Methodological pluralism Austrian School Praxeology Analytical Marxism – School of Marxist theory == References == == Further reading == Agassi, Joseph. "Methodological individualism." The British Journal of Sociology 11.3 (1960): 244–70. Kenneth J. Arrow (1994), "Methodological Individualism and Social Knowledge," American Economic Review, 84(2), JSTOR 2117792 pp. 1–9]. Kaushik Basu (2008), "Methodological Individualism", The New Palgrave Dictionary of Economics, 2nd ed., New York : Palgrave Macmillan ISBN 978-0-333-78676-5 Abstract. Brian Epstein (2009), "Ontological Individualism Reconsidered", Synthese 166(1), pp. 187–213. Friedrich A. Hayek (1948), Individualism and Economic Order. University of Chicago Press. ISBN 0-226-32093-6 Geoffrey Hodgson, (2007) "Meanings of Methodological Individualism", Journal of Economic Methodology 14(2), June, pp. 211–26. Harold Kincaid (2008), "Individualism versus Holism," The New Palgrave Dictionary of Economics, 2nd ed., New York: Palgrave Macmillan ISBN 978-0-333-78676-5 Abstract. Steven Lukes (1968), "Methodological Individualism Reconsidered", British Journal of Sociology 19, pp. 119–29. 
Ludwig von Mises, "The Principle of Methodological Individualism", chapt. 2 in Human Action ISBN 9780865976313 Eprint. Joseph Schumpeter (1909), "On the Concept of Social Value", Quarterly Journal of Economics, 23(2), February, pp. 213–32. Lars Udéhn (2002), "The Changing Face of Methodological Individualism", Annual Review of Sociology, 28, pp. 479–507. == External links ==
Wikipedia/Methodological_individualism
In the mathematics of chaotic dynamical systems, in the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system; its intensity is nearly zero while the system evolves close to the desired periodic orbit and increases when the system drifts away from that orbit. Both the Pyragas and OGY (Ott, Grebogi and Yorke) methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained solely through observing the behavior of the system as a whole over a suitable period of time. The method was proposed by Lithuanian physicist Kęstutis Pyragas. == References == == External links == Kęstutis Pyragas homepage
Wikipedia/Pyragas_method
Cybernetical physics is a scientific area on the border of cybernetics and physics which studies physical systems with cybernetical methods. Cybernetical methods are understood as methods developed within control theory, information theory, systems theory and related areas: control design, estimation, identification, optimization, pattern recognition, signal processing, image processing, etc. Physical systems are also understood in a broad sense; they may belong to lifeless or living nature or be of artificial (engineering) origin, and must have reasonably understood dynamics and models suitable for posing cybernetical problems. Research objectives in cybernetical physics are frequently formulated as analyses of a class of possible system state changes under external (controlling) actions of a certain class. An auxiliary goal is designing the controlling actions required to achieve a prespecified property change. Among typical control action classes are functions which are constant in time (bifurcation analysis, optimization), functions which depend only on time (vibration mechanics, spectroscopic studies, program control), and functions whose value depends on measurements made at the same time or at previous instants. The last class is of special interest since these functions correspond to system analysis by means of external feedback (feedback control). == Roots of cybernetical physics == Until recently no creative interaction of physics and control theory (cybernetics) had been seen and no control theory methods were directly used for discovering new physical effects and phenomena. The situation dramatically changed in the 1990s when two new areas emerged: control of chaos and quantum control. === Control of chaos === In 1990 a paper was published in Physical Review Letters by Edward Ott, Celso Grebogi and James Yorke from the University of Maryland reporting that even small feedback action can dramatically change the behavior of a nonlinear system, e.g., turn chaotic motions into periodic ones and vice versa. The idea almost immediately became popular in the physics community, and since 1990 hundreds of papers have been published demonstrating the ability of small control, with or without feedback, to significantly change the dynamics of real or model systems. By 2003, this paper by Ott, Grebogi and Yorke had been quoted over 1300 times, whilst the total number of papers relating to control of chaos exceeded 4000 by the beginning of the 21st century, with 300–400 papers per year being published in peer-reviewed journals. The method proposed in that paper is now called the OGY method, after the authors' initials. Later, a number of other methods were proposed for transforming chaotic trajectories into periodic ones, for example delayed feedback (the Pyragas method). Numerous nonlinear and adaptive control methods have also been applied to the control of chaos; see the surveys in the literature. Importantly, the results obtained were interpreted as the discovery of new properties of physical systems. Thousands of papers were published that examine and predict properties of systems based on the use of control, identification and other cybernetic methods. Notably, most of those papers were published in physics journals, their authors representing university physics departments. It has become clear that such types of control goals are important not only for the control of chaos, but also for the control of a broader class of oscillatory processes.
This provides evidence for the existence of an emerging field of research related to both physics and control, that of "cybernetical physics". === Quantum control === It is conceivable that molecular physics was the area where ideas of control first appeared. James Clerk Maxwell introduced a hypothetical being, known as Maxwell's demon, with the ability to measure the velocities of gas molecules in a vessel and to direct the fast molecules to one part of the vessel while keeping the slow molecules in another part. This produces a temperature difference between the two parts of the vessel, which seems to contradict the second law of thermodynamics. Now, after more than a century of fruitful life, this demon is even more active than in the past. Recent papers have discussed issues relating to the experimental implementation of Maxwell's demon, particularly at the quantum-mechanical level. At the end of the 1970s the first mathematical results for the control of quantum mechanical models appeared, based on control theory. At the end of the 1980s and the beginning of the 1990s rapid developments in the laser industry led to the appearance of ultrafast, so-called femtosecond lasers. This new generation of lasers has the ability to generate pulses with durations of a few femtoseconds and even less (1 fs = 10 − 15 {\displaystyle 10^{-15}} sec). The duration of such a pulse is comparable with the period of a molecule's natural oscillation. Therefore, a femtosecond laser can, in principle, be used as a means of controlling single molecules and atoms. A consequence of such an application is the possibility of realizing the alchemists' dream of changing the natural course of chemical reactions. A new area in chemistry emerged, femtochemistry, and new femtotechnologies were developed. Ahmed Zewail from Caltech was awarded the 1999 Nobel Prize in Chemistry for his work on femtochemistry. Using modern control theory, new horizons may open for studying the interaction of atoms and molecules, and new ways and possible limits may be discovered for intervening in the intimate processes of the microworld. Besides, control is an important part of many recent nanoscale applications, including nanomotors, nanowires, nanochips, nanorobots, etc. The number of publications in peer-reviewed journals exceeds 600 per year. === Control thermodynamics === The basics of thermodynamics were stated by Sadi Carnot in 1824. He considered a heat engine which operates by drawing heat from a source which is at thermal equilibrium at temperature T h o t {\displaystyle T_{hot}} , and delivering useful work. Carnot saw that, in order to operate continuously, the engine also requires a cold reservoir with the temperature T c o l d {\displaystyle T_{cold}} , to which some heat can be discharged. By simple logic he established the famous Carnot principle: "No heat engine can be more efficient than a reversible one operating between the same temperatures". In fact it was nothing but the solution to an optimal control problem: maximum work can be extracted by a reversible machine and the value of extracted work depends only on the temperatures of the source and the bath. Later, Kelvin introduced his absolute temperature scale (Kelvin scale) and accomplished the next step, evaluating Carnot's reversible efficiency η C a r n o t = 1 − T c o l d T h o t .
{\displaystyle \eta _{Carnot}=1-{\frac {T_{cold}}{T_{hot}}}.} However, most work was devoted to studying stationary systems over infinite time intervals, while for practical purposes it is important to know the possibilities and limitations of the system's evolution for finite times as well as under other types of constraints caused by a finite amount of available resources. The pioneering work devoted to evaluating finite-time limitations for heat engines was published by I. Novikov in 1957, and independently by F.L. Curzon and B. Ahlborn in 1975: the efficiency at maximum power per cycle of a heat engine coupled to its surroundings through a constant heat conductor is η N C A = 1 − T c o l d T h o t {\displaystyle \eta _{NCA}=1-{\sqrt {\frac {T_{cold}}{T_{hot}}}}} (the Novikov–Curzon–Ahlborn formula). The Novikov–Curzon–Ahlborn process is also optimal in the sense of minimal dissipation. Otherwise, if the dissipation degree is given, the process corresponds to the maximum entropy principle. Later, the results were extended and generalized for other criteria and for more complex situations based on modern optimal control theory. As a result, a new direction in thermodynamics arose, known under the names "optimization thermodynamics", "finite-time thermodynamics", "endoreversible thermodynamics" or "control thermodynamics". == Subject and methodology of cybernetical physics == By the end of the 1990s it had become clear that a new area in physics dealing with control methods had emerged. The term "cybernetical physics" was proposed for this area. The subject and methodology of the field have been systematically presented in the literature. A description of the control problems related to cybernetical physics includes classes of controlled plant models, control objectives (goals) and admissible control algorithms. The methodology of cybernetical physics comprises typical methods used for solving problems and typical results in the field. === Models of controlled systems === A formal statement of any control problem begins with a model of the system to be controlled (plant) and a model of the control objective (goal). Even if the plant model is not given (as is the case in many real-world applications), it should be determined in some way. The system models used in cybernetics are similar to traditional models of physics and mechanics with one difference: the inputs and outputs of the model should be explicitly specified. The following main classes of models are considered in the literature related to control of physical systems: continuous systems with lumped parameters described in state space by differential equations, distributed (spatio-temporal) systems described by partial differential equations, and discrete-time state-space models described by difference equations. === Control goals === It is natural to classify control problems by their control goals. Five kinds are listed below. Regulation (often called stabilization or positioning) is the most common and simplest control goal. Regulation is understood as driving the state vector x ( t ) {\displaystyle x(t)} (or the output vector y ( t ) {\displaystyle y(t)} ) to some equilibrium state x ∗ {\displaystyle x*} (respectively, y ∗ {\displaystyle y*} ). Tracking. State tracking is driving a solution x ( t ) {\displaystyle x(t)} to the prespecified function of time x ∗ ( t ) {\displaystyle x*(t)} . Similarly, output tracking is driving the output y ( t ) {\displaystyle y(t)} to the desired output function y ∗ ( t ) {\displaystyle y*(t)} .
The problem is more complex if the desired equilibrium x ∗ {\displaystyle x*} or trajectory x ∗ ( t ) {\displaystyle x*(t)} is unstable in the absence of control action. For example, a typical problem of chaos control can be formulated as tracking an unstable periodic solution (orbit). The key feature of the control problems for physical systems is that the goal should be achieved by means of sufficiently small control. A limit case is stabilization of a system by an arbitrarily small control. The solvability of this task is not obvious if the trajectory x ∗ ( t ) {\displaystyle x*(t)} is unstable, for example in the case of chaotic systems. See. Generation (excitation) of oscillations. The third class of control goals corresponds to the problems of "excitation" or "generation" of oscillations. Here, it is assumed that the system is initially at rest. The problem is to find out if it is possible to drive it into an oscillatory mode with the desired characteristics (energy, frequency, etc.) In this case the goal trajectory of the state vector x ∗ ( t ) {\displaystyle x*(t)} is not prespecified. Moreover, the goal trajectory may be unknown, or may even be irrelevant to the achievement of the control goal. Such problems are well known in electrical, radio engineering, acoustics, laser, and vibrational technologies, and indeed wherever it is necessary to create an oscillatory mode for a system. Such a class of control goals can be related to problems of dissociation, ionization of molecular systems, escape from a potential well, chaotization, and other problems related to the growth of the system energy and its possible phase transition. Sometimes such problems can be reduced to tracking, but the reference trajectories x ∗ ( t ) {\displaystyle x*(t)} in these cases are not necessarily periodic and may be unstable. Besides, the goal trajectory x ∗ ( t ) {\displaystyle x*(t)} may be known only partially. Synchronization. The fourth important class of control goals corresponds to synchronization (more accurately, "controlled synchronization" as distinct from "autosynchronization" or "self-synchronization"). Generally speaking, synchronization is understood as concurrent change of the states of two or more systems or, perhaps, concurrent change of some quantities related to the systems, e.g., equalizing of oscillation frequencies. If the required relation is established only asymptotically, one speaks of "asymptotic synchronization". If synchronization does not exist in the system without control the problem may be posed as finding the control function which ensures synchronization in the closed-loop system, i.e., synchronization may be a control goal. Synchronization problem differs from the model reference control problem in that some phase shifts between the processes are allowed that are either constant or tend to constant values. Besides, in a number of synchronization problems the links between the systems to be synchronized are bidirectional. In such cases the limit mode (synchronous mode) in the overall system is not known in advance. Modification of the limit sets (attractors) of the systems. The last class of control goals is related to the modification of some quantitative characteristics that limit the behavior of the system. 
It includes such specific goals as changing the type of the equilibrium (e.g., transforming an unstable equilibrium into a stable one, or vice versa); changing the type of the limit set (e.g., transforming a limit cycle into a chaotic attractor, or vice versa, changing the fractal dimension of the limit set, etc.); changing the position or the type of the bifurcation point in the parameter space of the system. Investigation of the above problems started at the end of the 1980s with work on bifurcation control and continued with work on the control of chaos. Ott, Grebogi and Yorke and their followers introduced a new class of control goals not requiring any quantitative characteristic of the desired motion. Instead, the desired qualitative type of the limit set (attractor) was specified, e.g., control should provide the system with a chaotic attractor. Additionally, the desired degree of chaoticity may be specified by specifying the Lyapunov exponent, fractal dimension, entropy, etc. See. In addition to the main control goal, some additional goals or constraints may be specified. A typical example is the "small control" requirement: the control function should have little power or should require a small expenditure of energy. Such a restriction is needed to avoid "violence" and preserve the inherent properties of the system under control. This is important for ensuring the elimination of artefacts and for an adequate study of the system. Three types of control are used in physical problems: constant control, feedforward control and feedback control. Implementation of a feedback control requires additional measurement devices working in real time, which are often hard to install. Therefore, studying the system may start with the application of inferior forms of control: time-constant and then feedforward control. The possibilities of changing system behavior by means of feedback control can then be studied. === Methodology === The methodology of cybernetical physics is based on control theory. Typically, some parameters of physical systems are unknown and some variables are not available for measurement. From the control viewpoint this means that control design should be performed under significant uncertainty, i.e., methods of robust control or adaptive control should be used. A variety of design methods have been developed by control theorists and control engineers for both linear and nonlinear systems. Methods of partial control, control by weak signals, etc. have also been developed. == Fields of research and prospects == Currently, an interest in applying control methods in physics is still growing. The following areas of research are being actively developed: Control of oscillations Control of synchronization Control of chaos, bifurcations Control of phase transitions, stochastic resonance Optimal control in thermodynamics Control of micromechanical, molecular and quantum systems Among the most important applications are: control of fusion, control of beams, control in nano- and femto-technologies. In order to facilitate information exchange in the area of cybernetical physics the International Physics and Control Society (IPACS) was created. IPACS organizes regular conferences (Physics and Control Conferences) and supports an electronic library, IPACS Electronic Library and an information portal, Physics and Control Resources. 
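As a toy illustration of the "small control" idea discussed above, the following sketch (a simplified stand-in for feedback chaos-control schemes such as the OGY method, not the original algorithm; the gain and the capture window are illustrative choices) stabilizes the unstable fixed point of a chaotic logistic map with a feedback that acts only when the orbit is already near the target, so the applied control stays small:

r = 3.9                       # chaotic logistic map x_{n+1} = r x_n (1 - x_n)
x_star = 1.0 - 1.0 / r        # unstable fixed point
k = 2.0                       # gain chosen so that |(2 - r) + k| < 1 at x*
window = 0.05                 # feedback is switched on only inside this neighbourhood

def step(x, control):
    u = k * (x - x_star) if (control and abs(x - x_star) < window) else 0.0
    return r * x * (1.0 - x) + u

for control in (False, True):
    x = 0.3
    for _ in range(1000):
        x = step(x, control)
    print(f"control={control!s:5}   |x - x*| after 1000 steps = {abs(x - x_star):.2e}")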
== See also == Maxwell's demon == References == == External links == Portal, Physics and Control Resources IPACS Electronic Library International Physics and Control Society (IPACS)
Wikipedia/Cybernetical_physics
In combinatorial game theory, a branch of mathematics, a hot game is one in which each player can improve their position by making the next move. By contrast, a cold game is one where each player can only worsen their position by making the next move. The class of cold games are equivalent to the class of surreal numbers and so can be ordered by value, while hot games can have other values. There are also tepid games, which are games with a temperature of exactly zero. Tepid games are formed by the class of strictly numerish games: that is, games that are equivalent to a number plus an infinitesimal. Hackenbush can only represent tepid and cold games (by its decomposition into a purple mountain and a green jungle). == Example == For example, consider a game in which players alternately remove tokens of their own color from a table, the Blue player removing only blue tokens and the Red player removing only red tokens, with the winner being the last player to remove a token. Obviously, victory will go to the player who starts off with more tokens, or to the second player if the number of red and blue tokens are equal. Removing a token of one's own color leaves the position slightly worse for the player who made the move, since that player now has fewer tokens on the table. Thus each token represents a "cold" component of the game. Now consider a special purple token bearing the number "100", which may be removed by either player, who then replaces the purple token with 100 tokens of their own color. (In the notation of Conway, the purple token is the game {100|−100}.) The purple token is a "hot" component, because it is highly advantageous to be the player who removes the purple token. Indeed, if there are any purple tokens on the table, players will prefer to remove them first, leaving the red or blue tokens for last. In general, a player will always prefer to move in a hot game rather than a cold game, because moving in a hot game improves their position, while moving in a cold game injures their position. == Temperature == The temperature of a game is a measure of its value to the two players. A purple "100" token has a temperature of 100 because its value to each player is 100 moves. In general, players will prefer to move in the hottest component available. For example, suppose there is a purple "100" token and also a purple "1,000" token which allows the player who takes it to dump 1,000 tokens of their own color on the table. Each player will prefer to remove the "1,000" token, with temperature 1,000 before the "100" token, with temperature 100. To take a slightly more complicated example, consider the game {10|2} + {5|−5}. {5|−5} is a token which either player may replace with 5 tokens of their own color, and {10|2} is a token which the Blue player may replace with 10 blue tokens or the Red player may replace with 2 blue tokens. The temperature of the {10|2} component is ½(10 − 2) = 4, while the temperature of the {5|−5} component is 5. This suggests that each player should prefer to play in the {5|−5} component. Indeed, the best first move for the Red player is to replace {5|−5} with −5, whereupon the Blue player replaces {10|2} with 10, leaving a total of 5; had the Red player moved in the cooler {10|2} component instead, the final position would have been 2 + 5 = 7, which is worse for Red. Similarly, the best first move for the Blue player is also in the hotter component, from {5|−5} to 5, even though moving in the {10|2} component produces more blue tokens in the short term. 
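The temperatures used in this example can be computed directly for simple switches. The sketch below (illustrative only; it handles only games of the form {a|b} with numbers a ≥ b, not general games) computes the mean value and temperature of each component of {10|2} + {5|−5} and picks the hottest one, reproducing the line of play described above:

# A switch {a|b} with numbers a >= b has mean value (a + b)/2 and temperature (a - b)/2;
# e.g. the purple token {100|-100} has mean 0 and temperature 100.
def mean(a, b):
    return (a + b) / 2

def temperature(a, b):
    return (a - b) / 2

components = {"{10|2}": (10, 2), "{5|-5}": (5, -5)}
for name, (a, b) in components.items():
    print(f"{name}: mean = {mean(a, b)}, temperature = {temperature(a, b)}")

hottest = max(components, key=lambda n: temperature(*components[n]))
print("both players prefer to move first in", hottest)
# Best play: Red takes {5|-5} to -5, then Blue takes {10|2} to 10, leaving 10 - 5 = 5,
# as in the example above.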
== Snort == In the game of Snort, Red and Blue players take turns coloring the vertices of a graph, with the constraint that two vertices that are connected by an edge may not be colored differently. As usual, the last player to make a legal move is the winner. Since a player's moves improve their position by effectively reserving the adjacent vertices for them alone, positions in Snort are typically hot. In contrast, in the closely related game Col, where adjacent vertices may not have the same color, positions are usually cold. == Applications == The theory of hot games has found some application in the analysis of endgame strategy in Go. == See also == Domineering, another game in which hot positions arise Cooling and heating (combinatorial game theory), operations to make hot games amenable to the same type of analysis as cold games == References == Berlekamp, Elwyn P.; Conway, John H.; Guy, Richard K. (1982). Winning Ways. Vol. 1 (1st ed.). New York: Academic Press. ISBN 0-12-091150-7. Conway, John H. (2001). On Numbers and Games (2 ed.). A K Peters, Ltd. pp. 101–108. ISBN 1-56881-127-6.
Wikipedia/Temperature_(game_theory)
In combinatorial game theory, cooling, heating, and overheating are operations on hot games to make them more amenable to the traditional methods of the theory, which was originally devised for cold games in which the winner is the last player to have a legal move. Overheating was generalised by Elwyn Berlekamp for the analysis of Blockbusting. Chilling (or unheating) and warming are variants used in the analysis of the endgame of Go. Cooling and chilling may be thought of as a tax on the player who moves, making them pay for the privilege of doing so, while heating, warming and overheating are operations that more or less reverse cooling and chilling. == Basic operations: cooling, heating == The cooled game G t {\displaystyle G_{t}} (" G {\displaystyle G} cooled by t {\displaystyle t} ") for a game G {\displaystyle G} and a (surreal) number t {\displaystyle t} is defined by G t = { { G t L − t ∣ G t R + t } for all numbers t ≤ any number τ for which G τ is infinitesimally close to some number m , m for t > τ {\displaystyle G_{t}={\begin{cases}\{G_{t}^{L}-t\mid G_{t}^{R}+t\}&{\text{ for all numbers }}t\leq {\text{ any number }}\tau {\text{ for which }}G_{\tau }{\text{ is infinitesimally close to some number }}m{\text{ , }}\\m&{\text{ for }}t>\tau \end{cases}}} . The amount t {\displaystyle t} by which G {\displaystyle G} is cooled is known as the temperature; the minimum τ {\displaystyle \tau } for which G τ {\displaystyle G_{\tau }} is infinitesimally close to m {\displaystyle m} is known as the temperature t ( G ) {\displaystyle t(G)} of G {\displaystyle G} ; G {\displaystyle G} is said to freeze to G τ {\displaystyle G_{\tau }} ; m {\displaystyle m} is the mean value (or simply mean) of G {\displaystyle G} . Heating is the inverse of cooling and is defined as the "integral" ∫ t G = { G if G is a number, { ∫ t ( G L ) + t ∣ ∫ t ( G R ) − t } otherwise. {\displaystyle \int ^{t}G={\begin{cases}G&{\text{ if }}G{\text{ is a number, }}\\\{\int ^{t}(G^{L})+t\mid \int ^{t}(G^{R})-t\}&{\text{ otherwise. }}\end{cases}}} == Multiplication and overheating == Norton multiplication is an extension of multiplication to a game G {\displaystyle G} and a positive game U {\displaystyle U} (the "unit") defined by G . U = { G × U (i.e. the sum of G copies of U ) if G is a non-negative integer, − G × − U if G is a negative integer, { G L . U + ( U + I ) ∣ G R . U − ( U + I ) } where I ranges over Δ ( U ) otherwise. {\displaystyle G.U={\begin{cases}G\times U&{\text{ (i.e. the sum of }}G{\text{ copies of }}U{\text{) if }}G{\text{ is a non-negative integer, }}\\-G\times -U&{\text{ if }}G{\text{ is a negative integer, }}\\\{G^{L}.U+(U+I)\mid G^{R}.U-(U+I)\}{\text{ where }}I{\text{ ranges over }}\Delta (U)&{\text{ otherwise. }}\end{cases}}} The incentives Δ ( U ) {\displaystyle \Delta (U)} of a game U {\displaystyle U} are defined as { u − U : u ∈ U L } ∪ { U − u : u ∈ U R } {\displaystyle \{u-U:u\in U^{L}\}\cup \{U-u:u\in U^{R}\}} . Overheating is an extension of heating used in Berlekamp's solution of Blockbusting, where G {\displaystyle G} overheated from s {\displaystyle s} to t {\displaystyle t} is defined for arbitrary games G , s , t {\displaystyle G,s,t} with s > 0 {\displaystyle s>0} as ∫ s t G = { G . s if G is an integer, { ∫ s t ( G L ) + t ∣ ∫ s t ( G R ) − t } otherwise. {\displaystyle \int _{s}^{t}G={\begin{cases}G.s&{\text{ if }}G{\text{ is an integer, }}\\\{\int _{s}^{t}(G^{L})+t\mid \int _{s}^{t}(G^{R})-t\}&{\text{ otherwise. 
}}\end{cases}}} Winning Ways also defines overheating of a game G {\displaystyle G} by a positive game X {\displaystyle X} , as ∫ 0 t G = { ∫ 0 t ( G L ) + X ∣ ∫ 0 t ( G R ) − X } {\displaystyle \int _{0}^{t}G=\left\{\int _{0}^{t}(G^{L})+X\mid \int _{0}^{t}(G^{R})-X\right\}} Note that in this definition numbers are not treated differently from arbitrary games. Note that the "lower bound" 0 distinguishes this from the previous definition by Berlekamp == Operations for Go: chilling and warming == Chilling is a variant of cooling by 1 {\displaystyle 1} used to analyse the Go endgame of Go and is defined by f ( G ) = { m if G is of the form m or m ∗ , { f ( G L ) − 1 ∣ f ( G R ) + 1 } otherwise. {\displaystyle f(G)={\begin{cases}m&{\text{ if }}G{\text{ is of the form }}m{\text{ or }}m*,\\\{f(G^{L})-1\mid f(G^{R})+1\}&{\text{ otherwise.}}\end{cases}}} This is equivalent to cooling by 1 {\displaystyle 1} when G {\displaystyle G} is an "even elementary Go position in canonical form". Warming is a special case of overheating, namely ∫ 1 ∗ 1 {\displaystyle \int _{1*}^{1}} , normally written simply as ∫ {\displaystyle \int } which inverts chilling when G {\displaystyle G} is an "even elementary Go position in canonical form". In this case the previous definition simplifies to the form ∫ G = { G if G is an even integer, G ∗ if G is an odd integer, { ∫ ( G L ) + 1 ∣ ∫ ( G R ) − 1 } otherwise. {\displaystyle \int G={\begin{cases}G&{\text{ if }}G{\text{ is an even integer, }}\\G*&{\text{ if }}G{\text{ is an odd integer, }}\\\{\int (G^{L})+1\mid \int (G^{R})-1\}&{\text{ otherwise. }}\end{cases}}} == References ==
Wikipedia/Cooling_and_heating_(combinatorial_game_theory)
The Kahn–Kalai conjecture, also known as the expectation threshold conjecture or more recently the Park-Pham Theorem, was a conjecture in the field of graph theory and statistical mechanics, proposed by Jeff Kahn and Gil Kalai in 2006. It was proven in a paper published in 2024. == Background == This conjecture concerns the general problem of estimating when phase transitions occur in systems. For example, in a random network with N {\displaystyle N} nodes, where each edge is included with probability p {\displaystyle p} , it is unlikely for the graph to contain a Hamiltonian cycle if p {\displaystyle p} is less than a threshold value ( log ⁡ N ) / N {\displaystyle (\log N)/N} , but highly likely if p {\displaystyle p} exceeds that threshold. Threshold values are often difficult to calculate, but a lower bound for the threshold, the "expectation threshold", is generally easier to calculate. The Kahn–Kalai conjecture is that the two values are generally close together in a precisely defined way, namely that there is a universal constant K {\displaystyle K} for which the ratio between the two is less than K log ⁡ l ( F ) {\displaystyle K\log {l({\mathcal {F}})}} where l ( F ) {\displaystyle l({\mathcal {F}})} is the size of a largest minimal element of an increasing family F {\displaystyle {\mathcal {F}}} of subsets of a power set. == Proof == Jinyoung Park and Huy Tuan Pham announced a proof of the conjecture in 2022; it was published in 2024. == References == == See also == Percolation theory
Wikipedia/Kahn–Kalai_conjecture
In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space (often Euclidean space ℝn). More specifically, the underlying points of discrete percolation form types of lattices whereas the underlying points of continuum percolation are often randomly positioned in some continuous space and form a type of point process. Frequently, a random shape is placed on each point, and the shapes overlap with each other to form clumps or components. As in discrete percolation, a common research focus of continuum percolation is studying the conditions of occurrence for infinite or giant components. Other shared concepts and analysis techniques exist in these two types of percolation theory as well as the study of random graphs and random geometric graphs. Continuum percolation arose from an early mathematical model for wireless networks, which, with the rise of several wireless network technologies in recent years, has been generalized and studied in order to determine the theoretical bounds of information capacity and performance in wireless networks. In addition to this setting, continuum percolation has gained application in other disciplines including biology, geology, and physics, such as the study of porous materials and semiconductors, while becoming a subject of mathematical interest in its own right. == Early history == In the early 1960s Edgar Gilbert proposed a mathematical model of wireless networks that gave rise to the field of continuum percolation theory, thus generalizing discrete percolation. The underlying points of this model, sometimes known as the Gilbert disk model, were scattered uniformly in the infinite plane ℝ2 according to a homogeneous Poisson process. Gilbert, who had noticed similarities between discrete and continuum percolation, then used concepts and techniques from the probability subject of branching processes to show that a threshold value existed for the infinite or "giant" component. == Definitions and terminology == The exact names, terminology, and definitions of these models may vary slightly depending on the source, which is also reflected in the use of point process notation. === Common models === A number of well-studied models exist in continuum percolation, which are often based on homogeneous Poisson point processes. ==== Disk model ==== Consider a collection of points {xi} in the plane ℝ2 that form a homogeneous Poisson process Φ with constant (point) density λ. For each point of the Poisson process (i.e. xi ∈ Φ), place a disk Di with its center located at the point xi. If each disk Di has a random radius Ri (from a common distribution) that is independent of all the other radii and all the underlying points {xi}, then the resulting mathematical structure is known as a random disk model. ==== Boolean model ==== Given a random disk model, if the set union of all the disks {Di} is taken, then the resulting structure ⋃i Di is known as a Boolean–Poisson model (also known as simply the Boolean model), which is a commonly studied model in continuum percolation as well as stochastic geometry. If all the radii are set to some common constant, say, r > 0, then the resulting model is sometimes known as the Gilbert disk (Boolean) model. ==== Germ-grain model ==== The disk model can be generalized to more arbitrary shapes where, instead of a disk, a random compact (hence bounded and closed in ℝ2) shape Si is placed on each point xi.
Again, each shape Si has a common distribution and is independent of all the other shapes and of the underlying (Poisson) point process. This model is known as the germ–grain model, where the underlying points {xi} are the germs and the random compact shapes Si are the grains. The set union of all the shapes forms a Boolean germ-grain model. Typical choices for the grains include disks, random polygons and segments of random length. Boolean models are also examples of stochastic processes known as coverage processes. The above models can be extended from the plane ℝ2 to general Euclidean space ℝn. === Components and criticality === In the Boolean–Poisson model of disks, there can be isolated groups or clumps of disks that do not contact any other clumps of disks. These clumps are known as components. If the area (or volume in higher dimensions) of a component is infinite, one says it is an infinite or "giant" component. A major focus of percolation theory is establishing the conditions when giant components exist in models, which has parallels with the study of random networks. If no giant component exists, the model is said to be subcritical. The conditions of giant component criticality naturally depend on parameters of the model such as the density of the underlying point process. == Excluded area theory == The excluded area of a placed object is defined as the minimal area around the object into which an additional object cannot be placed without overlapping with the first object. For example, in a system of randomly oriented homogeneous rectangles of length l, width w and aspect ratio r = l/w, the average excluded area is given by: A r = 2 l w ( 1 + 4 π 2 ) + 2 π ( l 2 + w 2 ) = 2 l 2 [ 1 r ( 1 + 4 π 2 ) + 1 π ( 1 + 1 r 2 ) ] {\displaystyle A_{r}=2lw\left(1+{\frac {4}{\pi ^{2}}}\right)+{\frac {2}{\pi }}\left(l^{2}+w^{2}\right)=2l^{2}\left[{\frac {1}{r}}\left(1+{\frac {4}{\pi ^{2}}}\right)+{\frac {1}{\pi }}\left(1+{\frac {1}{r^{2}}}\right)\right]} In a system of identical ellipses with semi-axes a and b and ratio r = a/b, and perimeter C, the average excluded area is given by: A r = 2 π a b + C 2 2 π {\displaystyle A_{r}=2\pi ab+{\frac {C^{2}}{2\pi }}} The excluded area theory states that the critical number density (percolation threshold) Nc of a system is inversely proportional to the average excluded area Ar: N c ∝ A r − 1 {\displaystyle N_{\mathrm {c} }\propto A_{r}^{-1}} It has been shown via Monte-Carlo simulations that the percolation threshold in both homogeneous and heterogeneous systems of rectangles or ellipses is dominated by the average excluded areas and can be approximated fairly well by the linear relation N c − N c 0 ∝ A r − 1 {\displaystyle N_{\mathrm {c} }-N_{\mathrm {c} 0}\propto A_{r}^{-1}} with a proportionality constant in the range 3.1–3.5. == Applications == The applications of percolation theory are various and range from materials science to wireless communication systems. Often the work involves showing that a type of phase transition occurs in the system. === Wireless networks === Wireless networks are sometimes best represented with stochastic models owing to their complexity and unpredictability, hence continuum percolation has been used to develop stochastic geometry models of wireless networks. For example, the tools of continuous percolation theory and coverage processes have been used to study the coverage and connectivity of sensor networks.
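The kind of connectivity phase transition described above can be seen in a small simulation. The sketch below (the window size, disk radius, and densities are arbitrary illustrative choices, not values from the literature, and boundary effects of the finite window are ignored) samples a homogeneous Poisson process in a square, joins any two points whose disks of radius r overlap, and reports the fraction of points in the largest clump, which grows sharply once the density passes the percolation threshold:

import numpy as np

rng = np.random.default_rng(0)

def largest_clump_fraction(lam, r, L=20.0):
    # Sample a Poisson process of density lam in an L x L window, join points whose
    # disks of radius r overlap (centre distance < 2r), and return the fraction of
    # points lying in the largest connected clump.
    n = rng.poisson(lam * L * L)
    if n == 0:
        return 0.0
    pts = rng.uniform(0.0, L, size=(n, 2))
    parent = list(range(n))                      # union-find over all pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] < (2 * r) ** 2:
                parent[find(i)] = find(j)
    sizes = np.bincount([find(i) for i in range(n)], minlength=1)
    return sizes.max() / n

for lam in (0.2, 0.6, 1.0, 1.4, 1.8):
    frac = largest_clump_fraction(lam, r=0.5)
    print(f"density {lam:.1f}: largest clump holds {frac:.2f} of the points")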
One of the main limitations of these networks is energy consumption, as each node usually has a battery and an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a subcollection of nodes go into a low energy-consuming sleep mode. These sleep schemes clearly affect the coverage and connectivity of sensor networks. Simple power-saving models have been proposed, such as the uncoordinated 'blinking' model where (at each time interval) each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a blinking Boolean Poisson model has been analyzed to study the latency and connectivity effects of such a simple power scheme. == See also == Stochastic geometry models of wireless networks Random graphs Boolean model (probability theory) Percolation thresholds == References ==
Wikipedia/Continuum_percolation_theory
In statistical mechanics, probability theory, graph theory, etc. the random cluster model is a random graph that generalizes and unifies the Ising model, Potts model, and percolation model. It is used to study random combinatorial structures, electrical networks, etc. It is also referred to as the RC model or sometimes the FK representation after its founders Cees Fortuin and Piet Kasteleyn. The random cluster model has a critical limit, described by a conformal field theory. == Definition == Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph, and ω : E → { 0 , 1 } {\displaystyle \omega :E\to \{0,1\}} be a bond configuration on the graph that maps each edge to a value of either 0 or 1. We say that a bond is closed on edge e ∈ E {\displaystyle e\in E} if ω ( e ) = 0 {\displaystyle \omega (e)=0} , and open if ω ( e ) = 1 {\displaystyle \omega (e)=1} . If we let A ( ω ) = { e ∈ E : ω ( e ) = 1 } {\displaystyle A(\omega )=\{e\in E:\omega (e)=1\}} be the set of open bonds, then an open cluster or FK cluster is any connected component in A ( ω ) {\displaystyle A(\omega )} union the set of vertices. Note that an open cluster can be a single vertex (if that vertex is not incident to any open bonds). Suppose an edge is open independently with probability p {\displaystyle p} and closed otherwise, then this is just the standard Bernoulli percolation process. The probability measure of a configuration ω {\displaystyle \omega } is given as μ ( ω ) = ∏ e ∈ E p ω ( e ) ( 1 − p ) 1 − ω ( e ) . {\displaystyle \mu (\omega )=\prod _{e\in E}p^{\omega (e)}(1-p)^{1-\omega (e)}.} The RC model is a generalization of percolation, where each cluster is weighted by a factor of q {\displaystyle q} . Given a configuration ω {\displaystyle \omega } , we let C ( ω ) {\displaystyle C(\omega )} be the number of open clusters, or alternatively the number of connected components formed by the open bonds. Then for any q > 0 {\displaystyle q>0} , the probability measure of a configuration ω {\displaystyle \omega } is given as μ ( ω ) = 1 Z q C ( ω ) ∏ e ∈ E p ω ( e ) ( 1 − p ) 1 − ω ( e ) . {\displaystyle \mu (\omega )={\frac {1}{Z}}q^{C(\omega )}\prod _{e\in E}p^{\omega (e)}(1-p)^{1-\omega (e)}.} Z is the partition function, or the sum over the unnormalized weights of all configurations, Z = ∑ ω ∈ Ω { q C ( ω ) ∏ e ∈ E ( G ) p ω ( e ) ( 1 − p ) 1 − ω ( e ) } . {\displaystyle Z=\sum _{\omega \in \Omega }\left\{q^{C(\omega )}\prod _{e\in E(G)}p^{\omega (e)}(1-p)^{1-\omega (e)}\right\}.} The partition function of the RC model is a specialization of the Tutte polynomial, which itself is a specialization of the multivariate Tutte polynomial. == Special values of q == The parameter q {\displaystyle q} of the random cluster model can take arbitrary complex values. This includes the following special cases: q → 0 {\displaystyle q\to 0} : linear resistance networks. q < 1 {\displaystyle q<1} : negatively-correlated percolation. q = 1 {\displaystyle q=1} : Bernoulli percolation, with Z = 1 {\displaystyle Z=1} . q = 2 {\displaystyle q=2} : the Ising model. q ∈ Z + {\displaystyle q\in \mathbb {Z} ^{+}} : q {\displaystyle q} -state Potts model. == Edwards-Sokal representation == The Edwards-Sokal (ES) representation of the Potts model is named after Robert G. Edwards and Alan D. Sokal. It provides a unified representation of the Potts and random cluster models in terms of a joint distribution of spin and bond configurations. 
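Before turning to that joint representation, the random-cluster measure defined above can be made concrete by brute-force enumeration on a very small graph. The sketch below sums q^C(ω) ∏ p^ω(e)(1−p)^(1−ω(e)) over all bond configurations; the example graph, the value of p and the helper names are invented for illustration. For q = 1 the printed partition function should be 1, matching the Bernoulli-percolation special case listed above.

```python
# Brute-force sketch of the random-cluster weights defined above, on a tiny graph.
# Helper names and the example graph are invented; only exhaustive enumeration is used.
from itertools import product

def open_clusters(n_vertices, edges, omega):
    """Number of connected components C(omega) formed by the vertices and open bonds."""
    parent = list(range(n_vertices))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (u, v), bit in zip(edges, omega):
        if bit == 1:
            parent[find(u)] = find(v)
    return len({find(i) for i in range(n_vertices)})

def partition_function(n_vertices, edges, p, q):
    """Z = sum over bond configurations of q^C(omega) * p^#open * (1-p)^#closed."""
    Z = 0.0
    for omega in product((0, 1), repeat=len(edges)):
        k = sum(omega)
        Z += q ** open_clusters(n_vertices, edges, omega) * p ** k * (1 - p) ** (len(edges) - k)
    return Z

if __name__ == "__main__":
    # A square with one diagonal: 4 vertices, 5 edges.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    for q in (1.0, 2.0, 3.0):
        print(f"q = {q}: Z = {partition_function(4, edges, p=0.4, q=q):.6f}")
```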
Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph, with the number of vertices being n = | V | {\displaystyle n=|V|} and the number of edges being m = | E | {\displaystyle m=|E|} . We denote a spin configuration as σ ∈ Z q n {\displaystyle \sigma \in \mathbb {Z} _{q}^{n}} and a bond configuration as ω ∈ { 0 , 1 } m {\displaystyle \omega \in \{0,1\}^{m}} . The joint measure of ( σ , ω ) {\displaystyle (\sigma ,\omega )} is given as μ ( σ , ω ) = Z − 1 ψ ( σ ) ϕ p ( ω ) 1 A ( σ , ω ) , {\displaystyle \mu (\sigma ,\omega )=Z^{-1}\psi (\sigma )\phi _{p}(\omega )1_{A}(\sigma ,\omega ),} where ψ {\displaystyle \psi } is the uniform measure, ϕ p {\displaystyle \phi _{p}} is the product measure with density p = 1 − e − β {\displaystyle p=1-e^{-\beta }} , and Z {\displaystyle Z} is an appropriate normalizing constant. Importantly, the indicator function 1 A {\displaystyle 1_{A}} of the set A = { ( σ , ω ) : σ i = σ j for any edge ( i , j ) where ω = 1 } {\displaystyle A=\{(\sigma ,\omega ):\sigma _{i}=\sigma _{j}{\text{ for any edge }}(i,j){\text{ where }}\omega =1\}} enforces the constraint that a bond can only be open on an edge if the adjacent spins are of the same state, also known as the SW rule. The statistics of the Potts spins can be recovered from the cluster statistics (and vice versa), thanks to the following features of the ES representation: The marginal measure μ ( σ ) {\displaystyle \mu (\sigma )} of the spins is the Boltzmann measure of the q-state Potts model at inverse temperature β {\displaystyle \beta } . The marginal measure ϕ p , q ( ω ) {\displaystyle \phi _{p,q}(\omega )} of the bonds is the random-cluster measure with parameters q and p. The conditional measure μ ( σ | ω ) {\displaystyle \mu (\sigma \,|\,\omega )} of the spin represents a uniformly random assignment of spin states that are constant on each connected component of the bond arrangement ω {\displaystyle \omega } . The conditional measure ϕ p , q ( ω | σ ) {\displaystyle \phi _{p,q}(\omega \,|\,\sigma )} of the bonds represents a percolation process (of ratio p) on the subgraph of G {\displaystyle G} formed by the edges where adjacent spins are aligned. In the case of the Ising model, the probability that two vertices ( i , j ) {\displaystyle (i,j)} are in the same connected component of the bond arrangement ω {\displaystyle \omega } equals the two-point correlation function of spins σ i and σ j {\displaystyle \sigma _{i}{\text{ and }}\sigma _{j}} , written ϕ p , q ( i ↔ j ) = ⟨ σ i σ j ⟩ {\displaystyle \phi _{p,q}(i\leftrightarrow j)=\langle \sigma _{i}\sigma _{j}\rangle } . === Frustration === There are several complications of the ES representation once frustration is present in the spin model (e.g. the Ising model with both ferromagnetic and anti-ferromagnetic couplings in the same lattice). In particular, there is no longer a correspondence between the spin statistics and the cluster statistics, and the correlation length of the RC model will be greater than the correlation length of the spin model. This is the reason behind the inefficiency of the SW algorithm for simulating frustrated systems. == Two-dimensional case == If the underlying graph G {\displaystyle G} is a planar graph, there is a duality between the random cluster models on G {\displaystyle G} and on the dual graph G ∗ {\displaystyle G^{*}} . 
At the level of the partition function, the duality reads Z ~ G ( q , v ) = q | V | − | E | − 1 v | E | Z ~ G ∗ ( q , q v ) with v = p 1 − p and Z ~ G ( q , v ) = ( 1 − p ) − | E | Z G ( q , v ) {\displaystyle {\tilde {Z}}_{G}(q,v)=q^{|V|-|E|-1}v^{|E|}{\tilde {Z}}_{G^{*}}\left(q,{\frac {q}{v}}\right)\qquad {\text{with}}\qquad v={\frac {p}{1-p}}\quad {\text{and}}\quad {\tilde {Z}}_{G}(q,v)=(1-p)^{-|E|}Z_{G}(q,v)} On a self-dual graph such as the square lattice, a phase transition can only occur at the self-dual coupling v self-dual = q {\displaystyle v_{\text{self-dual}}={\sqrt {q}}} . The random cluster model on a planar graph can be reformulated as a loop model on the corresponding medial graph. For a configuration ω {\displaystyle \omega } of the random cluster model, the corresponding loop configuration is the set of self-avoiding loops that separate the clusters from the dual clusters. In the transfer matrix approach, the loop model is written in terms of a Temperley-Lieb algebra with the parameter δ = q + q − 1 {\displaystyle \delta =q+q^{-1}} . In two dimensions, the random cluster model is therefore closely related to the O(n) model, which is also a loop model. In two dimensions, the critical random cluster model is described by a conformal field theory with the central charge c = 13 − 6 β 2 − 6 β − 2 with q = 4 cos 2 ⁡ ( π β 2 ) . {\displaystyle c=13-6\beta ^{2}-6\beta ^{-2}\qquad {\text{with}}\qquad q=4\cos ^{2}(\pi \beta ^{2})\ .} Known exact results include the conformal dimensions of the fields that detect whether a point belongs to an FK cluster or a spin cluster. In terms of Kac indices, these conformal dimensions are respectively 2 h 0 , 1 2 {\displaystyle 2h_{0,{\frac {1}{2}}}} and 2 h 1 2 , 0 {\displaystyle 2h_{{\frac {1}{2}},0}} , corresponding to the fractal dimensions 2 − 2 h 0 , 1 2 {\displaystyle 2-2h_{0,{\frac {1}{2}}}} and 2 − 2 h 1 2 , 0 {\displaystyle 2-2h_{{\frac {1}{2}},0}} of the clusters. == History and applications == RC models were introduced in 1969 by Fortuin and Kasteleyn, mainly to solve combinatorial problems. After their founders, it is sometimes referred to as FK models. In 1971 they used it to obtain the FKG inequality. Post 1987, interest in the model and applications in statistical physics reignited. It became the inspiration for the Swendsen–Wang algorithm describing the time-evolution of Potts models. Michael Aizenman and coauthors used it to study the phase boundaries in 1D Ising and Potts models. == See also == Tutte polynomial Ising model Random graph Swendsen–Wang algorithm FKG inequality == References == == External links == Random-Cluster Model – Wolfram MathWorld
Wikipedia/Random_cluster_model
The bunkbed conjecture (also spelled bunk bed conjecture) is a statement in percolation theory, a branch of mathematics that studies the behavior of connected clusters in a random graph. The conjecture is named after its analogy to a bunk bed structure. It was first posited by Pieter Kasteleyn in 1985. A preprint giving a proposed counterexample to the conjecture was posted on the arXiv in October 2024 by Nikita Gladkov, Igor Pak, and Alexander Zimin. == Description == The conjecture has many equivalent formulations. In the most general formulation it involves two identical graphs, referred to as the upper bunk and the lower bunk. These graphs are isomorphic, meaning they share the same structure. Additional edges, termed posts, are added to connect each vertex in the upper bunk with the corresponding vertex in the lower bunk. Each edge in the graph is assigned a probability. The edges in the upper bunk and their corresponding edges in the lower bunk share the same probability. The probabilities assigned to the posts can be arbitrary. A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability. Equivalently, it can be assumed that all edges have the same deletion probability 0 < p < 1. == Statement of the conjecture == The bunkbed conjecture states that in the resulting random subgraph, the probability that a vertex x in the upper bunk is connected to some vertex y in the upper bunk is greater than or equal to the probability that x is connected to y′, the isomorphic copy of y in the lower bunk. == Interpretation and significance == The conjecture suggests that two vertices of a graph are more likely to remain connected after randomly removing some edges if the graph distance between the vertices is smaller. This is intuitive, and similar questions for random walks and the Ising model were resolved positively. The original motivation for the conjecture was its implication that, in a percolation on the infinite square grid, the probability of (0, 0) being connected to (x, y) for x, y ≥ 0 is greater than the probability of (0, 0) being connected to (x + 1, y). Despite its intuitive appeal, proving this conjecture is not straightforward, and it is an active area of research in percolation theory. It was proved for specific types of graphs, such as wheels, complete graphs, complete bipartite graphs, and graphs with a local symmetry. It was also proved in the limit p → 1 for any graph. Counterexamples for generalizations of the bunkbed conjecture have been published for site percolation, hypergraphs, and directed graphs. == References ==
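As a rough illustration of the statement above (not a proof of anything), the following Monte Carlo sketch builds the bunkbed graph over a small base graph, keeps every edge independently with the same probability, and estimates the two connection probabilities being compared. The base graph, retention probability, trial count and helper names are arbitrary choices made for this example.

```python
# Monte Carlo sketch of the bunkbed comparison on a small example graph.
# The base graph, retention probability and helper names are illustrative choices.
import random

def connected(n, edges, a, b):
    """Depth-first connectivity test on an undirected edge list over vertices 0..n-1."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return b in seen

def bunkbed_estimate(base_edges, n, x, y, p_keep=0.5, trials=20000, seed=0):
    """Estimate P(x ~ y in the upper bunk) and P(x ~ y') for the bunkbed over the base graph.
    Vertex i of the upper bunk is i, its copy in the lower bunk is i + n; posts join i and i + n."""
    rng = random.Random(seed)
    upper = list(base_edges)
    lower = [(u + n, v + n) for u, v in base_edges]
    posts = [(i, i + n) for i in range(n)]
    hits_same, hits_other = 0, 0
    for _ in range(trials):
        kept = [e for e in upper + lower + posts if rng.random() < p_keep]
        hits_same += connected(2 * n, kept, x, y)
        hits_other += connected(2 * n, kept, x, y + n)
    return hits_same / trials, hits_other / trials

if __name__ == "__main__":
    base = [(0, 1), (1, 2), (2, 3), (3, 0)]   # base graph: a 4-cycle
    same, other = bunkbed_estimate(base, n=4, x=0, y=2)
    print(f"P(0 ~ 2 in upper) ~ {same:.3f},  P(0 ~ 2') ~ {other:.3f}")
```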
Wikipedia/Bunkbed_conjecture
Flory–Stockmayer theory is a theory governing the cross-linking and gelation of step-growth polymers. The Flory–Stockmayer theory represents an advancement from the Carothers equation, allowing for the identification of the gel point for polymer synthesis not at stoichiometric balance. The theory was initially conceptualized by Paul Flory in 1941 and then was further developed by Walter Stockmayer in 1944 to include cross-linking with an arbitrary initial size distribution. The Flory–Stockmayer theory was the first theory investigating percolation processes. Flory–Stockmayer theory is a special case of random graph theory of gelation. == History == Gelation occurs when a polymer forms large interconnected polymer molecules through cross-linking. In other words, polymer chains are cross-linked with other polymer chains to form an infinitely large molecule, interspersed with smaller complex molecules, shifting the polymer from a liquid to a network solid or gel phase. The Carothers equation is an effective method for calculating the degree of polymerization for stoichiometrically balanced reactions. However, the Carothers equation is limited to branched systems, describing the degree of polymerization only at the onset of cross-linking. The Flory–Stockmayer Theory allows for the prediction of when gelation occurs using percent conversion of initial monomer and is not confined to cases of stoichiometric balance. Additionally, the Flory–Stockmayer Theory can be used to predict whether gelation is possible through analyzing the limiting reagent of the step-growth polymerization. == Flory’s assumptions == In creating the Flory–Stockmayer Theory, Flory made three assumptions that affect the accuracy of this model. These assumptions were: All functional groups on a branch unit are equally reactive All reactions occur between A and B There are no intramolecular reactions As a result of these assumptions, a conversion slightly higher than that predicted by the Flory–Stockmayer Theory is commonly needed to actually create a polymer gel. Since steric hindrance effects prevent each functional group from being equally reactive and intramolecular reactions do occur, the gel forms at slightly higher conversion. Flory postulated that his treatment can also be applied to chain-growth polymerization mechanisms, as the three criteria stated above are satisfied under the assumptions that (1) the probability of chain termination is independent of chain length, and (2) multifunctional co-monomers react randomly with growing polymer chains. == General case == The Flory–Stockmayer Theory predicts the gel point for the system consisting of three types of monomer units linear units with two A-groups (concentration c 1 {\displaystyle c_{1}} ), linear units with two B groups (concentration c 2 {\displaystyle c_{2}} ), branched A units (concentration c 3 {\displaystyle c_{3}} ). The following definitions are used to formally define the system f {\displaystyle f} is the number of reactive functional groups on the branch unit (i.e. 
the functionality of that branch unit) p A {\displaystyle p_{A}} is the probability that A has reacted (conversion of A groups) p B {\displaystyle p_{B}} is the probability that B has reacted (conversion of B groups) ρ = f c 3 2 c 1 + f c 3 {\displaystyle \rho ={\frac {fc_{3}}{2c_{1}+fc_{3}}}} is the ratio of number of A groups in the branch unit to the total number of A groups r = 2 c 1 + f c 3 2 c 2 = p B p A {\displaystyle r={\frac {2c_{1}+fc_{3}}{2c_{2}}}={\frac {p_{B}}{p_{A}}}} is the ratio between total number of A and B groups. So that p B = r p A . {\displaystyle p_{B}=rp_{A}.} The theory states that the gelation occurs only if α > α c {\displaystyle \alpha >\alpha _{c}} , where α c = 1 f − 1 {\displaystyle \alpha _{c}={\frac {1}{f-1}}} is the critical value for cross-linking and α {\displaystyle \alpha } is presented as a function of p A {\displaystyle p_{A}} , α ( p A ) = r p A 2 ρ 1 − r p A 2 ( 1 − ρ ) {\displaystyle \alpha (p_{A})={\frac {rp_{A}^{2}\rho }{1-rp_{A}^{2}(1-\rho )}}} or, alternatively, as a function of p B {\displaystyle p_{B}} , α ( p B ) = p B 2 ρ r − p B 2 ( 1 − ρ ) {\displaystyle \alpha (p_{B})={\frac {p_{B}^{2}\rho }{r-p_{B}^{2}(1-\rho )}}} . One may now substitute expressions for r , ρ {\displaystyle r,\rho } into definition of α {\displaystyle \alpha } and obtain the critical values of p A , ( p B ) {\displaystyle p_{A},(p_{B})} that admit gelation. Thus gelation occurs if p A > α c r ( α c + ρ − α c ρ ) . {\displaystyle p_{A}>{\sqrt {\frac {\alpha _{c}}{r(\alpha _{c}+\rho -\alpha _{c}\rho )}}}.} alternatively, the same condition for p B {\displaystyle p_{B}} reads, p B > r α c α c + ρ − α c ρ {\displaystyle p_{B}>{\sqrt {\frac {r\alpha _{c}}{\alpha _{c}+\rho -\alpha _{c}\rho }}}} The both inequalities are equivalent and one may use the one that is more convenient. For instance, depending on which conversion p A {\displaystyle p_{A}} or p B {\displaystyle p_{B}} is resolved analytically. === Trifunctional A monomer with difunctional B monomer === α c = 1 f − 1 = 1 3 − 1 = 1 2 {\displaystyle \alpha _{c}={\frac {1}{f-1}}={\frac {1}{3-1}}={\frac {1}{2}}} Since all the A functional groups are from the trifunctional monomer, ρ = 1 and α = p B 2 ρ r 1 − p B 2 ( 1 − ρ ) r = p B 2 r {\displaystyle \alpha ={\frac {\frac {p_{B}^{2}\rho }{r}}{1-{\frac {p_{B}^{2}(1-\rho )}{r}}}}={\frac {p_{B}^{2}}{r}}} Therefore, gelation occurs when p B 2 r > α c {\displaystyle {\frac {p_{B}^{2}}{r}}>\alpha _{c}} or when, p B > r 2 {\displaystyle p_{B}>{\sqrt {\frac {r}{2}}}} Similarly, gelation occurs when p A > 1 2 r {\displaystyle p_{A}>{\sqrt {\frac {1}{2r}}}} == References ==
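The gel-point formulas above lend themselves to a quick numerical check. The sketch below evaluates the branching coefficient α(p_A) and the critical conversion of A groups for the trifunctional A / difunctional B case worked out above; the stoichiometric ratio r = 0.9 is an arbitrary illustrative value, and the function names are not from the literature.

```python
# Numeric sketch of the Flory-Stockmayer gel-point criterion stated above.
# Concentrations and conversions below are made-up values for illustration only.
import math

def alpha(pA, r, rho):
    """Branching coefficient alpha(pA) = r*pA^2*rho / (1 - r*pA^2*(1 - rho))."""
    return r * pA**2 * rho / (1 - r * pA**2 * (1 - rho))

def critical_pA(f, r, rho):
    """Conversion of A groups above which gelation occurs:
    sqrt(alpha_c / (r*(alpha_c + rho - alpha_c*rho))) with alpha_c = 1/(f-1)."""
    alpha_c = 1.0 / (f - 1)
    return math.sqrt(alpha_c / (r * (alpha_c + rho - alpha_c * rho)))

if __name__ == "__main__":
    # Trifunctional A monomer with difunctional B monomer: all A groups sit on branch units, so rho = 1.
    f, rho, r = 3, 1.0, 0.9
    pA_c = critical_pA(f, r, rho)
    print(f"critical p_A = {pA_c:.4f}  (formula sqrt(1/(2r)) gives {math.sqrt(1/(2*r)):.4f})")
    print(f"alpha at the critical conversion = {alpha(pA_c, r, rho):.4f}  (should equal 1/(f-1) = 0.5)")
```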
Wikipedia/Flory–Stockmayer_theory
In the mathematics of shuffling playing cards, the Gilbert–Shannon–Reeds model is a probability distribution on riffle shuffle permutations. It forms the basis for a recommendation that a deck of cards should be riffled seven times in order to thoroughly randomize it. It is named after the work of Edgar Gilbert, Claude Shannon, and J. Reeds, reported in a 1955 technical report by Gilbert and in a 1981 unpublished manuscript of Reeds. == The model == A riffle shuffle permutation of a sequence of elements is obtained by partitioning the elements into two contiguous subsequences, and then arbitrarily interleaving the two subsequences. For instance, this describes many common ways of shuffling a deck of playing cards, by cutting the deck into two piles of cards that are then riffled together. The Gilbert–Shannon–Reeds model assigns a probability to each of these permutations. In this way, it describes the probability of obtaining each permutation, when a shuffle is performed at random. The model may be defined in several equivalent ways, describing alternative ways of performing this random shuffle: Most similarly to the way humans shuffle cards, the Gilbert–Shannon–Reeds model describes the probabilities obtained from a certain mathematical model of randomly cutting and then riffling a deck of cards. First, the deck is cut into two packets. If there are a total of n {\displaystyle n} cards, then the probability of selecting k {\displaystyle k} cards in the first deck and n − k {\displaystyle n-k} in the second deck is defined as ( n k ) / 2 n {\displaystyle {\tbinom {n}{k}}/2^{n}} . Then, one card at a time is repeatedly moved from the bottom of one of the packets to the top of the shuffled deck, such that if x {\displaystyle x} cards remain in one packet and y {\displaystyle y} cards remain in the other packet, then the probability of choosing a card from the first packet is x / ( x + y ) {\displaystyle x/(x+y)} and the probability of choosing a card from the second packet is y / ( x + y ) {\displaystyle y/(x+y)} . A second, alternative description can be based on a property of the model, that it generates a permutation of the initial deck in which each card is equally likely to have come from the first or the second packet. To generate a random permutation according to this model, begin by flipping a fair coin n {\displaystyle n} times, to determine for each position of the shuffled deck whether it comes from the first packet or the second packet. Then split into two packets whose sizes are the number of tails and the number of heads flipped, and use the same coin flip sequence to determine from which packet to pull each card of the shuffled deck. A third alternative description is more abstract, but lends itself better to mathematical analysis. Generate a set of n {\displaystyle n} values from the uniform continuous distribution on the unit interval, and place them in sorted order. Then the doubling map x ↦ 2 x ( mod 1 ) {\displaystyle x\mapsto 2x{\pmod {1}}} from the theory of dynamical systems maps this system of points to a permutation of the points in which the permuted ordering obeys the Gilbert–Shannon–Reeds model, and the positions of the new points are again uniformly random. Among all of the possible riffle shuffle permutations of a card deck, the Gilbert–Shannon–Reeds model gives almost all riffles equal probability, 1 / 2 n {\displaystyle 1/2^{n}} , of occurring. 
However, there is one exception, the identity permutation, which has a greater probability ( n + 1 ) / 2 n {\displaystyle (n+1)/2^{n}} of occurring. == Inverse == The inverse permutation of a random riffle may be generated directly. To do so, start with a deck of n cards and then repeatedly deal off the bottom card of the deck onto one of two piles, choosing randomly with equal probability which of the two piles to deal each card onto. Then, when all cards have been dealt, stack the two piles back together. == The effect of repeated riffles == Bayer & Diaconis (1992) analyzed mathematically the total variation distance between two probability distributions on permutations: the uniform distribution in which all permutations are equally likely, and the distribution generated by repeated applications of the Gilbert–Shannon–Reeds model. The total variation distance measures how similar or dissimilar two probability distributions are; it is zero only when the two distributions are identical, and attains a maximum value of one for probability distributions that never generate the same values as each other. Bayer and Diaconis reported that, for decks of n cards shuffled 3 2 log 2 ⁡ n + θ {\displaystyle {\tfrac {3}{2}}\log _{2}n+\theta } times, where θ is an arbitrary constant, the total variation distance is close to one when θ is significantly less than zero, and close to zero when θ is significantly greater than zero, independently of n. In particular their calculations showed that for n = 52, five riffles produce a distribution whose total variation distance from uniform is still close to one, while seven riffles give total variation distance 0.334. This result was widely reported as implying that card decks should be riffled seven times in order to thoroughly randomize them. Similar analyses have been performed using the Kullback–Leibler divergence, a distance between two probability distributions defined in terms of entropy; the divergence of a distribution from uniform can be interpreted as the number of bits of information that can still be recovered about the initial state of the card deck. The results are qualitatively different: rather than having a sharp threshold between random and non-random at 3 2 log 2 ⁡ n {\displaystyle {\tfrac {3}{2}}\log _{2}n} shuffles, as occurs for total variation distance, the divergence decays more gradually, decreasing linearly as the number of shuffles ranges from zero to log 2 ⁡ n {\displaystyle \log _{2}n} (at which point the number of remaining bits of information is linear, smaller by a logarithmic factor than its initial value) and then decreasing exponentially until, after 3 2 log 2 ⁡ n {\displaystyle {\tfrac {3}{2}}\log _{2}n} shuffles, only a constant number of bits of information remain. == References ==
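The coin-flip description of the model translates directly into a few lines of code. The following sketch performs one GSR riffle by flipping n fair coins and, over many trials on a small deck, compares the empirical frequency of the identity permutation, (n + 1)/2^n, with that of the other riffles, 1/2^n. The deck size, trial count and function name are arbitrary choices for this illustration.

```python
# Simulation sketch of the Gilbert-Shannon-Reeds riffle, using the coin-flip description above.
# Deck size and trial count are arbitrary; function names are invented for illustration.
import random
from collections import Counter

def gsr_riffle(deck, rng):
    """One GSR shuffle: n fair coin flips decide both the cut size and the interleaving."""
    n = len(deck)
    coins = [rng.randrange(2) for _ in range(n)]   # 0 = take from first packet, 1 = from second
    k = coins.count(0)
    first, second = list(deck[:k]), list(deck[k:])
    out = []
    for c in coins:
        out.append(first.pop(0) if c == 0 else second.pop(0))
    return out

if __name__ == "__main__":
    rng = random.Random(1)
    n, trials = 4, 200000
    counts = Counter(tuple(gsr_riffle(list(range(n)), rng)) for _ in range(trials))
    ident = counts[tuple(range(n))] / trials
    others = sorted(v / trials for k, v in counts.items() if k != tuple(range(n)))
    print(f"identity: {ident:.4f}  (theory (n+1)/2^n = {(n + 1) / 2**n:.4f})")
    print(f"other riffles: min {others[0]:.4f}, max {others[-1]:.4f}  (theory 1/2^n = {1 / 2**n:.4f})")
```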
Wikipedia/Gilbert–Shannon–Reeds_model
In mathematics, the blancmange curve is a self-affine fractal curve constructible by midpoint subdivision. It is also known as the Takagi curve, after Teiji Takagi who described it in 1901, or as the Takagi–Landsberg curve, a generalization of the curve named after Takagi and Georg Landsberg. The name blancmange comes from its resemblance to a Blancmange pudding. It is a special case of the more general de Rham curve. == Definition == The blancmange function is defined on the unit interval by blanc ⁡ ( x ) = ∑ n = 0 ∞ s ( 2 n x ) 2 n , {\displaystyle \operatorname {blanc} (x)=\sum _{n=0}^{\infty }{s(2^{n}x) \over 2^{n}},} where s ( x ) {\displaystyle s(x)} is the triangle wave, defined by s ( x ) = min n ∈ Z | x − n | {\displaystyle s(x)=\min _{n\in {\mathbf {Z} }}|x-n|} , that is, s ( x ) {\displaystyle s(x)} is the distance from x to the nearest integer. The Takagi–Landsberg curve is a slight generalization, given by T w ( x ) = ∑ n = 0 ∞ w n s ( 2 n x ) {\displaystyle T_{w}(x)=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)} for a parameter w {\displaystyle w} ; thus the blancmange curve is the case w = 1 / 2 {\displaystyle w=1/2} . The value H = − log 2 ⁡ w {\displaystyle H=-\log _{2}w} is known as the Hurst parameter. The function can be extended to all of the real line: applying the definition given above shows that the function repeats on each unit interval. === Functional equation definition === The periodic version of the Takagi curve can also be defined as the unique bounded solution T = T w : R → R {\displaystyle T=T_{w}:\mathbb {R} \to \mathbb {R} } to the functional equation T ( x ) = s ( x ) + w T ( 2 x ) . {\displaystyle T(x)=s(x)+wT(2x).} Indeed, the blancmange function T w {\displaystyle T_{w}} is certainly bounded, and solves the functional equation, since T w ( x ) := ∑ n = 0 ∞ w n s ( 2 n x ) = s ( x ) + ∑ n = 1 ∞ w n s ( 2 n x ) {\displaystyle T_{w}(x):=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)=s(x)+\sum _{n=1}^{\infty }w^{n}s(2^{n}x)} = s ( x ) + w ∑ n = 0 ∞ w n s ( 2 n + 1 x ) = s ( x ) + w T w ( 2 x ) . {\displaystyle =s(x)+w\sum _{n=0}^{\infty }w^{n}s(2^{n+1}x)=s(x)+wT_{w}(2x).} Conversely, if T : R → R {\displaystyle T:\mathbb {R} \to \mathbb {R} } is a bounded solution of the functional equation, iterating the equality one has for any N T ( x ) = ∑ n = 0 N w n s ( 2 n x ) + w N + 1 T ( 2 N + 1 x ) = ∑ n = 0 N w n s ( 2 n x ) + o ( 1 ) , for N → ∞ , {\displaystyle T(x)=\sum _{n=0}^{N}w^{n}s(2^{n}x)+w^{N+1}T(2^{N+1}x)=\sum _{n=0}^{N}w^{n}s(2^{n}x)+o(1),{\text{ for }}N\to \infty ,} whence T = T w {\displaystyle T=T_{w}} . Incidentally, the above functional equations possesses infinitely many continuous, non-bounded solutions, e.g. T w ( x ) + c | x | − log 2 ⁡ w . {\displaystyle T_{w}(x)+c|x|^{-\log _{2}w}.} === Graphical construction === The blancmange curve can be visually built up out of triangle wave functions if the infinite sum is approximated by finite sums of the first few terms. In the illustrations below, progressively finer triangle functions (shown in red) are added to the curve at each stage. == Properties == === Convergence and continuity === The infinite sum defining T w ( x ) {\displaystyle T_{w}(x)} converges absolutely for all x . 
{\displaystyle x.} Since 0 ≤ s ( x ) ≤ 1 / 2 {\displaystyle 0\leq s(x)\leq 1/2} for all x ∈ R , {\displaystyle x\in \mathbb {R} ,} ∑ n = 0 ∞ | w n s ( 2 n x ) | ≤ 1 2 ∑ n = 0 ∞ | w | n = 1 2 ⋅ 1 1 − | w | {\displaystyle \sum _{n=0}^{\infty }|w^{n}s(2^{n}x)|\leq {\frac {1}{2}}\sum _{n=0}^{\infty }|w|^{n}={\frac {1}{2}}\cdot {\frac {1}{1-|w|}}} if | w | < 1. {\displaystyle |w|<1.} The Takagi curve of parameter w {\displaystyle w} is defined on the unit interval (or R {\displaystyle \mathbb {R} } ) if | w | < 1 {\displaystyle |w|<1} . The Takagi function of parameter w {\displaystyle w} is continuous. The functions T w , n {\displaystyle T_{w,n}} defined by the partial sums T w , n ( x ) = ∑ k = 0 n w k s ( 2 k x ) {\displaystyle T_{w,n}(x)=\sum _{k=0}^{n}w^{k}s(2^{k}x)} are continuous and converge uniformly toward T w : {\displaystyle T_{w}:} | T w ( x ) − T w , n ( x ) | = | ∑ k = n + 1 ∞ w k s ( 2 k x ) | = | w n + 1 ∑ k = 0 ∞ w k s ( 2 k + n + 1 x ) | ≤ | w | n + 1 2 ⋅ 1 1 − | w | {\displaystyle {\begin{aligned}\left|T_{w}(x)-T_{w,n}(x)\right|&=\left|\sum _{k=n+1}^{\infty }w^{k}s(2^{k}x)\right|\\&=\left|w^{n+1}\sum _{k=0}^{\infty }w^{k}s(2^{k+n+1}x)\right|\\&\leq {\frac {|w|^{n+1}}{2}}\cdot {\frac {1}{1-|w|}}\end{aligned}}} for all x when | w | < 1. {\displaystyle |w|<1.} This bound decreases as n → ∞ . {\displaystyle n\to \infty .} By the uniform limit theorem, T w {\displaystyle T_{w}} is continuous if |w| < 1. === Subadditivity === Since the absolute value is a subadditive function so is the function s ( x ) = min n ∈ Z | x − n | {\displaystyle s(x)=\min _{n\in {\mathbf {Z} }}|x-n|} , and its dilations s ( 2 k x ) {\displaystyle s(2^{k}x)} ; since positive linear combinations and point-wise limits of subadditive functions are subadditive, the Takagi function is subadditive for any value of the parameter w {\displaystyle w} . === The special case of the parabola === For w = 1 / 4 {\displaystyle w=1/4} , one obtains the parabola: the construction of the parabola by midpoint subdivision was described by Archimedes. === Differentiability === For values of the parameter 0 < w < 1 / 2 , {\displaystyle 0<w<1/2,} the Takagi function T w {\displaystyle T_{w}} is differentiable in the classical sense at any x ∈ R {\displaystyle x\in \mathbb {R} } which is not a dyadic rational. By derivation under the sign of series, for any non dyadic rational x ∈ R , {\displaystyle x\in \mathbb {R} ,} one finds T w ′ ( x ) = ∑ n = 0 ∞ ( 2 w ) n ( 2 b n − 1 ) {\displaystyle T_{w}^{\prime }(x)=\sum _{n=0}^{\infty }(2w)^{n}\,(2b_{n}-1)} where ( b n ) n ∈ N ∈ { 0 , 1 } N {\displaystyle (b_{n})_{n\in \mathbb {N} }\in \{0,1\}^{\mathbb {N} }} is the sequence of binary digits in the base 2 expansion of x {\displaystyle x} : x = ∑ n = − k ∞ b n 2 − n − 1 . {\displaystyle x=\sum _{n=-k}^{\infty }b_{n}2^{-n-1}\;.} Equivalently, the bits in the binary expansion can be understood as a sequence of square waves, the Haar wavelets, scaled to width 2 − n . {\displaystyle 2^{-n}.} This follows, since the derivative of the triangle wave is just the square wave: d d x s ( x ) = sgn ⁡ ( 1 / 2 − ( x mod 1 ) ) {\displaystyle {\frac {d}{dx}}s(x)=\operatorname {sgn}(1/2-(x\!\!\!\mod 1))} and so T w ′ ( x ) = ∑ n = 0 ∞ ( 2 w ) n sgn ⁡ ( 1 / 2 − ( 2 n x mod 1 ) ) {\displaystyle T_{w}^{\prime }(x)=\sum _{n=0}^{\infty }(2w)^{n}\operatorname {sgn}(1/2-(2^{n}x\!\!\!\mod 1))} For the parameter 0 < w < 1 / 2 , {\displaystyle 0<w<1/2,} the function T w {\displaystyle T_{w}} is Lipschitz of constant 1 / ( 1 − 2 w ) . 
{\displaystyle 1/(1-2w).} In particular for the special value w = 1 / 4 {\displaystyle w=1/4} one finds, for any non dyadic rational x ∈ [ 0 , 1 ] {\displaystyle x\in [0,1]} T 1 / 4 ′ ( x ) = 2 − 4 x {\displaystyle T_{1/4}'(x)=2-4x} , according with the mentioned T 1 / 4 ( x ) = 2 x ( 1 − x ) . {\displaystyle T_{1/4}(x)=2x(1-x).} For w = 1 / 2 {\displaystyle w=1/2} the blancmange function T w {\displaystyle T_{w}} it is of bounded variation on no non-empty open set; it is not even locally Lipschitz, but it is quasi-Lipschitz, indeed, it admits the function ω ( t ) := t ( | log 2 ⁡ t | + 1 / 2 ) {\displaystyle \omega (t):=t(|\log _{2}t|+1/2)} as a modulus of continuity . === Fourier series expansion === The Takagi–Landsberg function admits an absolutely convergent Fourier series expansion: T w ( x ) = ∑ m = 0 ∞ a m cos ⁡ ( 2 π m x ) {\displaystyle T_{w}(x)=\sum _{m=0}^{\infty }a_{m}\cos(2\pi mx)} with a 0 = 1 / 4 ( 1 − w ) {\displaystyle a_{0}=1/4(1-w)} and, for m ≥ 1 {\displaystyle m\geq 1} a m := − 2 π 2 m 2 ( 4 w ) ν ( m ) , {\displaystyle a_{m}:=-{\frac {2}{\pi ^{2}m^{2}}}(4w)^{\nu (m)},} where 2 ν ( m ) {\displaystyle 2^{\nu (m)}} is the maximum power of 2 {\displaystyle 2} that divides m {\displaystyle m} . Indeed, the above triangle wave s ( x ) {\displaystyle s(x)} has an absolutely convergent Fourier series expansion s ( x ) = 1 4 − 2 π 2 ∑ k = 0 ∞ 1 ( 2 k + 1 ) 2 cos ⁡ ( 2 π ( 2 k + 1 ) x ) . {\displaystyle s(x)={\frac {1}{4}}-{\frac {2}{\pi ^{2}}}\sum _{k=0}^{\infty }{\frac {1}{(2k+1)^{2}}}\cos {\big (}2\pi (2k+1)x{\big )}.} By absolute convergence, one can reorder the corresponding double series for T w ( x ) {\displaystyle T_{w}(x)} : T w ( x ) := ∑ n = 0 ∞ w n s ( 2 n x ) = 1 4 ∑ n = 0 ∞ w n − 2 π 2 ∑ n = 0 ∞ ∑ k = 0 ∞ w n ( 2 k + 1 ) 2 cos ⁡ ( 2 π 2 n ( 2 k + 1 ) x ) : {\displaystyle T_{w}(x):=\sum _{n=0}^{\infty }w^{n}s(2^{n}x)={\frac {1}{4}}\sum _{n=0}^{\infty }w^{n}-{\frac {2}{\pi ^{2}}}\sum _{n=0}^{\infty }\sum _{k=0}^{\infty }{\frac {w^{n}}{(2k+1)^{2}}}\cos {\big (}2\pi 2^{n}(2k+1)x{\big )}\,:} putting m = 2 n ( 2 k + 1 ) {\displaystyle m=2^{n}(2k+1)} yields the above Fourier series for T w ( x ) . {\displaystyle T_{w}(x).} === Self similarity === The recursive definition allows the monoid of self-symmetries of the curve to be given. This monoid is given by two generators, g and r, which act on the curve (restricted to the unit interval) as [ g ⋅ T w ] ( x ) = T w ( g ⋅ x ) = T w ( x 2 ) = x 2 + w T w ( x ) {\displaystyle [g\cdot T_{w}](x)=T_{w}\left(g\cdot x\right)=T_{w}\left({\frac {x}{2}}\right)={\frac {x}{2}}+wT_{w}(x)} and [ r ⋅ T w ] ( x ) = T w ( r ⋅ x ) = T w ( 1 − x ) = T w ( x ) . {\displaystyle [r\cdot T_{w}](x)=T_{w}(r\cdot x)=T_{w}(1-x)=T_{w}(x).} A general element of the monoid then has the form γ = g a 1 r g a 2 r ⋯ r g a n {\displaystyle \gamma =g^{a_{1}}rg^{a_{2}}r\cdots rg^{a_{n}}} for some integers a 1 , a 2 , ⋯ , a n {\displaystyle a_{1},a_{2},\cdots ,a_{n}} This acts on the curve as a linear function: γ ⋅ T w = a + b x + c T w {\displaystyle \gamma \cdot T_{w}=a+bx+cT_{w}} for some constants a, b and c. 
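A quick numerical check of the simplest of these symmetries can be carried out with truncated partial sums. The sketch below evaluates T_w by its defining series and verifies both the relation T_w(x/2) = x/2 + w T_w(x) used above and the parabola identity T_{1/4}(x) = 2x(1−x); the truncation depth and sample points are arbitrary, and the helper names are illustrative.

```python
# Numerical sketch of the Takagi-Landsberg partial sums and the self-affinity relation above.
# Truncation depth and sample points are arbitrary; names are invented for illustration.
def s(x):
    """Triangle wave: distance from x to the nearest integer."""
    return abs(x - round(x))

def takagi(x, w, terms=40):
    """Partial sum T_w(x) ~ sum_{n < terms} w^n * s(2^n * x)."""
    return sum(w**n * s(2**n * x) for n in range(terms))

if __name__ == "__main__":
    w = 0.5
    for x in (0.1, 0.3, 0.7):
        lhs = takagi(x / 2, w)
        rhs = x / 2 + w * takagi(x, w)
        print(f"x = {x}: T_w(x/2) = {lhs:.6f}   x/2 + w*T_w(x) = {rhs:.6f}")
    # The special case w = 1/4 reproduces the parabola 2x(1-x).
    for x in (0.1, 0.3, 0.7):
        print(f"T_1/4({x}) = {takagi(x, 0.25):.6f}   vs   2x(1-x) = {2 * x * (1 - x):.6f}")
```

The same truncated series can be reused to test any composite symmetry of the form described next.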
Because the action is linear, it can be described in terms of a vector space, with the vector space basis: 1 ↦ e 1 = [ 1 0 0 ] {\displaystyle 1\mapsto e_{1}={\begin{bmatrix}1\\0\\0\end{bmatrix}}} x ↦ e 2 = [ 0 1 0 ] {\displaystyle x\mapsto e_{2}={\begin{bmatrix}0\\1\\0\end{bmatrix}}} T w ↦ e 3 = [ 0 0 1 ] {\displaystyle T_{w}\mapsto e_{3}={\begin{bmatrix}0\\0\\1\end{bmatrix}}} In this representation, the action of g and r are given by g = [ 1 0 0 0 1 2 1 2 0 0 w ] {\displaystyle g={\begin{bmatrix}1&0&0\\0&{\frac {1}{2}}&{\frac {1}{2}}\\0&0&w\end{bmatrix}}} and r = [ 1 1 0 0 − 1 0 0 0 1 ] {\displaystyle r={\begin{bmatrix}1&1&0\\0&-1&0\\0&0&1\end{bmatrix}}} That is, the action of a general element γ {\displaystyle \gamma } maps the blancmange curve on the unit interval [0,1] to a sub-interval [ m / 2 p , n / 2 p ] {\displaystyle [m/2^{p},n/2^{p}]} for some integers m, n, p. The mapping is given exactly by [ γ ⋅ T w ] ( x ) = a + b x + c T w ( x ) {\displaystyle [\gamma \cdot T_{w}](x)=a+bx+cT_{w}(x)} where the values of a, b and c can be obtained directly by multiplying out the above matrices. That is: γ = [ 1 m 2 p a 0 n − m 2 p b 0 0 c ] {\displaystyle \gamma ={\begin{bmatrix}1&{\frac {m}{2^{p}}}&a\\0&{\frac {n-m}{2^{p}}}&b\\0&0&c\end{bmatrix}}} Note that p = a 1 + a 2 + ⋯ + a n {\displaystyle p=a_{1}+a_{2}+\cdots +a_{n}} is immediate. The monoid generated by g and r is sometimes called the dyadic monoid; it is a sub-monoid of the modular group. When discussing the modular group, the more common notation for g and r is T and S, but that notation conflicts with the symbols used here. The above three-dimensional representation is just one of many representations it can have; it shows that the blancmange curve is one possible realization of the action. That is, there are representations for any dimension, not just 3; some of these give the de Rham curves. == Integrating the Blancmange curve == Given that the integral of blanc ⁡ ( x ) {\displaystyle \operatorname {blanc} (x)} from 0 to 1 is 1/2, the identity blanc ⁡ ( x ) = blanc ⁡ ( 2 x ) / 2 + s ( x ) {\displaystyle \operatorname {blanc} (x)=\operatorname {blanc} (2x)/2+s(x)} allows the integral over any interval to be computed by the following relation. The computation is recursive with computing time on the order of log of the accuracy required. Defining I ( x ) = ∫ 0 x blanc ⁡ ( y ) d y {\displaystyle I(x)=\int _{0}^{x}\operatorname {blanc} (y)\,dy} one has that I ( x ) = { I ( 2 x ) / 4 + x 2 / 2 if 0 ≤ x ≤ 1 / 2 1 / 2 − I ( 1 − x ) if 1 / 2 ≤ x ≤ 1 n / 2 + I ( x − n ) if n ≤ x ≤ ( n + 1 ) {\displaystyle I(x)={\begin{cases}I(2x)/4+x^{2}/2&{\text{if }}0\leq x\leq 1/2\\1/2-I(1-x)&{\text{if }}1/2\leq x\leq 1\\n/2+I(x-n)&{\text{if }}n\leq x\leq (n+1)\\\end{cases}}} The definite integral is given by: ∫ a b blanc ⁡ ( y ) d y = I ( b ) − I ( a ) . 
{\displaystyle \int _{a}^{b}\operatorname {blanc} (y)\,dy=I(b)-I(a).} A more general expression can be obtained by defining S ( x ) = ∫ 0 x s ( y ) d y = { x 2 / 2 , 0 ≤ x ≤ 1 2 − x 2 / 2 + x − 1 / 4 , 1 2 ≤ x ≤ 1 n / 4 + S ( x − n ) , ( n ≤ x ≤ n + 1 ) {\displaystyle S(x)=\int _{0}^{x}s(y)dy={\begin{cases}x^{2}/2,&0\leq x\leq {\frac {1}{2}}\\-x^{2}/2+x-1/4,&{\frac {1}{2}}\leq x\leq 1\\n/4+S(x-n),&(n\leq x\leq n+1)\end{cases}}} which, combined with the series representation, gives I w ( x ) = ∫ 0 x T w ( y ) d y = ∑ n = 0 ∞ ( w / 2 ) n S ( 2 n x ) {\displaystyle I_{w}(x)=\int _{0}^{x}T_{w}(y)dy=\sum _{n=0}^{\infty }(w/2)^{n}S(2^{n}x)} Note that I w ( 1 ) = 1 4 ( 1 − w ) {\displaystyle I_{w}(1)={\frac {1}{4(1-w)}}} This integral is also self-similar on the unit interval, under an action of the dyadic monoid described in the section Self similarity. Here, the representation is 4-dimensional, having the basis { e 1 , e 2 , e 3 , e 4 } = { 1 , x , x 2 , I w ( x ) } {\displaystyle \{e_{1},e_{2},e_{3},e_{4}\}=\{1,x,x^{2},I_{w}(x)\}} . The action of g on the unit interval is the commuting diagram [ g ⋅ I w ] ( x ) = I w ( g ⋅ x ) = I w ( x 2 ) = x 2 8 + w 2 I w ( x ) . {\displaystyle [g\cdot I_{w}](x)=I_{w}\left(g\cdot x\right)=I_{w}\left({\frac {x}{2}}\right)={\frac {x^{2}}{8}}+{\frac {w}{2}}I_{w}(x).} From this, one can then immediately read off the generators of the four-dimensional representation: g = [ 1 0 0 0 0 1 2 0 0 0 0 1 4 1 8 0 0 0 w 2 ] {\displaystyle g={\begin{bmatrix}1&0&0&0\\0&{\frac {1}{2}}&0&0\\0&0&{\frac {1}{4}}&{\frac {1}{8}}\\0&0&0&{\frac {w}{2}}\end{bmatrix}}} and r = [ 1 1 1 1 4 ( 1 − w ) 0 − 1 − 2 0 0 0 1 0 0 0 0 − 1 ] {\displaystyle r={\begin{bmatrix}1&1&1&{\frac {1}{4(1-w)}}\\0&-1&-2&0\\0&0&1&0\\0&0&0&-1\end{bmatrix}}} Repeated integrals transform under a 5,6,... dimensional representation. == Relation to simplicial complexes == Let N = ( n t t ) + ( n t − 1 t − 1 ) + … + ( n j j ) , n t > n t − 1 > … > n j ≥ j ≥ 1. {\displaystyle N={\binom {n_{t}}{t}}+{\binom {n_{t-1}}{t-1}}+\ldots +{\binom {n_{j}}{j}},\quad n_{t}>n_{t-1}>\ldots >n_{j}\geq j\geq 1.} Define the Kruskal–Katona function κ t ( N ) = ( n t t + 1 ) + ( n t − 1 t ) + ⋯ + ( n j j + 1 ) . {\displaystyle \kappa _{t}(N)={n_{t} \choose t+1}+{n_{t-1} \choose t}+\dots +{n_{j} \choose j+1}.} The Kruskal–Katona theorem states that this is the minimum number of (t − 1)-simplexes that are faces of a set of N t-simplexes. As t and N approach infinity, κ t ( N ) − N {\displaystyle \kappa _{t}(N)-N} (suitably normalized) approaches the blancmange curve. == See also == Cantor function (also known as the Devil's staircase) Minkowski's question mark function Weierstrass function Dyadic transformation == References == Weisstein, Eric W. "Blancmange Function". MathWorld. Takagi, Teiji (1901), "A Simple Example of the Continuous Function without Derivative", Proc. Phys.-Math. Soc. Jpn., 1: 176–177, doi:10.11429/subutsuhokoku1901.1.F176 Benoit Mandelbrot, "Fractal Landscapes without creases and with rivers", appearing in The Science of Fractal Images, ed. Heinz-Otto Peitgen, Dietmar Saupe; Springer-Verlag (1988) pp 243–260. Linas Vepstas, Symmetries of Period-Doubling Maps, (2004) Donald Knuth, The Art of Computer Programming, volume 4a. Combinatorial algorithms, part 1. ISBN 0-201-03804-8. See pages 372–375. == Further reading == Allaart, Pieter C.; Kawamura, Kiko (11 October 2011), The Takagi function: a survey, arXiv:1110.1691, Bibcode:2011arXiv1110.1691A Lagarias, Jeffrey C. 
(17 December 2011), The Takagi Function and Its Properties, arXiv:1112.4205, Bibcode:2011arXiv1112.4205L == External links == Takagi Explorer (Some properties of the Takagi function)
Wikipedia/Takagi_function
Entropy is a scientific concept, most commonly associated with states of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change and information systems including the transmission of information in telecommunication. Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible. The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation. Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behaviour, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, which has become one of the defining universal constants for the modern International System of Units. == History == In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. 
Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body". The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. == Etymology == In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy (Entropie) after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation'). In more detail, Clausius explained his choice of "entropy" as a name as follows: I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". 
I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful. Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing". == Definitions and descriptions == The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modelled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes. === State variables and functions of state === Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero. === Reversible process === The entropy change d S {\textstyle \mathrm {d} S} of a system can be well-defined as a small portion of heat δ Q r e v {\textstyle \delta Q_{\mathsf {rev}}} transferred from the surroundings to the system during a reversible process divided by the temperature T {\textstyle T} of the system during this heat transfer: d S = δ Q r e v T {\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}} The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible. In contrast, an irreversible process increases the total entropy of the system and surroundings. 
Any process that happens quickly enough to deviate from the thermal equilibrium cannot be reversible; the total entropy increases, and the potential for maximum work to be done during the process is lost. === Carnot cycle === The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, the heat Q H {\textstyle Q_{\mathsf {H}}} is transferred from a hot reservoir to a working gas at the constant temperature T H {\textstyle T_{\mathsf {H}}} during isothermal expansion stage and the heat Q C {\textstyle Q_{\mathsf {C}}} is transferred from a working gas to a cold reservoir at the constant temperature T C {\textstyle T_{\mathsf {C}}} during isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce a work W {\textstyle W} if and only if there is a temperature difference between reservoirs. Originally, Carnot did not distinguish between heats Q H {\textstyle Q_{\mathsf {H}}} and Q C {\textstyle Q_{\mathsf {C}}} , as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat Q H {\textstyle Q_{\mathsf {H}}} is greater than the magnitude of heat Q C {\textstyle Q_{\mathsf {C}}} . Through the efforts of Clausius and Kelvin, the work W {\textstyle W} done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat Q H {\textstyle Q_{\mathsf {H}}} absorbed by a working body of the engine during isothermal expansion: W = T H − T C T H ⋅ Q H = ( 1 − T C T H ) Q H {\displaystyle W={\frac {T_{\mathsf {H}}-T_{\mathsf {C}}}{T_{\mathsf {H}}}}\cdot Q_{\mathsf {H}}=\left(1-{\frac {T_{\mathsf {C}}}{T_{\mathsf {H}}}}\right)Q_{\mathsf {H}}} To derive the Carnot efficiency Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is known that a work W > 0 {\textstyle W>0} produced by an engine over a cycle equals to a net heat Q Σ = | Q H | − | Q C | {\textstyle Q_{\Sigma }=\left\vert Q_{\mathsf {H}}\right\vert -\left\vert Q_{\mathsf {C}}\right\vert } absorbed over a cycle. Thus, with the sign convention for a heat Q {\textstyle Q} transferred in a thermodynamic process ( Q > 0 {\textstyle Q>0} for an absorption and Q < 0 {\textstyle Q<0} for a dissipation) we get: W − Q Σ = W − | Q H | + | Q C | = W − Q H − Q C = 0 {\displaystyle W-Q_{\Sigma }=W-\left\vert Q_{\mathsf {H}}\right\vert +\left\vert Q_{\mathsf {C}}\right\vert =W-Q_{\mathsf {H}}-Q_{\mathsf {C}}=0} Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between a work and a net heat would be conserved, rather than a net heat itself. Which means there exists a state function U {\textstyle U} with a change of d U = δ Q − d W {\textstyle \mathrm {d} U=\delta Q-\mathrm {d} W} . It is called an internal energy and forms a central concept for the first law of thermodynamics. 
Finally, comparison for both the representations of a work output in a Carnot cycle gives us: | Q H | T H − | Q C | T C = Q H T H + Q C T C = 0 {\displaystyle {\frac {\left\vert Q_{\mathsf {H}}\right\vert }{T_{\mathsf {H}}}}-{\frac {\left\vert Q_{\mathsf {C}}\right\vert }{T_{\mathsf {C}}}}={\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}+{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}=0} Similarly to the derivation of internal energy, this equality implies existence of a state function S {\textstyle S} with a change of d S = δ Q / T {\textstyle \mathrm {d} S=\delta Q/T} and which is conserved over an entire cycle. Clausius called this state function entropy. In addition, the total change of entropy in both thermal reservoirs over Carnot cycle is zero too, since the inversion of a heat transfer direction means a sign inversion for the heat transferred during isothermal stages: − Q H T H − Q C T C = Δ S r , H + Δ S r , C = 0 {\displaystyle -{\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}-{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}=\Delta S_{\mathsf {r,H}}+\Delta S_{\mathsf {r,C}}=0} Here we denote the entropy change for a thermal reservoir by Δ S r , i = − Q i / T i {\textstyle \Delta S_{{\mathsf {r}},i}=-Q_{i}/T_{i}} , where i {\textstyle i} is either H {\textstyle {\mathsf {H}}} for a hot reservoir or C {\textstyle {\mathsf {C}}} for a cold one. If we consider a heat engine which is less effective than Carnot cycle (i.e., the work W {\textstyle W} produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by Carnot efficiency as: W < ( 1 − T C T H ) Q H {\displaystyle W<\left(1-{\frac {T_{\mathsf {C}}}{T_{\mathsf {H}}}}\right)Q_{\mathsf {H}}} Substitution of the work W {\textstyle W} as the net heat into the inequality above gives us: Q H T H + Q C T C < 0 {\displaystyle {\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}+{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}<0} or in terms of the entropy change Δ S r , i {\textstyle \Delta S_{{\mathsf {r}},i}} : Δ S r , H + Δ S r , C > 0 {\displaystyle \Delta S_{\mathsf {r,H}}+\Delta S_{\mathsf {r,C}}>0} A Carnot cycle and an entropy as shown above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as an Otto, Diesel or Brayton cycle, could be analysed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., heat engine) that is claimed to produce an efficiency greater than the one of Carnot is not viable — due to violation of the second law of thermodynamics. For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics. === Classical thermodynamics === The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. 
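The bookkeeping in the Carnot analysis above is simple enough to reproduce numerically. The sketch below takes arbitrary reservoir temperatures and an arbitrary heat intake, computes the reversible work from the Carnot efficiency, and checks that the reservoir entropy changes cancel for the reversible engine and are strictly positive for a less efficient one; all numerical values are illustrative and not taken from any source.

```python
# Numerical sketch of the Carnot-cycle entropy bookkeeping derived above.
# Temperatures and heats are arbitrary illustrative values (SI units assumed).
T_hot, T_cold = 500.0, 300.0          # reservoir temperatures in kelvin
Q_hot = 1000.0                        # heat absorbed from the hot reservoir, in joules

# Reversible (Carnot) engine: W = (1 - T_C/T_H) * Q_H, and the reservoir entropy changes cancel.
W_carnot = (1 - T_cold / T_hot) * Q_hot
Q_cold = Q_hot - W_carnot             # heat released to the cold reservoir
dS_reservoirs = -Q_hot / T_hot + Q_cold / T_cold
print(f"Carnot work = {W_carnot:.1f} J, reservoir entropy change = {dS_reservoirs:.6f} J/K (zero)")

# A less efficient engine producing only 80% of the Carnot work: total reservoir entropy increases.
W_real = 0.8 * W_carnot
Q_cold_real = Q_hot - W_real
dS_real = -Q_hot / T_hot + Q_cold_real / T_cold
print(f"Irreversible engine: reservoir entropy change = {dS_real:.4f} J/K (> 0)")
```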
Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system. While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur. According to the Clausius equality, for a reversible cyclic thermodynamic process: ∮ δ Q r e v T = 0 {\displaystyle \oint {\frac {\delta Q_{\mathsf {rev}}}{T}}=0} which means the line integral ∫ L δ Q r e v / T {\textstyle \int _{L}{\delta Q_{\mathsf {rev}}/T}} is path-independent. Thus we can define a state function S {\textstyle S} , called entropy: d S = δ Q r e v T {\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}} Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI). To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since an entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different as well as its entropy change. We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at the absolute zero have an entropy S = 0 {\textstyle S=0} . From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process, where the system gives up Δ E {\displaystyle \Delta E} of energy to the surrounding at the temperature T {\textstyle T} , its entropy falls by Δ S {\textstyle \Delta S} and at least T ⋅ Δ S {\textstyle T\cdot \Delta S} of that energy must be given up to the system's surroundings as a heat. Otherwise, this process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined). === Statistical mechanics === The statistical definition was developed by Ludwig Boltzmann in the 1870s by analysing the statistical behaviour of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature. 
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1). Specifically, entropy is a logarithmic measure for the system with a number of states, each with a probability p i {\textstyle p_{i}} of being occupied (usually given by the Boltzmann distribution): S = − k B ∑ i p i ln ⁡ p i {\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}} where k B {\textstyle k_{\mathsf {B}}} is the Boltzmann constant and the summation is performed over all possible microstates of the system. In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied: S = − k B ⟨ ln ⁡ p ⟩ {\displaystyle S=-k_{\mathsf {B}}\left\langle \ln {p}\right\rangle } This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In a general case the expression is: S = − k B t r ( ρ ^ × ln ⁡ ρ ^ ) {\displaystyle S=-k_{\mathsf {B}}\ \mathrm {tr} {\left({\hat {\rho }}\times \ln {\hat {\rho }}\right)}} where ρ ^ {\textstyle {\hat {\rho }}} is a density matrix, t r {\displaystyle \mathrm {tr} } is a trace operator and ln {\displaystyle \ln } is a matrix logarithm. The density matrix formalism is not required if the system is in thermal equilibrium so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes it can be taken as the fundamental definition of entropy since all other formulae for S {\textstyle S} can be derived from it, but not vice versa. 
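The formulas quoted above can be evaluated directly once the occupation probabilities, or the density matrix, are known. Below is a minimal Python sketch; the four-level spectrum and the temperature are made-up illustrative inputs, and the density-matrix case is restricted to a real, diagonal example for simplicity:

import numpy as np

k_B = 1.380649e-23   # Boltzmann constant in J/K

def gibbs_entropy(p):
    # S = -k_B * sum_i p_i ln p_i, with 0 ln 0 taken as 0
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -k_B * np.sum(p * np.log(p))

def density_matrix_entropy(rho):
    # S = -k_B tr(rho ln rho), computed from the eigenvalues of rho
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-15]
    return -k_B * np.sum(eig * np.log(eig))

# Boltzmann distribution over a made-up four-level spectrum
E = np.array([0.0, 1.0, 2.0, 3.0]) * 1.0e-21   # energies in joules (illustrative)
T = 300.0                                       # temperature in kelvins (illustrative)
w = np.exp(-E / (k_B * T))
p = w / w.sum()

print(gibbs_entropy(p))                         # entropy of the four-level system in J/K
print(density_matrix_entropy(np.diag(p)))       # same value: a diagonal rho reproduces the sum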
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability p i = 1 / Ω {\textstyle p_{i}=1/\Omega } , where Ω {\textstyle \Omega } is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in a thermodynamic equilibrium. Then in case of an isolated system the previous formula reduces to: S = k B ln ⁡ Ω {\displaystyle S=k_{\mathsf {B}}\ln {\Omega }} In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble. The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model. The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using variables U {\textstyle U} , V {\textstyle V} , W {\textstyle W} and observer B using variables U {\textstyle U} , V {\textstyle V} , W {\textstyle W} , X {\textstyle X} . If observer B changes variable X {\textstyle X} , then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable X {\textstyle X} and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during experiment. Entropy can also be defined for any Markov processes with reversible dynamics and the detailed balance property. In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics. === Entropy of a system === In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. 
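The net entropy increase described for the ice-water example can be sketched with a simple two-reservoir estimate in Python, treating the warm room and the cold glass as bodies at approximately constant temperatures while a small amount of heat flows between them (all numbers are illustrative):

# Heat dQ leaves the warm room and enters the cold ice water
T_room = 293.0         # kelvins (illustrative)
T_ice_water = 273.0    # kelvins (illustrative)
dQ = 100.0             # joules transferred; small enough that the temperatures barely change

dS_room = -dQ / T_room             # the room loses entropy
dS_ice_water = dQ / T_ice_water    # the ice water gains more entropy than the room loses

print(dS_room)                     # about -0.341 J/K
print(dS_ice_water)                # about +0.366 J/K
print(dS_room + dS_ice_water)      # > 0: net entropy increase for the isolated "universe"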
Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalisation has progressed. Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do. Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds. One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine. A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing. === Equivalence of definitions === Proofs of equivalence between the entropy in statistical mechanics — the Gibbs entropy formula: S = − k B ∑ i p i ln ⁡ p i {\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}} and the entropy in classical thermodynamics: d S = δ Q r e v T {\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}} together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. 
These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average U = ⟨ E i ⟩ {\textstyle U=\left\langle E_{i}\right\rangle } . Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution. Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only definition that is equivalent to the classical thermodynamic entropy under a natural set of postulates. == Second law of thermodynamics == The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient. It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total entropy of the room and the environment increases, in agreement with the second law of thermodynamics. In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T {\textstyle T} absorbing an infinitesimal amount of heat δ q {\textstyle \delta q} in a reversible way is given by δ q / T {\textstyle \delta q/T} . More explicitly, an energy T R S {\textstyle T_{R}S} is not available to do useful work, where T R {\textstyle T_{R}} is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy. Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely. The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. 
For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximises its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state. == Applications == === The fundamental thermodynamic relation === The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U {\textstyle U} to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure p {\textstyle p} bears on the volume V {\textstyle V} as the only external parameter, this relation is: d U = T d S − p d V {\displaystyle \mathrm {d} U=T\ \mathrm {d} S-p\ \mathrm {d} V} Since both internal energy and entropy are monotonic functions of temperature T {\textstyle T} , implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist). The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities. === Entropy in chemical thermodynamics === Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation introduces the measurement of entropy change which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from hotter body to cooler one spontaneously. Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1. Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q r e v / T {\textstyle q_{\mathsf {rev}}/T} constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture. Entropy is equally essential in predicting the extent and direction of complex chemical reactions. 
For such applications, Δ S {\textstyle \Delta S} must be incorporated in an expression that includes both the system and its surroundings: Δ S u n i v e r s e = Δ S s u r r o u n d i n g s + Δ S s y s t e m {\displaystyle \Delta S_{\mathsf {universe}}=\Delta S_{\mathsf {surroundings}}+\Delta S_{\mathsf {system}}} Via additional steps this expression becomes the equation of Gibbs free energy change Δ G {\textstyle \Delta G} for reactants and products in the system at constant pressure and temperature T {\textstyle T} : Δ G = Δ H − T Δ S {\displaystyle \Delta G=\Delta H-T\ \Delta S} where Δ H {\textstyle \Delta H} is the enthalpy change and Δ S {\textstyle \Delta S} is the entropy change. The spontaneity of a chemical or physical process is governed by the Gibbs free energy change (ΔG), as defined by the equation ΔG = ΔH − TΔS, where ΔH represents the enthalpy change, ΔS the entropy change, and T the absolute temperature in kelvins. A negative ΔG indicates a thermodynamically favorable (spontaneous) process, while a positive ΔG denotes a non-spontaneous one. When both ΔH and ΔS are positive (endothermic, entropy-increasing), the reaction becomes spontaneous at sufficiently high temperatures, as the TΔS term dominates. Conversely, if both ΔH and ΔS are negative (exothermic, entropy-decreasing), spontaneity occurs only at low temperatures, where the enthalpy term prevails. Reactions with ΔH < 0 and ΔS > 0 (exothermic and entropy-increasing) are spontaneous at all temperatures, while those with ΔH > 0 and ΔS < 0 (endothermic and entropy-decreasing) are non-spontaneous regardless of temperature. These principles underscore the interplay between energy exchange, disorder, and temperature in determining the direction of natural processes, from phase transitions to biochemical reactions. === World's technological capacity to store and communicate entropic information === A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information, normalised on the most effective compression algorithms available in the year 2007, thereby estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007. === Entropy balance equation for open systems === In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat Q ˙ {\textstyle {\dot {Q}}} , flow of shaft work W ˙ S {\textstyle {\dot {W}}_{\mathsf {S}}} and pressure-volume work P V ˙ {\textstyle P{\dot {V}}} across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer Q ˙ / T {\textstyle {\dot {Q}}/T} , where T {\textstyle T} is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. 
This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system. To derive a generalised entropy balanced equation, we start with the general balance equation for the change in any extensive quantity θ {\textstyle \theta } in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that d θ / d t {\textstyle \mathrm {d} \theta /\mathrm {d} t} , i.e. the rate of change of θ {\textstyle \theta } in the system, equals the rate at which θ {\textstyle \theta } enters the system at the boundaries, minus the rate at which θ {\textstyle \theta } leaves the system across the system boundaries, plus the rate at which θ {\textstyle \theta } is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time t {\textstyle t} of the extensive quantity entropy S {\textstyle S} , the entropy balance equation is: d S d t = ∑ k = 1 K M ˙ k S ^ k + Q ˙ T + S ˙ g e n {\displaystyle {\frac {\mathrm {d} S}{\mathrm {d} t}}=\sum _{k=1}^{K}{{\dot {M}}_{k}{\hat {S}}_{k}+{\frac {\dot {Q}}{T}}+{\dot {S}}_{\mathsf {gen}}}} where ∑ k = 1 K M ˙ k S ^ k {\textstyle \sum _{k=1}^{K}{{\dot {M}}_{k}{\hat {S}}_{k}}} is the net rate of entropy flow due to the flows of mass M ˙ k {\textstyle {\dot {M}}_{k}} into and out of the system with entropy per unit mass S ^ k {\textstyle {\hat {S}}_{k}} , Q ˙ / T {\textstyle {\dot {Q}}/T} is the rate of entropy flow due to the flow of heat across the system boundary and S ˙ g e n {\textstyle {\dot {S}}_{\mathsf {gen}}} is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity. In case of multiple heat flows the term Q ˙ / T {\textstyle {\dot {Q}}/T} is replaced by ∑ j Q ˙ j / T j {\textstyle \sum _{j}{{\dot {Q}}_{j}/T_{j}}} , where Q ˙ j {\textstyle {\dot {Q}}_{j}} is the heat flow through j {\textstyle j} -th port into the system and T j {\textstyle T_{j}} is the temperature at the j {\textstyle j} -th port. The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term S ˙ g e n {\textstyle {\dot {S}}_{\mathsf {gen}}} is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that: S ˙ g e n ≥ 0 {\displaystyle {\dot {S}}_{\mathsf {gen}}\geq 0} with zero for reversible process and positive values for irreversible one. == Entropy change formulas for simple processes == For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas. 
=== Isothermal expansion or compression of an ideal gas === For the expansion (or compression) of an ideal gas from an initial volume V 0 {\textstyle V_{0}} and pressure P 0 {\textstyle P_{0}} to a final volume V {\textstyle V} and pressure P {\textstyle P} at any constant temperature, the change in entropy is given by: Δ S = n R ln ⁡ V V 0 = − n R ln ⁡ P P 0 {\displaystyle \Delta S=nR\ln {\frac {V}{V_{0}}}=-nR\ln {\frac {P}{P_{0}}}} Here n {\textstyle n} is the amount of gas (in moles) and R {\textstyle R} is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant. === Cooling and heating === For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T 0 {\textstyle T_{0}} to a final temperature T {\textstyle T} , the entropy change is: Δ S = n C P ln ⁡ T T 0 {\textstyle \Delta S=nC_{\mathrm {P} }\ln {\frac {T}{T_{0}}}} provided that the constant-pressure molar heat capacity (or specific heat) C P {\textstyle C_{\mathrm {P} }} is constant and that no phase transition occurs in this temperature interval. Similarly at constant volume, the entropy change is: Δ S = n C V ln ⁡ T T 0 {\displaystyle \Delta S=nC_{\mathrm {V} }\ln {\frac {T}{T_{0}}}} where the constant-volume molar heat capacity C V {\textstyle C_{\mathrm {V} }} is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply. Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is: Δ S = n C V ln ⁡ T T 0 + n R ln ⁡ V V 0 {\displaystyle \Delta S=nC_{\mathrm {V} }\ln {\frac {T}{T_{0}}}+nR\ln {\frac {V}{V_{0}}}} Similarly if the temperature and pressure of an ideal gas both vary: Δ S = n C P ln ⁡ T T 0 − n R ln ⁡ P P 0 {\displaystyle \Delta S=nC_{\mathrm {P} }\ln {\frac {T}{T_{0}}}-nR\ln {\frac {P}{P_{0}}}} === Phase transitions === Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point T m {\textstyle T_{\mathsf {m}}} , the entropy of fusion is: Δ S f u s = Δ H f u s T m . {\displaystyle \Delta S_{\mathsf {fus}}={\frac {\Delta H_{\mathsf {fus}}}{T_{\mathsf {m}}}}.} Similarly, for vaporisation of a liquid to a gas at the boiling point T b {\displaystyle T_{\mathsf {b}}} , the entropy of vaporisation is: Δ S v a p = Δ H v a p T b {\displaystyle \Delta S_{\mathsf {vap}}={\frac {\Delta H_{\mathsf {vap}}}{T_{\mathsf {b}}}}} == Approaches to understanding entropy == As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid. === Standard textbook definitions === The following is a list of additional definitions of entropy from a collection of textbooks: a measure of energy dispersal at a specific temperature. a measure of disorder in the universe or of the availability of the energy in a system to do work. 
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work. In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. === Order and disorder === Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" and "order" in the system are each given by:: 69  D i s o r d e r = C D C I {\displaystyle {\mathsf {Disorder}}={\frac {C_{\mathsf {D}}}{C_{\mathsf {I}}}}} O r d e r = 1 − C O C I {\displaystyle {\mathsf {Order}}=1-{\frac {C_{\mathsf {O}}}{C_{\mathsf {I}}}}} Here, C D {\textstyle C_{\mathsf {D}}} is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, C I {\textstyle C_{\mathsf {I}}} is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and C O {\textstyle C_{\mathsf {O}}} is the "order" capacity of the system. === Energy dispersal === The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantised energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both". === Relating entropy to energy usefulness === It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. 
with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced. As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorised to lead to the heat death of the universe. === Entropy and adiabatic accessibility === A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X 0 {\textstyle X_{0}} and X 1 {\textstyle X_{1}} such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X {\textstyle X} is defined as the largest number λ {\textstyle \lambda } such that X {\textstyle X} is adiabatically accessible from a composite state consisting of an amount λ {\textstyle \lambda } in the state X 1 {\textstyle X_{1}} and a complementary amount, ( 1 − λ ) {\textstyle (1-\lambda )} , in the state X 0 {\textstyle X_{0}} . A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling. === Entropy in quantum mechanics === In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy": S = − k B t r ( ρ ^ × ln ⁡ ρ ^ ) {\displaystyle S=-k_{\mathsf {B}}\ \mathrm {tr} {\left({\hat {\rho }}\times \ln {\hat {\rho }}\right)}} where ρ ^ {\textstyle {\hat {\rho }}} is the density matrix, t r {\textstyle \mathrm {tr} } is the trace operator and k B {\textstyle k_{\mathsf {B}}} is the Boltzmann constant. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities p i {\textstyle p_{i}} : S = − k B ∑ i p i ln ⁡ p i {\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}} i.e. in such a basis the density matrix is diagonal. Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain. === Information theory === When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. 
Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities p i {\textstyle p_{i}} so that: H ( X ) = − ∑ i = 1 n p ( x i ) log ⁡ p ( x i ) {\displaystyle H(X)=-\sum _{i=1}^{n}{p(x_{i})\log {p(x_{i})}}} where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits). In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message. Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If W {\textstyle W} is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1 / W {\textstyle p=1/W} . The Shannon entropy (in nats) is: H = − ∑ i = 1 W p i ln ⁡ p i = ln ⁡ W {\displaystyle H=-\sum _{i=1}^{W}{p_{i}\ln {p_{i}}}=\ln {W}} and if entropy is measured in units of k {\textstyle k} per nat, then the entropy is given by: H = k ln ⁡ W {\displaystyle H=k\ln {W}} which is the Boltzmann entropy formula, where k {\textstyle k} is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the H {\textstyle H} function of information theory and using Shannon's other term, "uncertainty", instead. === Measurement === The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with constant number of particles N {\textstyle N} and constant volume V {\textstyle V} , and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat d U → d Q {\textstyle \mathrm {d} U\rightarrow \mathrm {d} Q} : T := ( ∂ U ∂ S ) V , N ⇒ ⋯ ⇒ d S = d Q T {\displaystyle T:={\left({\frac {\partial U}{\partial S}}\right)}_{V,N}\ \Rightarrow \ \cdots \ \Rightarrow \ \mathrm {d} S={\frac {\mathrm {d} Q}{T}}} The resulting relation describes how entropy changes d S {\textstyle \mathrm {d} S} when a small amount of energy d Q {\textstyle \mathrm {d} Q} is introduced into the system at a certain temperature T {\textstyle T} . The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero – due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy. 
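Returning to the Shannon formula above, the correspondence with the Boltzmann form for equally probable microstates is easy to check numerically. A minimal Python sketch follows; the number of messages is an illustrative value:

import numpy as np

def shannon_entropy(p, base=np.e):
    # H = -sum_i p_i log p_i; base 2 gives bits, base e gives nats
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

W = 8                              # number of equally probable messages/microstates (illustrative)
p = np.full(W, 1.0 / W)

print(shannon_entropy(p, base=2))  # 3.0 bits: three binary questions identify the message
print(shannon_entropy(p))          # ln 8, about 2.079 nats, matching H = ln W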
== Interdisciplinary applications == Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution. === Philosophy and theoretical physics === Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. The second law of thermodynamics states that, as time progresses, the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a kind of clock in these conditions. Since the 19th century, a number of philosophers have drawn upon the concept of entropy to develop novel metaphysical and ethical systems. Examples of this work can be found in the thought of Friedrich Nietzsche and Philipp Mainländer, Claude Lévi-Strauss, Isabelle Stengers, Shannon Mussett, and Drew M. Dalton. === Biology === Chiavazzo et al. proposed that the choice of where cave spiders lay their eggs can be explained through entropy minimisation. Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species. === Cosmology === Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source. If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which eventually collapse into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation). The role of entropy in cosmology has remained a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult. 
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe. === Economics === Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school.: 204f : 29–35  Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.: 95–112  In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'.: 116  Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.: 545f  == See also == == Notes == == References == David, Kover (14 August 2018). "Entropia – fyzikálna veličina vesmíru a nášho života". stejfree.sk. Archived from the original on 27 May 2022. Retrieved 13 April 2022. == Further reading == == External links == "Entropy" at Scholarpedia Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008 Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle Khan Academy: entropy lectures, part of Chemistry playlist Entropy Intuition More on Entropy Proof: S (or Entropy) is a valid state variable Reconciling Thermodynamic and State Definitions of Entropy Thermodynamic Entropy Definition Clarification Moriarty, Philip; Merrifield, Michael (2009). "S Entropy". Sixty Symbols. Brady Haran for the University of Nottingham. The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013. The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)
Wikipedia/Entropy_(thermodynamics)
Prediction by partial matching (PPM) is an adaptive statistical data compression technique based on context modeling and prediction. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream. PPM algorithms can also be used to cluster data into predicted groupings in cluster analysis. == Theory == Predictions are usually reduced to symbol rankings. Each symbol (a letter, bit or any other amount of data) is ranked before it is compressed, and the ranking system determines the corresponding codeword (and therefore the compression rate). In many compression algorithms, the ranking is equivalent to probability mass function estimation. Given the previous letters (or given a context), each symbol is assigned with a probability. For instance, in arithmetic coding the symbols are ranked by their probabilities to appear after previous symbols, and the whole sequence is compressed into a single fraction that is computed according to these probabilities. The number of previous symbols, n, determines the order of the PPM model which is denoted as PPM(n). Unbounded variants where the context has no length limitations also exist and are denoted as PPM*. If no prediction can be made based on all n context symbols, a prediction is attempted with n − 1 symbols. This process is repeated until a match is found or no more symbols remain in context. At that point a fixed prediction is made. Much of the work in optimizing a PPM model is handling inputs that have not already occurred in the input stream. The obvious way to handle them is to create a "never-seen" symbol which triggers the escape sequence. But what probability should be assigned to a symbol that has never been seen? This is called the zero-frequency problem. One variant uses the Laplace estimator, which assigns the "never-seen" symbol a fixed pseudocount of one. A variant called PPMd increments the pseudocount of the "never-seen" symbol every time the "never-seen" symbol is used. (In other words, PPMd estimates the probability of a new symbol as the ratio of the number of unique symbols to the total number of symbols observed). == Implementation == PPM compression implementations vary greatly in other details. The actual symbol selection is usually recorded using arithmetic coding, though it is also possible to use Huffman encoding or even some type of dictionary coding technique. The underlying model used in most PPM algorithms can also be extended to predict multiple symbols. It is also possible to use non-Markov modeling to either replace or supplement Markov modeling. The symbol size is usually static, typically a single byte, which makes generic handling of any file format easy. Published research on this family of algorithms can be found as far back as the mid-1980s. Software implementations were not popular until the early 1990s because PPM algorithms require a significant amount of RAM. Recent PPM implementations are among the best-performing lossless compression programs for natural language text. PPMd is a public domain implementation of PPMII (PPM with information inheritance) by Dmitry Shkarin which has undergone several incompatible revisions. It is used in the RAR file format by default. It is also available in the 7z and zip file formats. Attempts to improve PPM algorithms led to the PAQ series of data compression algorithms. A PPM algorithm, rather than being used for compression, is used to increase the efficiency of user input in the alternate input method program Dasher. 
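As a rough illustration of the context-fallback idea (a toy sketch only, not Shkarin's PPMd or any published variant; the class name and the simplistic escape handling are assumptions made here for clarity), the following Python code keeps symbol counts for contexts of decreasing length and falls back to shorter contexts until one of them has statistics for the upcoming symbol:

from collections import defaultdict

class ToyPPM:
    # Order-n context model with naive escape handling (a fixed +1 pseudocount for unseen symbols)

    def __init__(self, order=2):
        self.order = order
        # counts[context][symbol] = number of times symbol followed that context
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, symbol):
        # Record the symbol under every distinct context length from order down to 0
        contexts = {history[max(0, len(history) - n):] for n in range(self.order + 1)}
        for context in contexts:
            self.counts[context][symbol] += 1

    def predict(self, history):
        # Try the longest context first and "escape" to shorter ones if it has never been seen
        for n in range(self.order, -1, -1):
            context = history[max(0, len(history) - n):]
            seen = self.counts.get(context)
            if seen:
                total = sum(seen.values()) + 1   # +1 reserved for a "never-seen" symbol
                return {s: c / total for s, c in seen.items()}
        return {}                                # no statistics at all: caller uses a fixed model

model = ToyPPM(order=2)
text = "abracadabra"
for i, ch in enumerate(text):
    estimate = model.predict(text[:i]).get(ch)   # probability assigned to the actual next symbol
    print(ch, estimate)
    model.update(text[:i], ch)

In a real compressor these estimates would drive an arithmetic coder, and the escape event itself would be coded explicitly; both steps are omitted from the sketch.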
== See also == Language model n-gram == Sources == Cleary, J.; Witten, I. (April 1984). "Data Compression Using Adaptive Coding and Partial String Matching". IEEE Trans. Commun. 32 (4): 396–402. CiteSeerX 10.1.1.14.4305. doi:10.1109/TCOM.1984.1096090. Moffat, A. (November 1990). "Implementing the PPM data compression scheme". IEEE Trans. Commun. 38 (11): 1917–1921. CiteSeerX 10.1.1.120.8728. doi:10.1109/26.61469. Cleary, J. G.; Teahan, W. J.; Witten, I. H. (1997). "Unbounded length contexts for PPM". The Computer Journal. 40 (2_and_3). Oxford, England: Oxford University Press: 67–75. doi:10.1093/comjnl/40.2_and_3.67. ISSN 0010-4620. C. Bloom, Solving the problems of context modeling. W.J. Teahan, Probability estimation for PPM, Original Source from archive.org. Schürmann, T.; Grassberger, P. (September 1996). "Entropy estimation of symbol sequences". Chaos. 6 (3): 414–427. arXiv:cond-mat/0203436. Bibcode:1996Chaos...6..414S. doi:10.1063/1.166191. PMID 12780271. S2CID 10090433. == References == == External links == Suite of PPM compressors with benchmarks BICOM, a bijective PPM compressor Archived 2004-04-15 at the Wayback Machine "Arithmetic Coding + Statistical Modeling = Data Compression", Part 2
Wikipedia/PPM_compression_algorithm
In information theory, the graph entropy is a measure of the information rate achievable by communicating symbols over a channel in which certain pairs of values may be confused. This measure, first introduced by Körner in the 1970s, has since also proven itself useful in other settings, including combinatorics. == Definition == Let G = ( V , E ) {\displaystyle G=(V,E)} be an undirected graph. The graph entropy of G {\displaystyle G} , denoted H ( G ) {\displaystyle H(G)} is defined as H ( G ) = min X , Y I ( X ; Y ) {\displaystyle H(G)=\min _{X,Y}I(X;Y)} where X {\displaystyle X} is chosen uniformly from V {\displaystyle V} , Y {\displaystyle Y} ranges over independent sets of G, the joint distribution of X {\displaystyle X} and Y {\displaystyle Y} is such that X ∈ Y {\displaystyle X\in Y} with probability one, and I ( X ; Y ) {\displaystyle I(X;Y)} is the mutual information of X {\displaystyle X} and Y {\displaystyle Y} . That is, if we let I {\displaystyle {\mathcal {I}}} denote the independent vertex sets in G {\displaystyle G} , we wish to find the joint distribution X , Y {\displaystyle X,Y} on V × I {\displaystyle V\times {\mathcal {I}}} with the lowest mutual information such that (i) the marginal distribution of the first term is uniform and (ii) in samples from the distribution, the second term contains the first term almost surely. The mutual information of X {\displaystyle X} and Y {\displaystyle Y} is then called the entropy of G {\displaystyle G} . == Properties == Monotonicity. If G 1 {\displaystyle G_{1}} is a subgraph of G 2 {\displaystyle G_{2}} on the same vertex set, then H ( G 1 ) ≤ H ( G 2 ) {\displaystyle H(G_{1})\leq H(G_{2})} . Subadditivity. Given two graphs G 1 = ( V , E 1 ) {\displaystyle G_{1}=(V,E_{1})} and G 2 = ( V , E 2 ) {\displaystyle G_{2}=(V,E_{2})} on the same set of vertices, the graph union G 1 ∪ G 2 = ( V , E 1 ∪ E 2 ) {\displaystyle G_{1}\cup G_{2}=(V,E_{1}\cup E_{2})} satisfies H ( G 1 ∪ G 2 ) ≤ H ( G 1 ) + H ( G 2 ) {\displaystyle H(G_{1}\cup G_{2})\leq H(G_{1})+H(G_{2})} . Arithmetic mean of disjoint unions. Let G 1 , G 2 , ⋯ , G k {\displaystyle G_{1},G_{2},\cdots ,G_{k}} be a sequence of graphs on disjoint sets of vertices, with n 1 , n 2 , ⋯ , n k {\displaystyle n_{1},n_{2},\cdots ,n_{k}} vertices, respectively. Then H ( G 1 ∪ G 2 ∪ ⋯ G k ) = 1 ∑ i = 1 k n i ∑ i = 1 k n i H ( G i ) {\displaystyle H(G_{1}\cup G_{2}\cup \cdots G_{k})={\tfrac {1}{\sum _{i=1}^{k}n_{i}}}\sum _{i=1}^{k}{n_{i}H(G_{i})}} . Additionally, simple formulas exist for certain families classes of graphs. Complete balanced k-partite graphs have entropy log 2 ⁡ k {\displaystyle \log _{2}k} . In particular, Edge-less graphs have entropy 0 {\displaystyle 0} . Complete graphs on n {\displaystyle n} vertices have entropy log 2 ⁡ n {\displaystyle \log _{2}n} . Complete balanced bipartite graphs have entropy 1 {\displaystyle 1} . Complete bipartite graphs with n {\displaystyle n} vertices in one partition and m {\displaystyle m} in the other have entropy H ( n m + n ) {\displaystyle H\left({\frac {n}{m+n}}\right)} , where H {\displaystyle H} is the binary entropy function. == Example == Here, we use properties of graph entropy to provide a simple proof that a complete graph G {\displaystyle G} on n {\displaystyle n} vertices cannot be expressed as the union of fewer than log 2 ⁡ n {\displaystyle \log _{2}n} bipartite graphs. Proof By monotonicity, no bipartite graph can have graph entropy greater than that of a complete bipartite graph, which is bounded by 1 {\displaystyle 1} . 
Thus, by sub-additivity, the union of k {\displaystyle k} bipartite graphs cannot have entropy greater than k {\displaystyle k} . Now let G = ( V , E ) {\displaystyle G=(V,E)} be a complete graph on n {\displaystyle n} vertices. By the properties listed above, H ( G ) = log 2 ⁡ n {\displaystyle H(G)=\log _{2}n} . Therefore, the union of fewer than log 2 ⁡ n {\displaystyle \log _{2}n} bipartite graphs cannot have the same entropy as G {\displaystyle G} , so G {\displaystyle G} cannot be expressed as such a union. ◼ {\displaystyle \blacksquare } == General References == Matthias Dehmer; Frank Emmert-Streib; Zengqiang Chen; Xueliang Li; Yongtang Shi (25 July 2016). Mathematical Foundations and Applications of Graph Entropy. Wiley. ISBN 978-3-527-69325-2. == Notes ==
Wikipedia/Graph_entropy
Generalized relative entropy ( ε {\displaystyle \varepsilon } -relative entropy) is a measure of dissimilarity between two quantum states. It is a "one-shot" analogue of quantum relative entropy and shares many properties of the latter quantity. In the study of quantum information theory, we typically assume that information processing tasks are repeated multiple times, independently. The corresponding information-theoretic notions are therefore defined in the asymptotic limit. The quintessential entropy measure, von Neumann entropy, is one such notion. In contrast, the study of one-shot quantum information theory is concerned with information processing when a task is conducted only once. New entropic measures emerge in this scenario, as traditional notions cease to give a precise characterization of resource requirements. ε {\displaystyle \varepsilon } -relative entropy is one such particularly interesting measure. In the asymptotic scenario, relative entropy acts as a parent quantity for other measures besides being an important measure itself. Similarly, ε {\displaystyle \varepsilon } -relative entropy functions as a parent quantity for other measures in the one-shot scenario. == Definition == To motivate the definition of the ε {\displaystyle \varepsilon } -relative entropy D ε ( ρ | | σ ) {\displaystyle D^{\varepsilon }(\rho ||\sigma )} , consider the information processing task of hypothesis testing. In hypothesis testing, we wish to devise a strategy to distinguish between two density operators ρ {\displaystyle \rho } and σ {\displaystyle \sigma } . A strategy is a POVM with elements Q {\displaystyle Q} and I − Q {\displaystyle I-Q} . The probability that the strategy produces a correct guess on input ρ {\displaystyle \rho } is given by Tr ⁡ ( ρ Q ) {\displaystyle \operatorname {Tr} (\rho Q)} and the probability that it produces a wrong guess is given by Tr ⁡ ( σ Q ) {\displaystyle \operatorname {Tr} (\sigma Q)} . ε {\displaystyle \varepsilon } -relative entropy captures the minimum probability of error when the state is σ {\displaystyle \sigma } , given that the success probability for ρ {\displaystyle \rho } is at least ε {\displaystyle \varepsilon } . For ε ∈ ( 0 , 1 ) {\displaystyle \varepsilon \in (0,1)} , the ε {\displaystyle \varepsilon } -relative entropy between two quantum states ρ {\displaystyle \rho } and σ {\displaystyle \sigma } is defined as D ε ( ρ | | σ ) = − log ⁡ 1 ε min { ⟨ Q , σ ⟩ | 0 ≤ Q ≤ I and ⟨ Q , ρ ⟩ ≥ ε } . {\displaystyle D^{\varepsilon }(\rho ||\sigma )=-\log {\frac {1}{\varepsilon }}\min\{\langle Q,\sigma \rangle |0\leq Q\leq I{\text{ and }}\langle Q,\rho \rangle \geq \varepsilon \}~.} From the definition, it is clear that D ε ( ρ | | σ ) ≥ 0 {\displaystyle D^{\varepsilon }(\rho ||\sigma )\geq 0} . This inequality is saturated if and only if ρ = σ {\displaystyle \rho =\sigma } , as shown below. == Relationship to the trace distance == Suppose the trace distance between two density operators ρ {\displaystyle \rho } and σ {\displaystyle \sigma } is ‖ ρ − σ ‖ 1 = δ . {\displaystyle {\left\|\rho -\sigma \right\|}_{1}=\delta ~.} For 0 < ε < 1 {\displaystyle 0<\varepsilon <1} , it holds that log ⁡ ε ε − ( 1 − ε ) δ ≤ D ε ( ρ | | σ ) ≤ log ⁡ ε ε − δ . 
{\displaystyle \log {\frac {\varepsilon }{\varepsilon -(1-\varepsilon )\delta }}\quad \leq \quad D^{\varepsilon }(\rho ||\sigma )\quad \leq \quad \log {\frac {\varepsilon }{\varepsilon -\delta }}~.} In particular, this implies the following analogue of the Pinsker inequality 1 − ε ε ‖ ρ − σ ‖ 1 ≤ D ε ( ρ | | σ ) . {\displaystyle {\frac {1-\varepsilon }{\varepsilon }}{\left\|\rho -\sigma \right\|}_{1}\quad \leq \quad D^{\varepsilon }(\rho ||\sigma )~.} Furthermore, the proposition implies that for any ε ∈ ( 0 , 1 ) {\displaystyle \varepsilon \in (0,1)} , D ε ( ρ | | σ ) = 0 {\displaystyle D^{\varepsilon }(\rho ||\sigma )=0} if and only if ρ = σ {\displaystyle \rho =\sigma } , inheriting this property from the trace distance. This result and its proof can be found in Dupuis et al. === Proof of inequality a) === Upper bound: Trace distance can be written as ‖ ρ − σ ‖ 1 = max 0 ≤ Q ≤ 1 Tr ⁡ ( Q ( ρ − σ ) ) . {\displaystyle {\left\|\rho -\sigma \right\|}_{1}=\max _{0\leq Q\leq 1}\operatorname {Tr} (Q(\rho -\sigma ))~.} This maximum is achieved when Q {\displaystyle Q} is the orthogonal projector onto the positive eigenspace of ρ − σ {\displaystyle \rho -\sigma } . For any POVM element Q {\displaystyle Q} we have Tr ⁡ ( Q ( ρ − σ ) ) ≤ δ {\displaystyle \operatorname {Tr} (Q(\rho -\sigma ))\leq \delta } so that if Tr ⁡ ( Q ρ ) ≥ ε {\displaystyle \operatorname {Tr} (Q\rho )\geq \varepsilon } , we have Tr ⁡ ( Q σ ) ≥ Tr ⁡ ( Q ρ ) − δ ≥ ε − δ . {\displaystyle \operatorname {Tr} (Q\sigma )~\geq ~\operatorname {Tr} (Q\rho )-\delta ~\geq ~\varepsilon -\delta ~.} From the definition of the ε {\displaystyle \varepsilon } -relative entropy, we get 2 − D ε ( ρ | | σ ) ≥ ε − δ ε . {\displaystyle 2^{-D^{\varepsilon }(\rho ||\sigma )}\geq {\frac {\varepsilon -\delta }{\varepsilon }}~.} Lower bound: Let Q {\displaystyle Q} be the orthogonal projection onto the positive eigenspace of ρ − σ {\displaystyle \rho -\sigma } , and let Q ¯ {\displaystyle {\bar {Q}}} be the following convex combination of I {\displaystyle I} and Q {\displaystyle Q} : Q ¯ = ( ε − μ ) I + ( 1 − ε + μ ) Q {\displaystyle {\bar {Q}}=\left(\varepsilon -\mu \right)I+\left(1-\varepsilon +\mu \right)Q} where μ = ( 1 − ε ) Tr ⁡ ( Q ρ ) 1 − Tr ⁡ ( Q ρ ) . {\displaystyle \mu ={\frac {(1-\varepsilon )\operatorname {Tr} (Q\rho )}{1-\operatorname {Tr} (Q\rho )}}~.} This means μ = ( 1 − ε + μ ) Tr ⁡ ( Q ρ ) {\displaystyle \mu =(1-\varepsilon +\mu )\operatorname {Tr} (Q\rho )} and thus Tr ⁡ ( Q ¯ ρ ) = ( ε − μ ) + ( 1 − ε + μ ) Tr ⁡ ( Q ρ ) = ε . {\displaystyle \operatorname {Tr} ({\bar {Q}}\rho )~=~\left(\varepsilon -\mu \right)+\left(1-\varepsilon +\mu \right)\operatorname {Tr} (Q\rho )~=~\varepsilon \,.} Moreover, Tr ⁡ ( Q ¯ σ ) = ε − μ + ( 1 − ε + μ ) Tr ⁡ ( Q σ ) . {\displaystyle \operatorname {Tr} ({\bar {Q}}\sigma )~=~\varepsilon -\mu +\left(1-\varepsilon +\mu \right)\operatorname {Tr} (Q\sigma )~.} Using μ = ( 1 − ε + μ ) Tr ⁡ ( Q ρ ) {\displaystyle \mu =(1-\varepsilon +\mu )\operatorname {Tr} (Q\rho )} , our choice of Q {\displaystyle Q} , and finally the definition of μ {\displaystyle \mu } , we can re-write this as Tr ⁡ ( Q ¯ σ ) = ε − ( 1 − ε + μ ) Tr ⁡ ( Q ρ ) + ( 1 − ε + μ ) Tr ⁡ ( Q σ ) = ε − ( 1 − ε ) δ 1 − Tr ⁡ ( Q ρ ) ≤ ε − ( 1 − ε ) δ . 
{\displaystyle {\begin{aligned}\operatorname {Tr} ({\bar {Q}}\sigma )&=\varepsilon -\left(1-\varepsilon +\mu \right)\operatorname {Tr} (Q\rho )+\left(1-\varepsilon +\mu \right)\operatorname {Tr} (Q\sigma )\\&=\varepsilon -{\frac {\left(1-\varepsilon \right)\delta }{1-\operatorname {Tr} (Q\rho )}}\\[1ex]&\leq \varepsilon -\left(1-\varepsilon \right)\delta ~.\end{aligned}}} Hence D ε ( ρ | | σ ) ≥ log ⁡ ε ε − ( 1 − ε ) δ . {\displaystyle D^{\varepsilon }(\rho ||\sigma )\geq \log {\frac {\varepsilon }{\varepsilon -\left(1-\varepsilon \right)\delta }}~.} === Proof of inequality b) === To derive this Pinsker-like inequality, observe that log ⁡ ε ε − ( 1 − ε ) δ = − log ⁡ ( 1 − ( 1 − ε ) δ ε ) ≥ δ 1 − ε ε . {\displaystyle \log {\frac {\varepsilon }{\varepsilon -\left(1-\varepsilon \right)\delta }}~=~-\log \left(1-{\frac {\left(1-\varepsilon \right)\delta }{\varepsilon }}\right)~\geq ~\delta {\frac {1-\varepsilon }{\varepsilon }}~.} == Alternative proof of the Data Processing inequality == A fundamental property of von Neumann entropy is strong subadditivity. Let S ( σ ) {\displaystyle S(\sigma )} denote the von Neumann entropy of the quantum state σ {\displaystyle \sigma } , and let ρ A B C {\displaystyle \rho _{ABC}} be a quantum state on the tensor product Hilbert space H A ⊗ H B ⊗ H C {\displaystyle {\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}\otimes {\mathcal {H}}_{C}} . Strong subadditivity states that S ( ρ A B C ) + S ( ρ B ) ≤ S ( ρ A B ) + S ( ρ B C ) {\displaystyle S(\rho _{ABC})+S(\rho _{B})\leq S(\rho _{AB})+S(\rho _{BC})} where ρ A B , ρ B C , ρ B {\displaystyle \rho _{AB},\rho _{BC},\rho _{B}} refer to the reduced density matrices on the spaces indicated by the subscripts. When re-written in terms of mutual information, this inequality has an intuitive interpretation; it states that the information content in a system cannot increase by the action of a local quantum operation on that system. In this form, it is better known as the data processing inequality, and is equivalent to the monotonicity of relative entropy under quantum operations: S ( ρ | | σ ) − S ( E ( ρ ) | | E ( σ ) ) ≥ 0 {\displaystyle S(\rho ||\sigma )-S({\mathcal {E}}(\rho )||{\mathcal {E}}(\sigma ))\geq 0} for every CPTP map E {\displaystyle {\mathcal {E}}} , where S ( ω | | τ ) {\displaystyle S(\omega ||\tau )} denotes the relative entropy of the quantum states ω , τ {\displaystyle \omega ,\tau } . It is readily seen that ε {\displaystyle \varepsilon } -relative entropy also obeys monotonicity under quantum operations: D ε ( ρ | | σ ) ≥ D ε ( E ( ρ ) | | E ( σ ) ) {\displaystyle D^{\varepsilon }(\rho ||\sigma )\geq D^{\varepsilon }({\mathcal {E}}(\rho )||{\mathcal {E}}(\sigma ))} , for any CPTP map E {\displaystyle {\mathcal {E}}} . To see this, suppose we have a POVM ( R , I − R ) {\displaystyle (R,I-R)} to distinguish between E ( ρ ) {\displaystyle {\mathcal {E}}(\rho )} and E ( σ ) {\displaystyle {\mathcal {E}}(\sigma )} such that ⟨ R , E ( ρ ) ⟩ = ⟨ E † ( R ) , ρ ⟩ ≥ ε {\displaystyle \langle R,{\mathcal {E}}(\rho )\rangle =\langle {\mathcal {E}}^{\dagger }(R),\rho \rangle \geq \varepsilon } . We construct a new POVM ( E † ( R ) , I − E † ( R ) ) {\displaystyle ({\mathcal {E}}^{\dagger }(R),I-{\mathcal {E}}^{\dagger }(R))} to distinguish between ρ {\displaystyle \rho } and σ {\displaystyle \sigma } . Since the adjoint of any CPTP map is also positive and unital, this is a valid POVM. 
Note that ⟨ R , E ( σ ) ⟩ = ⟨ E † ( R ) , σ ⟩ ≥ ⟨ Q , σ ⟩ {\displaystyle \langle R,{\mathcal {E}}(\sigma )\rangle =\langle {\mathcal {E}}^{\dagger }(R),\sigma \rangle \geq \langle Q,\sigma \rangle } , where ( Q , I − Q ) {\displaystyle (Q,I-Q)} is the POVM that achieves D ε ( ρ | | σ ) {\displaystyle D^{\varepsilon }(\rho ||\sigma )} . Not only is this interesting in itself, but it also gives us the following alternative method to prove the data processing inequality. By the quantum analogue of the Stein lemma, lim n → ∞ 1 n D ε ( ρ ⊗ n | | σ ⊗ n ) = lim n → ∞ − 1 n log ⁡ min 1 ε Tr ⁡ ( σ ⊗ n Q ) = D ( ρ | | σ ) − lim n → ∞ 1 n ( log ⁡ 1 ε ) = D ( ρ | | σ ) , {\displaystyle {\begin{aligned}\lim _{n\to \infty }{\frac {1}{n}}D^{\varepsilon }\left(\rho ^{\otimes n}||\sigma ^{\otimes n}\right)&=\lim _{n\to \infty }{\frac {-1}{n}}\log \min {\frac {1}{\varepsilon }}\operatorname {Tr} \left(\sigma ^{\otimes n}Q\right)\\&=D(\rho ||\sigma )-\lim _{n\to \infty }{\frac {1}{n}}\left(\log {\frac {1}{\varepsilon }}\right)\\&=D(\rho ||\sigma )~,\end{aligned}}} where the minimum is taken over 0 ≤ Q ≤ 1 {\displaystyle 0\leq Q\leq 1} such that Tr ⁡ ( Q ρ ⊗ n ) ≥ ε . {\displaystyle \operatorname {Tr} (Q\rho ^{\otimes n})\geq \varepsilon ~.} Applying the data processing inequality to the states ρ ⊗ n {\displaystyle \rho ^{\otimes n}} and σ ⊗ n {\displaystyle \sigma ^{\otimes n}} with the CPTP map E ⊗ n {\displaystyle {\mathcal {E}}^{\otimes n}} , we get D ε ( ρ ⊗ n | | σ ⊗ n ) ≥ D ε ( E ( ρ ) ⊗ n | | E ( σ ) ⊗ n ) . {\displaystyle D^{\varepsilon }(\rho ^{\otimes n}||\sigma ^{\otimes n})~\geq ~D^{\varepsilon }({\mathcal {E}}(\rho )^{\otimes n}||{\mathcal {E}}(\sigma )^{\otimes n})~.} Dividing by n {\displaystyle n} on either side and taking the limit as n → ∞ {\displaystyle n\rightarrow \infty } , we get the desired result. == See also == Entropic value at risk Quantum relative entropy Strong subadditivity Classical information theory Min-entropy == References ==
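For commuting (classical) states, the minimization in the definition of D^ε(ρ||σ) can be carried out directly: diagonal ρ and σ reduce to probability vectors, and an optimal test is obtained by assigning weight to outcomes in order of decreasing likelihood ratio, as in classical hypothesis testing. The Python sketch below illustrates only this special case (the function name and the base-2 logarithm convention are choices made for the illustration); the general quantum case instead requires solving a semidefinite program.

```python
import numpy as np

def classical_eps_relative_entropy(p, q, eps):
    """D^eps(p||q) for probability vectors p, q (commuting states), in bits.

    Minimizes sum_i q_i * t_i over tests 0 <= t_i <= 1 with sum_i p_i * t_i >= eps,
    by filling the outcomes with the largest likelihood ratio p_i / q_i first
    (a fractional-knapsack / Neyman-Pearson argument). Assumes q_i > 0 for all i.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    assert np.all(q > 0), "sketch assumes sigma has full support"
    order = np.argsort(-(p / q))      # outcomes sorted by decreasing p_i / q_i
    need, q_cost = eps, 0.0
    for i in order:
        if p[i] <= 0:
            continue
        take = min(1.0, need / p[i])  # fraction of outcome i included in the test
        q_cost += take * q[i]
        need -= take * p[i]
        if need <= 1e-12:
            break
    return -np.log2(q_cost / eps)

# Example with two biased coins; the quantity vanishes only when the states coincide.
p, q = [0.9, 0.1], [0.5, 0.5]
print(classical_eps_relative_entropy(p, q, 0.8))   # positive: the distributions differ
print(classical_eps_relative_entropy(p, p, 0.8))   # ~0: equal distributions
```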
Wikipedia/Generalized_relative_entropy
The Hartley function is a measure of uncertainty, introduced by Ralph Hartley in 1928. If a sample from a finite set A uniformly at random is picked, the information revealed after the outcome is known is given by the Hartley function H 0 ( A ) := l o g b | A | , {\displaystyle H_{0}(A):=\mathrm {log} _{b}\vert A\vert ,} where |A| denotes the cardinality of A. If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit). If it is the natural logarithm, then the unit is the nat. Hartley used a base-ten logarithm, and with this base, the unit of information is called the hartley (aka ban or dit) in his honor. It is also known as the Hartley entropy or max-entropy. == Hartley function, Shannon entropy, and Rényi entropy == The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy since: H 0 ( X ) = 1 1 − 0 log ⁡ ∑ i = 1 | X | p i 0 = log ⁡ | X | . {\displaystyle H_{0}(X)={\frac {1}{1-0}}\log \sum _{i=1}^{|{\mathcal {X}}|}p_{i}^{0}=\log |{\mathcal {X}}|.} But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423). == Characterization of the Hartley function == The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function mapping natural numbers to real numbers that satisfies H ( m n ) = H ( m ) + H ( n ) {\displaystyle H(mn)=H(m)+H(n)} (additivity) H ( m ) ≤ H ( m + 1 ) {\displaystyle H(m)\leq H(m+1)} (monotonicity) H ( 2 ) = 1 {\displaystyle H(2)=1} (normalization) Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of uncertainties of A and B. Condition 2 says that a larger set has larger uncertainty. == Derivation of the Hartley function == We want to show that the Hartley function, log2(n), is the only function mapping natural numbers to real numbers that satisfies H ( m n ) = H ( m ) + H ( n ) {\displaystyle H(mn)=H(m)+H(n)\,} (additivity) H ( m ) ≤ H ( m + 1 ) {\displaystyle H(m)\leq H(m+1)\,} (monotonicity) H ( 2 ) = 1 {\displaystyle H(2)=1\,} (normalization) Let f be a function on positive integers that satisfies the above three properties. From the additive property, we can show that for any integer n and k, f ( n k ) = k f ( n ) . {\displaystyle f(n^{k})=kf(n).\,} Let a, b, and t be any positive integers. There is a unique integer s determined by a s ≤ b t ≤ a s + 1 . ( 1 ) {\displaystyle a^{s}\leq b^{t}\leq a^{s+1}.\qquad (1)} Therefore, s log 2 ⁡ a ≤ t log 2 ⁡ b ≤ ( s + 1 ) log 2 ⁡ a {\displaystyle s\log _{2}a\leq t\log _{2}b\leq (s+1)\log _{2}a\,} and s t ≤ log 2 ⁡ b log 2 ⁡ a ≤ s + 1 t . {\displaystyle {\frac {s}{t}}\leq {\frac {\log _{2}b}{\log _{2}a}}\leq {\frac {s+1}{t}}.} On the other hand, by monotonicity, f ( a s ) ≤ f ( b t ) ≤ f ( a s + 1 ) . {\displaystyle f(a^{s})\leq f(b^{t})\leq f(a^{s+1}).\,} Using equation (1), one gets s f ( a ) ≤ t f ( b ) ≤ ( s + 1 ) f ( a ) , {\displaystyle sf(a)\leq tf(b)\leq (s+1)f(a),\,} and s t ≤ f ( b ) f ( a ) ≤ s + 1 t . {\displaystyle {\frac {s}{t}}\leq {\frac {f(b)}{f(a)}}\leq {\frac {s+1}{t}}.} Hence, | f ( b ) f ( a ) − log 2 ⁡ ( b ) log 2 ⁡ ( a ) | ≤ 1 t . 
{\displaystyle \left\vert {\frac {f(b)}{f(a)}}-{\frac {\log _{2}(b)}{\log _{2}(a)}}\right\vert \leq {\frac {1}{t}}.} Since t can be arbitrarily large, the difference on the left hand side of the above inequality must be zero, f ( b ) f ( a ) = log 2 ⁡ ( b ) log 2 ⁡ ( a ) . {\displaystyle {\frac {f(b)}{f(a)}}={\frac {\log _{2}(b)}{\log _{2}(a)}}.} So, f ( a ) = μ log 2 ⁡ ( a ) {\displaystyle f(a)=\mu \log _{2}(a)\,} for some constant μ, which must be equal to 1 by the normalization property. == See also == Rényi entropy Min-entropy == References == This article incorporates material from Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. This article incorporates material from Derivation of Hartley function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Hartley_entropy
Sample entropy (SampEn; more appropriately K_2 entropy or Takens–Grassberger–Procaccia correlation entropy) is a modification of approximate entropy (ApEn; more appropriately "Procaccia–Cohen entropy"), used for assessing the complexity of physiological and other time-series signals, for example in diagnosing diseased states. SampEn has two advantages over ApEn: data length independence and a relatively trouble-free implementation. There is also a small computational difference: in ApEn, the comparison between the template vector (see below) and the rest of the vectors also includes a comparison with itself. This guarantees that the probabilities C i ′ m ( r ) {\displaystyle C_{i}'^{m}(r)} are never zero, so it is always possible to take a logarithm of the probabilities. Because these self-matches lower ApEn values, the signals are interpreted to be more regular than they actually are. Self-matches are not included in SampEn. However, since SampEn makes direct use of the correlation integrals, it is not a true measure of information but an approximation. The foundations of SampEn, its differences from ApEn, and a step-by-step tutorial for its application are available in the literature. SampEn is essentially identical to the "correlation entropy" K_2 of Grassberger & Procaccia, except that the latter suggests that certain limits should be taken in order to achieve a result invariant under changes of variables. No such limits and no invariance properties are considered in SampEn. There is also a multiscale version of SampEn, suggested by Costa and others. SampEn can be used in biomedical and biomechanical research, for example to evaluate postural control. == Definition == Like approximate entropy (ApEn), sample entropy (SampEn) is a measure of complexity, but it does not count self-similar patterns as ApEn does. For a given embedding dimension m {\displaystyle m} , tolerance r {\displaystyle r} and number of data points N {\displaystyle N} , SampEn is the negative natural logarithm of the probability that if two sets of simultaneous data points of length m {\displaystyle m} have distance < r {\displaystyle <r} then two sets of simultaneous data points of length m + 1 {\displaystyle m+1} also have distance < r {\displaystyle <r} . It is denoted by S a m p E n ( m , r , N ) {\displaystyle SampEn(m,r,N)} (or by S a m p E n ( m , r , τ , N ) {\displaystyle SampEn(m,r,\tau ,N)} when the sampling time τ {\displaystyle \tau } is included). Now assume we have a time series { x 1 , x 2 , x 3 , . . . , x N } {\displaystyle {\{x_{1},x_{2},x_{3},...,x_{N}\}}} of length N {\displaystyle N} with a constant time interval τ {\displaystyle \tau } . We define a template vector of length m {\displaystyle m} , such that X m ( i ) = { x i , x i + 1 , x i + 2 , . . . , x i + m − 1 } {\displaystyle X_{m}(i)={\{x_{i},x_{i+1},x_{i+2},...,x_{i+m-1}\}}} , and take the distance function d [ X m ( i ) , X m ( j ) ] {\displaystyle d[X_{m}(i),X_{m}(j)]} (i ≠ j) to be the Chebyshev distance (though it could be any distance function, including the Euclidean distance).
We define the sample entropy to be S a m p E n = − ln ⁡ A B {\displaystyle SampEn=-\ln {A \over B}} where A {\displaystyle A} = number of template vector pairs having d [ X m + 1 ( i ) , X m + 1 ( j ) ] < r {\displaystyle d[X_{m+1}(i),X_{m+1}(j)]<r} and B {\displaystyle B} = number of template vector pairs having d [ X m ( i ) , X m ( j ) ] < r {\displaystyle d[X_{m}(i),X_{m}(j)]<r} . It is clear from the definition that A {\displaystyle A} will always be smaller than or equal to B {\displaystyle B} . Therefore, S a m p E n ( m , r , τ ) {\displaystyle SampEn(m,r,\tau )} will always be either zero or a positive value. A smaller value of S a m p E n {\displaystyle SampEn} indicates more self-similarity in the data set, or less noise. Generally the value of m {\displaystyle m} is taken to be 2 {\displaystyle 2} and the value of r {\displaystyle r} to be 0.2 × s t d {\displaystyle 0.2\times std} , where std stands for the standard deviation, which should be taken over a very large dataset. For instance, an r value of 6 ms is appropriate for sample entropy calculations of heart rate intervals, since this corresponds to 0.2 × s t d {\displaystyle 0.2\times std} for a very large population. == Multiscale SampEn == The definition above is a special case of multiscale SampEn with δ = 1 {\displaystyle \delta =1} , where δ {\displaystyle \delta } is called the skipping parameter. In multiscale SampEn, template vectors are defined with a certain interval between their elements, specified by the value of δ {\displaystyle \delta } . The modified template vector is defined as X m , δ ( i ) = x i , x i + δ , x i + 2 × δ , . . . , x i + ( m − 1 ) × δ {\displaystyle X_{m,\delta }(i)={x_{i},x_{i+\delta },x_{i+2\times \delta },...,x_{i+(m-1)\times \delta }}} and SampEn can be written as S a m p E n ( m , r , δ ) = − ln ⁡ A δ B δ {\displaystyle SampEn\left(m,r,\delta \right)=-\ln {A_{\delta } \over B_{\delta }}} , with A δ {\displaystyle A_{\delta }} and B δ {\displaystyle B_{\delta }} calculated as before. == Implementation == Sample entropy can be implemented easily in many different programming languages; implementations are available, for example, in Python (including a NumPy version), MATLAB, R and Rust, and a minimal Python sketch is given below. == See also == Kolmogorov complexity Approximate entropy == References ==
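The following is a minimal Python sketch of the SampEn definition above (a direct, unoptimized transcription; the function name and the default parameters are illustrative). For every pair i ≠ j it checks whether the length-m templates match within the tolerance r under the Chebyshev distance and whether the corresponding length-(m + 1) templates also match, and it returns −ln(A/B).

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D sequence x, using the Chebyshev distance.

    Self-matches (i == j) are excluded, as in the definition above.
    If r is not given it defaults to 0.2 * std(x).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x) - m                      # number of templates of length m + 1
    A = B = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                # skip self-matches
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) < r:
                B += 1                  # length-m templates match
                if abs(x[i + m] - x[j + m]) < r:
                    A += 1              # length-(m+1) templates also match
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

# Illustrative use: a noisy sine wave (more noise gives a larger SampEn).
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
print(sample_entropy(np.sin(t) + 0.1 * rng.standard_normal(400)))
```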
Wikipedia/Sample_entropy
In information theory and statistics, negentropy is used as a measure of distance to normality. The concept and phrase "negative entropy" were introduced by Erwin Schrödinger in his 1944 popular-science book What is Life? Later, French physicist Léon Brillouin shortened the phrase to néguentropie (negentropy). In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common. In a note to What is Life? Schrödinger explained his use of this phrase. ... if I had been catering for them [physicists] alone I should have let the discussion turn on free energy instead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near to energy for making the average reader alive to the contrast between the two things. == Information theory == In information theory and statistics, negentropy is used as a measure of distance to normality. Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant under any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian. Negentropy is defined as J ( p x ) = S ( φ x ) − S ( p x ) {\displaystyle J(p_{x})=S(\varphi _{x})-S(p_{x})\,} where S ( φ x ) {\displaystyle S(\varphi _{x})} is the differential entropy of the Gaussian density with the same mean and variance as p x {\displaystyle p_{x}} and S ( p x ) {\displaystyle S(p_{x})} is the differential entropy of p x {\displaystyle p_{x}} : S ( p x ) = − ∫ p x ( u ) log ⁡ p x ( u ) d u {\displaystyle S(p_{x})=-\int p_{x}(u)\log p_{x}(u)\,du} Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis. The negentropy of a distribution is equal to the Kullback–Leibler divergence between p x {\displaystyle p_{x}} and a Gaussian distribution with the same mean and variance as p x {\displaystyle p_{x}} (see Differential entropy § Maximization in the normal distribution for a proof). In particular, it is always nonnegative. == Correlation between statistical negentropy and Gibbs' free energy == There is a physical quantity closely linked to free energy (free enthalpy), with a unit of entropy and isomorphic to the negentropy known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called the capacity for entropy. This quantity is the amount by which the entropy may be increased without changing the internal energy or increasing the volume. In other words, it is the difference between the maximum possible entropy, under the assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process (the two quantities differ only in sign) and then by Planck for the isothermal-isobaric process.
More recently, the Massieu–Planck thermodynamic potential, known also as free entropy, has been shown to play a major role in the so-called entropic formulation of statistical mechanics, applied, among other fields, in molecular biology and in thermodynamic non-equilibrium processes. J = S max − S = − Φ = − k ln ⁡ Z {\displaystyle J=S_{\max }-S=-\Phi =-k\ln Z\,} where: S {\displaystyle S} is entropy J {\displaystyle J} is negentropy (Gibbs "capacity for entropy") Φ {\displaystyle \Phi } is the Massieu potential Z {\displaystyle Z} is the partition function k {\displaystyle k} is the Boltzmann constant In particular, mathematically the negentropy (the negative entropy function, in physics interpreted as free entropy) is the convex conjugate of LogSumExp (in physics interpreted as the free energy). == Brillouin's negentropy principle of information == In 1953, Léon Brillouin derived a general equation stating that changing the value of an information bit requires at least k T ln ⁡ 2 {\displaystyle kT\ln 2} of energy. This is the same energy as the work Leó Szilárd's engine produces in the idealized case. In his book, he further explored this problem, concluding that any cause of this bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy. == See also == Exergy Free entropy Entropy in thermodynamics and information theory == Notes ==
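As a numerical illustration of the information-theoretic definition above, negentropy can be evaluated in closed form for simple non-Gaussian densities. The Python sketch below (function and variable names are illustrative) does this for a uniform distribution, using only the formulas of the Information theory section; the result is positive and independent of the interval, consistent with negentropy being nonnegative and invariant under invertible linear changes of coordinates.

```python
import numpy as np

def negentropy_uniform(a, b):
    """J for a Uniform(a, b) density, in nats: S(Gaussian with same variance) - S(uniform)."""
    var = (b - a) ** 2 / 12.0                       # variance of the uniform density
    s_gauss = 0.5 * np.log(2 * np.pi * np.e * var)  # differential entropy of the matching Gaussian
    s_unif = np.log(b - a)                          # differential entropy of Uniform(a, b)
    return s_gauss - s_unif

print(negentropy_uniform(0.0, 1.0))    # ~0.176 nats
print(negentropy_uniform(-3.0, 5.0))   # same value: independent of the interval chosen
```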
Wikipedia/Negative_entropy
In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data. For example, consider two series of data: Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1. Series B: (0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, ...), which has either a value of 0 or 1, chosen randomly, each with probability 1/2. Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have. Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures. However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise, so it is not practical to apply these methods to experimental data. ApEn was first proposed (under a different name) by Aviad Cohen and Itamar Procaccia, as an approximate algorithm to compute an exact regularity statistic, Kolmogorov–Sinai entropy, and was later popularized by Steve M. Pincus. ApEn was initially used to analyze chaotic dynamics and medical data, such as heart rate, and its applications later spread to finance, physiology, human factors engineering, and climate sciences. == Algorithm == A comprehensive step-by-step tutorial with an explanation of the theoretical foundations of approximate entropy is available in the literature. The algorithm is: Step 1 Assume a time series of data u ( 1 ) , u ( 2 ) , … , u ( N ) {\displaystyle u(1),u(2),\ldots ,u(N)} . These are N {\displaystyle N} raw data values from measurements equally spaced in time. Step 2 Let m ∈ Z + {\displaystyle m\in \mathbb {Z} ^{+}} be a positive integer, with m ≤ N {\displaystyle m\leq N} , which represents the length of a run of data (essentially a window). Let r ∈ R + {\displaystyle r\in \mathbb {R} ^{+}} be a positive real number, which specifies a filtering level. Let n = N − m + 1 {\displaystyle n=N-m+1} . Step 3 Define x ( i ) = [ u ( i ) , u ( i + 1 ) , … , u ( i + m − 1 ) ] {\displaystyle \mathbf {x} (i)={\big [}u(i),u(i+1),\ldots ,u(i+m-1){\big ]}} for each i {\displaystyle i} where 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} . In other words, x ( i ) {\displaystyle \mathbf {x} (i)} is an m {\displaystyle m} -dimensional vector that contains the run of data starting with u ( i ) {\displaystyle u(i)} . Define the distance between two vectors x ( i ) {\displaystyle \mathbf {x} (i)} and x ( j ) {\displaystyle \mathbf {x} (j)} as the maximum of the distances between their respective components, given by d [ x ( i ) , x ( j ) ] = max k ( | x ( i ) k − x ( j ) k | ) = max k ( | u ( i + k − 1 ) − u ( j + k − 1 ) | ) {\displaystyle {\begin{aligned}d[\mathbf {x} (i),\mathbf {x} (j)]&=\max _{k}{\big (}|\mathbf {x} (i)_{k}-\mathbf {x} (j)_{k}|{\big )}\\&=\max _{k}{\big (}|u(i+k-1)-u(j+k-1)|{\big )}\\\end{aligned}}} for 1 ≤ k ≤ m {\displaystyle 1\leq k\leq m} .
Step 4 Define a count C i m {\displaystyle C_{i}^{m}} as C i m ( r ) = ( number of j such that d [ x ( i ) , x ( j ) ] ≤ r ) n {\displaystyle C_{i}^{m}(r)={({\text{number of }}j{\text{ such that }}d[\mathbf {x} (i),\mathbf {x} (j)]\leq r) \over n}} for each i {\displaystyle i} where 1 ≤ i , j ≤ n {\displaystyle 1\leq i,j\leq n} . Note that since j {\displaystyle j} takes on all values between 1 and n {\displaystyle n} , the match will be counted when j = i {\displaystyle j=i} (i.e. when the test subsequence, x ( j ) {\displaystyle \mathbf {x} (j)} , is matched against itself, x ( i ) {\displaystyle \mathbf {x} (i)} ). Step 5 Define ϕ m ( r ) = 1 n ∑ i = 1 n log ⁡ ( C i m ( r ) ) {\displaystyle \phi ^{m}(r)={1 \over n}\sum _{i=1}^{n}\log(C_{i}^{m}(r))} where log {\displaystyle \log } is the natural logarithm, and for a fixed m {\displaystyle m} , r {\displaystyle r} , and n {\displaystyle n} as set in Step 2. Step 6 Define approximate entropy ( A p E n {\displaystyle \mathrm {ApEn} } ) as A p E n ( m , r , N ) ( u ) = ϕ m ( r ) − ϕ m + 1 ( r ) {\displaystyle \mathrm {ApEn} (m,r,N)(u)=\phi ^{m}(r)-\phi ^{m+1}(r)} Parameter selection Typically, choose m = 2 {\displaystyle m=2} or m = 3 {\displaystyle m=3} , whereas r {\displaystyle r} depends greatly on the application. An implementation on Physionet, which is based on Pincus, use d [ x ( i ) , x ( j ) ] < r {\displaystyle d[\mathbf {x} (i),\mathbf {x} (j)]<r} instead of d [ x ( i ) , x ( j ) ] ≤ r {\displaystyle d[\mathbf {x} (i),\mathbf {x} (j)]\leq r} in Step 4. While a concern for artificially constructed examples, it is usually not a concern in practice. == Example == Consider a sequence of N = 51 {\displaystyle N=51} samples of heart rate equally spaced in time: S N = { 85 , 80 , 89 , 85 , 80 , 89 , … } {\displaystyle \ S_{N}=\{85,80,89,85,80,89,\ldots \}} Note the sequence is periodic with a period of 3. Let's choose m = 2 {\displaystyle m=2} and r = 3 {\displaystyle r=3} (the values of m {\displaystyle m} and r {\displaystyle r} can be varied without affecting the result). Form a sequence of vectors: x ( 1 ) = [ u ( 1 ) u ( 2 ) ] = [ 85 80 ] x ( 2 ) = [ u ( 2 ) u ( 3 ) ] = [ 80 89 ] x ( 3 ) = [ u ( 3 ) u ( 4 ) ] = [ 89 85 ] x ( 4 ) = [ u ( 4 ) u ( 5 ) ] = [ 85 80 ] ⋮ {\displaystyle {\begin{aligned}\mathbf {x} (1)&=[u(1)\ u(2)]=[85\ 80]\\\mathbf {x} (2)&=[u(2)\ u(3)]=[80\ 89]\\\mathbf {x} (3)&=[u(3)\ u(4)]=[89\ 85]\\\mathbf {x} (4)&=[u(4)\ u(5)]=[85\ 80]\\&\ \ \vdots \end{aligned}}} Distance is calculated repeatedly as follows. In the first calculation, d [ x ( 1 ) , x ( 1 ) ] = max k | x ( 1 ) k − x ( 1 ) k | = 0 {\displaystyle \ d[\mathbf {x} (1),\mathbf {x} (1)]=\max _{k}|\mathbf {x} (1)_{k}-\mathbf {x} (1)_{k}|=0} which is less than r {\displaystyle r} . In the second calculation, note that | u ( 2 ) − u ( 3 ) | > | u ( 1 ) − u ( 2 ) | {\displaystyle |u(2)-u(3)|>|u(1)-u(2)|} , so d [ x ( 1 ) , x ( 2 ) ] = max k | x ( 1 ) k − x ( 2 ) k | = | u ( 2 ) − u ( 3 ) | = 9 {\displaystyle \ d[\mathbf {x} (1),\mathbf {x} (2)]=\max _{k}|\mathbf {x} (1)_{k}-\mathbf {x} (2)_{k}|=|u(2)-u(3)|=9} which is greater than r {\displaystyle r} . 
Similarly, d [ x ( 1 ) , x ( 3 ) ] = | u ( 2 ) − u ( 4 ) | = 5 > r d [ x ( 1 ) , x ( 4 ) ] = | u ( 1 ) − u ( 4 ) | = | u ( 2 ) − u ( 5 ) | = 0 < r ⋮ d [ x ( 1 ) , x ( j ) ] = ⋯ ⋮ {\displaystyle {\begin{aligned}d[\mathbf {x} (1)&,\mathbf {x} (3)]=|u(2)-u(4)|=5>r\\d[\mathbf {x} (1)&,\mathbf {x} (4)]=|u(1)-u(4)|=|u(2)-u(5)|=0<r\\&\vdots \\d[\mathbf {x} (1)&,\mathbf {x} (j)]=\cdots \\&\vdots \\\end{aligned}}} The result is a total of 17 terms x ( j ) {\displaystyle \mathbf {x} (j)} such that d [ x ( 1 ) , x ( j ) ] ≤ r {\displaystyle d[\mathbf {x} (1),\mathbf {x} (j)]\leq r} . These include x ( 1 ) , x ( 4 ) , x ( 7 ) , … , x ( 49 ) {\displaystyle \mathbf {x} (1),\mathbf {x} (4),\mathbf {x} (7),\ldots ,\mathbf {x} (49)} . In these cases, C i m ( r ) {\displaystyle C_{i}^{m}(r)} is C 1 2 ( 3 ) = 17 50 {\displaystyle \ C_{1}^{2}(3)={\frac {17}{50}}} C 2 2 ( 3 ) = 17 50 {\displaystyle \ C_{2}^{2}(3)={\frac {17}{50}}} C 3 2 ( 3 ) = 16 50 {\displaystyle \ C_{3}^{2}(3)={\frac {16}{50}}} C 4 2 ( 3 ) = 17 50 ⋯ {\displaystyle \ C_{4}^{2}(3)={\frac {17}{50}}\ \cdots } Note in Step 4, 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} for x ( i ) {\displaystyle \mathbf {x} (i)} . So the terms x ( j ) {\displaystyle \mathbf {x} (j)} such that d [ x ( 3 ) , x ( j ) ] ≤ r {\displaystyle d[\mathbf {x} (3),\mathbf {x} (j)]\leq r} include x ( 3 ) , x ( 6 ) , x ( 9 ) , … , x ( 48 ) {\displaystyle \mathbf {x} (3),\mathbf {x} (6),\mathbf {x} (9),\ldots ,\mathbf {x} (48)} , and the total number is 16. At the end of these calculations, we have ϕ 2 ( 3 ) = 1 50 ∑ i = 1 50 log ⁡ ( C i 2 ( 3 ) ) ≈ − 1.0982 {\displaystyle \phi ^{2}(3)={1 \over 50}\sum _{i=1}^{50}\log(C_{i}^{2}(3))\approx -1.0982} Then we repeat the above steps for m = 3 {\displaystyle m=3} . First form a sequence of vectors: x ( 1 ) = [ u ( 1 ) u ( 2 ) u ( 3 ) ] = [ 85 80 89 ] x ( 2 ) = [ u ( 2 ) u ( 3 ) u ( 4 ) ] = [ 80 89 85 ] x ( 3 ) = [ u ( 3 ) u ( 4 ) u ( 5 ) ] = [ 89 85 80 ] x ( 4 ) = [ u ( 4 ) u ( 5 ) u ( 6 ) ] = [ 85 80 89 ] ⋮ {\displaystyle {\begin{aligned}\mathbf {x} (1)&=[u(1)\ u(2)\ u(3)]=[85\ 80\ 89]\\\mathbf {x} (2)&=[u(2)\ u(3)\ u(4)]=[80\ 89\ 85]\\\mathbf {x} (3)&=[u(3)\ u(4)\ u(5)]=[89\ 85\ 80]\\\mathbf {x} (4)&=[u(4)\ u(5)\ u(6)]=[85\ 80\ 89]\\&\ \ \vdots \end{aligned}}} By calculating distances between vector x ( i ) , x ( j ) , 1 ≤ i ≤ 49 {\displaystyle \mathbf {x} (i),\mathbf {x} (j),1\leq i\leq 49} , we find the vectors satisfying the filtering level have the following characteristic: d [ x ( i ) , x ( i + 3 ) ] = 0 < r {\displaystyle d[\mathbf {x} (i),\mathbf {x} (i+3)]=0<r} Therefore, C 1 3 ( 3 ) = 17 49 {\displaystyle \ C_{1}^{3}(3)={\frac {17}{49}}} C 2 3 ( 3 ) = 16 49 {\displaystyle \ C_{2}^{3}(3)={\frac {16}{49}}} C 3 3 ( 3 ) = 16 49 {\displaystyle \ C_{3}^{3}(3)={\frac {16}{49}}} C 4 3 ( 3 ) = 17 49 ⋯ {\displaystyle \ C_{4}^{3}(3)={\frac {17}{49}}\ \cdots } At the end of these calculations, we have ϕ 3 ( 3 ) = 1 49 ∑ i = 1 49 log ⁡ ( C i 3 ( 3 ) ) ≈ − 1.0982 {\displaystyle \phi ^{3}(3)={1 \over 49}\sum _{i=1}^{49}\log(C_{i}^{3}(3))\approx -1.0982} Finally, A p E n = ϕ 2 ( 3 ) − ϕ 3 ( 3 ) ≈ 0.000010997 {\displaystyle \mathrm {ApEn} =\phi ^{2}(3)-\phi ^{3}(3)\approx 0.000010997} The value is very small, so it implies the sequence is regular and predictable, which is consistent with the observation. 
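The calculation in the example can be reproduced with a short Python sketch that transcribes Steps 1–6 directly (the function name and variable names are illustrative, and this is not an optimized implementation). Self-matches are counted and the ≤ r comparison of Step 4 is used; on the periodic heart-rate sequence above, with m = 2 and r = 3, the result has a magnitude of the order of 10⁻⁵, in line with the near-zero value of the worked example.

```python
import numpy as np

def approximate_entropy(u, m, r):
    """ApEn(m, r, N) of the sequence u, following Steps 1-6 above."""
    u = np.asarray(u, dtype=float)

    def phi(m):
        n = len(u) - m + 1
        # x(i) = [u(i), ..., u(i+m-1)], here with 0-based indexing
        x = np.array([u[i:i + m] for i in range(n)])
        # C_i^m(r): fraction of j with Chebyshev distance <= r (self-matches included)
        C = [np.sum(np.max(np.abs(x - xi), axis=1) <= r) / n for xi in x]
        return np.sum(np.log(C)) / n

    return phi(m) - phi(m + 1)

# The periodic sequence from the example: 85, 80, 89 repeated, N = 51.
u = [85, 80, 89] * 17
print(approximate_entropy(u, m=2, r=3))   # magnitude ~1e-5: highly regular

# A random sequence of comparable size gives a much larger ApEn.
rng = np.random.default_rng(0)
print(approximate_entropy(rng.integers(75, 95, size=51), m=2, r=3))
```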
== Python implementation == A Python implementation can be written directly from the algorithm above; see the sketch at the end of the Example section. == MATLAB implementation == MATLAB implementations are also available, for example the Fast Approximate Entropy submission on MATLAB Central and the approximateEntropy function. == Interpretation == The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations. A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn. == Advantages == The advantages of ApEn include: Lower computational demand. ApEn can be designed to work for small data samples ( N < 50 {\displaystyle N<50} points) and can be applied in real time. Less effect from noise. If the data are noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present in the data. == Limitations == The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of log ⁡ ( 0 ) {\displaystyle \log(0)} in the calculations. This step might introduce bias in ApEn, which causes ApEn to have two poor properties in practice: ApEn is heavily dependent on the record length and is uniformly lower than expected for short records. It lacks relative consistency. That is, if the ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested. == Applications == ApEn has been applied to classify electroencephalography (EEG) in psychiatric diseases, such as schizophrenia, epilepsy, and addiction. == See also == Recurrence quantification analysis Sample entropy == References ==
Wikipedia/Approximate_entropy
In various science/engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation it is useful to estimate the differential entropy of a system or process, given some observations. The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks. The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate, although the nature of the (suspected) distribution of the data may also be a factor, as well as the sample size and the size of the alphabet of the probability distribution. == Histogram estimator == The histogram approach uses the idea that the differential entropy of a probability distribution f ( x ) {\displaystyle f(x)} for a continuous random variable x {\displaystyle x} , h ( X ) = − ∫ X f ( x ) log ⁡ f ( x ) d x {\displaystyle h(X)=-\int _{\mathbb {X} }f(x)\log f(x)\,dx} can be approximated by first approximating f ( x ) {\displaystyle f(x)} with a histogram of the observations, and then finding the discrete entropy of a quantization of x {\displaystyle x} H ( X ) = − ∑ i = 1 n f ( x i ) log ⁡ ( f ( x i ) w ( x i ) ) {\displaystyle H(X)=-\sum _{i=1}^{n}f(x_{i})\log \left({\frac {f(x_{i})}{w(x_{i})}}\right)} with bin probabilities given by that histogram. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution ), where w {\displaystyle w} is the width of the i {\displaystyle i} th bin. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory. A method better suited for multidimensional probability density functions (pdf) is to first make a pdf estimate with some method, and then, from the pdf estimate, compute the entropy. A useful pdf estimate method is e.g. Gaussian mixture modeling (GMM), where the expectation maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdf's approximating the data pdf. == Estimates based on sample-spacings == If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high variance, but can be improved, for example by thinking about the space between a given value and the one m away from it, where m is some fixed number. The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks. One of the main drawbacks with this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, using analogous methods, some multidimensional entropy estimators have been developed. == Estimates based on nearest-neighbours == For each point in our dataset, we can find the distance to its nearest neighbour. We can in fact estimate the entropy from the distribution of the nearest-neighbour-distance of our datapoints. 
(In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.) == Bayesian estimator == In the under-sampled regime, having a prior on the distribution can help the estimation. One such Bayesian estimator, known as the NSB (Nemenman–Shafee–Bialek) estimator, was proposed in the neuroscience context. The NSB estimator uses a mixture-of-Dirichlet prior, chosen such that the induced prior over the entropy is approximately uniform. == Estimates based on expected entropy == A new approach to the problem of entropy evaluation is to compare the expected entropy of a sample of a random sequence with the calculated entropy of the sample. The method gives very accurate results, but it is limited to calculations of random sequences modeled as Markov chains of the first order with small values of bias and correlations. This is the first known method that takes into account the size of the sample sequence and its impact on the accuracy of the calculation of entropy. == Deep neural network estimator == A deep neural network (DNN) can be used to estimate the joint entropy; such an estimator is called the Neural Joint Entropy Estimator (NJEE). Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator that outperforms other methods in the case of large alphabet sizes. == References ==
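As an illustration of the histogram approach described above, the following Python sketch estimates differential entropy by binning a sample and applying the formula H(X) = −Σ f(x_i) log(f(x_i)/w(x_i)), with the bin probabilities and widths taken from the histogram (the function name, bin count and test distribution are arbitrary choices for this illustration; as noted above, the estimate is biased).

```python
import numpy as np

def histogram_entropy(samples, bins=30):
    """Histogram estimate of differential entropy, in nats."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()          # bin probabilities f(x_i)
    nz = p > 0                         # skip empty bins to avoid log(0)
    # H = -sum p_i * log(p_i / w_i): discrete entropy plus a bin-width correction
    return -np.sum(p[nz] * np.log(p[nz] / widths[nz]))

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
print(histogram_entropy(x))                 # close to the exact value below
print(0.5 * np.log(2 * np.pi * np.e))       # exact differential entropy of a standard Gaussian
```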
Wikipedia/Entropy_estimation
Plant diseases are diseases in plants caused by pathogens (infectious organisms) and environmental conditions (physiological factors). Organisms that cause infectious disease include fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes and parasitic plants. Not included are ectoparasites like insects, mites, vertebrates, or other pests that affect plant health by eating plant tissues and causing injury that may admit plant pathogens. The study of plant disease is called plant pathology. == Plant pathogens == === Fungi === Most phytopathogenic fungi are Ascomycetes or Basidiomycetes. They reproduce both sexually and asexually via the production of spores and other structures. Spores may be spread long distances by air or water, or they may be soil borne. Many soil-inhabiting fungi are capable of living saprotrophically, carrying out part of their life cycle in the soil. These are facultative saprotrophs. Fungal diseases may be controlled through the use of fungicides and other agricultural practices. However, new races of fungi often evolve that are resistant to various fungicides. Biotrophic fungal pathogens colonize living plant tissue and obtain nutrients from living host cells. Necrotrophic fungal pathogens infect and kill host tissue and extract nutrients from the dead host cells. Significant fungal plant pathogens include: ==== Ascomycetes ==== Fusarium spp. (Fusarium wilt disease) Thielaviopsis spp. (canker rot, black root rot, Thielaviopsis root rot) Verticillium spp. Magnaporthe grisea (rice blast) Sclerotinia sclerotiorum (cottony rot) ==== Basidiomycetes ==== Ustilago spp. (smuts) Rhizoctonia spp. Phakopsora pachyrhizi (soybean rust) Puccinia spp. (severe rusts of cereals and grasses) Armillaria spp. (honey fungus species, virulent pathogens of trees) === Fungus-like organisms === ==== Oomycetes ==== The oomycetes are fungus-like organisms among the Stramenopiles. They include some of the most destructive plant pathogens, such as the causal agents of potato late blight, root rot, and sudden oak death. Despite not being closely related to the fungi, the oomycetes have developed similar infection strategies, using effector proteins to turn off a plant's defenses. ==== Phytomyxea ==== Some slime molds in Phytomyxea cause important diseases, including clubroot in cabbage and its relatives and powdery scab in potatoes. These are caused by species of Plasmodiophora and Spongospora, respectively. === Bacteria === Most bacteria associated with plants are saprotrophic and do no harm to the plant itself. However, a small number, around 100 known species, cause disease, especially in subtropical and tropical regions of the world. Most plant pathogenic bacteria are bacilli. Erwinia uses cell wall–degrading enzymes to cause soft rot. Agrobacterium changes the level of the phytohormone auxin to cause tumours. Significant bacterial plant pathogens include: Burkholderia Pseudomonadota Xanthomonas spp. Pseudomonas spp. Pseudomonas syringae pv. tomato causes tomato plants to produce less fruit, and it "continues to adapt to the tomato by minimizing its recognition by the tomato immune system." ==== Mollicutes ==== Phytoplasma and Spiroplasma are obligate intracellular parasites: bacteria that lack cell walls and, like the mycoplasmas (which are human pathogens), belong to the class Mollicutes. Their cells are extremely small, 1 to 2 micrometres across. They tend to have small genomes (roughly between 0.5 and 2 Mb).
They are normally transmitted by leafhoppers (cicadellids) and psyllids, both sap-sucking insect vectors. These inject the bacteria into the plant's phloem, where it reproduces. === Viruses === Many plant viruses cause only a loss of crop yield. Therefore, it is not economically viable to try to control them, except when they infect perennial species, such as fruit trees. Most plant viruses have small, single-stranded RNA genomes. Some also have double stranded RNA or single or double stranded DNA. These may encode only three or four proteins: a replicase, a coat protein, a movement protein to facilitate cell to cell movement through plasmodesmata, and sometimes a protein that allows transmission by a vector. Plant viruses are generally transmitted by a vector, but mechanical and seed transmission also occur. Vectors are often insects such as aphids; others are fungi, nematodes, and protozoa. In many cases, the insect and virus are specific for virus transmission such as the beet leafhopper that transmits the curly top virus causing disease in several crop plants. === Nematodes === Some nematodes parasitize plant roots. They are a problem in tropical and subtropical regions. Potato cyst nematodes (Globodera pallida and G. rostochiensis) are widely distributed in Europe and the Americas, causing $300 million worth of damage in Europe annually. Root knot nematodes have quite a large host range, they parasitize plant root systems and thus directly affect the uptake of water and nutrients needed for normal plant growth and reproduction, whereas cyst nematodes tend to be able to infect only a few species. Nematodes are able to cause radical changes in root cells in order to facilitate their lifestyle. === Protozoa === A few plant diseases are caused by protozoa such as Phytomonas, a kinetoplastid. They are transmitted as durable zoospores that may be able to survive in a resting state in the soil for many years. Further, they can transmit plant viruses. When the motile zoospores come into contact with a root hair they produce a plasmodium which invades the roots. == Physiological plant disorders == Some abiotic disorders can be confused with pathogen-induced disorders. Abiotic causes include natural processes such as drought, frost, snow and hail; flooding and poor drainage; nutrient deficiency; deposition of mineral salts such as sodium chloride and gypsum; windburn and breakage by storms; and wildfires. == Epidemics == Plants are subject to disease epidemics. === Port and border inspection and quarantine === The introduction of harmful non native organisms into a country can be reduced by controlling human traffic (e.g., the Australian Quarantine and Inspection Service). Global trade provides unprecedented opportunities for the introduction of plant pests. In the United States, even to get a better estimate of the number of such introductions would require a substantial increase in inspections. In Australia a similar shortcoming of understanding has a different origin: Port inspections are not very useful because inspectors know too little about taxonomy. There are often pests that the Australian Government has prioritised as harmful to be kept out of the country, but which have near taxonomic relatives that confuse the issue. X-ray and electron-beam/E-beam irradiation of food has been trialed as a quarantine treatment for fruit commodities originating from Hawaii. 
The US FDA (Food and Drug Administration), USDA APHIS (Animal and Plant Health Inspection Service), producers, and consumers were all accepting of the results: more thorough pest eradication and less taste degradation than with heat treatment. The International Plant Protection Convention (IPPC) anticipates that molecular diagnostics for inspections will continue to improve. Between 2020 and 2030, the IPPC expects continued technological improvement to lower costs and improve performance, albeit not for less developed countries unless funding changes. === Chemical === Many natural and synthetic compounds can be employed to combat plant diseases. This method works by directly eliminating disease-causing organisms or curbing their spread; however, it typically has too broad an effect to be good for the local ecosystem. From an economic standpoint, all but the simplest natural additives may disqualify a product from "organic" status, potentially reducing the value of the yield. === Biological === Crop rotation is a traditional and sometimes effective means of preventing pests and diseases from becoming well established, alongside other benefits. Other biological methods include inoculation. Protection against infection by Agrobacterium tumefaciens, which causes gall diseases in many plants, can be provided by dipping cuttings in suspensions of Agrobacterium radiobacter before inserting them in the ground to take root. == Economic impact == Plant diseases cause major economic losses for farmers worldwide. Across large regions and many crop species, it is estimated that diseases typically reduce plant yields by 10% every year in more developed settings, while yield loss to diseases often exceeds 20% in less developed settings. The Food and Agriculture Organization estimates that pests and diseases are responsible for about 25% of crop loss. To address this, new methods are needed to detect diseases and pests early, such as novel sensors that detect plant odours, and spectroscopy and biophotonics that are able to diagnose plant health and metabolism. Rankings of the most costly diseases of the most produced crops worldwide have been compiled as of 2018. == See also == Burl or Burr Common names of plant diseases Plant disease forecasting Stunting == Notes == == References == == External links == Pacific Northwest Fungi, online mycology journal with papers on fungal plant pathogens The Pest and Pathogens Glossary
Wikipedia/Plant_disease
The Ricker model, named after Bill Ricker, is a classic discrete population model which gives the expected number N t+1 (or density) of individuals in generation t + 1 as a function of the number of individuals in the previous generation, N t + 1 = N t e r ( 1 − N t k ) . {\displaystyle N_{t+1}=N_{t}e^{r\left(1-{\frac {N_{t}}{k}}\right)}.\,} Here r is interpreted as an intrinsic growth rate and k as the carrying capacity of the environment. Unlike in some other models, such as the logistic map, the carrying capacity in the Ricker model is not a hard barrier that cannot be exceeded by the population; it only determines the overall scale of the population. The Ricker model was introduced in 1954 by Ricker in the context of stock and recruitment in fisheries. The model can be used to predict the number of fish that will be present in a fishery. Subsequent work has derived the model under other assumptions such as scramble competition, within-year resource-limited competition, or even as the outcome of source-sink Malthusian patches linked by density-dependent dispersal. The Ricker model is a limiting case of the Hassell model, which takes the form N t + 1 = k 1 N t ( 1 + k 2 N t ) c . {\displaystyle N_{t+1}=k_{1}{\frac {N_{t}}{\left(1+k_{2}N_{t}\right)^{c}}}.} When c = 1, the Hassell model is simply the Beverton–Holt model. == See also == Population dynamics of fisheries == Notes == == References == Brännström A and Sumpter DJ (2005) "The role of competition and clustering in population dynamics" Proc Biol Sci., 272(1576): 2065–72. Bravo de la Parra, R., Marvá, M., Sánchez, E. and Sanz, L. (2013) Reduction of discrete dynamical systems with applications to dynamics population models. Math Model Nat Phenom. 8(6). pp 107–129 Geritz SA and Kisdi E (2004). "On the mechanistic underpinning of discrete-time population models with complex dynamics". J Theor Biol., 21 May 2004;228(2):261–9. Marvá, M., Sánchez, E., Bravo de la Parra, R., Sanz, L. (2009). Reduction of slow–fast discrete models coupling migration and demography. J Theor Biol. 258: 371–379. Noakes, David L. G. (Ed.) (2006) Bill Ricker: an appreciation. Springer Japan, ISBN 978-1-4020-4707-7. Ricker, W. E. (1954) Stock and Recruitment. Journal of the Fisheries Research Board of Canada, 11(5): 559–623. doi:10.1139/f54-039 Ricker, W. E. (1975) Computation and Interpretation of Biological Statistics of Fish Populations. Bulletin of the Fisheries Research Board of Canada, No 119. Ottawa.
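As a numerical illustration of the recurrence defined above, the short Python sketch below iterates the Ricker map (parameter values are arbitrary choices for the illustration). For small r the population settles near the carrying capacity k, while sufficiently large r produces oscillations and chaotic-looking fluctuations, as in other discrete logistic-type maps.

```python
import numpy as np

def ricker(n0, r, k, steps):
    """Iterate N_{t+1} = N_t * exp(r * (1 - N_t / k)) for `steps` generations."""
    n = [float(n0)]
    for _ in range(steps):
        n.append(n[-1] * np.exp(r * (1.0 - n[-1] / k)))
    return np.array(n)

print(ricker(n0=10, r=0.5, k=100, steps=50)[-5:])   # settles near the carrying capacity k = 100
print(ricker(n0=10, r=2.8, k=100, steps=50)[-5:])   # irregular, chaotic-looking fluctuations
```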
Wikipedia/Ricker_model
In demography and population dynamics, the rate of natural increase (RNI), also known as natural population change, is defined as the birth rate minus the death rate of a particular population, over a particular time period. It is typically expressed either as a number per 1,000 individuals in the population or as a percentage. RNI can be either positive or negative. It contrasts with total population change by ignoring net migration. The RNI gives demographers an insight into how a region's population is evolving, and these analyses can inform government attempts to shape RNI. == Examples == Suppose a population of 5,000 individuals experiences 1,150 live births and 900 deaths over the course of one year. To show the RNI over that year as a percentage, the equation would be (1,150 – 900) ÷ 5,000 = 0.05 = +5% To show the RNI as a number per 1,000 individuals in the population, the equation would be (1,150 – 900) ÷ (5,000/1,000) = 250 ÷ 5 = +50 It can also be shown as natural births per 1,000 minus deaths per 1,000 (1,150 ÷ 5) – (900 ÷ 5) = 230 – 180 = +50 To convert the RNI per 1,000 population to a percentage, divide it by 1,000. The equation would be +50 ÷ 1,000 = 0.05 = +5% == Uses == The rate of natural increase gives demographers an idea of how a region's population is shifting over time. RNI excludes in-migration and out-migration, giving an indication of population growth based only on births and deaths. Comparing natural population change with total population change shows which is dominant for a particular region. Looking at this difference across regions reveals those that are changing mainly because births exceed deaths and those changing mainly because of migration; such an analysis has been mapped for the US, for example. The trend of RNI over time can indicate what stage of the Demographic Transition Model (DTM) a region or country is in. == National efforts to affect RNI == Government attempts to shape the RNI of a region or country are common around the world. Policies can either encourage or discourage an increase in birth rates. For example, during the COVID-19 crisis Singapore offered families a "pandemic baby bonus" to encourage a higher birth rate, thereby increasing RNI. The US has considered similar policies. Another example was China's one-child policy, intended to decrease birth rates and thereby decrease the RNI. A country with a good infrastructure to support families, women's health, and maternal/child health would likely have lower death rates from infant or maternal mortality, which would increase RNI. == See also == List of countries by rate of natural increase Birth rate Mortality rate Population growth == References ==
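The arithmetic of the Examples section can be collected into a few lines of Python (the function and argument names are illustrative); the function returns the rate of natural increase both per 1,000 individuals and as a percentage.

```python
def rate_of_natural_increase(births, deaths, population):
    """Return (RNI per 1,000 individuals, RNI as a percentage)."""
    per_1000 = (births - deaths) / (population / 1000)
    percent = (births - deaths) / population * 100
    return per_1000, percent

# The example above: 1,150 births and 900 deaths in a population of 5,000.
print(rate_of_natural_increase(1150, 900, 5000))   # (50.0, 5.0)
```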
Wikipedia/Rate_of_natural_increase
Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns, etc. == History == The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic. The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality, in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks" which according to Daley and Gani is "a theory that is now well established among modern epidemiologists". The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory. In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour. The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics. Recently, agent-based models (ABMs) have been used in exchange for simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs, in spite of their complexity and requiring high computational power, have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated. == Assumptions == Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful. Rectangular and stationary age distribution, i.e., everybody in the population lives to age L and then dies, and for each age (up to L) there is the same number of people in the population. This is often well-justified for developed countries where there is a low infant mortality and much of the population lives to the life expectancy. Homogeneous mixing of the population, i.e., individuals of the population under scrutiny assort and make contact at random and do not mix mostly in a smaller subgroup. 
This assumption is rarely justified because social structure is widespread. For example, most people in London only make contact with other Londoners. Further, within London there are smaller subgroups, such as the Turkish community or teenagers (just to give two examples), who mix with each other more than with people outside their group. However, homogeneous mixing is a standard assumption to make the mathematics tractable. == Types of epidemic models == === Stochastic === "Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods. === Deterministic === When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic. The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model. === Kinetic and mean-field === Formally, these models belong to the class of deterministic models; however, they incorporate heterogeneous social features into the dynamics, such as individuals' levels of sociality, opinion, wealth, and geographic location, which profoundly influence disease propagation. These models are typically represented by partial differential equations, in contrast to classical models described as systems of ordinary differential equations. Following the derivation principles of kinetic theory, they provide a more rigorous description of epidemic dynamics by starting from agent-based interactions. == Sub-exponential growth == A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4 and so on, with the number of infected doubling every generation. It is analogous to a game of tag where 1 person tags 2, those 2 tag 4 others who've never been tagged, and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged. Thus this model of an epidemic leads to a curve that grows exponentially until it crashes to zero once the entire population has been infected; that is, it predicts no herd immunity and none of the peak followed by gradual decline that is seen in reality. == Epidemic Models on Networks == Epidemics can be modeled as diseases spreading over networks of contact between people. Such a network can be represented mathematically with a graph and is called the contact network. Every node in a contact network is a representation of an individual and each link (edge) between a pair of nodes represents the contact between them. Links in the contact networks may be used to transmit the disease between the individuals and each disease has its own dynamics on top of its contact network.
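To make the notion of a contact network concrete before its degree moments are used in the reproduction-number formula discussed next, here is a minimal sketch. It assumes Python with only the standard library, and the toy edge list is invented purely for illustration; it is not data from any real contact study.

```python
from collections import defaultdict

# Toy contact network: each pair is an undirected contact between two individuals.
# The edge list is purely illustrative.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (4, 7)]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

degrees = list(degree.values())
n = len(degrees)

mean_k = sum(degrees) / n                  # <k>:   mean degree of the network
mean_k2 = sum(k * k for k in degrees) / n  # <k^2>: second moment of the degree distribution

print(f"<k>   = {mean_k:.3f}")
print(f"<k^2> = {mean_k2:.3f}")
```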
The combination of disease dynamics under the influence of interventions, if any, on a contact network may be modeled with another network, known as a transmission network. In a transmission network, all the links are responsible for transmitting the disease. If such a network is a locally tree-like network, meaning that any local neighborhood in such a network takes the form of a tree, then the basic reproduction number can be written in terms of the average excess degree of the transmission network such that: R 0 = ⟨ k 2 ⟩ ⟨ k ⟩ − 1 , {\displaystyle R_{0}={\frac {\langle k^{2}\rangle }{\langle k\rangle }}-1,} where ⟨ k ⟩ {\displaystyle {\langle k\rangle }} is the mean-degree (average degree) of the network and ⟨ k 2 ⟩ {\displaystyle {\langle k^{2}\rangle }} is the second moment of the transmission network degree distribution. It is, however, not always straightforward to find the transmission network out of the contact network and the disease dynamics. For example, if a contact network can be approximated with an Erdős–Rényi graph with a Poissonian degree distribution, and the disease spreading parameters are as defined in the example above, such that β {\displaystyle \beta } is the transmission rate per person and the disease has a mean infectious period of 1 γ {\displaystyle {\dfrac {1}{\gamma }}} , then the basic reproduction number is R 0 = β γ ⟨ k ⟩ {\displaystyle R_{0}={\dfrac {\beta }{\gamma }}{\langle k\rangle }} since ⟨ k 2 ⟩ − ⟨ k ⟩ 2 = ⟨ k ⟩ {\displaystyle {\langle k^{2}\rangle }-{\langle k\rangle }^{2}={\langle k\rangle }} for a Poisson distribution. == Reproduction number == The basic reproduction number (denoted by R0) is a measure of how transferable a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will spread, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person so the disease will spread; if R0 < 1, then each person infects fewer than one person on average so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease. == Endemic steady state == An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow and there will be an epidemic, any less and the disease will die out). In mathematical terms, that is: R 0 S = 1. {\displaystyle \ R_{0}S\ =1.} The basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S) must be one (since those who are not susceptible do not feature in our calculations as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptible proportion: for example, R0 = 0.5 would require S = 2, a proportion that exceeds the population size.
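The network and endemic relations above lend themselves to a quick numerical check. The sketch below is a minimal illustration in plain Python; the degree moments and rates are placeholder values (the moments from the earlier sketch could be substituted), not measurements of any real disease.

```python
# Reproduction number from the degree moments of a locally tree-like transmission network:
# R0 = <k^2>/<k> - 1.  The moment values below are illustrative placeholders.
mean_k, mean_k2 = 2.0, 5.5
R0_tree = mean_k2 / mean_k - 1.0
print(f"R0 from transmission-network moments: {R0_tree:.3f}")

# Erdos-Renyi contact network with Poissonian degrees, per-person transmission rate beta
# and mean infectious period 1/gamma: R0 = (beta / gamma) * <k>   (illustrative rates).
beta, gamma = 0.3, 0.2
R0_er = (beta / gamma) * mean_k
print(f"R0 for the Erdos-Renyi example: {R0_er:.3f}")

# Endemic steady state R0 * S = 1: the susceptible fraction must settle at S = 1/R0.
for R0 in (R0_tree, R0_er, 0.5):
    S = 1.0 / R0
    note = "" if S <= 1.0 else " (impossible: the required susceptible fraction exceeds 1)"
    print(f"R0 = {R0:.2f} -> endemic susceptible fraction S = {S:.2f}{note}")
```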
Assume the rectangular and stationary age distribution and let also the ages of infection have the same distribution for each birth year. Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by: S = A L . {\displaystyle S={\frac {A}{L}}.} We reiterate that L is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give: S = 1 R 0 . {\displaystyle S={\frac {1}{R_{0}}}.} Therefore, due to the transitive property: 1 R 0 = A L ⇒ R 0 = L A . {\displaystyle {\frac {1}{R_{0}}}={\frac {A}{L}}\Rightarrow R_{0}={\frac {L}{A}}.} This provides a simple way to estimate the parameter R0 using easily available data. For a population with an exponential age distribution, R 0 = 1 + L A . {\displaystyle R_{0}=1+{\frac {L}{A}}.} This allows the basic reproduction number of a disease to be estimated from A and L in either type of population distribution. == Compartmental models in epidemiology == Compartmental models are formulated as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model for modelling epidemics. Multiple other types of compartmental models are also employed. === The SIR model === In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S ( t ) {\displaystyle S(t)} ; infected, I ( t ) {\displaystyle I(t)} ; and recovered, R ( t ) {\displaystyle R(t)} . The compartments used for this model consist of three classes: S ( t ) {\displaystyle S(t)} , or those susceptible to the disease of the population. I ( t ) {\displaystyle I(t)} denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category. R ( t ) {\displaystyle R(t)} is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others. === Other compartmental models === There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). == Infectious disease dynamics == Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.
Research topics include: antigenic shift; epidemiological networks; evolution and spread of resistance; immuno-epidemiology; intra-host dynamics; pandemics; pathogen population genetics; persistence of pathogens within hosts; phylodynamics; role and identification of infection reservoirs; role of host genetic factors; spatial epidemiology; statistical and mathematical tools and innovations; strain (biology) structure and interactions; transmission, spread and control of infection; and virulence. == Mathematics of mass vaccination == If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to disease. Examples include the eradication of smallpox, with the last wild case in 1977, and the certification of the eradication of indigenous transmission of 2 of the 3 types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012). The herd immunity level will be denoted q. Recall that, for a stable state: R 0 ⋅ S = 1. {\displaystyle R_{0}\cdot S=1.} In turn, R 0 = N S = μ N E ⁡ ( T L ) μ N E ⁡ [ min ( T L , T S ) ] = E ⁡ ( T L ) E ⁡ [ min ( T L , T S ) ] , {\displaystyle R_{0}={\frac {N}{S}}={\frac {\mu N\operatorname {E} (T_{L})}{\mu N\operatorname {E} [\min(T_{L},T_{S})]}}={\frac {\operatorname {E} (T_{L})}{\operatorname {E} [\min(T_{L},T_{S})]}},} which is approximately: E ⁡ ( T L ) E ⁡ ( T S ) = 1 + λ μ = β N v . {\displaystyle {\frac {\operatorname {\operatorname {E} } (T_{L})}{\operatorname {\operatorname {E} } (T_{S})}}=1+{\frac {\lambda }{\mu }}={\frac {\beta N}{v}}.} S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then: R 0 ⋅ ( 1 − q ) = 1 , 1 − q = 1 R 0 , q = 1 − 1 R 0 . {\displaystyle {\begin{aligned}&R_{0}\cdot (1-q)=1,\\[6pt]&1-q={\frac {1}{R_{0}}},\\[6pt]&q=1-{\frac {1}{R_{0}}}.\end{aligned}}} Remember that this is the threshold level. Transmission will only die out if the proportion of immune individuals exceeds this level as a result of a mass vaccination programme. We have just calculated the critical immunization threshold (denoted qc). It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population. q c = 1 − 1 R 0 . {\displaystyle q_{c}=1-{\frac {1}{R_{0}}}.} The fraction of the population that is never infected, expressed in terms of the final epidemic size p, can be written as: lim t → ∞ S ( t ) = e − ∫ 0 ∞ λ ( t ) d t = 1 − p . {\displaystyle \lim _{t\to \infty }S(t)=e^{-\int _{0}^{\infty }\lambda (t)\,dt}=1-p.} Hence, p = 1 − e − ∫ 0 ∞ β I ( t ) d t = 1 − e − R 0 p . {\displaystyle p=1-e^{-\int _{0}^{\infty }\beta I(t)\,dt}=1-e^{-R_{0}p}.} Solving for R 0 {\displaystyle R_{0}} , we obtain: R 0 = − ln ⁡ ( 1 − p ) p . {\displaystyle R_{0}={\frac {-\ln(1-p)}{p}}.} === When mass vaccination cannot exceed the herd immunity === If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed qc. Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission. Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1.
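Before following up on this sub-threshold case, the threshold and final-size relations just derived can be checked numerically. The following is a minimal sketch in Python; the R0 values are illustrative, and fixed-point iteration is simply one convenient way to solve the implicit final-size equation, not a prescribed method.

```python
import math

def critical_immunisation_threshold(R0: float) -> float:
    """q_c = 1 - 1/R0: the minimum immune fraction needed for the infection to die out."""
    return 1.0 - 1.0 / R0

def final_size(R0: float, tol: float = 1e-12) -> float:
    """Solve p = 1 - exp(-R0 * p) for the fraction of the population ever infected."""
    p = 0.5  # any starting guess in (0, 1) converges for R0 > 1
    for _ in range(10_000):
        p_next = 1.0 - math.exp(-R0 * p)
        if abs(p_next - p) < tol:
            break
        p = p_next
    return p

for R0 in (1.5, 2.5, 5.0):  # illustrative basic reproduction numbers
    qc = critical_immunisation_threshold(R0)
    p = final_size(R0)
    R0_recovered = -math.log(1.0 - p) / p  # R0 = -ln(1 - p)/p, recovered from the final size
    print(f"R0={R0:.1f}  q_c={qc:.3f}  final size p={p:.3f}  R0 recovered={R0_recovered:.3f}")
```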
The vaccination programme changes R0 to Rq where R q = R 0 ( 1 − q ) {\displaystyle R_{q}=R_{0}(1-q)} . This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now since they are immune. As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated. Recall the relation that linked R0, A and L. Assuming that life expectancy has not changed, now: R q = L A q , {\displaystyle R_{q}={\frac {L}{A_{q}}},} A q = L R q = L R 0 ( 1 − q ) . {\displaystyle A_{q}={\frac {L}{R_{q}}}={\frac {L}{R_{0}(1-q)}}.} But R0 = L/A so: A q = L ( L / A ) ( 1 − q ) = A L L ( 1 − q ) = A 1 − q . {\displaystyle A_{q}={\frac {L}{(L/A)(1-q)}}={\frac {AL}{L(1-q)}}={\frac {A}{1-q}}.} Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of vaccine. === When mass vaccination exceeds the herd immunity === If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication. Elimination: interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects fewer than one other person on average; it is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunization threshold. Eradication: elimination everywhere at the same time, such that the infectious agent dies out (for example, smallpox and rinderpest). == Reliability == Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past pandemics, such as SARS, SARS-CoV-2, Swine flu, MERS and Ebola. == See also == == References == == Sources == Barabási AL (2016). Network Science. Cambridge University Press. ISBN 978-1-107-07626-6. Brauer F, Castillo-Chavez C (2012). Mathematical Models in Population Biology and Epidemiology. Texts in Applied Mathematics. Vol. 40. doi:10.1007/978-1-4614-1686-9. ISBN 978-1-4614-1685-2. Daley DJ, Gani JM (1999). Epidemic Modelling: An Introduction. Cambridge University Press. ISBN 978-0-521-01467-0. Hamer WH (1929). Epidemiology, Old and New. Macmillan. hdl:2027/mdp.39015006657475. OCLC 609575950. Ross R (1910). The Prevention of Malaria. Dutton. hdl:2027/uc2.ark:/13960/t02z1ds0q. OCLC 610268760. == Further reading == == External links == Software Model-Builder: Interactive (GUI-based) software to build, simulate, and analyze ODE models. GLEaMviz Simulator: Enables simulation of emerging infectious diseases spreading across the world. STEM: Open source framework for Epidemiological Modeling available through the Eclipse Foundation. R package surveillance: Temporal and Spatio-Temporal Modeling and Monitoring of Epidemic Phenomena
Wikipedia/Mathematical_modelling_in_epidemiology
The Lotka–Volterra equations, also known as the Lotka–Volterra predator–prey model, are a pair of first-order nonlinear differential equations, frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations: d x d t = α x − β x y , d y d t = − γ y + δ x y , {\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=\alpha x-\beta xy,\\{\frac {dy}{dt}}&=-\gamma y+\delta xy,\end{aligned}}} where the variable x is the population density of prey (for example, the number of rabbits per square kilometre); the variable y is the population density of some predator (for example, the number of foxes per square kilometre); d y d t {\displaystyle {\tfrac {dy}{dt}}} and d x d t {\displaystyle {\tfrac {dx}{dt}}} represent the instantaneous growth rates of the two populations; t represents time; The prey's parameters, α and β, describe, respectively, the maximum prey per capita growth rate, and the effect of the presence of predators on the prey death rate. The predator's parameters, γ, δ, respectively describe the predator's per capita death rate, and the effect of the presence of prey on the predator's growth rate. All parameters are positive and real. The solution of the differential equations is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping. The Lotka–Volterra system of equations is an example of a Kolmogorov population model (not to be confused with the better known Kolmogorov equations), which is a more general framework that can model the dynamics of ecological systems with predator–prey interactions, competition, disease, and mutualism. == Biological interpretation and model assumptions == The prey are assumed to have an unlimited food supply and to reproduce exponentially, unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation on the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero, then there can be no predation. With these two terms the prey equation above can be interpreted as follows: the rate of change of the prey's population is given by its own growth rate minus the rate at which it is preyed upon. The term δxy represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). The term γy represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses that the rate of change of the predator's population depends upon the rate at which it consumes prey, minus its intrinsic death rate. The Lotka–Volterra predator-prey model makes a number of assumptions about the environment and biology of the predator and prey populations: The prey population finds ample food at all times. The food supply of the predator population depends entirely on the size of the prey population. The rate of change of population is proportional to its size. During the process, the environment does not change in favour of one species, and genetic adaptation is inconsequential. Predators have limitless appetite. 
Both populations can be described by a single variable. This amounts to assuming that the populations do not have a spatial or age distribution that contributes to the dynamics. == Biological relevance of the model == None of the assumptions above are likely to hold for natural populations. Nevertheless, the Lotka–Volterra model shows two important properties of predator and prey populations and these properties often extend to variants of the model in which these assumptions are relaxed: Firstly, the dynamics of predator and prey populations have a tendency to oscillate. Fluctuating numbers of predators and prey have been observed in natural populations, such as the lynx and snowshoe hare data of the Hudson's Bay Company and the moose and wolf populations in Isle Royale National Park. Secondly, the population equilibrium of this model has the property that the prey equilibrium density (given by x = γ / δ {\displaystyle x=\gamma /\delta } ) depends on the predator's parameters, and the predator equilibrium density (given by y = α / β {\displaystyle y=\alpha /\beta } ) on the prey's parameters. A consequence of this is that an increase in, for instance, the prey growth rate, α {\displaystyle \alpha } , leads to an increase in the predator equilibrium density, but not the prey equilibrium density. Making the environment better for the prey benefits the predator, not the prey (this is related to the paradox of the pesticides and to the paradox of enrichment). A demonstration of this phenomenon is provided by the increased percentage of predatory fish caught during the years of World War I (1914–18), when the prey growth rate was increased due to a reduced fishing effort. A further example is provided by the experimental iron fertilization of the ocean. In several experiments large amounts of iron salts were dissolved in the ocean. The expectation was that iron, which is a limiting nutrient for phytoplankton, would boost growth of phytoplankton and that it would sequester carbon dioxide from the atmosphere. The addition of iron typically leads to a short bloom in phytoplankton, which is quickly consumed by other organisms (such as small fish or zooplankton) and limits the effect of enrichment mainly to increased predator density, which in turn limits the carbon sequestration. This is as predicted by the equilibrium population densities of the Lotka–Volterra predator-prey model, and is a feature that carries over to more elaborate models in which the restrictive assumptions of the simple model are relaxed. == Applications to economics and marketing == The Lotka–Volterra model has additional applications to areas such as economics and marketing. It can be used to describe the dynamics in a market with several competitors, complementary platforms and products, a sharing economy, and more. There are situations in which one of the competitors drives the other competitors out of the market and other situations in which the market reaches an equilibrium where each firm stabilizes on its market share. It is also possible to describe situations in which there are cyclical changes in the industry or chaotic situations with no equilibrium and changes are frequent and unpredictable. In economics, the Phillips curve, which shows the statistical relationship between unemployment and the rate of change in nominal wages, has been connected to predator–prey dynamics through the Goodwin model.
This model reinterprets the dynamics of the biological prey-predator interaction, as described by the Lotka-Volterra model, in economic terms. The way the two species interact in this model led Goodwin to draw parallels with the Marxian class conflict. The Kolmogorov generalization of the prey-predator model, along with further developments of the Goodwin model, has extended these ideas. == History == The Lotka–Volterra predator–prey model was initially proposed by Alfred J. Lotka in the theory of autocatalytic chemical reactions in 1910. This was effectively the logistic equation, originally derived by Pierre François Verhulst. In 1920 Lotka extended the model, via Andrey Kolmogorov, to "organic systems" using a plant species and a herbivorous animal species as an example and in 1925 he used the equations to analyse predator–prey interactions in his book on biomathematics. The same set of equations was published in 1926 by Vito Volterra, a mathematician and physicist, who had become interested in mathematical biology. Volterra's enquiry was inspired through his interactions with the marine biologist Umberto D'Ancona, who was courting his daughter at the time and later was to become his son-in-law. D'Ancona studied the fish catches in the Adriatic Sea and had noticed that the percentage of predatory fish caught had increased during the years of World War I (1914–18). This puzzled him, as the fishing effort had been very much reduced during the war years and, since prey fish are the preferred catch, one would intuitively have expected the percentage of prey fish to increase. Volterra developed his model to explain D'Ancona's observation and did this independently from Alfred Lotka. He did credit Lotka's earlier work in his publication, after which the model became known as the "Lotka-Volterra model". The model was later extended to include density-dependent prey growth and a functional response of the form developed by C. S. Holling; a model that has become known as the Rosenzweig–MacArthur model. Both the Lotka–Volterra and Rosenzweig–MacArthur models have been used to explain the dynamics of natural populations of predators and prey. In the late 1980s, an alternative to the Lotka–Volterra predator–prey model (and its common-prey-dependent generalizations) emerged, the ratio-dependent or Arditi–Ginzburg model. The validity of prey- or ratio-dependent models has been much debated. The Lotka–Volterra equations have a long history of use in economic theory; their initial application is commonly credited to Richard Goodwin in 1965 or 1967. == Solutions to the equations == The equations have periodic solutions. These solutions do not have a simple expression in terms of the usual trigonometric functions, although they are quite tractable. If none of the non-negative parameters α, β, γ, δ vanishes, three can be absorbed into the normalization of variables to leave only one parameter: since the first equation is homogeneous in x, and the second one in y, the parameters β/α and δ/γ are absorbable in the normalizations of y and x respectively, and γ into the normalization of t, so that only α/γ remains arbitrary. It is the only parameter affecting the nature of the solutions. A linearization of the equations yields a solution similar to simple harmonic motion with the population of predators trailing that of prey by 90° in the cycle. === A simple example === Suppose there are two species of animals, a rabbit (prey) and a fox (predator).
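One way to explore such a system is to integrate the equations numerically. The sketch below is a minimal illustration, assuming Python with NumPy and SciPy available; the parameter values and initial densities are placeholders chosen in the spirit of the rabbit and fox example developed next, and the mapping of the stated growth and death rates onto α, β, γ, δ is an assumption of this sketch rather than part of the original text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for dx/dt = alpha*x - beta*x*y, dy/dt = -gamma*y + delta*x*y.
alpha, beta, gamma, delta = 1.1, 0.4, 0.4, 0.1
x0, y0 = 10.0, 10.0  # initial prey and predator densities (illustrative)

def lotka_volterra(t, z):
    x, y = z
    return [alpha * x - beta * x * y,
            -gamma * y + delta * x * y]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [x0, y0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 50.0, 501)
x, y = sol.sol(t)

# V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y) is conserved along orbits
# (as derived later in the article), so its drift is a check on the integration.
V = delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)
print("prey density range:    ", float(x.min()), "to", float(x.max()))
print("predator density range:", float(y.min()), "to", float(y.max()))
print("drift in conserved quantity V:", float(V.max() - V.min()))
```

With such placeholder values, plotting x against y for several starting points reproduces the closed orbits described later in the article.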
If the initial densities are 10 rabbits and 10 foxes per square kilometre, one can plot the progression of the two species over time, given parameters such that the growth and death rates of the rabbits are 1.1 and 0.4 while those of the foxes are 0.1 and 0.4, respectively. The choice of time interval is arbitrary. One may also plot solutions parametrically as orbits in phase space, without representing time, but with one axis representing the number of prey and the other axis representing the densities of predators for all times. This corresponds to eliminating time from the two differential equations above to produce a single differential equation d y d x = − y x δ x − γ β y − α {\displaystyle {\frac {dy}{dx}}=-{\frac {y}{x}}{\frac {\delta x-\gamma }{\beta y-\alpha }}} relating the variables x (prey) and y (predator). The solutions of this equation are closed curves. It is amenable to separation of variables: integrating β y − α y d y + δ x − γ x d x = 0 {\displaystyle {\frac {\beta y-\alpha }{y}}\,dy+{\frac {\delta x-\gamma }{x}}\,dx=0} yields the implicit relationship V = δ x − γ ln ⁡ ( x ) + β y − α ln ⁡ ( y ) , {\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y),} where V is a constant quantity depending on the initial conditions and conserved on each curve. An aside: These graphs illustrate a serious potential limitation in the application as a biological model: for this specific choice of parameters, in each cycle, the rabbit population is reduced to extremely low numbers, yet recovers (while the fox population remains sizeable at the lowest rabbit density). In real-life situations, however, chance fluctuations of the discrete numbers of individuals might cause the rabbits to actually go extinct, and, by consequence, the foxes as well. This modelling problem has been called the "atto-fox problem", an atto-fox being a notional 10−18 of a fox. A density of 10−18 foxes per square kilometre equates to an average of approximately 5×10−10 foxes on the surface of the earth, which in practical terms means that foxes are extinct. === Hamiltonian structure of the system === Since the quantity V ( x , y ) {\displaystyle V(x,y)} is conserved over time, it plays the role of a Hamiltonian function of the system. To see this, we can define the Poisson bracket as follows { f ( x , y ) , g ( x , y ) } = − x y ( ∂ f ∂ x ∂ g ∂ y − ∂ f ∂ y ∂ g ∂ x ) {\displaystyle \{f(x,y),g(x,y)\}=-xy\left({\frac {\partial f}{\partial x}}{\frac {\partial g}{\partial y}}-{\frac {\partial f}{\partial y}}{\frac {\partial g}{\partial x}}\right)} . Then Hamilton's equations read { x ˙ = { x , V } = α x − β x y , y ˙ = { y , V } = δ x y − γ y . {\displaystyle {\begin{cases}{\dot {x}}=\{x,V\}=\alpha x-\beta xy,\\{\dot {y}}=\{y,V\}=\delta xy-\gamma y.\end{cases}}} The variables x {\displaystyle x} and y {\displaystyle y} are not canonical, since { x , y } = − x y ≠ 1 {\displaystyle \{x,y\}=-xy\neq 1} . However, using the transformations p = ln ⁡ ( x ) {\displaystyle p=\ln(x)} and q = ln ⁡ ( y ) {\displaystyle q=\ln(y)} , we arrive at a canonical form of Hamilton's equations featuring the Hamiltonian H ( q , p ) = V ( x ( q , p ) , y ( q , p ) ) = δ e p − γ p + β e q − α q {\displaystyle H(q,p)=V(x(q,p),y(q,p))=\delta e^{p}-\gamma p+\beta e^{q}-\alpha q} : { q ˙ = ∂ H ∂ p = δ e p − γ , p ˙ = − ∂ H ∂ q = α − β e q .
{\displaystyle {\begin{cases}{\dot {q}}={\frac {\partial H}{\partial p}}=\delta e^{p}-\gamma ,\\{\dot {p}}=-{\frac {\partial H}{\partial q}}=\alpha -\beta e^{q}.\end{cases}}} The Poisson bracket for the canonical variables ( q , p ) {\displaystyle (q,p)} now takes the standard form { F ( q , p ) , G ( q , p ) } = ( ∂ F ∂ q ∂ G ∂ p − ∂ F ∂ p ∂ G ∂ q ) {\displaystyle \{F(q,p),G(q,p)\}=\left({\frac {\partial F}{\partial q}}{\frac {\partial G}{\partial p}}-{\frac {\partial F}{\partial p}}{\frac {\partial G}{\partial q}}\right)} . === Phase-space plot of a further example === Another example covers: α = 2/3, β = 4/3, γ = 1 = δ. Assume x, y quantify thousands each. Circles represent prey and predator initial conditions from x = y = 0.9 to 1.8, in steps of 0.1. The fixed point is at (1, 1/2). == Dynamics of the system == In the model system, the predators thrive when prey is plentiful but, ultimately, outstrip their food supply and decline. As the predator population is low, the prey population will increase again. These dynamics continue in a population cycle of growth and decline. === Population equilibrium === Population equilibrium occurs in the model when neither of the population levels is changing, i.e. when both of the derivatives are equal to 0: x ( α − β y ) = 0 , {\displaystyle x(\alpha -\beta y)=0,} − y ( γ − δ x ) = 0. {\displaystyle -y(\gamma -\delta x)=0.} The above system of equations yields two solutions: { y = 0 , x = 0 } {\displaystyle \{y=0,\ \ x=0\}} and { y = α β , x = γ δ } . {\displaystyle \left\{y={\frac {\alpha }{\beta }},\ \ x={\frac {\gamma }{\delta }}\right\}.} Hence, there are two equilibria. The first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a fixed point at which both populations sustain their current, non-zero numbers, and, in the simplified model, do so indefinitely. The levels of population at which this equilibrium is achieved depend on the chosen values of the parameters α, β, γ, and δ. === Stability of the fixed points === The stability of the fixed point at the origin can be determined by performing a linearization using partial derivatives. The Jacobian matrix of the predator–prey model is J ( x , y ) = [ α − β y − β x δ y δ x − γ ] . {\displaystyle J(x,y)={\begin{bmatrix}\alpha -\beta y&-\beta x\\\delta y&\delta x-\gamma \end{bmatrix}}.} and is known as the community matrix. ==== First fixed point (extinction) ==== When evaluated at the steady state of (0, 0), the Jacobian matrix J becomes J ( 0 , 0 ) = [ α 0 0 − γ ] . {\displaystyle J(0,0)={\begin{bmatrix}\alpha &0\\0&-\gamma \end{bmatrix}}.} The eigenvalues of this matrix are λ 1 = α , λ 2 = − γ . {\displaystyle \lambda _{1}=\alpha ,\quad \lambda _{2}=-\gamma .} In the model α and γ are always greater than zero, and as such the sign of the eigenvalues above will always differ. Hence the fixed point at the origin is a saddle point. The instability of this fixed point is of significance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, it follows that the extinction of both species is difficult in the model. (In fact, this could only occur if the prey were artificially completely eradicated, causing the predators to die of starvation. 
If the predators were eradicated, the prey population would grow without bound in this simple model.) The populations of prey and predator can get infinitesimally close to zero and still recover. ==== Second fixed point (oscillations) ==== Evaluating J at the second fixed point leads to J ( γ δ , α β ) = [ 0 − β γ δ α δ β 0 ] . {\displaystyle J\left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)={\begin{bmatrix}0&-{\frac {\beta \gamma }{\delta }}\\{\frac {\alpha \delta }{\beta }}&0\end{bmatrix}}.} The eigenvalues of this matrix are λ 1 = i α γ , λ 2 = − i α γ . {\displaystyle \lambda _{1}=i{\sqrt {\alpha \gamma }},\quad \lambda _{2}=-i{\sqrt {\alpha \gamma }}.} As the eigenvalues are both purely imaginary and conjugate to each other, this fixed point must either be a center for closed orbits in the local vicinity or an attractive or repulsive spiral. In conservative systems, there must be closed orbits in the local vicinity of fixed points that exist at the minima and maxima of the conserved quantity. The conserved quantity is derived above to be V = δ x − γ ln ⁡ ( x ) + β y − α ln ⁡ ( y ) {\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y)} on orbits. Thus orbits about the fixed point are closed and elliptic, so the solutions are periodic, oscillating on a small ellipse around the fixed point, with a frequency ω = λ 1 λ 2 = α γ {\displaystyle \omega ={\sqrt {\lambda _{1}\lambda _{2}}}={\sqrt {\alpha \gamma }}} and period T = 2 π / ( λ 1 λ 2 ) {\displaystyle T=2{\pi }/({\sqrt {\lambda _{1}\lambda _{2}}})} . As illustrated in the circulating oscillations in the figure above, the level curves are closed orbits surrounding the fixed point: the levels of the predator and prey populations cycle and oscillate without damping around the fixed point with frequency ω = α γ {\displaystyle \omega ={\sqrt {\alpha \gamma }}} . The value of the constant of motion V, or, equivalently, K = exp(−V), K = y α e − β y x γ e − δ x {\displaystyle K=y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}} , can be found for the closed orbits near the fixed point. Increasing K moves a closed orbit closer to the fixed point. The largest value of the constant K is obtained by solving the optimization problem y α e − β y x γ e − δ x = y α x γ e δ x + β y ⟶ max x , y > 0 . {\displaystyle y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}={\frac {y^{\alpha }x^{\gamma }}{e^{\delta x+\beta y}}}\longrightarrow \max _{x,y>0}.} The maximal value of K is thus attained at the stationary (fixed) point ( γ δ , α β ) {\displaystyle \left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)} and amounts to K ∗ = ( α β e ) α ( γ δ e ) γ , {\displaystyle K^{*}=\left({\frac {\alpha }{\beta e}}\right)^{\alpha }\left({\frac {\gamma }{\delta e}}\right)^{\gamma },} where e is Euler's number. == See also == == Notes == == Further reading == Hofbauer, Josef; Sigmund, Karl (1998). "Dynamical Systems and Lotka–Volterra Equations". Evolutionary Games and Population Dynamics. New York: Cambridge University Press. pp. 1–54. ISBN 0-521-62570-X. Kaplan, Daniel; Glass, Leon (1995). Understanding Nonlinear Dynamics. New York: Springer. ISBN 978-0-387-94440-1. Leigh, E. R. (1968). "The ecological role of Volterra's equations". Some Mathematical Problems in Biology. – a modern discussion using Hudson's Bay Company data on lynx and hares in Canada from 1847 to 1903. Murray, J. D. (2003). Mathematical Biology I: An Introduction. New York: Springer. ISBN 978-0-387-95223-9.' 
Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/ == External links == From the Wolfram Demonstrations Project — requires CDF player (free): Predator–Prey Equations Predator–Prey Model Predator–Prey Dynamics with Type-Two Functional Response Predator–Prey Ecosystem: A Real-Time Agent-Based Simulation Lotka-Volterra Algorithmic Simulation (Web simulation).
Wikipedia/Lotka-Volterra_equations
The Lotka–Volterra equations, also known as the Lotka–Volterra predator–prey model, are a pair of first-order nonlinear differential equations, frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations: d x d t = α x − β x y , d y d t = − γ y + δ x y , {\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=\alpha x-\beta xy,\\{\frac {dy}{dt}}&=-\gamma y+\delta xy,\end{aligned}}} where the variable x is the population density of prey (for example, the number of rabbits per square kilometre); the variable y is the population density of some predator (for example, the number of foxes per square kilometre); d y d t {\displaystyle {\tfrac {dy}{dt}}} and d x d t {\displaystyle {\tfrac {dx}{dt}}} represent the instantaneous growth rates of the two populations; t represents time; The prey's parameters, α and β, describe, respectively, the maximum prey per capita growth rate, and the effect of the presence of predators on the prey death rate. The predator's parameters, γ, δ, respectively describe the predator's per capita death rate, and the effect of the presence of prey on the predator's growth rate. All parameters are positive and real. The solution of the differential equations is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping. The Lotka–Volterra system of equations is an example of a Kolmogorov population model (not to be confused with the better known Kolmogorov equations), which is a more general framework that can model the dynamics of ecological systems with predator–prey interactions, competition, disease, and mutualism. == Biological interpretation and model assumptions == The prey are assumed to have an unlimited food supply and to reproduce exponentially, unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation on the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero, then there can be no predation. With these two terms the prey equation above can be interpreted as follows: the rate of change of the prey's population is given by its own growth rate minus the rate at which it is preyed upon. The term δxy represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). The term γy represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses that the rate of change of the predator's population depends upon the rate at which it consumes prey, minus its intrinsic death rate. The Lotka–Volterra predator-prey model makes a number of assumptions about the environment and biology of the predator and prey populations: The prey population finds ample food at all times. The food supply of the predator population depends entirely on the size of the prey population. The rate of change of population is proportional to its size. During the process, the environment does not change in favour of one species, and genetic adaptation is inconsequential. Predators have limitless appetite. 
Both populations can be described by a single variable. This amounts to assuming that the populations do not have a spatial or age distribution that contributes to the dynamics. == Biological relevance of the model == None of the assumptions above are likely to hold for natural populations. Nevertheless, the Lotka–Volterra model shows two important properties of predator and prey populations and these properties often extend to variants of the model in which these assumptions are relaxed: Firstly, the dynamics of predator and prey populations have a tendency to oscillate. Fluctuating numbers of predators and prey have been observed in natural populations, such as the lynx and snowshoe hare data of the Hudson's Bay Company and the moose and wolf populations in Isle Royale National Park. Secondly, the population equilibrium of this model has the property that the prey equilibrium density (given by x = γ / δ {\displaystyle x=\gamma /\delta } ) depends on the predator's parameters, and the predator equilibrium density (given by y = α / β {\displaystyle y=\alpha /\beta } ) on the prey's parameters. This has as a consequence that an increase in, for instance, the prey growth rate, α {\displaystyle \alpha } , leads to an increase in the predator equilibrium density, but not the prey equilibrium density. Making the environment better for the prey benefits the predator, not the prey (this is related to the paradox of the pesticides and to the paradox of enrichment). A demonstration of this phenomenon is provided by the increased percentage of predatory fish caught had increased during the years of World War I (1914–18), when prey growth rate was increased due to a reduced fishing effort. A further example is provided by the experimental iron fertilization of the ocean. In several experiments large amounts of iron salts were dissolved in the ocean. The expectation was that iron, which is a limiting nutrient for phytoplankton, would boost growth of phytoplankton and that it would sequester carbon dioxide from the atmosphere. The addition of iron typically leads to a short bloom in phytoplankton, which is quickly consumed by other organisms (such as small fish or zooplankton) and limits the effect of enrichment mainly to increased predator density, which in turn limits the carbon sequestration. This is as predicted by the equilibrium population densities of the Lotka–Volterra predator-prey model, and is a feature that carries over to more elaborate models in which the restrictive assumptions of the simple model are relaxed. == Applications to economics and marketing == The Lotka–Volterra model has additional applications to areas such as economics and marketing. It can be used to describe the dynamics in a market with several competitors, complementary platforms and products, a sharing economy, and more. There are situations in which one of the competitors drives the other competitors out of the market and other situations in which the market reaches an equilibrium where each firm stabilizes on its market share. It is also possible to describe situations in which there are cyclical changes in the industry or chaotic situations with no equilibrium and changes are frequent and unpredictable. In economics, the Phillips curve, which shows the statistical relationship between unemployment and the rate of change in nominal wages, has been connected by the Goodwin model. 
This model reinterprets the dynamics of the biological prey-predator interaction, as described by the Lotka-Volterra model, in economic terms. The way the two species interact in this model led Goodwin to draw parallels with the Marxian class conflict. The Kolmogorov generalization of the prey-predator model, along with further developments of the Goodwin model, has extended these ideas. == History == The Lotka–Volterra predator–prey model was initially proposed by Alfred J. Lotka in the theory of autocatalytic chemical reactions in 1910. This was effectively the logistic equation, originally derived by Pierre François Verhulst. In 1920 Lotka extended the model, via Andrey Kolmogorov, to "organic systems" using a plant species and a herbivorous animal species as an example and in 1925 he used the equations to analyse predator–prey interactions in his book on biomathematics. The same set of equations was published in 1926 by Vito Volterra, a mathematician and physicist, who had become interested in mathematical biology. Volterra's enquiry was inspired through his interactions with the marine biologist Umberto D'Ancona, who was courting his daughter at the time and later was to become his son-in-law. D'Ancona studied the fish catches in the Adriatic Sea and had noticed that the percentage of predatory fish caught had increased during the years of World War I (1914–18). This puzzled him, as the fishing effort had been very much reduced during the war years and, as prey fish the preferred catch, one would intuitively expect this to increase of prey fish percentage. Volterra developed his model to explain D'Ancona's observation and did this independently from Alfred Lotka. He did credit Lotka's earlier work in his publication, after which the model has become known as the "Lotka-Volterra model". The model was later extended to include density-dependent prey growth and a functional response of the form developed by C. S. Holling; a model that has become known as the Rosenzweig–MacArthur model. Both the Lotka–Volterra and Rosenzweig–MacArthur models have been used to explain the dynamics of natural populations of predators and prey. In the late 1980s, an alternative to the Lotka–Volterra predator–prey model (and its common-prey-dependent generalizations) emerged, the ratio dependent or Arditi–Ginzburg model. The validity of prey- or ratio-dependent models has been much debated. The Lotka–Volterra equations have a long history of use in economic theory; their initial application is commonly credited to Richard Goodwin in 1965 or 1967. == Solutions to the equations == The equations have periodic solutions. These solutions do not have a simple expression in terms of the usual trigonometric functions, although they are quite tractable. If none of the non-negative parameters α, β, γ, δ vanishes, three can be absorbed into the normalization of variables to leave only one parameter: since the first equation is homogeneous in x, and the second one in y, the parameters β/α and δ/γ are absorbable in the normalizations of y and x respectively, and γ into the normalization of t, so that only α/γ remains arbitrary. It is the only parameter affecting the nature of the solutions. A linearization of the equations yields a solution similar to simple harmonic motion with the population of predators trailing that of prey by 90° in the cycle. === A simple example === Suppose there are two species of animals, a rabbit (prey) and a fox (predator). 
If the initial densities are 10 rabbits and 10 foxes per square kilometre, one can plot the progression of the two species over time; given the parameters that the growth and death rates of rabbits are 1.1 and 0.4 while that of foxes are 0.1 and 0.4 respectively. The choice of time interval is arbitrary. One may also plot solutions parametrically as orbits in phase space, without representing time, but with one axis representing the number of prey and the other axis representing the densities of predators for all times. This corresponds to eliminating time from the two differential equations above to produce a single differential equation d y d x = − y x δ x − γ β y − α {\displaystyle {\frac {dy}{dx}}=-{\frac {y}{x}}{\frac {\delta x-\gamma }{\beta y-\alpha }}} relating the variables x (predator) and y (prey). The solutions of this equation are closed curves. It is amenable to separation of variables: integrating β y − α y d y + δ x − γ x d x = 0 {\displaystyle {\frac {\beta y-\alpha }{y}}\,dy+{\frac {\delta x-\gamma }{x}}\,dx=0} yields the implicit relationship V = δ x − γ ln ⁡ ( x ) + β y − α ln ⁡ ( y ) , {\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y),} where V is a constant quantity depending on the initial conditions and conserved on each curve. An aside: These graphs illustrate a serious potential limitation in the application as a biological model: for this specific choice of parameters, in each cycle, the rabbit population is reduced to extremely low numbers, yet recovers (while the fox population remains sizeable at the lowest rabbit density). In real-life situations, however, chance fluctuations of the discrete numbers of individuals might cause the rabbits to actually go extinct, and, by consequence, the foxes as well. This modelling problem has been called the "atto-fox problem", an atto-fox being a notional 10−18 of a fox. A density of 10−18 foxes per square kilometre equates to an average of approximately 5×10−10 foxes on the surface of the earth, which in practical terms means that foxes are extinct. === Hamiltonian structure of the system === Since the quantity V ( x , y ) {\displaystyle V(x,y)} is conserved over time, it plays role of a Hamiltonian function of the system. To see this we can define Poisson bracket as follows { f ( x , y ) , g ( x , y ) } = − x y ( ∂ f ∂ x ∂ g ∂ y − ∂ f ∂ y ∂ g ∂ x ) {\displaystyle \{f(x,y),g(x,y)\}=-xy\left({\frac {\partial f}{\partial x}}{\frac {\partial g}{\partial y}}-{\frac {\partial f}{\partial y}}{\frac {\partial g}{\partial x}}\right)} . Then Hamilton's equations read { x ˙ = { x , V } = α x − β x y , y ˙ = { y , V } = δ x y − γ y . {\displaystyle {\begin{cases}{\dot {x}}=\{x,V\}=\alpha x-\beta xy,\\{\dot {y}}=\{y,V\}=\delta xy-\gamma y.\end{cases}}} The variables x {\displaystyle x} and y {\displaystyle y} are not canonical, since { x , y } = − x y ≠ 1 {\displaystyle \{x,y\}=-xy\neq 1} . However, using transformations p = ln ⁡ ( x ) {\displaystyle p=\ln(x)} and q = ln ⁡ ( y ) {\displaystyle q=\ln(y)} we came up to a canonical form of the Hamilton's equations featuring the Hamiltonian H ( q , p ) = V ( x ( q , p ) , y ( q , p ) ) = δ e p − γ p + β e q − α q {\displaystyle H(q,p)=V(x(q,p),y(q,p))=\delta e^{p}-\gamma p+\beta e^{q}-\alpha q} : { q ˙ = ∂ H ∂ p = δ e p − γ , p ˙ = − ∂ H ∂ q = α − β e q . 
{\displaystyle {\begin{cases}{\dot {q}}={\frac {\partial H}{\partial p}}=\delta e^{p}-\gamma ,\\{\dot {p}}=-{\frac {\partial H}{\partial q}}=\alpha -\beta e^{q}.\end{cases}}} The Poisson bracket for the canonical variables ( q , p ) {\displaystyle (q,p)} now takes the standard form { F ( q , p ) , G ( q , p ) } = ( ∂ F ∂ q ∂ G ∂ p − ∂ F ∂ p ∂ G ∂ q ) {\displaystyle \{F(q,p),G(q,p)\}=\left({\frac {\partial F}{\partial q}}{\frac {\partial G}{\partial p}}-{\frac {\partial F}{\partial p}}{\frac {\partial G}{\partial q}}\right)} . === Phase-space plot of a further example === Another example covers: α = 2/3, β = 4/3, γ = 1 = δ. Assume x, y quantify thousands each. Circles represent prey and predator initial conditions from x = y = 0.9 to 1.8, in steps of 0.1. The fixed point is at (1, 1/2). == Dynamics of the system == In the model system, the predators thrive when prey is plentiful but, ultimately, outstrip their food supply and decline. As the predator population is low, the prey population will increase again. These dynamics continue in a population cycle of growth and decline. === Population equilibrium === Population equilibrium occurs in the model when neither of the population levels is changing, i.e. when both of the derivatives are equal to 0: x ( α − β y ) = 0 , {\displaystyle x(\alpha -\beta y)=0,} − y ( γ − δ x ) = 0. {\displaystyle -y(\gamma -\delta x)=0.} The above system of equations yields two solutions: { y = 0 , x = 0 } {\displaystyle \{y=0,\ \ x=0\}} and { y = α β , x = γ δ } . {\displaystyle \left\{y={\frac {\alpha }{\beta }},\ \ x={\frac {\gamma }{\delta }}\right\}.} Hence, there are two equilibria. The first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a fixed point at which both populations sustain their current, non-zero numbers, and, in the simplified model, do so indefinitely. The levels of population at which this equilibrium is achieved depend on the chosen values of the parameters α, β, γ, and δ. === Stability of the fixed points === The stability of the fixed point at the origin can be determined by performing a linearization using partial derivatives. The Jacobian matrix of the predator–prey model is J ( x , y ) = [ α − β y − β x δ y δ x − γ ] . {\displaystyle J(x,y)={\begin{bmatrix}\alpha -\beta y&-\beta x\\\delta y&\delta x-\gamma \end{bmatrix}}.} and is known as the community matrix. ==== First fixed point (extinction) ==== When evaluated at the steady state of (0, 0), the Jacobian matrix J becomes J ( 0 , 0 ) = [ α 0 0 − γ ] . {\displaystyle J(0,0)={\begin{bmatrix}\alpha &0\\0&-\gamma \end{bmatrix}}.} The eigenvalues of this matrix are λ 1 = α , λ 2 = − γ . {\displaystyle \lambda _{1}=\alpha ,\quad \lambda _{2}=-\gamma .} In the model α and γ are always greater than zero, and as such the sign of the eigenvalues above will always differ. Hence the fixed point at the origin is a saddle point. The instability of this fixed point is of significance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, it follows that the extinction of both species is difficult in the model. (In fact, this could only occur if the prey were artificially completely eradicated, causing the predators to die of starvation. 
If the predators were eradicated, the prey population would grow without bound in this simple model.) The populations of prey and predator can get infinitesimally close to zero and still recover. ==== Second fixed point (oscillations) ==== Evaluating J at the second fixed point leads to J ( γ δ , α β ) = [ 0 − β γ δ α δ β 0 ] . {\displaystyle J\left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)={\begin{bmatrix}0&-{\frac {\beta \gamma }{\delta }}\\{\frac {\alpha \delta }{\beta }}&0\end{bmatrix}}.} The eigenvalues of this matrix are λ 1 = i α γ , λ 2 = − i α γ . {\displaystyle \lambda _{1}=i{\sqrt {\alpha \gamma }},\quad \lambda _{2}=-i{\sqrt {\alpha \gamma }}.} As the eigenvalues are both purely imaginary and conjugate to each other, this fixed point must either be a center for closed orbits in the local vicinity or an attractive or repulsive spiral. In conservative systems, there must be closed orbits in the local vicinity of fixed points that exist at the minima and maxima of the conserved quantity. The conserved quantity is derived above to be V = δ x − γ ln ⁡ ( x ) + β y − α ln ⁡ ( y ) {\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y)} on orbits. Thus orbits about the fixed point are closed and elliptic, so the solutions are periodic, oscillating on a small ellipse around the fixed point, with a frequency ω = λ 1 λ 2 = α γ {\displaystyle \omega ={\sqrt {\lambda _{1}\lambda _{2}}}={\sqrt {\alpha \gamma }}} and period T = 2 π / ( λ 1 λ 2 ) {\displaystyle T=2{\pi }/({\sqrt {\lambda _{1}\lambda _{2}}})} . As illustrated in the circulating oscillations in the figure above, the level curves are closed orbits surrounding the fixed point: the levels of the predator and prey populations cycle and oscillate without damping around the fixed point with frequency ω = α γ {\displaystyle \omega ={\sqrt {\alpha \gamma }}} . The value of the constant of motion V, or, equivalently, K = exp(−V), K = y α e − β y x γ e − δ x {\displaystyle K=y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}} , can be found for the closed orbits near the fixed point. Increasing K moves a closed orbit closer to the fixed point. The largest value of the constant K is obtained by solving the optimization problem y α e − β y x γ e − δ x = y α x γ e δ x + β y ⟶ max x , y > 0 . {\displaystyle y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}={\frac {y^{\alpha }x^{\gamma }}{e^{\delta x+\beta y}}}\longrightarrow \max _{x,y>0}.} The maximal value of K is thus attained at the stationary (fixed) point ( γ δ , α β ) {\displaystyle \left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)} and amounts to K ∗ = ( α β e ) α ( γ δ e ) γ , {\displaystyle K^{*}=\left({\frac {\alpha }{\beta e}}\right)^{\alpha }\left({\frac {\gamma }{\delta e}}\right)^{\gamma },} where e is Euler's number. == See also == == Notes == == Further reading == Hofbauer, Josef; Sigmund, Karl (1998). "Dynamical Systems and Lotka–Volterra Equations". Evolutionary Games and Population Dynamics. New York: Cambridge University Press. pp. 1–54. ISBN 0-521-62570-X. Kaplan, Daniel; Glass, Leon (1995). Understanding Nonlinear Dynamics. New York: Springer. ISBN 978-0-387-94440-1. Leigh, E. R. (1968). "The ecological role of Volterra's equations". Some Mathematical Problems in Biology. – a modern discussion using Hudson's Bay Company data on lynx and hares in Canada from 1847 to 1903. Murray, J. D. (2003). Mathematical Biology I: An Introduction. New York: Springer. ISBN 978-0-387-95223-9.' 
Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/ == External links == From the Wolfram Demonstrations Project — requires CDF player (free): Predator–Prey Equations Predator–Prey Model Predator–Prey Dynamics with Type-Two Functional Response Predator–Prey Ecosystem: A Real-Time Agent-Based Simulation Lotka-Volterra Algorithmic Simulation (Web simulation).
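The fixed-point analysis above can be checked numerically. The following Python sketch is an editorial illustration (not part of the original article): it integrates the Lotka–Volterra equations for the example parameters α = 2/3, β = 4/3, γ = δ = 1, verifies that the conserved quantity V = δx − γ ln x + βy − α ln y drifts only negligibly along an orbit, and confirms that the community matrix at the fixed point (1, 1/2) has a purely imaginary eigenvalue pair of magnitude √(αγ).

```python
# Numerical check of the fixed-point analysis above (illustrative sketch,
# not from the article); requires numpy and scipy.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 2/3, 4/3, 1.0, 1.0

def lotka_volterra(t, z):
    x, y = z
    return [alpha * x - beta * x * y, delta * x * y - gamma * y]

def V(x, y):
    # Conserved quantity V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y)
    return delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)

sol = solve_ivp(lotka_volterra, (0, 50), [0.9, 0.9],
                t_eval=np.linspace(0, 50, 2001), rtol=1e-9, atol=1e-9)
v = V(sol.y[0], sol.y[1])
print("relative drift of V along the orbit:", (v.max() - v.min()) / v.mean())

# Community matrix at the interior fixed point (gamma/delta, alpha/beta) = (1, 1/2)
J = np.array([[alpha - beta * 0.5, -beta * 1.0],
              [delta * 0.5,        delta * 1.0 - gamma]])
print("eigenvalues at (1, 1/2):", np.linalg.eigvals(J))        # +/- i*sqrt(alpha*gamma)
print("linearised angular frequency sqrt(alpha*gamma):", np.sqrt(alpha * gamma))
```

A near-zero drift in V and a conjugate imaginary eigenvalue pair are exactly what the closed-orbit argument above predicts for the neutral center.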
Wikipedia/Lotka–Volterra_equation
The Genetical Theory of Natural Selection is a book by Ronald Fisher which combines Mendelian genetics with Charles Darwin's theory of natural selection, with Fisher being the first to argue that "Mendelism therefore validates Darwinism" and stating with regard to mutations that "The vast majority of large mutations are deleterious; small mutations are both far more frequent and more likely to be useful", thus refuting orthogenesis. First published in 1930 by The Clarendon Press, it is one of the most important books of the modern synthesis, and helped define population genetics. It had been described by J. F. Crow as the "deepest book on evolution since Darwin". It is commonly cited in biology books, outlining many concepts that are still considered important such as Fisherian runaway, Fisher's principle, reproductive value, Fisher's fundamental theorem of natural selection, Fisher's geometric model, the sexy son hypothesis, mimicry and the evolution of dominance. It was dictated to his wife in the evenings as he worked at Rothamsted Research in the day. == Contents == In the preface, Fisher considers some general points, including that there must be an understanding of natural selection distinct from that of evolution, and that the then-recent advances in the field of genetics (see history of genetics) now allowed this. In the first chapter, Fisher considers the nature of inheritance, rejecting blending inheritance, because it would eliminate genetic variance, in favour of particulate inheritance. The second chapter introduces Fisher's fundamental theorem of natural selection. The third considers the evolution of dominance, which Fisher believed was strongly influenced by modifiers. Other chapters discuss parental investment, Fisher's geometric model, concerning how spontaneous mutations affect biological fitness, Fisher's principle which explains why the sex ratio between males and females is almost always 1:1, reproductive value, examining the demography of having girl children. Using his knowledge of statistics, the Fisherian runaway, which explores how sexual selection can lead to a positive feedback runaway loop, producing features such as the peacock's plumage. He also wrote about the evolution of dominance, which explores genetic dominance. === Eugenics === The last five chapters (8-12) include Fisher's concern about dysgenics and proposals for eugenics. Fisher attributed the fall of civilizations to the fertility of their upper classes being diminished, and used British 1911 census data to show an inverse relationship between fertility and social class, partly due, he claimed, to the lower financial costs and hence increasing social status of families with fewer children. He proposed the abolition of extra allowances to large families, with the allowances proportional to the earnings of the father. He served in several official committees to promote eugenics. In 1934, he resigned from the Eugenics Society over a dispute about increasing the power of scientists within the movement. == Editions == A second, slightly revised edition was republished in 1958. In 1999, a third variorum edition (ISBN 0-19-850440-3), with the original 1930 text, annotated with the 1958 alterations, notes and alterations accidentally omitted from the second edition was published, edited by professor John Henry Bennett of the University of Adelaide. 
== Dedication == The book is dedicated to Major Leonard Darwin, Fisher's friend, correspondent and son of Charles Darwin, "In gratitude for the encouragement, given to the author, during the last fifteen years, by discussing many of the problems dealt with in this book." == Reviews == The book was reviewed by Charles Galton Darwin, who sent Fisher his copy of the book, with notes in the margin, starting a correspondence which lasted several years. The book also had a major influence on W. D. Hamilton's theories on the genetic basis of kin selection. John Henry Bennett gave an account of the writing and reception of the book. Sewall Wright, who had many disagreements with Fisher, reviewed the book and wrote that it was "certain to take rank as one of the major contributions to the theory of evolution." J. B. S. Haldane described it as "brilliant." Reginald Punnett, however, reviewed it negatively. The book was largely overlooked for 40 years, and in particular Fisher's fundamental theorem of natural selection was misunderstood. The work had a great effect on W. D. Hamilton, who discovered it as an undergraduate at the University of Cambridge and later recalled its influence in remarks reproduced on the rear cover of the 1999 variorum edition. The publication of the variorum edition in 1999 led to renewed interest in the work and reviews by Laurence Cook, Brian Charlesworth, James F. Crow, and A. W. F. Edwards. == References == == Bibliography == == External links == The Genetical Theory Of Natural Selection at the Internet Archive The Genetical Theory Of Natural Selection
Wikipedia/The_Genetic_Theory_of_Natural_Selection
The population dynamics of pest insects is a subject of interest to farmers, agricultural economists, ecologists, and those concerned with animal welfare. == Factors affecting populations == Density-independent: Affect a population equally regardless of its density. Examples: A winter freeze may kill a constant fraction of potato leafhoppers in a peanut field regardless of the total number of leafhoppers. Japanese beetle larvae survive well with lots of summer rain. Temperature, humidity, fires, storms, dissolved oxygen for aquatic species. Density-dependent: Affect a population more or less as the population is bigger. Examples: A bigger population may be more vulnerable to diseases and parasites. A bigger population may have more intraspecific competition, while a smaller population may have more interspecific competition. Emigration from the population may increase as it becomes more crowded. == Life tables == A life table shows how and how many insects die as they mature from eggs to adults. It helps with pest control by identifying at what life stage pest insects are most vulnerable and how mortality can be increased. A cohort life table tracks organisms through the stages of life, while a static life table shows the distribution of life stages among the population at a single point in time. Following is an example of a cohort life table based on field data from Vargas and Nishida (1980). The overall mortality rate was 94.8%, but this is probably an underestimate because the study collected the pupae in cups, and these may have protected them from birds, mice, harsh weather, and so on. === Life expectancy === From a life table we can calculate life expectancy as follows. Assume the stages x {\displaystyle x} are uniformly spaced. The average proportion L x {\displaystyle L_{x}} of organisms alive at stage x {\displaystyle x} between beginning and end is L x = l x + l x + 1 2 {\displaystyle L_{x}={\frac {l_{x}+l_{x+1}}{2}}} . The total number T x {\displaystyle T_{x}} of future stages to be lived by individuals at age x {\displaystyle x} and older is T x = L x + L x + 1 + L x + 2 + . . . {\displaystyle T_{x}=L_{x}+L_{x+1}+L_{x+2}+...} . Then the life expectancy e x {\displaystyle e_{x}} at age x {\displaystyle x} is e x = T x l x {\displaystyle e_{x}={\frac {T_{x}}{l_{x}}}} . We could have done the same computation with raw numbers of individuals rather than proportions. === Basic reproductive rate === If we further know the number F x {\displaystyle F_{x}} of eggs produced (fecundity) at age x {\displaystyle x} , we can calculate the eggs produced per surviving individual m x {\displaystyle m_{x}} as m x = F x a x {\displaystyle m_{x}={\frac {F_{x}}{a_{x}}}} , where a x {\displaystyle a_{x}} is the number of individuals alive at that stage. The basic reproductive rate R 0 {\displaystyle R_{0}} , also known as the replacement rate of a population, is the ratio of daughters to mothers. If it's greater than 1, the population is increasing. In a stable population the replacement rate should hover close to 1. We can calculate it from life-table data as R 0 = ∑ x l x m x {\displaystyle R_{0}=\sum _{x}l_{x}m_{x}} . This is because each l x m x {\displaystyle l_{x}m_{x}} product computes (first-generation parents at age x {\displaystyle x} )/(first-generation eggs) times (second-generation eggs produced by age- x {\displaystyle x} parents)/(first-generation parents at age x {\displaystyle x} ). 
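The life-table calculations above translate directly into code. The sketch below is an added illustration: it applies the L_x, T_x, e_x and R_0 formulas to made-up survivorship and fecundity figures, since the Vargas and Nishida field data are not reproduced in this text.

```python
# Life-expectancy and basic-reproductive-rate calculations from the formulas
# above, applied to hypothetical survivorship (l_x) and per-capita fecundity
# (m_x) values; the real field data are not reproduced here.
l = [1.00, 0.60, 0.40, 0.25, 0.10]        # proportion alive at the start of stage x
m = [0.0, 0.0, 0.0, 0.0, 25.0]            # eggs per surviving individual at stage x

# L_x = (l_x + l_{x+1}) / 2, taking l beyond the last stage as 0
L = [(l[x] + (l[x + 1] if x + 1 < len(l) else 0.0)) / 2 for x in range(len(l))]
# T_x = L_x + L_{x+1} + ... and e_x = T_x / l_x
T = [sum(L[x:]) for x in range(len(L))]
e = [T[x] / l[x] for x in range(len(l))]

# Basic reproductive rate R_0 = sum over x of l_x * m_x
R0 = sum(lx * mx for lx, mx in zip(l, m))

for x in range(len(l)):
    print(f"stage {x}: l_x = {l[x]:.2f}, e_x = {e[x]:.2f} stages")
print("basic reproductive rate R0 =", R0)   # > 1 means the population is growing
```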
If N 0 {\displaystyle N_{0}} is the initial population size and N T {\displaystyle N_{T}} is the population size after a generation, then R 0 = N T N 0 {\displaystyle R_{0}={\frac {N_{T}}{N_{0}}}} . === Generation time === The cohort generation time T c {\displaystyle T_{c}} is the average duration between when a parent is born and when its child is born. If x {\displaystyle x} is measured in years, then T c = ∑ x x l x m x ∑ x l x m x {\displaystyle T_{c}={\frac {\sum _{x}xl_{x}m_{x}}{\sum _{x}l_{x}m_{x}}}} . === Intrinsic rate of increase === If R 0 {\displaystyle R_{0}} remains relatively stable over generations, we can use it to approximate the intrinsic rate of increase r {\displaystyle r} for the population: r ≈ ln ⁡ R 0 T c {\displaystyle r\approx {\frac {\ln R_{0}}{T_{c}}}} . This is because ln ⁡ R 0 = ln ⁡ N T N 0 = ln ⁡ N 0 + Δ N N 0 = ln ⁡ ( 1 + Δ N N 0 ) ≈ Δ N N 0 {\displaystyle \ln R_{0}=\ln {\frac {N_{T}}{N_{0}}}=\ln {\frac {N_{0}+\Delta N}{N_{0}}}=\ln \left(1+{\frac {\Delta N}{N_{0}}}\right)\approx {\frac {\Delta N}{N_{0}}}} , where the approximation follows from the Mercator series. T c {\displaystyle T_{c}} is a change in time, Δ t {\displaystyle \Delta t} . Then we have r ≈ Δ N Δ t 1 N 0 {\displaystyle r\approx {\frac {\Delta N}{\Delta t}}{\frac {1}{N_{0}}}} , which is the discrete definition of the intrinsic rate of increase. == Growth models == In general, population growth roughly follows one of these trends: Logistic growth leveling out at some carrying capacity. Overshoot ("boom" and "bust" cycles). Oscillation at or below the carrying capacity. Insect pest growth rates are heavily influenced by temperature and rainfall, among other variables. Sometimes pest populations grow rapidly and become outbreaks. === Degree-day calculations === Because insects are ectothermic, "temperature is probably the single most important environmental factor influencing insect behavior, distribution, development, survival, and reproduction." As a result, growing degree-days are commonly used to estimate insect development, often relative to a biofix point, i.e., a biological milestone, such as when the insect comes out of pupation in spring. Degree-days can help with pest control. Yamamura and Kiritani approximated the development rate r {\displaystyle r} as r = { T − T 0 K , T ≥ T 0 0 , T < T 0 {\displaystyle r={\begin{cases}{\frac {T-T_{0}}{K}},&T\geq T_{0}\\0,&T<T_{0}\end{cases}}} , with T {\displaystyle T} being the current temperature, T 0 {\displaystyle T_{0}} being the base temperature for the species, and K {\displaystyle K} being a thermal constant for the species. A generation is defined as the duration required for the time-integral of r {\displaystyle r} to equal 1. Using linear approximations, the authors estimate that if the temperature increased by Δ T {\displaystyle \Delta T} (for instance, maybe Δ T {\displaystyle \Delta T} = 2 °C for climate change by 2100 relative to 1990), then the increase in number of generations per year Δ N {\displaystyle \Delta N} would be Δ N ≈ Δ T K ( 206.7 + 12.46 ( m − T 0 ) ) {\displaystyle \Delta N\approx {\frac {\Delta T}{K}}\left(206.7+12.46(m-T_{0})\right)} , where m {\displaystyle m} is the current annual mean temperature of a location. In particular, the authors suggest that 2 °C warming might lead to, for example, about one extra generation for Lepidoptera, Hemiptera, two extra generations for Diptera, almost three generations for Hymenoptera, and almost five generations for Aphidoidea. 
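A small sketch makes the degree-day arithmetic just described concrete. In the Python fragment below, the development-rate and ΔN formulas follow the text; ΔT = 2 °C matches the example above, while the values of K, T0 and m are placeholders for a hypothetical species and site rather than figures from the study.

```python
# Degree-day development rate and the Yamamura-Kiritani estimate of extra
# generations per year under warming.  The formulas follow the text; K, T0
# and m below are placeholder values for a hypothetical species and location.
def development_rate(T, T0, K):
    """r = (T - T0) / K above the base temperature T0, else 0."""
    return (T - T0) / K if T >= T0 else 0.0

def extra_generations(delta_T, K, T0, m):
    """Delta N ~ (Delta T / K) * (206.7 + 12.46 * (m - T0))."""
    return (delta_T / K) * (206.7 + 12.46 * (m - T0))

K, T0, m = 400.0, 10.0, 15.0     # hypothetical thermal constant, base temp, annual mean temp
print("development rate at 25 C:", development_rate(25.0, T0, K))
print("extra generations for 2 C of warming:", extra_generations(2.0, K, T0, m))  # about 1.3
```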
These changes in voltinism might happen through biological dispersal and/or natural selection; the authors point to prior examples of each in Japan. === Geometric Brownian motion === Sunding and Zivin model population growth of insect pests as a geometric Brownian motion (GBM) process. The model is stochastic in order to account for the variability of growth rates as a function of external conditions like weather. In particular, if X {\displaystyle X} is the current insect population, α {\displaystyle \alpha } is the intrinsic growth rate, and σ {\displaystyle \sigma } is a variance coefficient, the authors assume that d X = α X d t + σ X d z {\displaystyle dX=\alpha Xdt+\sigma Xdz} , where d t {\displaystyle dt} is an increment of time, and d z = ξ t d t {\displaystyle dz=\xi _{t}{\sqrt {dt}}} is an increment of a Wiener process, with ξ t {\displaystyle \xi _{t}} being standard-normal distributed. In this model, short-run population changes are dominated by the stochastic term, σ X d z {\displaystyle \sigma Xdz} , but long-run changes are dominated by the trend term, α X d t {\displaystyle \alpha Xdt} . After solving this equation, we find that the population at time t {\displaystyle t} , X t {\displaystyle X_{t}} , is log-normally distributed: X t ∼ L o g - N ⁡ ( X 0 e α t , X 0 2 e 2 α t ( e σ 2 t − 1 ) ) {\displaystyle X_{t}\sim \operatorname {Log-{\mathcal {N}}} \left(X_{0}e^{\alpha t},X_{0}^{2}e^{2\alpha t}(e^{\sigma ^{2}t}-1)\right)} , where X 0 {\displaystyle X_{0}} is the initial population. As a case study, the authors consider mevinphos application on leaf lettuce in Salinas Valley, California, for the purpose of controlling aphids. Previous research by other authors found that daily percentage growth of the green peach aphid could be modeled as an increasing linear function of average daily temperature. Combined with the fact that temperature is normally distributed, this agreed with the GBM equations described above, and the authors derived that α = 0.1199 {\displaystyle \alpha =0.1199} and σ = 0.1152 {\displaystyle \sigma =0.1152} . Since the expected population based on the log-normal distribution grows with e α t {\displaystyle e^{\alpha t}} , this implies an aphid doubling time of ln ⁡ 2 / 0.1199 = 5.8 {\displaystyle \ln 2/0.1199=5.8} days. Note that other literature has found aphid generation times to lie roughly in the range of 4.7 to 5.8 days. === Repeated outbreak cycles === A 2013 study analyzed population dynamics of the smaller tea tortrix, a moth pest that infests tea plantations, especially in Japan. The data consisted of counts of adult moths captured with light traps every 5–6 days at the Kagoshima tea station in Japan from 1961 to 2012. Peak populations were 100 to 4000 times higher than at their lowest levels. A wavelet decomposition showed a clear, relatively stationary annual cycle in the populations, as well as non-stationary punctuations between late April and early October, representing 4–6 outbreaks per year of this multivoltine species. The cycles result from population overshoot. These moths have stage-structured development life cycles, and a traditional hypothesis suggests that these cycles should be most synchronized across the population in the spring due to the preceding effects of cold winter months, and as the summer progresses, the life stages become more randomly assorted. This is often what's observed in North America. 
However, this study observed instead that populations were more correlated as the season progressed, perhaps because temperature fluctuations enforced synchrony. The authors found that when temperatures first increased above about 15 °C (59 °F) in the spring, the population dynamics crossed a Hopf bifurcation from stability to repeated outbreak cycles, until stabilization again in the fall. Above the Hopf threshold, population-cycle amplitude increased roughly linearly with temperature. This study affirmed the classic concept of temperature as a "pacemaker of all vital rates." Understanding life-cycle dynamics is relevant for pest control because some insecticides only work at one or two life stages of the insect. == Effects of pest control == B. Chaney, a farm advisor in Monterey County, CA, estimates that mevinphos would kill practically all aphids in a field upon application. Wyatt, citing data from various Arthropod Management Tests, estimates that the percent of lettuce aphids killed is 76.1% for endosulfan and 67.0% for imidacloprid. Insecticides used on gypsy moths in the 1970s had roughly a 90% kill rate. == Impact of climate change == Temperature change is argued to be the biggest direct abiotic impact of climate change on herbivorous insects. In temperate regions, global warming will affect overwintering, and warmer temperatures will extend the summer season, allowing for more growth and reproduction. A 2013 study estimated that on average, crop pests and pathogens have moved to higher latitudes at a rate of about 2.7 km/year since 1960. This is roughly in line with estimates of the rate of climate change in general. == See also == Insect ecology Population ecology Pesticide resistance Insecticide Pest control == Notes ==
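The geometric Brownian motion model described earlier is straightforward to simulate. The sketch below is an added illustration using the aphid parameters quoted in the text, α = 0.1199 and σ = 0.1152 per day; the initial population, time step and horizon are arbitrary choices.

```python
# Euler-Maruyama simulation of dX = alpha*X*dt + sigma*X*dz for the aphid
# parameters quoted above; X0, dt and the horizon are arbitrary choices.
import numpy as np

alpha, sigma = 0.1199, 0.1152          # per-day drift and volatility from the text
X0, dt, days, n_paths = 1000.0, 0.1, 30, 10_000

rng = np.random.default_rng(seed=1)
X = np.full(n_paths, X0)
for _ in range(int(days / dt)):
    dz = rng.standard_normal(n_paths) * np.sqrt(dt)   # increment of the Wiener process
    X += alpha * X * dt + sigma * X * dz

print("simulated mean after 30 days:", X.mean())
print("analytic mean X0*exp(alpha*t):", X0 * np.exp(alpha * days))
print("implied doubling time (days):", np.log(2) / alpha)   # roughly 5.8
```

The simulated mean should track X0·e^{αt}, and the implied doubling time of about 5.8 days agrees with the generation-time range cited above.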
Wikipedia/Pest_insect_population_dynamics
Viable system theory (VST) concerns cybernetic processes in relation to the development/evolution of dynamical systems: it can be used to explain living systems, which are considered to be complex and adaptive, can learn, and are capable of maintaining an autonomous existence, at least within the confines of their constraints. These attributes involve the maintenance of internal stability through adaptation to changing environments. One can distinguish between two strands of such theory: formal systems and principally non-formal systems. Formal viable system theory is normally referred to as viability theory, and provides a mathematical approach to explore the dynamics of complex systems set within the context of control theory. In contrast, principally non-formal viable system theory is concerned with descriptive approaches to the study of viability through the processes of control and communication, though these theories may have mathematical descriptions associated with them. == History == The concept of viability arose with Stafford Beer in the 1950s through his paradigm of management systems. Its formal relative, viability theory, began its life in 1976 with the mathematical interpretation of a book by Jacques Monod published in 1971 and entitled Chance and Necessity, which concerned processes of evolution. Viability theory is concerned with the dynamic adaptation of uncertain evolutionary systems to environments defined by constraints, the values of which determine the viability of the system. Both formal and non-formal approaches ultimately concern the structure and evolutionary dynamics of viability in complex systems. An alternative non-formal paradigm arose in the late 1980s through the work of Eric Schwarz, which increases the dimensionality of Beer's paradigm. == Beer viable system theory == The viable system theory of Beer is most well known through his viable system model and is concerned with viable organisations capable of evolving. Through both internal and external analysis it is possible to identify the relationships and modes of behaviour that constitute viability. The model is underpinned by the realisation that organisations are complex, and recognising the existence of complexity is inherent to processes of analysis. Beer's management systems paradigm is underpinned by a set of propositions, sometimes referred to as cybernetic laws. Sitting within this is his viable systems model (VSM), and one of its laws is a principle of recursion, so that just as the model can be applied to divisions in a department, it can also be applied to the departments themselves. This is permitted through Beer's viability law, which states that every viable system contains and is contained in a viable system. The cybernetic laws are applied to all types of human activity systems like organisations and institutions. Now, paradigms are concerned with not only theory but also modes of behaviour within inquiry. One significant part of Beer's paradigm is the development of his Viable Systems Model (VSM), which addresses problem situations in terms of control and communication processes, seeking to ensure system viability within the object of attention. Another is Beer's Syntegrity protocol, which centres on the means by which effective communications in complex situations can occur. VSM has been used successfully to diagnose organisational pathologies (conditions of social ill-health).
The model involves not only an operative system that has structure (e.g., divisions in an organisation or departments in a division) from which behaviour emanates, directed towards an environment, but also a meta-system, which some have called the observer of the system. The system and meta-system are ontologically different, so that, for instance, where in a production company the system is concerned with production processes and their immediate management, the meta-system is more concerned with the management of the production system as a whole. The connection between the system and meta-system is explained through Beer's Cybernetic map. Beer considered that viable social systems should be seen as living systems. Humberto Maturana used the term autopoiesis (self-production) to explain biological living systems, but was reluctant to accept that social systems were living. == Schwarz viable system theory == The viable system theory of Schwarz is more directed towards the explicit examination of issues of complexity than is that of Beer. The theory begins with the idea of dissipative systems. While all isolated systems conserve energy, in non-isolated systems one can distinguish between conservative systems (in which the total kinetic and potential energy is conserved) and dissipative systems (in which part of the energy is changed in form and lost). If dissipative systems are far from equilibrium they "try" to recover equilibrium so quickly that they form dissipative structures to accelerate the process. Dissipative systems can create structured spots where entropy locally decreases and so negentropy locally increases to generate order and organisation. Dissipative systems involve far-from-equilibrium processes that are inherently dynamically unstable, though they survive through the creation of order that is beyond the thresholds of instability. Schwarz explicitly defined the living system in terms of its metastructure involving a system, a metasystem and a meta-meta-system, this latter being an essential attribute. As with Beer, the system is concerned with operative attributes. Schwarz's meta-system is essentially concerned with relationships, and the meta-meta-system is concerned with all forms of knowledge and its acquisition. Thus, where in Beer's theory learning processes can only be discussed in terms of implicit processes, in Schwarz's theory they can be discussed in explicit terms. Schwarz's living system model is a summary of much of the knowledge of complex adaptive systems, succinctly compressed as a graphical generic metamodel. It is this capacity of compression that establishes it as a new theoretical structure that goes beyond the concept of autopoiesis/self-production proposed by Humberto Maturana, through the concept of autogenesis. While the concept of autogenesis has not had the collective coherence that autopoiesis has, Schwarz clearly defined it as a network of self-creation processes and firmly integrated it with relevant theory in complexity in a way not previously done. The outcome illustrates how a complex and adaptive viable system is able to survive, maintaining an autonomous durable existence within the confines of its own constraints. The nature of viable systems is that they should have at least potential independence in their processes of regulation, organisation, production, and cognition.
The generic model provides a holistic relationship between the attributes that explains the nature of viable systems and how they survive. It addresses the emergence and possible evolution of organisations towards complexity and autonomy intended to refer to any domain of system (e.g., biological, social, or cognitive). Systems in general, but also human activity systems, are able to survive (in other words they become viable) when they develop: (a) patterns of self-organisation that lead to self-organisation through morphogenesis and complexity; (b) patterns for long term evolution towards autonomy; (c) patterns that lead to the functioning of viable systems. This theory was intended to embrace the dynamics of dissipative systems using three planes. Plane of energy. Plane of information. Plane of totality. Each of the three planes (illustrated in Figure 1 below) is an independent ontological domain, interactively connected through networks of processes, and it shows the basic ontological structure of the viable system. Connected with this is an evolutionary spiral of self-organisation (adapted from Schwarz's 1997 paper), shown in Figure 2 below. Here, there are 4 phases or modes that a viable system can pass through. Mode 3 occurs with one of three possible outcomes (trifurcation): system death when viability is lost; more of the same; and metamorphosis when the viable system survives because it changes form. The dynamic process that viable living systems have, as they move from stability to instability and back again, is explained in Table 1, referring to aspects of both Figures 1 and 2. Schwarz's VST has been further developed, set within a social knowledge context, and formulated as autonomous agency theory. == See also == Systems theory Viable systems approach == References ==
Wikipedia/Viable_System_Theory
Autonomous agency theory (AAT) is a viable system theory (VST) which models autonomous social complex adaptive systems. It can be used to model the relationship between an agency and its environment(s), and these may include other interactive agencies. The nature of that interaction is determined by both the agency's external and internal attributes and constraints. Internal attributes may include immanent dynamic "self" processes that drive agency change. == History == Stafford Beer coined the term viable systems in the 1950s, and developed it within his management cybernetics theories. He designed his viable system model as a diagnostic tool for organisational pathologies (conditions of social ill-health). This model involves a system concerned with operations and their direct management, and a meta-system that "observes" the system and controls it. Beer's work refers to Maturana's concept of autopoiesis, which explains why living systems actually live. However, Beer did not make general use of the concept in his modelling process. In the 1980s Eric Schwarz developed an alternative model from the principles of complexity science. This not only embraces the ideas of autopoiesis (self-production), but also autogenesis (self-creation) which responds to a proposition that living systems also need to learn to maintain their viability. Self-production and self-creation are both networks of processes that connect an operational system of agency structure from which behaviour arises, an observing relational meta-system, this itself observed by an "existential" meta-meta-system. As such Schwarz' VST constitutes a different paradigm from that of Beer. AAT is a development of Schwarz' paradigm through the addition of propositions setting it in a knowledge context. == Development == AAT is a generic modelling approach that has the capacity to anticipate future potentials for behaviour. Such anticipation occurs because behaviour in the agency as a living system is "structure determined", where the structure itself of the agency is responsible for that anticipation. This is like anticipating the behaviour of both a tiger or a giraffe when faced with food options. The tiger has a structure that allows it to have speed, strength and sharp inbuilt weapons to kill moving prey, but the giraffe has a structure that allows it to acquire its food in high places in a way the tiger could not duplicate. Even if a giraffe has the speed to chase prey, it does not have the resources to kill and eat it. Agency generic structure is a substructure defined by three systems that are, in general terms, referred to as: existential (pattern of thematic relevance that is the consequence of experience); noumenal (representing the nature of a phenomenal effect subjectively through conceptual relationships) phenomenal (maintaining patterns of context related structural relevance connected with action, and constituting an origin for experience). These generic systems are ontologically distinct; their natures being determined by the context in which the autonomous agency exists. The substructure also maintains a superstructure that is constructed through context related propositional theory. Superstructural theory may include attributes of collective identity, cognition, emotion, personality; purpose and intention; self-reference, self-awareness, self-reflection, self-regulation and self-organisation. The substructural systems are connected by autopoietic and autogenetic networks of processes as shown in Figure 1 below. 
The terminology becomes simplified when the existential system is taken to be culture, and it is recognised that Piaget's concept of operative intelligence is equivalent to autopoiesis, and his figurative intelligence to autogenesis. The noumenal system now becomes a personality system, and autonomous agency theory now becomes cultural agency theory (CAT). This is normally used to model plural situations like organisations or nation states, when its personality system is taken to have normative characteristics (see also Normative personality), that is, to be driven by cultural norms, as represented in Figure 2 below. This has developed further through mindset agency theory, enabling agency behaviour to be anticipated. A feature of this modelling approach is that the properties of the cultural system act as an attractor for the agency as a whole, providing constraint for the properties of its personality and operative systems. This attraction ceases with cultural instability, when CAT reduces to instrumentality with no capacity to learn. Another feature is driven by possibilities of recursion permitted using Beer's proposition of viability law: every viable system contains and is contained in a viable system. == Cultural agency theory == Cultural agency theory (CAT) is a development of AAT. It is principally used to model organisational contexts that have at least potentially stable cultures. The existential system of AAT becomes the cultural system, the figurative system becomes a normative personality, and the operative system now represents the organisational structure that facilitates and constrains behaviour. The cultural system may be regarded as a (second-order) "observer" of the instrumental couple that occurs between the normative personality and the operative system. The function of this couple is to manifest figurative attributes of the personality, such as goals or ideology, in operative action, consequently influencing behaviour. This instrumental nature occurs through feedforward processes such that personality attributes can be processed for operative action. Where there are issues in doing this, feedback processes create imperatives for adjustment. This is like having a goal, and finding that it cannot be implemented, thereby having to reconsider the goal. This instrumental couple can also be seen in terms of the operative system and its first-order "observing" system, the normative personality. So, while personality is a first-order "observer" of CAT's operative system, it is ultimately directed by its second-order cultural "observer" system. A development of this has occurred using trait theory from psychology. Unlike other trait theories of personality, this adopts epistemic traits that centre on values, an approach that tends to be more stable in terms of personality testing and retesting (since basic values tend to be stable) than approaches that use, for instance, agency preferences (like the Myers-Briggs Type Indicator) that may change between test and retest. This trait theory for the normative personality is called mindset agency theory, and is a development of Maruyama's Mindscape Theory.
The cognitive process by which personality is represented through epistemic trait functions (called types), can be explained through both instrumental and epistemic rationality, where instrumental rationality (also referred to as utilitarian, and related to the expectations about the behaviour of other human beings or objects in the environment given some cognitive basis for those expectation) is independent of, if constrained by, epistemic rationality (related to the formation of beliefs in an unbiased manner, normally set in terms of believable propositions: due to their being strongly supported by evidence, as opposed to being agnostic towards propositions that are unsupported by "sufficient" evidence, whatever this means). Applications of CAT could be found in social, political and economic sciences, for instance, recent studies analyzed Donald Trump and Theresa May's personalities. == Higher orders of autonomous agency == Stafford Beer's (1979) viable system model is a well-known diagnostic model that comes out of his management cybernetics paradigm. Related to this is the idea of first-order and second-order cybernetics. Cybernetics is concerned with feedforward and feedback processes, and first-order cybernetics is concerned with this relationship between the system and its environment. Second-order cybernetics is concerned with the relationship between the system and its internal meta-system (that some refer to as "the observer" of the system). Von Foerster has referred to second-order cybernetics as the "cybernetics of cybernetics". While attempts to explore higher orders of cybernetics have been made, no development into a general theory of higher cybernetic orders has emerged from this paradigm. In contrast, extending the principles of autonomous agency theory, a generic model has been formulated for the generation of higher cybernetic orders, developed using the concepts of recursion and incursion as proposed by Dubois. The model is reflective, for instance, of processes of knowledge creation for community learning and symbolic convergence theory. This nth-order theory of cybernetics links with "the cybernetics of cybernetics" by assigning to its second-order cybernetic concept inferences that may arise from any higher-order cybernetics that may exist, if unperceived. The network of processes in this general representation of higher cybernetic orders is expressed in terms of orders of autopoiesis, so that for instance autogenesis may be seen as a second-order of autopoiesis. == See also == Agency (philosophy) Autogenesis, a thermodynamic synergy in living systems Cybernetics Second-order cybernetics == References ==
Wikipedia/Autonomous_Agency_Theory
Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research. This multifaceted research domain necessitates the collaborative efforts of chemists, biologists, mathematicians, physicists, and engineers to decipher the biology of intricate living systems by merging various quantitative molecular measurements with carefully constructed mathematical models. It represents a comprehensive method for comprehending the complex relationships within biological systems. In contrast to conventional biological studies that typically center on isolated elements, systems biology seeks to combine different biological data to create models that illustrate and elucidate the dynamic interactions within a system. This methodology is essential for understanding the complex networks of genes, proteins, and metabolites that influence cellular activities and the traits of organisms. One of the aims of systems biology is to model and discover emergent properties, of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. By exploring how function emerges from dynamic interactions, systems biology bridges the gaps that exist between molecules and physiological processes. As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble) As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models. 
A comprehensive systems biology approach necessitates: (i) a thorough characterization of an organism concerning its molecular components, the interactions among these molecules, and how these interactions contribute to cellular functions; (ii) a detailed spatio-temporal molecular characterization of a cell (for example, component dynamics, compartmentalization, and vesicle transport); and (iii) an extensive systems analysis of the cell's 'molecular response' to both external and internal perturbations. Furthermore, the data from (i) and (ii) should be synthesized into mathematical models to test knowledge by generating predictions (hypotheses), uncovering new biological mechanisms, assessing the system's behavior derived from (iii), and ultimately formulating rational strategies for controlling and manipulating cells. To tackle these challenges, systems biology must incorporate methods and approaches from various disciplines that have not traditionally interfaced with one another. The emergence of multi-omics technologies has transformed systems biology by providing extensive datasets that cover different biological layers, including genomics, transcriptomics, proteomics, and metabolomics. These technologies enable the large-scale measurement of biomolecules, leading to a more profound comprehension of biological processes and interactions. Increasingly, methods such as network analysis, machine learning, and pathway enrichment are utilized to integrate and interpret multi-omics data, thereby improving our understanding of biological functions and disease mechanisms. == History == Holism vs. Reductionism It is challenging to trace the origins and beginnings of systems biology. A comprehensive perspective on the human body was central to the medical practices of Greek, Roman, and East Asian traditions, where physicians and thinkers like Hippocrates believed that health and illness were linked to the equilibrium or disruption of bodily fluids known as humors. This holistic perspective persisted in the Western world throughout the 19th and 20th centuries, with prominent physiologists viewing the body as controlled by various systems, including the nervous system, the gastrointestinal system, and the cardiovascular system. In the latter half of the 20th century, however, this way of thinking was largely supplanted by reductionism: To grasp how the body functions properly, one needed to comprehend the role of each component, from tissues and cells to the complete set of intracellular molecular building blocks. In the 17th century, the triumphs of physics and the advancement of mechanical clockwork prompted a reductionist viewpoint in biology, interpreting organisms as intricate machines made up of simpler elements. Jan Smuts (1870–1950), naturalist/philosopher and twice Prime Minister of South Africa, coined the commonly used term holism. Whole systems such as cells, tissues, organisms, and populations were proposed to have unique (emergent) properties. It was impossible to try and reassemble the behavior of the whole from the properties of the individual components, and new technologies were necessary to define and understand the behavior of systems. Even though reductionism and holism are often contrasted with one another, they can be synthesized. One must understand how organisms are built (reductionism), while it is just as important to understand why they are so arranged (systems; holism). Each provides useful insights and answers different questions. 
However, the study of biological systems requires knowledge about control and design paradigms, as well as principles of structural stability, resilience, and robustness that are not directly inferred from mechanistic information. More profound insight will be gained by employing computer modeling to overcome the complexity in biological systems. Nevertheless, this perspective was consistently balanced by thinkers who underscored the significance of organization and emergent traits in living systems. This reductionist perspective has achieved remarkable success, and our understanding of biological processes has expanded with incredible speed and intensity. However, alongside these extraordinary advancements, science gradually came to understand that possessing complete information about molecular components alone would not suffice to elucidate the workings of life: the individual components rarely illustrate the function of a complex system. It is now commonly recognized that we need approaches for reconstructing integrated systems from their constituent parts and processes if we are to comprehend biological phenomena and manipulate them in a thoughtful, focused way. === Origin of systems biology as a field === In 1968, the term "systems biology" was first introduced at a conference. Those within the discipline soon recognized (and this understanding gradually became known to the wider public) that computational approaches were necessary to fully articulate the concepts and potential of systems biology. Specifically, these techniques needed to view biological phenomena as complex, multi-layered, adaptive, and dynamic systems. They had to account for transformations and intricate nonlinearities, thereby allowing for the smooth integration of smaller models ("modules") into larger, well-organized assemblies of models within complex settings. It became clear that mathematics and computation were vital for these methods. An acceleration of systems understanding came with the publication of the first ground-breaking text compiling molecular, physiological, and anatomical individuality in animals, which has been described as a revolution. Initially, the wider scientific community was reluctant to accept the integration of computational methods and control theory in the exploration of living systems, believing that "biology was too complex to apply mathematics." However, as the new millennium neared, this viewpoint underwent a significant and lasting transformation. More scientists started working on the integration of mathematical concepts to understand and solve biological problems. Systems biology has now been widely applied in several fields, including agriculture and medicine.
In this framework of 'top-down' systems biology, the primary goal is to uncover novel molecular mechanisms through a cyclical process that initiates with experimental data, transitions into data analysis and integration to identify correlations among molecule concentrations and concludes with the development of hypotheses regarding the co- and inter-regulation of molecular groups. These hypotheses then generate new predictions of correlations, which can be explored in subsequent experiments or through additional biochemical investigations. The notable advantages of top-down systems biology lie in its potential to provide comprehensive (i.e., genome-wide) insights and its focus on the metabolome, fluxome, transcriptome, and/or proteome. Top-down methods prioritize overall system states as influencing factors in models and the computational (or optimality) principles that govern the dynamics of the global system. For instance, while the dynamics of motor control (neuro) emerge from the interactions of millions of neurons, one can also characterize the neural motor system as a sort of feedback control system, which directs a 'plant' (the body) and guides movement by minimizing 'cost functions' (e.g., achieving trajectories with minimal jerk). === Bottom-up approach === Bottom-up systems biology infers the functional characteristics that may arise from a subsystem characterized with a high degree of mechanistic detail using molecular techniques. This approach begins with the foundational elements by developing the interactive behavior (rate equation) of each component process (e.g., enzymatic processes) within a manageable portion of the system. It examines the mechanisms through which functional properties arise in the interactions of known components. Subsequently, these formulations are combined to understand the behavior of the system. The primary goal of this method is to integrate the pathway models into a comprehensive model representing the entire system - the top or whole. As research and understanding advance, these models are often expanded by incorporating additional processes with high mechanistic detail. The bottom-up approach facilitates the integration and translation of drug-specific in vitro findings to the in vivo human context. This encompasses data collected during the early phases of drug development, such as safety evaluations. When assessing cardiac safety, a purely bottom-up modeling and simulation method entails reconstructing the processes that determine exposure, which includes the plasma (or heart tissue) concentration-time profiles and their electrophysiological implications, ideally incorporating hemodynamic effects and changes in contractility. Achieving this necessitates various models, ranging from single-cell to advanced three-dimensional (3D) multiphase models. Information from multiple in vitro systems that serve as stand-ins for the in vivo absorption, distribution, metabolism, and excretion (ADME) processes enables predictions of drug exposure, while in vitro data on drug-ion channel interactions support the translation of exposure to body surface potentials and the calculation of important electrophysiological endpoints. The separation of data related to the drug, system, and trial design, which is characteristic of the bottom-up approach, allows for predictions of exposure-response relationships considering both inter- and intra-individual variability, making it a valuable tool for evaluating drug effects at a population level. 
Numerous successful instances of applying physiologically based pharmacokinetic (PBPK) modeling in drug discovery and development have been documented in the literature. == Associated disciplines == According to the interpretation of systems biology as using large data sets using interdisciplinary tools, a typical application is metabolomics, which is the complete set of all the metabolic products, metabolites, in the system at the organism, cell, or tissue level. Items that may be a computer database include: phenomics, organismal variation in phenotype as it changes during its life span; genomics, organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell specific variation. (i.e., telomere length variation); epigenomics/epigenetics, organismal and corresponding cell specific transcriptomic regulating factors not empirically coded in the genomic sequence. (i.e., DNA methylation, Histone acetylation and deacetylation, etc.); transcriptomics, organismal, tissue or whole cell gene expression measurements by DNA microarrays or serial analysis of gene expression; interferomics, organismal, tissue, or cell-level transcript correcting factors (i.e., RNA interference), proteomics, organismal, tissue, or cell level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry). Sub disciplines include phosphoproteomics, glycoproteomics and other methods to detect chemically modified proteins; glycomics, organismal, tissue, or cell-level measurements of carbohydrates; lipidomics, organismal, tissue, or cell level measurements of lipids. The molecular interactions within the cell are also studied, this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules. Neuroelectrodynamics, where the computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms; and fluxomics, measurements of the rates of metabolic reactions in a biological system (cell, tissue, or organism). In approaching a systems biology problem there are two main approaches. These are the top down and bottom up approach. The top down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top down approach. Conversely, the bottom up approach is used to create detailed models while also incorporating experimental data. An example of the bottom up approach is the use of circuit models to describe a simple gene network. Various technologies utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Mechanobiology, forces and physical properties at all scales, their interplay with other regulatory mechanisms; biosemiotics, analysis of the system of sign relations of an organism or other biosystems; Physiomics, a systematic study of physiome in biology. Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study (tumorigenesis and treatment of cancer). 
It works with the specific data (patient samples, high-throughput data with particular attention to characterizing cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based gene knocking down high-throughput screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is ability to better diagnose cancer, classify it and better predict the outcome of a suggested treatment, which is a basis for personalized cancer medicine and virtual cancer patient in more distant prospective. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours. The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis). Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suits; network-based approaches for analyzing high dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also include identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed. == Model and its types == === What is a model? === A model serves as a conceptual depiction of objects or processes, highlighting certain characteristics of these items or activities. A model captures only certain facets of reality; however, when created correctly, this limited scope is adequate because the primary goal of modeling is to address specific inquiries. The saying, "essentially, all models are wrong, but some are useful," attributed to the statistician George Box, is a suitable principle for constructing models. 
=== Types of models === Boolean Models: These models are also known as logical models and represent biological systems using binary states, allowing for the analysis of gene regulatory networks and signaling pathways. They are advantageous for their simplicity and ability to capture qualitative behaviors. Petri nets (PN): A unique type of bipartite graph consisting of two types of nodes: places and transitions. When a transition is activated, a token is transferred from the input places to the output places; the process is asynchronous and non-deterministic. Polynomial dynamical systems (PDS)- An algebraically based approach that represents a specific type of sequential FDS (Finite Dynamical System) operating over a finite field. Each transition function is an element within a polynomial ring defined over the finite field. It employs advanced rapid techniques from computer algebra and computational algebraic geometry, originating from the Buchberger algorithm, to compute the Gröbner bases of ideals in these rings. An ideal consists of a set of polynomials that remain closed under polynomial combinations. Differential equation models (ODE and PDE)- Ordinary Differential Equations (ODEs) are commonly utilized to represent the temporal dynamics of networks, while Partial Differential Equations (PDEs) are employed to describe behaviors occurring in both space and time, enabling the modeling of pattern formation. These spatiotemporal Diffusion-Reaction Systems demonstrate the emergence of self-organizing patterns, typically articulated by the general local activity principle, which elucidates the factors contributing to complexity and self-organization observed in nature. Bayesian models: This kind of model is commonly referred to as dynamic models. It utilizes a probabilistic approach that enables the integration of prior knowledge through Bayes' Theorem. A challenge can arise when determining the direction of an interaction. Finite State Linear Model (FSML): This model integrates continuous variables (such as protein concentration) with discrete elements (like promoter regions that have a limited number of states) in modeling. Agent-based models (ABM): Initially created within the fields of social sciences and economics, it models the behavior of individual agents (such as genes, mRNAs (siRNA, miRNA, lncRNA), proteins, and transcription factors) and examines how their interactions influence the larger system, which in this case is the cell. Rule – based models: In this approach, molecular interactions are simulated using local rules that can be utilized even in the absence of a specific network structure, meaning that the step to infer the network is not required, allowing these network-free methods to avoid the complex challenges associated with network inference. Piecewise-linear differential equation models (PLDE): The model is composed of a piecewise-linear representation of differential equations using step functions, along with a collection of inequality restrictions for the parameter values. Stochastic models: Models utilizing the Gillespie algorithm for addressing the chemical master equation provide the likelihood that a particular molecular species will possess a defined molecular population or concentration at a specified future point in time. The Gillespie method is the most computationally intensive option available. In cases where the number of molecules is low or when modeling the effects of molecular crowding is desired, the stochastic approach is preferred. 
State Space Model (SSM): Linear or non-linear modeling techniques that utilize an abstract state space along with various algorithms, which include Bayesian and other statistical methods, autoregressive models, and Kalman filtering. === Creating biological models === Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass-conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations. These parameter values will be the various kinetic constants required to fully describe the model. This model determines the behavior of species in biological systems and brings new insight into the specific activities of systems. Sometimes it is not possible to gather all reaction rates of a system. Unknown reaction rates are determined by simulating the model with known parameters and target behavior, which provides possible parameter values. The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict metabolic phenotypes, using genome-scale models. One of the methods is the flux balance analysis (FBA) approach, by which one can study biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing the objective function of interest (e.g. maximizing biomass production to predict growth). == Tools and database == == Applications in systems biology == Systems biology, an interdisciplinary field that combines biology, data analysis, and mathematical modeling, has revolutionized various sectors, including medicine, agriculture, and environmental science. By integrating omics data (genomics, proteomics, metabolomics, etc.), systems biology provides a holistic understanding of complex biological systems, enabling advancements in drug discovery, crop improvement, and environmental impact assessment. This section explores the applications of systems biology across these domains, highlighting both industrial and academic research contributions. Systems biology is used in agriculture to identify the genetic and metabolic components of complex characteristics through trait dissection. It aids in the comprehension of plant-pathogen interactions in disease resistance. It is utilized in nutritional quality to enhance nutritional content through metabolic engineering. === Cancer === Approaches to cancer systems biology have made it possible to effectively combine experimental data with computer algorithms and, in exceptional cases, to apply actionable targeted medicines for the treatment of cancer. In order to apply innovative cancer systems biology techniques and boost their effectiveness for customizing new, individualized cancer treatment modalities, comprehensive multi-omics data acquired through the sequencing of tumor samples and experimental model systems will be crucial. Cancer systems biology has the potential to provide insights into intratumor heterogeneity and identify therapeutic options.
In particular, enhanced cancer systems biology methods that incorporate not only multi-omics data from tumors, but also extensive experimental models derived from patients can assist clinicians in their decision-making processes, ultimately aiming to address treatment failures in cancer. === Drug development === Before the 1990s, phenotypic drug discovery formed the foundation of most research in drug discovery, utilizing cellular and animal disease models to find drugs without focusing on a specific molecular target. However, following the completion of the human genome project, target-based drug discovery has become the predominant approach in contemporary pharmaceutical research for various reasons. Gene knockout and transgenic models enable researchers to investigate and gain insights into the function of targets and the mechanisms by which drugs operate on a molecular level. Target-based assays lend themselves better to high-throughput screening, which simplifies the process of identifying second-generation drugs—those that improve upon first-in-class drugs in aspects such as potency, selectivity, and half-life, especially when combined with structure-based drug design. To do this, researchers utilize the three-dimensional structure of target proteins and computational models of interactions between small molecules and those targets to aid in the identification of superior compounds. Cell systems biology represents a phenotypic drug discovery method that integrates the complexity of human disease biology with combinatorial design to develop assays. BioMAP® systems, founded on the principles of cell systems biology, consist of assays based on primary human cells that are designed to replicate intricate human disease and tissue biology in a feasible in vitro environment. Primary human cell types and co-cultures are activated using combinations of pathway activators to create cell signaling networks that align more closely with human disease. These systems are analyzed by assessing the levels of both secreted proteins and cell surface mediators. The distinct variations in protein readouts resulting from drug effects are recorded in a database that enables users to search for functional similarities (or biological 'read across'). In this method, inhibitors or activators targeting specific pathways are discovered to consistently affect the levels of multiple endpoints, often exhibiting a uniquely defined pattern, so that the resulting signatures can be linked to particular mechanisms of action. === Food safety and quality === The multi-omics technologies in systems biology can also be used in aspects of food quality and safety. High-throughput omics techniques, including genomics, proteomics, and metabolomics, offer valuable insights into the molecular composition of food products, facilitating the identification of critical elements that affect food quality and safety. For example, integrating omics data can enhance the understanding of the metabolic pathways and associated functional gene patterns that contribute to both the nutritional value and safety of food crops. This comprehensive approach helps ensure the creation of food products that are both nutritious and safe, capable of satisfying the increasing global demand. === Environmental systems biology === Genomics examines all genes as an evolving system over time, aiming to understand their interactions and effects on biological pathways, networks, and physiology in a broader context compared to genetics.
As a result, genomics holds significant potential for discovering clusters of genes associated with complex disorders, aiding in the comprehension and management of diseases induced by environmental factors. When exploring the interactions between the environment and the genome as contributors to complex diseases, it is clear that the genome itself cannot be altered for the time being. However, once these interactions are recognized, it is feasible to minimize exposure or adjust lifestyle factors related to the environmental aspect of the disease. Gene-environment interactions can occur through direct associations with active metabolites at certain locations within the genome, potentially leading to mutations that could cause human diseases. Indirect interactions with the human genome can take place through intracellular receptors that function as ligand-activated transcription factors, which modulate gene expression and maintain cellular balance, or with an environmental factor that may produce detrimental effects. This type of environmental-gene interaction could be more straightforward to investigate than direct interactions since there are numerous markers of this kind of interaction that are readily measurable before the disease manifests. Examples of this include the expression of cytochrome P450 genes following exposure to environmental substances, such as the polycyclic aromatic hydrocarbon benzo[a]pyrene, which binds to the Ah receptor. == Technical challenges == One of the main challenges in systems biology is the connection between experimental descriptions, observations, data, models, and the assumptions that stem from them. In essence, systems biology must be understood within an information management framework that significantly encompasses experimental life sciences. Models are created using various languages or representation schemes, each suitable for conveying and reasoning about distinct sets of characteristics. There is no single universal language for systems biology that can adequately cover the diverse phenomena we aim to investigate. However, this intricate scenario overlooks two important aspects. Models can be developed in multiple versions over time and by different research teams. Conflicts can occur, and observations may be disputed. Various researchers might produce models in different versions and configurations. The unpredictable elements suggest that systems biology is not likely to yield a definitive collection of established models. Instead, we can expect a rich ecosystem of models to develop within a structure that fosters discussion and cooperation among participants. Challenges also exist in verifying the constraints and creating modeling frameworks with robust compositional strategies. This may create a need to handle models that may conflict with one another, whether between schemes or across different scales. In the end, the goal could involve the creation of personalized models that reflect differences in physiology, as opposed to universal models of biological processes. Other challenges include the massive amount of data created by high-throughput omics technologies which presents considerable challenges in terms of computation and storage. Each analysis in omics can result in data files ranging from terabytes to petabytes, which requires strong computational systems and ample storage solutions to manage and process these datasets effectively. 
The computational requirements are made more difficult by the necessity for advanced algorithms that can integrate and analyze diverse, high-dimensional data. Approaches like deep learning and network-based methods have displayed potential in tackling these issues, but they also demand significant computational power. == Artificial intelligence (AI) in systems biology == Utilizing AI in Systems Biology enables scientists to uncover novel insights into the intricate relationships present within biological systems, such as those among genes, proteins, and cells. A significant focus within Systems Biology is the application of AI for the analysis of expansive and complex datasets, including multi-omics data produced by high-throughput methods like next-generation sequencing and proteomics. Approaches powered by AI can be employed to detect patterns and correlations within these datasets and to anticipate the behavior of biological systems under varying conditions. For instance, artificial intelligence can identify genes that are expressed differently across various cancer types or detect small molecules linked to particular disease states. A key difficulty in analyzing multi-omics data is the integration of information from multiple sources. AI can create integrative models that consider the intricate interactions between different types of molecular data. These models may be utilized to uncover new biomarkers or therapeutic targets for diseases, as well as to enhance our understanding of fundamental biological processes. By significantly speeding up our comprehension of complex biological systems, AI has the potential to lead to new treatments and therapies for a range of diseases. Structural systems biology is a multidisciplinary field that merges systems biology with structural biology to investigate biological systems at the molecular scale. This domain strives for a thorough understanding of how biological molecules interact and function within cells, tissues, and organisms. The integration of AI in structural systems biology has become increasingly vital for examining extensive and complex datasets and modeling the behavior of biological systems. AI facilitates the analysis of protein–protein interaction networks within structural systems biology. These networks can be explored using graph theory and various mathematical methods, uncovering key characteristics such as hubs and modules. AI can also assist in the discovery of new drugs or therapies by predicting the effect of a drug on a particular biological component or pathway. == See also == == References == == Further reading == Klipp, Edda; Liebermeister, Wolfram; Wierling, Christoph; Kowald, Axel (2016). Systems Biology - A Textbook, 2nd edition. Wiley. ISBN 978-3-527-33636-4. Asfar S. Azmi, ed. (2012). Systems Biology in Cancer Research and Drug Discovery. Springer. ISBN 978-94-007-4819-4. Kitano, Hiroaki (15 October 2001). Foundations of Systems Biology. MIT Press. ISBN 978-0-262-11266-6. Werner, Eric (29 March 2007). "All systems go". Nature. 446 (7135): 493–494. Bibcode:2007Natur.446..493W. doi:10.1038/446493a. provides a comparative review of three books: Alon, Uri (7 July 2006). An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman & Hall. ISBN 978-1-58488-642-6. Kaneko, Kunihiko (15 September 2006). Life: An Introduction to Complex Systems Biology. Springer-Verlag. Bibcode:2006lics.book.....K. ISBN 978-3-540-32666-3. Palsson, Bernhard O. (16 January 2006). 
Systems Biology: Properties of Reconstructed Networks. Cambridge University Press. ISBN 978-0-521-85903-5. Werner Dubitzky; Olaf Wolkenhauer; Hiroki Yokota; Kwan-Hyun Cho, eds. (13 August 2013). Encyclopedia of Systems Biology. Springer-Verlag. ISBN 978-1-4419-9864-4. == External links == Media related to Systems biology at Wikimedia Commons Biological Systems in bio-physics-wiki
Wikipedia/Complex_systems_biology
The Journal of Universal Computer Science is a monthly peer-reviewed open-access scientific journal covering all aspects of computer science. == History == The journal was established in 1994 and is published by the J.UCS Consortium, formed by nine research organisations. The editors-in-chief are Muhammad Tanvir Afzal (Capital University of Science & Technology), Wolf-Tilo Balke (Leibniz University Hannover), Christian Gütl (Graz University of Technology), Rocael Hernández Rizzardini (Galileo University), Matthias Jarke (RWTH Aachen University), Stefanie Lindstaedt (Graz University of Technology), Peter Serdyukov (National University), and Klaus Tochtermann (Graz University of Technology). == Abstracting and indexing == The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2017 impact factor of 1.066. == References == == External links == Official website
Wikipedia/Journal_of_Universal_Computer_Science
A network partition is a division of a computer network into relatively independent subnets, either by design, to optimize them separately, or due to the failure of network devices. Distributed software must be designed to be partition-tolerant; that is, it should still work correctly even after the network is partitioned. For example, in a network with multiple subnets where nodes A and B are located in one subnet and nodes C and D are in another, a partition occurs if the network switch device between the two subnets fails. In that case nodes A and B can no longer communicate with nodes C and D, but all nodes A-D otherwise continue to work as before. == As a CAP trade-off == The CAP theorem describes a trade-off among three properties: consistency, availability, and partition tolerance. Partition tolerance, in this context, means the ability of a data processing system to continue processing data even if a network partition causes communication errors between subsystems. == External links == Partition of the Large Network doi:10.13140/RG.2.2.20183.06565/6 == References ==
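As a rough illustration of the example above, the following sketch models the two subnets and the connecting switch as a graph and shows how removing the switch splits the network into isolated components; the node names and topology are hypothetical.

```python
import networkx as nx

# Toy topology: nodes A and B in one subnet, C and D in another,
# with a single switch bridging the two subnets.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "switch"), ("B", "switch"),
    ("C", "D"), ("C", "switch"), ("D", "switch"),
])

print(nx.is_connected(G))                   # True: no partition yet

G.remove_node("switch")                     # the switch between the subnets fails
subnets = list(nx.connected_components(G))  # each component is now an isolated subnet
print(subnets)                              # [{'A', 'B'}, {'C', 'D'}]
```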
Wikipedia/Network_partition
Actor–network theory (ANT) is a theoretical and methodological approach to social theory where everything in the social and natural worlds exists in constantly shifting networks of relationships. It posits that nothing exists outside those relationships. All the factors involved in a social situation are on the same level, and thus there are no external social forces beyond what and how the network participants interact at present. Thus, objects, ideas, processes, and any other relevant factors are seen as just as important in creating social situations as humans. ANT holds that social forces do not exist in themselves, and therefore cannot be used to explain social phenomena. Instead, strictly empirical analysis should be undertaken to "describe" rather than "explain" social activity. Only after this can one introduce the concept of social forces, and only as an abstract theoretical concept, not something which genuinely exists in the world. Although it is best known for its controversial insistence on the capacity of nonhumans to act or participate in systems or networks or both, ANT is also associated with forceful critiques of conventional and critical sociology. Developed by science and technology studies (STS) scholars Michel Callon, Madeleine Akrich and Bruno Latour, the sociologist John Law, and others, it can more technically be described as a "material-semiotic" method. This means that it maps relations that are simultaneously material (between things) and semiotic (between concepts). It assumes that many relations are both material and semiotic. The term actor-network theory was coined by John Law in 1992 to describe the work being done across case studies in different areas at the Centre de Sociologie de l'Innovation at the time. The theory demonstrates that everything in the social and natural worlds, human and nonhuman, interacts in shifting networks of relationships without any other elements out of the networks. ANT challenges many traditional approaches by defining nonhumans as actors equal to humans. This claim provides a new perspective when applying the theory in practice. Broadly speaking, ANT is a constructivist approach in that it avoids essentialist explanations of events or innovations (i.e. ANT explains a successful theory by understanding the combinations and interactions of elements that make it successful, rather than saying it is true and the others are false). Likewise, it is not a cohesive theory in itself. Rather, ANT functions as a strategy that assists people in being sensitive to terms and the often unexplored assumptions underlying them. It is distinguished from many other STS and sociological network theories for its distinct material-semiotic approach. == Background and context == ANT was first developed at the Centre de Sociologie de l'Innovation (CSI) of the École nationale supérieure des mines de Paris in the early 1980s by staff (Michel Callon, Madeleine Akrich, Bruno Latour) and visitors (including John Law). The 1984 book co-authored by John Law and fellow-sociologist Peter Lodge (Science for Social Scientists; London: Macmillan Press Ltd.) is a good example of early explorations of how the growth and structure of knowledge could be analyzed and interpreted through the interactions of actors and networks. 
Initially created in an attempt to understand processes of innovation and knowledge-creation in science and technology, the approach drew on existing work in STS, on studies of large technological systems, and on a range of French intellectual resources including the semiotics of Algirdas Julien Greimas, the writing of philosopher Michel Serres, and the Annales School of history. ANT appears to reflect many of the preoccupations of French post-structuralism, and in particular a concern with non-foundational and multiple material-semiotic relations. At the same time, it was much more firmly embedded in English-language academic traditions than most post-structuralist-influenced approaches. Its grounding in (predominantly English) science and technology studies was reflected in an intense commitment to the development of theory through qualitative empirical case-studies. Its links with largely US-originated work on large technical systems were reflected in its willingness to analyse large scale technological developments in an even-handed manner to include political, organizational, legal, technical and scientific factors. Many of the characteristic ANT tools (including the notions of translation, generalized symmetry and the "heterogeneous network"), together with a scientometric tool for mapping innovations in science and technology ("co-word analysis") were initially developed during the 1980s, predominantly in and around the CSI. The "state of the art" of ANT in the late 1980s is well-described in Latour's 1987 text, Science in Action. From about 1990 onwards, ANT started to become popular as a tool for analysis in a range of fields beyond STS. It was picked up and developed by authors in parts of organizational analysis, informatics, health studies, geography, sociology, anthropology, archaeology, feminist studies, technical communication, and economics. As of 2008, ANT is a widespread, if controversial, range of material-semiotic approaches for the analysis of heterogeneous relations. In part because of its popularity, it is interpreted and used in a wide range of alternative and sometimes incompatible ways. There is no orthodoxy in current ANT, and different authors use the approach in substantially different ways. Some authors talk of "after-ANT" to refer to "successor projects" blending together different problem-focuses with those of ANT. == Key concepts == === Actor/Actant === An actor (actant) is something that acts or to which activity is granted by others. It implies no motivation of human individual actors nor of humans in general. An actant can literally be anything provided it is granted to be the source of action. In other words, an actor, in this circumstance, is considered to be any entity that does things. For example, in the "Pasteur Network", microorganisms are not inert: they cause unsterilized materials to ferment while leaving sterilized materials unaffected. If they had taken other actions, that is, if they had not cooperated with Pasteur – if they had not acted (at least according to Pasteur's intentions) – then Pasteur's story might have been different. It is in this sense that Latour can refer to microorganisms as actors. Under the framework of ANT, the principle of generalized symmetry requires that all entities be described in the same terms before a network is considered. Any differences between entities are generated in the network of relations, and do not exist before any network is applied.
==== Human actors ==== The term human normally refers to human beings and their behaviors. ==== Nonhuman actors ==== Traditionally, nonhuman entities are creatures including plants, animals, geology, and natural forces, as well as collective human creations such as arts and languages. In ANT, nonhuman covers multiple entities including things, objects, animals, natural phenomena, material structures, transportation devices, texts, and economic goods. Nonhuman actors do not, however, include entities such as humans, supernatural beings, and other symbolic objects in nature. === Actor-Network === As the term implies, the actor-network is the central concept in ANT. The term "network" is somewhat problematic in that it, as Latour notes, has a number of unwanted connotations. Firstly, it implies that what is described takes the shape of a network, which is not necessarily the case. Secondly, it implies "transportation without deformation," which, in ANT, is not possible since any actor-network involves a vast number of translations. Latour, however, still contends that network is a fitting term to use, because "it has no a priori order relation; it is not tied to the axiological myth of a top and of a bottom of society; it makes absolutely no assumption whether a specific locus is macro- or micro- and does not modify the tools to study the element 'a' or the element 'b'." This use of the term "network" is very similar to Deleuze and Guattari's rhizomes; Latour even remarks tongue-in-cheek that he would have no objection to renaming ANT "actant-rhizome ontology" if only it had sounded better, which hints at Latour's uneasiness with the word "theory". Actor–network theory tries to explain how material–semiotic networks come together to act as a whole; the clusters of actors involved in creating meaning are both material and semiotic. As a part of this it may look at explicit strategies for relating different elements together into a network so that they form an apparently coherent whole. These networks are potentially transient, existing in a constant making and re-making. This means that relations need to be repeatedly "performed" or the network will dissolve. They also assume that networks of relations are not intrinsically coherent, and may indeed contain conflicts. Social relations, in other words, are only ever in process, and must be performed continuously. The Pasteur story that was mentioned above introduced the patterned network of diverse materials, which is called the idea of 'heterogeneous networks'. The basic idea of the patterned network is that humans are not the only factors or contributors in society, or in any social activities and networks. Thus, the network comprises machines, animals, things, and other objects. For those nonhuman actors, it might be hard for people to imagine their roles in the network. For example, say two people, Jacob and Mike, are speaking through texts. With current technology, they are able to communicate with each other without seeing each other in person. Therefore, when typing or writing, the communication is basically not mediated by either of them, but instead by a network of objects, like their computers or cell phones. If taken to its logical conclusion, then, nearly any actor can be considered merely a sum of other, smaller actors. A car is an example of a complicated system. It contains many electronic and mechanical components, all of which are essentially hidden from the view of the driver, who simply deals with the car as a single object.
This effect is known as punctualisation, and is similar to the idea of encapsulation in object-oriented programming. When an actor network breaks down, the punctualisation effect tends to cease as well. In the automobile example above, a non-working engine would cause the driver to become aware of the car as a collection of parts rather than just a vehicle capable of transporting him or her from place to place. This can also occur when elements of a network act contrarily to the network as a whole. In his book Pandora's Hope, Latour likens depunctualization to the opening of a black box. When closed, the box is perceived simply as a box, although when it is opened all elements inside it become visible. === Translation === Central to ANT is the concept of translation which is sometimes referred to as sociology of translation, in which innovators attempt to create a forum, a central network in which all the actors agree that the network is worth building and defending. In his widely debated 1986 study of how marine biologists tried to restock the St Brieuc Bay in order to produce more scallops, Michel Callon defined four moments of translation: Problematisation: The researchers attempted to make themselves important to the other players in the drama by identifying their nature and issues, then claiming that they could be remedied if the actors negotiated the 'obligatory passage point' of the researchers' study program. Interessement: A series of procedures used by the researchers to bind the other actors to the parts that had been assigned to them in that program. Enrollment: A collection of tactics used by the researchers to define and connect the numerous roles they had assigned to others. Mobilisation: The researchers utilized a series of approaches to ensure that ostensible spokespeople for various key collectivities were appropriately able to represent those collectivities and were not deceived by the latter. Also important to the notion is the role of network objects in helping to smooth out the translation process by creating equivalencies between what would otherwise be very challenging people, organizations or conditions to mesh together. Bruno Latour spoke about this particular task of objects in his work Reassembling the Social. === Quasi-object === For the rethinking of social relations as networks, Latour mobilizes a concept from Michel Serres and expands on it in order “to locate the position of these strange new hybrids”. Quasi-objects are simultaneously quasi-subjects – the prefix quasi denotes that neither ontological status, as subject or as object, is pure or permanent, but that these are dynamic entities whose status shifts, depending on their respective momentous activity and their corresponding position in a collective or network. What is decisive is circulation and participation, from which networks emerge; examples of quasi-objects are language, money, bread, love, or the ball in a soccer game: all of these human or non-human, material or immaterial actants have no agency (and thus, subject-status) in themselves; however, they can be seen as the connective tissue underlying – or even activating – the interactions in which they are enmeshed. In Reassembling the Social, Latour refers to these in-between actants as “the mediators whose proliferation generates, among many other entities, what could be called quasi-objects and quasi-subjects.” Actor–network theory refers to these creations as tokens or quasi-objects which are passed between actors within the network.
As the token is increasingly transmitted or passed through the network, it becomes increasingly punctualized and also increasingly reified. When the token is decreasingly transmitted, or when an actor fails to transmit the token (e.g., the oil pump breaks), punctualization and reification are decreased as well. == Other central concepts == === A material semiotic method === Although it is called a "theory", ANT does not usually explain "why" a network takes the form that it does. Rather, ANT is a way of thoroughly exploring the relational ties within a network (which can be a multitude of different things). As Latour notes, "explanation does not follow from description; it is description taken that much further." It is not, in other words, a theory "of" anything, but rather a method, or a "how-to book" as Latour puts it. The approach is related to other versions of material-semiotics (notably the work of philosophers Gilles Deleuze, Michel Foucault, and feminist scholar Donna Haraway). It can also be seen as a way of being faithful to the insights of ethnomethodology and its detailed descriptions of how common activities, habits and procedures sustain themselves. Similarities between ANT and symbolic interactionist approaches such as the newer forms of grounded theory like situational analysis, exist, although Latour objects to such a comparison. Although ANT is mostly associated with studies of science and technology and with the sociology of science, it has been making steady progress in other fields of sociology as well. ANT is adamantly empirical, and as such yields useful insights and tools for sociological inquiry in general. ANT has been deployed in studies of identity and subjectivity, urban transportation systems, and passion and addiction. It also makes steady progress in political and historical sociology. === Intermediaries and mediators === The distinction between intermediaries and mediators is key to ANT sociology. Intermediaries are entities which make no difference (to some interesting state of affairs which we are studying) and so can be ignored. They transport the force of some other entity more or less without transformation and so are fairly uninteresting. Mediators are entities which multiply difference and so should be the object of study. Their outputs cannot be predicted by their inputs. From an ANT point of view sociology has tended to treat too much of the world as intermediaries. For instance, a sociologist might take silk and nylon as intermediaries, holding that the former "means", "reflects", or "symbolises" the upper classes and the latter the lower classes. In such a view the real world silk–nylon difference is irrelevant– presumably many other material differences could also, and do also, transport this class distinction. But taken as mediators these fabrics would have to be engaged with by the analyst in their specificity: the internal real-world complexities of silk and nylon suddenly appear relevant, and are seen as actively constructing the ideological class distinction which they once merely reflected. For the committed ANT analyst, social things—like class distinctions in taste in the silk and nylon example, but also groups and power—must constantly be constructed or performed anew through complex engagements with complex mediators. There is no stand-alone social repertoire lying in the background to be reflected off, expressed through, or substantiated in, interactions (as in an intermediary conception). 
=== Reflexivity === Bruno Latour's articulation of reflexivity in Actor-Network Theory (ANT) reframes it as an opportunity rather than a problem. His argument addresses the limitations of reflexivity as traditionally conceived in relativist epistemologies and replaces it with a pragmatic, relational approach tied to ANT's broader principles. Latour argues that the observer is merely one actor among many within the network, eliminating the problem of reflexivity as a paradox of status. Reflexivity instead emerges through the tangible work of navigating and translating between networks, requiring the observer to engage actively, like any other actor, in the labour of connection and translation. This grounded form of reflexivity enhances the observer's role as a "world builder" and reinforces ANT's emphasis on the relational and dynamic nature of knowledge creation. === Hybridity === The belief that neither a human nor a nonhuman is pure, in the sense that neither is human or nonhuman in an absolute sense, but rather beings created via interactions between the two. Humans are thus regarded as quasi-subjects, while nonhumans are regarded as quasi-objects. == Actor–network theory and specific disciplines == Recently, there has been a movement to introduce actor network theory as an analytical tool to a range of applied disciplines outside of sociology, including nursing, public health, urban studies (Farias and Bender, 2010), and community, urban, and regional planning (Beauregard, 2012; Beauregard and Lieto, 2015; Rydin, 2012; Rydin and Tate, 2016, Tate, 2013). === International relations === Actor–network theory has become increasingly prominent within the discipline of international relations and political science. Theoretically, scholars within IR have employed ANT in order to disrupt traditional world political binaries (civilised/barbarian, democratic/autocratic, etc.), consider the implications of a posthuman understanding of IR, explore the infrastructures of world politics, and consider the effects of technological agency. Empirically, IR scholars have drawn on insights from ANT in order to study phenomena including political violences like the use of torture and drones, piracy and maritime governance, and garbage. === Design === The actor–network theory can also be applied to design, using a perspective that is not simply limited to an analysis of an object's structure. From the ANT viewpoint, design is seen as a series of features that account for a social, psychological, and economical world. ANT argues that objects are designed to shape human action and mold or influence decisions. In this way, the objects' design serves to mediate human relationships and can even impact our morality, ethics, and politics. === Literary criticism === The literary critic Rita Felski has argued that ANT offers the fields of literary criticism and cultural studies vital new modes of interpreting and engaging with literary texts. She claims that Latour's model has the capacity to allow "us to wiggle out of the straitjacket of suspicion," and to offer meaningful solutions to the problems associated with critique. The theory has been crucial to her formulation of postcritique. Felski suggests that the purpose of applying ANT to literary studies "is no longer to diminish or subtract from the reality of the texts we study but to amplify their reality, as energetic coactors and vital partners." 
=== Anthropology of religion === In the study of Christianity by anthropologists, ANT has been employed in a variety of ways to understand how humans interact with nonhuman actors. Some have been critical of the field of Anthropology of Religion in its tendency to presume that God is not a social actor. ANT is used to problematize the role of God, as a nonhuman actor, and speak of how They affect religious practice. Others have used ANT to speak of the structures and placements of religious buildings, especially in cross-cultural contexts, which can see architecture as an agent making God's presence tangible. == ANT in practice == ANT has been considered not just a theory but also a methodology. In fact, ANT is a useful method that can be applied in different studies. Moreover, with the development of digital communication, ANT is now frequently applied in scientific fields such as information systems (IS) research. In addition, it has widened the horizons of researchers in the arts as well. === ANT in arts === ANT has been a major influence on the development of design. In the past, researchers and scholars in the design field mainly viewed the world as a situation of human interaction; no matter what design was applied [who?], it was intended for human action. However, the idea of ANT is now applied to design principles, where design comes to be viewed as a connector. As the view of design itself has changed, design has come to be considered more important in daily life. Scholars [who?] analyze how design shapes, connects, reflects, and interacts with our daily activities. ANT has also been widely applied in museums. ANT proposes that it is difficult to discern the 'hard' from the 'soft' components of the apparatus in curatorial practice; that the object 'in progress' of being curated is slick and difficult to separate from the setting of the experiment or the experimenter's identity. === ANT in science === In recent years, actor-network theory has gained a lot of traction, and a growing number of IS academics are using it explicitly in their research. Despite the fact that these applications vary greatly, all of the scholars cited below agree that the theory provides new notions and ideas for understanding the socio-technical character of information systems. Bloomfield and colleagues present an intriguing case study of the development of a specific set of resource management information systems in the UK National Health Service, and they evaluate their findings using concepts from actor-network theory. The actor-network approach does not prioritize social or technological aspects, which mirrors the situation in the case study, where arguments about social structures and technology are intertwined within actors' discourse as they try to persuade others to align with their own goals. The research emphasizes the interpretative flexibility of information technology and systems, in the sense that seemingly similar systems produce drastically different outcomes in different locales as a result of the specific translation and network-building processes that occurred. They show how the boundary between the technological and the social, as well as the link between them, is the topic of constant battles and trials of strength in the creation of facts, rather than taking technology for granted. == Impact of ANT == === Contributions of nonhuman actors === There are at least four contributions of nonhumans as actors within ANT. Nonhuman actors can be considered as a condition in human social activities.
Through humans' formation of nonhuman actors such as durable materials, nonhumans provide a stable foundation for interactions in society. Reciprocally, nonhumans' actions and capacities serve as a condition for the possibility of the formation of society. In Latour's We Have Never Been Modern, his conceptual "parliament of things" consists of the social, the natural, and discourse together as hybrids. Although the interlocks between human actors and nonhumans affect modernized society, this parliamentary setting based on nonhuman actors would eliminate such false modernization and change the dichotomy between modern society and premodern society. Nonhuman actors can be considered as mediators. On the one hand, nonhumans could constantly modify relations between actors. On the other hand, nonhumans share the same features with other actors not solely as means for human actors. In this circumstance, nonhuman actors impact human interactions. They either create an atmosphere for humans to agree with each other, or lead to conflict as mediators. It is noticeable that, in the corpus of ANT, the status of mediation is more affiliated with intermediaries or means as a stable presence, while mediators exert more power to influence actors and networks. Technical mediation exerts itself on four dimensions: interference, composition, the folding of time and space, and crossing the boundary between signs and things. Nonhuman actors can be considered as members of moral and political associations. For example, noise is a nonhuman actor if the topic is applied to actor-network theory. Noise is a criterion by which humans regulate themselves morally, and it is subject to the limitations inherent in some legal rules because of its political effects. Once nonhumans become visible actors through their associations with morality and politics, these collectives become inherently regulative principles in social networks. Nonhuman actors can be considered as gatherings. As with nonhumans' impacts on morality and politics, they can gather actors from other times and spaces. Interacting with variable ontologies, times, spaces, and durability, nonhumans exert subtle influences within a network. == Criticism == Some critics have argued that research based on ANT perspectives remains entirely descriptive and fails to provide explanations for social processes. ANT—like comparable social scientific methods—requires judgement calls from the researcher as to which actors are important within a network and which are not. Critics argue that the importance of particular actors cannot be determined in the absence of "out-of-network" criteria, such as is a logically proven fact about deceptively coherent systems given Gödel's incompleteness theorems. Similarly, others argue that actor-networks risk degenerating into endless chains of association (six degrees of separation—we are all networked to one another). Other research perspectives such as social constructionism, social shaping of technology, social network theory, normalization process theory, and diffusion of innovations theory are held to be important alternatives to ANT approaches. === From STS itself and organizational studies === Key early criticism came from other members of the STS community, in particular the "Epistemological Chicken" debate between Collins and Yearley with responses from Latour and Callon as well as Woolgar.
In an article in Science as Practice and Culture, sociologist Harry Collins and his co-writer Steven Yearley argue that the ANT approach is a step backwards towards the positivist and realist positions held by early theory of science. Collins and Yearley accused ANTs approach of collapsing into an endless relativist regress. Whittle and organization studies professor André Spicer note that "ANT has also sought to move beyond deterministic models that trace organizational phenomena back to powerful individuals, social structures, hegemonic discourses or technological effects. Rather, ANT prefers to seek out complex patterns of causality rooted in connections between actors." They argue that ANT's ontological realism makes it "less well equipped for pursuing a critical account of organizations—that is, one which recognises the unfolding nature of reality, considers the limits of knowledge and seeks to challenge structures of domination." This implies that ANT does not account for pre-existing structures, such as power, but rather sees these structures as emerging from the actions of actors within the network and their ability to align in pursuit of their interests. Accordingly, ANT can be seen as an attempt to re-introduce Whig history into science and technology studies; like the myth of the heroic inventor, ANT can be seen as an attempt to explain successful innovators by saying only that they were successful. Likewise, for organization studies, Whittle and Spicer assert that ANT is, "ill-suited to the task of developing political alternatives to the imaginaries of market managerialism." === Human agency === Actor–network theory insists on the capacity of nonhumans to be actors or participants in networks and systems. Critics including figures such as Langdon Winner maintain that such properties as intentionality fundamentally distinguish humans from animals or from "things" (see Activity Theory). ANT scholars [who?] respond with the following arguments: They do not attribute intentionality and similar properties to nonhumans. Their conception of agency does not presuppose intentionality. They locate agency neither in human "subjects" nor in nonhuman "objects", but in heterogeneous associations of humans and nonhumans. ANT has been criticized as amoral. Wiebe Bijker has responded to this criticism by stating that the amorality of ANT is not a necessity. Moral and political positions are possible, but one must first describe the network before taking up such positions. This position has been further explored by Stuart Shapiro who contrasts ANT with the history of ecology, and argues that research decisions are moral rather than methodological, but this moral dimension has been sidelined. === Misnaming === In a workshop called "On Recalling ANT", Latour himself stated that there are four things wrong with actor-network theory: "actor", "network", "theory" and the hyphen. In a later book, however, Latour reversed himself, accepting the wide use of the term, "including the hyphen.": 9  He further remarked how he had been helpfully reminded that the ANT acronym "was perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler"—qualitative hallmarks of actor-network epistemology. 
== See also == Annemarie Mol Helen Verran Mapping controversies Science and technology studies (STS) Obligatory passage point (OPP) Social construction of technology (SCOT) Technology dynamics Theory of structuration (according to which neither agents nor social structure have primacy) Thing theory Outline of organizational theory == References == == Further reading == Carroll, N., Whelan, E., and Richardson, I. (2012). Service Science – an Actor Network Theory Approach. International Journal of Actor-Network Theory and Technological Innovation (IJANTTI), Volume 4, Number 3, pp. 52–70. Carroll, N. (2014). Actor-Network Theory: A Bureaucratic View of Public Service Innovation. Chapter 7, p. 115-144. In Ed Tatnall (ed). Technological Advancements and the Impact of Actor-Network Theory, IGI Global. Online version of the article "On Actor Network Theory: A Few Clarifications", in which Latour responds to criticisms. Archived 2021-04-26 at the Wayback Machine Introductory article "Dolwick, JS. 2009. The 'Social' and Beyond: Introducing Actor–Network Theory", which includes an analysis of other social theories ANThology. Ein einführendes Handbuch zur Akteur–Netzwerk-Theorie, von Andréa Belliger und David Krieger, transcript Verlag (German) Transhumanism as Actor-Network Theory "N00bz & the Actor-Network: Transhumanist Traductions" Archived 2010-10-08 at the Wayback Machine (Humanity+ Magazine) by Woody Evans. John Law (1992). "Notes on the Theory of the Actor Network: Ordering, Strategy, and Heterogeneity." John Law (1987). "Technology and Heterogeneous Engineering: The Case of Portuguese Expansion." In W.E. Bijker, T.P. Hughes, and T.J. Pinch (eds.), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (Cambridge, MA: MIT Press). Gianpaolo Baiocchi, Diana Graizbord, and Michael Rodríguez-Muñiz. 2013. "Actor-Network Theory and the ethnographic imagination: An exercise in translation". Qualitative Sociology Volume 36, Issue 4, pp 323–341. Seio Nakajima. 2013. "Re-imagining Civil Society in Contemporary Urban China: Actor-Network-Theory and Chinese Independent Film Consumption." Qualitative Sociology Volume 36, Issue 4, pp 383–402. [1] Isaac Marrero-Guillamón. 2013. "Actor-Network Theory, Gabriel Tarde and the Study of an Urban Social Movement: The Case of Can Ricart, Barcelona." Qualitative Sociology Volume 36, Issue 4, pp 403–421. [2] John Law and Vicky Singleton. 2013. "ANT and Politics: Working in and on the World". Qualitative Sociology Volume 36, Issue 4, pp 485–502. == External links == John Law's actor-network theory resource Bruno Latour's Page Normalization Process Theory toolkit Archived 2021-04-26 at the Wayback Machine Reassembling Ethnography: Actor-Network Theory and Sociology
Wikipedia/Actor-network_theory
A narrative network is a system that represents complex event sequences or characters’ interactions as depicted by a narrative text. Network science methodology offers an alternative way of analysing the patterns of relationships, composition and activities of events and actors studied in their own context. Network theory can contribute to the understanding of the structural properties of a text and the data contained in it. The meaning of the individual and the community in a narrative is conditional on their position in a system of social relationships reported by the author. Hence, a central problem when dealing with narratives is framing and organising the author's perspective of individual and collective connections to understand better the role of both the witness (viz. the persona that emerges from the narrative) and its testimony as reflected by the text. However, the category of narrative network is in its formative, initial phase and as a consequence it is hard to view as a stable and defined notion in linguistics, and beyond sociology. == Overview: Narrative as a structure of a story in time == To be an object of study and analysis, time must be transformed into a causal sequence, and the only way this can be done is by narration. As a form of description, narrating inevitably requires sequencing in time. The direction of time is not a trivial thing, but the backbone of the information contained in the narrative itself. One has to bear in mind the fundamental concepts of Genette's narratology, mainly the concept of ‘order.’ This distinguishes three entities: story, narrative, and narration. The story generally corresponds to a series of events placed in its chronological order (the story time). When these events are rearranged by the author and represented in a form that has its own sequence and features, a narrative is produced. Even if the narrated events are not chronologically ordered, being reported in the narrative's time, they always refer to a position in the story time. The survey of any textual account ought to take into account its literary nature. Far from being a window that must be revealed to penetrate into a ‘historical truth,’ each historical document adds to the number of texts that must be interpreted if an approachable and intelligible picture of a given historical milieu is to be drawn. As pointed out by Peter Munz, "Narrative is the only literary device available which will reflect the past’s time structure." The pretension that conceives of history as the representation of the ‘actual’ should be put aside to acknowledge that one can only approach past structures by contrasting them with, or bonding them to, the imaginable world. In this way, and similar to Genette's conception of narrative order and time, a historical narrative implies not simply an account of events that happened in the transition from one point in time to another. Thence, historical narrative is a progressive ‘redescription’ of events and people that dismantles a structure encoded in one verbal mode in the beginning to justify the recoding of it in another mode at the end. Narratives are, thus, structures that contain complex systems that draw images of experience. == Background == To approach new ways of making sense of narrative, one must first distinguish two different systems that can be found in the narrative structure: the sequence of events and the sequence of the actors' interactions. The former is the order in time in which all the events take place (Genette's narrative time).
Although trivial, this identification is fundamental for the construction of the latter. The sequence of social interactions can be understood as the set of the characters’ relationships ordered in relation to their appearance following the sequence of events. Both constitute interdependent systems that express the flow of the narrative on two different levels. Defining what constitutes a relationship depends on the specific research questions formulated for the study of the narrative. The fact that two characters are mentioned as actors in a certain event can serve as a criterion for connecting two individuals (two actors are connected by the fact that they share one reported action). Criteria can, of course, be more detailed and precise. Depending on the specific phenomena of interest, one can frame the scope of the interactions to be identified throughout the narrative. For example, one might be interested in assessing the integration of an individual within a collective body. Indicators of social ties, as given by the text itself, would then define the interaction criteria. == Current studies == Authors such as Peter Bearman, Robert Faris, and James Moody have understood the sequence of events in a narrative as a complex event structure. By suggesting that the flow of the narrated events can be problematized as a complex structure, they focus on the similarities between the social structures and the narrative. Through these similarities they have defended the applicability of network methods for the analysis of historical data contained in texts. Roberto Franzosi and Bearman and Stovel have offered modelling techniques for narrative networks by focusing on the sequence of events. These authors have visualised the story time by connecting the events from the original narrative time. The constructed ‘narrative networks’ connect the events by causal relationships, viz. if action B led to action A, then A and B are linked. By working on 'narrative networks,’ these authors argue that it is possible to observe and measure new structural features of the narrative. They focus on autobiographical narratives of the rise and identity of fascism (the former) and Nazism (the latter). The substantive idea that they develop is that the observable narrative structure of life stories can provide insight into the process of identity formation among the witnesses of a delimited scope of time. Although these are remarkable models of applied quantitative narrative analysis and network analysis, their proposed narrative networks represent sequences of events rather than of characters. These research strategies may have to be diversified to study aspects such as political influence and other non-institutional features of organizations or groups reported by the author through narration. == References == === Citations ===
Wikipedia/Narrative_network
A human disease network is a network of human disorders and diseases with reference to their genetic origins or other features. More specifically, it is the map of human disease associations referring mostly to disease genes. For example, in a human disease network, two diseases are linked if they share at least one associated gene. A typical human disease network usually derives from bipartite networks which contain information on both diseases and genes. Additionally, some human disease networks use other features such as symptoms and proteins to associate diseases. == History == In 2007, Goh et al. constructed a disease-gene bipartite graph using information from the OMIM database and termed it the human disease network. In 2009, Barrenas et al. derived a complex disease-gene network using GWAS (genome-wide association studies). In the same year, Hidalgo et al. published a novel way of building human phenotypic disease networks in which diseases were connected according to their calculated distance. In 2011, Cusick et al. summarized studies on genotype-phenotype associations in a cellular context. In 2014, Zhou et al. built a symptom-based human disease network by mining a biomedical literature database. == Properties == A large-scale human disease network shows a scale-free property. The degree distribution follows a power law, suggesting that only a few diseases connect to a large number of diseases, whereas most diseases have few links to others. Such a network also shows a clustering tendency by disease class. In a symptom-based disease network, diseases are also clustered according to their categories. Moreover, diseases sharing the same symptom are more likely to share the same genes and protein interactions. == See also == Bioinformatics Genome Network theory Network medicine == References == == External links == https://web.archive.org/web/20080625034729/http://hudine.neu.edu/ http://www.barabasilab.com/pubs/CCNR-ALB_Publications/200705-14_PNAS-HumanDisease/200705-14_PNAS-HumanDisease-poster.pdf https://www.nytimes.com/2008/05/06/health/research/06dise.html
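The bipartite construction described above, in which two diseases become linked when they share at least one associated gene, can be sketched as follows. This is a minimal illustration using networkx with made-up disease and gene names, not curated associations from OMIM or any other database.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical disease-gene associations (illustrative names only).
associations = {
    "disease_1": {"GENE_A", "GENE_B"},
    "disease_2": {"GENE_B", "GENE_C"},
    "disease_3": {"GENE_D"},
}

# Build the disease-gene bipartite graph.
B = nx.Graph()
for disease, genes in associations.items():
    for gene in genes:
        B.add_edge(disease, gene)

# Project onto diseases: two diseases are linked if they share at least one
# gene; the edge weight counts the shared genes.
disease_net = bipartite.weighted_projected_graph(B, list(associations))
print(list(disease_net.edges(data=True)))
# [('disease_1', 'disease_2', {'weight': 1})]  -- they share GENE_B
```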
Wikipedia/Human_disease_network
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal. == History == According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication" which was published in the Bell System Technical Journal. The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission. Signal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s. == Definition of a signal == A signal is a function x ( t ) {\displaystyle x(t)} , where this function is either deterministic (then one speaks of a deterministic signal) or a path ( x t ) t ∈ T {\displaystyle (x_{t})_{t\in T}} , a realization of a stochastic process ( X t ) t ∈ T {\displaystyle (X_{t})_{t\in T}} . == Categories == === Analog === Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops. === Continuous time === Continuous-time signal processing is for signals that vary over a continuous domain (apart from isolated points of discontinuity). The methods of signal processing include time domain, frequency domain, and complex frequency domain. This approach mainly covers the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. For example, in the time domain, a continuous-time signal x ( t ) {\displaystyle x(t)} passing through a linear time-invariant filter/system denoted as h ( t ) {\displaystyle h(t)} , can be expressed at the output as y ( t ) = ∫ − ∞ ∞ h ( τ ) x ( t − τ ) d τ {\displaystyle y(t)=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )\,d\tau } In some contexts, h ( t ) {\displaystyle h(t)} is referred to as the impulse response of the system. The above convolution operation is conducted between the input and the system. === Discrete time === Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such are quantized in time, but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers.
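The convolution integral above can be approximated numerically by sampling both the input and the impulse response and summing. A minimal sketch (the particular sampling interval, input, and impulse response are arbitrary choices for illustration):

```python
import numpy as np

dt = 1e-3                        # sampling interval in seconds (assumed)
t = np.arange(0.0, 1.0, dt)

x = np.sin(2 * np.pi * 5 * t)    # example input signal, a 5 Hz sine
h = np.exp(-t / 0.05) / 0.05     # example impulse response, an RC-like decay

# Discrete approximation of y(t) = integral of h(tau) x(t - tau) dtau:
# the sum is scaled by dt so it approaches the integral as dt shrinks.
y = np.convolve(x, h)[: len(t)] * dt
print(y[:5])
```

As the sampling interval shrinks, the scaled sum converges to the integral; the discrete-time and digital methods described next work directly with such sampled sequences.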
This analog discrete-time technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration. === Digital === Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters. === Nonlinear === Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains. Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods. Polynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case. === Statistical === Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image. === Graph === Graph signal processing generalizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph. Graph signal processing covers several key problems, such as sampling techniques, recovery techniques and time-varying techniques. Graph signal processing has been applied with success in the fields of image processing, computer vision and sound anomaly detection. == Application fields == Audio signal processing – for electrical signals representing sound, such as speech or music Image processing – in digital cameras, computers and various imaging systems Video processing – for interpreting moving pictures Wireless communication – waveform generations, demodulation, filtering, equalization Control systems Array processing – for processing signals from arrays of sensors Process control – a variety of signals are used, including the industry standard 4-20 mA current loop Seismology Feature extraction, such as image understanding, semantic audio and speech recognition. Quality improvement, such as noise reduction, image enhancement, and echo cancellation. Source coding including audio compression, image compression, and video compression. Genomic signal processing In geophysics, signal processing is used to amplify the signal versus the noise within time-series measurements of geophysical data. Processing is conducted within the time domain or frequency domain, or both.
In communication systems, signal processing may occur at: OSI layer 1 in the seven-layer OSI model, the physical layer (modulation, equalization, multiplexing, etc.); OSI layer 2, the data link layer (forward error correction); OSI layer 6, the presentation layer (source coding, including analog-to-digital conversion and data compression). == Typical devices == Filters – for example analog (passive or active) or digital (FIR, IIR, frequency domain or stochastic filters, etc.) Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as digital signal, and possibly later rebuilding the original signal or an approximation thereof. Digital signal processors (DSPs) == Mathematical methods applied == Differential equations – for modeling system behavior, connecting input and output relations in linear time-invariant systems. For instance, a low-pass filter such as an RC circuit can be modeled as a differential equation in signal processing, which allows one to compute the continuous output signal as a function of the input or initial conditions. Recurrence relations Transform theory Time-frequency analysis – for processing non-stationary signals Linear canonical transformation Spectral estimation – for determining the spectral content (i.e., the distribution of power over frequency) of a set of time series data points Statistical signal processing – analyzing and extracting information from signals and noise based on their stochastic properties Linear time-invariant system theory, and transform theory Polynomial signal processing – analysis of systems which relate input and output using polynomials System identification and classification Calculus Coding theory Complex analysis Vector spaces and Linear algebra Functional analysis Probability and stochastic processes Detection theory Estimation theory Optimization Numerical methods Data mining – for statistical analysis of relations between large quantities of variables (in this context representing many physical signals), to extract previously unknown interesting patterns == See also == Algebraic signal processing Audio filter Bounded variation Digital image processing Dynamic range compression, companding, limiting, and noise gating Fourier transform Information theory Least-squares spectral analysis Non-local means Reverberation Sensitivity (electronics) Similarity (signal processing) == References == == Further reading == Byrne, Charles (2014). Signal Processing: A Mathematical Approach. Taylor & Francis. doi:10.1201/b17672. ISBN 9780429158711. P Stoica, R Moses (2005). Spectral Analysis of Signals (PDF). NJ: Prentice Hall. Papoulis, Athanasios (1991). Probability, Random Variables, and Stochastic Processes (third ed.). McGraw-Hill. ISBN 0-07-100870-5. Kainam Thomas Wong [1]: Statistical Signal Processing lecture notes at the University of Waterloo, Canada. Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 978-0-470-25388-5. Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice-Hall, NJ, 2000, ISBN 978-0-13-022464-4. == External links == Signal Processing for Communications – free online textbook by Paolo Prandoni and Martin Vetterli (2008) Scientists and Engineers Guide to Digital Signal Processing – free online textbook by Stephen Smith Julius O. Smith III: Spectral Audio Signal Processing – free online textbook Graph Signal Processing Website – free online website by Thierry Bouwmans (2025)
Wikipedia/signal_processing
Algebraic signal processing (ASP) is an emerging area of theoretical signal processing (SP). In the algebraic theory of signal processing, a set of filters is treated as an (abstract) algebra, a set of signals is treated as a module or vector space, and convolution is treated as an algebra representation. The advantage of algebraic signal processing is its generality and portability. == History == In the original formulation of algebraic signal processing by Puschel and Moura, the signals are collected in an A {\displaystyle {\mathcal {A}}} -module for some algebra A {\displaystyle {\mathcal {A}}} of filters, and filtering is given by the action of A {\displaystyle {\mathcal {A}}} on the A {\displaystyle {\mathcal {A}}} -module. == Definitions == Let K {\displaystyle K} be a field, for instance the complex numbers, and A {\displaystyle {\mathcal {A}}} be a K {\displaystyle K} -algebra (i.e. a vector space over K {\displaystyle K} with a binary operation ∗ : A ⊗ A → A {\displaystyle \ast :{\mathcal {A}}\otimes {\mathcal {A}}\to {\mathcal {A}}} that is linear in both arguments) treated as a set of filters. Suppose M {\displaystyle {\mathcal {M}}} is a vector space representing a set of signals. A representation of A {\displaystyle {\mathcal {A}}} consists of an algebra homomorphism ρ : A → E n d ( M ) {\displaystyle \rho :{\mathcal {A}}\to \mathrm {End} ({\mathcal {M}})} where E n d ( M ) {\displaystyle \mathrm {End} ({\mathcal {M}})} is the algebra of linear transformations T : M → M {\displaystyle T:{\mathcal {M}}\to {\mathcal {M}}} with composition (equivalent, in the finite-dimensional case, to matrix multiplication). For convenience, we write ρ a {\displaystyle \rho _{a}} for the endomorphism ρ ( a ) {\displaystyle \rho (a)} . To be an algebra homomorphism, ρ {\displaystyle \rho } must not only be a linear transformation, but also satisfy the property ρ a ∗ b = ρ b ∘ ρ a ∀ a , b ∈ A {\displaystyle \rho _{a\ast b}=\rho _{b}\circ \rho _{a}\quad \forall a,b\in {\mathcal {A}}} Given a signal x ∈ M {\displaystyle x\in {\mathcal {M}}} , convolution of the signal by a filter a ∈ A {\displaystyle a\in {\mathcal {A}}} yields a new signal ρ a ( x ) {\displaystyle \rho _{a}(x)} . Some additional terminology is needed from the representation theory of algebras. A subset G ⊆ A {\displaystyle {\mathcal {G}}\subseteq {\mathcal {A}}} is said to generate the algebra if every element of A {\displaystyle {\mathcal {A}}} can be represented as polynomials in the elements of G {\displaystyle {\mathcal {G}}} . The image of a generator g ∈ G {\displaystyle g\in {\mathcal {G}}} under ρ {\displaystyle \rho } is called a shift operator. In practically all examples, convolutions are formed as polynomials in E n d ( M ) {\displaystyle \mathrm {End} ({\mathcal {M}})} generated by shift operators. However, this is not necessarily the case for a representation of an arbitrary algebra. == Examples == === Discrete Signal Processing === In discrete signal processing (DSP), the signal space is the set of complex-valued functions M = L 2 ( Z ) {\displaystyle {\mathcal {M}}={\mathcal {L}}^{2}(\mathbb {Z} )} with bounded energy (i.e. square-summable sequences). This means that ∑ n = − ∞ ∞ | ( x ) n | 2 < ∞ {\displaystyle \sum _{n=-\infty }^{\infty }|(x)_{n}|^{2}<\infty } , where | ⋅ | {\displaystyle |\cdot |} is the modulus of a complex number. The shift operator is given by the linear endomorphism ( S x ) n = ( x ) n − 1 {\displaystyle (Sx)_{n}=(x)_{n-1}} .
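As a concrete illustration of the shift operator just defined, the sketch below restricts attention to finite-length signals with a circular boundary, so that S becomes an ordinary matrix (an assumption made only for the sketch; the treatment above uses two-sided square-summable sequences). It also applies a filter built as a polynomial in S, anticipating the filter algebra described next:

```python
import numpy as np

n = 8
# Shift operator S on length-n signals with circular boundary: (S x)[k] = x[k-1] (indices mod n)
S = np.roll(np.eye(n), 1, axis=0)

x = np.arange(n, dtype=float)   # an example signal
h = [0.5, 0.3, 0.2]             # example filter coefficients h_0, h_1, h_2 (arbitrary)

# rho_h = sum_k h_k S^k, a polynomial in the shift operator, applied to x
rho_h = sum(hk * np.linalg.matrix_power(S, k) for k, hk in enumerate(h))
y = rho_h @ x
print(y)
```

The same pattern, with S replaced by a graph shift operator such as the weighted adjacency matrix or the graph Laplacian, gives the graph convolution discussed below.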
The filter space is the algebra of polynomials with complex coefficients A = C [ z − 1 , z ] {\displaystyle {\mathcal {A}}=\mathbb {C} [z^{-1},z]} and convolution is given by ρ h = ∑ k = − ∞ ∞ h k S k {\displaystyle \rho _{h}=\sum _{k=-\infty }^{\infty }h_{k}S^{k}} where h ( z ) = ∑ k = − ∞ ∞ h k z k {\displaystyle h(z)=\sum _{k=-\infty }^{\infty }h_{k}z^{k}} is an element of the algebra. Filtering a signal by h {\displaystyle h} then yields ( y ) n = ∑ k = − ∞ ∞ h k x n − k {\displaystyle (y)_{n}=\sum _{k=-\infty }^{\infty }h_{k}x_{n-k}} because ( S k x ) n = ( x ) n − k {\displaystyle (S^{k}x)_{n}=(x)_{n-k}} . === Graph Signal Processing === A weighted graph is an undirected graph G = ( V , E ) {\displaystyle {\mathcal {G}}=({\mathcal {V}},{\mathcal {E}})} with a pseudometric on the node set V {\displaystyle {\mathcal {V}}} , written a i j {\displaystyle a_{ij}} . A graph signal is simply a real-valued function on the set of nodes of the graph. In graph neural networks, graph signals are sometimes called features. The signal space is the set of all graph signals M = R V {\displaystyle {\mathcal {M}}=\mathbb {R} ^{\mathcal {V}}} where V {\displaystyle {\mathcal {V}}} is a set of n = | V | {\displaystyle n=|{\mathcal {V}}|} nodes in G = ( V , E ) {\displaystyle {\mathcal {G}}=({\mathcal {V}},{\mathcal {E}})} . The filter algebra is the algebra of polynomials in one indeterminate A = R [ t ] {\displaystyle {\mathcal {A}}=\mathbb {R} [t]} . There are a few possible choices for a graph shift operator (GSO). The (un)normalized weighted adjacency matrix [ A ] i j = a i j {\displaystyle [A]_{ij}=a_{ij}} is a popular choice, as well as the (un)normalized graph Laplacian [ L ] i j = { ∑ j = 1 n a i j i = j − a i j i ≠ j {\displaystyle [L]_{ij}={\begin{cases}\sum _{j=1}^{n}a_{ij}&i=j\\-a_{ij}&i\neq j\end{cases}}} . The choice is dependent on performance and design considerations. If S {\displaystyle S} is the GSO, then a graph convolution is the linear transformation ρ h = ∑ k = 0 ∞ h k S k {\displaystyle \rho _{h}=\sum _{k=0}^{\infty }h_{k}S^{k}} for some h ( t ) = ∑ k = 0 ∞ h k t k {\displaystyle h(t)=\sum _{k=0}^{\infty }h_{k}t^{k}} , and convolution of a graph signal x : V → R {\displaystyle \mathbf {x} :{\mathcal {V}}\to \mathbb {R} } by a filter h ( t ) {\displaystyle h(t)} yields a new graph signal y = ( ∑ k = 0 ∞ h k S k ) ⋅ x {\displaystyle \mathbf {y} =\left(\sum _{k=0}^{\infty }h_{k}S^{k}\right)\cdot \mathbf {x} } . === Other Examples === Other mathematical objects have their own proposed signal-processing frameworks, known as algebraic signal models. These objects include quivers, graphons, semilattices, finite groups, Lie groups, and others. == Intertwining Maps == In the framework of representation theory, relationships between two representations of the same algebra are described with intertwining maps, which in the context of signal processing translate to transformations of signals that respect the algebra structure. Suppose ρ : A → E n d ( M ) {\displaystyle \rho :{\mathcal {A}}\to \mathrm {End} ({\mathcal {M}})} and ρ ′ : A → E n d ( M ′ ) {\displaystyle \rho ':{\mathcal {A}}\to \mathrm {End} ({\mathcal {M}}')} are two different representations of A {\displaystyle {\mathcal {A}}} .
An intertwining map is a linear transformation α : M → M ′ {\displaystyle \alpha :{\mathcal {M}}\to {\mathcal {M}}'} such that α ∘ ρ a = ρ a ′ ∘ α ∀ a ∈ A {\displaystyle \alpha \circ \rho _{a}=\rho '_{a}\circ \alpha \quad \forall a\in {\mathcal {A}}} . Intuitively, this means that filtering a signal by a {\displaystyle a} then transforming it with α {\displaystyle \alpha } is equivalent to first transforming a signal with α {\displaystyle \alpha } , then filtering by a {\displaystyle a} . The z transform is a prototypical example of an intertwining map. == Algebraic Neural Networks == Inspired by a recent perspective that popular graph neural network (GNN) architectures are in fact convolutional neural networks (CNNs), recent work has been focused on developing novel neural network architectures from the algebraic point of view. An algebraic neural network is a composition of algebraic convolutions, possibly with multiple features and feature aggregations, and nonlinearities. == References == == External links == Smart Project: Algebraic Theory of Signal Processing at the Department of Electrical and Computer Engineering at Carnegie Mellon University. Lecture 12: "Algebraic Neural Networks," University of Pennsylvania (ESE 514).
Wikipedia/Algebraic_signal_processing
A voltage-controlled oscillator (VCO) is an electronic oscillator whose oscillation frequency is controlled by a voltage input. The applied input voltage determines the instantaneous oscillation frequency. Consequently, a VCO can be used for frequency modulation (FM) or phase modulation (PM) by applying a modulating signal to the control input. A VCO is also an integral part of a phase-locked loop. VCOs are used in synthesizers to generate a waveform whose pitch can be adjusted by a voltage determined by a musical keyboard or other input. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear in frequency control over a wide range of input control voltages. == Types == VCOs can be generally categorized into two groups based on the type of waveform produced. Linear or harmonic oscillators generate a sinusoidal waveform. Harmonic oscillators in electronics usually consist of a resonator with an amplifier that replaces the resonator losses (to prevent the amplitude from decaying) and isolates the resonator from the output (so the load does not affect the resonator). Some examples of harmonic oscillators are LC oscillators and crystal oscillators. Relaxation oscillators can generate a sawtooth or triangular waveform. They are commonly used in integrated circuits (ICs). They can provide a wide range of operational frequencies with a minimal number of external components. == Frequency control == A voltage-controlled capacitor is one method of making an LC oscillator vary its frequency in response to a control voltage. Any reverse-biased semiconductor diode displays a measure of voltage-dependent capacitance and can be used to change the frequency of an oscillator by varying a control voltage applied to the diode. Special-purpose variable-capacitance varactor diodes are available with well-characterized wide-ranging values of capacitance. A varactor is used to change the capacitance (and hence the frequency) of an LC tank. A varactor can also change loading on a crystal resonator and pull its resonant frequency. The same effect occurs with bipolar transistors, as described by Donald E. Thomas at Bell Labs in 1954: with a tank circuit connected to the collector and the modulating audio signal applied between the emitter and the base, a single-transistor FM transmitter is created. Thomas worked with a point-contact transistor, but the effect also works in junction transistors; applications include wireless microphones such as that patented by Raymond A. Litke in 1964. For low-frequency VCOs, other methods of varying the frequency (such as altering the charging rate of a capacitor by means of a voltage-controlled current source) are used (see function generator). The frequency of a ring oscillator is controlled by varying either the supply voltage, the current available to each inverter stage, or the capacitive loading on each stage. === Phase-domain equations === VCOs are used in analog applications such as frequency modulation and frequency-shift keying. The functional relationship between the control voltage and the output frequency for a VCO (especially those used at radio frequency) may not be linear, but over small ranges, the relationship is approximately linear, and linear control theory can be used. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear over a wide range of input voltages. 
Modeling for VCOs is often not concerned with the amplitude or shape (sinewave, triangle wave, sawtooth) but rather its instantaneous phase. In effect, the focus is not on the time-domain signal A sin(ωt+θ0) but rather the argument of the sine function (the phase). Consequently, modeling is often done in the phase domain. The instantaneous frequency of a VCO is often modeled as a linear relationship with its instantaneous control voltage. The output phase of the oscillator is the integral of the instantaneous frequency. f ( t ) = f 0 + K 0 ⋅ v in ( t ) θ ( t ) = ∫ − ∞ t f ( τ ) d τ {\displaystyle {\begin{aligned}f(t)&=f_{0}+K_{0}\cdot \ v_{\text{in}}(t)\\\theta (t)&=\int _{-\infty }^{t}f(\tau )\,d\tau \\\end{aligned}}} f ( t ) {\displaystyle f(t)} is the instantaneous frequency of the oscillator at time t (not the waveform amplitude) f 0 {\displaystyle f_{0}} is the quiescent frequency of the oscillator (not the waveform amplitude) K 0 {\displaystyle K_{0}} is called the oscillator sensitivity, or gain. Its units are hertz per volt. f ( τ ) {\displaystyle f(\tau )} is the VCO's frequency θ ( t ) {\displaystyle \theta (t)} is the VCO's output phase v in ( t ) {\displaystyle v_{\text{in}}(t)} is the time-domain control input or tuning voltage of the VCO For analyzing a control system, the Laplace transforms of the above signals are useful. F ( s ) = K 0 ⋅ V in ( s ) Θ ( s ) = F ( s ) s {\displaystyle {\begin{aligned}F(s)&=K_{0}\cdot \ V_{\text{in}}(s)\\\Theta (s)&={F(s) \over s}\\\end{aligned}}} == Design and circuits == Tuning range, tuning gain and phase noise are the important characteristics of a VCO. Generally, low phase noise is preferred in a VCO. Tuning gain and noise present in the control signal affect the phase noise; high noise or high tuning gain imply more phase noise. Other important elements that determine the phase noise are sources of flicker noise (1/f noise) in the circuit, the output power level, and the loaded Q factor of the resonator. (see Leeson's equation). The low frequency flicker noise affects the phase noise because the flicker noise is heterodyned to the oscillator output frequency due to the non-linear transfer function of active devices. The effect of flicker noise can be reduced with negative feedback that linearizes the transfer function (for example, emitter degeneration). VCOs generally have lower Q factor compared to similar fixed-frequency oscillators, and so suffer more jitter. The jitter can be made low enough for many applications (such as driving an ASIC), in which case VCOs enjoy the advantages of having no off-chip components (expensive) or on-chip inductors (low yields on generic CMOS processes). === LC oscillators === Commonly used VCO circuits are the Clapp and Colpitts oscillators. The more widely used oscillator of the two is Colpitts and these oscillators are very similar in configuration. === Crystal oscillators === A voltage-controlled crystal oscillator (VCXO) is used for fine adjustment of the operating frequency. The frequency of a voltage-controlled crystal oscillator can be varied a few tens of parts per million (ppm) over a control voltage range of typically 0 to 3 volts, because the high Q factor of the crystals allows frequency control over only a small range of frequencies. A temperature-compensated VCXO (TCVCXO) incorporates components that partially correct the dependence on temperature of the resonant frequency of the crystal. 
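As a rough numerical illustration tying the phase-domain relations given earlier in this section to the VCXO pull range just described, the sketch below computes a tuning gain and integrates the instantaneous frequency to obtain phase (every numeric value is an assumption chosen for illustration, not taken from any datasheet):

```python
import numpy as np

f0 = 10e6          # assumed nominal VCXO frequency in Hz
pull_ppm = 50.0    # assumed pull of +/-50 ppm ...
v_range = 3.0      # ... over a 0 to 3 V control range

K0 = 2 * pull_ppm * 1e-6 * f0 / v_range   # oscillator gain in hertz per volt
print(f"K0 = {K0:.1f} Hz/V")              # ~333.3 Hz/V for these numbers

# Phase-domain model: f(t) = f0 + K0 * v_in(t), theta(t) = integral of f(t).
# Here the control voltage is referenced to mid-range (a modeling choice for this sketch).
dt = 1e-6
t = np.arange(0.0, 1e-3, dt)
v_in = 1.5 + 0.5 * np.sin(2 * np.pi * 1e3 * t)   # example control waveform
f_inst = f0 + K0 * (v_in - 1.5)                  # instantaneous frequency in Hz
theta = np.cumsum(f_inst) * dt                   # accumulated phase in cycles
```

Multiplying theta by 2π gives the phase in radians; the sketch deliberately ignores phase noise, temperature drift and the limited pull range of the crystal discussed above.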
A smaller range of voltage control then suffices to stabilize the oscillator frequency in applications where temperature varies, such as heat buildup inside a transmitter. Placing the oscillator in a crystal oven at a constant but higher-than-ambient temperature is another way to stabilize oscillator frequency. High stability crystal oscillator references often place the crystal in an oven and use a voltage input for fine control. The temperature is selected to be the turnover temperature: the temperature where small changes do not affect the resonance. The control voltage can be used to occasionally adjust the reference frequency to a NIST source. Sophisticated designs may also adjust the control voltage over time to compensate for crystal aging. === Clock generators === A clock generator is an oscillator that provides a timing signal to synchronize operations in digital circuits. VCXO clock generators are used in many areas such as digital TV, modems, transmitters and computers. Design parameters for a VCXO clock generator are tuning voltage range, center frequency, frequency tuning range and the timing jitter of the output signal. Jitter is a form of phase noise that must be minimised in applications such as radio receivers, transmitters and measuring equipment. When a wider selection of clock frequencies is needed the VCXO output can be passed through digital divider circuits to obtain lower frequencies or be fed to a phase-locked loop (PLL). ICs containing both a VCXO (for external crystal) and a PLL are available. A typical application is to provide clock frequencies in a range from 12 kHz to 96 kHz to an audio digital-to-analog converter. === Frequency synthesizers === A frequency synthesizer generates precise and adjustable frequencies based on a stable single-frequency clock. A digitally controlled oscillator based on a frequency synthesizer may serve as a digital alternative to analog voltage controlled oscillator circuits. == Applications == VCOs are used in function generators, phase-locked loops including frequency synthesizers used in communication equipment and the production of electronic music, to generate variable tones in synthesizers. Function generators are low-frequency oscillators which feature multiple waveforms, typically sine, square, and triangle waves. Monolithic function generators are voltage-controlled. Analog phase-locked loops typically contain VCOs. High-frequency VCOs are usually used in phase-locked loops for radio receivers. Phase noise is the most important specification in this application. Audio-frequency VCOs are used in analog music synthesizers. For these, sweep range, linearity, and distortion are often the most important specifications. Audio-frequency VCOs for use in musical contexts were largely superseded in the 1980s by their digital counterparts, digitally controlled oscillators (DCOs), due to their output stability in the face of temperature changes during operation. Since the 1990s, musical software has become the dominant sound-generating method. Voltage-to-frequency converters are voltage-controlled oscillators with a highly linear relation between applied voltage and frequency. They are used to convert a slow analog signal (such as from a temperature transducer) to a signal suitable for transmission over a long distance, since the frequency will not drift or be affected by noise. Oscillators in this application may have sine or square wave outputs. 
Where the oscillator drives equipment that may generate radio-frequency interference, adding a varying voltage to its control input, called dithering, can disperse the interference spectrum to make it less objectionable (see spread spectrum clock). == See also == Low-frequency oscillation (LFO) Modular synthesizer Numerically-controlled oscillator (NCO) Variable-frequency oscillator (VFO) Variable-gain amplifier Voltage-controlled filter (VCF) == References == == External links == "Design of V.C.O.'s". Ian Purdie's Amateur Radio Tutorial Pages. Archived from the original on 2019-01-04. Retrieved 2018-01-28. Designing VCOs and Buffers Using the UPA family of Dual Transistors
Wikipedia/Voltage-controlled_oscillator
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(C) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space. The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group. The basic properties of the transformations mentioned above, such as scaling, shift, coordinate multiplication are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates. == Definition == The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix ( a b c d ) , {\displaystyle {\bigl (}{\begin{smallmatrix}a&b\\c&d\end{smallmatrix}}{\bigr )},} with ad − bc = 1, the corresponding integral transform from a function x ( t ) {\displaystyle x(t)} to X ( u ) {\displaystyle X(u)} is defined as X ( a , b , c , d ) ( u ) = { 1 i b ⋅ e i π d b u 2 ∫ − ∞ ∞ e − i 2 π 1 b u t e i π a b t 2 x ( t ) d t , when b ≠ 0 , d ⋅ e i π c d u 2 x ( d ⋅ u ) , when b = 0. {\displaystyle X_{(a,b,c,d)}(u)={\begin{cases}{\sqrt {\frac {1}{ib}}}\cdot e^{i\pi {\frac {d}{b}}u^{2}}\int _{-\infty }^{\infty }e^{-i2\pi {\frac {1}{b}}ut}e^{i\pi {\frac {a}{b}}t^{2}}x(t)\,dt,&{\text{when }}b\neq 0,\\{\sqrt {d}}\cdot e^{i\pi cdu^{2}}x(d\cdot u),&{\text{when }}b=0.\end{cases}}} == Special cases == Many classical transforms are special cases of the linear canonical transform: === Scaling === Scaling, x ( u ) ↦ σ x ( σ u ) {\displaystyle x(u)\mapsto {\sqrt {\sigma }}x(\sigma u)} , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks): [ 1 / σ 0 0 σ ] {\displaystyle {\begin{bmatrix}1/\sigma &0\\0&\sigma \end{bmatrix}}} === Fourier transform === The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix [ a b c d ] = [ 0 1 − 1 0 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}.} === Fractional Fourier transform === The fractional Fourier transform corresponds to rotation by an arbitrary angle; they are the elliptic elements of SL2(R), represented by the matrices [ a b c d ] = [ cos ⁡ θ sin ⁡ θ − sin ⁡ θ cos ⁡ θ ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}.} The Fourier transform is the fractional Fourier transform when θ = 90 ∘ . {\displaystyle \theta =90^{\circ }.} The inverse Fourier transform corresponds to θ = − 90 ∘ . 
{\displaystyle \theta =-90^{\circ }.} === Fresnel transform === The Fresnel transform corresponds to shearing; these are a family of parabolic elements, represented by the matrices [ a b c d ] = [ 1 λ z 0 1 ] , {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z\\0&1\end{bmatrix}},} where z is distance, and λ is wavelength. === Laplace transform === The Laplace transform corresponds to rotation by 90° into the complex domain and can be represented by the matrix [ a b c d ] = [ 0 i i 0 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}0&i\\i&0\end{bmatrix}}.} === Fractional Laplace transform === The fractional Laplace transform corresponds to rotation by an arbitrary angle into the complex domain and can be represented by the matrix [ a b c d ] = [ i cos ⁡ θ i sin ⁡ θ i sin ⁡ θ − i cos ⁡ θ ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}i\cos \theta &i\sin \theta \\i\sin \theta &-i\cos \theta \end{bmatrix}}.} The Laplace transform is the fractional Laplace transform when θ = 90 ∘ . {\displaystyle \theta =90^{\circ }.} The inverse Laplace transform corresponds to θ = − 90 ∘ . {\displaystyle \theta =-90^{\circ }.} === Chirp multiplication === Chirp multiplication, x ( u ) ↦ e i π τ u 2 x ( u ) {\displaystyle x(u)\mapsto e^{i\pi \tau u^{2}}x(u)} , corresponds to b = 0 , c = τ {\displaystyle b=0,c=\tau } : [ a b c d ] = [ 1 0 τ 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\\tau &1\end{bmatrix}}.} == Composition == Composition of LCTs corresponds to multiplication of the corresponding matrices; this is also known as the additivity property of the Wigner distribution function (WDF). Occasionally the product of transforms can pick up a sign factor due to picking a different branch of the square root in the definition of the LCT. In the literature, this is called the metaplectic phase. If the LCT is denoted by O F ( a , b , c , d ) {\displaystyle O_{F}^{(a,b,c,d)}} , i.e. X ( a , b , c , d ) ( u ) = O F ( a , b , c , d ) [ x ( t ) ] , {\displaystyle X_{(a,b,c,d)}(u)=O_{F}^{(a,b,c,d)}[x(t)],} then O F ( a 2 , b 2 , c 2 , d 2 ) { O F ( a 1 , b 1 , c 1 , d 1 ) [ x ( t ) ] } = O F ( a 3 , b 3 , c 3 , d 3 ) [ x ( t ) ] , {\displaystyle O_{F}^{(a_{2},b_{2},c_{2},d_{2})}\left\{O_{F}^{(a_{1},b_{1},c_{1},d_{1})}[x(t)]\right\}=O_{F}^{(a_{3},b_{3},c_{3},d_{3})}[x(t)],} where [ a 3 b 3 c 3 d 3 ] = [ a 2 b 2 c 2 d 2 ] [ a 1 b 1 c 1 d 1 ] . {\displaystyle {\begin{bmatrix}a_{3}&b_{3}\\c_{3}&d_{3}\end{bmatrix}}={\begin{bmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{bmatrix}}{\begin{bmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{bmatrix}}.} If W X ( a , b , c , d ) ( u , v ) {\displaystyle W_{X(a,b,c,d)}(u,v)} is the Wigner distribution function of X ( a , b , c , d ) ( u ) {\displaystyle X_{(a,b,c,d)}(u)} , where X ( a , b , c , d ) ( u ) {\displaystyle X_{(a,b,c,d)}(u)} is the LCT of x ( t ) {\displaystyle x(t)} , then W X ( a , b , c , d ) ( u , v ) = W x ( d u − b v , − c u + a v ) , {\displaystyle W_{X(a,b,c,d)}(u,v)=W_{x}(du-bv,-cu+av),} W X ( a , b , c , d ) ( a u + b v , c u + d v ) = W x ( u , v ) . {\displaystyle W_{X(a,b,c,d)}(au+bv,cu+dv)=W_{x}(u,v).} The LCT is equal to the twisting operation for the WDF, and the Cohen's class distribution also has the twisting operation.
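The additivity property means that cascades of LCTs can be worked out with nothing more than 2×2 matrix algebra. A minimal sketch, composing a Fresnel-type shear with the 90° Fourier rotation (the shear parameter 0.7 is an arbitrary illustrative value):

```python
import numpy as np

fourier = np.array([[0.0, 1.0],
                    [-1.0, 0.0]])   # 90 degree rotation: the Fourier transform
shear = np.array([[1.0, 0.7],
                  [0.0, 1.0]])      # Fresnel-type shear with b = 0.7 (assumed)

# Applying the shear first and the Fourier rotation second corresponds to
# the matrix product "second @ first", matching M3 = M2 M1 above.
combined = fourier @ shear
print(combined)                     # [[ 0.   1. ] [-1.  -0.7]]
print(np.linalg.det(combined))      # ~1.0, so the composite is again a valid LCT matrix
```

Up to the metaplectic sign mentioned above, the composite matrix fully determines the composite transform, which is what makes the matrix bookkeeping in the optics examples below possible.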
We can freely use the LCT to transform the parallelogram whose center is at (0, 0) to another parallelogram which has the same area and the same center: From this picture we know that the point (−1, 2) transform to the point (0, 1), and the point (1, 2) transform to the point (4, 3). As the result, we can write down the equations { − a + 2 b = 0 , − c + 2 d = 1 , { a + 2 b = 4 , c + 2 d = 3. {\displaystyle {\begin{cases}-a+2b=0,\\-c+2d=1,\end{cases}}\qquad {\begin{cases}a+2b=4,\\c+2d=3.\end{cases}}} Solve these equations gives (a, b, c, d) = (2, 1, 1, 1). == In optics and quantum mechanics == Paraxial optical systems implemented entirely with thin lenses and propagation through free space and/or graded-index (GRIN) media, are quadratic-phase systems (QPS); these were known before Moshinsky and Quesne (1974) called attention to their significance in connection with canonical transformations in quantum mechanics. The effect of any arbitrary QPS on an input wavefield can be described using the linear canonical transform, a particular case of which was developed by Segal (1963) and Bargmann (1961) in order to formalize Fock's (1928) boson calculus. In quantum mechanics, linear canonical transformations can be identified with the linear transformations which mix the momentum operator with the position operator and leave invariant the canonical commutation relations. == Applications == Canonical transforms are used to analyze differential equations. These include diffusion, the Schrödinger free particle, the linear potential (free-fall), and the attractive and repulsive oscillator equations. It also includes a few others such as the Fokker–Planck equation. Although this class is far from universal, the ease with which solutions and properties are found makes canonical transforms an attractive tool for problems such as these. Wave propagation through air, a lens, and between satellite dishes are discussed here. All of the computations can be reduced to 2×2 matrix algebra. This is the spirit of LCT. === Electromagnetic wave propagation === Assuming the system looks like as depicted in the figure, the wave travels from the (xi, yi) plane to the (x, y) plane. The Fresnel transform is used to describe electromagnetic wave propagation in free space: U 0 ( x , y ) = − j λ e j k z z ∫ − ∞ ∞ ∫ − ∞ ∞ e j k 2 z [ ( x − x i ) 2 + ( y − y i ) 2 ] U i ( x i , y i ) d x i d y i , {\displaystyle U_{0}(x,y)=-{\frac {j}{\lambda }}{\frac {e^{jkz}}{z}}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{j{\frac {k}{2z}}\left[(x-x_{i})^{2}+(y-y_{i})^{2}\right]}U_{i}(x_{i},y_{i})\,dx_{i}\,dy_{i},} where ⁠ k = 2 π / λ {\displaystyle k=2\pi /\lambda } ⁠ is the wave number, λ is the wavelength, z is the distance of propagation, ⁠ j = − 1 {\displaystyle j={\sqrt {-1}}} ⁠ is the imaginary unit. This is equivalent to LCT (shearing), when [ a b c d ] = [ 1 λ z 0 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z\\0&1\end{bmatrix}}.} When the travel distance (z) is larger, the shearing effect is larger. === Spherical lens === With the lens as depicted in the figure, and the refractive index denoted as n, the result is U 0 ( x , y ) = e j k n Δ e − j k 2 f [ x 2 + y 2 ] U i ( x , y ) , {\displaystyle U_{0}(x,y)=e^{jkn\Delta }e^{-j{\frac {k}{2f}}[x^{2}+y^{2}]}U_{i}(x,y),} where f is the focal length, and Δ is the thickness of the lens. The distortion passing through the lens is similar to LCT, when [ a b c d ] = [ 1 0 − 1 λ f 1 ] . 
{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda f}}&1\end{bmatrix}}.} This is also a shearing effect: when the focal length is smaller, the shearing effect is larger. === Spherical mirror === The spherical mirror—e.g., a satellite dish—can be described as a LCT, with [ a b c d ] = [ 1 0 − 1 λ R 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda R}}&1\end{bmatrix}}.} This is very similar to lens, except focal length is replaced by the radius R of the dish. A spherical mirror with radius curvature of R is equivalent to a thin lens with the focal length f = −R/2 (by convention, R < 0 for concave mirror, R > 0 for convex mirror). Therefore, if the radius is smaller, the shearing effect is larger. === Joint free space and spherical lens === The relation between the input and output we can use LCT to represent [ a b c d ] = [ 1 λ z 2 0 1 ] [ 1 0 − 1 / λ f 1 ] [ 1 λ z 1 0 1 ] = [ 1 − z 2 / f λ ( z 1 + z 2 ) − λ z 1 z 2 / f − 1 / λ f 1 − z 1 / f ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z_{2}\\0&1\end{bmatrix}}{\begin{bmatrix}1&0\\-1/\lambda f&1\end{bmatrix}}{\begin{bmatrix}1&\lambda z_{1}\\0&1\end{bmatrix}}={\begin{bmatrix}1-z_{2}/f&\lambda (z_{1}+z_{2})-\lambda z_{1}z_{2}/f\\-1/\lambda f&1-z_{1}/f\end{bmatrix}}\,.} If ⁠ z 1 = z 2 = 2 f {\displaystyle z_{1}=z_{2}=2f} ⁠, it is reverse real image. If ⁠ z 1 = z 2 = f {\displaystyle z_{1}=z_{2}=f} ⁠, it is Fourier transform+scaling If ⁠ z 1 = z 2 {\displaystyle z_{1}=z_{2}} ⁠, it is fractional Fourier transform+scaling == Basic properties == In this part, we show the basic properties of LCT Given a two-dimensional column vector r = [ x y ] , {\displaystyle r={\begin{bmatrix}x\\y\end{bmatrix}},} we show some basic properties (result) for the specific input below: == Example == The system considered is depicted in the figure to the right: two dishes – one being the emitter and the other one the receiver – and a signal travelling between them over a distance D. First, for dish A (emitter), the LCT matrix looks like this: [ 1 0 − 1 λ R A 1 ] . {\displaystyle {\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{A}}}&1\end{bmatrix}}.} Then, for dish B (receiver), the LCT matrix similarly becomes: [ 1 0 − 1 λ R B 1 ] . {\displaystyle {\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{B}}}&1\end{bmatrix}}.} Last, for the propagation of the signal in air, the LCT matrix is: [ 1 λ D 0 1 ] . {\displaystyle {\begin{bmatrix}1&\lambda D\\0&1\end{bmatrix}}.} Putting all three components together, the LCT of the system is: [ a b c d ] = [ 1 0 − 1 λ R B 1 ] [ 1 λ D 0 1 ] [ 1 0 − 1 λ R A 1 ] = [ 1 − D R A − λ D 1 λ ( R A − 1 + R B − 1 − R A − 1 R B − 1 D ) 1 − D R B ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{B}}}&1\end{bmatrix}}{\begin{bmatrix}1&\lambda D\\0&1\end{bmatrix}}{\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{A}}}&1\end{bmatrix}}={\begin{bmatrix}1-{\frac {D}{R_{A}}}&-\lambda D\\{\frac {1}{\lambda }}(R_{A}^{-1}+R_{B}^{-1}-R_{A}^{-1}R_{B}^{-1}D)&1-{\frac {D}{R_{B}}}\end{bmatrix}}\,.} == See also == Segal–Shale–Weil distribution, a metaplectic group of operators related to the chirplet transform Other time–frequency transforms: Fractional Fourier transform Continuous Fourier transform Chirplet transform Applications: Focus recovery based on the linear canonical transform Ray transfer matrix analysis == Notes == == References == J.J. Healy, M.A. Kutay, H.M. Ozaktas and J.T. 
Sheridan, "Linear Canonical Transforms: Theory and Applications", Springer, New York 2016. J.J. Ding, "Time–frequency analysis and wavelet transform course note", the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007. K.B. Wolf, "Integral Transforms in Science and Engineering", Ch. 9&10, New York, Plenum Press, 1979. S.A. Collins, "Lens-system diffraction integral written in terms of matrix optics," J. Opt. Soc. Amer. 60, 1168–1177 (1970). M. Moshinsky and C. Quesne, "Linear canonical transformations and their unitary representations," J. Math. Phys. 12, 8, 1772–1783, (1971). B.M. Hennelly and J.T. Sheridan, "Fast Numerical Algorithm for the Linear Canonical Transform", J. Opt. Soc. Am. A 22, 5, 928–937 (2005). H.M. Ozaktas, A. Koç, I. Sari, and M.A. Kutay, "Efficient computation of quadratic-phase integrals in optics", Opt. Let. 31, 35–37, (2006). Bing-Zhao Li, Ran Tao, Yue Wang, "New sampling formulae related to the linear canonical transform", Signal Processing '87', 983–990, (2007). A. Koç, H.M. Ozaktas, C. Candan, and M.A. Kutay, "Digital computation of linear canonical transforms", IEEE Trans. Signal Process., vol. 56, no. 6, 2383–2394, (2008). Ran Tao, Bing-Zhao Li, Yue Wang, "On sampling of bandlimited signals associated with the linear canonical transform", IEEE Transactions on Signal Processing, vol. 56, no. 11, 5454–5464, (2008). D. Stoler, "Operator methods in Physical Optics", 26th Annual Technical Symposium. International Society for Optics and Photonics, 1982. Tian-Zhou Xu, Bing-Zhao Li, " Linear Canonical Transform and Its Applications ", Beijing, Science Press, 2013. Tatiana Alieva., Martin J. Bastiaans. (2016) The Linear Canonical Transformations: Definition and Properties. In: Healy J., Alper Kutay M., Ozaktas H., Sheridan J. (eds) Linear Canonical Transforms. Springer Series in Optical Sciences, vol 198. Springer, New York, NY
Wikipedia/Linear_canonical_transformation
In electronics, an analog-to-digital converter (ADC, A/D, or A-to-D) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an analog input voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities. There are several ADC architectures. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits (ICs). These typically take the form of metal–oxide–semiconductor (MOS) mixed-signal integrated circuit chips that integrate both analog and digital circuits. A digital-to-analog converter (DAC) performs the reverse function; it converts a digital signal into an analog signal. == Explanation == An ADC converts a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal. The conversion involves quantization of the input, so it necessarily introduces a small amount of quantization error. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input, and limiting the allowable bandwidth of the input signal. The performance of an ADC is primarily characterized by its bandwidth and signal-to-noise and distortion ratio (SNDR). The bandwidth of an ADC is characterized primarily by its sampling rate. The SNDR of an ADC is influenced by many factors, including the resolution, linearity and accuracy (how well the quantization levels match the true analog signal), aliasing and jitter. The SNDR of an ADC is often summarized in terms of its effective number of bits (ENOB), the number of bits of each measure it returns that are on average not noise. An ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required SNDR of the signal to be digitized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then per the Nyquist–Shannon sampling theorem, near-perfect reconstruction is possible. The presence of quantization error limits the SNDR of even an ideal ADC. However, if the SNDR of the ADC exceeds that of the input signal, then the effects of quantization error may be neglected, resulting in an essentially perfect digital representation of the bandlimited analog input signal. === Resolution === The resolution of the converter indicates the number of different, i.e. discrete, values it can produce over the allowed range of analog input values. Thus a particular resolution determines the magnitude of the quantization error and therefore determines the maximum possible signal-to-noise ratio for an ideal ADC without the use of oversampling. The input samples are usually stored electronically in binary form within the ADC, so the resolution is usually expressed as the audio bit depth. In consequence, the number of discrete values available is usually a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one in 256 different levels (28 = 256). The values can represent the ranges from 0 to 255 (i.e. as unsigned integers) or from −128 to 127 (i.e. as signed integer), depending on the application. Resolution can also be defined electrically, and expressed in volts. 
The change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of intervals: Q = E F S R 2 M , {\displaystyle Q={\dfrac {E_{\mathrm {FSR} }}{2^{M}}},} where M is the ADC's resolution in bits and EFSR is the full-scale voltage range (also called 'span'). EFSR is given by E F S R = V R e f H i − V R e f L o w , {\displaystyle E_{\mathrm {FSR} }=V_{\mathrm {RefHi} }-V_{\mathrm {RefLow} },\,} where VRefHi and VRefLow are the upper and lower extremes, respectively, of the voltages that can be coded. Normally, the number of voltage intervals is given by N = 2 M , {\displaystyle N=2^{M},\,} where M is the ADC's resolution in bits. That is, one voltage interval is assigned in between two consecutive code levels. Example: Coding scheme as in figure 1 Full scale measurement range = 0 to 1 volt ADC resolution is 3 bits: 23 = 8 quantization levels (codes) ADC voltage resolution, Q = 1 V / 8 = 0.125 V. In many cases, the useful resolution of a converter is limited by the signal-to-noise ratio (SNR) and other errors in the overall system expressed as an ENOB. ==== Quantization error ==== Quantization error is introduced by the quantization inherent in an ideal ADC. It is a rounding error between the analog input voltage to the ADC and the output digitized value. The error is nonlinear and signal-dependent. In an ideal ADC, where the quantization error is uniformly distributed between −1⁄2 LSB and +1⁄2 LSB, and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) is given by S Q N R = 20 log 10 ⁡ ( 2 Q ) ≈ 6.02 ⋅ Q d B {\displaystyle \mathrm {SQNR} =20\log _{10}(2^{Q})\approx 6.02\cdot Q\ \mathrm {dB} \,\!} where Q is the number of quantization bits. For example, for a 16-bit ADC, the quantization error is 96.3 dB below the maximum level. Quantization error is distributed from DC to the Nyquist frequency. Consequently, if part of the ADC's bandwidth is not used, as is the case with oversampling, some of the quantization error will occur out-of-band, effectively improving the SQNR for the bandwidth in use. In an oversampled system, noise shaping can be used to further increase SQNR by forcing more quantization error out of band. ==== Dither ==== In ADCs, performance can usually be improved using dither. This is a very small amount of random noise (e.g. white noise), which is added to the input before conversion. Its effect is to randomize the state of the LSB based on the signal. Rather than the signal simply getting cut off altogether at low levels, it extends the effective range of signals that the ADC can convert, at the expense of a slight increase in noise. Dither can only increase the resolution of a sampler. It cannot improve the linearity, and thus accuracy does not necessarily improve. Quantization distortion in an audio signal of very low level with respect to the bit depth of the ADC is correlated with the signal and sounds distorted and unpleasant. With dithering, the distortion is transformed into noise. The undistorted signal may be recovered accurately by averaging over time. Dithering is also used in integrating systems such as electricity meters. Since the values are added together, the dithering produces results that are more exact than the LSB of the analog-to-digital converter. 
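A short numerical sketch of the quantities above: quantize a full-scale sine wave with an ideal uniform M-bit quantizer, estimate the resulting SQNR, and apply a simple uniform dither before quantization (the bit depth, test frequency, and dither distribution are arbitrary choices for illustration):

```python
import numpy as np

M = 8                                   # assumed resolution in bits
Q = 2.0 / 2**M                          # LSB size for a full-scale range of [-1, 1]
rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
x = np.sin(2 * np.pi * 37 * t)          # full-scale test signal

x_q = np.clip(np.round(x / Q) * Q, -1.0, 1.0)   # ideal uniform quantizer

noise = x_q - x
sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(f"measured SQNR ~ {sqnr_db:.1f} dB, 6.02 dB/bit rule gives ~ {6.02 * M:.1f} dB")

# Dither: add a small random signal before quantization to decorrelate the error
dither = rng.uniform(-Q / 2, Q / 2, size=x.shape)
x_dithered = np.clip(np.round((x + dither) / Q) * Q, -1.0, 1.0)
```

For a full-scale sine the measured figure comes out slightly above the 6.02·M rule of thumb, which is expected since that rule is stated for a signal uniformly distributed across the quantization levels.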
Dither is often applied when quantizing photographic images to a fewer number of bits per pixel—the image becomes noisier but to the eye looks far more realistic than the quantized image, which otherwise becomes banded. This analogous process may help to visualize the effect of dither on an analog audio signal that is converted to digital. === Accuracy === An ADC has several sources of errors. Quantization error and (assuming the ADC is intended to be linear) non-linearity are intrinsic to any analog-to-digital conversion. These errors are measured in a unit called the least significant bit (LSB). In the above example of an eight-bit ADC, an error of one LSB is 1⁄256 of the full signal range, or about 0.4%. ==== Nonlinearity ==== All ADCs suffer from nonlinearity errors caused by their physical imperfections, causing their output to deviate from a linear function (or some other function, in the case of a deliberately nonlinear ADC) of their input. These errors can sometimes be mitigated by calibration, or prevented by testing. Important parameters for linearity are integral nonlinearity and differential nonlinearity. These nonlinearities introduce distortion that can reduce the signal-to-noise ratio performance of the ADC and thus reduce its effective resolution. === Jitter === When digitizing a sine wave x ( t ) = A sin ⁡ ( 2 π f 0 t ) {\displaystyle x(t)=A\sin {(2\pi f_{0}t)}} , the use of a non-ideal sampling clock will result in some uncertainty in when samples are recorded. Provided that the actual sampling time uncertainty due to clock jitter is Δ t {\displaystyle \Delta t} , the error caused by this phenomenon can be estimated as E a p ≤ | x ′ ( t ) Δ t | ≤ 2 A π f 0 Δ t {\displaystyle E_{ap}\leq |x'(t)\Delta t|\leq 2A\pi f_{0}\Delta t} . This will result in additional recorded noise that will reduce the effective number of bits (ENOB) below that predicted by quantization error alone. The error is zero for DC, small at low frequencies, but significant with signals of high amplitude and high frequency. The effect of jitter on performance can be compared to quantization error: Δ t < 1 2 q π f 0 {\displaystyle \Delta t<{\frac {1}{2^{q}\pi f_{0}}}} , where q is the number of ADC bits. Clock jitter is caused by phase noise. The resolution of ADCs with a digitization bandwidth between 1 MHz and 1 GHz is limited by jitter. For lower bandwidth conversions such as when sampling audio signals at 44.1 kHz, clock jitter has a less significant impact on performance. === Sampling rate === An analog signal is continuous in time and it is necessary to convert this to a flow of digital values. It is therefore required to define the rate at which new digital values are sampled from the analog signal. The rate of new values is called the sampling rate or sampling frequency of the converter. A continuously varying bandlimited signal can be sampled and then the original signal can be reproduced from the discrete-time values by a reconstruction filter. The Nyquist–Shannon sampling theorem implies that a faithful reproduction of the original signal is only possible if the sampling rate is higher than twice the highest frequency of the signal. Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time). 
An input circuit called a sample and hold performs this task—in most cases by using a capacitor to store the analog voltage at the input, and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally. ==== Aliasing ==== An ADC works by sampling the value of the input at discrete intervals in time. Provided that the input is sampled above the Nyquist rate, defined as twice the highest frequency of interest, then all frequencies in the signal can be reconstructed. If frequencies above half the sampling rate are sampled, they are incorrectly detected as lower frequencies, a process referred to as aliasing. Aliasing occurs because instantaneously sampling a function at two or fewer times per cycle results in missed cycles, and therefore the appearance of an incorrectly lower frequency. For example, a 2 kHz sine wave being sampled at 1.5 kHz would be reconstructed as a 500 Hz sine wave. To avoid aliasing, the input to an ADC must be low-pass filtered to remove frequencies above half the sampling rate. This filter is called an anti-aliasing filter, and is essential for a practical ADC system that is applied to analog signals with higher frequency content. In applications where protection against aliasing is essential, oversampling may be used to greatly reduce or even eliminate it. Although aliasing in most systems is unwanted, it can be exploited to provide simultaneous down-mixing of a band-limited high-frequency signal (see undersampling and frequency mixer). The alias is effectively the lower heterodyne of the signal frequency and sampling frequency. ==== Oversampling ==== For economy, signals are often sampled at the minimum rate required with the result that the quantization error introduced is white noise spread over the whole passband of the converter. Sampling a signal at a rate much higher than the Nyquist rate and then digitally filtering it to limit it to the signal bandwidth produces the following advantages: Oversampling can make it easier to realize analog anti-aliasing filters Improved audio bit depth Reduced noise, especially when noise shaping is employed in addition to oversampling. Oversampling is typically used in audio frequency ADCs where the required sampling rate (typically 44.1 or 48 kHz) is very low compared to the clock speed of typical transistor circuits (>1 MHz). In this case, the performance of the ADC can be greatly increased at little or no cost. Furthermore, as any aliased signals are also typically out of band, aliasing can often be eliminated using very low cost filters. === Relative speed and precision === The speed of an ADC varies by type. The Wilkinson ADC is limited by the clock rate which is processable by current digital circuits. For a successive-approximation ADC, the conversion time scales with the logarithm of the resolution, i.e. the number of bits. Flash ADCs are certainly the fastest type of the three; the conversion is basically performed in a single parallel step. There is a potential tradeoff between speed and precision. In flash ADCs, drifts and uncertainties associated with the comparator levels result in poor linearity. To a lesser extent, poor linearity can also be an issue for successive-approximation ADCs. Here, nonlinearity arises from accumulating errors from the subtraction processes. Wilkinson ADCs have the best linearity of the three.
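The 2 kHz / 1.5 kHz example above is easy to check numerically: a 2 kHz sine sampled at 1.5 kHz yields exactly the same sample values as a 500 Hz sine taken at the same instants (a minimal sketch):

```python
import numpy as np

fs = 1_500.0                    # sampling rate in Hz
n = np.arange(32)               # a few sample indices
t = n / fs

samples_2k = np.sin(2 * np.pi * 2_000 * t)    # 2 kHz tone, undersampled
samples_500 = np.sin(2 * np.pi * 500 * t)     # 500 Hz tone at the same instants

print(np.allclose(samples_2k, samples_500))   # True: the 2 kHz tone aliases to 500 Hz
```

Once the samples are identical, no amount of downstream processing can tell the two tones apart, which is why the anti-aliasing filter has to act before the sampler.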
=== Sliding scale principle === The sliding scale or randomizing method can be employed to greatly improve the linearity of any type of ADC, but especially flash and successive approximation types. For any ADC the mapping from input voltage to digital output value is not exactly a floor or ceiling function as it should be. Under normal conditions, a pulse of a particular amplitude is always converted to the same digital value. The problem is that the ranges of analog values for the digitized values are not all of the same widths, and the differential linearity decreases proportionally with the divergence from the average width. The sliding scale principle uses an averaging effect to overcome this phenomenon. A random but known analog voltage is added to the sampled input voltage. It is then converted to digital form, and the equivalent digital amount is subtracted, thus restoring it to its original value. The advantage is that the conversion has taken place at a random point. The statistical distribution of the final levels is decided by a weighted average over a region of the range of the ADC. This in turn desensitizes it to the width of any specific level. == Types == The following are several common ways of implementing an electronic ADC. === RC charge time === Resistor-capacitor (RC) circuits have a known voltage charging and discharging curve that can be used to solve for an unknown analog value. ==== Wilkinson ==== The Wilkinson ADC was designed by Denys Wilkinson in 1950. The Wilkinson ADC is based on the comparison of an input voltage with that produced by a charging capacitor. The capacitor is allowed to charge until a comparator determines it matches the input voltage. Then, the capacitor is discharged linearly by using a constant current source. The time required to discharge the capacitor is proportional to the amplitude of the input voltage. While the capacitor is discharging, pulses from a high-frequency oscillator clock are counted by a register. The number of clock pulses recorded in the register is also proportional to the input voltage. ==== Measuring analog resistance or capacitance ==== If the analog value to measure is represented by a resistance or capacitance, then by including that element in an RC circuit (with other resistances or capacitances fixed) and measuring the time to charge the capacitance from a known starting voltage to another known ending voltage through the resistance from a known voltage supply, the value of the unknown resistance or capacitance can be determined using the capacitor charging equation: V capacitor ( t ) = V supply ( 1 − e − t R C ) {\displaystyle V_{\text{capacitor}}(t)=V_{\text{supply}}\left(1-e^{-{\frac {t}{RC}}}\right)} and solving for the unknown resistance or capacitance using those starting and ending data points. This is the converse of the Wilkinson ADC: rather than measuring an unknown voltage with a known resistance and capacitance, it measures an unknown resistance or capacitance using a known voltage. For example, the positive (and/or negative) pulse width from a 555 Timer IC in monostable or astable mode represents the time it takes to charge (and/or discharge) its capacitor from 1⁄3 Vsupply to 2⁄3 Vsupply. By sending this pulse into a microcontroller with an accurate clock, the duration of the pulse can be measured and converted using the capacitor charging equation to produce the value of the unknown resistance or capacitance.
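As a minimal sketch of the arithmetic involved (not a description of any particular device; it assumes an ideal exponential charging curve, and the supply voltage, measured time, and known capacitance are made-up example values), the RC product can be recovered from a measured charge time by inverting the equation above:

```python
import math

def rc_from_charge_time(t, v_supply, v_start, v_end):
    """Solve for the RC product given the time to charge from v_start to v_end
    toward v_supply, using V(t) = V_supply + (V_start - V_supply) * exp(-t/RC)."""
    return t / math.log((v_supply - v_start) / (v_supply - v_end))

# 555-timer style measurement: charge from 1/3 Vsupply to 2/3 Vsupply.
v_sup = 5.0
t_measured = 6.93e-3                 # seconds, e.g. counted with a microcontroller timer
rc = rc_from_charge_time(t_measured, v_sup, v_sup / 3, 2 * v_sup / 3)
print(rc)                            # ~0.01 s, i.e. RC = t / ln(2) for this voltage span

C_known = 1e-6                       # farads; if the capacitance is the known element...
print(rc / C_known)                  # ...the unknown resistance comes out near 10 kΩ
```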
Larger resistances and capacitances will take a longer time to measure than smaller ones, and the accuracy is limited by the accuracy of the microcontroller clock and the amount of time available to measure the value, which potentially might even change during measurement or be affected by external parasitics. === Flash ADC === A flash ADC, also known as a parallel ADC, employs a bank of voltage comparators sampling the input signal in parallel, each with a different voltage threshold. The circuit consists of a resistive divider network, a set of voltage comparators and a priority encoder. Each node of the resistive divider provides a voltage threshold for one comparator. The comparator outputs are applied to a priority encoder, which generates a binary number proportional to the input voltage. Flash ADCs have a large die size and high power dissipation. They are used in a variety of applications, including video, wideband communications, and for digitizing other fast signals. The circuit has the advantage of high speed as the conversion takes place simultaneously rather than sequentially. Typical conversion time is 100 ns or less. Conversion time is limited only by the speed of the comparator and of the priority encoder. This type of ADC has the disadvantage that for each additional output bit, the number of comparators required almost doubles and the priority encoder becomes more complex. === Successive approximation === A successive-approximation ADC uses a comparator and a binary search to successively narrow a range that contains the input voltage. At each successive step, the converter compares the input voltage to the output of an internal digital-to-analog converter (DAC) which initially represents the midpoint of the allowed input voltage range. At each step in this process, the approximation is stored in a successive approximation register (SAR) and the output of the digital-to-analog converter is updated for a comparison over a narrower range. === Ramp-compare === A ramp-compare ADC produces a saw-tooth signal that ramps up or down then quickly returns to zero. When the ramp starts, a timer starts counting. When the ramp voltage matches the input, a comparator fires, and the timer's value is recorded. Timed ramp converters can be implemented economically; however, the ramp time may be sensitive to temperature because the circuit generating the ramp is often a simple analog integrator. A more accurate converter uses a clocked counter driving a DAC. A special advantage of the ramp-compare system is that converting a second signal just requires another comparator and another register to store the timer value. To reduce sensitivity to input changes during conversion, a sample and hold can charge a capacitor with the instantaneous input voltage and the converter can measure the time required to discharge it with a constant current. === Integrating === An integrating ADC (also dual-slope or multi-slope ADC) applies the unknown input voltage to the input of an integrator and allows the voltage to ramp for a fixed time period (the run-up period). Then a known reference voltage of opposite polarity is applied to the integrator and is allowed to ramp until the integrator output returns to zero (the run-down period). The input voltage is computed as a function of the reference voltage, the constant run-up time period, and the measured run-down time period. The run-down time measurement is usually made in units of the converter's clock, so longer integration times allow for higher resolutions.
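The run-up/run-down relationship can be written out directly. In this sketch (illustrative only; the reference voltage and the clock counts are invented example numbers), the unknown input follows from the reference voltage and the ratio of the two periods, with the integrator's RC product cancelling out:

```python
def dual_slope_input_voltage(v_ref, runup_counts, rundown_counts):
    """Dual-slope relation: Vin * T_up = Vref * T_down, so with both periods
    measured in clock counts, Vin = Vref * N_down / N_up (RC cancels out)."""
    return v_ref * rundown_counts / runup_counts

v_ref = 2.000          # volts, known reference of opposite polarity
n_up = 10000           # fixed run-up period, in clock counts
n_down = 6173          # measured run-down period, in clock counts
print(dual_slope_input_voltage(v_ref, n_up, n_down))   # 1.2346 V
```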
Likewise, the speed of the converter can be improved by sacrificing resolution. Converters of this type (or variations on the concept) are used in most digital voltmeters for their linearity and flexibility. ==== Charge balancing ADC ==== The principle of the charge balancing ADC is to first convert the input signal to a frequency using a voltage-to-frequency converter. This frequency is then measured by a counter and converted to an output code proportional to the analog input. The main advantage of these converters is that it is possible to transmit the frequency signal even in a noisy environment or in isolated form. However, the limitation of this circuit is that the output of the voltage-to-frequency converter depends upon an RC product whose value cannot be accurately maintained over temperature and time. ==== Dual-slope ADC ==== The analog part of the circuit consists of a high input impedance buffer, precision integrator and a voltage comparator. The converter first integrates the analog input signal for a fixed duration and then it integrates an internal reference voltage of opposite polarity until the integrator output is zero. The main disadvantage of this circuit is the long conversion time. These converters are particularly suitable for accurate measurement of slowly varying signals such as thermocouples and weighing scales. === Delta-encoded === A delta-encoded or counter-ramp ADC has an up-down counter that feeds a DAC. The input signal and the DAC both go to a comparator. The comparator controls the counter. The circuit uses negative feedback from the comparator to adjust the counter until the DAC's output matches the input signal, at which point the number is read from the counter. Delta converters have very wide ranges and high resolution, but the conversion time is dependent on the input signal behavior, though it will always have a guaranteed worst case. Delta converters are often very good choices to read real-world signals as most signals from physical systems do not change abruptly. Some converters combine the delta and successive approximation approaches; this works especially well when high frequency components of the input signal are known to be small in magnitude. === Pipelined === A pipelined ADC (also called subranging quantizer) uses two or more conversion steps. First, a coarse conversion is done. In a second step, the difference from the input signal is determined with a DAC. This difference is then converted more precisely, and the results are combined in the last step. This can be considered a refinement of the successive-approximation ADC wherein the feedback reference signal consists of the interim conversion of a whole range of bits (for example, four bits) rather than just the next-most-significant bit. By combining the merits of the successive approximation and flash ADCs this type is fast, has a high resolution, and can be implemented efficiently. === Delta-sigma === A delta-sigma ADC (also known as a sigma-delta ADC) is based on a negative feedback loop containing an analog filter and a low-resolution (often 1-bit) but high-sampling-rate ADC and DAC. The feedback loop continuously corrects accumulated quantization errors and performs noise shaping: quantization noise is reduced in the low frequencies of interest, but is increased in higher frequencies. Those higher frequencies may then be removed by a downsampling digital filter, which also converts the data stream from that high sampling rate with low bit depth to a lower rate with higher bit depth.
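As a rough behavioral sketch of that loop (not a circuit-level model; the constant input of 0.37 and the oversampling ratio of 256 are arbitrary illustration choices, and NumPy is assumed), a first-order modulator can be simulated with an accumulator, a 1-bit quantizer, and feedback; averaging the resulting bitstream recovers the input with increased effective resolution:

```python
import numpy as np

def delta_sigma_first_order(x):
    """First-order delta-sigma modulator: integrate the difference between the
    input and the fed-back 1-bit DAC level, then quantize to +/-1."""
    integrator, feedback = 0.0, 0.0
    bits = []
    for sample in x:
        integrator += sample - feedback              # loop filter: a plain accumulator
        feedback = 1.0 if integrator >= 0 else -1.0  # 1-bit ADC feeding a 1-bit DAC
        bits.append(feedback)
    return np.array(bits)

oversampling_ratio = 256
x = np.full(oversampling_ratio, 0.37)    # constant input, well inside the +/-1 range
bits = delta_sigma_first_order(x)
print(bits[:12])                         # the high-rate 1-bit stream
print(bits.mean())                       # crude decimation by averaging gives ~0.37
```

In a real converter the averaging would be done by a decimating low-pass filter rather than a plain mean, which is what removes the shaped quantization noise sitting at the high frequencies.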
=== Time-interleaved === A time-interleaved ADC uses M parallel ADCs where each ADC samples data every Mth cycle of the effective sample clock. The result is that the sample rate is increased M times compared to what each individual ADC can manage. In practice, the individual differences between the M ADCs degrade the overall performance, reducing the spurious-free dynamic range (SFDR). However, techniques exist to correct for these time-interleaving mismatch errors. === Intermediate FM stage === An ADC with an intermediate FM stage first uses a voltage-to-frequency converter to produce an oscillating signal with a frequency proportional to the voltage of the input signal, and then uses a frequency counter to convert that frequency into a digital count proportional to the desired signal voltage. Longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. The two parts of the ADC may be widely separated, with the frequency signal passed through an opto-isolator or transmitted wirelessly. Some such ADCs use sine wave or square wave frequency modulation; others use pulse-frequency modulation. Such ADCs were once the most popular way to show a digital display of the status of a remote analog sensor. === Time-stretch === A time-stretch analog-to-digital converter (TS-ADC) digitizes a very wide bandwidth analog signal that cannot be digitized by a conventional electronic ADC, by time-stretching the signal prior to digitization. It commonly uses a photonic preprocessor to time-stretch the signal, which effectively slows the signal down in time and compresses its bandwidth. As a result, an electronic ADC that would have been too slow to capture the original signal can now capture this slowed-down signal. For continuous capture of the signal, the front end also divides the signal into multiple segments in addition to time-stretching. Each segment is individually digitized by a separate electronic ADC. Finally, a digital signal processor rearranges the samples and removes any distortions added by the preprocessor to yield the binary data that is the digital representation of the original analog signal. === Measuring physical values other than voltage === Although the term ADC is usually associated with measurement of an analog voltage, some partially-electronic devices that convert some measurable physical analog quantity into a digital number can also be considered ADCs, for instance: Rotary encoders convert from an analog physical quantity that mechanically produces an amount of rotation into a stream of digital Gray code that a microcontroller can digitally interpret to derive the direction of rotation, angular position, and rotational speed. Capacitive sensing converts from the analog physical quantity of a capacitance. That capacitance could be a proxy for some other physical quantity, such as the distance some metal object is from a metal sensing plate, or the amount of water in a tank, or the permittivity of a dielectric material. Capacitive-to-digital (CDC) converters determine capacitance by applying a known excitation to a plate of a capacitor and measuring its charge. Digital calipers convert from the analog physical quantity of an amount of displacement between two sliding rulers. Inductive-to-digital converters measure a change of inductance by a conductive target moving in an inductor's AC magnetic field.
Time-to-digital converters recognize events and provide a digital representation of the time at which they occurred. Time-of-flight measurements, for instance, convert from some analog quantity that affects a propagation delay for an event. Sensors that do not directly produce a voltage may produce one indirectly, or their output may be converted into a digital value in other ways. Resistive output (e.g. from a potentiometer or a force-sensing resistor) can be made into a voltage by sending a known current through it, or can be made into an RC charging-time measurement, to produce a digital result. == Commercial == In many cases, the most expensive part of an integrated circuit is the pins, because they make the package larger, and each pin has to be connected to the integrated circuit's silicon. To save pins, it is common for ADCs to send their data one bit at a time over a serial interface to the computer, with each bit coming out when a clock signal changes state. This saves quite a few pins on the ADC package, and in many cases, does not make the overall design any more complex. Commercial ADCs often have several inputs that feed the same converter, usually through an analog multiplexer. Different models of ADC may include sample and hold circuits, instrumentation amplifiers or differential inputs, where the quantity measured is the difference between two inputs. == Applications == === Music recording === Analog-to-digital converters are integral to modern music reproduction technology and digital audio workstation-based sound recording. Music may be produced on computers using an analog recording and therefore analog-to-digital converters are needed to create the pulse-code modulation (PCM) data streams that go onto compact discs and digital music files. The current crop of analog-to-digital converters utilized in music can sample at rates up to 192 kilohertz. Many recording studios record in 24-bit 96 kHz pulse-code modulation (PCM) format and then downsample and dither the signal for Compact Disc Digital Audio production (44.1 kHz) or to 48 kHz for radio and television broadcast applications. === Digital signal processing === ADCs are required in digital signal processing systems that process, store, or transport virtually any analog signal in digital form. TV tuner cards, for example, use fast video analog-to-digital converters. Slow on-chip 8-, 10-, 12-, or 16-bit analog-to-digital converters are common in microcontrollers. Digital storage oscilloscopes need very fast analog-to-digital converters, which are also crucial for software-defined radio and its new applications. === Scientific instruments === Digital imaging systems commonly use analog-to-digital converters for digitizing pixels. Some radar systems use analog-to-digital converters to convert signal strength to digital values for subsequent signal processing. Many other in situ and remote sensing systems commonly use analogous technology. Many sensors in scientific instruments produce an analog signal: temperature, pressure, pH, light intensity, etc. All these signals can be amplified and fed to an ADC to produce a digital representation. === Displays === Flat-panel displays are inherently digital and need an ADC to process an analog signal such as composite or VGA. == Electrical symbol == == Testing == Testing an analog-to-digital converter requires an analog input source and hardware to send control signals and capture digital data output. Some ADCs also require an accurate source of reference signal.
The key parameters to test an ADC are: DC offset error, DC gain error, signal-to-noise ratio (SNR), total harmonic distortion (THD), integral nonlinearity (INL), differential nonlinearity (DNL), spurious-free dynamic range (SFDR), and power dissipation. == See also == Adaptive predictive coding, a type of ADC in which the value of the signal is predicted by a linear function Audio codec Beta encoder Integral linearity Modem == Notes == == References == == Further reading == Allen, Phillip E.; Holberg, Douglas R. (2002). CMOS Analog Circuit Design. ISBN 978-0-19-511644-1. Fraden, Jacob (2010). Handbook of Modern Sensors: Physics, Designs, and Applications. Springer. ISBN 978-1441964656. Kester, Walt, ed. (2005). The Data Conversion Handbook. Elsevier: Newnes. ISBN 978-0-7506-7841-4. Johns, David; Martin, Ken (1997). Analog Integrated Circuit Design. Wiley. ISBN 978-0-471-14448-9. Liu, Mingliang (2006). Demystifying Switched-Capacitor Circuits. Newnes. ISBN 978-0-7506-7907-7. Norsworthy, Steven R.; Schreier, Richard; Temes, Gabor C. (1997). Delta-Sigma Data Converters. IEEE Press. ISBN 978-0-7803-1045-2. Razavi, Behzad (1995). Principles of Data Conversion System Design. New York, NY: IEEE Press. ISBN 978-0-7803-1093-3. Ndjountche, Tertulien (May 24, 2011). CMOS Analog Integrated Circuits: High-Speed and Power-Efficient Design. Boca Raton, FL: CRC Press. ISBN 978-1-4398-5491-4. Staller, Len (February 24, 2005). "Understanding analog to digital converter specifications". Embedded Systems Design. Walden, R. H. (1999). "Analog-to-digital converter survey and analysis". IEEE Journal on Selected Areas in Communications. 17 (4): 539–550. CiteSeerX 10.1.1.352.1881. doi:10.1109/49.761034. == External links == An Introduction to Delta Sigma Converters – an overview of delta-sigma converter theory Digital Dynamic Analysis of A/D Conversion Systems through Evaluation Software based on FFT/DFT Analysis, RF Expo East, 1987 Which ADC Architecture Is Right for Your Application? – article by Walt Kester ADC and DAC Glossary at the Wayback Machine (archived 2009-11-24) – defines commonly used technical terms Introduction to ADC in AVR – analog-to-digital conversion with Atmel microcontrollers Signal processing and system aspects of time-interleaved ADCs MATLAB Simulink model of a simple ramp ADC "Principles of Data Acquisition and Conversion" (PDF). ti.com. Texas Instruments. April 2015 [January 1994]. Retrieved October 29, 2024.
Wikipedia/Analog-to-digital_conversion
A voltage-controlled filter (VCF) is an electronic filter whose operating characteristics (primarily cutoff frequency) can be set by an input control voltage. Voltage-controlled filters are widely used in synthesizers. A music synthesizer VCF allows its cutoff frequency, and sometimes its Q factor (resonance at the cutoff frequency), to be continuously varied. The filter outputs often include a lowpass response, and sometimes highpass, bandpass or notch responses. Some musical VCFs offer a variable slope which determines the rate of attenuation outside the bandpass, often at 6 dB/octave, 12 dB/octave, 18 dB/octave or 24 dB/octave (one-, two-, three- and four-pole filters, respectively). In modular analog synthesizers, VCFs receive signal input from signal sources, including oscillators and noise, or the output of other processors. By varying the cutoff frequency, the filter passes or attenuates partials of the input signal. In some popular electronic music styles, "filter sweeps" have become a common effect. These sweeps are created by varying the cutoff frequency of the VCF (sometimes very slowly). Controlling the cutoff by means of a transient voltage control, such as an envelope generator, especially with relatively fast attack settings, may simulate the attack transients of natural or acoustic instruments. Historically, musical VCFs have included variable feedback which creates a response peak (Q) at the cutoff frequency. This peak can be quite prominent, and when the filter's frequency is swept by a control, partials present in the input signal resonate. Some filters are designed to provide enough feedback to go into self-oscillation, and it can serve as a sine-wave source. ARP Instruments made a multifunction voltage-controlled filter module capable of stable operation at a Q over 100; it could be shock-excited to ring like a vibraphone bar. Q was voltage-controllable, in part by a panel-mounted control. Its internal circuit was a classic analog computer state variable "loop", which provided outputs in quadrature. A VCF is an example of an active non-linear filter. The characteristic musical sound of a particular VCF depends on both its linear (small-signal) frequency response and its non-linear response to larger amplitude inputs. == Synthesizer filter types == Transistor ladder filter Diode ladder filter Sallen–Key filter OTA filter == See also == Audio filter Electronic filter Electronic filter topology Non-linear filter Self oscillation Subtractive synthesis Voltage-controlled amplifier Voltage-controlled oscillator == References == == External links == Schematics and PCBs for building your own VCF
Wikipedia/Voltage-controlled_filter
A variable-gain (VGA) or voltage-controlled amplifier (VCA) is an electronic amplifier that varies its gain depending on a control voltage (often abbreviated CV). VCAs have many applications, including audio level compression, synthesizers and amplitude modulation. A crude example is a typical inverting op-amp configuration with a light-dependent resistor (LDR) in the feedback loop. The gain of the amplifier then depends on the light falling on the LDR, which can be provided by an LED (an optocoupler). The gain of the amplifier is then controllable by the current through the LED. This is similar to the circuits used in optical audio compressors. A voltage-controlled amplifier can be realised by first creating a voltage-controlled resistor (VCR), which is used to set the amplifier gain. The VCR is one of the numerous interesting circuit elements that can be produced by using a JFET (junction field-effect transistor) with simple biasing. VCRs manufactured in this way can be obtained as discrete devices, e.g. VCR2N. Another type of circuit uses operational transconductance amplifiers. In audio applications logarithmic gain control is used to emulate how the ear hears loudness. David E. Blackmer's dbx 202 VCA, based on the Blackmer gain cell, was among the first successful implementations of a logarithmic VCA. Analog multipliers are a type of VCA designed to have accurate linear characteristics, the two inputs are identical and often work in all four voltage quadrants, unlike most other VCAs. == In sound mixing consoles == Some mixing consoles come equipped with VCAs in each channel for console automation. The fader, which traditionally controls the audio signal directly, becomes a DC control voltage for the VCA. The maximum voltage available to a fader can be controlled by one or more master faders called VCA groups. The VCA master fader then controls the overall level of all of the channels assigned to it. Typically VCA groups are used to control various parts of the mix; vocals, guitars, drums or percussion. The VCA master fader allows a portion of a mix to be raised or lowered without affecting the blend of the instruments in that part of the mix. A benefit of VCA sub-group is that since it is directly affecting the gain level of each channel, changes to a VCA sub-group level affect not only the channel level but also all of the levels sent to any post-fader mixes. With traditional audio sub-groups, the sub-group master fader only affects the level going into the main mix and does not affect the level going into the post-fader mixes. Consider the case of an instrument feeding a sub-group and a post-fader mix. If you completely lower the sub-group master fader, you would no longer hear the instrument itself, but you would still hear it as part of the post-fader mix, perhaps a reverb or chorus effect. VCA mixers are known to last longer than non-VCA mixers. Because the VCA controls the audio level instead of the physical fader, decay of the fader mechanism over time does not cause a degradation in audio quality. VCAs were invented by David E. Blackmer, the founder of dbx, who used them to make dynamic range compressors. The first console using VCAs was the Allison Research computer-automated recording system designed by Paul C. Buff in 1973. Another early VCA capability on a sound mixer was the series of MCI JH500 studio recording desks introduced in 1975. The first VCA mixer for live sound was the PM3000 introduced by Yamaha in 1985. 
== Digital variable-gain amplifier == A digitally controlled amplifier (DCA) is a variable-gain amplifier that is digitally controlled. The digitally controlled amplifier uses a stepped approach giving the circuit graduated increments of gain selection. This can be done in several fashions, but certain elements remain in any design. At its most basic form, a toggle switch strapped across the feedback resistor can provide two discrete gain settings. While this is not a computer-controlled function, it describes the core function. With eight switches and eight resistors in the feedback loop, each switch can enable a particular resistor to control the amplifier's feedback. If each switch was converted to a relay, a microcontroller could be used to activate the relays to attain the desired amount of gain. Relays can be replaced with Field Effect Transistors of an appropriate type to reduce the mechanical nature of the design. Other devices such as the CD4053 bi-directional CMOS analog multiplexer integrated circuit and digital potentiometers (combined resistor string and MUXes) can serve well as the switching function. To minimize the number of switches and resistors, combinations of resistance values can be utilized by activating multiple switches. == See also == Automixer Mix automation == References == == External links == Examples of non-optical VCAs Some schematics for VCAs "Vacuum tube VCAs". Archived from the original on 2008-05-13. University of Toronto undergraduate lecture explaining how to implement a Voltage Controlled Amplifier using an operational amplifier and a photocell at archive.today (archived 2013-02-21) Allen & Heath's Guide to VCA Sound Desk Mixing at the Wayback Machine (archived 2008-12-03)
Wikipedia/Voltage-controlled_amplifier
A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window. == Definitions == The most common definition is as a piecewise function: tri ⁡ ( x ) = Λ ( x ) = def max ( 1 − | x | , 0 ) = { 1 − | x | , | x | < 1 ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} (x)=\Lambda (x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-|x|,0{\big )}\\&={\begin{cases}1-|x|,&|x|<1;\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}} Equivalently, it may be defined as the convolution of two identical unit rectangular functions: tri ⁡ ( x ) = rect ⁡ ( x ) ∗ rect ⁡ ( x ) = ∫ − ∞ ∞ rect ⁡ ( x − τ ) ⋅ rect ⁡ ( τ ) d τ . {\displaystyle {\begin{aligned}\operatorname {tri} (x)&=\operatorname {rect} (x)*\operatorname {rect} (x)\\&=\int _{-\infty }^{\infty }\operatorname {rect} (x-\tau )\cdot \operatorname {rect} (\tau )\,d\tau .\\\end{aligned}}} The triangular function can also be represented as the product of the rectangular and absolute value functions: tri ⁡ ( x ) = rect ⁡ ( x / 2 ) ( 1 − | x | ) . {\displaystyle \operatorname {tri} (x)=\operatorname {rect} (x/2){\big (}1-|x|{\big )}.} Note that some authors instead define the triangle function to have a base of width 1 instead of width 2: tri ⁡ ( 2 x ) = Λ ( 2 x ) = def max ( 1 − 2 | x | , 0 ) = { 1 − 2 | x | , | x | < 1 2 ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} (2x)=\Lambda (2x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-2|x|,0{\big )}\\&={\begin{cases}1-2|x|,&|x|<{\tfrac {1}{2}};\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}} In its most general form a triangular function is any linear B-spline: tri j ⁡ ( x ) = { ( x − x j − 1 ) / ( x j − x j − 1 ) , x j − 1 ≤ x < x j ; ( x j + 1 − x ) / ( x j + 1 − x j ) , x j ≤ x < x j + 1 ; 0 otherwise . {\displaystyle \operatorname {tri} _{j}(x)={\begin{cases}(x-x_{j-1})/(x_{j}-x_{j-1}),&x_{j-1}\leq x<x_{j};\\(x_{j+1}-x)/(x_{j+1}-x_{j}),&x_{j}\leq x<x_{j+1};\\0&{\text{otherwise}}.\end{cases}}} Whereas the definition at the top is a special case Λ ( x ) = tri j ⁡ ( x ) , {\displaystyle \Lambda (x)=\operatorname {tri} _{j}(x),} where x j − 1 = − 1 {\displaystyle x_{j-1}=-1} , x j = 0 {\displaystyle x_{j}=0} , and x j + 1 = 1 {\displaystyle x_{j+1}=1} . A linear B-spline is the same as a continuous piecewise linear function f ( x ) {\displaystyle f(x)} , and this general triangle function is useful to formally define f ( x ) {\displaystyle f(x)} as f ( x ) = ∑ j y j ⋅ tri j ⁡ ( x ) , {\displaystyle f(x)=\sum _{j}y_{j}\cdot \operatorname {tri} _{j}(x),} where x j < x j + 1 {\displaystyle x_{j}<x_{j+1}} for all integer j {\displaystyle j} . 
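A small numerical sketch of these definitions (assuming NumPy; the knots and values below are arbitrary examples, and the helper names are only illustrative):

```python
import numpy as np

def tri(x):
    """Unit triangular function max(1 - |x|, 0): height 1, base 2."""
    return np.maximum(1.0 - np.abs(np.asarray(x, dtype=float)), 0.0)

def tri_j(x, x_prev, x_j, x_next):
    """General linear-B-spline triangle with knots x_prev < x_j < x_next."""
    x = np.asarray(x, dtype=float)
    rising = (x - x_prev) / (x_j - x_prev)
    falling = (x_next - x) / (x_next - x_j)
    return np.where((x >= x_prev) & (x < x_j), rising,
                    np.where((x >= x_j) & (x < x_next), falling, 0.0))

xs = np.linspace(-1.5, 1.5, 7)
print(np.allclose(tri(xs), tri_j(xs, -1.0, 0.0, 1.0)))   # True: the special case above

# f(x) = sum_j y_j * tri_j(x) interpolates the points (x_j, y_j); the knot list is
# padded at both ends so every interior knot has two neighbours.
knots = np.array([-1.0, 0.0, 1.0, 2.5, 4.0, 5.0])
ys    = np.array([ 0.0, 0.5, 2.0, 1.0, 3.0, 0.0])
f = lambda x: sum(ys[j] * tri_j(x, knots[j - 1], knots[j], knots[j + 1])
                  for j in range(1, len(knots) - 1))
print(np.allclose(f(knots[1:-1]), ys[1:-1]))             # True: f hits every (x_j, y_j)
```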
The piecewise linear function passes through every point expressed as coordinates with ordered pair ( x j , y j ) {\displaystyle (x_{j},y_{j})} , that is, f ( x j ) = y j {\displaystyle f(x_{j})=y_{j}} . == Scaling == For any parameter a ≠ 0 {\displaystyle a\neq 0} : tri ⁡ ( t a ) = ( 1 a ) rect ⁡ ( t a ) ∗ ( 1 a ) rect ⁡ ( t a ) = ∫ − ∞ ∞ 1 | a | rect ⁡ ( τ a ) ⋅ rect ⁡ ( t − τ a ) d τ = { 1 − | t / a | , | t | < | a | ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} \left({\tfrac {t}{a}}\right)&=\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)*\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)=\int _{-\infty }^{\infty }{\tfrac {1}{|a|}}\operatorname {rect} \left({\tfrac {\tau }{a}}\right)\cdot \operatorname {rect} \left({\tfrac {t-\tau }{a}}\right)\,d\tau \\&={\begin{cases}1-|t/a|,&|t|<|a|;\\0&{\text{otherwise}}.\end{cases}}\end{aligned}}} == Fourier transform == The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function: F { tri ⁡ ( t ) } = F { rect ⁡ ( t ) ∗ rect ⁡ ( t ) } = F { rect ⁡ ( t ) } ⋅ F { rect ⁡ ( t ) } = F { rect ⁡ ( t ) } 2 = s i n c 2 ( f ) , {\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} (t)\}&={\mathcal {F}}\{\operatorname {rect} (t)*\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}\cdot {\mathcal {F}}\{\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}^{2}\\&=\mathrm {sinc} ^{2}(f),\end{aligned}}} where sinc ⁡ ( x ) = sin ⁡ ( π x ) / ( π x ) {\displaystyle \operatorname {sinc} (x)=\sin(\pi x)/(\pi x)} is the normalized sinc function. For the general form, we have: F { tri ⁡ ( t a ) } = F { 1 a rect ⁡ ( t a ) ∗ 1 a rect ⁡ ( t a ) } = 1 a F { rect ⁡ ( t a ) } ⋅ F { rect ⁡ ( t a ) } = 1 a F { rect ⁡ ( t a ) } 2 = 1 a a 2 s i n c 2 ( a ⋅ f ) = a s i n c 2 ( a ⋅ f ) . {\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} \left({\tfrac {t}{a}}\right)\}&={\mathcal {F}}\{{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)*{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\cdot {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}^{2}\\&={\tfrac {1}{a}}\ {a}^{2}\ \mathrm {sinc} ^{2}(a\cdot f)={a}\ \mathrm {sinc} ^{2}(a\cdot f).\end{aligned}}} == See also == Källén function, also known as triangle function Tent map Triangular distribution Triangle wave, a piecewise linear periodic function Trigonometric functions == References ==
Wikipedia/Triangle_function
In mathematics, the discrete Fourier transform over a ring generalizes the discrete Fourier transform (DFT), of a function whose values are commonly complex numbers, over an arbitrary ring. == Definition == Let R be any ring, let n ≥ 1 {\displaystyle n\geq 1} be an integer, and let α ∈ R {\displaystyle \alpha \in R} be a principal nth root of unity, defined by: α n = 1 {\displaystyle \alpha ^{n}=1} and ∑ j = 0 n − 1 α j k = 0 for 1 ≤ k < n {\displaystyle \sum _{j=0}^{n-1}\alpha ^{jk}=0{\text{ for }}1\leq k<n} (1) The discrete Fourier transform maps an n-tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} of elements of R to another n-tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} of elements of R according to the following formula: f k = ∑ j = 0 n − 1 v j α j k , k = 0 , … , n − 1. {\displaystyle f_{k}=\sum _{j=0}^{n-1}v_{j}\alpha ^{jk},\qquad k=0,\ldots ,n-1.} (2) By convention, the tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} is said to be in the time domain and the index j is called time. The tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} is said to be in the frequency domain and the index k is called frequency. The tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} is also called the spectrum of ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} . This terminology derives from the applications of Fourier transforms in signal processing. If R is an integral domain (which includes fields), it is sufficient to choose α {\displaystyle \alpha } as a primitive nth root of unity, which replaces the condition (1) by: α k ≠ 1 {\displaystyle \alpha ^{k}\neq 1} for 1 ≤ k < n {\displaystyle 1\leq k<n} Another simple condition applies in the case where n is a power of two: (1) may be replaced by α n / 2 = − 1 {\displaystyle \alpha ^{n/2}=-1} . == Inverse == The inverse of the discrete Fourier transform is given as: v j = 1 n ∑ k = 0 n − 1 f k α − j k , j = 0 , … , n − 1 , {\displaystyle v_{j}={\frac {1}{n}}\sum _{k=0}^{n-1}f_{k}\alpha ^{-jk},\qquad j=0,\ldots ,n-1,} (3) where 1 / n {\displaystyle 1/n} is the multiplicative inverse of n in R (if this inverse does not exist, the DFT cannot be inverted). == Matrix formulation == Since the discrete Fourier transform is a linear operator, it can be described by matrix multiplication. In matrix notation, the discrete Fourier transform is expressed as follows: [ f 0 f 1 ⋮ f n − 1 ] = [ 1 1 1 ⋯ 1 1 α α 2 ⋯ α n − 1 1 α 2 α 4 ⋯ α 2 ( n − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 α n − 1 α 2 ( n − 1 ) ⋯ α ( n − 1 ) ( n − 1 ) ] [ v 0 v 1 ⋮ v n − 1 ] . {\displaystyle {\begin{bmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{bmatrix}}={\begin{bmatrix}1&1&1&\cdots &1\\1&\alpha &\alpha ^{2}&\cdots &\alpha ^{n-1}\\1&\alpha ^{2}&\alpha ^{4}&\cdots &\alpha ^{2(n-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha ^{n-1}&\alpha ^{2(n-1)}&\cdots &\alpha ^{(n-1)(n-1)}\\\end{bmatrix}}{\begin{bmatrix}v_{0}\\v_{1}\\\vdots \\v_{n-1}\end{bmatrix}}.} The matrix for this transformation is called the DFT matrix. Similarly, the matrix notation for the inverse Fourier transform is [ v 0 v 1 ⋮ v n − 1 ] = 1 n [ 1 1 1 ⋯ 1 1 α − 1 α − 2 ⋯ α − ( n − 1 ) 1 α − 2 α − 4 ⋯ α − 2 ( n − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 α − ( n − 1 ) α − 2 ( n − 1 ) ⋯ α − ( n − 1 ) ( n − 1 ) ] [ f 0 f 1 ⋮ f n − 1 ] . {\displaystyle {\begin{bmatrix}v_{0}\\v_{1}\\\vdots \\v_{n-1}\end{bmatrix}}={\frac {1}{n}}{\begin{bmatrix}1&1&1&\cdots &1\\1&\alpha ^{-1}&\alpha ^{-2}&\cdots &\alpha ^{-(n-1)}\\1&\alpha ^{-2}&\alpha ^{-4}&\cdots &\alpha ^{-2(n-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha ^{-(n-1)}&\alpha ^{-2(n-1)}&\cdots &\alpha ^{-(n-1)(n-1)}\end{bmatrix}}{\begin{bmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{bmatrix}}.} == Polynomial formulation == Sometimes it is convenient to identify an n-tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} with a formal polynomial p v ( x ) = v 0 + v 1 x + v 2 x 2 + ⋯ + v n − 1 x n − 1 . 
{\displaystyle p_{v}(x)=v_{0}+v_{1}x+v_{2}x^{2}+\cdots +v_{n-1}x^{n-1}.\,} By writing out the summation in the definition of the discrete Fourier transform (2), we obtain: f k = v 0 + v 1 α k + v 2 α 2 k + ⋯ + v n − 1 α ( n − 1 ) k . {\displaystyle f_{k}=v_{0}+v_{1}\alpha ^{k}+v_{2}\alpha ^{2k}+\cdots +v_{n-1}\alpha ^{(n-1)k}.\,} This means that f k {\displaystyle f_{k}} is just the value of the polynomial p v ( x ) {\displaystyle p_{v}(x)} for x = α k {\displaystyle x=\alpha ^{k}} , i.e., The Fourier transform can therefore be seen to relate the coefficients and the values of a polynomial: the coefficients are in the time-domain, and the values are in the frequency domain. Here, of course, it is important that the polynomial is evaluated at the nth roots of unity, which are exactly the powers of α {\displaystyle \alpha } . Similarly, the definition of the inverse Fourier transform (3) can be written: With p f ( x ) = f 0 + f 1 x + f 2 x 2 + ⋯ + f n − 1 x n − 1 , {\displaystyle p_{f}(x)=f_{0}+f_{1}x+f_{2}x^{2}+\cdots +f_{n-1}x^{n-1},} this means that v j = 1 n p f ( α − j ) . {\displaystyle v_{j}={\frac {1}{n}}p_{f}(\alpha ^{-j}).} We can summarize this as follows: if the values of p v ( x ) {\displaystyle p_{v}(x)} are the coefficients of p f ( x ) {\displaystyle p_{f}(x)} , then the values of p f ( x ) {\displaystyle p_{f}(x)} are the coefficients of p v ( x ) {\displaystyle p_{v}(x)} , up to a scalar factor and reordering. == Special cases == === Complex numbers === If F = C {\displaystyle F={\mathbb {C} }} is the field of complex numbers, then the n {\displaystyle n} th roots of unity can be visualized as points on the unit circle of the complex plane. In this case, one usually takes α = e − 2 π i n , {\displaystyle \alpha =e^{\frac {-2\pi i}{n}},} which yields the usual formula for the complex discrete Fourier transform: f k = ∑ j = 0 n − 1 v j e − 2 π i n j k . {\displaystyle f_{k}=\sum _{j=0}^{n-1}v_{j}e^{{\frac {-2\pi i}{n}}jk}.} Over the complex numbers, it is often customary to normalize the formulas for the DFT and inverse DFT by using the scalar factor 1 n {\displaystyle {\frac {1}{\sqrt {n}}}} in both formulas, rather than 1 {\displaystyle 1} in the formula for the DFT and 1 n {\displaystyle {\frac {1}{n}}} in the formula for the inverse DFT. With this normalization, the DFT matrix is then unitary. Note that n {\displaystyle {\sqrt {n}}} does not make sense in an arbitrary field. === Finite fields === If F = G F ( q ) {\displaystyle F=\mathrm {GF} (q)} is a finite field, where q is a prime power, then the existence of a primitive nth root automatically implies that n divides q − 1 {\displaystyle q-1} , because the multiplicative order of each element must divide the size of the multiplicative group of F, which is q − 1 {\displaystyle q-1} . This in particular ensures that n = 1 + 1 + ⋯ + 1 ⏟ n t i m e s {\displaystyle n=\underbrace {1+1+\cdots +1} _{n\ {\rm {times}}}} is invertible, so that the notation 1 n {\displaystyle {\frac {1}{n}}} in (3) makes sense. An application of the discrete Fourier transform over G F ( q ) {\displaystyle \mathrm {GF} (q)} is the reduction of Reed–Solomon codes to BCH codes in coding theory. Such transform can be carried out efficiently with proper fast algorithms, for example, cyclotomic fast Fourier transform. ==== Polynomial formulation without nth root ==== Suppose F = G F ( p ) {\displaystyle F=\mathrm {GF} (p)} . If p ∤ n {\displaystyle p\nmid n} , it may be the case that n ∤ p − 1 {\displaystyle n\nmid p-1} . 
This means we cannot find an n t h {\displaystyle n^{th}} root of unity in F {\displaystyle F} . We may view the Fourier transform as an isomorphism F [ C n ] = F [ x ] / ( x n − 1 ) ≅ ⨁ i F [ x ] / ( P i ( x ) ) {\displaystyle \mathrm {F} [C_{n}]=\mathrm {F} [x]/(x^{n}-1)\cong \bigoplus _{i}\mathrm {F} [x]/(P_{i}(x))} for some polynomials P i ( x ) {\displaystyle P_{i}(x)} , in accordance with Maschke's theorem. The map is given by the Chinese remainder theorem, and the inverse is given by applying Bézout's identity for polynomials. x n − 1 = ∏ d | n Φ d ( x ) {\displaystyle x^{n}-1=\prod _{d|n}\Phi _{d}(x)} , a product of cyclotomic polynomials. Factoring Φ d ( x ) {\displaystyle \Phi _{d}(x)} in F [ x ] {\displaystyle F[x]} is equivalent to factoring the prime ideal ( p ) {\displaystyle (p)} in Z [ ζ ] = Z [ x ] / ( Φ d ( x ) ) {\displaystyle \mathrm {Z} [\zeta ]=\mathrm {Z} [x]/(\Phi _{d}(x))} . We obtain g {\displaystyle g} polynomials P 1 … P g {\displaystyle P_{1}\ldots P_{g}} of degree f {\displaystyle f} where f g = φ ( d ) {\displaystyle fg=\varphi (d)} and f {\displaystyle f} is the order of p mod d {\displaystyle p{\text{ mod }}d} . As above, we may extend the base field to G F ( q ) {\displaystyle \mathrm {GF} (q)} in order to find a primitive root, i.e. a splitting field for x n − 1 {\displaystyle x^{n}-1} . Now x n − 1 = ∏ k ( x − α k ) {\displaystyle x^{n}-1=\prod _{k}(x-\alpha ^{k})} , so an element ∑ j = 0 n − 1 v j x j ∈ F [ x ] / ( x n − 1 ) {\displaystyle \sum _{j=0}^{n-1}v_{j}x^{j}\in F[x]/(x^{n}-1)} maps to ∑ j = 0 n − 1 v j x j mod ( x − α k ) ≡ ∑ j = 0 n − 1 v j ( α k ) j {\displaystyle \sum _{j=0}^{n-1}v_{j}x^{j}\mod (x-\alpha ^{k})\equiv \sum _{j=0}^{n-1}v_{j}(\alpha ^{k})^{j}} for each k {\displaystyle k} . ==== When p divides n ==== When p | n {\displaystyle p|n} , we may still define an F p {\displaystyle F_{p}} -linear isomorphism as above. Note that ( x n − 1 ) = ( x m − 1 ) p s {\displaystyle (x^{n}-1)=(x^{m}-1)^{p^{s}}} where n = m p s {\displaystyle n=mp^{s}} and p ∤ m {\displaystyle p\nmid m} . We apply the above factorization to x m − 1 {\displaystyle x^{m}-1} , and now obtain the decomposition F [ x ] / ( x n − 1 ) ≅ ⨁ i F [ x ] / ( P i ( x ) p s ) {\displaystyle F[x]/(x^{n}-1)\cong \bigoplus _{i}F[x]/(P_{i}(x)^{p^{s}})} . The modules occurring are now indecomposable rather than irreducible. ==== Order of the DFT matrix ==== Suppose p ∤ n {\displaystyle p\nmid n} so we have an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } . Let A {\displaystyle A} be the above DFT matrix, a Vandermonde matrix with entries A i j = α i j {\displaystyle A_{ij}=\alpha ^{ij}} for 0 ≤ i , j < n {\displaystyle 0\leq i,j<n} . Recall that ∑ j = 0 n − 1 α ( k − l ) j = n δ k , l {\displaystyle \sum _{j=0}^{n-1}\alpha ^{(k-l)j}=n\delta _{k,l}} since if k = l {\displaystyle k=l} , then every entry is 1. If k ≠ l {\displaystyle k\neq l} , then we have a geometric series with common ratio α k − l {\displaystyle \alpha ^{k-l}} , so we obtain 1 − α n ( k − l ) 1 − α k − l {\displaystyle {\frac {1-\alpha ^{n(k-l)}}{1-\alpha ^{k-l}}}} . Since α n = 1 {\displaystyle \alpha ^{n}=1} the numerator is zero, but k − l ≠ 0 {\displaystyle k-l\neq 0} so the denominator is nonzero. First computing the square, ( A 2 ) i k = ∑ j = 0 n − 1 α j ( i + k ) = n δ i , − k {\displaystyle (A^{2})_{ik}=\sum _{j=0}^{n-1}\alpha ^{j(i+k)}=n\delta _{i,-k}} . 
Computing A 4 = ( A 2 ) 2 {\displaystyle A^{4}=(A^{2})^{2}} similarly and simplifying the deltas, we obtain ( A 4 ) i k = n 2 δ i , k {\displaystyle (A^{4})_{ik}=n^{2}\delta _{i,k}} . Thus, A 4 = n 2 I n {\displaystyle A^{4}=n^{2}I_{n}} and the order is 4 ⋅ ord ( n 2 ) {\displaystyle 4\cdot {\text{ord}}(n^{2})} . ==== Normalizing the DFT matrix ==== In order to align with the complex case and ensure the matrix is order 4 exactly, we can normalize the above DFT matrix A {\displaystyle A} with 1 n {\displaystyle {\frac {1}{\sqrt {n}}}} . Note that though n {\displaystyle {\sqrt {n}}} may not exist in the splitting field F q {\displaystyle F_{q}} of x n − 1 {\displaystyle x^{n}-1} , we may form a quadratic extension F q 2 ≅ F q [ x ] / ( x 2 − n ) {\displaystyle F_{q^{2}}\cong F_{q}[x]/(x^{2}-n)} in which the square root exists. We may then set U = 1 n A {\displaystyle U={\frac {1}{\sqrt {n}}}A} , and U 4 = I n {\displaystyle U^{4}=I_{n}} . ==== Unitarity ==== Suppose p ∤ n {\displaystyle p\nmid n} . One can ask whether the DFT matrix is unitary over a finite field. If the matrix entries are over F q {\displaystyle F_{q}} , then one must ensure q {\displaystyle q} is a perfect square or extend to F q 2 {\displaystyle F_{q^{2}}} in order to define the order two automorphism x ↦ x q {\displaystyle x\mapsto x^{q}} . Consider the above DFT matrix A i j = α i j {\displaystyle A_{ij}=\alpha ^{ij}} . Note that A {\displaystyle A} is symmetric. Conjugating and transposing, we obtain A i j ∗ = α q j i {\displaystyle A_{ij}^{*}=\alpha ^{qji}} . ( A A ∗ ) i k = ∑ j = 0 n − 1 α j ( i + q k ) = n δ i , − q k {\displaystyle (AA^{*})_{ik}=\sum _{j=0}^{n-1}\alpha ^{j(i+qk)}=n\delta _{i,-qk}} by a similar geometric series argument as above. We may remove the n {\displaystyle n} by normalizing so that U = 1 n A {\displaystyle U={\frac {1}{\sqrt {n}}}A} and ( U U ∗ ) i k = δ i , − q k {\displaystyle (UU^{*})_{ik}=\delta _{i,-qk}} . Thus U {\displaystyle U} is unitary iff q ≡ − 1 ( mod n ) {\displaystyle q\equiv -1\,({\text{mod}}\,n)} . Recall that since we have an n t h {\displaystyle n^{th}} root of unity, n | q 2 − 1 {\displaystyle n|q^{2}-1} . This means that q 2 − 1 ≡ ( q + 1 ) ( q − 1 ) ≡ 0 ( mod n ) {\displaystyle q^{2}-1\equiv (q+1)(q-1)\equiv 0\,({\text{mod}}\,n)} . Note if q {\displaystyle q} was not a perfect square to begin with, then n | q − 1 {\displaystyle n|q-1} and so q ≡ 1 ( mod n ) {\displaystyle q\equiv 1\,({\text{mod}}\,n)} . For example, when p = 3 , n = 5 {\displaystyle p=3,n=5} we need to extend to q 2 = 3 4 {\displaystyle q^{2}=3^{4}} to get a 5th root of unity. q = 9 ≡ − 1 ( mod 5 ) {\displaystyle q=9\equiv -1\,({\text{mod}}\,5)} . For a nonexample, when p = 3 , n = 8 {\displaystyle p=3,n=8} we extend to F 3 2 {\displaystyle F_{3^{2}}} to get an 8th root of unity. q 2 = 9 {\displaystyle q^{2}=9} , so q ≡ 3 ( mod 8 ) {\displaystyle q\equiv 3\,({\text{mod}}\,8)} , and in this case q + 1 ≢ 0 {\displaystyle q+1\not \equiv 0} and q − 1 ≢ 0 {\displaystyle q-1\not \equiv 0} . U U ∗ {\displaystyle UU^{*}} is a square root of the identity, so U {\displaystyle U} is not unitary. ==== Eigenvalues of the DFT matrix ==== When p ∤ n {\displaystyle p\nmid n} , we have an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } in the splitting field F q ≅ F p [ x ] / ( x n − 1 ) {\displaystyle F_{q}\cong F_{p}[x]/(x^{n}-1)} . Note that the characteristic polynomial of the above DFT matrix may not split over F q {\displaystyle F_{q}} . The DFT matrix is order 4. 
We may need to go to a further extension F q ′ {\displaystyle F_{q'}} , the splitting extension of the characteristic polynomial of the DFT matrix, which at least contains fourth roots of unity. If a {\displaystyle a} is a generator of the multiplicative group of F q ′ {\displaystyle F_{q'}} , then the eigenvalues are { ± 1 , ± a ( q ′ − 1 ) / 4 } {\displaystyle \{\pm 1,\pm a^{(q'-1)/4}\}} , in exact analogy with the complex case. They occur with some nonnegative multiplicity. === Number-theoretic transform === The number-theoretic transform (NTT) is obtained by specializing the discrete Fourier transform to F = Z / p {\displaystyle F={\mathbb {Z} }/p} , the integers modulo a prime p. This is a finite field, and primitive nth roots of unity exist whenever n divides p − 1 {\displaystyle p-1} , so we have p = ξ n + 1 {\displaystyle p=\xi n+1} for a positive integer ξ. Specifically, let ω {\displaystyle \omega } be a primitive ( p − 1 ) {\displaystyle (p-1)} th root of unity; then an nth root of unity α {\displaystyle \alpha } can be found by letting α = ω ξ {\displaystyle \alpha =\omega ^{\xi }} . For example, for p = 5 {\displaystyle p=5} , α = 2 {\displaystyle \alpha =2} : 2 1 = 2 ( mod 5 ) 2 2 = 4 ( mod 5 ) 2 3 = 3 ( mod 5 ) 2 4 = 1 ( mod 5 ) {\displaystyle {\begin{aligned}2^{1}&=2{\pmod {5}}\\2^{2}&=4{\pmod {5}}\\2^{3}&=3{\pmod {5}}\\2^{4}&=1{\pmod {5}}\end{aligned}}} when N = 4 {\displaystyle N=4} [ F ( 0 ) F ( 1 ) F ( 2 ) F ( 3 ) ] = [ 1 1 1 1 1 2 4 3 1 4 1 4 1 3 4 2 ] [ f ( 0 ) f ( 1 ) f ( 2 ) f ( 3 ) ] {\displaystyle {\begin{bmatrix}F(0)\\F(1)\\F(2)\\F(3)\end{bmatrix}}={\begin{bmatrix}1&1&1&1\\1&2&4&3\\1&4&1&4\\1&3&4&2\end{bmatrix}}{\begin{bmatrix}f(0)\\f(1)\\f(2)\\f(3)\end{bmatrix}}} (A short computational check of this example is given at the end of this section.) The number theoretic transform may be meaningful in the ring Z / m {\displaystyle \mathbb {Z} /m} , even when the modulus m is not prime, provided a principal root of order n exists. Special cases of the number theoretic transform such as the Fermat Number Transform (m = 2^k + 1), used by the Schönhage–Strassen algorithm, or Mersenne Number Transform (m = 2^k − 1) use a composite modulus. In general, if m = ∏ i p i e i {\displaystyle m=\prod _{i}p_{i}^{e_{i}}} , then one may find an n t h {\displaystyle n^{th}} root of unity mod m by finding primitive n t h {\displaystyle n^{th}} roots of unity g i {\displaystyle g_{i}} mod p i e i {\displaystyle p_{i}^{e_{i}}} , yielding a tuple g = ( g i ) i ∈ ∏ i ( Z / p i e i Z ) ∗ {\displaystyle g=\left(g_{i}\right)_{i}\in \prod _{i}\left(\mathbb {Z} /p_{i}^{e_{i}}\mathbb {Z} \right)^{\ast }} . The preimage of g {\displaystyle g} under the Chinese remainder theorem isomorphism is an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } such that α n / 2 = − 1 mod m {\displaystyle \alpha ^{n/2}=-1\mod m} . This ensures that the above summation conditions are satisfied. We must have that n | φ ( p i e i ) {\displaystyle n|\varphi (p_{i}^{e_{i}})} for each i {\displaystyle i} , where φ {\displaystyle \varphi } is Euler's totient function. === Discrete weighted transform === The discrete weighted transform (DWT) is a variation on the discrete Fourier transform over arbitrary rings involving weighting the input before transforming it by multiplying elementwise by a weight vector, then weighting the result by another vector. The Irrational base discrete weighted transform is a special case of this. 
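The following is a minimal direct (O(n²)) evaluation of the transform pair over Z/p, assuming Python 3.8+ for the modular inverse via pow; it reproduces the p = 5, α = 2, N = 4 example above with an arbitrary input vector:

```python
def ntt(v, alpha, p):
    """Number-theoretic transform over Z/p: f_k = sum_j v_j * alpha^(j*k) mod p."""
    n = len(v)
    return [sum(v[j] * pow(alpha, j * k, p) for j in range(n)) % p for k in range(n)]

def intt(f, alpha, p):
    """Inverse transform: v_j = n^(-1) * sum_k f_k * alpha^(-j*k) mod p."""
    n = len(f)
    n_inv = pow(n, -1, p)               # multiplicative inverse of n mod p
    alpha_inv = pow(alpha, -1, p)
    return [n_inv * sum(f[k] * pow(alpha_inv, j * k, p) for k in range(n)) % p
            for j in range(n)]

p, alpha = 5, 2                          # the worked example above: N = 4, alpha = 2 mod 5
v = [1, 4, 0, 3]                         # arbitrary sample input in Z/5
f = ntt(v, alpha, p)
print(f)                                 # [3, 3, 4, 4]: the spectrum mod 5
print(intt(f, alpha, p) == v)            # True: the transform round-trips exactly
```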
== Properties == Most of the important attributes of the complex DFT, including the inverse transform, the convolution theorem, and most fast Fourier transform (FFT) algorithms, depend only on the property that the kernel of the transform is a principal root of unity. These properties also hold, with identical proofs, over arbitrary rings. In the case of fields, this analogy can be formalized by the field with one element, considering any field with a primitive nth root of unity as an algebra over the extension field F 1 n . {\displaystyle \mathbf {F} _{1^{n}}.} In particular, the applicability of O ( n log ⁡ n ) {\displaystyle O(n\log n)} fast Fourier transform algorithms to compute the NTT, combined with the convolution theorem, means that the number-theoretic transform gives an efficient way to compute exact convolutions of integer sequences. While the complex DFT can perform the same task, it is susceptible to round-off error in finite-precision floating point arithmetic; the NTT has no round-off because it deals purely with fixed-size integers that can be exactly represented. == Fast algorithms == For the implementation of a "fast" algorithm (similar to how FFT computes the DFT), it is often desirable that the transform length is also highly composite, e.g., a power of two. However, there are specialized fast Fourier transform algorithms for finite fields, such as Wang and Zhu's algorithm, that are efficient regardless of the transform length factors. == See also == Discrete Fourier transform (complex) Fourier transform on finite groups Gauss sum Convolution Least-squares spectral analysis Multiplication algorithm == References == == External links == http://www.apfloat.org/ntt.html
Wikipedia/Discrete_Fourier_transform_(general)
In signal processing and statistics, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval. Typically, window functions are symmetric around the middle of the interval, approach a maximum in the middle, and taper away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions. The reasons for examining segments of a longer function include detection of transient events and time-averaging of frequency spectra. The duration of the segments is determined in each application by requirements like time and frequency resolution. But that method also changes the frequency content of the signal by an effect called spectral leakage. Window functions allow us to distribute the leakage spectrally in different ways, according to the needs of the particular application. There are many choices detailed in this article, but many of the differences are so subtle as to be insignificant in practice. In typical applications, the window functions used are non-negative, smooth, "bell-shaped" curves. Rectangle, triangle, and other functions can also be used. A more general definition of window functions does not require them to be identically zero outside an interval, as long as the product of the window multiplied by its argument is square integrable, and, more specifically, that the function goes sufficiently rapidly toward zero. == Applications == Window functions are used in spectral analysis/modification/resynthesis, the design of finite impulse response filters, merging multiscale and multidimensional datasets, as well as beamforming and antenna design. === Spectral analysis === The Fourier transform of the function cos(ωt) is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform (or a similar transform) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method. === Filter design === Windows are sometimes used in the design of digital filters, in particular to convert an "ideal" impulse response of infinite duration, such as a sinc function, to a finite impulse response (FIR) filter design. That is called the window method. === Statistics and curve fitting === Window functions are sometimes used in the field of statistical analysis to restrict the set of data being analyzed to a range near a given point, with a weighting factor that diminishes the effect of points farther away from the portion of the curve being fit. In the field of Bayesian analysis and curve fitting, this is often referred to as the kernel. 
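As a small sketch of this kind of windowed (kernel-weighted) fitting, assuming NumPy is available (the triangular weight, the noisy sine test data, and the helper name are illustrative choices, not a reference implementation), data points near the point of interest can be given full weight while more distant points are tapered toward zero:

```python
import numpy as np

def local_linear_fit(x, y, x0, half_width):
    """Fit a line to (x, y) near x0, weighting points by a triangular window so
    that data farther from x0 contribute less (zero beyond half_width)."""
    w = np.maximum(1.0 - np.abs(x - x0) / half_width, 0.0)   # triangular weights
    coeffs = np.polyfit(x, y, deg=1, w=np.sqrt(w))           # weighted least squares
    return np.polyval(coeffs, x0)                            # smoothed estimate at x0

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)            # noisy test curve
print(local_linear_fit(x, y, x0=3.0, half_width=1.0))        # close to sin(3.0) ≈ 0.141
```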
=== Rectangular window applications === ==== Analysis of transients ==== When analyzing a transient signal in modal analysis, such as an impulse, a shock response, a sine burst, a chirp burst, or noise burst, where the energy vs time distribution is extremely uneven, the rectangular window may be most appropriate. For instance, when most of the energy is located at the beginning of the recording, a non-rectangular window attenuates most of the energy, degrading the signal-to-noise ratio. ==== Harmonic analysis ==== One might wish to measure the harmonic content of a musical note from a particular instrument or the harmonic distortion of an amplifier at a given frequency. Referring again to Figure 2, we can observe that there is no leakage at a discrete set of harmonically-related frequencies sampled by the discrete Fourier transform (DFT). (The spectral nulls are actually zero-crossings, which cannot be shown on a logarithmic scale such as this.) This property is unique to the rectangular window, and it must be appropriately configured for the signal frequency, as described above. == Overlapping windows == When the length of a data set to be transformed is larger than necessary to provide the desired frequency resolution, a common practice is to subdivide it into smaller sets and window them individually. To mitigate the "loss" at the edges of the window, the individual sets may overlap in time. See Welch method of power spectral analysis and the modified discrete cosine transform. == Two-dimensional windows == Two-dimensional windows are commonly used in image processing to reduce unwanted high frequencies in the image Fourier transform. They can be constructed from one-dimensional windows in either of two forms. The separable form, W ( m , n ) = w ( m ) w ( n ) {\displaystyle W(m,n)=w(m)w(n)} is trivial to compute. The radial form, W ( m , n ) = w ( r ) {\displaystyle W(m,n)=w(r)} , which involves the radius r = ( m − M / 2 ) 2 + ( n − N / 2 ) 2 {\displaystyle r={\sqrt {(m-M/2)^{2}+(n-N/2)^{2}}}} , is isotropic, independent of the orientation of the coordinate axes. Only the Gaussian function is both separable and isotropic. The separable forms of all other window functions have corners that depend on the choice of the coordinate axes. The isotropy/anisotropy of a two-dimensional window function is shared by its two-dimensional Fourier transform. The difference between the separable and radial forms is akin to the result of diffraction from rectangular vs. circular apertures, which can be visualized in terms of the product of two sinc functions vs. an Airy function, respectively. == Examples of window functions == Conventions: w 0 ( x ) {\displaystyle w_{0}(x)} is a zero-phase function (symmetrical about x = 0 {\displaystyle x=0} ), continuous for x ∈ [ − N / 2 , N / 2 ] , {\displaystyle x\in [-N/2,N/2],} where N {\displaystyle N} is a positive integer (even or odd). The sequence { w [ n ] = w 0 ( n − N / 2 ) , 0 ≤ n ≤ N } {\displaystyle \{w[n]=w_{0}(n-N/2),\quad 0\leq n\leq N\}} is symmetric, of length N + 1. {\displaystyle N+1.} { w [ n ] , 0 ≤ n ≤ N − 1 } {\displaystyle \{w[n],\quad 0\leq n\leq N-1\}} is DFT-symmetric, of length N . {\displaystyle N.} The parameter B displayed on each spectral plot is the function's noise equivalent bandwidth metric, in units of DFT bins. See spectral leakage §§ Discrete-time signals and Some window metrics and Normalized frequency for understanding the use of "bins" for the x-axis in these plots. 
The sparse sampling of a discrete-time Fourier transform (DTFT) such as the DFTs in Fig 2 only reveals the leakage into the DFT bins from a sinusoid whose frequency is also an integer DFT bin. The unseen sidelobes reveal the leakage to expect from sinusoids at other frequencies. Therefore, when choosing a window function, it is usually important to sample the DTFT more densely (as we do throughout this section) and choose a window that suppresses the sidelobes to an acceptable level. === Rectangular window === The rectangular window (sometimes known as the boxcar or uniform or Dirichlet window or misleadingly as "no window" in some programs) is the simplest window, equivalent to replacing all but N consecutive values of a data sequence by zeros, making the waveform suddenly turn on and off: w [ n ] = 1. {\displaystyle w[n]=1.} Other windows are designed to moderate these sudden changes, to reduce scalloping loss and improve dynamic range (described in § Spectral analysis). The rectangular window is the 1st-order B-spline window as well as the 0th-power power-of-sine window. The rectangular window provides the minimum mean square error estimate of the Discrete-time Fourier transform, at the cost of other issues discussed. === B-spline windows === B-spline windows can be obtained as k-fold convolutions of the rectangular window. They include the rectangular window itself (k = 1), the § Triangular window (k = 2) and the § Parzen window (k = 4). Alternative definitions sample the appropriate normalized B-spline basis functions instead of convolving discrete-time windows. A kth-order B-spline basis function is a piece-wise polynomial function of degree k − 1 that is obtained by k-fold self-convolution of the rectangular function. ==== Triangular window ==== Triangular windows are given by w [ n ] = 1 − | n − N 2 L 2 | , 0 ≤ n ≤ N , {\displaystyle w[n]=1-\left|{\frac {n-{\frac {N}{2}}}{\frac {L}{2}}}\right|,\quad 0\leq n\leq N,} where L can be N, N + 1, or N + 2. The first one is also known as Bartlett window or Fejér window. All three definitions converge at large N. The triangular window is the 2nd-order B-spline window. The L = N form can be seen as the convolution of two N⁄2-width rectangular windows. The Fourier transform of the result is the squared values of the transform of the half-width rectangular window. ==== Parzen window ==== Defining L ≜ N + 1, the Parzen window, also known as the de la Vallée Poussin window, is the 4th-order B-spline window given by w 0 ( n ) ≜ { 1 − 6 ( n L / 2 ) 2 ( 1 − | n | L / 2 ) , 0 ≤ | n | ≤ L 4 2 ( 1 − | n | L / 2 ) 3 L 4 < | n | ≤ L 2 } {\displaystyle w_{0}(n)\triangleq \left\{{\begin{array}{ll}1-6\left({\frac {n}{L/2}}\right)^{2}\left(1-{\frac {|n|}{L/2}}\right),&0\leq |n|\leq {\frac {L}{4}}\\2\left(1-{\frac {|n|}{L/2}}\right)^{3}&{\frac {L}{4}}<|n|\leq {\frac {L}{2}}\\\end{array}}\right\}} w [ n ] = w 0 ( n − N 2 ) , 0 ≤ n ≤ N {\displaystyle w[n]=\ w_{0}\left(n-{\tfrac {N}{2}}\right),\ 0\leq n\leq N} === Other polynomial windows === ==== Welch window ==== The Welch window consists of a single parabolic section: w [ n ] = 1 − ( n − N 2 N 2 ) 2 , 0 ≤ n ≤ N . {\displaystyle w[n]=1-\left({\frac {n-{\frac {N}{2}}}{\frac {N}{2}}}\right)^{2},\quad 0\leq n\leq N.} Alternatively, it can be written as two factors, as in a beta distribution: w [ n ] = ( 1 + n − N 2 N 2 ) ( 1 − n − N 2 N 2 ) , 0 ≤ n ≤ N . 
{\displaystyle w[n]=\left(1+{\frac {n-{\frac {N}{2}}}{\frac {N}{2}}}\right)\left(1-{\frac {n-{\frac {N}{2}}}{\frac {N}{2}}}\right),\quad 0\leq n\leq N.} The defining quadratic polynomial reaches a value of zero at the samples just outside the span of the window. The Welch window is fairly close to the sine window, and just as the power-of-sine windows form a useful parameterized family, so does the power-of-Welch family. Powers of the Welch or parabolic window are also Pearson type II distributions and symmetric beta distributions, and are purely algebraic functions (if the powers are rational), as opposed to most windows, which are transcendental functions. If different exponents are used on the two factors of the Welch polynomial, the result is a general beta distribution, which is useful for making asymmetric window functions. === Raised-cosine windows === Windows in the form of a cosine function offset by a constant, such as the popular Hamming and Hann windows, are sometimes called raised-cosine windows. The Hann window, in particular, resembles the raised-cosine distribution, which goes smoothly to zero at its ends. The raised-cosine windows have the form: w [ n ] = a 0 − ( 1 − a 0 ) ⋅ cos ⁡ ( 2 π n N ) , 0 ≤ n ≤ N , {\displaystyle w[n]=a_{0}-(1-a_{0})\cdot \cos \left({\tfrac {2\pi n}{N}}\right),\quad 0\leq n\leq N,} or alternatively as their zero-phase versions: w 0 ( n ) = w [ n + N 2 ] = a 0 − ( 1 − a 0 ) ⋅ cos ⁡ ( 2 π n N ) , − N 2 ≤ n ≤ N 2 . {\displaystyle {\begin{aligned}w_{0}(n)\ &=w\left[n+{\tfrac {N}{2}}\right]\\&=a_{0}-(1-a_{0})\cdot \cos \left({\tfrac {2\pi n}{N}}\right),\quad -{\tfrac {N}{2}}\leq n\leq {\tfrac {N}{2}}.\end{aligned}}} ==== Hann window ==== Setting a 0 = 0.5 {\displaystyle a_{0}=0.5} produces a Hann window: w [ n ] = 0.5 [ 1 − cos ⁡ ( 2 π n N ) ] = sin 2 ⁡ ( π n N ) , {\displaystyle w[n]=0.5\;\left[1-\cos \left({\frac {2\pi n}{N}}\right)\right]=\sin ^{2}\left({\frac {\pi n}{N}}\right),} named after Julius von Hann, and sometimes referred to as Hanning, a term derived from using "hann" as a verb for applying the window. It is also known as the raised cosine, because of its similarity to a raised-cosine distribution. This function is a member of both the cosine-sum and power-of-sine families. Unlike the Hamming window, the end points of the Hann window just touch zero. The resulting side-lobes roll off at about 18 dB per octave. ==== Hamming window ==== Setting a 0 {\displaystyle a_{0}} to approximately 0.54, or more precisely 25/46, produces the Hamming window, proposed by Richard W. Hamming. This choice places a zero crossing at frequency 5π/(N − 1), which cancels the first sidelobe of the Hann window, leaving the Hamming window's sidelobes at about one-fifth the height of the Hann window's. The Hamming window is often called the Hamming blip when used for pulse shaping. Approximation of the coefficients to two decimal places substantially lowers the level of sidelobes, to a nearly equiripple condition. In the equiripple sense, the optimal values for the coefficients are a0 = 0.53836 and a1 = 0.46164. === Cosine-sum windows === This family, which generalizes the raised-cosine windows, is also known as generalized cosine windows. In most cases, including the examples below, all coefficients ak ≥ 0. These windows have only 2K + 1 non-zero N-point DFT coefficients.
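Since every window in this family is determined by its coefficients ak, a small generator routine is a natural illustration. The sketch below is an assumption-laden example (not code from any particular library) that evaluates the cosine-sum formula with the alternating-sign convention used above and reproduces the Hann and Hamming windows, as well as the conventional Blackman window discussed in the next subsection.

```python
import numpy as np

def cosine_sum_window(N, coeffs):
    """Evaluate w[n] = a0 - a1*cos(2*pi*n/N) + a2*cos(4*pi*n/N) - ...
    for n = 0..N (length N+1, symmetric), following the alternating-sign
    convention of the cosine-sum formulas above.
    """
    n = np.arange(N + 1)
    w = np.zeros(N + 1)
    for k, a in enumerate(coeffs):
        w += ((-1) ** k) * a * np.cos(2 * np.pi * k * n / N)
    return w

N = 64
hann     = cosine_sum_window(N, [0.5, 0.5])
hamming  = cosine_sum_window(N, [25 / 46, 21 / 46])
blackman = cosine_sum_window(N, [0.42, 0.5, 0.08])   # conventional Blackman (alpha = 0.16)

print(hann[0], hann[N // 2])          # 0.0 at the ends, 1.0 at the centre
print(blackman[0], blackman[N // 2])  # ≈ 0 at the ends, 1.0 at the centre
```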
==== Blackman window ==== Blackman windows are defined as w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) , {\displaystyle w[n]=a_{0}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right),} a 0 = 1 − α 2 ; a 1 = 1 2 ; a 2 = α 2 . {\displaystyle a_{0}={\frac {1-\alpha }{2}};\quad a_{1}={\frac {1}{2}};\quad a_{2}={\frac {\alpha }{2}}.} By common convention, the unqualified term Blackman window refers to Blackman's "not very serious proposal" of α = 0.16 (a0 = 0.42, a1 = 0.5, a2 = 0.08), which closely approximates the exact Blackman, with a0 = 7938/18608 ≈ 0.42659, a1 = 9240/18608 ≈ 0.49656, and a2 = 1430/18608 ≈ 0.076849. These exact values place zeros at the third and fourth sidelobes, but result in a discontinuity at the edges and a 6 dB/oct fall-off. The truncated coefficients do not null the sidelobes as well, but have an improved 18 dB/oct fall-off. ==== Nuttall window, continuous first derivative ==== The continuous form of the Nuttall window, w 0 ( x ) , {\displaystyle w_{0}(x),} and its first derivative are continuous everywhere, like the Hann function. That is, the function goes to 0 at x = ±N/2, unlike the Blackman–Nuttall, Blackman–Harris, and Hamming windows. The Blackman window (α = 0.16) is also continuous with continuous derivative at the edge, but the "exact Blackman window" is not. w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) − a 3 cos ⁡ ( 6 π n N ) {\displaystyle w[n]=a_{0}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right)-a_{3}\cos \left({\frac {6\pi n}{N}}\right)} a 0 = 0.355768 ; a 1 = 0.487396 ; a 2 = 0.144232 ; a 3 = 0.012604. {\displaystyle a_{0}=0.355768;\quad a_{1}=0.487396;\quad a_{2}=0.144232;\quad a_{3}=0.012604.} ==== Blackman–Nuttall window ==== w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) − a 3 cos ⁡ ( 6 π n N ) {\displaystyle w[n]=a_{0}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right)-a_{3}\cos \left({\frac {6\pi n}{N}}\right)} a 0 = 0.3635819 ; a 1 = 0.4891775 ; a 2 = 0.1365995 ; a 3 = 0.0106411. {\displaystyle a_{0}=0.3635819;\quad a_{1}=0.4891775;\quad a_{2}=0.1365995;\quad a_{3}=0.0106411.} ==== Blackman–Harris window ==== A generalization of the Hamming family, produced by adding more shifted cosine functions, meant to minimize side-lobe levels w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) − a 3 cos ⁡ ( 6 π n N ) {\displaystyle w[n]=a_{0}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right)-a_{3}\cos \left({\frac {6\pi n}{N}}\right)} a 0 = 0.35875 ; a 1 = 0.48829 ; a 2 = 0.14128 ; a 3 = 0.01168. {\displaystyle a_{0}=0.35875;\quad a_{1}=0.48829;\quad a_{2}=0.14128;\quad a_{3}=0.01168.} ==== Flat top window ==== A flat top window is a partially negative-valued window that has minimal scalloping loss in the frequency domain. That property is desirable for the measurement of amplitudes of sinusoidal frequency components. However, its broad bandwidth results in high noise bandwidth and wider frequency selection, which depending on the application could be a drawback. Flat top windows can be designed using low-pass filter design methods, or they may be of the usual cosine-sum variety: w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) − a 3 cos ⁡ ( 6 π n N ) + a 4 cos ⁡ ( 8 π n N ) . 
{\displaystyle {\begin{aligned}w[n]=a_{0}&{}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right)\\&{}-a_{3}\cos \left({\frac {6\pi n}{N}}\right)+a_{4}\cos \left({\frac {8\pi n}{N}}\right).\end{aligned}}} The Matlab variant has these coefficients: a 0 = 0.21557895 ; a 1 = 0.41663158 ; a 2 = 0.277263158 ; a 3 = 0.083578947 ; a 4 = 0.006947368. {\displaystyle a_{0}=0.21557895;\quad a_{1}=0.41663158;\quad a_{2}=0.277263158;\quad a_{3}=0.083578947;\quad a_{4}=0.006947368.} Other variations are available, such as sidelobes that roll off at the cost of higher values near the main lobe. ==== Rife–Vincent windows ==== Rife–Vincent windows are customarily scaled for unity average value, instead of unity peak value. The coefficient values below, applied to Eq.1, reflect that custom. Class I, Order 1 (K = 1): a 0 = 1 ; a 1 = 1 {\displaystyle a_{0}=1;\quad a_{1}=1} Functionally equivalent to the Hann window and power of sine (α = 2). Class I, Order 2 (K = 2): a 0 = 1 ; a 1 = 4 3 ; a 2 = 1 3 {\displaystyle a_{0}=1;\quad a_{1}={\tfrac {4}{3}};\quad a_{2}={\tfrac {1}{3}}} Functionally equivalent to the power of sine (α = 4). Class I is defined by minimizing the high-order sidelobe amplitude. Coefficients for orders up to K=4 are tabulated. Class II minimizes the main-lobe width for a given maximum side-lobe. Class III is a compromise for which order K = 2 resembles the § Blackman window. === Sine window === w [ n ] = sin ⁡ ( π n N ) = cos ⁡ ( π n N − π 2 ) , 0 ≤ n ≤ N . {\displaystyle w[n]=\sin \left({\frac {\pi n}{N}}\right)=\cos \left({\frac {\pi n}{N}}-{\frac {\pi }{2}}\right),\quad 0\leq n\leq N.} The corresponding w 0 ( n ) {\displaystyle w_{0}(n)\,} function is a cosine without the π/2 phase offset. So the sine window is sometimes also called cosine window. As it represents half a cycle of a sinusoidal function, it is also known variably as half-sine window or half-cosine window. The autocorrelation of a sine window produces a function known as the Bohman window. ==== Power-of-sine/cosine windows ==== These window functions have the form: w [ n ] = sin α ⁡ ( π n N ) = cos α ⁡ ( π n N − π 2 ) , 0 ≤ n ≤ N . {\displaystyle w[n]=\sin ^{\alpha }\left({\frac {\pi n}{N}}\right)=\cos ^{\alpha }\left({\frac {\pi n}{N}}-{\frac {\pi }{2}}\right),\quad 0\leq n\leq N.} The rectangular window (α = 0), the sine window (α = 1), and the Hann window (α = 2) are members of this family. For even-integer values of α these functions can also be expressed in cosine-sum form: w [ n ] = a 0 − a 1 cos ⁡ ( 2 π n N ) + a 2 cos ⁡ ( 4 π n N ) − a 3 cos ⁡ ( 6 π n N ) + a 4 cos ⁡ ( 8 π n N ) − . . . {\displaystyle w[n]=a_{0}-a_{1}\cos \left({\frac {2\pi n}{N}}\right)+a_{2}\cos \left({\frac {4\pi n}{N}}\right)-a_{3}\cos \left({\frac {6\pi n}{N}}\right)+a_{4}\cos \left({\frac {8\pi n}{N}}\right)-...} α a 0 a 1 a 2 a 3 a 4 0 1 2 0.5 0.5 4 0.375 0.5 0.125 6 0.3125 0.46875 0.1875 0.03125 8 0.2734375 0.4375 0.21875 0.0625 7.8125 × 10 − 3 {\displaystyle {\begin{array}{l|llll}\hline \alpha &a_{0}&a_{1}&a_{2}&a_{3}&a_{4}\\\hline 0&1\\2&0.5&0.5\\4&0.375&0.5&0.125\\6&0.3125&0.46875&0.1875&0.03125\\8&0.2734375&0.4375&0.21875&0.0625&7.8125\times 10^{-3}\\\hline \end{array}}} === Adjustable windows === ==== Gaussian window ==== The Fourier transform of a Gaussian is also a Gaussian. Since the support of a Gaussian function extends to infinity, it must either be truncated at the ends of the window, or itself windowed with another zero-ended window. 
Since the log of a Gaussian produces a parabola, this can be used for nearly exact quadratic interpolation in frequency estimation. w [ n ] = exp ⁡ ( − 1 2 ( n − N / 2 σ N / 2 ) 2 ) , 0 ≤ n ≤ N . {\displaystyle w[n]=\exp \left(-{\frac {1}{2}}\left({\frac {n-N/2}{\sigma N/2}}\right)^{2}\right),\quad 0\leq n\leq N.} σ ≤ 0.5 {\displaystyle \sigma \leq \;0.5\,} The standard deviation of the Gaussian function is σ · N/2 sampling periods. ==== Confined Gaussian window ==== The confined Gaussian window yields the smallest possible root mean square frequency width σω for a given temporal width (N + 1) σt. These windows optimize the RMS time-frequency bandwidth products. They are computed as the minimum eigenvectors of a parameter-dependent matrix. The confined Gaussian window family contains the § Sine window and the § Gaussian window in the limiting cases of large and small σt, respectively. ==== Approximate confined Gaussian window ==== Defining L ≜ N + 1, a confined Gaussian window of temporal width L × σt is well approximated by: w [ n ] = G ( n ) − G ( − 1 2 ) [ G ( n + L ) + G ( n − L ) ] G ( − 1 2 + L ) + G ( − 1 2 − L ) {\displaystyle w[n]=G(n)-{\frac {G(-{\tfrac {1}{2}})[G(n+L)+G(n-L)]}{G(-{\tfrac {1}{2}}+L)+G(-{\tfrac {1}{2}}-L)}}} where G {\displaystyle G} is a Gaussian function: G ( x ) = exp ⁡ ( − ( x − N 2 2 L σ t ) 2 ) {\displaystyle G(x)=\exp \left(-\left({\cfrac {x-{\frac {N}{2}}}{2L\sigma _{t}}}\right)^{2}\right)} The standard deviation of the approximate window is asymptotically equal (i.e., for large values of N) to L × σt for σt < 0.14. ==== Generalized normal window ==== A more generalized version of the Gaussian window is the generalized normal window. Retaining the notation from the Gaussian window above, we can represent this window as w [ n , p ] = exp ⁡ ( − ( n − N / 2 σ N / 2 ) p ) {\displaystyle w[n,p]=\exp \left(-\left({\frac {n-N/2}{\sigma N/2}}\right)^{p}\right)} for any even p {\displaystyle p} . At p = 2 {\displaystyle p=2} this is a Gaussian window, and as p {\displaystyle p} approaches ∞ {\displaystyle \infty } it approaches a rectangular window. The Fourier transform of this window does not exist in a closed form for general p {\displaystyle p} . However, it retains the other benefits of the Gaussian window: smoothness and an adjustable bandwidth. Like the § Tukey window, this window naturally offers a "flat top" to control the amplitude attenuation of a time series, a control that the Gaussian window does not provide. In essence, it offers a good (controllable) compromise, in terms of spectral leakage, frequency resolution, and amplitude attenuation, between the Gaussian window and the rectangular window. A study of the time-frequency representation of this window is also available. ==== Tukey window ==== The Tukey window, also known as the cosine-tapered window, can be regarded as a cosine lobe of width Nα/2 (spanning Nα/2 + 1 observations) that is convolved with a rectangular window of width N(1 − α/2). w [ n ] = 1 2 [ 1 − cos ⁡ ( 2 π n α N ) ] , 0 ≤ n < α N 2 w [ n ] = 1 , α N 2 ≤ n ≤ N 2 w [ N − n ] = w [ n ] , 0 ≤ n ≤ N 2 } {\displaystyle \left.{\begin{array}{lll}w[n]={\frac {1}{2}}\left[1-\cos \left({\frac {2\pi n}{\alpha N}}\right)\right],\quad &0\leq n<{\frac {\alpha N}{2}}\\w[n]=1,\quad &{\frac {\alpha N}{2}}\leq n\leq {\frac {N}{2}}\\w[N-n]=w[n],\quad &0\leq n\leq {\frac {N}{2}}\end{array}}\right\}} At α = 0 it becomes rectangular, and at α = 1 it becomes a Hann window.
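The piecewise Tukey definition above translates directly into code. The following sketch (illustrative only; the length and taper parameter are arbitrary choices) builds the window of length N + 1 from the rising cosine taper and the symmetry w[N − n] = w[n].

```python
import numpy as np

def tukey(N, alpha):
    """Tukey (cosine-tapered) window of length N+1, built directly from the
    piecewise definition above; alpha=0 gives a rectangular window and
    alpha=1 a Hann window."""
    w = np.ones(N + 1)
    n = np.arange(N + 1)
    if alpha > 0:
        edge = alpha * N / 2
        taper = n < edge                      # rising cosine taper on the left
        w[taper] = 0.5 * (1 - np.cos(2 * np.pi * n[taper] / (alpha * N)))
        w = np.minimum(w, w[::-1])            # mirror: enforce w[N-n] = w[n]
    return w

w = tukey(100, 0.5)
print(w[0], w[50], w[100])   # 0.0 at the ends, 1.0 in the flat middle
```

Setting alpha = 0 reproduces the rectangular window and alpha = 1 the Hann window, matching the limiting cases stated above.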
==== Planck-taper window ==== The so-called "Planck-taper" window is a bump function that has been widely used in the theory of partitions of unity in manifolds. It is smooth (a C ∞ {\displaystyle C^{\infty }} function) everywhere, but is exactly zero outside of a compact region, exactly one over an interval within that region, and varies smoothly and monotonically between those limits. Its use as a window function in signal processing was first suggested in the context of gravitational-wave astronomy, inspired by the Planck distribution. It is defined as a piecewise function: w [ 0 ] = 0 , w [ n ] = ( 1 + exp ⁡ ( ε N n − ε N ε N − n ) ) − 1 , 1 ≤ n < ε N w [ n ] = 1 , ε N ≤ n ≤ N 2 w [ N − n ] = w [ n ] , 0 ≤ n ≤ N 2 } {\displaystyle \left.{\begin{array}{lll}w[0]=0,\\w[n]=\left(1+\exp \left({\frac {\varepsilon N}{n}}-{\frac {\varepsilon N}{\varepsilon N-n}}\right)\right)^{-1},\quad &1\leq n<\varepsilon N\\w[n]=1,\quad &\varepsilon N\leq n\leq {\frac {N}{2}}\\w[N-n]=w[n],\quad &0\leq n\leq {\frac {N}{2}}\end{array}}\right\}} The amount of tapering is controlled by the parameter ε, with smaller values giving sharper transitions. ==== DPSS or Slepian window ==== The DPSS (discrete prolate spheroidal sequence) or Slepian window maximizes the energy concentration in the main lobe, and is used in multitaper spectral analysis, which averages out noise in the spectrum and reduces information loss at the edges of the window. The main lobe ends at a frequency bin given by the parameter α. The Kaiser windows below are created by a simple approximation to the DPSS windows: ==== Kaiser window ==== The Kaiser, or Kaiser–Bessel, window is a simple approximation of the DPSS window using Bessel functions, discovered by James Kaiser. w [ n ] = I 0 ( π α 1 − ( 2 n N − 1 ) 2 ) I 0 ( π α ) , 0 ≤ n ≤ N {\displaystyle w[n]={\frac {I_{0}\left(\pi \alpha {\sqrt {1-\left({\frac {2n}{N}}-1\right)^{2}}}\right)}{I_{0}(\pi \alpha )}},\quad 0\leq n\leq N} : p. 73  w 0 ( n ) = I 0 ( π α 1 − ( 2 n N ) 2 ) I 0 ( π α ) , − N / 2 ≤ n ≤ N / 2 {\displaystyle w_{0}(n)={\frac {I_{0}\left(\pi \alpha {\sqrt {1-\left({\frac {2n}{N}}\right)^{2}}}\right)}{I_{0}(\pi \alpha )}},\quad -N/2\leq n\leq N/2} where I 0 {\displaystyle I_{0}} is the 0th-order modified Bessel function of the first kind. Variable parameter α {\displaystyle \alpha } determines the tradeoff between main lobe width and side lobe levels of the spectral leakage pattern. The main lobe width, in between the nulls, is given by 2 1 + α 2 , {\displaystyle 2{\sqrt {1+\alpha ^{2}}},} in units of DFT bins, and a typical value of α {\displaystyle \alpha } is 3. ==== Dolph–Chebyshev window ==== Minimizes the Chebyshev norm of the side-lobes for a given main lobe width. The zero-phase Dolph–Chebyshev window function w 0 [ n ] {\displaystyle w_{0}[n]} is usually defined in terms of its real-valued discrete Fourier transform, W 0 [ k ] {\displaystyle W_{0}[k]} : W 0 ( k ) = T N ( β cos ⁡ ( π k N + 1 ) ) T N ( β ) = T N ( β cos ⁡ ( π k N + 1 ) ) 10 α , 0 ≤ k ≤ N . 
{\displaystyle W_{0}(k)={\frac {T_{N}{\big (}\beta \cos \left({\frac {\pi k}{N+1}}\right){\big )}}{T_{N}(\beta )}}={\frac {T_{N}{\big (}\beta \cos \left({\frac {\pi k}{N+1}}\right){\big )}}{10^{\alpha }}},\ 0\leq k\leq N.} Tn(x) is the n-th Chebyshev polynomial of the first kind evaluated in x, which can be computed using T n ( x ) = { cos ( n cos − 1 ⁡ ( x ) ) if − 1 ≤ x ≤ 1 cosh ( n cosh − 1 ⁡ ( x ) ) if x ≥ 1 ( − 1 ) n cosh ( n cosh − 1 ⁡ ( − x ) ) if x ≤ − 1 , {\displaystyle T_{n}(x)={\begin{cases}\cos \!{\big (}n\cos ^{-1}(x){\big )}&{\text{if }}-1\leq x\leq 1\\\cosh \!{\big (}n\cosh ^{-1}(x){\big )}&{\text{if }}x\geq 1\\(-1)^{n}\cosh \!{\big (}n\cosh ^{-1}(-x){\big )}&{\text{if }}x\leq -1,\end{cases}}} and β = cosh ( 1 N cosh − 1 ⁡ ( 10 α ) ) {\displaystyle \beta =\cosh \!{\big (}{\tfrac {1}{N}}\cosh ^{-1}(10^{\alpha }){\big )}} is the unique positive real solution to T N ( β ) = 10 α {\displaystyle T_{N}(\beta )=10^{\alpha }} , where the parameter α sets the Chebyshev norm of the sidelobes to −20α decibels. The window function can be calculated from W0(k) by an inverse discrete Fourier transform (DFT): w 0 ( n ) = 1 N + 1 ∑ k = 0 N W 0 ( k ) ⋅ e i 2 π k n / ( N + 1 ) , − N / 2 ≤ n ≤ N / 2. {\displaystyle w_{0}(n)={\frac {1}{N+1}}\sum _{k=0}^{N}W_{0}(k)\cdot e^{i2\pi kn/(N+1)},\ -N/2\leq n\leq N/2.} The lagged version of the window can be obtained by: w [ n ] = w 0 ( n − N 2 ) , 0 ≤ n ≤ N , {\displaystyle w[n]=w_{0}\left(n-{\frac {N}{2}}\right),\quad 0\leq n\leq N,} which for even values of N must be computed as follows: w 0 ( n − N 2 ) = 1 N + 1 ∑ k = 0 N W 0 ( k ) ⋅ e i 2 π k ( n − N / 2 ) N + 1 = 1 N + 1 ∑ k = 0 N [ ( − e i π N + 1 ) k ⋅ W 0 ( k ) ] e i 2 π k n N + 1 , {\displaystyle {\begin{aligned}w_{0}\left(n-{\frac {N}{2}}\right)={\frac {1}{N+1}}\sum _{k=0}^{N}W_{0}(k)\cdot e^{\frac {i2\pi k(n-N/2)}{N+1}}={\frac {1}{N+1}}\sum _{k=0}^{N}\left[\left(-e^{\frac {i\pi }{N+1}}\right)^{k}\cdot W_{0}(k)\right]e^{\frac {i2\pi kn}{N+1}},\end{aligned}}} which is an inverse DFT of ( − e i π N + 1 ) k ⋅ W 0 ( k ) . {\displaystyle \left(-e^{\frac {i\pi }{N+1}}\right)^{k}\cdot W_{0}(k).} Variations: Due to the equiripple condition, the time-domain window has discontinuities at the edges. An approximation that avoids them, by allowing the equiripples to drop off at the edges, is a Taylor window. An alternative to the inverse DFT definition is also available.[1]. ==== Ultraspherical window ==== The Ultraspherical window was introduced in 1984 by Roy Streit and has application in antenna array design, non-recursive filter design, and spectrum analysis. Like other adjustable windows, the Ultraspherical window has parameters that can be used to control its Fourier transform main-lobe width and relative side-lobe amplitude. Uncommon to other windows, it has an additional parameter which can be used to set the rate at which side-lobes decrease (or increase) in amplitude. The window can be expressed in the time-domain as follows: w [ n ] = 1 N + 1 [ C N μ ( x 0 ) + ∑ k = 1 N 2 C N μ ( x 0 cos ⁡ k π N + 1 ) cos ⁡ 2 n π k N + 1 ] {\displaystyle w[n]={\frac {1}{N+1}}\left[C_{N}^{\mu }(x_{0})+\sum _{k=1}^{\frac {N}{2}}C_{N}^{\mu }\left(x_{0}\cos {\frac {k\pi }{N+1}}\right)\cos {\frac {2n\pi k}{N+1}}\right]} where C N μ {\displaystyle C_{N}^{\mu }} is the Ultraspherical polynomial of degree N, and x 0 {\displaystyle x_{0}} and μ {\displaystyle \mu } control the side-lobe patterns. 
Certain specific values of μ {\displaystyle \mu } yield other well-known windows: μ = 0 {\displaystyle \mu =0} and μ = 1 {\displaystyle \mu =1} give the Dolph–Chebyshev and Saramäki windows respectively. See here for illustration of Ultraspherical windows with varied parametrization. ==== Exponential or Poisson window ==== The Poisson window, or more generically the exponential window increases exponentially towards the center of the window and decreases exponentially in the second half. Since the exponential function never reaches zero, the values of the window at its limits are non-zero (it can be seen as the multiplication of an exponential function by a rectangular window ). It is defined by w [ n ] = e − | n − N 2 | 1 τ , {\displaystyle w[n]=e^{-\left|n-{\frac {N}{2}}\right|{\frac {1}{\tau }}},} where τ is the time constant of the function. The exponential function decays as e ≃ 2.71828 or approximately 8.69 dB per time constant. This means that for a targeted decay of D dB over half of the window length, the time constant τ is given by τ = N 2 8.69 D . {\displaystyle \tau ={\frac {N}{2}}{\frac {8.69}{D}}.} === Hybrid windows === Window functions have also been constructed as multiplicative or additive combinations of other windows. ==== Bartlett–Hann window ==== w [ n ] = a 0 − a 1 | n N − 1 2 | − a 2 cos ⁡ ( 2 π n N ) {\displaystyle w[n]=a_{0}-a_{1}\left|{\frac {n}{N}}-{\frac {1}{2}}\right|-a_{2}\cos \left({\frac {2\pi n}{N}}\right)} a 0 = 0.62 ; a 1 = 0.48 ; a 2 = 0.38 {\displaystyle a_{0}=0.62;\quad a_{1}=0.48;\quad a_{2}=0.38\,} ==== Planck–Bessel window ==== A § Planck-taper window multiplied by a Kaiser window which is defined in terms of a modified Bessel function. This hybrid window function was introduced to decrease the peak side-lobe level of the Planck-taper window while still exploiting its good asymptotic decay. It has two tunable parameters, ε from the Planck-taper and α from the Kaiser window, so it can be adjusted to fit the requirements of a given signal. ==== Hann–Poisson window ==== A Hann window multiplied by a Poisson window. For α ⩾ 2 {\displaystyle \alpha \geqslant 2} it has no side-lobes, as its Fourier transform drops off forever away from the main lobe without local minima. It can thus be used in hill climbing algorithms like Newton's method. The Hann–Poisson window is defined by: w [ n ] = 1 2 ( 1 − cos ⁡ ( 2 π n N ) ) e − α | N − 2 n | N {\displaystyle w[n]={\frac {1}{2}}\left(1-\cos \left({\frac {2\pi n}{N}}\right)\right)e^{\frac {-\alpha \left|N-2n\right|}{N}}\,} where α is a parameter that controls the slope of the exponential. === Other windows === ==== Generalized adaptive polynomial (GAP) window ==== The GAP window is a family of adjustable window functions that are based on a symmetrical polynomial expansion of order K {\displaystyle K} . It is continuous with continuous derivative everywhere. With the appropriate set of expansion coefficients and expansion order, the GAP window can mimic all the known window functions, reproducing accurately their spectral properties. w 0 [ n ] = a 0 + ∑ k = 1 K a 2 k ( n σ ) 2 k , − N 2 ≤ n ≤ N 2 , {\displaystyle w_{0}[n]=a_{0}+\sum _{k=1}^{K}a_{2k}\left({\frac {n}{\sigma }}\right)^{2k},\quad -{\frac {N}{2}}\leq n\leq {\frac {N}{2}},} where σ {\displaystyle \sigma } is the standard deviation of the { n } {\displaystyle \{n\}} sequence. 
Additionally, starting with a set of expansion coefficients a 2 k {\displaystyle a_{2k}} that mimics a certain known window function, the GAP window can be optimized by minimization procedures to get a new set of coefficients that improve one or more spectral properties, such as the main lobe width, side lobe attenuation, and side lobe falloff rate. A GAP window function can therefore be developed with spectral properties tailored to the specific application. ==== Lanczos window ==== The Lanczos window, used in Lanczos resampling, is given by w [ n ] = sinc ⁡ ( 2 n N − 1 ) , {\displaystyle w[n]=\operatorname {sinc} \left({\frac {2n}{N}}-1\right)} where sinc ⁡ ( x ) {\displaystyle \operatorname {sinc} (x)} is defined as sin ⁡ ( π x ) / π x {\displaystyle \sin(\pi x)/\pi x} . It is also known as a sinc window, because w 0 ( n ) = sinc ⁡ ( 2 n N ) {\displaystyle w_{0}(n)=\operatorname {sinc} \left({\frac {2n}{N}}\right)\,} is the main lobe of a normalized sinc function. === Asymmetric window functions === The w 0 ( x ) {\displaystyle w_{0}(x)} form, according to the convention above, is symmetric around x = 0 {\displaystyle x=0} . However, there are window functions that are asymmetric, such as the gamma distribution used in FIR implementations of gammatone filters, or the beta distribution as a bounded-support approximation to the gamma distribution. These asymmetries are used to reduce the delay when using large window sizes, or to emphasize the initial transient of a decaying pulse. Any bounded function with compact support, including asymmetric ones, can readily be used as a window function. Additionally, symmetric windows can be transformed into asymmetric windows by transforming the time coordinate, such as with the formula x ← N ( x N + 1 2 ) α − N 2 , {\displaystyle x\leftarrow N\left({\frac {x}{N}}+{\frac {1}{2}}\right)^{\alpha }-{\frac {N}{2}}\,,} which weights the earliest samples more heavily when α > 1 {\displaystyle \alpha >1} and, conversely, the latest samples more heavily when α < 1 {\displaystyle \alpha <1} . == See also == Apodization Kolmogorov–Zurbenko filter Multitaper Short-time Fourier transform Spectral leakage Welch method Weight function Window design method == Further reading == Harris, Frederic J. (September 1976). "Windows, Harmonic Analysis, and the Discrete Fourier Transform" (PDF). Naval Undersea Center, San Diego. Albrecht, Hans-Helge (2012). Tailored Minimum Sidelobe and Minimum Sidelobe Cosine-Sum Windows. Version 1.0. Physikalisch-Technische Bundesanstalt. doi:10.7795/110.20121022aa. ISBN 978-3-86918-281-0. Bergen, S.W.A.; Antoniou, A. (2005). "Design of Nonrecursive Digital Filters Using the Ultraspherical Window Function". EURASIP Journal on Applied Signal Processing. 2005 (12): 1910–1922. doi:10.1155/ASP.2005.1910. Prabhu, K. M. M. (2014). Window Functions and Their Applications in Signal Processing. Boca Raton, FL: CRC Press. ISBN 978-1-4665-1583-3.
US patent 7065150, Park, Young-Seo, "System and method for generating a root raised cosine orthogonal frequency division multiplexing (RRC OFDM) modulation", published 2003, issued 2006 == External links == Media related to Window function at Wikimedia Commons LabView Help, Characteristics of Smoothing Filters, http://zone.ni.com/reference/en-XX/help/371361B-01/lvanlsconcepts/char_smoothing_windows/ Creation and properties of Cosine-sum Window functions, http://electronicsart.weebly.com/fftwindows.html Online Interactive FFT, Windows, Resolution, and Leakage Simulation | RITEC | Library & Tools