In computer networking, Rate Based Satellite Control Protocol (RBSCP) is a tunneling method proposed by Cisco to improve the performance of satellite network links with high latency and error rates. The problem RBSCP addresses is that the long round-trip time (RTT) on such links keeps TCP virtual circuits in slow start for a long time. This, combined with the high loss rate, yields very low effective bandwidth on the channel. Since satellite links may be high-throughput, overall link utilization may fall below what is optimal from a technical and economic point of view. == Means of operation == RBSCP works by tunneling the usual IP packets within IP packets. The transport protocol identifier is 199. On each end of the tunnel, routers buffer packets to utilize the link better. In addition, RBSCP tunnel routers: modify TCP options at connection setup; implement a Performance Enhancing Proxy (PEP) that resends lost packets on behalf of the client, so loss is not interpreted as congestion. == External links == https://web.archive.org/web/20110706144353/http://cisco.biz/en/US/docs/ios/12_3t/12_3t7/feature/guide/gt_rbscp.html
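The slow-start problem can be made concrete with a rough calculation. The following sketch uses an illustrative geostationary RTT and link rate (assumptions, not figures from the Cisco feature guide) to estimate how long slow start needs to fill the pipe:

```python
import math

# Back-of-the-envelope illustration of why long RTTs keep TCP in slow
# start. The link speed and RTT below are illustrative assumptions
# (a geostationary hop), not values from the Cisco documentation.
MSS = 1460               # bytes per TCP segment (typical Ethernet MSS)
RTT = 0.55               # seconds per round trip over a geostationary link
LINK_RATE = 10e6 / 8     # a 10 Mbit/s link, in bytes per second

bdp = LINK_RATE * RTT    # bandwidth-delay product: bytes "in flight" at full rate

# Slow start roughly doubles the congestion window once per RTT,
# starting from one segment, so filling the pipe takes about
# log2(bdp / MSS) round trips.
rtts_to_fill = math.ceil(math.log2(bdp / MSS))
seconds_to_fill = rtts_to_fill * RTT
print(rtts_to_fill, seconds_to_fill)   # 9 round trips, roughly 5 seconds, before the link is full
```

With these assumed numbers a single connection spends about five seconds ramping up before it can use the link, which is the underutilization RBSCP's TCP-option manipulation and PEP aim to reduce.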
Wikipedia/Rate_Based_Satellite_Control_Protocol
Evolving networks are networks that change as a function of time. They are a natural extension of network science, since almost all real world networks evolve over time, either by adding or removing nodes or links. Often all of these processes occur simultaneously, such as in social networks where people make and lose friends over time, thereby creating and destroying edges, and some people become part of new social networks or leave their networks, changing the nodes in the network. Evolving network concepts build on established network theory and are now being introduced into studying networks in many diverse fields. == Network theory background == The study of networks traces its foundations to the development of graph theory, which was first analyzed by Leonhard Euler in 1736 when he wrote the famous Seven Bridges of Königsberg paper. Probabilistic network theory then developed with the help of eight famous papers studying random graphs written by Paul Erdős and Alfréd Rényi. The Erdős–Rényi model (ER) supposes that a graph is composed of N labeled nodes where each pair of nodes is connected with a preset probability p. While the ER model's simplicity has helped it find many applications, it does not accurately describe many real world networks. The ER model fails to generate local clustering and triadic closures as often as they are found in real world networks. Therefore, the Watts and Strogatz model was proposed, whereby a network is constructed as a regular ring lattice and then nodes are rewired according to some probability β. This produces a locally clustered network and dramatically reduces the average path length, creating networks which represent the small world phenomenon observed in many real world networks. Despite this achievement, both the ER and the Watts and Strogatz models fail to account for the formation of hubs as observed in many real world networks. 
The degree distribution in the ER model follows a Poisson distribution, while the Watts and Strogatz model produces graphs that are homogeneous in degree. Many networks are instead scale free, meaning that their degree distribution follows a power law of the form {\displaystyle P(k)\sim k^{-\gamma }}. This exponent turns out to be approximately 3 for many real world networks; however, it is not a universal constant and depends continuously on the network's parameters. == First evolving network model – scale-free networks == The Barabási–Albert (BA) model was the first widely accepted model to produce scale-free networks. This was accomplished by incorporating preferential attachment and growth, where nodes are added to the network over time and are more likely to link to other nodes with high degree. The BA model was first applied to degree distributions on the web, where both of these effects can be clearly seen. New web pages are added over time, and each new page is more likely to link to highly visible hubs like Google, which have very high degree, than to nodes with only a few links. Formally, this preferential attachment is {\displaystyle p_{i}={\frac {k_{i}}{\sum _{j}k_{j}}}}. == Additions to BA model == The BA model was the first model to derive the network topology from the way the network was constructed, with nodes and links being added over time. However, the model makes only the simplest assumptions necessary for a scale-free network to emerge, namely that there is linear growth and linear preferential attachment. This minimal model does not capture variations in the shape of the degree distribution, variations in the degree exponent, or the size-independent clustering coefficient. Therefore, the original model has since been modified to more fully capture the properties of evolving networks by introducing a few new properties. 
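The growth and preferential attachment rules lend themselves to a short simulation. The sketch below (illustrative, not a reference implementation) keeps a list in which each node appears once per unit of degree, so uniform sampling from that list realizes p_i = k_i / Σ_j k_j:

```python
import random

# Minimal Barabási–Albert growth sketch with linear preferential
# attachment. Starts from a small complete seed graph on m+1 nodes.
def ba_network(n, m, seed=0):
    """Grow an undirected BA-style network to n nodes; each new node adds m links."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m) for j in range(i + 1, m + 1)]  # seed clique
    # Each node appears in `targets` once per unit of degree, so a
    # uniform draw from this list is a degree-proportional draw.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                 # m distinct, degree-biased targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((t, new))
            targets += [t, new]                # update degrees of both endpoints
    return edges

edges = ba_network(100, 2)
```

Early nodes accumulate links over many growth steps, which is the mechanism that produces hubs and a power-law degree distribution in the large-n limit.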
=== Fitness === One concern with the BA model is that the degree of each node experiences strong positive feedback, whereby the earliest nodes with high degree continue to dominate the network indefinitely. However, this can be alleviated by introducing a fitness for each node, which modifies the probability of new links being created with that node, or even of links to that node being removed. In order to preserve the preferential attachment from the BA model, this fitness is then multiplied by the preferential attachment based on degree to give the true probability that a link is created which connects to node i: {\displaystyle \Pi (k_{i})={\frac {\eta _{i}k_{i}}{\sum _{j}\eta _{j}k_{j}}}}, where {\displaystyle \eta } is the fitness, which may also depend on time. A decay of fitness with respect to time may occur and can be formalized by {\displaystyle \Pi (k_{i})\propto k_{i}(t-t_{i})^{-\nu }}, where {\displaystyle \gamma } increases with {\displaystyle \nu }. === Removing nodes and rewiring links === Further complications arise because nodes may be removed from the network with some probability. Additionally, existing links may be destroyed and new links between existing nodes may be created. The probability of these actions occurring may depend on time and may also be related to the node's fitness. Probabilities can be assigned to these events by studying the characteristics of the network in question, in order to grow a model network with identical properties. This growth would take place with one of the following actions occurring at each time step: Prob p: add an internal link. Prob q: delete a link. Prob r: delete a node. Prob 1-p-q-r: add a node. 
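The growth process just listed can be sketched as a simulation loop. The probabilities p, q, r and the attachment rule below are illustrative assumptions (new nodes attach uniformly at random here, rather than preferentially, to keep the sketch short):

```python
import random

# Sketch of the extended growth process: at each time step, with
# probability p add an internal link, with probability q delete a link,
# with probability r delete a node, and otherwise add a node.
# p, q, r are illustrative choices, not values fitted to any network.
def evolve(steps, p=0.2, q=0.1, r=0.05, seed=0):
    rng = random.Random(seed)
    nodes = {0, 1, 2}
    edges = {(0, 1), (1, 2)}
    next_id = 3
    for _ in range(steps):
        u = rng.random()
        if u < p and len(nodes) >= 2:                   # add an internal link
            a, b = rng.sample(sorted(nodes), 2)
            edges.add((min(a, b), max(a, b)))
        elif u < p + q and edges:                       # delete a link
            edges.discard(rng.choice(sorted(edges)))
        elif u < p + q + r and len(nodes) > 2:          # delete a node (and its links)
            v = rng.choice(sorted(nodes))
            nodes.discard(v)
            edges = {e for e in edges if v not in e}
        else:                                           # add a node, linked at random
            target = rng.choice(sorted(nodes))
            nodes.add(next_id)
            edges.add((target, next_id))
            next_id += 1
    return nodes, edges
```

Fitting p, q, and r to the event rates measured in an observed network lets such a loop grow model networks with matching properties.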
== Other ways of characterizing evolving networks == In addition to growing network models as described above, there may be times when other methods are more useful or convenient for characterizing certain properties of evolving networks. === Convergence towards equilibria === In networked systems where competitive decision making takes place, game theory is often used to model system dynamics, and convergence towards equilibria can be considered as a driver of topological evolution. For example, Kasthurirathna and Piraveenan have shown that when individuals in a system display varying levels of rationality, improving the overall system rationality might be an evolutionary reason for the emergence of scale-free networks. They demonstrated this by applying evolutionary pressure on an initially random network which simulates a range of classic games, so that the network converges towards Nash equilibria while being allowed to re-wire. The networks become increasingly scale-free during this process. === Treat evolving networks as successive snapshots of a static network === The most common way to view evolving networks is by considering them as successive static networks. This could be conceptualized as the individual still images which compose a motion picture. Many simple parameters exist to describe a static network (number of nodes, edges, path length, connected components), or to describe specific nodes in the graph such as the number of links or the clustering coefficient. These properties can then individually be studied as a time series using signal processing notions. For example, we can track the number of links established to a server per minute by looking at the successive snapshots of the network and counting these links in each snapshot. Unfortunately, the analogy of snapshots to a motion picture also reveals the main difficulty with this approach: the time steps employed are very rarely suggested by the network and are instead arbitrary. 
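A minimal sketch of this snapshot approach (the edge sets are invented for illustration): each snapshot is a static edge set, and any static property read off per snapshot yields a time series:

```python
# An evolving network as successive static snapshots, one edge set per
# (arbitrarily chosen) time step. The data below is purely illustrative.
snapshots = [
    {(1, 2)},                    # t = 0
    {(1, 2), (2, 3)},            # t = 1
    {(1, 2), (2, 3), (1, 3)},    # t = 2
    {(2, 3), (1, 3)},            # t = 3: edge (1, 2) was destroyed
]

# A static parameter (edge count) becomes a time series.
edge_counts = [len(edges) for edges in snapshots]

def degree(node, edges):
    """Number of links incident to `node` in one static snapshot."""
    return sum(node in e for e in edges)

# E.g., track the links established to a "server" node (node 2 here)
# across snapshots, as described in the text.
server_links = [degree(2, edges) for edges in snapshots]
```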
Using extremely small time steps between each snapshot preserves resolution, but may actually obscure wider trends which only become visible over longer timescales. Conversely, using larger timescales loses the temporal order of events within each snapshot. Therefore, it may be difficult to find the appropriate timescale for dividing the evolution of a network into static snapshots. === Define dynamic properties === It may be important to look at properties which cannot be directly observed by treating evolving networks as a sequence of snapshots, such as the duration of contacts between nodes. Other similar properties can be defined, and it is then possible to track these properties through the evolution of a network and visualize them directly. Another issue with using successive snapshots is that slight changes in network topology can have large effects on the outcome of algorithms designed to find communities. Therefore, it is necessary to use a non-classical definition of communities which permits following the evolution of the community through a set of rules such as birth, death, merge, split, growth, and contraction. == Applications == Almost all real world networks are evolving networks since they are constructed over time. By varying the respective probabilities described above, it is possible to use the expanded BA model to construct a network with nearly identical properties as many observed networks. Moreover, the concept of scale free networks shows us that time evolution is a necessary part of understanding the network's properties, and that it is difficult to model an existing network as having been created instantaneously. Real evolving networks which are currently being studied include social networks, communications networks, the internet, the movie actor network, the World Wide Web, and transportation networks. 
== Further reading == "Understanding Network Science," https://web.archive.org/web/20110718151116/http://www.zangani.com/blog/2007-1030-networkingscience "Linked: The New Science of Networks", A.-L. Barabási, Perseus Publishing, Cambridge. == References ==
Wikipedia/Evolving_networks
Exponential family random graph models (ERGMs) are a set of statistical models used to study the structure and patterns within networks, such as those in social, organizational, or scientific contexts. They analyze how connections (edges) form between individuals or entities (nodes) by modeling the likelihood of network features, like clustering or centrality, across diverse examples including knowledge networks, organizational networks, colleague networks, social media networks, networks of scientific collaboration, and more. Part of the exponential family of distributions, ERGMs help researchers understand and predict network behavior in fields ranging from sociology to data science. == Background == Many metrics exist to describe the structural features of an observed network, such as the density, centrality, or assortativity. However, these metrics describe the observed network, which is only one instance of a large number of possible alternative networks. This set of alternative networks may have similar or dissimilar structural features. To support statistical inference on the processes influencing the formation of network structure, a statistical model should consider the set of all possible alternative networks weighted on their similarity to an observed network. However, because network data is inherently relational, it violates the assumptions of independence and identical distribution of standard statistical models like linear regression. Alternative statistical models should reflect the uncertainty associated with a given observation, permit inference about the relative frequency of network substructures of theoretical interest, disambiguate the influence of confounding processes, efficiently represent complex structures, and link local-level processes to global-level properties. Degree-preserving randomization, for example, is a specific way in which an observed network could be considered in terms of multiple alternative networks. 
== Definition == The exponential family is a broad family of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks. Formally, a random graph {\displaystyle Y\in {\mathcal {Y}}} consists of a set of {\displaystyle n} nodes and a collection of tie variables {\displaystyle \{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}}, indexed by pairs of nodes {\displaystyle ij}, where {\displaystyle Y_{ij}=1} if the nodes {\displaystyle (i,j)} are connected by an edge and {\displaystyle Y_{ij}=0} otherwise. A pair of nodes {\displaystyle ij} is called a dyad, and a dyad is an edge if {\displaystyle Y_{ij}=1}. The basic assumption of these models is that the structure in an observed graph {\displaystyle y} can be explained by a given vector of sufficient statistics {\displaystyle s(y)} which are a function of the observed network and, in some cases, nodal attributes. This way, it is possible to describe any kind of dependence between the dyadic variables: {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},\quad \forall y\in {\mathcal {Y}}} where {\displaystyle \theta } is a vector of model parameters associated with {\displaystyle s(y)} and {\displaystyle c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))} is a normalising constant. These models represent a probability distribution on each possible network on {\displaystyle n} nodes. However, the size of the set of possible networks for an undirected network (simple graph) of size {\displaystyle n} is {\displaystyle 2^{n(n-1)/2}}. 
Because the number of possible networks in the set vastly outnumbers the number of parameters which can constrain the model, the ideal probability distribution is the one which maximizes the Gibbs entropy. == Example == Let {\displaystyle V=\{1,2,3\}} be a set of three nodes and let {\displaystyle {\mathcal {Y}}} be the set of all undirected, loopless graphs on {\displaystyle V}. Loopless implies that for all {\displaystyle i=1,2,3} it is {\displaystyle Y_{ii}=0} and undirected implies that for all {\displaystyle i,j=1,2,3} it is {\displaystyle Y_{ij}=Y_{ji}}, so that there are three binary tie variables ({\displaystyle Y_{12},Y_{13},Y_{23}}) and {\displaystyle 2^{3}=8} different graphs in this example. Define a two-dimensional vector of statistics by {\displaystyle s(y)=[s_{1}(y),s_{2}(y)]^{T}}, where {\displaystyle s_{1}(y)=edges(y)} is defined to be the number of edges in the graph {\displaystyle y} and {\displaystyle s_{2}(y)=triangles(y)} is defined to be the number of closed triangles in {\displaystyle y}. Finally, let the parameter vector be defined by {\displaystyle \theta =(\theta _{1},\theta _{2})^{T}=(-\ln 2,\ln 3)^{T}}, so that the probability of every graph {\displaystyle y\in {\mathcal {Y}}} in this example is given by: {\displaystyle P(Y=y|\theta )={\frac {\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))}{c(\theta )}}} We note that in this example, there are just four graph isomorphism classes: the graph with zero edges, three graphs with exactly one edge, three graphs with exactly two edges, and the graph with three edges. 
Since isomorphic graphs have the same number of edges and the same number of triangles, they also have the same probability in this example ERGM. For a representative {\displaystyle y} of each isomorphism class, we first compute the term {\displaystyle x(y)=\exp(-\ln 2\cdot edges(y)+\ln 3\cdot triangles(y))}, which is proportional to the probability of {\displaystyle y} (up to the normalizing constant {\displaystyle c(\theta )}). If {\displaystyle y} is the graph with zero edges, then it is {\displaystyle edges(y)=0} and {\displaystyle triangles(y)=0}, so that {\displaystyle x(y)=\exp(-\ln 2\cdot 0+\ln 3\cdot 0)=\exp(0)=1.} If {\displaystyle y} is a graph with exactly one edge, then it is {\displaystyle edges(y)=1} and {\displaystyle triangles(y)=0}, so that {\displaystyle x(y)=\exp(-\ln 2\cdot 1+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)}}={\frac {1}{2}}.} If {\displaystyle y} is a graph with exactly two edges, then it is {\displaystyle edges(y)=2} and {\displaystyle triangles(y)=0}, so that {\displaystyle x(y)=\exp(-\ln 2\cdot 2+\ln 3\cdot 0)={\frac {\exp(0)}{\exp(\ln 2)^{2}}}={\frac {1}{4}}.} If {\displaystyle y} is the graph with exactly three edges, then it is {\displaystyle edges(y)=3} and {\displaystyle triangles(y)=1}, so that 
{\displaystyle x(y)=\exp(-\ln 2\cdot 3+\ln 3\cdot 1)={\frac {\exp(\ln 3)}{\exp(\ln 2)^{3}}}={\frac {3}{8}}.} The normalizing constant is computed by summing {\displaystyle x(y)} over all eight different graphs {\displaystyle y\in {\mathcal {Y}}}. This yields: {\displaystyle c(\theta )=1+3\cdot {\frac {1}{2}}+3\cdot {\frac {1}{4}}+{\frac {3}{8}}={\frac {29}{8}}.} Finally, the probability of every graph {\displaystyle y\in {\mathcal {Y}}} is given by {\displaystyle P(Y=y|\theta )={\frac {x(y)}{c(\theta )}}}. Explicitly, we get that the graph with zero edges has probability {\displaystyle {\frac {8}{29}}}, every graph with exactly one edge has probability {\displaystyle {\frac {4}{29}}}, every graph with exactly two edges has probability {\displaystyle {\frac {2}{29}}}, and the graph with exactly three edges has probability {\displaystyle {\frac {3}{29}}} in this example. Intuitively, the structure of graph probabilities in this ERGM example is consistent with typical patterns of social or other networks. The negative parameter ({\displaystyle \theta _{1}=-\ln 2}) associated with the number of edges implies that, all other things being equal, networks with fewer edges have a higher probability than networks with more edges. This is consistent with the sparsity that is often found in empirical networks, namely that the empirical number of edges typically grows at a slower rate than the maximally possible number of edges. The positive parameter ({\displaystyle \theta _{2}=\ln 3}) associated with the number of closed triangles implies that, all other things being equal, networks with more triangles have a higher probability than networks with fewer triangles. This is consistent with a tendency for triadic closure that is often found in certain types of social networks. 
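The worked example can be verified numerically. The sketch below enumerates all eight graphs as tuples of tie variables (y12, y13, y23); variable names are illustrative:

```python
import math
from itertools import product

# Parameters of the three-node example ERGM: theta = (-ln 2, ln 3)
# for the statistics (edges, triangles).
theta = (-math.log(2), math.log(3))

def x(y):
    """Unnormalized weight exp(theta1*edges + theta2*triangles) of graph y."""
    edges = sum(y)
    # On three nodes, only the complete graph closes a triangle.
    triangles = 1 if edges == 3 else 0
    return math.exp(theta[0] * edges + theta[1] * triangles)

graphs = list(product((0, 1), repeat=3))   # all 8 graphs as (y12, y13, y23)
c = sum(x(y) for y in graphs)              # normalizing constant, 29/8
probs = {y: x(y) / c for y in graphs}      # e.g. empty graph -> 8/29
```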
Compare these patterns with the graph probabilities computed above. The addition of every edge divides the probability by two. However, when going from a graph with two edges to the graph with three edges, the number of triangles increases by one, which additionally multiplies the probability by three. We note that the explicit calculation of all graph probabilities is only possible because there are so few different graphs in this example. Since the number of different graphs scales exponentially in the number of tie variables, which in turn scales quadratically in the number of nodes, computing the normalizing constant is in general computationally intractable, even for a moderate number of nodes. == Sampling from an ERGM == Exact sampling from a given ERGM is computationally intractable in general since computing the normalizing constant requires summation over all {\displaystyle y\in {\mathcal {Y}}}. Efficient approximate sampling from an ERGM can be done via Markov chains and is applied in current methods to approximate expected values and to estimate ERGM parameters. Informally, given an ERGM on a set of graphs {\displaystyle {\mathcal {Y}}} with probability mass function {\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}}}, one selects an initial graph {\displaystyle y^{(0)}\in {\mathcal {Y}}} (which might be arbitrarily, or randomly, chosen or might represent an observed network) and implicitly defines transition probabilities (or jump probabilities) {\displaystyle \pi (y,y')=P(Y^{(t+1)}=y'|Y^{(t)}=y)}, which are the conditional probabilities that the Markov chain is on graph {\displaystyle y'} after Step {\displaystyle t+1}, given that it is on graph {\displaystyle y} after Step {\displaystyle t}. 
The transition probabilities do not depend on the graphs in earlier steps ({\displaystyle y^{(0)},\dots ,y^{(t-1)}}), which is a defining property of Markov chains, and they do not depend on {\displaystyle t}, that is, the Markov chain is time-homogeneous. The goal is to define the transition probabilities such that for all {\displaystyle y\in {\mathcal {Y}}} it is {\displaystyle \lim _{t\to \infty }P(Y^{(t)}=y)={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},} independent of the initial graph {\displaystyle y^{(0)}}. If this is achieved, one can run the Markov chain for a large number of steps and then return the current graph as a random sample from the given ERGM. The probability of returning a graph {\displaystyle y\in {\mathcal {Y}}} after a finite but large number of update steps is approximately the probability defined by the ERGM. Current methods for sampling from ERGMs with Markov chains usually define an update step by two sub-steps: first, randomly select a candidate {\displaystyle y'} in a neighborhood of the current graph {\displaystyle y} and, second, accept {\displaystyle y'} with a probability that depends on the probability ratio of the current graph {\displaystyle y} and the candidate {\displaystyle y'}. (If the candidate is not accepted, the Markov chain remains on the current graph {\displaystyle y}.) If the set of graphs {\displaystyle {\mathcal {Y}}} is unconstrained (i.e., contains any combination of values on the binary tie variables), a simple method for candidate selection is to choose one tie variable {\displaystyle y_{ij}} uniformly at random and to define the candidate by flipping this single variable (i.e., to set {\displaystyle y'_{ij}=1-y_{ij}}; all other variables take the same value as in {\displaystyle y}). 
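This flip-one-tie update can be sketched for the three-node example ERGM. The acceptance rule below is the heat-bath probability P(y')/(P(y')+P(y)), in which the normalizing constant cancels; the code is an illustration, not taken from any ERGM package:

```python
import math
import random

# Toy ERGM on 3 nodes: statistics (edges, triangles),
# parameters theta = (-ln 2, ln 3), as in the worked example.
THETA = (-math.log(2), math.log(3))

def weight(y):
    """Unnormalized probability exp(theta^T s(y)); y = (y12, y13, y23)."""
    edges = sum(y)
    triangles = 1 if edges == 3 else 0
    return math.exp(THETA[0] * edges + THETA[1] * triangles)

def step(y, rng):
    """One Markov chain update: flip a uniformly chosen tie variable."""
    i = rng.randrange(3)
    cand = list(y)
    cand[i] = 1 - cand[i]
    cand = tuple(cand)
    # Heat-bath acceptance w'/(w'+w): c(theta) cancels in the ratio.
    if rng.random() < weight(cand) / (weight(cand) + weight(y)):
        return cand
    return y

rng = random.Random(1)
y = (0, 0, 0)                 # arbitrary initial graph
counts = {}
for _ in range(200_000):
    y = step(y, rng)
    counts[y] = counts.get(y, 0) + 1
# Empirical frequency of the empty graph approaches 8/29 ≈ 0.276.
```

Running the chain long enough makes the empirical graph frequencies match the exact probabilities 8/29, 4/29, 2/29, and 3/29 derived above.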
A common way to define the acceptance probability is to accept {\displaystyle y'} with the conditional probability {\displaystyle P(Y=y'|Y=y'\vee Y=y)={\frac {P(Y=y')}{P(Y=y')+P(Y=y)}},} where the graph probabilities are defined by the ERGM. Crucially, the normalizing constant {\displaystyle c(\theta )} cancels out in this fraction, so that the acceptance probabilities can be computed efficiently. == See also == Autologistic actor attribute models == References == == Further reading == Byshkin, M.; Stivala, A.; Mira, A.; Robins, G.; Lomi, A. (2018). "Fast Maximum Likelihood Estimation via Equilibrium Expectation for Large Network Data". Scientific Reports. 8 (1): 11509. arXiv:1802.10311. Bibcode:2018NatSR...811509B. doi:10.1038/s41598-018-29725-8. PMC 6068132. PMID 30065311. Caimo, A.; Friel, N. (2011). "Bayesian inference for exponential random graph models". Social Networks. 33: 41–55. arXiv:1007.5192. doi:10.1016/j.socnet.2010.09.004. Erdős, P.; Rényi, A. (1959). "On random graphs". Publicationes Mathematicae. 6: 290–297. Fienberg, S. E.; Wasserman, S. (1981). "Discussion of An Exponential Family of Probability Distributions for Directed Graphs by Holland and Leinhardt". Journal of the American Statistical Association. 76 (373): 54–57. doi:10.1080/01621459.1981.10477600. Frank, O.; Strauss, D. (1986). "Markov Graphs". Journal of the American Statistical Association. 81 (395): 832–842. doi:10.2307/2289017. JSTOR 2289017. Handcock, M. S.; Hunter, D. R.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data". Journal of Statistical Software. 24 (1): 1–11. doi:10.18637/jss.v024.i01. PMC 2447931. PMID 18618019. Harris, Jenine K. (2014). An introduction to exponential random graph modeling. ISBN 9781452220802. OCLC 870698788. Hunter, D. R.; Goodreau, S. M.; Handcock, M. S. (2008). 
"Goodness of Fit of Social Network Models". Journal of the American Statistical Association. 103 (481): 248–258. CiteSeerX 10.1.1.206.396. doi:10.1198/016214507000000446. Hunter, D. R.; Handcock, M. S. (2006). "Inference in curved exponential family models for networks". Journal of Computational and Graphical Statistics. 15 (3): 565–583. CiteSeerX 10.1.1.205.9670. doi:10.1198/106186006X133069. Hunter, D. R.; Handcock, M. S.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). "ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks". Journal of Statistical Software. 24 (3): 1–29. doi:10.18637/jss.v024.i03. PMC 2743438. Jin, I. H.; Liang, F. (2012). "Fitting social networks models using varying truncation stochastic approximation MCMC algorithm". Journal of Computational and Graphical Statistics. 22 (4): 927–952. doi:10.1080/10618600.2012.680851. Koskinen, J. H.; Robins, G. L.; Pattison, P. E. (2010). "Analysing exponential random graph (p-star) models with missing data using Bayesian data augmentation". Statistical Methodology. 7 (3): 366–384. doi:10.1016/j.stamet.2009.09.007. Morris, M.; Handcock, M. S.; Hunter, D. R. (2008). "Specification of Exponential-Family Random Graph Models: Terms and Computational Aspects". Journal of Statistical Software. 24 (4): 1548–7660. doi:10.18637/jss.v024.i04. PMC 2481518. PMID 18650964. Rinaldo, A.; Fienberg, S. E.; Zhou, Y. (2009). "On the geometry of discrete exponential families with application to exponential random graph models". Electronic Journal of Statistics. 3: 446–484. arXiv:0901.0026. doi:10.1214/08-EJS350. Robins, G.; Snijders, T.; Wang, P.; Handcock, M.; Pattison, P. (2007). "Recent developments in exponential random graph (p*) models for social networks" (PDF). Social Networks. 29 (2): 192–215. doi:10.1016/j.socnet.2006.08.003. hdl:11370/abee7276-394e-4051-a180-7b2ff57d42f5. Schweinberger, Michael (2011). "Instability, sensitivity, and degeneracy of discrete exponential families". 
Journal of the American Statistical Association. 106 (496): 1361–1370. doi:10.1198/jasa.2011.tm10747. PMC 3405854. PMID 22844170. Schweinberger, Michael; Handcock, Mark (2015). "Local dependence in random graph models: characterization, properties and statistical inference". Journal of the Royal Statistical Society, Series B. 77 (3): 647–676. doi:10.1111/rssb.12081. PMC 4637985. PMID 26560142. Schweinberger, Michael; Stewart, Jonathan (2020). "Concentration and consistency results for canonical and curved exponential-family models of random graphs". The Annals of Statistics. 48 (1): 374–396. arXiv:1702.01812. doi:10.1214/19-AOS1810. Snijders, T. A. B. (2002). "Markov chain Monte Carlo estimation of exponential random graph models" (PDF). Journal of Social Structure. 3. Snijders, T. A. B.; Pattison, P. E.; Robins, G. L.; Handcock, M. S. (2006). "New specifications for exponential random graph models". Sociological Methodology. 36: 99–153. CiteSeerX 10.1.1.62.7975. doi:10.1111/j.1467-9531.2006.00176.x. Strauss, D.; Ikeda, M. (1990). "Pseudolikelihood estimation for social networks". Journal of the American Statistical Association. 85 (409): 204–212. doi:10.2307/2289546. JSTOR 2289546. van Duijn, M. A.; Snijders, T. A. B.; Zijlstra, B. H. (2004). "p2: a random effects model with covariates for directed graphs". Statistica Neerlandica. 58 (2): 234–254. doi:10.1046/j.0039-0402.2003.00258.x. van Duijn, M. A. J.; Gile, K. J.; Handcock, M. S. (2009). "A framework for the comparison of maximum pseudo-likelihood and maximum likelihood estimation of exponential family random graph models". Social Networks. 31 (1): 52–62. doi:10.1016/j.socnet.2008.10.003. PMC 3500576. PMID 23170041.
Wikipedia/Exponential_random_graph_models
Network access control (NAC) is an approach to computer security that attempts to unify endpoint security technology (such as antivirus, host intrusion prevention, and vulnerability assessment), user or system authentication, and network security enforcement. == Description == Network access control is a computer networking solution that uses a set of protocols to define and implement a policy that describes how to secure access to network nodes by devices when they initially attempt to access the network. NAC might integrate the automatic remediation process (fixing non-compliant nodes before allowing access) into the network systems, allowing the network infrastructure such as routers, switches and firewalls to work together with back office servers and end user computing equipment to ensure the information system is operating securely before interoperability is allowed. A basic form of NAC is the 802.1X standard. Network access control aims to do exactly what the name implies: control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do. === Example === When a computer connects to a computer network, it is not permitted to access anything unless it complies with a business-defined policy, including anti-virus protection level, system update level, and configuration. While the computer is being checked by a pre-installed software agent, it can only access resources that can remediate (resolve or update) any issues. Once the policy is met, the computer is able to access network resources and the Internet, within the policies defined by the NAC system. NAC is mainly used for endpoint health checks, but it is often tied to role-based access: access to the network is given according to the profile of the person and the results of a posture/health check. 
For example, in an enterprise the HR department could access only HR department files if both the role and the endpoint meet anti-virus minimums. === Goals of NAC === NAC is an emerging category of security products whose definition is both evolving and controversial. The overarching goals of this concept can be distilled to: Authentication, Authorization and Accounting of network connections. While conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to enforce access policies based on authenticated user identities, at least for user end-stations like laptops and desktop computers. Policy enforcement NAC solutions allow network operators to define policies, like the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middleboxes. Verification of security posture of connecting devices. The main benefit of NAC solutions is to prevent end-stations that lack antivirus, patches, or host intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination of computer worms. == Concepts == === Pre-admission and post-admission === There are two prevailing designs in NAC, based on whether policies are enforced before or after end-stations gain access to the network. In the former case, called pre-admission NAC, end-stations are inspected prior to being allowed on the network. A typical use case of pre-admission NAC would be to prevent clients with out-of-date antivirus signatures from talking to sensitive servers. Alternatively, post-admission NAC makes enforcement decisions based on user actions, after those users have been provided with access to the network. === Agent versus agentless === The fundamental idea behind NAC is to allow the network to make access control decisions based on intelligence about end-systems, so the manner in which the network is informed about end-systems is a key design decision.
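The pre-admission posture-and-role logic described above can be sketched as follows. This is a minimal illustration only: the field names, thresholds and VLAN labels are assumptions for the example, not taken from any particular NAC product or standard.

```python
from dataclasses import dataclass

# Hypothetical posture report an agent (or remote scan) might supply;
# the fields are illustrative, not from any specific NAC implementation.
@dataclass
class Posture:
    antivirus_signature_age_days: int
    os_patches_current: bool

def admission_decision(role: str, posture: Posture) -> str:
    """Pre-admission check: verify posture first, then scope access by role."""
    # Non-compliant endpoints are sent to a restricted network that can
    # only reach patch/update servers (the quarantine strategy).
    if posture.antivirus_signature_age_days > 7 or not posture.os_patches_current:
        return "QUARANTINE_VLAN"
    # Compliant endpoints get access scoped to the user's role.
    if role == "HR":
        return "ALLOW_HR_SEGMENT"
    return "ALLOW_DEFAULT"

print(admission_decision("HR", Posture(2, True)))   # compliant HR endpoint
print(admission_decision("HR", Posture(30, True)))  # stale antivirus: quarantined
```

A real deployment would enforce the returned decision in switches or middleboxes (for example via VLAN assignment), as described in the sections that follow.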
A key difference among NAC systems is whether they require agent software to report end-system characteristics, or whether they use scanning and network inventory techniques to discern those characteristics remotely. As NAC matured, software developers such as Microsoft adopted the agent-based approach, providing their network access protection (NAP) agent as part of their Windows 7, Vista and XP releases. However, beginning with Windows 10, Microsoft no longer supports NAP. There are also NAP-compatible agents for Linux and Mac OS X that provide equal intelligence for these operating systems. === Out-of-band versus inline === In some out-of-band systems, agents are distributed on end-stations and report information to a central console, which in turn can control switches to enforce policy. In contrast, inline solutions can be single-box solutions which act as internal firewalls for access-layer networks and enforce the policy. Out-of-band solutions have the advantage of reusing existing infrastructure; inline products can be easier to deploy on new networks, and may provide more advanced network enforcement capabilities, because they are directly in control of individual packets on the wire. However, there are agentless products that combine the inherent advantages of easier, less risky out-of-band deployment with techniques that provide inline effectiveness for non-compliant devices, where enforcement is required. === Remediation, quarantine and captive portals === Network operators deploy NAC products with the expectation that some legitimate clients will be denied access to the network (if users never had out-of-date patch levels, NAC would be unnecessary). Because of this, NAC solutions require a mechanism to remediate the end-user problems that deny them access.
Two common strategies for remediation are quarantine networks and captive portals: Quarantine A quarantine network is a restricted IP network that provides users with routed access only to certain hosts and applications. Quarantine is often implemented in terms of VLAN assignment; when a NAC product determines that an end-user is out-of-date, their switch port is assigned to a VLAN that is routed only to patch and update servers, not to the rest of the network. Other solutions use Address Management techniques (such as Address Resolution Protocol (ARP) or Neighbor Discovery Protocol (NDP)) for quarantine, avoiding the overhead of managing quarantine VLANs. Captive portals A captive portal intercepts HTTP access to web pages, redirecting users to a web application that provides instructions and tools for updating their computer. Until their computer passes automated inspection, no network usage besides the captive portal is allowed. This is similar to the way paid wireless access works at public access points. External Captive Portals allow organizations to offload wireless controllers and switches from hosting web portals. A single external portal hosted by a NAC appliance for wireless and wired authentication eliminates the need to create multiple portals, and consolidates policy management processes. == Mobile NAC == Using NAC in a mobile deployment, where workers connect over various wireless networks throughout the workday, involves challenges that are not present in a wired LAN environment. When a user is denied access because of a security concern, productive use of the device is lost, which can impact the ability to complete a job or serve a customer. In addition, automated remediation that takes only seconds on a wired connection may take minutes over a slower wireless data connection, bogging down the device. A mobile NAC solution gives system administrators greater control over whether, when and how to remediate the security concern. 
A lower-grade concern such as out-of-date antivirus signatures may result in a simple warning to the user, while more serious issues may result in quarantining the device. Policies may be set so that automated remediation, such as pushing out and applying security patches and updates, is withheld until the device is connected over a Wi-Fi or faster connection, or after working hours. This allows administrators to most appropriately balance the need for security against the goal of keeping workers productive. == See also == Network Access Protection Network Admission Control Trusted Network Connect Unified threat management == References == == External links == Booz Allen Hamilton leaves 60k unsecured files on DOD server
Wikipedia/Network_access_control
The usage share of an operating system is the percentage of computers running that operating system (OS). These statistics are estimates, as wide-scale OS usage data is difficult to obtain and measure. Reliable primary sources are limited and data collection methodology is not formally agreed upon. Currently, data collected from devices connected to the internet allows OS usage to be measured approximately. As of March 2025, Android, which uses the Linux kernel, is the world's most popular operating system with 46% of the global market, followed by Windows with 25%, iOS with 18%, macOS with 6%, and other operating systems with 5%. This is for all device types excluding embedded devices. For smartphones and other mobile devices, Android has 72% market share, and Apple's iOS has 28%. For desktop computers and laptops, Microsoft Windows has 71%, followed by Apple's macOS at 16%, unknown operating systems at 8%, desktop Linux at 4%, then Google's ChromeOS at 2%. For tablets, Apple's iPadOS (a variant of iOS) has 52% share and Android has 48% worldwide. For the top 500 most powerful supercomputers, Linux distributions have had 100% of the market share since 2017. The global server operating system market has Linux leading with a 62.7% share, followed by Windows, Unix and other operating systems. Linux is also most used for web servers, and the most common Linux distribution is Ubuntu, followed by Debian. Linux has almost caught up with the second-most popular desktop OS, macOS, in some regions, such as South America, and in Asia it is at 6.4% (7% with ChromeOS) vs 9.7% for macOS. In the US, ChromeOS is third at 5.5%, followed by desktop Linux at 4.3%, which can arguably be combined into a single figure of 9.8%. The most numerous type of device with an operating system is the embedded system.
Not all embedded systems have operating systems, instead running their application code on the "bare metal"; of those that do have operating systems, a high percentage are standalone or do not have a web browser, which makes their usage share difficult to measure. Some operating systems used in embedded systems are more widely used than some of those mentioned above; for example, modern Intel microprocessors contain an embedded management processor running a version of the Minix operating system. == Worldwide device shipments == According to Gartner, the following is the worldwide device shipments (referring to wholesale) by operating system, which includes smartphones, tablets, laptops and PCs together. Shipments (to stores) do not necessarily translate to sales to consumers, therefore suggesting the numbers indicate popularity and/or usage could be misleading. Not only do smartphones sell in higher numbers than PCs, but also a lot more by dollar value, with the gap only projected to widen, to well over double. On 27 January 2016, Paul Thurrott summarized the operating system market, the day after Apple announced "one billion devices": Apple's "active installed base" is now one billion devices. [..] Granted, some of those Apple devices were probably sold into the marketplace years ago. But that 1 billion figure can and should be compared to the numbers Microsoft touts for Windows 10 (200 million, most recently) or Windows more generally (1.5 billion active users, a number that hasn’t moved, magically, in years), and that Google touts for Android (over 1.4 billion, as of September). My understanding of iOS is that the user base was previously thought to be around 800 million strong, and when you factor out Macs and other non-iOS Apple devices, that's probably about right. But as you can see, there are three big personal computing platforms. 
=== PC shipments === For 2015 (and earlier), Gartner reports for "the year, worldwide PC shipments declined for the fourth consecutive year, which started in 2012 with the launch of tablets" with an 8% decline in PC sales for 2015 (not including cumulative decline in sales over the previous years). Microsoft backed away from their goal of one billion Windows 10 devices in three years (or "by the middle of 2018") and reported on 26 September 2016 that Windows 10 was running on over 400 million devices, and in March 2019 on more than 800 million. In May 2020, Gartner predicted further decline in all market segments for 2020 due to COVID-19, predicting a decline of 13.6% for all devices, while the "Work from Home Trend Saved PC Market from Collapse", with a decline of only 10.5% predicted for PCs. However, in the end, according to Gartner, PC shipments grew 10.7% in the fourth quarter of 2020 and reached 275 million units in 2020, a 4.8% increase from 2019 and the highest growth in ten years. Apple, in 4th place for PCs, had the largest growth in shipments for a company in Q4, at 31.3%, while "the fourth quarter of 2020 was another remarkable period of growth for Chromebooks, with shipments increasing around 200% year over year to reach 11.7 million units. In 2020, Chromebook shipments increased over 80% to total nearly 30 million units, largely due to demand from the North American education market." Chromebooks sold more (30 million) than Apple's Macs worldwide (22.5 million) in pandemic year 2020. According to the Catalyst group, the year 2021 had record high PC shipments with total shipments of 341 million units (including Chromebooks), 15% higher than 2020 and 27% higher than 2019, while being the largest shipment total since 2012. According to Gartner, worldwide PC shipments declined by 16.2% in 2022, the largest annual decrease since the mid-1990s, due to geopolitical, economic, and supply chain challenges.
=== Tablet computers shipments === In 2015, eMarketer estimated at the beginning of the year that the tablet installed base would hit one billion for the first time (with China's use at 328 million, which Google Play doesn't serve or track, and the United States's use second at 156 million). At the end of the year, because of cheap tablets – not counted by all analysts – that goal was met (even excluding cumulative sales of previous years) as: Sales quintupled to an expected 1 billion units worldwide this year, from 216 million units in 2014, according to projections from the Envisioneering Group. While that number is far higher than the 200-plus million units globally projected by research firms IDC, Gartner and Forrester, Envisioneering analyst Richard Doherty says the rival estimates miss all the cheap Asian knockoff tablets that have been churning off assembly lines.[..] Forrester says its definition of tablets "is relatively narrow" while IDC says it includes some tablets by Amazon — but not all.[..] The top tech purchase of the year continued to be the smartphone, with an expected 1.5 billion sold worldwide, according to projections from researcher IDC. Last year saw some 1.2 billion sold.[..] Computers didn’t fare as well, despite the introduction of Microsoft's latest software upgrade, Windows 10, and the expected but not realized bump it would provide for consumers looking to skip the upgrade and just get a new computer instead. Some 281 million PCs were expected to be sold, according to IDC, down from 308 million in 2014. Folks tend to be happy with the older computers and keep them for longer, as more of our daily computing activities have moved to the smartphone.[..] While Windows 10 got good reviews from tech critics, only 11% of the 1-billion-plus Windows user base opted to do the upgrade, according to Microsoft. This suggests Microsoft has a ways to go before the software gets "hit" status. 
Apple's new operating system El Capitan has been downloaded by 25% of Apple's user base, according to Apple. This conflicts with statistics from IDC that say the tablet market contracted by 10% in 2015, with only Huawei, ranked fifth, making big gains, more than doubling its share; for fourth quarter 2015, the five biggest vendors were the same except that Amazon Fire tablets ranked third worldwide, new on the list, enabled by its not quite tripling of market share to 7.9%, with its Fire OS Android-derivative. Gartner excludes some devices from their tablet shipment statistic and includes them in a different category called "premium ultramobiles", with screen sizes of more than 10 inches. === Smartphone shipments === There are more mobile phone owners than toothbrush owners, with mobile phones the fastest-growing technology in history. There are a billion more active mobile phones in the world than people (and many more than 10 billion sold so far, with less than half still in use), explained by the fact that some people have more than one, such as an extra for work. All the phones have an operating system, but only a fraction of them are smartphones with an OS capable of running modern applications. In 2018, 3.1 billion smartphones and tablets were in use across the world (with tablets, a small fraction of the total, generally running the same operating systems, Android or iOS, the latter being more popular on tablets. In 2019, a variant of iOS called iPadOS built for iPad tablets was released). On 28 May 2015, Google announced that there were 1.4 billion Android users and 1 billion Google Play users active during that month. This changed to 2 billion monthly active users in May 2017. By late 2016, Android had been said to be "killing" Apple's iOS market share (i.e. its declining sales of smartphones, not just relatively but also by number of units, when the whole market was increasing).
Gartner's press release stated: "Apple continued its downward trend with a decline of 7.7 percent in the second quarter of 2016" – a decline in absolute number of units, which understates the relative decline (with the market increasing) – along with the misleading "1.7 percent [point]" decline. That point decline corresponds to an 11.6% relative decline (from 14.6% down to 12.9%). Although by units sold Apple was declining in the late 2010s, the company was almost the only vendor making any profit in the smartphone sector from hardware sales alone. In Q3 2016, for example, it captured 103.6% of the market's profits (a figure above 100% is possible because most other vendors ran losses). In May 2019, the biggest smartphone companies (by market share) were Samsung, Huawei and Apple, respectively. In November 2024, a new competitor to Android and iOS emerged, when sales of the Huawei Mate 70 started with the all-new operating system HarmonyOS NEXT installed on the flagship device. Future Huawei devices are to be sold mainly with this operating system, creating a third player on the market for smartphone operating systems. The following table shows worldwide smartphone sales to end users by operating systems, as measured by Gartner, International Data Corporation (IDC) and others: == Web clients == Data from various sources published over the 2021/2022 period is summarized in the table below. All of these sources monitor a substantial number of websites; any statistics that relate to only one web site have been excluded. Android currently ranks highest, above Windows (incl. Xbox console) systems. Windows Phone accounted for 0.51% of the web usage, before it was discontinued. Considering all personal computers, Microsoft Windows is well below 50% usage share on every continent, and at 30% in the US (24% single-day low) and in many countries lower, e.g. China, and in India at 19% (12% some days), and Windows' lowest share globally was 29% in May 2022 (25% some days), and 29% in the US.
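The percentage-point versus relative-decline distinction above can be checked with a line of arithmetic:

```python
# Apple's smartphone share fell from 14.6% to 12.9% (Gartner, Q2 2016).
old_share, new_share = 14.6, 12.9

point_decline = old_share - new_share               # in percentage points
relative_decline = point_decline / old_share * 100  # in percent, relative to the old share

print(f"{point_decline:.1f} percentage points")   # 1.7 percentage points
print(f"{relative_decline:.1f}% relative decline")  # 11.6% relative decline
```

The same absolute-units caveat applies: with a growing overall market, even a flat unit count is a relative decline.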
For a short time, iOS was slightly more popular than Windows in the US, but this is no longer the case. Worldwide, Android holds 45.49%, more than Windows at 25.35%, with iOS third at 18.26%. In Africa, Android is at 66.07% and Windows at 13.46% (with iOS third at 10.24%). Before iOS became the most popular operating system in any independent country, it was most popular in Guam, an unincorporated territory of the United States, for four consecutive quarters in 2017–18, although Android is now the most popular there. iOS has been the highest ranked OS in Jersey (a British Crown dependency in Europe) for years, by a wide margin, and iOS was also highest ranked in Falkland Islands, a British Overseas Territory, for one quarter in 2019, before being overtaken by Android in the following quarter. iOS is competitive with Windows in Sweden, where on some days it is used more. The designation of an "Unknown" operating system is unusually high in a few countries such as Madagascar, where it was at 32.44% (it is no longer nearly as high). This may be due to the fact that StatCounter uses browser detection to get OS statistics, and the browsers most commonly used there may not be reliably detected. The version breakdown for browsers in Madagascar shows "Other" at 34.9%, and Opera Mini 4.4 is the most popular known browser at 22.1% (plus e.g. 3.34% for Opera 7.6). However, browser statistics without version breakdown have Opera at 48.11%, with the "Other" category very small. In China, Android became the highest ranked operating system in July 2016 (Windows has occasionally topped it since then, but since April 2016 non-mobile operating systems as a whole have not outranked the mobile operating systems, meaning Android plus iOS). In the Asian continent as a whole, Android has been ranked highest since February 2016 and Android alone has the majority share, because of a large majority in all the most populous countries of the continent, up to 84% in Bangladesh, where it has had over 70% share for over four years.
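The "Unknown" share discussed above arises because OS statistics are derived from browser user-agent strings. A minimal sketch of the idea follows; the patterns are illustrative assumptions only, far simpler than any real detection rule set such as StatCounter's:

```python
import re

# Illustrative user-agent patterns; real rule sets are much larger.
OS_PATTERNS = [
    ("Android", re.compile(r"Android")),
    ("iOS", re.compile(r"iPhone|iPad")),
    ("Windows", re.compile(r"Windows NT")),
]

def detect_os(user_agent: str) -> str:
    for name, pattern in OS_PATTERNS:
        if pattern.search(user_agent):
            return name
    # Proxy browsers such as Opera Mini may not expose the underlying OS.
    return "Unknown"

# Tally a (hypothetical) sample of web hits into per-OS shares.
hits = [
    "Mozilla/5.0 (Linux; Android 11) ...",
    "Opera/9.80 (J2ME/MIDP; Opera Mini/4.4/28.2725) ...",
    "Mozilla/5.0 (Windows NT 10.0; Win64) ...",
]
counts: dict[str, int] = {}
for ua in hits:
    counts[detect_os(ua)] = counts.get(detect_os(ua), 0) + 1
shares = {os_name: n / len(hits) * 100 for os_name, n in counts.items()}
print(shares)  # per-OS share in percent
```

When the dominant local browsers fall through to the final branch, as with Opera Mini in Madagascar, a large "Unknown" share results.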
Android has been ranked first in the African continent since August 2015 (at 48.36% in May 2016), when it took a big jump ahead of Windows 7; Africa thereby joined Asia as a mobile-majority continent. China is no longer a desktop-majority country, joining India, which has a mobile majority of 71%, confirming Asia's significant mobile majority. Online usage of Linux kernel derivatives (Android + ChromeOS + other Linux) exceeds that of Windows. This has been true since some time between January and April 2016, according to W3Counter and StatCounter. However, even before that, the figure for all Unix-like OSes, including those from Apple, was higher than that for Windows. == Desktop and laptop computers == Windows is still the dominant desktop OS, but the dominance varies by region and it has gradually lost market share to other desktop operating systems (not just to mobile), with the slide very noticeable in the US, where macOS usage more than quadrupled from Jan. 2009 to Dec. 2020 to 30.62% (i.e. in the Christmas month; and 34.72% in April 2020 in the middle of COVID-19, a year in which iOS was more popular overall; globally, Windows lost to Android that year, as in the two years prior), with Windows down to 61.136% and ChromeOS at 5.46%, plus traditional Linux at 1.73%. There is little openly published information on the device shipments of desktop and laptop computers. Gartner publishes estimates, but the way the estimates are calculated is not openly published. Another source of market share data for operating systems is StatCounter, which bases its estimates on web use (although this may not be very accurate). Also, sales may overstate usage. Most computers are sold with a pre-installed operating system, with some users replacing that OS with a different one due to personal preference, or installing another OS alongside it and using both. Conversely, sales underestimate usage by not counting unauthorized copies.
For example, in 2009, approximately 80% of software sold in China consisted of illegitimate copies. In 2007, the statistics from an automated update of IE7 for registered Windows computers differed from the observed web browser share, leading one writer to estimate that 25–35% of all Windows XP installations were unlicensed. The usage share of Windows 10, Microsoft's then-latest operating system version, slowly increased from July/August 2016, reaching around 27.15% (of all Windows versions, not all desktop or all operating systems) in December 2016. It eventually reached 79.79% on 5 October 2021, the same day on which its successor Windows 11 was released. In the United States, usage of Windows XP has dropped to 0.38% (of all Windows versions), and its global average to 0.59%, while in Africa it is still at 2.71%, and in Armenia it is more than 70%, as of 2017. StatCounter web usage data of desktop or laptop operating systems varies significantly by country. For example, in 2017, macOS usage in North America was at 16.82% (17.52% in the US) whereas in Asia it was only 4.4%. As of July 2023, macOS usage has increased to 30.81% in North America (31.77% in the US) and to 9.64% in Asia. The 2023 Stack Overflow developer survey received 87,222 responses. However, usage of a particular system as a desktop or as a server was not differentiated in the survey responses. The operating system share among those identifying as professional developers (respondents could select more than one system, so shares sum to more than 100%) was: Windows: 46.91% macOS: 33% Ubuntu: 26.69% BSD: 0.59% === Microsoft data on Windows usage === In June 2016, Microsoft claimed Windows 10 had half the market share of all Windows installations in the US and UK, as quoted by BetaNews: Microsoft's Windows trends page [shows] Windows 10 hit 50 percent in the US (51 percent in the UK, 39 percent globally), while ... Windows 7 was on 38 percent (36 percent in the UK, 46 percent globally). A big reason for the difference in numbers comes down to how they are recorded. ...
actual OS usage (based on web browsing), while Microsoft records the number of devices Windows 10 is installed on. ... Microsoft also only records Windows 7, Windows 8, Windows 8.1 and Windows 10, while NetMarketShare includes both XP and Vista. === Desktop computer games === In recent years, Linux has gained more interest among gamers than ever before, especially thanks to projects like Wine and Proton. Wine allows Windows programs to run on Linux, while Valve's Proton makes many games on Steam directly playable without any additional configuration. The Linux version of Steam and devices such as the Steam Deck using Linux-based SteamOS have made Linux more accessible as a gaming platform. The number of Steam games currently available on Linux exceeds the total number of games available on the Xbox, Nintendo Switch and PlayStation platforms. In addition, on modern systems, Linux is able to offer comparable or sometimes higher performance than Windows in some games due to its lower system load. These developments have made Linux a more viable alternative to Windows for gaming than it was in the past. The digital video game distribution platform Steam publishes a monthly "Hardware & Software Survey", with the statistics below: ^† These figures, as reported by Steam, do not include SteamOS statistics. == Mobile devices == === Smartphones OS by usage === By Q1 2018, mobile operating systems on smartphones included Google's dominant Android (and variants) and Apple's iOS, which combined had an almost 100% market share. Smartphone penetration vs. desktop use differs substantially by country. Some countries, like Russia, still have smartphone use as low as 22.35% (as a fraction of all web use), but in most western countries, smartphone use is close to 50% of all web use. This does not mean that only half of the population has a smartphone; it could mean almost all do, with other platforms seeing about equal use.
Smartphone usage share in developing countries is much higher – in Bangladesh, for example, Android smartphones had up to 84% and currently a 70% share, and in Mali smartphones had over 90% (up to 95%) share for almost two years. (A section below has more information on regional trends on the move to smartphones.) There is a clear correlation between the GDP per capita of a country and that country's respective smartphone OS market share, with users in the richest countries being much more likely to choose Apple's iPhone, and Google's Android being predominant elsewhere. === Tablet computers OS by usage === Tablet computers, or simply tablets, became a significant OS market share category starting with Apple's iPad. In Q1 2018, iOS had 65.03% market share and Android had 34.58% market share. Windows tablets may not get classified as such by some analysts, and thus barely register; e.g. 2-in-1 PCs may get classified as "desktops", not tablets. Since 2016, in South America (and Cuba in North America), Android tablets have gained majority, and in Asia in 2017 Android was slightly more popular than the iPad, which was at 49.05% usage share in October 2015. In Africa, Android tablets are much more popular, while elsewhere the iPad has a safe margin. As of March 2015, Android has made steady gains towards becoming the most popular tablet operating system: that is the trend in many countries, having already gained the majority in large countries (India at 63.25%, Indonesia at 62.22%) and in the African continent with Android at 62.22% (the first continent to gain an Android majority, in late 2014), with steady gains from 20.98% in August 2012 (Egypt at 62.37%, Zimbabwe at 62.04%), and South America at 51.09% in July 2015 (Peru at 52.96%). Asia is at 46%. In Nepal, Android gained a majority in November 2014 but later lost it, falling to 41.35% with iOS at 56.51%. In Taiwan, as of October 2016, Android, after having gained a clear majority, has been steadily losing share.
China is a major exception to Android gaining tablet market share in Asia (there, Android phablets are much more popular than Android tablets, while similar devices get classified as smartphones); the iPad/iOS was at 82.84% in March 2015. === Crossover to smartphones having majority share === According to StatCounter web use statistics (a proxy for all use), smartphones are more popular than desktop computers globally (and Android in particular more popular than Windows). Including tablets with mobiles/smartphones, as they also run so-called mobile operating systems, mobile devices are more popular than the older operating systems originally made for desktops (such as Windows and macOS) even in the United States (and most countries). Windows in the US (at 33.42%) has only an 8% (2.55-percentage-point) head start over iOS alone; together, Android and iOS hold a 52.14% majority. Alternatively, Apple, with iOS plus its non-mobile macOS (9.33%), has 20% more share (6.7 percentage points more) than Microsoft's Windows in the country where both companies were built. Although desktop computers are still popular in many countries (while overall down to 44.9% in the first quarter of 2017), smartphones are more popular even in many developed countries. A few countries on all continents are desktop-minority with Android more popular than Windows; many, e.g. Poland in Europe, about half of the countries in South America, and many in North America, e.g. Guatemala, Honduras, Haiti; up to most countries in Asia and Africa are smartphone-majority because of Android, with Poland and Turkey highest in Europe at 57.68% and 62.33%, respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use, and mobile as a whole gains majority when including the tablet share at 9.12%. Spain was also slightly desktop-minority. As of July 2019, Sweden had been desktop-minority for eight weeks in a row.
The range of measured mobile web use varies a lot by country, and a StatCounter press release recognizes "India amongst world leaders in use of mobile to surf the internet" (of the big countries), where the share is around (or over) 80% and desktop is at 19.56%, with Russia trailing with 17.8% mobile use (and desktop the rest). Smartphones (discounting tablets) first gained majority in December 2016 (desktop-majority was lost the month before), and it was not a Christmas-time fluke: while only close to majority in the weeks after, smartphones reached majority again in March 2017. In the week of 7–13 November 2016, smartphones alone (without tablets) overtook desktop for the first time, albeit for a short period. Examples of mobile-majority countries include Paraguay in South America, Poland in Europe and Turkey, and most of Asia and Africa. Some of the world is still desktop-majority, with for example the United States at 54.89% (but not on all days). However, in some territories of the United States, such as Puerto Rico, desktop is significantly under majority, with Windows just under 25%, overtaken by Android. On 22 October 2016 (and subsequent weekends), mobile showed majority. Since 27 October, the desktop hasn't had a majority, including on weekdays. Smartphones alone showed majority from 23 December to the end of the year, with the share topping out at 58.22% on Christmas Day. To the "mobile"-majority share of smartphones, tablets could be added, giving a 63.22% majority. While an unusually high top, a similar high also occurred on Monday 17 April 2017, with the smartphone share slightly lower and tablet share slightly higher, combining to 62.88%. Formerly, according to a StatCounter press release, the world had turned desktop-minority; as of October 2016, at about 49% desktop use for that month, but mobile alone was not ranked higher; tablet share had to be added to it to exceed desktop share. For the Christmas season (i.e.
temporarily, while desktop-minority remains and smartphone-majority on weekends), the last two weeks in December 2016, Australia (and Oceania in general) was desktop-minority for the first time for an extended period, i.e. every day from 23 December. In South America, smartphones alone took majority from desktops on Christmas Day, but for a full-week average, desktop is still at least at 58%. UK desktop share dropped to a minority of 44.02% on Christmas Day and for the eight days to the end of the year. Ireland joined some other European countries with smartphone-majority for three days after Christmas, topping that day at 55.39%. In the US, desktop-minority happened for three days on and around Christmas (while a longer four-day stretch happened in November, and it happens frequently on weekends). According to StatCounter web use statistics (a proxy for all use), in the week from 7–13 November 2016, "mobile" (meaning smartphones) alone (without tablets) overtook desktop for the first time, ranking highest at 52.13% (on 27 November 2016), or up to 49.02% for a full week. Mobile-majority applies to countries such as Paraguay in South America, Poland in Europe and Turkey, and to the continents Asia and Africa. Large regions of the rest of the world are still desktop-majority, while on some days the United States (and North America as a whole) isn't; the US is desktop-minority up to four days in a row, and up to a five-day average.
Previously, according to a StatCounter press release, the world had turned desktop-minority: as of October 2016, desktop use for the month stood at about 49%, with desktop-minority stretching up to an 18-week (four-month) period from 28 June to 31 October 2016, although each of the whole months of July, August and September 2016 showed desktop-majority (many other long sub-periods within the stretch showed desktop-minority; similarly, Fridays, Saturdays and Sundays alone are desktop-minority). The biggest continents, Asia and Africa, have shown a vast mobile-majority for a long time (on any day of the week), and several individual countries elsewhere have also turned mobile-majority: Poland, Albania (and Turkey) in Europe, and Paraguay and Bolivia in South America. According to StatCounter's web use statistics, Saturday 28 May 2016 was the first day smartphones ("mobile" at StatCounter, which now counts tablets separately) were the most used platform, ranking first at 47.27%, above desktop. The next day, desktop slightly outnumbered "mobile" (unless counting tablets; some analysts count tablets with smartphones or separately, while others count them with desktops, even though most tablets are iPad or Android rather than Windows devices). Since Sunday 27 March 2016, the first day the world dipped to desktop-minority, it has happened almost every week, and by the week of 11–17 July 2016 the world was desktop-minority, as it was the following week and then for a three-week period. The trend is still stronger on weekends: 17 July 2016, for example, showed desktop at 44.67% and "mobile" at 49.5%, plus 5.7% for tablets. Recent weekly data shows a downward trend for desktop. According to StatCounter web use statistics (a proxy for overall use), desktops worldwide lose about 5 percentage points on weekends, e.g.
down to 51.46% on 15 August 2015, with the loss in (relative) web use going to mobile (and a minuscule increase for tablets), mostly because Windows 7, ranked first on workdays, declines in web use, shifting to Android and, to a lesser degree, iOS. Two continents have already crossed over to mobile-majority (because of Android), based on StatCounter's web use statistics. In June 2015, Asia became the first continent where mobile overtook desktop (followed by Africa in August; Nigeria, though, had mobile majority as early as October 2011, because of Symbian, which later had a 51% share, with Series 40 then dominating, followed by Android as the dominant operating system), and as far back as October 2014 StatCounter had reported this trend on a large scale in a press release: "Mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia". In India, desktop went from a majority in July 2012 down to 32%. In Bangladesh, desktop went from a majority in May 2013 down to 17%, with Android alone now accounting for majority web use. Only a few African countries were still desktop-majority, and many have a large mobile majority, including Ethiopia and Kenya, where mobile usage is over 72%. The popularity of mobile use worldwide has been driven by the huge rise of Android in Asian countries, where Android is statistically the highest-ranked operating system in virtually every South-East Asian country, and it also ranks most popular in almost every African country. Poland has been desktop-minority since April 2015, because Android is vastly more popular there, and other European countries, such as Albania (and Turkey), have also crossed over. The South American continent is somewhat further from losing desktop-majority, but Paraguay had lost it as of March 2015.
Android, and mobile browsing in general, has also become hugely popular on all other continents, where desktop retains a large installed base and the trend to mobile is not as clear as a fraction of total web use. While some analysts count tablets with desktops (as some of them run Windows), others count them with mobile phones (as the vast majority of tablets run so-called mobile operating systems, such as Android or iOS on the iPad). The iPad has a clear lead globally, but has lost the majority to Android in South America and in a number of Eastern European countries such as Poland; it has lost virtually all African countries, and has twice lost the majority in Asia but regained it (while many individual countries, e.g. India and most of the Middle East, have a clear Android majority on tablets). Android on tablets is thus the second most popular after the iPad. In March 2015, for the first time in the US, the number of mobile-only adult internet users exceeded the number of desktop-only internet users, with 11.6% of the digital population using only mobile compared to 10.6% using only desktop; this also means the majority, 78%, use both desktop and mobile to access the internet. A few smaller countries in North America, such as Haiti, have gone mobile-majority (because of Android); mobile went up to 72.35% and stood at 64.43% in February 2016. === Revenue === The region with the largest Android usage also has the largest mobile revenue. == Public servers on the Internet == Internet-based servers' market share can be measured with statistical surveys of publicly accessible servers, such as web servers, mail servers or DNS servers on the Internet: the operating systems powering such servers are found by inspecting raw response messages. This method gives insight only into the market share of operating systems that are publicly accessible on the Internet. There will be differences in the result depending on how the sample is taken and observations weighted.
Usually the surveys are not based on a random sample of all IP addresses, domain names, hosts or organisations, but on servers found by some other method. Additionally, many domains and IP addresses may be served by one host, and some domains may be served by several hosts or by one host with several IP addresses. Note that revenue comparisons often include "operating system software, other bundled software" and are not appropriate for usage comparison, as the Linux operating system costs nothing (including "other bundled software"), except when optionally using commercial distributions such as Red Hat Enterprise Linux (in that case, the cost of all software bundled with the hardware has to be known for all operating systems involved, and subtracted). In cases where no-cost Linux is used, such comparisons underestimate Linux server popularity and overestimate proprietary operating systems such as Unix and Windows. == Mainframes == Mainframes are larger and more powerful than most servers, but not supercomputers. They are used to process large sets of data, for example for enterprise resource planning or credit card transactions. The most common operating system for mainframes is IBM's z/OS. Operating systems for IBM Z generation hardware include IBM's proprietary z/OS, Linux on IBM Z, z/TPF, z/VSE and z/VM. Gartner reported on 23 December 2008 that Linux on System z was used on approximately 28% of the "customer z base" and that they expected this to increase to over 50% in the following five years. For Linux on IBM Z, Red Hat and Micro Focus compete to sell RHEL and SLES respectively: prior to 2006, Novell claimed a market share of 85% or more for SUSE Linux Enterprise Server; Red Hat has since claimed 18.4% in 2007 and 37% in 2008; and Gartner reported at the end of 2008 that Novell's SUSE Linux Enterprise Server had an 80% share of mainframe Linux.
=== Decline === Echoing today's shift from personal computers to mobile devices, in 1984 estimated sales of desktop computers ($11.6 billion) exceeded those of mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue. From 1991 to 1996, AT&T Corporation briefly owned NCR, one of the major original mainframe producers. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems, given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks. In 2012, NASA powered down its last mainframe, an IBM System z9. However, the z10, IBM's successor to the z9, had led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for IBM, and mainframes are still the back-office engines behind the world's financial markets and much of global commerce". As of 2010, while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results". == Supercomputers == The TOP500 project lists and ranks the 500 fastest supercomputers for which benchmark results are submitted. Since the early 1990s, the field of supercomputers has been dominated by Unix or Unix-like operating systems, and since 2017 every one of the top 500 fastest supercomputers has run Linux as its operating system.
The last supercomputer to rank #1 while running an operating system other than Linux was ASCI White, which ran AIX. It held the title from November 2000 to November 2001 and was decommissioned in 2006. In June 2017, two AIX computers held ranks 493 and 494, the last non-Linux systems before they dropped off the list. Historically, all kinds of Unix operating systems dominated; ultimately, Linux alone remains. == Market share by category == == See also == Comparison of operating systems List of operating systems Timeline of operating systems Usage share of web browsers Mobile OS market share == Notes == == References ==
Wikipedia/Usage_share_of_operating_systems
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses predetermined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence. 
== Communicating systems == === History === The first use of the term protocol in a modern data communication context occurs in April 1967 in a memorandum entitled A Protocol for Use in the NPL Data Communications Network. Under the direction of Donald Davies, who pioneered packet switching at the National Physical Laboratory in the United Kingdom, it was written by Roger Scantlebury and Keith Bartlett for the NPL network. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, written by Bob Kahn, which defined the transmission of messages to an IMP. The Network Control Program (NCP) for the ARPANET, developed by Steve Crocker and other graduate students including Jon Postel and Vint Cerf, was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept. The CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to implement the end-to-end principle and make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP). Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). Its RFC 675 specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time.
The International Network Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the CCITT nor by the ARPANET. Separate international research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the CCITT in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems. TCP software was redesigned as a modular protocol stack, referred to as TCP/IP. This was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became the core component of the emerging Internet. International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. === Concept === The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other.
This communication is governed by well-understood protocols, which can be embedded in the process code itself. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk. The protocols can be arranged based on functionality in groups, for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer. 
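The protocol-selector mechanism just described can be sketched as follows. This is a minimal illustration, not any real protocol: the two-entry selector table is invented (the numbers echo the IANA IP protocol numbers for TCP and UDP, but nothing here implements those protocols).

```python
# Each layer prepends a one-byte selector identifying the protocol of the
# layer above, so the receiving layer knows which module to hand the rest to.
SELECTORS = {"TCP": 6, "UDP": 17}                  # illustrative table
BY_NUMBER = {v: k for k, v in SELECTORS.items()}

def encapsulate(payload: bytes, upper_protocol: str) -> bytes:
    """Tag the payload with the selector of the protocol above."""
    return bytes([SELECTORS[upper_protocol]]) + payload

def decapsulate(packet: bytes) -> tuple[str, bytes]:
    """Read the selector and return (protocol name, remaining bytes)."""
    return BY_NUMBER[packet[0]], packet[1:]
```

Stacking one such call per layer yields the nested headers a real packet carries on the wire.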
== Types == There are two types of communication protocols, based on their representation of the content being carried: text-based and binary. === Text-based === A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text encoded in a machine-readable encoding such as ASCII or UTF-8, or in structured text-based formats such as Intel hex format, XML or JSON. The immediate human readability stands in contrast to native binary protocols which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization). Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), early versions of HTTP (Hypertext Transfer Protocol), and the finger protocol. Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. === Binary === A binary protocol utilizes all values of a byte, as opposed to a text-based protocol which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary protocols have been used in the normative documents describing modern standards like EbXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol.
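The contrast can be made concrete by carrying the same hypothetical command both ways; the message format here is invented for illustration, not taken from any real protocol.

```python
import struct

# Hypothetical command "set channel 3 to value 21", first as a CRLF-terminated
# ASCII line (the style of FTP, SMTP and early HTTP), then as packed binary.
text_msg = "SET 3 21\r\n".encode("ascii")
binary_msg = struct.pack("!BHH", 0x01, 3, 21)   # opcode, channel, value

# The text form is parsed by splitting on whitespace...
command, channel, value = text_msg.decode("ascii").split()
# ...the binary form by unpacking fixed-width fields (network byte order).
opcode, bin_channel, bin_value = struct.unpack("!BHH", binary_msg)
```

The binary form is half the size here and trivially machine-parseable, but unreadable without the format definition; the text form can be typed and inspected by hand.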
== Basic requirements == Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:
Data formats for data exchange: Digital message bitstrings are exchanged. The bitstrings are divided into fields, and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. The header area contains the fields with relevance to the operation of the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.
Address formats for data exchange: Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually, some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme.
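The header/payload split and the all-1s broadcast convention just described can be sketched as follows; the one-byte address format and field layout are invented for illustration, not drawn from any real protocol.

```python
import struct

BROADCAST = 0xFF   # all-1s address: every station on the local network

def make_frame(sender: int, receiver: int, payload: bytes) -> bytes:
    """The header carries the address pair and length; the payload carries the message."""
    return struct.pack("!BBH", sender, receiver, len(payload)) + payload

def deliver(frame: bytes, my_address: int):
    """Process the frame if addressed to us (or broadcast), otherwise ignore it."""
    sender, receiver, length = struct.unpack("!BBH", frame[:4])
    if receiver in (my_address, BROADCAST):
        return frame[4:4 + length]
    return None   # not of interest to this station
```

A connection here would be identified by the (sender, receiver) address pair carried in every header.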
Address mapping: Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as address mapping.
Routing: When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called internetworking.
Detection of transmission errors: Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects the packets on CRC differences and arranges somehow for retransmission.
Acknowledgements: Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders.
Loss of information - timeouts and retries: Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error.
Direction of information flow: Direction needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links, or from one sender at a time, as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention, where two parties simultaneously transmit or wish to transmit.
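Two of the requirements above, CRC-based error detection and bounded retransmission on timeout, can be sketched together. This is a sketch only: the `transmit` callback stands in for a real channel whose return value models whether an acknowledgement arrived before the timeout.

```python
import zlib

def add_crc(data: bytes) -> bytes:
    """Append a CRC-32 of the data so the receiver can detect corruption."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def check_crc(packet: bytes):
    """Return the data if the CRC matches; None means reject and await retransmission."""
    data, crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    return data if zlib.crc32(data) == crc else None

def send_reliably(packet: bytes, transmit, max_retries: int = 3) -> bool:
    """Retransmit until acknowledged; the bounded retry count turns a dead link into an error."""
    for _ in range(max_retries):
        if transmit(packet):   # True models an ACK arriving before the timeout
            return True
    return False               # retry limit exceeded: report an error
```
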
Sequence control: If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions and reassemble the original message.
Flow control: Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
Queueing: Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization.
== Protocol design == Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines.
Mealy and Moore machines are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general. The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another. === Layering === In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite. The OSI model was developed internationally based on experience with networks that predated the internet as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering. Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. 
Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network. ==== Protocol layering ==== Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model. Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules, and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers. ==== Software layering ==== The software supporting protocols has a layered organization and its relationship with protocol layering is shown in figure 5. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module which sends the message over the communications channel to the bottom module of system B.
On the receiving system B the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B. Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary. ==== Strict layering ==== Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. Although the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers as abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis. === Design patterns === Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns. === Formal specification === Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard). Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol. == Protocol development == For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures.
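A finite-state description of the kind mentioned above can be written directly as a data structure: a transition table. The stop-and-wait sender below is a toy example invented for illustration, not drawn from any standard.

```python
# (state, event) -> next state, for a toy stop-and-wait sender.
TRANSITIONS = {
    ("IDLE",     "send"):    "WAIT_ACK",
    ("WAIT_ACK", "ack"):     "IDLE",
    ("WAIT_ACK", "timeout"): "WAIT_ACK",   # retransmit and keep waiting
}

def run(events, state="IDLE"):
    """Drive the machine; an event with no listed transition is a protocol error."""
    for event in events:
        try:
            state = TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"illegal event {event!r} in state {state}") from None
    return state
```

Because the table enumerates every legal interaction, checking the protocol's possible behaviors becomes mechanical, which is what makes finite-state models attractive for formal verification.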
Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol. === The need for protocol standards === The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communications (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition.
From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. === Standards organizations === Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other. Multiple standards bodies may be involved in the development of a protocol. If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another.
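The time-to-live invariant mentioned above is easy to illustrate: as long as every hop decrements the TTL, a packet caught in a routing loop is discarded after a bounded number of hops rather than circulating forever. A minimal sketch (the topology and function names are illustrative, not from any particular implementation):

```python
def forward(ttl: int, next_hop: dict, node: str, dest: str):
    """Follow next-hop entries until delivery or TTL exhaustion."""
    hops = 0
    while node != dest and ttl > 0:
        ttl -= 1               # the invariant: TTL decreases at every hop
        node = next_hop[node]
        hops += 1
    return node == dest, hops

# A two-node routing loop: A -> B -> A -> ...
looped = {"A": "B", "B": "A"}
delivered, hops = forward(ttl=8, next_hop=looped, node="A", dest="C")
# delivered is False and hops == 8: the looping packet dies after TTL hops
```

A definition that allowed an intermediary to forward without decrementing the TTL would break this bound, which is exactly the kind of cross-definition incompatibility the text warns about.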
=== The standardization process === In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a draft proposal is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject. === OSI standardization === A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. 
To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption). 
The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity. The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections on to one network connection or split one transport connection into several network connections. The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control. The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer. The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections. In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. 
Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is lacking in the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks and connectionless networks were first mentioned in an addendum to RM/OSI and later incorporated into an update to RM/OSI. At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code". The standardization process is described by RFC 2026. Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards. == Wire image == The wire image of a protocol is the information that a non-participant observer is able to glean from observing the protocol messages, including both information explicitly given meaning by the protocol and inferences made by the observer. Unencrypted protocol metadata is one source making up the wire image, and side-channels including packet timing also contribute. Different observers with different vantages may see different wire images. The wire image is relevant to end-user privacy and the extensibility of the protocol. If some portion of the wire image is not cryptographically authenticated, it is subject to modification by intermediate parties (i.e., middleboxes), which can influence protocol operation. Even if authenticated, if a portion is not encrypted, it will form part of the wire image, and intermediate parties may intervene depending on its content (e.g., dropping packets with particular flags). Signals deliberately intended for intermediary consumption may be left authenticated but unencrypted.
The wire image can be deliberately engineered, encrypting parts that intermediaries should not be able to observe and providing signals for what they should be able to. If provided signals are decoupled from the protocol's operation, they may become untrustworthy. Benign network management and research are affected by metadata encryption; protocol designers must balance observability for operability and research against ossification resistance and end-user privacy. The IETF announced in 2014 that it had determined that large-scale surveillance of protocol operations is an attack due to the ability to infer information from the wire image about users and their behaviour, and that the IETF would "work to mitigate pervasive monitoring" in its protocol designs; this had not been done systematically previously. The Internet Architecture Board recommended in 2023 that disclosure of information by a protocol to the network should be intentional, performed with the agreement of both recipient and sender, authenticated to the degree possible and necessary, only acted upon to the degree of its trustworthiness, and minimised and provided to a minimum number of entities. Engineering the wire image and controlling what signals are provided to network elements was a "developing field" in 2023, according to the IAB. == Ossification == Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognize. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols. 
Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult. Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties. == Taxonomies == Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. 
In networking equipment configuration, a term-of-art distinction is often drawn: The term protocol strictly refers to the transport layer, and the term service refers to protocols utilizing a protocol for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term service strictly refers to port numbers, and the term application is often used to refer to protocols identified through inspection signatures. == See also == Cryptographic protocol – Aspect of cryptography Lists of network protocols Protocol Builder – Programming tool to build network connectivity components == Notes == == References == === Bibliography === Radia Perlman (1999). Interconnections: Bridges, Routers, Switches, and Internetworking Protocols (2nd ed.). Addison-Wesley. ISBN 0-201-63448-1. In particular Ch. 18 on "network design folklore", which is also available online Gerard J. Holzmann (1991). Design and Validation of Computer Protocols. Prentice Hall. ISBN 0-13-539925-4. Douglas E. Comer (2000). Internetworking with TCP/IP - Principles, Protocols and Architecture (4th ed.). Prentice Hall. ISBN 0-13-018380-6. In particular Ch.11 Protocol layering. Also has a RFC guide and a Glossary of Internetworking Terms and Abbreviations. R. Braden, ed. (1989). Requirements for Internet Hosts -- Communication Layers. Internet Engineering Task Force abbr. IETF. doi:10.17487/RFC1122. RFC 1122. Describes TCP/IP to the implementors of protocol software. In particular the introduction gives an overview of the design goals of the suite. M. Ben-ari (1982). Principles of concurrent programming (10th Print ed.). Prentice Hall International. ISBN 0-13-701078-8. C.A.R. Hoare (1985). Communicating sequential processes (10th Print ed.). Prentice Hall International. ISBN 0-13-153271-5. R.D. Tennent (1981). Principles of programming languages (10th Print ed.). Prentice Hall International.
ISBN 0-13-709873-1. Brian W Marsden (1986). Communication network protocols (2nd ed.). Chartwell Bratt. ISBN 0-86238-106-1. Andrew S. Tanenbaum (1984). Structured computer organization (10th Print ed.). Prentice Hall International. ISBN 0-13-854605-3. Bryant, Stewart; Morrow, Monique, eds. (November 2009). Uncoordinated Protocol Development Considered Harmful. doi:10.17487/RFC5704. RFC 5704. Farrell, Stephen; Tschofenig, Hannes (May 2014). Pervasive Monitoring Is an Attack. doi:10.17487/RFC7258. RFC 7258. Trammell, Brian; Kuehlewind, Mirja (April 2019). The Wire Image of a Network Protocol. doi:10.17487/RFC8546. RFC 8546. Hardie, Ted, ed. (April 2019). Transport Protocol Path Signals. doi:10.17487/RFC8558. RFC 8558. Fairhurst, Gorry; Perkins, Colin (July 2021). Considerations around Transport Header Confidentiality, Network Operations, and the Evolution of Internet Transport Protocols. doi:10.17487/RFC9065. RFC 9065. Thomson, Martin; Pauly, Tommy (December 2021). Long-Term Viability of Protocol Extension Mechanisms. doi:10.17487/RFC9170. RFC 9170. Arkko, Jari; Hardie, Ted; Pauly, Tommy; Kühlewind, Mirja (July 2023). Considerations on Application - Network Collaboration Using Path Signals. doi:10.17487/RFC9419. RFC 9419. McQuistin, Stephen; Perkins, Colin; Fayed, Marwan (July 2016). Implementing Real-Time Transport Services over an Ossified Network. 2016 Applied Networking Research Workshop. doi:10.1145/2959424.2959443. hdl:1893/26111. Papastergiou, Giorgos; Fairhurst, Gorry; Ros, David; Brunstrom, Anna; Grinnemo, Karl-Johan; Hurtig, Per; Khademi, Naeem; Tüxen, Michael; Welzl, Michael; Damjanovic, Dragana; Mangiante, Simone (2017). "De-Ossifying the Internet Transport Layer: A Survey and Future Perspectives". IEEE Communications Surveys & Tutorials. 19: 619–639. doi:10.1109/COMST.2016.2626780. hdl:2164/8317. S2CID 1846371. Moschovitis, Christos J. P. (1999). History of the Internet: A Chronology, 1843 to the Present. ABC-CLIO. ISBN 978-1-57607-118-2. 
== External links == Javvin's Protocol Dictionary at the Wayback Machine (archived 2004-06-10) Overview of protocols in telecontrol field with OSI Reference Model
Wikipedia/Network_protocol
A network interface controller (NIC, also known as a network interface card, network adapter, LAN adapter and physical network interface) is a computer hardware component that connects a computer to a computer network. Early network interface controllers were commonly implemented on expansion cards that plugged into a computer bus. The low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard or contained in a USB-connected dongle, although network cards remain available. Modern network interface controllers offer advanced features such as interrupt and DMA interfaces to the host processors, support for multiple receive and transmit queues, partitioning into multiple logical interfaces, and on-controller network traffic processing such as the TCP offload engine. == Purpose == The network controller implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or Wi-Fi. This provides a base for a full network protocol stack, allowing communication among computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP). The NIC allows computers to communicate over a computer network, either by using cables or wirelessly. The NIC is both a physical layer and data link layer device, as it provides physical access to a networking medium and, for IEEE 802 and similar networks, provides a low-level addressing system through the use of MAC addresses that are uniquely assigned to network interfaces. == Implementation == Network controllers were originally implemented as expansion cards that plugged into a computer bus. The low cost and ubiquity of the Ethernet standard means that most new computers have a network interface controller built into the motherboard. Newer server motherboards may have multiple network interfaces built-in.
The Ethernet capabilities are either integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip. A separate network card is typically no longer required unless additional independent network connections are needed or some non-Ethernet type of network is used. A general trend in computer hardware is towards integrating the various components of systems on a chip, and this is also applied to network interface cards. An Ethernet network controller typically has an 8P8C socket where the network cable is connected. Older NICs also supplied BNC or AUI connections. Ethernet network controllers typically support 10 Mbit/s Ethernet, 100 Mbit/s Ethernet, and 1000 Mbit/s Ethernet varieties. Such controllers are designated as 10/100/1000, meaning that they can support data rates of 10, 100 or 1000 Mbit/s. 10 Gigabit Ethernet NICs are also available, and, as of November 2014, are beginning to be available on computer motherboards. Modular designs like SFP and SFP+ are highly popular, especially for fiber-optic communication. These define a standard receptacle for media-dependent transceivers, so users can easily adapt the network interface to their needs. LEDs adjacent to or integrated into the network connector inform the user of whether the network is connected, and when data activity occurs. The NIC may include ROM to store its factory-assigned MAC address. The NIC may use one or more of the following techniques to indicate the availability of packets to transfer: Polling is where the CPU examines the status of the peripheral under program control. Interrupt-driven I/O is where the peripheral alerts the CPU that it is ready to transfer data. NICs may use one or more of the following techniques to transfer packet data: Programmed input/output, where the CPU moves the data to or from the NIC to memory. Direct memory access (DMA), where a device other than the CPU assumes control of the system bus to move data to or from the NIC to memory.
This removes load from the CPU but requires more logic on the card. In addition, a packet buffer on the NIC may not be required and latency can be reduced. == Performance and advanced functionality == Multiqueue NICs provide multiple transmit and receive queues, allowing packets received by the NIC to be assigned to one of its receive queues. The NIC may distribute incoming traffic between the receive queues using a hash function. Each receive queue is assigned to a separate interrupt; by routing each of those interrupts to different CPUs or CPU cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed, improving performance. The hardware-based distribution of the interrupts, described above, is referred to as receive-side scaling (RSS).: 82  Purely software implementations also exist, such as the receive packet steering (RPS), receive flow steering (RFS), and Intel Flow Director.: 98, 99  Further performance improvements can be achieved by routing the interrupt requests to the CPUs or cores executing the applications that are the ultimate destinations for network packets that generated the interrupts. This technique improves locality of reference and results in higher overall performance, reduced latency and better hardware utilization because of the higher utilization of CPU caches and fewer required context switches. With multi-queue NICs, additional performance improvements can be achieved by distributing outgoing traffic among different transmit queues. By assigning different transmit queues to different CPUs or CPU cores, internal operating system contentions can be avoided. This approach is usually referred to as transmit packet steering (XPS).
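The hash-based queue assignment described above can be sketched in a few lines. Real NICs implementing RSS typically use a Toeplitz hash over the packet's flow tuple, keyed by the driver; the CRC-based stand-in below is an assumption made purely for illustration:

```python
import zlib

def select_rx_queue(src_ip: str, dst_ip: str,
                    src_port: int, dst_port: int,
                    num_queues: int = 4) -> int:
    """Map a flow 4-tuple to a receive-queue index, RSS-style."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# Packets of the same flow always hash to the same queue, preserving
# per-flow ordering, while distinct flows spread across the queues
# (and hence across the CPUs servicing each queue's interrupt).
q1 = select_rx_queue("10.0.0.1", "10.0.0.2", 40000, 443)
q2 = select_rx_queue("10.0.0.1", "10.0.0.2", 40000, 443)
# q1 == q2
```

Keeping a flow pinned to one queue is the design choice that lets RSS spread load without reordering packets within a TCP connection.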
Some products feature NIC partitioning (NPAR, also known as port partitioning) that uses SR-IOV virtualization to divide a single 10 Gigabit Ethernet NIC into multiple discrete virtual NICs with dedicated bandwidth, which are presented to the firmware and operating system as separate PCI device functions. Some NICs provide a TCP offload engine to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as Gigabit Ethernet and 10 Gigabit Ethernet, for which the processing overhead of the network stack becomes significant. Some NICs offer integrated field-programmable gate arrays (FPGAs) for user-programmable processing of network traffic before it reaches the host computer, allowing for significantly reduced latencies in time-sensitive workloads. Moreover, some NICs offer complete low-latency TCP/IP stacks running on integrated FPGAs in combination with userspace libraries that intercept networking operations usually performed by the operating system kernel; Solarflare's open-source OpenOnload network stack that runs on Linux is an example. This kind of functionality is usually referred to as user-level networking. == See also == Converged network adapter (CNA) Host adapter Intel Data Direct I/O (DDIO) Loopback interface Network monitoring interface card (NMIC) Virtual network interface (VIF) Wireless network interface controller (WNIC) == Notes == == References == == External links == "Physical Network Interface". Microsoft. "Predictable Network Interface Names". Freedesktop.org. Multi-queue network interfaces with SMP on Linux
Wikipedia/Network_interface_controller
High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period. There is now more dependence on these systems as a result of modernization. For example, to carry out their regular daily tasks, hospitals and data centers need their systems to be highly available. Availability refers to the ability of the user to access a service or system, whether to submit new work, update or modify existing work, or retrieve the results of previous work. If a user cannot access the system, it is considered unavailable from the user's perspective. The term downtime is generally used to describe periods when a system is unavailable. == Resilience == High availability is a property of network resilience, the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Threats and challenges for services can range from simple misconfiguration through large-scale natural disasters to targeted attacks. As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected. The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network.
These services include: supporting distributed processing, supporting network storage, and maintaining communication services such as video conferencing, instant messaging, online collaboration, and access to applications and data as needed. Resilience and survivability are used interchangeably according to the specific context of a given study. == Principles == There are three principles of systems design in reliability engineering that can help achieve high availability. Elimination of single points of failure. This means adding or building redundancy into the system so that failure of a component does not mean failure of the entire system. Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. Reliable systems must provide for reliable crossover. Detection of failures as they occur. If the two principles above are observed, then a user may never see a failure – but the maintenance activity must. == Scheduled and unscheduled downtime == A distinction can be made between scheduled and unscheduled downtime. Typically, scheduled downtime is a result of maintenance that is disruptive to system operation and usually cannot be avoided with a currently installed system design. Scheduled downtime events might include patches to system software that require a reboot or system configuration changes that only take effect upon a reboot. In general, scheduled downtime is usually the result of some logical, management-initiated event. Unscheduled downtime events typically arise from some physical event, such as a hardware or software failure or environmental anomaly. Examples of unscheduled downtime events include power outages, failed CPU or RAM components (or possibly other failed hardware components), an over-temperature related shutdown, logically or physically severed network connections, security breaches, or various application, middleware, and operating system failures.
If users can be warned away from scheduled downtimes, then the distinction is useful. But if the requirement is for true high availability, then downtime is downtime whether or not it is scheduled. Many computing sites exclude scheduled downtime from availability calculations, assuming that it has little or no impact upon the computing user community. By doing this, they can claim to have phenomenally high availability, which might give the illusion of continuous availability. Systems that exhibit truly continuous availability are comparatively rare and higher priced, and most have carefully implemented specialty designs that eliminate any single point of failure and allow online hardware, network, operating system, middleware, and application upgrades, patches, and replacements. For certain systems, scheduled downtime does not matter, for example, system downtime at an office building after everybody has gone home for the night. == Percentage calculation == Availability is usually expressed as a percentage of uptime in a given year. The following table shows the downtime that will be allowed for a particular percentage of availability, presuming that the system is required to operate continuously. Service level agreements often refer to monthly downtime or availability in order to calculate service credits to match monthly billing cycles. The following table shows the translation from a given availability percentage to the corresponding amount of time a system would be unavailable. The terms uptime and availability are often used interchangeably but do not always refer to the same thing. For example, a system can be "up" with its services not "available" in the case of a network outage. Or a system undergoing software maintenance can be "available" to be worked on by a system administrator, but its services do not appear "up" to the end user or customer. 
The subject of the terms is thus important here: whether the focus of a discussion is the server hardware, server OS, functional service, software service/process, or similar, it is only if there is a single, consistent subject of the discussion that the words uptime and availability can be used synonymously. === Five-by-five mnemonic === A simple mnemonic rule states that 5 nines allows approximately 5 minutes of downtime per year. Variants can be derived by multiplying or dividing by 10: 4 nines is 50 minutes and 3 nines is 500 minutes. In the opposite direction, 6 nines is 0.5 minutes (30 sec) and 7 nines is 3 seconds. === "Powers of 10" trick === Another memory trick to calculate the allowed downtime duration for an "n-nines" availability percentage is to use the formula 8.64 × 10^(4−n) seconds per day. For example, 90% ("one nine") yields the exponent 4 − 1 = 3, and therefore the allowed downtime is 8.64 × 10^3 seconds per day. Also, 99.999% ("five nines") gives the exponent 4 − 5 = −1, and therefore the allowed downtime is 8.64 × 10^−1 seconds per day. === "Nines" === Percentages of a particular order of magnitude are sometimes referred to by the number of nines or "class of nines" in the digits. For example, electricity that is delivered without interruptions (blackouts, brownouts or surges) 99.999% of the time would have 5 nines reliability, or class five. In particular, the term is used in connection with mainframes or enterprise computing, often as part of a service-level agreement. Similarly, percentages ending in a 5 have conventional names, traditionally the number of nines, then "five", so 99.95% is "three nines five", abbreviated 3N5.
This is casually referred to as "three and a half nines", but this is incorrect: a 5 is only a factor of 2, while a 9 is a factor of 10, so a 5 is 0.3 nines (per the formula below: log10 2 ≈ 0.3): 99.95% availability is 3.3 nines, not 3.5 nines. More simply, going from 99.9% availability to 99.95% availability is a factor of 2 (0.1% to 0.05% unavailability), but going from 99.95% to 99.99% availability is a factor of 5 (0.05% to 0.01% unavailability), over twice as much. A formulation of the class of 9s c based on a system's unavailability x would be c := ⌊−log10 x⌋ (cf. floor and ceiling functions). A similar measurement is sometimes used to describe the purity of substances. In general, the number of nines is not often used by a network engineer when modeling and measuring availability, because it is hard to apply in formulas. More often, the unavailability expressed as a probability (like 0.00001), or a downtime per year, is quoted. Availability specified as a number of nines is often seen in marketing documents. The use of the "nines" has been called into question, since it does not appropriately reflect that the impact of unavailability varies with its time of occurrence. For large numbers of 9s, the "unavailability" index (a measure of downtime rather than uptime) is easier to handle; this is why an "unavailability" metric rather than an availability metric is used, for example, for hard disk or data link bit error rates. Sometimes the humorous term "nine fives" (55.5555555%) is used to contrast with "five nines" (99.999%), though this is not an actual goal, but rather a sarcastic reference to something totally failing to meet any reasonable target. == Measurement and interpretation == Availability measurement is subject to some degree of interpretation.
A system that has been up for 365 days in a non-leap year might have been interrupted by a network failure that lasted for 9 hours during a peak usage period; the user community will see the system as unavailable, whereas the system administrator will claim 100% uptime. However, given the true definition of availability, the system will be approximately 99.9% available, or three nines (8751 hours of available time out of 8760 hours per non-leap year). Also, systems experiencing performance problems are often deemed partially or entirely unavailable by users, even when the systems are continuing to function. Similarly, unavailability of select application functions might go unnoticed by administrators yet be devastating to users – a true availability measure is holistic. Availability must be measured to be determined, ideally with comprehensive monitoring tools ("instrumentation") that are themselves highly available. If there is a lack of instrumentation, systems supporting high volume transaction processing throughout the day and night, such as credit card processing systems or telephone switches, are often inherently better monitored, at least by the users themselves, than systems which experience periodic lulls in demand. An alternative metric is mean time between failures (MTBF). == Closely related concepts == Recovery time (or estimated time of repair (ETR)), also known as recovery time objective (RTO), is closely related to availability: it is the total time required for a planned outage or the time required to fully recover from an unplanned outage. Another metric is mean time to recovery (MTTR). Recovery time could be infinite with certain system designs and failures, i.e. full recovery is impossible. One such example is a fire or flood that destroys a data center and its systems when there is no secondary disaster recovery data center.
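The percentage calculation and the class-of-nines formula above can be sketched in code. This is a minimal illustration; the function names and the 8,760-hour non-leap year are my own choices, not a standard API.

```python
import math

def downtime_per_year_hours(availability_pct, hours_per_year=8760):
    """Downtime budget, in hours, implied by an availability percentage."""
    return (1 - availability_pct / 100) * hours_per_year

def class_of_nines(unavailability):
    """c := floor(-log10(x)) for unavailability x, per the formulation above."""
    return math.floor(-math.log10(unavailability))

# "Three nines" allows roughly 8.76 hours of downtime per year:
print(round(downtime_per_year_hours(99.9), 2))   # → 8.76
# The 9-hour outage example: 8751 of 8760 hours up:
print(round(8751 / 8760 * 100, 2))               # → 99.9
# 99.95% ("three nines five") has unavailability 0.0005 and is class 3, not 3.5:
print(class_of_nines(0.0005))                    # → 3
```

Note that the floor makes the class a coarse measure: any unavailability between 0.001 and 0.0001 is class 3, which is one reason practitioners often quote the raw unavailability or downtime instead.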
Another related concept is data availability, that is, the degree to which databases and other information storage systems faithfully record and report system transactions. Information management often focuses separately on data availability, or recovery point objective, in order to determine acceptable (or actual) data loss with various failure events. Some users can tolerate application service interruptions but cannot tolerate data loss. A service level agreement ("SLA") formalizes an organization's availability objectives and requirements. == Military control systems == High availability is one of the primary requirements of the control systems in unmanned vehicles and autonomous maritime vessels. If the controlling system becomes unavailable, the Ground Combat Vehicle (GCV) or ASW Continuous Trail Unmanned Vessel (ACTUV) would be lost. == System design == On one hand, adding more components to an overall system design can undermine efforts to achieve high availability, because complex systems inherently have more potential failure points and are more difficult to implement correctly. While some analysts put forth the theory that the most highly available systems adhere to a simple architecture (a single, high-quality, multi-purpose physical system with comprehensive internal hardware redundancy), this architecture suffers from the requirement that the entire system must be brought down for patching and operating system upgrades. More advanced system designs allow for systems to be patched and upgraded without compromising service availability (see load balancing and failover). High availability requires less human intervention to restore operation in complex systems, because the most common cause of outages is human error. === High availability through redundancy === On the other hand, redundancy is used to create systems with high levels of availability (e.g. popular ecommerce websites).
In this case it is required to have high levels of failure detectability and avoidance of common cause failures. If redundant parts are used in parallel and fail independently (e.g. by not being within the same data center), they can exponentially increase the availability and make the overall system highly available. If there are N parallel components, each having availability X, the availability of the parallel combination is 1 − (1 − X)^N. So, for example, if each component has only 50% availability, using 10 such components in parallel achieves 99.9023% availability. Two kinds of redundancy are passive redundancy and active redundancy. Passive redundancy is used to achieve high availability by including enough excess capacity in the design to accommodate a performance decline. The simplest example is a boat with two separate engines driving two separate propellers. The boat continues toward its destination despite failure of a single engine or propeller. A more complex example is multiple redundant power generation facilities within a large system involving electric power transmission. Malfunction of single components is not considered to be a failure unless the resulting performance decline exceeds the specification limits for the entire system. Active redundancy is used in complex systems to achieve high availability with no performance decline. Multiple items of the same kind are incorporated into a design that includes a method to detect failure and automatically reconfigure the system to bypass failed items using a voting scheme. This is used with complex computing systems that are linked. Internet routing is derived from early work by Birman and Joseph in this area. Active redundancy may introduce more complex failure modes into a system, such as continuous system reconfiguration due to faulty voting logic.
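The parallel-redundancy formula above can be checked directly. This is a minimal sketch; the function name is mine, and the formula assumes the component failures are fully independent.

```python
def parallel_availability(x, n):
    """Availability of n independent parallel components, each with availability x.

    The combination is unavailable only if all n components are down at
    once, which happens with probability (1 - x)**n.
    """
    return 1 - (1 - x) ** n

# Ten components at 50% availability each, as in the example above:
print(round(parallel_availability(0.5, 10) * 100, 4))  # → 99.9023
```

In practice the gain is smaller than this formula suggests, since common-cause failures (shared power, shared data center, shared software bugs) break the independence assumption.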
Zero downtime system design means that modeling and simulation indicates mean time between failures significantly exceeds the period of time between planned maintenance, upgrade events, or system lifetime. Zero downtime involves massive redundancy, which is needed for some types of aircraft and for most kinds of communications satellites. The Global Positioning System is an example of a zero downtime system. Fault instrumentation can be used in systems with limited redundancy to achieve high availability. Maintenance actions occur during brief periods of downtime only after a fault indicator activates. Failure is only significant if this occurs during a mission critical period. Modeling and simulation is used to evaluate the theoretical reliability for large systems. The outcome of this kind of model is used to evaluate different design options. A model of the entire system is created, and the model is stressed by removing components. Redundancy simulation involves the N-x criterion: N represents the total number of components in the system, and x is the number of components used to stress the system. N-1 means the model is stressed by evaluating performance with all possible combinations where one component is faulted. N-2 means the model is stressed by evaluating performance with all possible combinations where two components are faulted simultaneously. == Reasons for unavailability == A survey among academic availability experts in 2010 ranked reasons for unavailability of enterprise IT systems.
All reasons refer to not following best practice in each of the following areas (in order of importance):
Monitoring of the relevant components
Requirements and procurement
Operations
Avoidance of network failures
Avoidance of internal application failures
Avoidance of external services that fail
Physical environment
Network redundancy
Technical solution of backup
Process solution of backup
Physical location
Infrastructure redundancy
Storage architecture redundancy
A book on the factors themselves was published in 2003. == Costs of unavailability == In a 1998 report from IBM Global Services, unavailable systems were estimated to have cost American businesses $4.54 billion in 1996, due to lost productivity and revenues. == See also == Availability; Fault tolerance; High-availability cluster; Overall equipment effectiveness; Reliability, availability and serviceability; Responsiveness; Scalability; Ubiquitous computing == Notes == == References == == External links == Lecture Notes on Enterprise Computing Archived November 16, 2013, at the Wayback Machine University of Tübingen; Lecture notes on Embedded Systems Engineering by Prof. Phil Koopman; Uptime Calculator (SLA)
Wikipedia/Network_resilience
In the contexts of software architecture, service-orientation and service-oriented architecture, the term service refers to a software functionality, or a set of software functionalities (such as the retrieval of specified information or the execution of a set of operations), that different clients can reuse for different purposes, together with the policies that should control its usage (based on the identity of the client requesting the service, for example). OASIS defines a service as "a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description". == Service engineering == A business analyst, domain expert, and/or enterprise architecture team will develop the organization's service model first by defining the top level business functions. Once the business functions are defined, they are further partitioned and refined into services that represent the processes and activities needed to manage the assets of the organization in their various states. One example is the separation of the business function "Manage Orders" into services such as "Create Order", "Fulfill Order", "Ship Order", "Invoice Order" and "Cancel/Update Order". These business functions must have a granularity that is adequate to the given project and domain context. Many analysis and design methods can be used for service engineering, both general purpose ones such as OpenUP and Domain-Driven Design as well as those discussed under Service-oriented modeling. == Bibliography == Stojanović, Zoran; Dahanayake, Ajantha, eds. (2005). Service-oriented software system engineering: challenges and practices. Hershey: Idea Group Pub. ISBN 978-1-59140-426-2. Benatallah, Boualem; Casati, Fabio; Traverso, Paolo, eds. (2005).
Service-Oriented Computing ICSOC 2005: Third International Conference, Amsterdam, The Netherlands, December 12-15, 2005, Proceedings. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-540-30817-1. Huang, Jingshan (2007). Service-Oriented Computing: AAMAS 2007 International Workshop, SOCASE 2007, Honolulu, HI, USA, May 14, 2007, Proceedings. Lecture Notes in Computer Science Ser. Ryszard Kowalczyk, Zakaria Maamar, David Martin, Ingo Müller, Suzette Stoutenburg, Katia Sycara. Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-540-72618-0. Karakostas, Bill; Zorgios, Yannis (2008). Engineering service oriented systems: a model driven approach. Hershey, PA: IGI Pub. ISBN 978-1-59904-968-7. OCLC 212204291. Kowalczyk, Ryszard (2008). Service-Oriented Computing: AAMAS 2008 International Workshop, SOCASE 2008 Estoril, Portugal, May 12, 2008 Proceedings. Lecture Notes in Computer Science Ser. Michael N. Huhns, Matthias Klusch, Zakaria Maamar, Quoc Bao Vo. Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-540-79967-2. Hutchison, David; Pandu Rangan, C.; Ripeanu, Matei; Steffen, Bernhard; Sudan, Madhu; Terzopoulos, Demetri; Tygar, Doug; Vardi, Moshe Y.; Weikum, Gerhard, eds. (2009). Service-Oriented Computing - ICSOC 2007 Workshops: ICSOC 2007, International Workshops, Vienna, Austria, September 17, 2007, Revised Selected Papers. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-540-93850-7. Hutchison, David; Nierstrasz, Oscar; Pandu Rangan, C.; Steffen, Bernhard; Sudan, Madhu; Terzopoulos, Demetri; Tygar, Doug; Vardi, Moshe Y.; Weikum, Gerhard, eds. (2009). Service-Oriented Computing – ICSOC 2008 Workshops: ICSOC 2008 International Workshops, Sydney, Australia, December 1st, 2008, Revised Selected Papers. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-01246-4. Baresi, Luciano; Chi, Chi-Hung; Suzuki, Jun (2009). 
Service-Oriented Computing: 7th International Joint Conference, ICSOC-ServiceWave 2009, Stockholm, Sweden, November 24-27, 2009. Proceedings. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-10382-7. Kowalczyk, Ryszard; Huhns, Michael; Maamar, Zakaria; Vo, Quoc Bao (2009). Service-Oriented Computing: Agents, Semantics, and Engineering: AAMAS 2009 International Workshop SOCASE 2009, Budapest, Hungary, May 11, 2009. Proceedings. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-10738-2. Hafner, Michael; Breu, Ruth (2009). Security engineering for service-oriented architectures. Berlin Heidelberg: Springer. ISBN 978-3-540-79538-4. Dan, Asit (2010). Service-Oriented Computing. ICSOC/ServiceWave 2009 Workshops: International Workshops, ICSOC/ServiceWave 2009, Stockholm, Sweden, November 23-27, 2009, Revised Selected Papers. Lecture Notes in Computer Science Ser. Farouk Toumani, Frédéric Gittler. Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-642-16131-5. Maglio, Paul P. (2010). Service-Oriented Computing: 8th International Conference, ICSOC 2010, San Francisco, CA, USA, December 7-10, 2010. Proceedings. Lecture Notes in Computer Science Ser. Mathias Weske, Jian Yang, Marcelo Fantinato. Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-642-17357-8. Di Nitto, Elisabetta; Yahyapour, Ramin, eds. (2010). Towards a service-based Internet: third European conference, Servicewave 2010, Ghent, Belgium, December 13-15, 2010: proceedings. Lecture notes in computer science. Berlin ; New York: Springer. ISBN 978-3-642-17693-7. OCLC 690089043. Sicilia, Miguel-Angel; Kop, Christian; Sartori, Fabio, eds. (2010). Ontology, Conceptualization and Epistemology for Information Systems, Software Engineering and Service Science: 4th International Workshop, ONTOSE 2010, held at CAiSE 2010, Hammamet, Tunisia, June 7-8, 2010, Revised Selected Papers.
Lecture Notes in Business Information Processing. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-16495-8. Kappel, Gerti; Motahari-Nezhad, Hamid R.; Maamar, Zakaria (2011). Service-Oriented Computing: 9th International Conference, ICSOC 2011, Paphos, Cyprus, December 5-8, 2011 Proceedings. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg Springer e-books. ISBN 978-3-642-25535-9. Dustdar, Schahram; Li, Fei (2011). Service Engineering: European Research Results. SpringerLink Bücher. Vienna: Springer-Verlag/Wien. ISBN 978-3-7091-0414-9. Maximilien, E. Michael (2011). Service-oriented computing: ICSOC 2010 International Workshops, PAASC, WESOA, SEE, and SOC-LOG, San Francisco, CA, USA, December 7-10, 2010, Revised selected papers. Lecture notes in computer science. ICSOC 2010. Berlin Heidelberg New York: Springer. ISBN 978-3-642-19394-1. Abramowicz, Witold (2011). Towards a Service-Based Internet: 4th European Conference, ServiceWave 2011, Poznan, Poland, October 26-28, 2011, Proceedings. Lecture Notes in Computer Science Ser. Ignacio M. Llorente, Mike Surridge, Julien Vayssière, Andrea Zisman. Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-642-24754-5. Ng, Irene (2011). Complex Engineering Service Systems: Concepts and Research. Decision Engineering Ser. Duncan McFarlane, Glenn Parry, Paul Tasker, Peter Wild. London: Springer London, Limited. ISBN 978-0-85729-188-2. Engineering methods in the service-oriented context: 4th IFIP WG 8.1 working conference on method engineering, ME 2011, Paris, France, April 20-22, 2011, proceedings. IFIP advances in information and communication technology. Heidelberg: Springer. 2011. ISBN 978-3-642-19996-7. Hölzl, Matthias (2011). Rigorous Software Engineering for Service-Oriented Systems: Results of the SENSORIA Project on Software Engineering for Service-Oriented Computing. Lecture Notes in Computer Science Ser. Martin Wirsing. 
Berlin, Heidelberg: Springer Berlin / Heidelberg. ISBN 978-3-642-20400-5. Service science, management, and engineering: theory and applications. Intelligent systems series (1st ed.). Oxford, U.K. Waltham, Mass: Academic Press. 2012. ISBN 978-0-12-397037-4. Lankhorst, Marc, ed. (2012). Agile service development: combining adaptive methods and flexible solutions. Enterprise engineering series. Heidelberg ; New York: Springer Verlag. ISBN 978-3-642-28187-7. OCLC 773666019. Heisel, Maritta (2012). Software service and application engineering: essays dedicated to Bernd Krämer on the occasion of his 65th birthday. Lecture notes in computer science. Berlin: Springer. ISBN 978-3-642-30835-2. Kumar, Sandeep (2012). Agent-Based Semantic Web Service Composition. SpringerBriefs in Electrical and Computer Engineering Ser (1st ed.). New York, NY: Springer New York. ISBN 978-1-4614-4662-0. Spohrer, James C.; Freund, Louis E., eds. (2013). Advances in the human side of service engineering. Advances in human factors and ergonomics series (Online-Ausg ed.). Boca Raton, Fla: CRC Press. ISBN 978-1-4398-7026-6. Basu, Samik; Zhang, Liang (2013). Pautasso, Cesare; Fu, Xiang (eds.). Service-Oriented Computing: 11th International Conference, ICSOC 2013, Berlin, Germany, December 2-5, 2013, Proceedings. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. ISBN 978-3-642-45004-4. Lomuscio, Alessio R. (2014). Nepal, Surya; Patrizi, Fabio; Benatallah, Boualem; Brandić, Ivona (eds.). Service-Oriented Computing – ICSOC 2013 Workshops: CCSA, CSB, PASCEB, SWESE, WESOA, and PhD Symposium, Berlin, Germany, December 2-5, 2013. Revised Selected Papers. Lecture Notes in Computer Science. Cham: Springer. ISBN 978-3-319-06858-9.
Service-oriented and cloud computing: Third European Conference, ESOCC 2014, Manchester, UK, September 2-4, 2014. Proceedings. Lecture notes in computer science (1st ed.). New York: Springer. 2014. ISBN 978-3-662-44878-6. Service-oriented computing: 12th International Conference, ICSOC 2014, Paris, France, November 3-6, 2014. Proceedings. Lecture notes in computer science (1st ed.). New York: Springer. 2014. ISBN 978-3-662-45390-2. Qiu, Robin G. (2014). Service Science: The Foundations of Service Engineering and Management (1st ed.). Somerset: John Wiley & Sons, Incorporated. ISBN 978-1-118-10823-9. Motta, Gianmario (2014). Software Engineering Education for a Global e-Service Economy: State of the Art, Trends and Developments. Progress in IS Ser. Bing Wu (1st ed.). Cham: Springer International Publishing AG. ISBN 978-3-319-04216-9. Fox, Armando; Patterson, David A. (2016). Joseph, Samuel (ed.). Engineering software as a service: an Agile approach using cloud computing (1.2.2 ed.). San Francisco, Calif: Strawberry Canyon LLC. ISBN 978-0-9848812-3-9. Maximilien, Michael (2017). Service-Oriented Computing: 15th International Conference, ICSOC 2017, Malaga, Spain, November 13-16, 2017, Proceedings. Lecture Notes in Computer Science Ser. Antonio Vallecillo, Jianmin Wang, Marc Oriol. Cham: Springer International Publishing AG. ISBN 978-3-319-69034-6. Ahram, Tareq Z.; Karwowski, Waldemar, eds. (2017). Advances in The Human Side of Service Engineering: Proceedings of the AHFE 2016 International Conference on The Human Side of Service Engineering, July 27-31, 2016, Walt Disney World®, Florida, USA. Advances in Intelligent Systems and Computing. Cham s.l: Springer International Publishing. ISBN 978-3-319-41947-3. Meyer, Kyrill (2018). Service Engineering: Von Dienstleistungen Zu Digitalen Service-Systemen. Stephan Klingner, Christian Zinke. Wiesbaden: Vieweg. ISBN 978-3-658-20904-9. Ravindran, A. Ravi; Griffin, Paul; Prabhu, Vittaldas V. (2018). 
Service systems engineering and management. The operations research series. Boca Raton: Taylor & Francis, a CRC title, part of the Taylor & Francis imprint, a member of the Taylor & Francis Group, the academic division of T & F Informa, plc. ISBN 978-1-351-05418-8. Höckmayr, Benedikt S. (2019). Engineering Service Systems in the Digital Age. Markt- und Unternehmensentwicklung Markets and Organisations Ser. Wiesbaden: Springer Vieweg. in Springer Fachmedien Wiesbaden GmbH. ISBN 978-3-658-26202-0. Yangui, Sami (2020). Service-Oriented Computing - ICSOC 2019 Workshops: WESOACS, ASOCA, ISYCC, TBCE, and STRAPS, Toulouse, France, October 28-31, 2019, Revised Selected Papers. Lecture Notes in Computer Science Ser. Athman Bouguettaya, Xiao Xue, Noura Faci, Walid Gaaloul, Qi Yu, Zhangbing Zhou, Nathalie Hernandez, Elisa Y. Nakagawa. Cham: Springer International Publishing AG. ISBN 978-3-030-45988-8. Brogi, Antonio (2020). Service-Oriented and Cloud Computing: 8th IFIP WG 2. 14 European Conference, ESOCC 2020, Heraklion, Crete, Greece, September 28-30, 2020, Proceedings. Lecture Notes in Computer Science Ser. Wolf Zimmermann, Kyriakos Kritikos. Cham: Springer International Publishing AG. ISBN 978-3-030-44768-7. Hacid, Hakim (2021). Service-Oriented Computing - ICSOC 2020 Workshops: AIOps, CFTIC, STRAPS, AI-PA, AI-IOTS, and Satellite Events, Dubai, United Arab Emirates, December 14-17, 2020, Proceedings. Lecture Notes in Computer Science Ser. Fatma Outay, Hye-Young Paik, Amira Alloum, Marinella Petrocchi, Mohamed Reda Bouadjenek, Amin Beheshti, Xumin Liu, Abderrahmane Maaradji. Cham: Springer International Publishing AG. ISBN 978-3-030-76351-0. Jarzębowicz, Aleksander; Luković, Ivan; Przybyłek, Adam; Staron, Miroslaw; Ahmad, Muhammad Ovais; Ochodek, Mirosław, eds. (2024). 
Software, System, and Service Engineering: S3E 2023 Topical Area, 24th Conference on Practical Aspects of and Solutions for Software Engineering, KKIO 2023, and 8th Workshop on Advances in Programming Languages, WAPL 2023, Held as Part of FedCSIS 2023, Warsaw, Poland, 17–20 September 2023, Revised Selected Papers. Lecture Notes in Business Information Processing (1st ed. 2024 ed.). Cham: Springer Nature Switzerland. ISBN 978-3-031-51074-8. == Notes ==
Wikipedia/Service_(systems_architecture)
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput. Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse. Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/CD in the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers and network switches. Other techniques that address congestion include priority schemes which transmit some packets with higher priority ahead of others and the explicit allocation of network resources to specific flows through the use of admission control. == Network capacity == Network resources are limited, including router processing time and link throughput. Resource contention may occur on networks in several common circumstances. A wireless LAN is easily filled by a single personal computer. Even on fast computer networks, the backbone can easily be congested by a few servers and client PCs. Denial-of-service attacks by botnets are capable of filling even the largest Internet backbone network links, generating large-scale network congestion. In telephone networks, a mass call event can overwhelm digital telephone circuits, in what can otherwise be defined as a denial-of-service attack. 
== Congestive collapse == Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. Congestion collapse generally occurs at choke points in the network, where incoming traffic exceeds outgoing bandwidth. Connection points between a local area network and a wide area network are common choke points. When a network is in this condition, it settles into a stable state where traffic demand is high but little useful throughput is available, during which packet delay and loss occur and quality of service is extremely poor. Congestive collapse was identified as a possible problem by 1984. It was first observed on the early Internet in October 1986, when the NSFNET phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, which continued until end nodes started implementing Van Jacobson and Sally Floyd's congestion control between 1987 and 1988. When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the endpoints of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior. When this packet loss occurred, the endpoints sent extra packets that repeated the information lost, doubling the incoming rate. == Congestion control == Congestion control modulates traffic entry into a telecommunications network in order to avoid congestive collapse resulting from oversubscription. This is typically accomplished by reducing the rate of packets. Whereas congestion control prevents senders from overwhelming the network, flow control prevents the sender from overwhelming the receiver. 
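Reducing the rate of packets in response to congestion, as TCP's window reduction mentioned earlier does, commonly follows an additive-increase/multiplicative-decrease (AIMD) pattern: probe gently for spare capacity, and back off sharply on a congestion signal. A schematic sketch, with illustrative constants rather than any real TCP parameters:

```python
def aimd_step(cwnd, loss_detected, increase=1.0, decrease=0.5):
    """One round trip of additive-increase/multiplicative-decrease.

    Grow the congestion window by a fixed increment per round trip; on a
    loss (taken as a congestion signal), cut it multiplicatively.
    """
    if loss_detected:
        return max(1.0, cwnd * decrease)  # multiplicative back-off
    return cwnd + increase                # additive probing for capacity

# Twenty round trips with a single loss at round 10:
cwnd = 1.0
for rtt in range(20):
    cwnd = aimd_step(cwnd, loss_detected=(rtt == 10))
print(cwnd)  # → 14.5
```

The asymmetry is the point: additive growth approaches the congestion point slowly, while the multiplicative cut drains queues quickly, which is what lets many independent senders share a link without collapsing it.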
=== Theory of congestion control === The theory of congestion control was pioneered by Frank Kelly, who applied microeconomic theory and convex optimization theory to describe how individuals controlling their own rates can interact to achieve an optimal network-wide rate allocation. Examples of optimal rate allocation are max-min fair allocation and Kelly's suggestion of proportionally fair allocation, although many others are possible. Let x_i be the rate of flow i, c_l be the capacity of link l, and r_li be 1 if flow i uses link l and 0 otherwise. Let x, c and R be the corresponding vectors and matrix. Let U(x) be an increasing, strictly concave function, called the utility, which measures how much benefit a user obtains by transmitting at rate x. The optimal rate allocation then satisfies max_x Σ_i U(x_i) such that Rx ≤ c. The Lagrange dual of this problem decouples so that each flow sets its own rate, based only on a price signaled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange multiplier, p_l. The sum of these multipliers, y_i = Σ_l p_l r_li, is the price to which the flow responds. Congestion control then becomes a distributed optimization algorithm. Many current congestion control algorithms can be modeled in this framework, with p_l being either the loss probability or the queueing delay at link l. A major weakness is that it assigns the same price to all flows, while sliding window flow control causes burstiness that causes different flows to observe different loss or delay at a given link.
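The distributed optimization above can be illustrated numerically. The sketch below assumes log utilities U(x_i) = log x_i (which yields proportional fairness), a made-up two-link/three-flow topology, and a small constant step size for the price update; each flow responds only to the summed price y_i on its route, and each link adjusts its price from its own load, exactly the decoupling the dual provides.

```python
# Dual-decomposition sketch of the rate-allocation problem above.
# Topology, capacities, and step size are illustrative assumptions.

routes = [[0], [0, 1], [1]]   # links used by flows 0, 1, 2 (the r_li matrix)
capacity = [1.0, 2.0]         # c_l
price = [1.0, 1.0]            # p_l, the Lagrange multipliers
gamma = 0.01                  # price-update step size

for _ in range(20000):
    # y_i = sum of p_l over the links on flow i's route
    y = [sum(price[l] for l in route) for route in routes]
    # With U = log, each flow's best response to its price is x_i = 1 / y_i
    x = [1.0 / yi for yi in y]
    # Each link raises its price when overloaded, lowers it when underused
    for l in range(len(capacity)):
        load = sum(x[i] for i in range(len(routes)) if l in routes[i])
        price[l] = max(1e-9, price[l] + gamma * (load - capacity[l]))

print([round(r, 3) for r in x])  # ≈ [0.577, 0.423, 1.577]
```

At convergence both links are fully used, and flow 1, which crosses both links and therefore pays both prices, receives the smallest rate, which is the proportionally fair outcome.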
=== Classification of congestion control algorithms === Among the ways to classify congestion control algorithms are: By type and amount of feedback received from the network: Loss; delay; single-bit or multi-bit explicit signals By incremental deployability: Only sender needs modification; sender and receiver need modification; only router needs modification; sender, receiver and routers need modification. By performance aspect: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links By fairness criterion: Max-min fairness; proportionally fair; controlled delay == Mitigation == Mechanisms have been invented to prevent network congestion or to deal with a network collapse: Network scheduler – active queue management which reorders or selectively drops network packets in the presence of congestion Explicit Congestion Notification – an extension to IP and TCP communications protocols that adds a flow control mechanism TCP congestion control – various implementations of efforts to deal with network congestion The correct endpoint behavior is usually to repeat dropped information, but progressively slow the repetition rate. Provided all endpoints do this, the congestion lifts and the network resumes normal behavior. Other strategies such as slow start ensure that new connections do not overwhelm the router before congestion detection initiates. Common router congestion avoidance mechanisms include fair queuing and other scheduling algorithms, and random early detection where packets are randomly dropped as congestion is detected. This proactively triggers the endpoints to slow transmission before congestion collapse occurs. Some end-to-end protocols are designed to behave well under congested conditions; TCP is a well known example. 
The first TCP implementations to handle congestion were described in 1984, but Van Jacobson's inclusion of an open source solution in the Berkeley Software Distribution UNIX ("BSD") in 1988 first provided good behavior. UDP does not control congestion. Protocols built atop UDP must handle congestion independently. Protocols that transmit at a fixed rate, independent of congestion, can be problematic. Real-time streaming protocols, including many Voice over IP protocols, have this property. Thus, special measures, such as quality of service, must be taken to keep packets from being dropped in the presence of congestion. === Practical network congestion avoidance === Connection-oriented protocols, such as the widely used TCP protocol, watch for packet loss or queuing delay to adjust their transmission rate. Various network congestion avoidance processes support different trade-offs. === TCP/IP congestion avoidance === The TCP congestion avoidance algorithm is the primary basis for congestion control on the Internet. Problems occur when concurrent TCP flows experience tail-drops, especially when bufferbloat is present. This delayed packet loss interferes with TCP's automatic congestion avoidance. All flows that experience this packet loss begin a TCP retrain at the same moment – this is called TCP global synchronization. === Active queue management === Active queue management (AQM) is the reordering or dropping of network packets inside a transmit buffer that is associated with a network interface controller (NIC). This task is performed by the network scheduler. ==== Random early detection ==== One solution is to use random early detection (RED) on the network equipment's egress queue. On networking hardware ports with more than one egress queue, weighted random early detection (WRED) can be used. RED indirectly signals TCP sender and receiver by dropping some packets, e.g. when the average queue length is more than a threshold (e.g.
50%) and deletes linearly or cubically more packets, up to e.g. 100%, as the queue fills further. ==== Robust random early detection ==== The robust random early detection (RRED) algorithm was proposed to improve the TCP throughput against denial-of-service (DoS) attacks, particularly low-rate denial-of-service (LDoS) attacks. Experiments confirmed that RED-like algorithms were vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks. ==== Flow-based WRED ==== Some network equipment is equipped with ports that can follow and measure each flow and are thereby able to signal a flow that uses too much bandwidth according to some quality of service policy. A policy could then divide the bandwidth among all flows by some criteria. ==== Explicit Congestion Notification ==== Another approach is to use Explicit Congestion Notification (ECN). ECN is used only when two hosts signal that they want to use it. With this method, a protocol bit is used to signal explicit congestion. This is better than the indirect congestion notification signaled by packet loss by the RED/WRED algorithms, but it requires support by both hosts. When a router receives a packet marked as ECN-capable and the router anticipates congestion, it sets the ECN flag, notifying the sender of congestion. The sender should respond by decreasing its transmission bandwidth, e.g., by decreasing its sending rate by reducing the TCP window size or by other means. The L4S protocol is an enhanced version of ECN which allows senders to collaborate with network devices to control congestion. ==== TCP window shaping ==== Congestion avoidance can be achieved efficiently by reducing traffic. When an application requests a large file, graphic or web page, it usually advertises a window of between 32K and 64K. This results in the server sending a full window of data (assuming the file is larger than the window). 
When many applications simultaneously request downloads, this data can create a congestion point at an upstream provider. By reducing the window advertisement, the remote servers send less data, thus reducing the congestion. ==== Backward ECN ==== Backward ECN (BECN) is another proposed congestion notification mechanism. It uses ICMP source quench messages as an IP signaling mechanism to implement a basic ECN mechanism for IP networks, keeping congestion notifications at the IP level and requiring no negotiation between network endpoints. Effective congestion notifications can be propagated to transport layer protocols, such as TCP and UDP, for the appropriate adjustments. == Side effects of congestive collapse avoidance == === Radio links === The protocols that avoid congestive collapse generally assume that data loss is caused by congestion. On wired networks, errors during transmission are rare. WiFi, 3G and other networks with a radio layer are susceptible to data loss due to interference and may experience poor throughput in some cases. The TCP connections running over a radio-based physical layer see the data loss and tend to erroneously believe that congestion is occurring. === Short-lived connections === The slow-start protocol performs badly for short connections. Older web browsers created many short-lived connections and opened and closed the connection for each file. This kept most connections in the slow start mode. Initial performance can be poor, and many connections never get out of the slow-start regime, significantly increasing latency. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular server. == Admission control == Admission control is any system that requires devices to receive permission before establishing new network connections. If the new connection risks creating congestion, permission can be denied. 
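The cost of slow start for short connections, described above, can be made concrete with a toy model. Everything here is an illustrative simplification (no losses, no delayed ACKs, a one-segment initial window and an assumed slow-start threshold):

```python
def rtts_to_send(total_segments, init_cwnd=1, ssthresh=64):
    """Count the round trips needed to deliver total_segments.

    The congestion window doubles each RTT while in slow start and
    grows by one segment per RTT afterwards (congestion avoidance).
    Illustrative model only: no losses, no delayed ACKs.
    """
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd
        rtts += 1
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return rtts
```

Under this model a 10-segment transfer needs 4 round trips (windows of 1, 2, 4, 8 segments) and never leaves slow start, which is why modern browsers reuse one connection rather than opening a fresh one per file.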
Examples include Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard for home networking over legacy wiring, Resource Reservation Protocol for IP networks and Stream Reservation Protocol for Ethernet. == See also == Bandwidth management – Capacity control on a communications network Cascading failure – Systemic risk of failure Choke exchange – Telephone exchange designed to handle many simultaneous call attempts Erlang (unit) – Load measure in telecommunications Sorcerer's Apprentice syndrome – Network protocol flaw in the original versions of TFTP Teletraffic engineering – Application of traffic engineering theory to telecommunications Thrashing – Constant exchange between memory and storage Traffic shaping – Communication bandwidth management technique Reliability (computer networking) – Protocol acknowledgement capability == References == == External links == Floyd, S. and K. Fall, Promoting the Use of End-to-End Congestion Control in the Internet (IEEE/ACM Transactions on Networking, August 1999) Sally Floyd, On the Evolution of End-to-end Congestion Control in the Internet: An Idiosyncratic View (IMA Workshop on Scaling Phenomena in Communication Networks, October 1999) (pdf format) Linktionary term: Queuing Archived 2003-03-08 at the Wayback Machine Pierre-Francois Quet, Sriram Chellappan, Arjan Durresi, Mukundan Sridharan, Hitay Ozbay, Raj Jain, "Guidelines for optimizing Multi-Level ECN, using fluid flow based TCP model" Sally Floyd, Ratul Mahajan, David Wetherall: RED-PD: RED with Preferential Dropping Archived 2003-04-02 at the Wayback Machine A Generic Simple RED Simulator for educational purposes by Mehmet Suzen Approaches to Congestion Control in Packet Networks Papers in Congestion Control Random Early Detection Homepage Explicit Congestion Notification Homepage TFRC Homepage AIMD-FC Homepage Recent Publications in low-rate denial-of-service (DoS) attacks
Wikipedia/Network_congestion
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts: Processing delay – time it takes a router to process the packet header Queuing delay – time the packet spends in routing queues Transmission delay – time it takes to push the packet's bits onto the link Propagation delay – time for a signal to propagate through the media A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds. == See also == Age of Information End-to-end delay Lag (video games) Latency (engineering) Minimum-Pairs Protocol Round-trip delay == References == == External links == Impact of Delay in Voice over IP Services (PDF), retrieved 2018-10-31 Internet Delay Space Study at Rice University (PDF), retrieved 2018-10-31
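The four delay components listed above combine additively on each link. A minimal sketch with illustrative parameters (the 2e8 m/s propagation speed approximates signal speed in copper or fibre; the processing and queuing defaults are placeholders):

```python
def one_hop_delay_ms(packet_bits, link_bps, distance_m,
                     propagation_mps=2e8, processing_ms=0.1, queuing_ms=0.0):
    """Sum the four delay components for one link, in milliseconds.

    transmission delay = bits / link rate; propagation delay =
    distance / signal speed. Processing and queuing values here are
    illustrative placeholders.
    """
    transmission_ms = packet_bits / link_bps * 1000
    propagation_ms = distance_m / propagation_mps * 1000
    return processing_ms + queuing_ms + transmission_ms + propagation_ms
```

For a 1500-byte (12,000-bit) packet over 100 km of 100 Mbit/s fibre this gives roughly 0.72 ms, dominated by propagation; queuing delay is the variable part that network congestion adds on top.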
Wikipedia/Network_latency
A home network or home area network (HAN) is a type of computer network, specifically a type of local area network (LAN), that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment. Unlike a regular LAN, which is centralized and uses IP technologies, a home network may also make use of direct peer-to-peer methods as well as non-IP protocols such as Bluetooth. == Infrastructure devices == Certain devices in a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices residents more directly interact with. Unlike their data center counterparts, these networking devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible. A gateway establishes physical and data link layer connectivity to a WAN provided by a service provider. Home routers provided by internet service providers (ISP) usually have the modem integrated within the unit. It is effectively a client of the external DHCP servers owned by the ISP. A router establishes network layer connectivity between a wide area network (WAN) and the local area network of the residence. For IPv4 networking, the device may also perform the function of network address translation establishing a private network with a set of independent addresses for the network. These devices often contain an integrated wireless access point and a multi-port Ethernet LAN switch. 
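The network address translation role described for the router above amounts to maintaining a mapping table between private (address, port) pairs and ports on the single public address. A minimal source-NAT sketch; the class name, addresses and port range below are invented for the example:

```python
class SimpleNAT:
    """Minimal source-NAT sketch: rewrite (private_ip, port) pairs to
    (public_ip, allocated_port) on the way out, reverse on the way in."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out_map = {}   # (priv_ip, priv_port) -> public_port
        self.in_map = {}    # public_port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        """Outbound packet: allocate (or reuse) a public port."""
        key = (priv_ip, priv_port)
        if key not in self.out_map:
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out_map[key]

    def translate_in(self, public_port):
        """Inbound packet: look up the private endpoint, if any."""
        return self.in_map.get(public_port)
```

Real NAT implementations also track protocol, timeouts and connection state, but the table above is the core idea that lets many private addresses share one public address.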
A wireless access point provides connectivity within the home network for mobile devices and many other types using the Wi-Fi standard. When a router includes this service, it is referred to as a wireless router, which is predominantly the case. A network switch permits the connection of multiple wired Ethernet devices to the home network. While the needs of most home networks are satisfied with wireless connectivity, some devices require wired connection. Such devices, for example IP cameras and IP phones, are sometimes powered via their network cable with power over Ethernet (PoE). A network bridge binds two different network interfaces to each other, often in order to grant a wired-only device access to a wireless network medium. Controllers for home automation or smart home hubs act as a controller for light bulbs, smart plugs, and security devices. == Connectivity and protocols == Home networks may use either wired or wireless connectivity methods that are found and standardized on local area networks or personal area networks. One of the most common ways of creating a home network is by using wireless radio signal technology; the 802.11 network as certified by the IEEE. Most wireless-capable residential devices operate at a frequency of 2.4 GHz under 802.11b and 802.11g or 5 GHz under 802.11a. Some home networking devices operate in both radio-band signals and fall within the 802.11n or 802.11ac standards. Wi-Fi is a marketing and compliance certification for IEEE 802.11 technologies. The Wi-Fi Alliance has tested compliant products, and certifies them for interoperability. Low power, close range communication based on IEEE 802.15 standards has a strong presence in homes. Bluetooth continues to be the technology of choice for most wireless accessories such as keyboards, mice, headsets, and game controllers. These connections are often established in a transient, ad-hoc manner and are not thought of as permanent residents of a home network. 
A "low-rate" version of the original WPAN protocol was used as the basis of Zigbee. == Endpoint devices and services == Home networks may consist of a variety of devices and services. Personal computers such as desktops and mobile computers like tablets and smartphones are commonly used on home networks to communicate with other devices. A network attached storage (NAS) device may be part of the network, for general storage or backup purposes. A print server can be used to share any directly connected printers with other computers on the network. Smart speakers may be used on a network for streaming media. DLNA is a common protocol used for interoperability between networked media-centric devices in the home, allowing devices like stereo systems on the network to access the music library from a PC on the same network, for example. Using an additional Internet connection, TVs for instance may stream online video content, while video game consoles can use online multiplayer. Traditionally, data-centric equipment such as computers and media players have been the primary tenants of a home network. However, due to the lowering cost of computing and the ubiquity of smartphone usage, many traditionally non-networked home equipment categories now include new variants capable of control or remote monitoring through an app on a smartphone. Newer startups and established home equipment manufacturers alike have begun to offer these products as part of a "Smart" or "Intelligent" or "Connected Home" portfolio. Examples of such may include "connected" light bulbs (see also Li-Fi), home security alarms and smoke detectors. These often run over the Internet so that they can be accessed remotely. Individuals may opt to subscribe to managed cloud computing services that provide such services instead of maintaining similar facilities within their home network. 
In such situations, local services along with the devices maintaining them are replaced by those in an external data center and made accessible to the home-dweller's computing devices via a WAN Internet connection. == Network management == Apple devices aim to make networking as hidden and automatic as possible, utilizing a zero-configuration networking protocol called Bonjour embedded within their otherwise proprietary line of software and hardware products. Microsoft offers simple access control features built into their Windows operating system. Homegroup is a feature that allows shared disk access, shared printer access and shared scanner access among all computers and users (typically family members) in a home, in a similar fashion as in a small office workgroup, e.g., by means of distributed peer-to-peer networking (without a central server). Additionally, a home server may be added for increased functionality. The Windows HomeGroup feature was introduced with Microsoft Windows 7 in order to simplify file sharing in residences. All users (typically all family members), except guest accounts, may access any shared library on any computer that is connected to the home group. Passwords are not required from the family members during logon. Instead, secure file sharing is possible by means of a temporary password that is used when adding a computer to the HomeGroup. == See also == Access control Computer security software Data backup Encryption Firewall (computing) Home automation Home server Indoor positioning system (IPS) Matter Network security Smart, connected products Software update Virtual assistant == References == == External links == WikiBooks:Transferring Data between Standard Dial-Up Modems Home Net WG of the IETF
Wikipedia/Home_network
A personal area network (PAN) is a computer network for interconnecting electronic devices within an individual person's workspace. A PAN provides data transmission among devices such as computers, smartphones, tablets and personal digital assistants. PANs can be used for communication among the personal devices themselves, or for connecting to a higher level network and the Internet where one master device takes up the role as gateway. A PAN may be carried over wired interfaces such as USB, but is predominantly carried wirelessly, also called a wireless personal area network (WPAN). A PAN is wirelessly carried over a low-powered, short-distance wireless network technology such as IrDA, Wireless USB, Bluetooth, NearLink or Zigbee. The reach of a WPAN varies from a few centimeters to a few meters. WPANs specifically tailored for low-power operation of the sensors are sometimes also called low-power personal area network (LPPAN) to better distinguish them from low-power wide-area network (LPWAN). == Wired == Wired personal area networks provide short connections between peripherals. Example technologies include USB, IEEE 1394 and Thunderbolt. == Wireless == A wireless personal area network (WPAN) is a personal area network in which the connections are wireless. IEEE 802.15 has produced standards for several types of PANs operating in the ISM band including Bluetooth. The Infrared Data Association (IrDA) has produced standards for WPANs that operate using infrared communications. === Bluetooth === Bluetooth uses short-range radio waves. Uses in a WPAN include, for example, Bluetooth devices such as keyboards, pointing devices, audio headsets, and printers that may connect to smartwatches, cell phones, or computers. A Bluetooth WPAN is also called a piconet, and is composed of up to 8 active devices in a master-slave relationship (a very large number of additional devices can be connected in parked mode). 
The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 metres (33 ft), although ranges of up to 100 metres (330 ft) can be reached under ideal circumstances. Long-range Bluetooth routers with augmented antenna arrays connect Bluetooth devices up to 1,000 feet (300 m). With Bluetooth mesh networking the range and number of devices is extended by using mesh networking techniques to relay information from one device to another. Such a network doesn't have a master device and may or may not be treated as a WPAN. === IrDA === IrDA uses infrared light, which has a frequency below the human eye's sensitivity. Infrared is used in other wireless communications applications, for instance, in remote controls. Typical WPAN devices that use IrDA include printers, keyboards, and other serial communication interfaces. == See also == == References == == External links == Media related to Personal area networks (PAN) at Wikimedia Commons IEEE 802.15 Working Group for WPAN
Wikipedia/Personal_area_network
A wireless network is a computer network that uses wireless data connections between network nodes. Wireless networking allows homes, telecommunications networks, and business installations to avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure. Examples of wireless networks include cell phone networks, wireless local area networks (WLANs), wireless sensor networks, satellite communication networks, and terrestrial microwave networks. == History == === Wireless networks === The first professional wireless network was developed under the brand ALOHAnet in 1969 at the University of Hawaii and became operational in June 1971. The first commercial wireless network was the WaveLAN product family, developed by NCR in 1986. 1973 – Ethernet 802.3 1991 – 2G cell phone network June 1997 – 802.11 "Wi-Fi" protocol first release 1999 – 802.11 VoIP integration === Underlying technology === Advances in MOSFET (MOS transistor) wireless technology enabled the development of digital wireless networks. The wide adoption of RF CMOS (radio frequency CMOS), power MOSFET and LDMOS (lateral diffused MOS) devices led to the development and proliferation of digital wireless networks by the 1990s, with further advances in MOSFET technology leading to increasing bandwidth in the 2000s (Edholm's law). Most of the essential elements of wireless networks are built from MOSFETs, including the mobile transceivers, base station modules, routers, RF power amplifiers, telecommunication circuits, RF circuits, and radio transceivers, in networks such as 2G, 3G, and 4G. == Wireless links == Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. 
Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart. Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals. Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area. Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. == Types of wireless networks == === Wireless PAN === Wireless personal area networks (WPANs) connect devices within a relatively small area, that is generally within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. Zigbee also supports WPAN applications. Wi-Fi PANs are becoming commonplace (as of 2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure. 
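For the communications satellites described above under wireless links, the geosynchronous altitude alone fixes a large lower bound on latency. A back-of-the-envelope calculation using the 35,400 km figure quoted above:

```python
# One-way propagation delay to a geosynchronous satellite, using the
# 35,400 km altitude quoted above and the speed of light in vacuum.
SPEED_OF_LIGHT_MPS = 299_792_458
ALTITUDE_M = 35_400_000

one_way_ms = ALTITUDE_M / SPEED_OF_LIGHT_MPS * 1000  # ground -> satellite
hop_ms = 2 * one_way_ms                              # ground -> satellite -> ground

# A reply must make the same trip back, so a single satellite hop adds
# roughly half a second of round-trip latency before any queuing or
# processing delay is counted.
round_trip_ms = 2 * hop_ms
```

This works out to about 118 ms one way and roughly 470 ms round trip, which is why geostationary links are noticeably laggy for interactive traffic regardless of their bandwidth.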
=== Wireless LAN === A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network. Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link. To connect to Wi-Fi using a mobile device, one can use a device like a wireless router or the private hotspot capability of another mobile device. === Wireless ad hoc network === A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as Destination-Sequenced Distance Vector routing, Associativity-Based Routing, Ad hoc on-demand distance-vector routing, and Dynamic Source Routing. === Wireless MAN === Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs. WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard. === Wireless WAN === Wireless wide area networks are wireless networks that typically cover large areas, such as between neighboring towns and cities, or city and suburb. 
These networks can be used to connect branch offices of business or as a public Internet access system. The wireless connections between access points are usually point to point microwave links using parabolic dishes on the 2.4 GHz and 5.8 GHz band, rather than omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points and wireless bridging relays. Other configurations are mesh systems where each access point acts as a relay also. When combined with renewable energy systems such as photovoltaic solar panels or wind systems they can be stand-alone systems. === Cellular network === A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from all their immediate neighbouring cells to avoid any interference. When joined these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission. Although originally intended for cell phones, with the development of smartphones, cellular telephone networks routinely carry data in addition to telephone conversations: Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base system station which then connects to the operation and support station; it then connects to the switching station where the call is transferred to where it needs to go. 
GSM is the most common standard and is used for a majority of cell phones. Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint happened to be the first service to set up a PCS. D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancements in technology. The newer GSM networks are replacing the older system. ==== Private LTE/5G networks ==== Private LTE/5G networks use licensed, shared or unlicensed wireless spectrum thanks to LTE or 5G cellular network base stations, small cells and other radio access network (RAN) infrastructure to transmit voice and data to edge devices (smartphones, embedded modules, routers and gateways). 3GPP defines 5G private networks as non-public networks that typically employ a smaller-scale deployment to meet an organization's needs for reliability, accessibility, and maintainability. ==== Open Source ==== Open source private networks are based on collaborative, community-driven software that relies on peer review and production to use, modify and share the source code. === Global area network === A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs. === Space network === Space networks are networks used for communication between spacecraft, usually in the vicinity of the Earth. The example of this is NASA's Space Network. == Uses == Some examples of usage include cellular phones which are part of everyday wireless networks, allowing easy personal communications. As another example, intercontinental network systems use radio satellites to communicate across the world. 
Emergency services such as the police utilize wireless networks to communicate effectively as well. Individuals and businesses use wireless networks to send and share data rapidly, whether it be in a small office building or across the world. == Properties == === General === In a general sense, wireless networks offer a vast variety of uses by both business and home users. "Now, the industry accepts a handful of different wireless technologies. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences can make certain technologies better suited to home networks and others better suited to network larger organizations." === Performance === Each standard varies in geographical range, making one standard more suitable than the next depending on what one is trying to accomplish with a wireless network. The performance of wireless networks satisfies a variety of applications such as voice and video. The use of this technology also gives room for expansions, such as from 2G to 3G, and on to 4G and 5G technologies, which stand for the fourth and fifth generations of cell phone mobile communications standards. As wireless networking has become commonplace, sophistication increases through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, is achieved. Now the wireless network has been running on LTE, which is a 4G mobile communication standard. Users of an LTE network should have data speeds that are 10x faster than a 3G network. === Space === Space is another characteristic of wireless networking. 
Wireless networks offer many advantages in difficult-to-wire areas, such as across a street or river, a warehouse on the other side of the premises, or buildings that are physically separated but operate as one. Wireless networks allow users to designate a certain space within which the network can communicate with other devices. Space is also created in homes as a result of eliminating the clutter of wiring. This technology allows for an alternative to installing physical network mediums such as twisted-pair, coaxial, or fiber-optic cabling, which can also be expensive. === Home === For homeowners, wireless technology is an effective option compared to Ethernet for sharing printers, scanners, and high-speed Internet connections. WLANs help save the cost of installation of cable mediums, save time from physical installation, and also create mobility for devices connected to the network. Wireless networks are simple and require as few as one single wireless access point connected directly to the Internet via a router. === Wireless network elements === The telecommunications network at the physical layer also consists of many interconnected wireline network elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer or are assembled by the service provider (user) or system integrator with parts from several different manufacturers. Wireless NEs are the products and devices used by a wireless carrier to provide support for the backhaul network as well as a mobile switching center (MSC). Reliable wireless service depends on the network elements at the physical layer to be protected against all operational environments and applications (see GR-3171, Generic Requirements for Network Elements Used in Wireless Networks – Physical Layer Criteria). Especially important are the NEs located from the cell tower to the base station (BS) cabinet. 
The attachment hardware and the positioning of the antenna and associated closures and cables are required to have adequate strength, robustness, corrosion resistance, and resistance against wind, storms, icing, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached. === Difficulties === ==== Interference ==== Compared to wired systems, wireless networks are frequently subject to electromagnetic interference. This can be caused by other networks or other types of equipment that generate radio waves that are within, or close, to the radio bands used for communication. Interference can degrade the signal or cause the system to fail. ==== Absorption and reflection ==== Some materials absorb electromagnetic waves, preventing them from reaching the receiver; in other cases, particularly with metallic or conductive materials, reflection occurs. This can cause dead zones where no reception is available. Aluminium foiled thermal isolation in modern homes can easily reduce indoor mobile signals by 10 dB, frequently leading to complaints about the bad reception of long-distance rural cell signals. ==== Multipath fading ==== In multipath fading, two or more different routes taken by the signal, due to reflections, can cause the signals to cancel each other out at certain locations, and to be stronger in other places (upfade). ==== Hidden node problem ==== The hidden node problem occurs in some types of network when a node is visible from a wireless access point (AP), but not from other nodes communicating with that AP. This leads to difficulties in medium access control (collisions). ==== Exposed terminal node problem ==== The exposed terminal problem is when a node on one network is unable to send because of co-channel interference from a node that is on a different network. 
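The hidden node problem above comes down to geometry: two stations can each reach the access point while being out of range of each other. A one-dimensional sketch with invented positions and an assumed 100 m radio range:

```python
def in_range(a, b, radio_range):
    """True if two stations at 1-D positions a and b can hear each other."""
    return abs(a - b) <= radio_range

# Stations placed on a line, all with a 100 m radio range (illustrative).
ap, node_a, node_c = 100, 10, 190

a_hears_ap = in_range(node_a, ap, 100)      # A can talk to the AP
c_hears_ap = in_range(node_c, ap, 100)      # C can talk to the AP
a_hears_c = in_range(node_a, node_c, 100)   # A and C cannot hear each other

# Because carrier sensing at A cannot detect C's transmission (and vice
# versa), both may transmit at once and their frames collide at the AP.
hidden = a_hears_ap and c_hears_ap and not a_hears_c
```

Mechanisms such as 802.11's RTS/CTS exchange exist precisely to let the AP mediate between stations that are hidden from each other in this way.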
==== Shared resource problem ==== The wireless spectrum is a limited resource shared by all nodes within range of its transmitters. Bandwidth allocation becomes complex with multiple participating users. Often users are not aware that advertised numbers (e.g., for IEEE 802.11 equipment or LTE networks) are not their individual capacity but are shared among all users, so the individual user rate is far lower. With increasing demand, a capacity crunch becomes more and more likely. User-in-the-loop (UIL) may be an alternative to continually upgrading to newer technologies for over-provisioning. === Capacity === ==== Channel ==== Shannon's theorem describes the maximum data rate of any single wireless link, relating it to the bandwidth in hertz and the noise on the channel. One can greatly increase channel capacity by using MIMO techniques, where multiple aerials or multiple frequencies exploit multiple paths to the receiver to achieve much higher throughput – by a factor of the product of the frequency and aerial diversity at each end. Under Linux, the Central Regulatory Domain Agent (CRDA) controls the setting of channels. ==== Network ==== The total network bandwidth depends on how dispersive the medium is (a more dispersive medium generally has better total bandwidth because it minimises interference), how many frequencies are available, how noisy those frequencies are, how many aerials are used, whether a directional antenna is in use, whether nodes employ power control, and so on. Cellular wireless networks generally have good capacity, due to their use of directional aerials and their ability to reuse radio channels in non-adjacent cells. Additionally, cells can be made very small using low-power transmitters; this is used in cities to give network capacity that scales linearly with population density.
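Shannon's theorem mentioned above can be made concrete with a short calculation. This is a sketch with assumed example figures (a 20 MHz channel at 20 dB signal-to-noise ratio, not measurements from the article), and the MIMO scaling shown is the idealised linear case:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity in bit/s:
    C = B * log2(1 + S/N), with the SNR given in decibels."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel (a common 802.11 channel width) at 20 dB SNR:
c = shannon_capacity(20e6, 20.0)
print(f"{c / 1e6:.1f} Mbit/s upper bound")

# An idealised MIMO link with N independent spatial streams scales
# capacity roughly linearly with N (real links fall short of this):
for streams in (1, 2, 4):
    print(streams, "streams:", f"{streams * c / 1e6:.1f} Mbit/s")
```

The single-channel bound here comes out at roughly 133 Mbit/s; real protocol overhead and shared-medium contention (the shared resource problem above) leave each user well below such figures.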
== Safety == Wireless access points are also often close to humans, but the drop-off in power over distance is fast, following the inverse-square law. The position of the United Kingdom's Health Protection Agency (HPA) is that "...radio frequency (RF) exposures from WiFi are likely to be lower than those from mobile phones". It also saw "...no reason why schools and others should not use WiFi equipment". In October 2007, the HPA launched a new "systematic" study into the effects of WiFi networks on behalf of the UK government, in order to calm fears that had appeared in the media in the period up to that time. Dr Michael Clark of the HPA says published research on mobile phones and masts does not add up to an indictment of WiFi. == See also == Rendezvous delay Wireless access point Wireless community network Wireless grid Wireless LAN client comparison Wireless site survey Network simulation Optical mesh network Wireless mesh network Wireless mobility management == References == == Further reading == Wireless Networking in the Developing World: A practical guide to planning and building low-cost telecommunications infrastructure (PDF) (2nd ed.). Hacker Friendly LLC. 2007. p. 425. Pahlavan, Kaveh; Levesque, Allen H (1995). Wireless Information Networks. John Wiley & Sons. ISBN 0-471-10607-0. Geier, Jim (2001). Wireless LANs. Sams. ISBN 0-672-32058-4. Goldsmith, Andrea (2005). Wireless Communications. Cambridge University Press. ISBN 0-521-83716-2. Lenzini, L.; Luise, M.; Reggiannini, R. (June 2001). "CRDA: A Collision Resolution and Dynamic Allocation MAC Protocol to Integrate Date and Voice in Wireless Networks". IEEE Journal on Selected Areas in Communications. 19 (6). IEEE Communications Society: 1153–1163. doi:10.1109/49.926371. ISSN 0733-8716. Molisch, Andreas (2005). Wireless Communications. Wiley-IEEE Press. ISBN 0-470-84888-X. Pahlavan, Kaveh; Krishnamurthy, Prashant (2002). Principles of Wireless Networks – a Unified Approach. Prentice Hall.
ISBN 0-13-093003-2. Rappaport, Theodore (2002). Wireless Communications: Principles and Practice. Prentice Hall. ISBN 0-13-042232-0. Rhoton, John (2001). The Wireless Internet Explained. Digital Press. ISBN 1-55558-257-5. Tse, David; Viswanath, Pramod (2005). Fundamentals of Wireless Communication. Cambridge University Press. ISBN 0-521-84527-0. Kostas Pentikousis (March 2005). "Wireless Data Networks". Internet Protocol Journal. 8 (1). Retrieved 29 August 2011. Pahlavan, Kaveh; Krishnamurthy, Prashant (2009). Networking Fundamentals – Wide, Local and Personal Area Communications. Wiley. ISBN 978-0-470-99290-6. == External links ==
A municipal wireless network is a citywide wireless network. This usually works by providing municipal broadband via Wi-Fi to large parts or all of a municipal area by deploying a wireless mesh network. The typical deployment design uses hundreds of wireless access points deployed outdoors, often on poles. The operator of the network acts as a wireless internet service provider. == Overview == Municipal wireless networks go far beyond the existing piggybacking opportunities available near public libraries and some coffee shops. The basic premise of carpeting an area with wireless service in urban centers is that it is more economical for the community to provide the service as a utility than to have individual households and businesses pay private firms for such a service. Such networks are capable of enhancing city management and public safety, especially when used directly by city employees in the field. They can also be a social service to those who cannot afford private high-speed services. When the network service is free and a small number of clients consume a majority of the available capacity, operating and regulating the network might prove difficult. In 2003, Verge Wireless formed an agreement with Tropos Networks to build a municipal wireless network in the downtown area of Baton Rouge, Louisiana. Carlo MacDonald, the founder of Verge Wireless, suggested that it could give cities a way to improve economic development and allow developers to build mobile applications that make use of faster bandwidth. Verge Wireless built networks for Baton Rouge, New Orleans, and other areas. Some applications include wireless security cameras, police mug shot software, and location-based advertising. In 2007, some companies with existing cell sites offered high-speed wireless services where the laptop owner purchased a PC card or adapter based on EV-DO cellular data receivers or WiMAX rather than 802.11b/g.
A few high-end laptops at that time featured built-in support for these newer protocols. WiMAX is designed to implement a metropolitan area network (MAN) while 802.11 is designed to implement a wireless local area network (LAN). However, the use of cellular networks is expensive for consumers, as they are often on limited data plans. In the 2010s larger cities embraced the smart city concept to tackle problems such as traffic congestion and crime, encourage economic growth, respond to the effects of climate change, and improve the delivery of city services. However, by 2018 it had become clear that the private sector could not be relied upon to build up city-wide wireless networks to meet the smart city objectives of municipal governments and public utility providers. == Finance == The construction of municipal wireless networks is a significant part of their lifetime costs. Usually, a private firm works with local government to construct a network and operate it. Financing is usually shared by both the private firm and the municipal government. Once operational, the service may be free to users via public finance or advertising, or may be a paid service. Among deployed networks, usage as measured by number of distinct users has been shown to be moderate to light. Private firms serving multiple cities sometimes maintain an account for each user, and allow the user a limited amount of mobile service in the cities covered. As of 2007, some municipal Wi-Fi deployments were delayed as the private and public partners negotiated the business model and financing. == Corporate city-wide wireless networks == Google WiFi is entirely funded by Google. Despite a failed attempt to provide citywide WiFi through a partnership with internet service provider Earthlink in 2007, the company claims that it is working to provide a wireless network for the city of San Francisco, California, although there is no specified completion date.
Some other projects that are still in the planning stages have pared back their planned coverage from 100% of a municipal area to only densely commercially zoned areas. One of the most ambitious planned projects is to provide wireless service throughout Silicon Valley, but the winner of the bid seems ready to request that the 40 cities involved help cover more of the cost, which has raised concerns that the project will ultimately be too slow to market to be a success. Advances in technology in 2005–2007 may allow wireless community network projects to offer a viable alternative. Such projects have an advantage in that, as they do not have to negotiate with government entities, they have no contractual obligations for coverage. A promising example is Meraki's demonstration in San Francisco, which already claims 20,000 distinct users as of October 2007. In 2009, Microsoft and Yahoo also provided free wireless to select regions in the United States. Yahoo's free WiFi was made available for one year to the Times Square area in New York City beginning November 10, 2009. Microsoft made free WiFi available to select airports and hotels across the United States, in exchange for one search on the Bing search engine by the user. The City of Adelaide in South Australia, in collaboration with the South Australian Government, operates a meshed network, "Adelaide Free WiFi". For the past five years the network has attracted some 8,000 daily users, and its popularity continues to grow despite the proliferation of 4G technology. == Criticism and externalities == Municipal wireless networks face opposition from telecommunications providers, particularly in the United States, South Africa, India and the European Union. In the 2000s telecommunications providers argued that it is neither economical nor legal for municipal governments to own or operate such businesses.
The dominant type of wireless network is the private wireless local area network (WLAN), for which individuals or businesses pay a subscription to a local carrier. In 2006 the US Federal Trade Commission expressed concerns about such private-public partnerships as trending towards a franchise monopoly. Within the United States, providing a municipal wireless network was not recognized as a priority. Some have argued that the benefits of the public approach may exceed the costs, similar to cable television. In the early 2010s concerns were articulated that a considerable percentage of the world population did not have access to affordable Internet access. Despite the growing digitalization of business and government services, 37 percent of the European and 22 percent of the North American population did not have affordable access to the Internet in 2009. Because local governments and municipalities in rural economies either could not fund wireless networks or did not consider it a priority, numerous communities across the world have built and funded autonomous community wireless networks (CWNs), taking advantage of the license-free 2.4 GHz spectrum and open source software. The former New York state politician and lobbyist Thomas M. Reynolds argues that unintended externalities are possible as a result of local governments providing Internet service to their constituents. A private service provider could choose to offer limited or no service to a region if that region's largest city opted to provide free Internet service, thus eliminating the potential customer base. Because the private sector receives no money from taxpayers, it cannot compete with the subsidized service, and its withdrawal then prevents other municipalities in the region from benefiting from the private provider's services.
At the same time, the smaller municipalities would not benefit from the free service provided by the larger city, because that service is subsidized by the larger city's taxpayers and is not concerned with maximizing profit. Government-provided broadband is not structured to generate income, and the private sector cannot compete with it profitably; by this argument, municipal wireless networks are anticompetitive. == Cities with municipal wireless service == In many cases several points or areas are covered, without blanket area coverage. === Africa === Gaborone, Botswana - rolling out free Wi-Fi to the whole city. Francistown, also in Botswana, has a similar initiative. Luxor, Egypt - pilot, paid service in tourist areas Sharm el-Sheikh, Egypt - pilot, paid service, tourist areas, EgyNet Johannesburg - City of Johannesburg is currently rolling out free Wi-Fi to many suburbs as well as the city center. Pretoria, South Africa - the City of Tshwane offers free Wi-Fi to residents around the city, TshWi-Fi Mombasa, Kenya === East Asia === ==== China ==== Free public WiFi in tourist areas of big cities, railway stations, airports, and governmental facilities in Shanghai, Beijing, Tianjin, Harbin, Shenyang, Shenzhen, Kunming, Hangzhou, Suzhou, Wuxi, Nanjing, Xi'an, Chengdu, Chongqing, Fuzhou, Ningbo, Foshan, Dalian, Changchun, Qingdao, Yantai, Dongguan, Macau, Huangshan, Hefei, Guiyang, and Guangzhou Hong Kong - most are subscribed, paid services, but free service in selected governmental facilities is also available Shanghai - city network covering tourist areas, governmental facilities, and the districts of Jiading, Minhang, Pudong, Songjiang, Baoshan, and Puxi. Public WiFi in various shopping malls, restaurants, stores, along with Pudong Airport, Hongqiao Airport, and all railway stations.
Beijing - Citywide network covers most districts, including downtown, along with public WiFi by stores, shopping malls, and restaurants, along with Government Facilities, transportation centers, and Beijing Capital International Airport. Tianjin - Citywide network, along with Tourist areas and railway stations including Tianjin Binhai International Airport Harbin - Network in downtown, railway stations, Shopping malls, and Harbin Taiping International Airport Shenyang - Railway Stations, Tourist Areas, Shopping malls, and Shenyang Taoxian International Airport Shenzhen - Limited to Downtown, Tourist areas, Shopping malls, railway stations, and Shenzhen Bao'an International Airport Hangzhou - Downtown WiFi, tourist areas, railway stations, and Hangzhou Xiaoshan International Airport Suzhou - Downtown WiFi, tourist areas, and railway stations. Wuxi - Sunan Shuofang International Airport, Downtown, Tourist areas, railway stations, and shopping malls. Nanjing - Downtown, along with full district coverage, tourist areas, Railway stations, shopping malls, plazas, and Nanjing Lukou International Airport Xi'an - Downtown, tourist areas, railway stations, shopping malls, and Xi'an Xianyang International Airport Chengdu - Coverage in many areas, including downtown, plazas, and tourist areas, including Chengdu Shuangliu International Airport Chongqing - Downtown coverage, railway stations, tourist areas, and Chongqing Jiangbei International Airport Fuzhou - Coverage in downtown, railway stations, tourist areas, and Fuzhou Changle International Airport Ningbo - Tourist areas, railway stations, and Ningbo Lishe International Airport Foshan - Downtown Coverage, Tourist Areas, railway stations, and Foshan Shadi Airport Dalian - Downtown Coverage, railway stations, tourist areas, and Dalian Zhoushuizi International Airport Changchun - Downtown coverage, railway stations, tourist areas, shopping malls, and Changchun Longjia International Airport Qingdao - Downtown
coverage, railway stations, tourist areas, shopping malls, and Qingdao Liuting International Airport Yantai - Downtown coverage, railway stations, tourist areas, shopping malls, and Yantai Penglai International Airport Dongguan - Downtown coverage, railway stations, tourist areas, shopping malls, and plazas, including shops, and communities. Macau - Downtown Coverage, including transportation centers, tourist areas, shopping malls, and Macau International Airport Huangshan - Downtown Coverage, including transportation centers, tourist areas, shopping malls, and Huangshan Tunxi International Airport Hefei - Downtown Coverage, including transportation centers, tourist areas, shopping malls, and Hefei Xinqiao International Airport Guiyang - Downtown Coverage, including transportation centers, tourist areas, shopping malls, and Guiyang Longdongbao International Airport Guangzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Guangzhou Baiyun International Airport Wuhan - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Wuhan Tianhe International Airport Jinan - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Jinan Yaoqiang International Airport Ordos - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Ordos Ejin Horo Airport Xiamen - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Xiamen Gaoqi International Airport Zhengzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Zhengzhou Xinzheng International Airport Changsha - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Changsha Huanghua International Airport Shijiazhuang - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Shijiazhuang Zhengding International Airport Nanning - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Nanning Wuxu International 
Airport Luoyang - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Luoyang Beijiao Airport Haikou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Haikou Meilan International Airport Xuzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Xuzhou Guanyin Airport Nanchang - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Nanchang Changbei International Airport Changzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Changzhou Benniu Airport Guilin - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Guilin Liangjiang International Airport Zhuhai - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Zhuhai Jinwan Airport Wenzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Wenzhou Longwan International Airport Tangshan - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Tangshan Sannühe Airport Lanzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Lanzhou Zhongchuan International Airport Nantong - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Nantong Xingdong Airport Taiyuan - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Taiyuan Wusu International Airport Shantou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Jieyang Chaoshan International Airport Yangzhou - Downtown Coverage, tourist areas, transportation centers, shopping
malls, and Yangzhou Taizhou Airport Quanzhou - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Quanzhou Jinjiang International Airport Kaifeng - Downtown Coverage Yichang - Downtown Coverage, tourist areas, transportation centers, shopping malls, and Yichang Sanxia Airport Nearly all cities have free WiFi coverage, hosted either by their local service carrier or city government; all railway stations and airports in China have free WiFi. ==== Taiwan ==== Taiwan - iTaiwan, free Wi-Fi covering government offices, tourist attractions, and transportation service areas, constructed by the National Development Council. Taipei - Taipei Free Public Wi-Fi and paid service Wifly by Q-Ware Communications, Inc. New Taipei - free service in specific public areas in the city === South Asia === ==== India ==== Ahmedabad - Reliance Jio started free 4G services in select areas Bangalore - free coverage of M.G. Road and Brigade Road. Delhi - free Wi-Fi service in Delhi's Khan Market (August 2014), free WiFi service in Delhi's Connaught Place (November 2014), free Wi-Fi service at New Delhi Railway Station (December 2014) Greater Noida - paid, operated by Maksat Technologies (P) Ltd. Kolkata, India - free 4G service by Reliance Jio Faridabad, India - paid Wi-Fi Internet services being deployed by CSC E-governance Services in all Village Gram Panchayats Puducherry, India - paid Wi-Fi Internet services being deployed by CSC E-governance Services in all Village Gram Panchayats Jharkhand, India - paid Wi-Fi Internet services being deployed by CSC E-governance Services in all Village Gram Panchayats ==== Nepal ==== Kathmandu - Paid services through multiple providers such as wlink, NTC ==== Pakistan ==== Islamabad - Free PTCL Char G WiFi for Metro Bus, stations and routes. Lahore - Free WiFi service throughout the city. Rawalpindi - Free WiFi Service. Multan - Free WiFi Service.
Karachi - Free Telenor WiFi https://propakistani.pk/2014/09/22/telenor-launches-wifi-hotspots-in-karachi/ https://wifispc.com/pakistan === Southeast Asia === ==== Cambodia ==== Phnom Penh - WiCam, Ltd. ==== Indonesia ==== Malang - Indoken Wireless offers roaming connectivity, T-Fi Beta offers connectivity on public transportation, free access at resource centers. ==== Malaysia ==== Kuala Lumpur - free, Wireless@KL covering major commercial areas. Penang - Penang Free Wi-Fi started in 2009, covers some commercial spots in the state, mostly on Penang Island. Port Dickson Sarawak - paid deConnexion available in most business districts in major towns in the state of Sarawak. Kota Kinabalu - free through KK City WiFi starting from 2017 for local residents and tourists. Each user is entitled to 10 GB of quota per day with no time limit. ==== Philippines ==== Balanga, Bataan - free in downtown and several tourist attractions Bogo, Cebu - free WiFi service in most government facilities provided by the city government and ICT Office. Calbayog, Samar - downtown area ==== Singapore ==== Singapore - free, Wireless@SG with more than 5,000 hotspots ==== Thailand ==== Bangkok - free service for Bangkok citizens provided by True Corporation. ==== Vietnam ==== Hạ Long Hội An Da Nang Huế === Europe === ==== Austria ==== Vienna - free service around the city through the city lights, at major train stations, and in the Vienna International Airport ==== Belgium ==== Brussels - UrbiZone covers some institutions for higher education, administration buildings, and public hospitals. ==== Bulgaria ==== Plovdiv - free throughout the city center and some of the city's outskirts. ==== Croatia ==== Samobor - paid & free service by NGO SMBWireless. Velika Gorica - free in the city center and nearby villages as a part of e-Gorica. ==== Estonia ==== Tallinn - Tiigrihüpe free WiFi covers the capital city Tallinn and most of the country.
==== Finland ==== Helsinki - free, city-operated network in the city center Oulu - free panOULU service. ==== France ==== Paris - free in many parks and in municipal libraries, museums, and public places. Some suburbs do as well. ==== Germany ==== Munich - several areas downtown Stuttgart - service along the main shopping street Königstraße and a few other locations. Karlsruhe - most of the city center and several areas in the outskirts. ==== Greece ==== Heraklion - free, city-operated network, covers major city squares and roads. Lagkadas - free, city-operated, covers most of the city and is expanding to cover towns in Lagkadas municipality. ==== Ireland ==== Dublin - free WiFi in areas of the city centre. ==== Italy ==== Bologna - free service in and around the historical city center. Comiso - free service in and around the historical city center. Milan - free service in and around the historical city center and the Milano Malpensa airport. Ravenna - throughout the historic center of Ravenna there is a free wi-fi service called "Ravenna WiFi." Rome - The WiFimetropolitano project consists of the installation of WiFi devices for free Internet access in squares, libraries and meeting places in the metropolitan area. Venice - free to residents and city users. Trento - free service in and around the historical city centre. ==== Lithuania ==== Kaunas - free, in some streets of the city. ==== Luxembourg ==== Luxembourg - paid & free service in downtown, Central Station Hotcity and European district. ==== Moldova ==== Chişinău - two metropolitan Wi-Fi networks exist: StarNet and Orange. StarNet's paid and free coverage area includes the city's central streets and residential districts as well as parks. Orange's paid coverage area includes the city's mass transit areas and bus stops.
==== Netherlands ==== Almere - free municipal Wi-Fi covering Downtown Almere Hilversum - free municipal Wi-Fi covering Downtown Hilversum and the shopping area around de Gijsbrecht van Amstelstraat in the southern part of town Leiden - free, community project covering city and region by Wireless Leiden ==== Norway ==== Trondheim - paid and free service in city centre. ==== Poland ==== Rzeszów - free, city-operated in participating public schools. Wrocław - free service by Miejski Internet, in a few places. ==== Romania ==== Brașov, Romania - free WiFi over the entire city deployed into existing 5G network by worldwifizone.com of Ireland, over 40,000 daily users at peak. Roman, Romania - free, deployed by Minisoft Romania as part of the MetroWireless free internet access project, paid by advertisements, covers much of the city, expanding to nearby villages Vatra Dornei, Romania - 85% of the city covered with free WiFi deployed by worldwifizone.com, using free guest login and Facebook connect. ==== Russia ==== Moscow, Russia - MaximaTelecom, award-winning public network on the Moscow Metro and public transport Moscow, Russia - paid service, Golden Telecom ==== Serbia ==== Zrenjanin, Serbia - free, city center only Pančevo, Serbia - free, city center only, with session time limit. ==== Slovenia ==== Ljubljana, Slovenia - free for 1 hour, city center only ==== Spain ==== Moralzarzal, Spain - free for registered citizens, limited time for visitors. Madrid, Spain - free and open Wi-Fi on the municipal bus system, EMT. ==== Sweden ==== Helsingborg, Sweden - unrestricted, free and city-operated in 220 locations around the town. SSID: Helsingborg. Helpdesk: #freewifihbg on most social platforms. Lidköping, Sweden - unrestricted, free and commercially operated. Available in town square. SSID: Lidkoping Örebro, Sweden - free, around Järntorget. Malmö, Sweden - free, operated by Pjodd.se, sharing around 65 access points around central town.
==== Switzerland ==== Geneva, Switzerland - free, city-operated ==== Ukraine ==== Kyiv, Ukraine - free WiFi in certain areas of city centre and Passenger Railway Station. ==== United Kingdom ==== Aberdeen - free access across city centre introduced in April 2017. Blackpool - free, 1.6 km area around city centre (Wireless Blackpool) Bristol - free, 3 km area around city centre Dundee - free access, limited to the redeveloped waterfront area from July 2018, with plans for wider coverage. Edinburgh - free coverage across the city centre was introduced in summer 2016. Fort William - free, town centre. Glasgow - free citywide access introduced in Scotland's largest city as part of an initiative called "Urban Wireless" by British Telecom in July 2014. Inverness - free, city centre. Liverpool - paid service, covering central areas. Newcastle, Northern Ireland Norwich - free, city center and university, 18-month pilot (Openlink) Sheffield - free, covering the entire city centre; under development since December 2017. York - free, entire city centre, museums, libraries and universities === North America === ==== Canada ==== Calgary, Alberta - paid service operated by WestNet Wireless, first city Wi-Fi in Canada Fredericton, New Brunswick - free, Fred-e Zone Iqaluit, Nunavut - Community Free Access and Paid Service provided by Meshnet, and service of mnemonics.ca London, Ontario - free (pilot project) on Dundas Street, provided by London Downtown Business Association Mississauga, Ontario - free, Wireless access at Mississauga Libraries, Community Centres, Arenas and select transit stops Moncton, New Brunswick - free, Service provided by Red Ball Internet of Moncton. Wireless access available at Arenas and Moncton's Public Library. It was also the first city in Canada to provide wireless internet on its public transportation fleet.
Montreal, Quebec - free, community supported Ilesansfil Moose Jaw, Saskatchewan - free, city center and campus Prince Albert, Saskatchewan - free, city center and campus Quebec City, Quebec - free, community supported ZAP Quebec Regina, Saskatchewan - free, city center and campus Saint-Hyacinthe, Quebec - free service in selected parks, municipal buildings and commercial center, provided by ZAP Monteregie Saskatoon, Saskatchewan - free, city center and campus Sherbrooke, Québec - free, limited to downtown, provided by ZAP Sherbrooke Shawinigan, Quebec - free service, limited to downtown. City-operated. Stratford, Ontario - paid service, covers entire city. Toronto, Ontario - free service provided by Wireless Toronto and the Toronto Public Library system for locations throughout the Greater Toronto Area Windsor, Ontario - free service for the downtown core provided by the Downtown Windsor Business Improvement Association. ==== United States ==== Akron, Ohio - ConnectAkron Albany, New York - Albanyfreenet Albuquerque, New Mexico Amherst, Massachusetts - free service in downtown area Anderson, Indiana - free WiFi Arcata, California Baldwin, Georgia - free public WiFi available in select locations. Baltimore, Maryland - free WiFi Bethany Beach, Delaware - beach and boardwalk free WiFi Binghamton, New York - free service Boston, Massachusetts - Wicked Free WiFi available throughout the City of Boston for the public to use Brevard County, Florida - free at all County Library buildings Bristol, Virginia Burlington, North Carolina - free public WiFi in select downtown areas.
Burlington, Vermont - citywide WiFi hotspots through Burlington Telecom Cambridge, Massachusetts - free (pilot), through the Cambridge Public Internet (CPI) Initiative Cedar Rapids, Iowa - free WiFi downtown and around the city Charleston, South Carolina - free public wi-fi in Marion Square Chattanooga, Tennessee - free public WiFi citywide; operated by EPB Chicago - free public WiFi in many public places; municipally operated; no technical support Clearwater Beach, Florida - free service Cleveland, Ohio - free service in the Old Brooklyn neighborhood Corpus Christi, Texas - paid service, Earthlink Decatur, Georgia - free WiFi in Downtown Decatur Dubuque, Iowa - free, city-operated, provided by Mediacom; covers the downtown area since 2006. El Paso, Texas - free WiFi in Downtown El Paso. Englewood, New Jersey - free ultra-fast WiFi throughout almost two miles of downtown Englewood (2014). Escondido, California - free service in downtown area and Public Library. Fenton, Michigan - free or paid service in downtown area and public parks, through Tri-County Wireless, Inc. Gerlach, Nevada - gifted to the public by Black Rock City LLC. Greensboro, North Carolina - free WiFi in Downtown Greensboro, Greensboro Historical Museum, The Depot, and others. Harrisburg, North Carolina - free, Time Warner Cable Hattiesburg, Mississippi - free WiFi in the downtown area of Front, Main and Pine Streets and the Oaks Cultural District. Hollywood, Florida - Johnson Controls, Sling Broadband Wimax deploy municipal Wi-Fi network for wireless automated meter reading (AMR), public safety and free Wi-Fi service for residents.
Houston, Texas - free service in downtown area and selected neighborhoods around the city; free service also available in all Houston Public Library and Harris County Public Library branches
Honolulu, Hawaii - free, Tri-Net Solutions LLC
Hiawatha, Iowa - free WiFi at public parks and the public library
Indianapolis, Indiana - free AT&T WiFi downtown
Kansas City, Missouri - free WiFi downtown through Sprint/AT&T
Kennesaw, Georgia - free, City of Kennesaw WiFi - available in city parks and other areas [9]
Kenosha, Wisconsin - low-cost paid WiFi in downtown Kenosha, service provided by Infinite Technologies LLC [10]
Kenosha, Wisconsin - a Kenosha County proposal for lake coverage, pre-approved by the County Board but not by the City of Kenosha, was declined by the City on 2/13/2014 over concerns that the county-backed service would undermine the existing small-business WiFi provider.
Kissimmee, Florida - free, Bright House Networks
Lafayette, Louisiana
Lawrence, Kansas - free, Lawrence Freenet, a not-for-profit company that works in conjunction with the City of Lawrence and local internet providers [11]
Leverett, Massachusetts
Lexington, Kentucky - free (SSID "LexingtonPublic"); originally only for police, firefighters and civil service employees; available along major streets miles outside downtown and in the downtown, East End and Cardinal Hill neighborhoods
Linden, Michigan - free or paid service in downtown area and public parks, through Tri-County Wireless, Inc.
Los Lunas, New Mexico - http://www.loslunasnm.gov/196/Wi-Fi-Service
Longmont, Colorado - municipal gigabit fiber citywide.
Madison, Wisconsin - paid, only covers central part of city.
Marion, Illinois - free. Initially just the downtown square, with plans to expand to public safety.
Maywood, California - free. Initially just the business corridors, now citywide.
Minneapolis, Minnesota - paid, USI Wireless
Mountain View, California - free (no longer operating) - Google WiFi
Naperville, Illinois - free, downtown area only, known as "napernet"
New York City - LinkNYC began service in 2016; intended to have thousands of stations
Newton, North Carolina - free, downtown area [12]
Ocala, Florida - Free, Downtown Square
Pacifica, California - paid service, PacificaNet
Palm Bay, Florida - free at City Hall and six parks, Map [13]
Peachtree City, Georgia - free at two parks and the public library/City Hall plaza
Philomath, Oregon - free 300 kbit/s access, paid tiers. Serves city limits; also has APs in downtown Corvallis.
Pittsburgh, Pennsylvania - free downtown 2 hours per day
Plattsmouth, Nebraska - free in all public buildings (Court House, Public Library, City Hall, Community Center) and Main Street
Ponca City, Oklahoma - covers the whole city
Powell, Ohio - Free, covers downtown
Rochester, Minnesota - Downtown in Peace Plaza, near the Mayo Clinic and University of Minnesota Rochester
Rockport, Maine
San Jose, California - Free in downtown area, and in key low-resource neighborhoods through the East Side Access partnership with East Side Union High School District
Santa Clara, California - Free, outdoors in most areas of the city
Santa Monica, California - Free, outdoors in most areas of the city
Skokie, Illinois - Downtown and park areas
Southaven, Mississippi - paid service, city-operated, branded as Magnoliawave
South Bend, Indiana - Free service intended to establish downtown as a meeting place and bridge the digital divide
Spokane, Washington - two free hours/day, paid after.
Statesville, North Carolina - free access
Storrs, Connecticut - used for students of The University of Connecticut
Springfield, Ohio - free, downtown and Clark State Community College campus
The Dalles, Oregon - free, via Google grant to downtown and key event areas. City-operated.
Wilkes-Barre, Pennsylvania - Day pass, monthly service, or pre-paid wireless data cards are available
Williamsburg, Virginia - free, limited to Merchants Square
Winston-Salem, North Carolina - free, limited to downtown. City-operated; no technical support.
Warwick, Massachusetts - paid service, municipally operated
Yazoo City, Mississippi - Paid network. Branded as Yazoo Wireless, provided by CYTEC
Yorktown, Indiana - Free, limited to downtown
In addition, a few U.S. states, such as Illinois, Iowa, and Massachusetts, offer free Wi-Fi service at welcome centers and roadside rest areas located along major Interstate highways.
==== Mexico ====
Guadalajara, Jalisco - Free, 150 parks and municipal areas. 1 hour of continuous connection and 2 hours of total connection time allowed per day. In operation since 2011. Installation and operation are funded by the municipal government. A few of the areas are provided with free electrical outlets to charge or use your device.
Mérida, Yucatán - Free. Most major city parks and other areas. Provided by Axtel and Telmex, which usually also provide standing tables with power outlets. The parks are identified by "parque en linea" (online park) signs and branding of the utility providing the connectivity. The SSID is usually "park en linea".
=== Oceania ===
Margaret River, Western Australia - Free public WiFi provided by the Margaret River Rotary Club, covering the main street all the way up to Reuther Park at the corner of Bussell Hwy & Wallcliffe Rd, Margaret River WA 6285.
Melbourne, Australia - VICFREE WiFi is available outdoors in the Melbourne CBD, including Bourke St Mall, Queen Victoria Market, Melbourne Convention and Exhibition Centre, Melbourne Museum, and platforms at CBD train stations. It is also available in central Ballarat and central Bendigo.
Note: Telstra also has Telstra Air and Fon hotspots available to Telstra and Fon customers Australia-wide.
Adelaide, Australia - AdelaideFree WiFi is a contiguous network available throughout the CBD, provided by Internode
Auckland, New Zealand - Citywide network covering all popular areas across Auckland, including the CBD and Waterfront [14], from Tomizone.
Perth, Australia - paid, RoamAD-based metro-wide coverage in the CBD by metromesh
Hawke's Bay, New Zealand - prepaid access and 1 hour free daily, available at many locations region-wide from NOW
Wellington, New Zealand - Free WiFi at the Waterfront, CBD & Airport
Brisbane, Australia - in public areas and the CBD
Nelson, New Zealand - Public areas within CBD
=== South America ===
Aparecida, Brazil - Free service
Belo Horizonte, Brazil
La Plata, Argentina - free, city center only
Buenos Aires, Argentina - free, without registration, 120 spots all over the city
General Lavalle, Argentina - Free service
Resistencia, Chaco, Argentina - free, without registration, 12 spots all over the city.
Sud Mennucci, Brazil - free, limited to downtown. City-operated.
Medellin, Colombia - City-operated free WiFi in over 180 locations.
Miraflores, Lima, Peru - Free service, various spots across the district. City-operated.
=== Planned ===
==== Africa ====
Stellenbosch, South Africa - Free service. Town centre online since February 25, 2012. Coverage to be increased to the whole town.
Northpine, South Africa - Paid. WISP and media delivery services as well as video surveillance focused on the suburb. Community social portal for information sharing, collaboration and local business partnerships. Proof of concept to be expanded to neighbouring areas.
Harare, Zimbabwe - Available around the city at various hotspots. Provided by ZOL. 1 hour time limit, paid after.
==== South Asia ====
Delhi, India - The Delhi Government constituted a Task Force (March 2015) to provide free Wi-Fi connectivity in Delhi.
The Task Force is part of the Delhi Dialogue Commission (DDC), an advisory body of the Aam Aadmi Party government, which decided to consult with various stakeholders to implement its pre-poll promise of providing free Wi-Fi connectivity across the city; the DDC, chaired by Chief Minister Arvind Kejriwal, asked people for suggestions on the free WiFi plan (March 2015).
Dhaka, Bangladesh - Free WiFi is now available on Dhaka Airport Road, at Dhanmondi Lake Park, on selected BRTC buses, at Kamlapur Railway Station and the Airport rail station, and in the Dhanmondi Residential Area. The free WiFi networks are provided by telecom operators, notably Robi and Aamra. The service is to be rolled out across all of northern Dhaka by December 2018.
Mumbai, India
NOIDA, India
Karachi, Pakistan
==== Southeast Asia ====
Makati, Philippines
==== West Asia ====
Tel Aviv - Downtown, and later the north part as well.
==== Europe ====
Swindon, Wiltshire, UK
Leicester, UK
London, UK (London Underground)
==== North America ====
Mexico City, Mexico - free, coupled with new surveillance system (planned 2008)
Panama
Tecumseh, Ontario
===== United States =====
Oakland County, Michigan - free 128 kbit/s, paid for high speed, Wireless Oakland
Sacramento, California
Silicon Valley, California - Joint Venture Wireless Project - free, prototyped for Palo Alto and San Carlos by 2008, Silicon Valley Metro Connect.
St. Louis Park, Minnesota - Set up, but not yet deployed due to contracting disputes.
Tampa, Florida - Tampabayconnect.net
Waukesha, Wisconsin
==== Oceania ====
Brisbane, Australia
Canberra, Australia
Melbourne, Australia
Ballarat, Australia
Bendigo, Australia
==== South America ====
Jacareí, Brazil
São José dos Campos, Brazil
São Paulo, Brazil
=== Canceled or closed ===
Baton Rouge, United States
Charleston, South Carolina, United States (on hold)
Dublin, Ireland
Groningen, Netherlands - Municipal wireless network with an open service model covering the entire city; first parts operational, with 2010–2012 expansion to 54 sq km
MetroFi - free with advertisements, deployed to 10 cities in the western United States, closed in 2008
Milwaukee, Wisconsin, United States - paid service, Midwest Fiber Networks, target date: March 2008
New Orleans, Louisiana, United States
Parramatta, Australia
Portland, Oregon, United States
Puerto Montt, Chile
Regional Municipality of Waterloo, Canada - plans for a paid service covering the entire Waterloo Region, specifically Kitchener, Waterloo and Cambridge, Ontario (the "Tri-City Area"), to be provided by Atria Networks; scrapped in 2011 after Atria was acquired by Rogers Communications, with no explanation given.
Riverside, California, United States
San Francisco, California, United States
Sydney, Australia
Tempe, Arizona, United States - paid service, Kite Networks
Dubrovnik, Croatia - closed when the new mayor took over
== See also ==
List of deployed WiMAX networks
Municipal broadband
Switched mesh
== References ==
== External links ==
How Municipal WiFi Works at HowStuffWorks
Wikipedia/Municipal_wireless_network
Network topology is the arrangement of the elements (links, nodes, etc.) of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbusses and computer networks. Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network (e.g., device location and cable installation), while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their logical topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model. Examples of network topologies are found in local area networks (LAN), a common computer network installation. Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. A wide variety of physical topologies have been used in LANs, including ring, bus, mesh and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are primarily distributed control system networks of one or more controllers interconnected with sensors and actuators over, invariably, a physical bus topology. == Topologies == Two basic categories of network topologies exist, physical topologies and logical topologies. 
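Since the article models a network as a graph of nodes and links, the difference between shapes such as star and ring can be made concrete with a small sketch. The following Python example (the function names and node numbering are illustrative, not from any standard) builds the two topologies as sets of links and compares node degrees:

```python
# Sketch: modeling physical topologies as graphs.
# Nodes are integers 0..n-1; links are unordered pairs.

def star(n):
    """Star topology: peripheral nodes 1..n-1 each link to central node 0."""
    return {(0, i) for i in range(1, n)}

def ring(n):
    """Ring topology: node i links to node (i + 1) mod n, closing the loop."""
    return {(i, (i + 1) % n) for i in range(n)}

def degrees(n, links):
    """Number of links touching each node."""
    deg = [0] * n
    for a, b in links:
        deg[a] += 1
        deg[b] += 1
    return deg

print(degrees(5, star(5)))  # central node has degree 4, spokes degree 1
print(degrees(5, ring(5)))  # every node has degree 2
```

The degree lists make the geometric difference visible without drawing anything: a star concentrates all links at one node, while a ring spreads them evenly.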
The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber optical mediums, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits. In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but is wired as a physical star from the media access unit. Physically, Avionics Full-Duplex Switched Ethernet (AFDX) can be a cascaded star topology of multiple dual redundant Ethernet switches; however, the AFDX virtual links are modeled as time-switched single-transmitter bus connections, thus following the safety model of a single-transmitter bus topology previously used in aircraft. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches. == Links == The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cables (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer. 
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
=== Wired technologies ===
The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation between the conductors helps maintain the characteristic impedance of the cable, which can help improve its performance. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Signal traces on printed circuit boards are common for board-level serial communication, particularly between certain types of integrated circuits, a common example being SPI.
Ribbon cable (untwisted and possibly unshielded) has been a cost-effective medium for serial protocols, especially within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates.
Several serial network protocols can be deployed without shielded or twisted pair cabling, that is, with flat or ribbon cable, or a hybrid flat and twisted ribbon cable, should EMC, length, and bandwidth constraints permit: RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optical fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary.
Business and employee needs may override any cost considerations.
=== Wireless technologies ===
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 50 km (30 mi) apart.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geostationary orbit 35,786 km (22,236 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
=== Exotic technologies ===
There have been various attempts at transporting data over exotic media:
IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet.
Both cases have a large round-trip delay time, which gives slow two-way communication, but does not prevent sending large amounts of information. == Nodes == Network nodes are the points of connection of the transmission medium to transmitters and receivers of the electrical, optical, or radio signals carried in the medium. Nodes may be associated with a computer, but certain types may have only a microcontroller at a node or possibly no programmable device at all. In the simplest of serial arrangements, one RS-232 transmitter can be connected by a pair of wires to one receiver, forming two nodes on one link, or a Point-to-Point topology. Some protocols permit a single node to only either transmit or receive (e.g., ARINC 429). Other protocols have nodes that can both transmit and receive into a single channel (e.g., CAN can have many transceivers connected to a single bus). While the conventional system building blocks of a computer network include network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, gateways, and firewalls, most address network concerns beyond the physical network topology and may be represented as single nodes on a particular physical network topology. === Network interfaces === A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole. In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. 
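Each Ethernet MAC address is six octets, with the upper three octets identifying the NIC manufacturer and the lower three assigned by that manufacturer. A minimal sketch splitting a made-up example address into those two halves (the address and the helper name are hypothetical):

```python
def split_mac(mac):
    """Split a MAC address string into its manufacturer (OUI) half and
    its device-specific half, per the 3 + 3 octet layout."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1B:63:84:45:E6")  # made-up example address
print(oui)     # 00:1b:63 - identifies the NIC manufacturer
print(device)  # 84:45:e6 - assigned uniquely by that manufacturer
```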
To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
=== Repeaters and hubs ===
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal may be reformed or retransmitted at a higher power level, to the other side of an obstruction, possibly using a different transmission medium, so that the signal can cover longer distances without degradation. Commercial repeaters have extended RS-232 segments from 15 meters to over a kilometer. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. Repeaters work within the physical layer of the OSI model; that is, there is no end-to-end change in the physical protocol across the repeater, or repeater pair, even if a different physical layer may be used between the ends of the repeater, or repeater pair. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule. A repeater with multiple ports is known as a hub: an Ethernet hub in Ethernet networks, a USB hub in USB networks. USB networks use hubs to form tiered-star topologies. Ethernet hubs and repeaters in LANs have been mostly obsoleted by modern switches.
=== Bridges ===
A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. Bridges come in three basic types:
Local bridges: directly connect LANs.
Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: can be used to join LANs or connect remote devices to LANs.
=== Switches ===
A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches. Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
=== Routers ===
A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3).
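The learning behaviour described in the Switches section above (remember which port each source MAC address arrived on, forward known destinations to one port, flood unknowns to all other ports) can be sketched as follows; the class and the MAC/port values are illustrative:

```python
class LearningSwitch:
    """Minimal sketch of a layer-2 learning switch: it associates source
    MAC addresses with ports and floods frames to unknown destinations."""

    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src, dst):
        """Return the list of ports the frame is sent out on."""
        self.table[src] = in_port          # learn the source address
        if dst in self.table:
            return [self.table[dst]]       # forward out one known port
        return [p for p in self.ports if p != in_port]  # flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))  # unknown destination: flood to [2, 3, 4]
print(sw.receive(2, "bb:bb", "aa:aa"))  # aa:aa was learned on port 1: [1]
```

After the two frames above, the switch has learned both hosts, so subsequent traffic between them uses exactly one output port instead of flooding.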
The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a black hole because data can go into it, however, no further processing is done for said data, i.e. the packets are dropped. === Modems === Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using a digital subscriber line technology. === Firewalls === A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. == Classification == The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain. === Point-to-point === The simplest topology with a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel. Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is unimpeded communications between the two endpoints. 
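The routing-table lookup described in the Routers section can be sketched with Python's standard `ipaddress` module. The routes and next-hop names below are made-up examples; when several routes match, the most specific (longest) prefix wins:

```python
import ipaddress

# Sketch of a routing table; prefixes and next hops are made-up examples.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "hop-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "hop-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default"),
]

def lookup(dst):
    """Return the next hop of the most specific matching route,
    or None if no route matches (the packet is dropped)."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))   # hop-B (the /16 beats the /8)
print(lookup("192.0.2.1"))  # default
```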
The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law. === Daisy chain === Daisy chaining is accomplished by connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring. A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters. By connecting the computers at each end of the chain, a ring topology can be formed. When a node sends a message, the message is processed by each computer in the ring. An advantage of the ring is that the number of transmitters and receivers can be cut in half. Since a message will eventually loop all of the way around, transmission does not need to go both directions. Alternatively, the ring can be used to improve fault tolerance. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure. === Bus === In local area networks using bus topology, each node is connected by interface connectors to a single central cable. This is the 'bus', also referred to as the backbone, or trunk – all data transmission between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously. A signal containing the address of the intended receiving machine travels from a source machine in both directions to all machines connected to the bus until it finds the intended recipient, which then accepts the data. 
If the machine address does not match the intended address for the data, the data portion of the signal is ignored. Since the bus topology consists of only one wire it is less expensive to implement than other topologies, but the savings are offset by the higher cost of managing the network. Additionally, since the network is dependent on the single cable, it can be the single point of failure of the network. In this topology data being transferred may be accessed by any node. ==== Linear bus ==== In a linear bus network, all of the nodes of the network are connected to a common transmission medium which has just two endpoints. When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. To prevent this, the two endpoints of the bus are normally terminated with a device called a terminator. ==== Distributed bus ==== In a distributed bus network, all of the nodes of the network are connected to a common transmission medium with more than two endpoints, created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology because all nodes share a common transmission medium. === Star === In star topology (also called hub-and-spoke), every peripheral node (computer workstation or any other peripheral) is connected to a central node called a hub or switch. The hub is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the peripheral nodes on the network must be connected to one central hub. All traffic that traverses the network passes through the central hub, which acts as a signal repeater. The star topology is considered the easiest topology to design and implement. One advantage of the star topology is the simplicity of adding additional nodes. 
The primary disadvantage of the star topology is that the hub represents a single point of failure. Also, since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters.
==== Extended star ====
The extended star network topology extends a physical star topology by one or more repeaters between the central node and the peripheral (or 'spoke') nodes. The repeaters are used to extend the maximum transmission distance of the physical layer, the point-to-point distance between the central node and the peripheral nodes. Repeaters allow greater transmission distance, further than would be possible using just the transmitting power of the central node. The use of repeaters can also overcome limitations from the standard upon which the physical layer is based. A physical extended star topology in which repeaters are replaced with hubs or switches is a type of hybrid network topology and is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies. A physical hierarchical star topology can also be referred to as a tier-star topology. This topology differs from a tree topology in the way star networks are connected together. A tier-star topology uses a central node, while a tree topology uses a central bus and can also be referred to as a star-bus network.
==== Distributed star ====
A distributed star is a network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top-level connection point (e.g., two or more 'stacked' hubs, along with their associated star-connected nodes or 'spokes').
=== Ring ===
A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction.
When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring. Advantages: When the load on the network increases, its performance is better than that of a bus topology. There is no need for a network server to control the connectivity between workstations. Disadvantages: Aggregate network bandwidth is bottlenecked by the weakest link between two nodes. === Mesh === By Reed's law, the value of a fully meshed network grows exponentially with the number of subscribers, since communicating groups of any size, from any two endpoints up to and including all the endpoints, can be formed. ==== Fully connected network ==== In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to use packet switching or broadcasting. However, the number of connections grows quadratically with the number of nodes: {\displaystyle c={\frac {n(n-1)}{2}}.} This makes it impractical for large networks. In this topology, the failure of one node does not disrupt the other nodes in the network. ==== Partially connected network ==== In a partially connected network, certain nodes are connected to exactly one other node, while other nodes are connected to two or more nodes with a point-to-point link. This makes it possible to obtain some of the redundancy of a physically fully connected mesh topology without the expense and complexity required for a connection between every node in the network. === Hybrid === Hybrid topology is also known as hybrid network.
Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected. A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub. Snowflake topology is meshed at the core, but tree shaped at the edges. Two other hybrid network types are hybrid mesh and hierarchical star. == Centralization == The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes. If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. 
An active star network has an active central node that usually has the means to prevent echo-related problems. A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree structure has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest. To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will learn the layout of the network by listening on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only. Daisy chain topology is a way of connecting network nodes in a linear or ring structure. It is used to transmit messages from one node to the next until they reach the destination node. A daisy chain network can have two types: linear and ring. A linear daisy chain network is like an electrical series, where the first and last nodes are not connected. A ring daisy chain network is where the first and last nodes are connected, forming a loop. 
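The table-driven forwarding described above — learn the source address on each port, then forward only to the learned port once the destination is known — can be sketched as a toy model. This is an illustrative sketch only; the class and method names are invented and do not correspond to any particular switch implementation:

```python
class LearningSwitch:
    """Toy model of the address-learning behaviour described above."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}  # source address -> port (the lookup table held in memory)

    def handle_frame(self, src: str, dst: str, in_port: int):
        # Learn: record which port this source address was seen on.
        self.table[src] = in_port
        # Forward: use the table if the destination is known; otherwise
        # flood to every port except the one the frame arrived on.
        if dst in self.table:
            return [self.table[dst]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.handle_frame("A", "B", 0))  # B unknown -> flooded to ports [1, 2, 3]
print(sw.handle_frame("B", "A", 2))  # A was learned on port 0 -> forwarded to [0]
```

Once both endpoints have transmitted once, all subsequent traffic between them is forwarded to the intended destination only, as the text describes.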
== Decentralization == In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance. A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are {\displaystyle {\frac {n(n-1)}{2}}} direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications. == See also == == References == == External links == Tetrahedron Core Network: Application of a tetrahedral structure to create a resilient partial-mesh 3-dimensional campus backbone data network
Wikipedia/Network_topologies
Wikifunctions is a collaboratively edited catalog of computer functions to enable the creation, modification, and reuse of source code. It is closely related to Abstract Wikipedia, an extension of Wikidata to create a language-independent version of Wikipedia using its structured data. Provisionally named Wikilambda, the definitive name of Wikifunctions was announced on 22 December 2020 following a naming contest. Wikifunctions is the first Wikimedia project to launch since Wikidata in 2012. After three years of development, Wikifunctions officially launched in July 2023. == See also == Rosetta Code == References == == External links == Official website Project overview on Meta-Wiki Project updates on Meta-Wiki
Wikipedia/Wikifunctions
The Berlekamp–Massey algorithm is an algorithm that will find the shortest linear-feedback shift register (LFSR) for a given binary output sequence. The algorithm will also find the minimal polynomial of a linearly recurrent sequence in an arbitrary field. The field requirement means that the Berlekamp–Massey algorithm requires all non-zero elements to have a multiplicative inverse. Reeds and Sloane offer an extension to handle a ring. Elwyn Berlekamp invented an algorithm for decoding Bose–Chaudhuri–Hocquenghem (BCH) codes. James Massey recognized its application to linear-feedback shift registers and simplified the algorithm. Massey termed the algorithm the LFSR Synthesis Algorithm (Berlekamp Iterative Algorithm), but it is now known as the Berlekamp–Massey algorithm. == Description of algorithm == The Berlekamp–Massey algorithm is an alternative to the Reed–Solomon Peterson decoder for solving the set of linear equations. It can be summarized as finding the coefficients Λj of a polynomial Λ(x) so that for all positions i in an input stream S: {\displaystyle S_{i+\nu }+\Lambda _{1}S_{i+\nu -1}+\cdots +\Lambda _{\nu -1}S_{i+1}+\Lambda _{\nu }S_{i}=0.} In the code examples below, C(x) is a potential instance of Λ(x). The error locator polynomial C(x) for L errors is defined as: {\displaystyle C(x)=C_{L}x^{L}+C_{L-1}x^{L-1}+\cdots +C_{2}x^{2}+C_{1}x+1} or reversed: {\displaystyle C(x)=1+C_{1}x+C_{2}x^{2}+\cdots +C_{L-1}x^{L-1}+C_{L}x^{L}.} The goal of the algorithm is to determine the minimal degree L and C(x) which result in all syndromes {\displaystyle S_{n}+C_{1}S_{n-1}+\cdots +C_{L}S_{n-L}} being equal to 0: {\displaystyle S_{n}+C_{1}S_{n-1}+\cdots +C_{L}S_{n-L}=0,\qquad L\leq n\leq N-1.} Algorithm: C(x) is initialized to 1. L, the current number of assumed errors, is initialized to zero. N is the total number of syndromes, and n is used as the main iterator to index the syndromes from 0 to N−1. B(x), a copy of the last C(x) since L was updated, is initialized to 1. b, a copy of the last discrepancy d (explained below) since L was updated, is initialized to 1. m, the number of iterations since L, B(x), and b were updated, is initialized to 1. Each iteration of the algorithm calculates a discrepancy d. At iteration k this would be: {\displaystyle d\gets S_{k}+C_{1}S_{k-1}+\cdots +C_{L}S_{k-L}.} If d is zero, the algorithm assumes that C(x) and L are correct for the moment, increments m, and continues. If d is not zero, the algorithm adjusts C(x) so that a recalculation of d would be zero: {\displaystyle C(x)\gets C(x)-(d/b)x^{m}B(x).} The x^m term shifts B(x) so it follows the syndromes corresponding to b. If the previous update of L occurred on iteration j, then m = k − j, and a recalculated discrepancy would be: {\displaystyle d\gets S_{k}+C_{1}S_{k-1}+\cdots -(d/b)(S_{j}+B_{1}S_{j-1}+\cdots ).} This would change a recalculated discrepancy to: {\displaystyle d=d-(d/b)b=d-d=0.} The algorithm also needs to increase L (the number of errors) as needed. If L equals the actual number of errors, then during the iteration process the discrepancies will become zero before n becomes greater than or equal to 2L. Otherwise the algorithm will update B(x) and b, increase L, and reset m = 1. The formula L = (n + 1 − L) limits L to the number of available syndromes used to calculate discrepancies, and also handles the case where L increases by more than 1.
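The update rules above simplify nicely over the binary field GF(2), where addition and subtraction are both XOR and d/b is always 1 when d is nonzero. The following is a sketch of that special case (variable names mirror the description above; this is an illustrative implementation, not Massey's original pseudocode):

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR for the binary sequence s: returns (L, C) where
    C = [C0=1, C1, ..., CL] is the connection polynomial and L its degree."""
    n = len(s)
    C = [1] + [0] * n          # C(x), the current connection polynomial
    B = [1] + [0] * n          # B(x), copy of C(x) from the last length change
    L, m = 0, 1                # m counts iterations since L was last updated
    for N in range(n):
        # Discrepancy d = s[N] + C1*s[N-1] + ... + CL*s[N-L]  (over GF(2)).
        d = s[N]
        for i in range(1, L + 1):
            d ^= C[i] & s[N - i]
        if d == 0:
            m += 1
        elif 2 * L <= N:       # length must grow: save C, then C(x) += x^m B(x)
            T = C[:]
            for i in range(n + 1 - m):
                C[i + m] ^= B[i]
            L, B, m = N + 1 - L, T, 1
        else:                  # cancel the discrepancy without changing L
            for i in range(n + 1 - m):
                C[i + m] ^= B[i]
            m += 1
    return L, C[:L + 1]

# The sequence s[k] = s[k-1] XOR s[k-2] should yield C(x) = 1 + x + x^2:
print(berlekamp_massey_gf2([1, 1, 0, 1, 1, 0]))  # (2, [1, 1, 1])
```

Note that in GF(2) the correction step C(x) ← C(x) − (d/b)·x^m·B(x) reduces to XORing a shifted copy of B(x) into C(x), which is why no division appears in the sketch.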
== Pseudocode == The algorithm from Massey (1969, p. 124) for an arbitrary field: In the case of binary GF(2) BCH code, the discrepancy d will be zero on all odd steps, so a check can be added to avoid calculating it. == See also == Reed–Solomon error correction Reeds–Sloane algorithm, an extension for sequences over integers mod n Nonlinear-feedback shift register (NLFSR) == References == == External links == "Berlekamp-Massey algorithm", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Berlekamp–Massey algorithm at PlanetMath. Weisstein, Eric W. "Berlekamp–Massey Algorithm". MathWorld. GF(2) implementation in Mathematica (in German) Applet Berlekamp–Massey algorithm Online GF(2) Berlekamp-Massey calculator
Wikipedia/Berlekamp–Massey_algorithm
The RSA (Rivest–Shamir–Adleman) cryptosystem is a public-key cryptosystem, one of the oldest widely used for secure data transmission. The initialism "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ), the British signals intelligence agency, by the English mathematician Clifford Cocks. That system was declassified in 1997. In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private). An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone via the public key, but can only be decrypted by someone who knows the private key. The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used. RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption–decryption. == History == The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time. 
Moreover, like Diffie–Hellman, RSA is based on modular exponentiation. Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements. In April 1977, they spent Passover at the house of a student and drank a good deal of wine before returning to their homes at around midnight. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA – the initials of their surnames in the same order as on their paper. Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described a similar system in an internal document in 1973. However, given the relatively expensive computers needed to implement it at the time, it was considered mostly a curiosity and, as far as is publicly known, was never deployed. His ideas and concepts were not revealed until 1997 due to the work's top-secret classification. Kid-RSA (KRSA) is a simplified, insecure public-key cipher published in 1997, designed for educational purposes. Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES. == Patent == A patent describing the RSA algorithm was granted to MIT on 20 September 1983: U.S. patent 4,405,829 "Cryptographic communications system and method".
From DWPI's abstract of the patent: The system includes a communications channel coupled to at least one terminal having an encoding device and to at least one terminal having a decoding device. A message-to-be-transferred is enciphered to ciphertext at the encoding terminal by encoding the message as a number M in a predetermined set. That number is then raised to a first predetermined power (associated with the intended receiver) and finally computed. The remainder or residue, C, is... computed when the exponentiated number is divided by the product of two predetermined prime numbers (associated with the intended receiver). A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column. This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks's work been publicly known, a patent in the United States would not have been legal either. When the patent was issued, terms of patent were 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000. == Operation == The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption. A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that for all integers m (0 ≤ m < n), both {\displaystyle (m^{e})^{d}} and m have the same remainder when divided by n (they are congruent modulo n): {\displaystyle (m^{e})^{d}\equiv m{\pmod {n}}.} However, when given only e and n, it is extremely difficult to find d. The integers n and e comprise the public key, d represents the private key, and m represents the message. The modular exponentiation to e and d corresponds to encryption and decryption, respectively.
In addition, because the two exponents can be swapped, the private and public key can also be swapped, allowing for message signing and verification using the same algorithm. === Key generation === The keys for the RSA algorithm are generated in the following way: Choose two large prime numbers p and q. To make factoring harder, p and q should be chosen at random, both be large, and have a large difference. For choosing them the standard method is to choose random integers and use a primality test until two primes are found. p and q are kept secret. Compute n = pq. n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length. n is released as part of the public key. Compute λ(n), where λ is Carmichael's totient function. Since n = pq, λ(n) = lcm(λ(p), λ(q)), and since p and q are prime, λ(p) = φ(p) = p − 1, and likewise λ(q) = q − 1. Hence λ(n) = lcm(p − 1, q − 1). The lcm may be calculated through the Euclidean algorithm, since lcm(a, b) = |ab|/gcd(a, b). λ(n) is kept secret. Choose an integer e such that 1 < e < λ(n) and gcd(e, λ(n)) = 1; that is, e and λ(n) are coprime. e having a short bit-length and small Hamming weight results in more efficient encryption – the most commonly chosen value for e is 2^16 + 1 = 65537. The smallest (and fastest) possible value for e is 3, but such a small value for e has been shown to be less secure in some settings. e is released as part of the public key. Determine d as d ≡ e−1 (mod λ(n)); that is, d is the modular multiplicative inverse of e modulo λ(n). This means: solve for d the equation de ≡ 1 (mod λ(n)); d can be computed efficiently by using the extended Euclidean algorithm, since, thanks to e and λ(n) being coprime, said equation is a form of Bézout's identity, where d is one of the coefficients. d is kept secret as the private key exponent. The public key consists of the modulus n and the public (or encryption) exponent e.
The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed. In the original RSA paper, the Euler totient function φ(n) = (p − 1)(q − 1) is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using Euler totient function results also from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d⋅e ≡ 1 (mod φ(n)) also satisfies d⋅e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponent d at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards such as FIPS 186-4 (Section B.3.1) may require that d < λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced modulo λ(n) to obtain a smaller equivalent exponent. Since any common factors of (p − 1) and (q − 1) are present in the factorisation of n − 1 = pq − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1), it is recommended that (p − 1) and (q − 1) have only very small common factors, if any, besides the necessary 2. Note: The authors of the original RSA paper carry out the key generation by choosing d and then computing e as the modular multiplicative inverse of d modulo φ(n), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose e and compute d). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead. 
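The key-generation steps above can be sketched in Python. This is a hedged sketch for already-chosen primes (real implementations generate large random primes via primality testing); `pow(e, -1, m)` computes the modular inverse and requires Python 3.8+:

```python
from math import gcd

def rsa_keygen(p, q, e=65537):
    """Key generation as described above, for already-chosen primes p and q."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # λ(n) = lcm(p−1, q−1)
    assert gcd(e, lam) == 1, "e must be coprime to λ(n)"
    d = pow(e, -1, lam)                            # d ≡ e⁻¹ (mod λ(n))
    return (n, e), d                               # public key, private exponent

# With the small primes used in the article's worked example
# (far too small for real use):
print(rsa_keygen(61, 53, e=17))  # ((3233, 17), 413)
```

Computing d modulo λ(n) rather than φ(n) yields the smallest working exponent, matching the FIPS 186-4 requirement d < λ(n) mentioned above.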
=== Key distribution === Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message. To enable Bob to send his encrypted messages, Alice transmits her public key (n, e) to Bob via a reliable, but not necessarily secret, route. Alice's private key (d) is never distributed. === Encryption === After Bob obtains Alice's public key, he can send a message M to Alice. To do it, he first turns M (strictly speaking, the un-padded plaintext) into an integer m (strictly speaking, the padded plaintext), such that 0 ≤ m < n by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c, using Alice's public key e, corresponding to {\displaystyle c\equiv m^{e}{\pmod {n}}.} This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits c to Alice. Note that at least nine values of m will yield a ciphertext c equal to m, but this is very unlikely to occur in practice. === Decryption === Alice can recover m from c by using her private key exponent d and computing {\displaystyle c^{d}\equiv (m^{e})^{d}\equiv m{\pmod {n}}.} Given m, she can recover the original message M by reversing the padding scheme. === Example === Here is an example of RSA encryption and decryption: Choose two distinct prime numbers, such as {\displaystyle p=61} and {\displaystyle q=53} . Compute n = pq, giving {\displaystyle n=61\times 53=3233.} Compute the Carmichael totient function of the product as λ(n) = lcm(p − 1, q − 1), giving {\displaystyle \lambda (3233)=\operatorname {lcm} (60,52)=780.} Choose any number 1 < e < 780 that is coprime to 780. Choosing a prime number for e leaves us only to check that e is not a divisor of 780.
Let {\displaystyle e=17} . Compute d, the modular multiplicative inverse of e (mod λ(n)), yielding {\displaystyle d=413,} as {\displaystyle 1=(17\times 413){\bmod {780}}.} The public key is (n = 3233, e = 17). For a padded plaintext message m, the encryption function is {\displaystyle {\begin{aligned}c(m)&=m^{e}{\bmod {n}}\\&=m^{17}{\bmod {3233}}.\end{aligned}}} The private key is (n = 3233, d = 413). For an encrypted ciphertext c, the decryption function is {\displaystyle {\begin{aligned}m(c)&=c^{d}{\bmod {n}}\\&=c^{413}{\bmod {3233}}.\end{aligned}}} For instance, in order to encrypt m = 65, one calculates {\displaystyle c=65^{17}{\bmod {3233}}=2790.} To decrypt c = 2790, one calculates {\displaystyle m=2790^{413}{\bmod {3233}}=65.} Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factor n = 3233 (obtained from the freely available public key) back to the primes p and q. e, also from the public key, is then inverted to get d, thus acquiring the private key. Practical implementations use the Chinese remainder theorem to speed up the calculation using the modulus of factors (mod pq using mod p and mod q). The values dp, dq and qinv, which are part of the private key, are computed as follows:
{\displaystyle {\begin{aligned}d_{p}&=d{\bmod {(p-1)}}=413{\bmod {(61-1)}}=53,\\d_{q}&=d{\bmod {(q-1)}}=413{\bmod {(53-1)}}=49,\\q_{\text{inv}}&=q^{-1}{\bmod {p}}=53^{-1}{\bmod {61}}=38\\&\Rightarrow (q_{\text{inv}}\times q){\bmod {p}}=38\times 53{\bmod {61}}=1.\end{aligned}}} Here is how dp, dq and qinv are used for efficient decryption (encryption is efficient by choice of a suitable d and e pair): {\displaystyle {\begin{aligned}m_{1}&=c^{d_{p}}{\bmod {p}}=2790^{53}{\bmod {61}}=4,\\m_{2}&=c^{d_{q}}{\bmod {q}}=2790^{49}{\bmod {53}}=12,\\h&=(q_{\text{inv}}\times (m_{1}-m_{2})){\bmod {p}}=(38\times -8){\bmod {61}}=1,\\m&=m_{2}+h\times q=12+1\times 53=65.\end{aligned}}} === Signing messages === Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message. Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent.
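The worked example above, including the CRT shortcut and the signing procedure, can be checked with a short script. This is a sketch: the value `h_msg` is an assumed toy stand-in for a real hash value, not part of the article's example:

```python
p, q, e, d = 61, 53, 17, 413
n = p * q

# Encrypt m = 65 and decrypt it directly:
c = pow(65, e, n)                     # 2790
assert pow(c, d, n) == 65

# CRT decryption with the precomputed values d_p, d_q and q_inv:
d_p, d_q = d % (p - 1), d % (q - 1)   # 53 and 49
q_inv = pow(q, -1, p)                 # 38
m1 = pow(c, d_p, p)                   # 4
m2 = pow(c, d_q, q)                   # 12
h = (q_inv * (m1 - m2)) % p           # 1
print(m2 + h * q)                     # 65

# Signing: raise a hash value to d, verify by raising the signature to e.
h_msg = 1234                          # assumed toy hash value (< n)
signature = pow(h_msg, d, n)
assert pow(signature, e, n) == h_msg
```

The two CRT exponentiations work with numbers roughly half the size of n, which is where the speedup over a single `pow(c, d, n)` comes from.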
This works because of exponentiation rules: {\displaystyle h=\operatorname {hash} (m),} {\displaystyle (h^{e})^{d}=h^{ed}=h^{de}=(h^{d})^{e}\equiv h{\pmod {n}}.} Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to: Decrypt a message only intended for the recipient, which may be encrypted by anyone having the public key (asymmetric encrypted transport). Encrypt a message which may be decrypted by anyone, but which can only be encrypted by one person; this provides a digital signature. == Proofs of correctness == === Proof using Fermat's little theorem === The proof of the correctness of RSA is based on Fermat's little theorem, stating that ap − 1 ≡ 1 (mod p) for any integer a and prime p not dividing a. We want to show that {\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}} for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod λ(pq)). Since λ(pq) = lcm(p − 1, q − 1) is, by construction, divisible by both p − 1 and q − 1, we can write {\displaystyle ed-1=h(p-1)=k(q-1)} for some nonnegative integers h and k. To check whether two numbers, such as med and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately. To show med ≡ m (mod p), we consider two cases: If m ≡ 0 (mod p), m is a multiple of p, so med is a multiple of p; thus med ≡ 0 ≡ m (mod p). If m ≢ 0 (mod p), {\displaystyle m^{ed}=m^{ed-1}m=m^{h(p-1)}m=(m^{p-1})^{h}m\equiv 1^{h}m\equiv m{\pmod {p}},} where we used Fermat's little theorem to replace mp−1 mod p with 1.
The verification that med ≡ m (mod q) proceeds in a completely analogous way: If m ≡ 0 (mod q), med is a multiple of q, so med ≡ 0 ≡ m (mod q). If m ≢ 0 (mod q), {\displaystyle m^{ed}=m^{ed-1}m=m^{k(q-1)}m=(m^{q-1})^{k}m\equiv 1^{k}m\equiv m{\pmod {q}}.} This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod λ(pq)), {\displaystyle (m^{e})^{d}\equiv m{\pmod {pq}}.} ==== Notes ==== === Proof using Euler's theorem === Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem. We want to show that med ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have {\displaystyle m^{ed}=m^{1+h\varphi (n)}=m(m^{\varphi (n)})^{h}\equiv m(1)^{h}\equiv m{\pmod {n}},} where the second-to-last congruence follows from Euler's theorem. More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that mλ(n) ≡ 1 (mod n) for all m relatively prime to n. When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case the desired congruence is still true: either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof. == Padding == === Attacks against plain RSA === There are a number of attacks against plain RSA as described below.
When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, ciphertexts can be decrypted easily by taking the eth root of the ciphertext over the integers. If the same clear-text message is sent to e or more recipients in an encrypted way, and the receivers share the same exponent e, but different p, q, and therefore n, then it is easy to decrypt the original clear-text message via the Chinese remainder theorem. Johan Håstad noticed that this attack is possible even if the clear texts are not equal, but the attacker knows a linear relation between them. This attack was later improved by Don Coppersmith (see Coppersmith's attack). Because RSA encryption is a deterministic encryption algorithm (i.e., has no random component), an attacker can successfully launch a chosen plaintext attack against the cryptosystem, by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other, even if the attacker knows (or has chosen) the corresponding plaintexts. RSA without padding is not semantically secure. RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts. That is, m1^e·m2^e ≡ (m1·m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key d to decrypt an unsuspicious-looking ciphertext c′ ≡ c·r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property, c′ is the encryption of m·r (mod n). Hence, if the attacker is successful with the attack, they will learn m·r (mod n), from which they can derive the message m by multiplying m·r with the modular inverse of r modulo n.
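The low-exponent weakness in the first attack above can be sketched in a few lines. The helper icbrt, the toy modulus, and the sample message below are illustrative choices, not part of the article: with e = 3 and m^3 smaller than n, no modular reduction ever occurs, so an exact integer cube root recovers m.

```python
def icbrt(x: int) -> int:
    """Integer cube root by binary search (helper for this sketch)."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)   # hi is a safe upper bound on the root
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = 2**1024 + 643                           # stand-in modulus; any n > m**3 behaves the same
m = int.from_bytes(b"attack at dawn", "big")
c = pow(m, e, n)                            # m**3 < n, so c is literally m**3
assert icbrt(c) == m                        # eth root over the integers recovers m
```

The same idea extends to the broadcast setting described next: with e ciphertexts of the same message under different moduli, the Chinese remainder theorem reconstructs m^e over the integers, and the eth root again falls out.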
Given the private exponent d, one can efficiently factor the modulus n = pq. Conversely, given the factorization of the modulus n = pq, one can obtain any private key (d′, n) generated against a public key (e′, n). === Padding schemes === To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al. showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS). Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two USA patents on PSS were granted (U.S. patent 6,266,771 and U.S.
patent 7,036,014); however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure. == Security and practical considerations == === Using the Chinese remainder algorithm === For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use the following optimization, based on the Chinese remainder theorem, for decryption and signing. The following values are precomputed and stored as part of the private key: the primes p and q from the key generation, dP = d mod (p − 1), dQ = d mod (q − 1), and qinv = q^(−1) mod p. These values allow the recipient to compute the exponentiation m = c^d mod pq more efficiently as follows: m1 = c^(dP) mod p, m2 = c^(dQ) mod q, h = qinv·(m1 − m2) mod p, m = m2 + h·q. This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus. === Integer factorization and the RSA problem === The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.
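The Chinese-remainder precomputation and recombination described above can be sketched with toy parameters (the small primes here are illustrative, far below realistic key sizes):

```python
from math import gcd

# Toy RSA key (illustrative small primes; real keys use 1024-bit or larger primes).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))  # e*d ≡ 1 (mod λ(n))

# Values precomputed and stored as part of the private key:
dP = d % (p - 1)
dQ = d % (q - 1)
qinv = pow(q, -1, p)

def crt_decrypt(c: int) -> int:
    m1 = pow(c, dP, p)            # exponentiation with half-size exponent and modulus
    m2 = pow(c, dQ, q)
    h = (qinv * (m1 - m2)) % p    # recombination step
    return m2 + h * q

m = 65
assert crt_decrypt(pow(m, e, n)) == m == pow(pow(m, e, n), d, n)
```

Real implementations additionally need this path to be constant-time and fault-protected; the sketch ignores side channels entirely.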
The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem. The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of about seven months. By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required, and about 2.5 gigabytes of RAM for the sieving process. Rivest, Shamir, and Adleman noted that Miller has shown that – assuming the truth of the extended Riemann hypothesis – finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference). However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring. As of 2020, the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250). Its factorization, by a state-of-the-art distributed implementation, took about 2,700 CPU-years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010.
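The factor-based recovery of d described above is direct to sketch once p and q are known (toy primes, for illustration only):

```python
from math import gcd

# Suppose an attacker has factored n into p and q (toy values):
p, q = 61, 53
n, e = p * q, 17                               # (n, e) is the victim's public key

lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p − 1, q − 1)
d = pow(e, -1, lam)                            # the recovered secret exponent

c = pow(42, e, n)                              # an intercepted ciphertext
assert pow(c, d, n) == 42                      # standard decryption now works
```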
As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits. It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999, when RSA-155 was factored by using several hundred computers, and these are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys. In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm. === Faulty key generation === Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes. The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^(1/4) (where n = p·q, so that even for "small" 1024-bit values of n this bound is about 3×10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded. It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e. There is no known attack against small public exponents such as e = 3, provided that the proper padding is used.
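The Fermat factorization mentioned above is easy to sketch: write n = a² − b² = (a − b)(a + b) and search upward from √n for an a making a² − n a perfect square. The deliberately close prime pair below is an illustrative choice, not from the article.

```python
from math import isqrt

def fermat_factor(n: int):
    """Fermat factorization; fast exactly when the two factors are close."""
    a = isqrt(n)
    if a * a < n:
        a += 1                    # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:           # a² − n is a perfect square: n = (a−b)(a+b)
            return a - b, a + b
        a += 1

# Two deliberately close primes (toy sizes): found on the very first iteration.
p, q = 10007, 10009
assert fermat_factor(p * q) == (p, q)
```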
Coppersmith's attack has many applications in attacking RSA specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction. In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released. === Importance of strong random number generation === A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm. They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key, and n′ = p′q′ is another, then if by chance p = p′ (but q is not equal to q'), then a simple computation of gcd(n, n′) = p factors both n and n', totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently. 
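The shared-prime failure described by Lenstra et al. can be sketched with toy numbers (real keys use primes of 512 bits or more, but the arithmetic is identical):

```python
from math import gcd

# Two "public moduli" whose key generators drew the same prime by accident:
p = 101
n1 = p * 103
n2 = p * 107

shared = gcd(n1, n2)                   # Euclid's algorithm: nearly free to compute
assert shared == p                     # the common prime falls out immediately
q1, q2 = n1 // shared, n2 // shared    # both moduli are now fully factored
```

The batch variant credited to Bernstein below replaces the pairwise gcds with one gcd of each modulus against the product of all the others.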
Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n' they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size. Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem. Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly. === Timing attacks === Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver). This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations. 
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e·c)^d (mod n). The result of this computation, after applying Euler's theorem, is r·c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails. === Adaptive chosen-ciphertext attacks === In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks. A variant of this attack, dubbed "BERserk", came back in 2014. It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome. === Side-channel analysis attacks === A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not.
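The blinding countermeasure described above (decrypting r^e·c and then dividing out r) can be sketched with toy parameters; the small primes and the use of Python's secrets module are illustrative assumptions, not a hardened implementation.

```python
import secrets
from math import gcd

p, q, e = 61, 53, 17                          # toy key, far too small for real use
n = p * q
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))

def blinded_decrypt(c: int) -> int:
    while True:
        r = secrets.randbelow(n - 2) + 2      # fresh secret r for each ciphertext
        if gcd(r, n) == 1:
            break
    blinded = pow(r, e, n) * c % n            # r^e · c
    m_blinded = pow(blinded, d, n)            # = r · c^d (mod n)
    return m_blinded * pow(r, -1, n) % n      # strip the blinding factor

c = pow(1234, e, n)
assert blinded_decrypt(c) == pow(c, d, n) == 1234
```

Because r is new every time, the work done inside pow no longer depends on the attacker-chosen ciphertext, which is exactly what defeats the timing measurement.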
Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors. Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis", the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations. A power-fault attack on RSA implementations was described in 2010. The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server. === Tricky implementation === There are many details to keep in mind in order to implement RSA securely (strong PRNG, acceptable public exponent, etc.). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible. == Implementations == Some cryptography libraries that provide support for RSA include: Botan Bouncy Castle cryptlib Crypto++ Libgcrypt Nettle OpenSSL wolfCrypt GnuTLS mbed TLS LibreSSL == See also == Acoustic cryptanalysis Computational complexity theory Diffie–Hellman key exchange Digital Signature Algorithm Elliptic-curve cryptography Key exchange Key management Key size Public-key cryptography Rabin cryptosystem Trapdoor function == Notes == == References == == Further reading == Menezes, Alfred; van Oorschot, Paul C.; Vanstone, Scott A. (October 1996). Handbook of Applied Cryptography. CRC Press. ISBN 978-0-8493-8523-0. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 881–887. ISBN 978-0-262-03293-3. == External links == The Original RSA Patent as filed with the U.S. Patent Office by Rivest; Ronald L. (Belmont, MA), Shamir; Adi (Cambridge, MA), Adleman; Leonard M. 
(Arlington, MA), December 14, 1977, U.S. patent 4,405,829. RFC 8017: PKCS #1: RSA Cryptography Specifications Version 2.2 Explanation of RSA using colored lamps on YouTube Thorough walk through of RSA Prime Number Hide-And-Seek: How the RSA Cipher Works Onur Aciicmez, Cetin Kaya Koc, Jean-Pierre Seifert: On the Power of Simple Branch Prediction Analysis
Wikipedia/RSA_algorithm
Elementary Number Theory, Group Theory and Ramanujan Graphs is a book in mathematics whose goal is to make the construction of Ramanujan graphs accessible to undergraduate-level mathematics students. In order to do so, it covers several other significant topics in graph theory, number theory, and group theory. It was written by Giuliana Davidoff, Peter Sarnak, and Alain Valette, and published in 2003 by the Cambridge University Press, as volume 55 of the London Mathematical Society Student Texts book series. == Background == In graph theory, expander graphs are undirected graphs with high connectivity: every small-enough subset of vertices has many edges connecting it to the remaining parts of the graph. Sparse expander graphs have many important applications in computer science, including the development of error correcting codes, the design of sorting networks, and the derandomization of randomized algorithms. For these applications, the graph must be constructed explicitly, rather than merely having its existence proven. One way to show that a graph is an expander is to study the eigenvalues of its adjacency matrix. For an r-regular graph, these are real numbers in the interval [−r, r], and the largest eigenvalue (corresponding to the all-1s eigenvector) is exactly r. The spectral expansion of the graph is defined from the difference between the largest and second-largest eigenvalues, the spectral gap, which controls how quickly a random walk on the graph settles to its stable distribution; this gap can be at most 2√(r − 1). The Ramanujan graphs are defined as the graphs that are optimal from the point of view of spectral expansion: they are r-regular graphs whose spectral gap is exactly 2√(r − 1).
Although Ramanujan graphs with high degree, such as the complete graphs, are easy to construct, expander graphs of low degree are needed for the applications of these graphs. Several constructions of low-degree Ramanujan graphs are now known, the first of which were by Lubotzky, Phillips & Sarnak (1988) and Margulis (1988). Reviewer Jürgen Elstrod writes that "while the description of these graphs is elementary, the proof that they have the desired properties is not". Elementary Number Theory, Group Theory and Ramanujan Graphs aims to make as much of this theory accessible at an elementary level as possible. == Topics == Its authors have divided Elementary Number Theory, Group Theory and Ramanujan Graphs into four chapters. The first of these provides background in graph theory, including material on the girth of graphs (the length of the shortest cycle), on graph coloring, and on the use of the probabilistic method to prove the existence of graphs for which both the girth and the number of colors needed are large. This provides additional motivation for the construction of Ramanujan graphs, as the ones constructed in the book provide explicit examples of the same phenomenon. This chapter also provides the expected material on spectral graph theory, needed for the definition of Ramanujan graphs. Chapter 2, on number theory, includes the sum of two squares theorem characterizing the positive integers that can be represented as sums of two squares of integers (closely connected to the norms of Gaussian integers), Lagrange's four-square theorem according to which all positive integers can be represented as sums of four squares (proved using the norms of Hurwitz quaternions), and quadratic reciprocity. 
Chapter 3 concerns group theory, and in particular the theory of the projective special linear groups PSL(2, F_q) and projective linear groups PGL(2, F_q) over the finite fields whose order is a prime number q, and the representation theory of finite groups. The final chapter constructs the Ramanujan graph X^(p,q) for two prime numbers p and q as a Cayley graph of the group PSL(2, F_q) or PGL(2, F_q) (depending on quadratic reciprocity) with generators defined by taking modulo q a set of p + 1 quaternions coming from representations of p as a sum of four squares. These graphs are automatically (p + 1)-regular. The chapter provides formulas for their numbers of vertices, and estimates of their girth. While not fully proving that these graphs are Ramanujan graphs, the chapter proves that they are spectral expanders, and describes how the claim that they are Ramanujan graphs follows from Pierre Deligne's proof of the Ramanujan conjecture (the connection to Ramanujan from which the name of these graphs was derived). == Audience and reception == This book is intended for advanced undergraduates who have already seen some abstract algebra and real analysis. Reviewer Thomas Shemanske suggests using it as the basis of a senior seminar, as a quick path to many important topics and an interesting example of how these seemingly-separate topics join forces in this application. On the other hand, Thomas Pfaff thinks it would be difficult going even for most senior-level undergraduates, but could be a good choice for independent study or an elective graduate course. == References ==
Wikipedia/Elementary_Number_Theory,_Group_Theory_and_Ramanujan_Graphs
Gal's accurate tables is a method devised by Shmuel Gal to provide accurate values of special functions using a lookup table and interpolation. It is a fast and efficient method for generating values of functions like the exponential or the trigonometric functions to within last-bit accuracy for almost all argument values without using extended precision arithmetic. The main idea in Gal's accurate tables is a different tabulation for the special function being computed. Commonly, the range is divided into several subranges, each with precomputed values and correction formulae. To compute the function, look up the closest point and compute a correction as a function of the distance. Gal's idea is to not precompute equally spaced values, but rather to perturb the points x so that both x and f(x) are very nearly exactly representable in the chosen numeric format. By searching approximately 1000 values on either side of the desired value x, a value can be found such that f(x) can be represented with less than ±1/2000 bit of rounding error. If the correction is also computed to ±1/2000 bit of accuracy (which does not require extra floating-point precision, as long as the correction is less than 1/2000 the magnitude of the stored value f(x)), and the computed correction is more than ±1/1000 of a bit away from exactly half a bit (the difficult rounding case), then it is known whether the exact function value should be rounded up or down. The technique provides an efficient way to compute the function value to within ±1/1000 least-significant bit, i.e. 10 extra bits of precision. If this approximation is more than ±1/1000 of a bit away from exactly midway between two representable values (which happens 99.8% of the time), then the correctly rounded result is clear. Combined with an extended-precision fallback algorithm, this can compute the correctly rounded result in very reasonable average time.
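The table-building search can be sketched under stated assumptions: here f = sin, a few hundred one-ulp neighbours of a nominal table point are scanned, and exact rational Taylor arithmetic stands in for an extended-precision library. The point 0.5, the 600-neighbour window, and the error bound checked below are illustrative choices, not Gal's actual parameters (his search goes to roughly 1000 neighbours and ±1/2000 of a bit).

```python
import math
from fractions import Fraction

def sin_exact(x: float, terms: int = 20) -> Fraction:
    # Taylor series for sin in exact rational arithmetic; a double converts
    # to a Fraction exactly, and 20 terms are ample for |x| < 1.
    xf = Fraction(x)
    term, total = xf, xf
    for k in range(1, terms):
        term *= -xf * xf
        term /= (2 * k) * (2 * k + 1)
        total += term
    return total

def rounding_error_ulps(x: float) -> float:
    # Distance from sin(x) to the nearest double, in ulps (at most 0.5).
    exact = sin_exact(x)
    nearest = float(exact)
    return abs(float((exact - Fraction(nearest)) / Fraction(math.ulp(nearest))))

# Scan one-ulp neighbours of the nominal table point, keeping the x whose
# sin(x) lands closest to an exactly representable double: Gal's criterion.
x = 0.5
best_err, best_x = math.inf, x
for _ in range(600):
    err = rounding_error_ulps(x)
    if err < best_err:
        best_err, best_x = err, x
    x = math.nextafter(x, 1.0)
# The table then stores the perturbed pair (best_x, sin(best_x)).
```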
In 2/1000 (0.2%) of the time, such a higher-precision evaluation is required to resolve the rounding uncertainty, but this is infrequent enough that it has little effect on the average calculation time. The problem of generating function values which are accurate to the last bit is known as the table-maker's dilemma. == See also == Floating point Rounding == References == Gal, Shmuel (1986). "Computing elementary functions: A new approach for achieving high accuracy and good performance". In Miranker, Willard L.; Toupin, Richard A. (eds.). Accurate Scientific Computations: Proceedings of a Symposium, Bad Neuenahr, Federal Republic of Germany, March 12–14, 1985 (1 ed.). Springer-Verlag Berlin Heidelberg. pp. 1–16. ISBN 978-3-540-16798-3. Gal, Shmuel; Bachelis, Boris (1991). "An accurate elementary mathematical library for the IEEE floating point standard". ACM Transactions on Mathematical Software. Muller, Jean-Michel (2006). Elementary Functions: Algorithms and Implementation (2 ed.). Boston, MA, USA: Birkhäuser. ISBN 978-0-8176-4372-0. LCCN 2005048094. Muller, Jean-Michel (2016-12-12). Elementary Functions: Algorithms and Implementation (3 ed.). Boston, MA, USA: Birkhäuser. ISBN 978-1-4899-7981-0. Stehlé, Damien; Zimmermann, Paul (2005). "Gal's Accurate Tables Method Revisited" (PDF). 17th IEEE Symposium on Computer Arithmetic (ARITH'05). pp. 257–264. doi:10.1109/ARITH.2005.24. ISBN 0-7695-2366-8. Archived (PDF) from the original on 2018-01-15. Retrieved 2018-01-15.
Wikipedia/Gal's_accurate_tables
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms. In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. 
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer. == Cost models == Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant. Two cost models are generally used: the uniform cost model, also called unit-cost model (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography. 
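Under the unit-cost model just described, the binary-search bound quoted above can be checked by counting element inspections directly; the step-counting function below is a hypothetical instrument for illustration, not from the article.

```python
import math

def binary_search_steps(seq, target):
    """Binary search that counts each loop iteration (one element inspected) as a step."""
    lo, hi, steps = 0, len(seq) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if seq[mid] == target:
            return steps
        elif seq[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 1024
data = list(range(n))
worst = max(binary_search_steps(data, t) for t in data)
assert worst <= math.log2(n) + 1      # at most log2(n) + 1 unit-time lookups
```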
A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible. == Run-time analysis == Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis. === Shortcomings of empirical metrics === Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms. Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following: Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. 
However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error: Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate. === Orders of growth === Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n2). Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n2), but the average-case run-time is O(n log n). === Empirical orders of growth === Assuming the run-time follows power rule, t ≈ kna, the coefficient a can be found by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)a so that a = log(t2/t1)/log(n2/n1). 
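The estimate a = log(t2/t1)/log(n2/n1) takes only a few lines to compute; the timing values below are hypothetical, chosen only to illustrate one linear and one quadratic algorithm:

```python
import math

def empirical_order(n1, t1, n2, t2):
    """Local order of growth a, assuming the power rule t ≈ k * n**a."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Hypothetical measurements: run-time quadruples vs. grows 16-fold
# when the input size is quadrupled.
a_linear = empirical_order(1_000, 2.0, 4_000, 8.0)
a_quadratic = empirical_order(1_000, 2.0, 4_000, 32.0)
print(a_linear, a_quadratic)  # ≈ 1.0 and ≈ 2.0
```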
In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line), but it can still serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table: It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one. === Evaluating run-time complexity === The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:
1  get a positive integer n from input
2  if n > 10
3      print "This might take a while..."
4  for i = 1 to n
5      for j = 1 to i
6          print i * j
7  print "Done!"
A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth. In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is: T 1 + T 2 + T 3 + T 7 . {\displaystyle T_{1}+T_{2}+T_{3}+T_{7}.\,} The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute ( n + 1 ) times, which will consume T4( n + 1 ) time.
The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time. Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression: T 6 + 2 T 6 + 3 T 6 + ⋯ + ( n − 1 ) T 6 + n T 6 {\displaystyle T_{6}+2T_{6}+3T_{6}+\cdots +(n-1)T_{6}+nT_{6}} which can be factored as [ 1 + 2 + 3 + ⋯ + ( n − 1 ) + n ] T 6 = [ 1 2 ( n 2 + n ) ] T 6 {\displaystyle \left[1+2+3+\cdots +(n-1)+n\right]T_{6}=\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}} The total time required to run the inner loop test can be evaluated similarly: 2 T 5 + 3 T 5 + 4 T 5 + ⋯ + ( n − 1 ) T 5 + n T 5 + ( n + 1 ) T 5 = T 5 + 2 T 5 + 3 T 5 + 4 T 5 + ⋯ + ( n − 1 ) T 5 + n T 5 + ( n + 1 ) T 5 − T 5 {\displaystyle {\begin{aligned}&2T_{5}+3T_{5}+4T_{5}+\cdots +(n-1)T_{5}+nT_{5}+(n+1)T_{5}\\={}&T_{5}+2T_{5}+3T_{5}+4T_{5}+\cdots +(n-1)T_{5}+nT_{5}+(n+1)T_{5}-T_{5}\end{aligned}}} which can be factored as T 5 [ 1 + 2 + 3 + ⋯ + ( n − 1 ) + n + ( n + 1 ) ] − T 5 = [ 1 2 ( n 2 + n ) ] T 5 + ( n + 1 ) T 5 − T 5 = [ 1 2 ( n 2 + n ) ] T 5 + n T 5 = [ 1 2 ( n 2 + 3 n ) ] T 5 {\displaystyle {\begin{aligned}&T_{5}\left[1+2+3+\cdots +(n-1)+n+(n+1)\right]-T_{5}\\={}&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{5}+(n+1)T_{5}-T_{5}\\={}&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{5}+nT_{5}\\={}&\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}\end{aligned}}} Therefore, the total run-time for this algorithm is: f ( n ) = T 1 + T 2 + T 3 + T 7 + ( n + 1 ) T 4 + [ 1 2 ( n 2 + n ) ] T 6 + [ 1 2 ( n 2 + 3 n ) ] T 5 {\displaystyle 
f(n)=T_{1}+T_{2}+T_{3}+T_{7}+(n+1)T_{4}+\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}} which reduces to f ( n ) = [ 1 2 ( n 2 + n ) ] T 6 + [ 1 2 ( n 2 + 3 n ) ] T 5 + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 {\displaystyle f(n)=\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}} As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n2 is the highest-order term, so one can conclude that f(n) = O(n2). Formally this can be proven as follows: Prove that [ 1 2 ( n 2 + n ) ] T 6 + [ 1 2 ( n 2 + 3 n ) ] T 5 + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 ≤ c n 2 , n ≥ n 0 {\displaystyle \left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},\ n\geq n_{0}} [ 1 2 ( n 2 + n ) ] T 6 + [ 1 2 ( n 2 + 3 n ) ] T 5 + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 ≤ ( n 2 + n ) T 6 + ( n 2 + 3 n ) T 5 + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 ( for n ≥ 0 ) {\displaystyle {\begin{aligned}&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\\\leq {}&(n^{2}+n)T_{6}+(n^{2}+3n)T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\ ({\text{for }}n\geq 0)\end{aligned}}} Let k be a constant greater than or equal to [T1..T7] T 6 ( n 2 + n ) + T 5 ( n 2 + 3 n ) + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 ≤ k ( n 2 + n ) + k ( n 2 + 3 n ) + k n + 5 k = 2 k n 2 + 5 k n + 5 k ≤ 2 k n 2 + 5 k n 2 + 5 k n 2 ( for n ≥ 1 ) = 12 k n 2 {\displaystyle {\begin{aligned}&T_{6}(n^{2}+n)+T_{5}(n^{2}+3n)+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq k(n^{2}+n)+k(n^{2}+3n)+kn+5k\\={}&2kn^{2}+5kn+5k\leq 2kn^{2}+5kn^{2}+5kn^{2}\ ({\text{for }}n\geq 1)=12kn^{2}\end{aligned}}} Therefore [ 1 2 ( n 2 + n ) ] T 6 + [ 1 2 ( n 2 + 3 n ) ] T 5 + ( n + 1 ) T 4 + T 1 + T 2 + T 3 + T 7 ≤ c n 2 , n ≥ n 0 
for c = 12 k , n 0 = 1 {\displaystyle \left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},n\geq n_{0}{\text{ for }}c=12k,n_{0}=1} A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows: 4 + ∑ i = 1 n i ≤ 4 + ∑ i = 1 n n = 4 + n 2 ≤ 5 n 2 ( for n ≥ 1 ) = O ( n 2 ) . {\displaystyle 4+\sum _{i=1}^{n}i\leq 4+\sum _{i=1}^{n}n=4+n^{2}\leq 5n^{2}\ ({\text{for }}n\geq 1)=O(n^{2}).} === Growth rate analysis of other resources === The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages: while file is still open: let n = size of file for every 100,000 kilobytes of increase in file size double the amount of memory reserved In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources. == Relevance == Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless. 
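Returning to the pseudocode analyzed above, the quadratic term can be confirmed empirically by counting inner-loop-body executions directly; this sketch checks the (n² + n)/2 coefficient on T6:

```python
def inner_body_runs(n):
    """Count how many times step 6 (`print i * j`) executes."""
    count = 0
    for i in range(1, n + 1):        # outer loop, step 4
        for j in range(1, i + 1):    # inner loop, step 5
            count += 1               # one execution of the loop body
    return count

for n in (1, 10, 100, 1000):
    assert inner_body_runs(n) == (n * n + n) // 2   # matches (n² + n)/2
print(inner_body_runs(1000))   # 500500
```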
== Constant factors == Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 232 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 264 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data. This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (265536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (264 bits); and binary log (log n) is less than 64 for virtually all practical data (264 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have K > k log ⁡ log ⁡ n {\displaystyle K>k\log \log n} so long as K / k > 6 {\displaystyle K/k>6} and n < 2 2 6 = 2 64 {\displaystyle n<2^{2^{6}}=2^{64}} . For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity n log ⁡ n {\displaystyle n\log n} ), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity n 2 {\displaystyle n^{2}} ) for small data, as the simpler algorithm is faster on small data. 
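A hybrid algorithm of this kind can be sketched as follows. This is an illustrative merge sort with an insertion-sort cutoff, not Timsort's actual implementation, and the cutoff of 32 is an arbitrary assumption (real libraries tune this constant empirically):

```python
CUTOFF = 32  # threshold below which the O(n^2) algorithm wins in practice

def insertion_sort(a):
    """Asymptotically O(n^2), but very low constant factors on small inputs."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    """Merge sort that falls back to insertion sort on small inputs."""
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    merged, i, j = [], 0, 0          # standard O(n) merge step
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

import random
data = [random.randrange(10_000) for _ in range(1_000)]
assert hybrid_sort(data[:]) == sorted(data)
```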
== See also == Amortized analysis Analysis of parallel algorithms Asymptotic computational complexity Information-based complexity Master theorem (analysis of algorithms) NP-complete Numerical analysis Polynomial time Program optimization Scalability Smoothed analysis Termination analysis — the subproblem of checking whether a program will terminate at all == Notes == == References == Sedgewick, Robert; Flajolet, Philippe (2013). An Introduction to the Analysis of Algorithms (2nd ed.). Addison-Wesley. ISBN 978-0-321-90575-8. Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.). Birkhäuser. ISBN 3-7643-3102-X. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122. ISBN 0-262-03293-7. Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6. Knuth, Donald. The Art of Computer Programming. Addison-Wesley. Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press. ISBN 978-0-521-88473-0. == External links == Media related to Analysis of algorithms at Wikimedia Commons
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive. == Definition == The von Mangoldt function, denoted by Λ(n), is defined as Λ ( n ) = { log ⁡ p if n = p k for some prime p and integer k ≥ 1 , 0 otherwise. {\displaystyle \Lambda (n)={\begin{cases}\log p&{\text{if }}n=p^{k}{\text{ for some prime }}p{\text{ and integer }}k\geq 1,\\0&{\text{otherwise.}}\end{cases}}} The values of Λ(n) for the first nine positive integers (i.e. natural numbers) are 0 , log ⁡ 2 , log ⁡ 3 , log ⁡ 2 , log ⁡ 5 , 0 , log ⁡ 7 , log ⁡ 2 , log ⁡ 3 , {\displaystyle 0,\log 2,\log 3,\log 2,\log 5,0,\log 7,\log 2,\log 3,} which is related to (sequence A014963 in the OEIS). == Properties == The von Mangoldt function satisfies the identity log ⁡ ( n ) = ∑ d ∣ n Λ ( d ) . {\displaystyle \log(n)=\sum _{d\mid n}\Lambda (d).} The sum is taken over all integers d that divide n. This is proved by the fundamental theorem of arithmetic, since the terms that are not powers of primes are equal to 0. For example, consider the case n = 12 = 22 × 3. Then ∑ d ∣ 12 Λ ( d ) = Λ ( 1 ) + Λ ( 2 ) + Λ ( 3 ) + Λ ( 4 ) + Λ ( 6 ) + Λ ( 12 ) = Λ ( 1 ) + Λ ( 2 ) + Λ ( 3 ) + Λ ( 2 2 ) + Λ ( 2 × 3 ) + Λ ( 2 2 × 3 ) = 0 + log ⁡ ( 2 ) + log ⁡ ( 3 ) + log ⁡ ( 2 ) + 0 + 0 = log ⁡ ( 2 × 3 × 2 ) = log ⁡ ( 12 ) . 
{\displaystyle {\begin{aligned}\sum _{d\mid 12}\Lambda (d)&=\Lambda (1)+\Lambda (2)+\Lambda (3)+\Lambda (4)+\Lambda (6)+\Lambda (12)\\&=\Lambda (1)+\Lambda (2)+\Lambda (3)+\Lambda \left(2^{2}\right)+\Lambda (2\times 3)+\Lambda \left(2^{2}\times 3\right)\\&=0+\log(2)+\log(3)+\log(2)+0+0\\&=\log(2\times 3\times 2)\\&=\log(12).\end{aligned}}} By Möbius inversion, we have Λ ( n ) = ∑ d ∣ n μ ( d ) log ⁡ ( n d ) {\displaystyle \Lambda (n)=\sum _{d\mid n}\mu (d)\log \left({\frac {n}{d}}\right)} and using the product rule for the logarithm we get Λ ( n ) = − ∑ d ∣ n μ ( d ) log ⁡ ( d ) . {\displaystyle \Lambda (n)=-\sum _{d\mid n}\mu (d)\log(d)\ .} For all x ≥ 1 {\displaystyle x\geq 1} , we have ∑ n ≤ x Λ ( n ) n = log ⁡ x + O ( 1 ) . {\displaystyle \sum _{n\leq x}{\frac {\Lambda (n)}{n}}=\log x+O(1).} Also, there exist positive constants c1 and c2 such that ψ ( x ) ≤ c 1 x , {\displaystyle \psi (x)\leq c_{1}x,} for all x ≥ 1 {\displaystyle x\geq 1} , and ψ ( x ) ≥ c 2 x , {\displaystyle \psi (x)\geq c_{2}x,} for all sufficiently large x. == Dirichlet series == The von Mangoldt function plays an important role in the theory of Dirichlet series, and in particular, the Riemann zeta function. For example, one has log ⁡ ζ ( s ) = ∑ n = 2 ∞ Λ ( n ) log ⁡ ( n ) 1 n s , Re ( s ) > 1. {\displaystyle \log \zeta (s)=\sum _{n=2}^{\infty }{\frac {\Lambda (n)}{\log(n)}}\,{\frac {1}{n^{s}}},\qquad {\text{Re}}(s)>1.} The logarithmic derivative is then ζ ′ ( s ) ζ ( s ) = − ∑ n = 1 ∞ Λ ( n ) n s . {\displaystyle {\frac {\zeta ^{\prime }(s)}{\zeta (s)}}=-\sum _{n=1}^{\infty }{\frac {\Lambda (n)}{n^{s}}}.} These are special cases of a more general relation on Dirichlet series. 
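The definition of Λ(n) and the divisor-sum identity above can be checked numerically. The sketch below is illustrative only (naive trial division, far from efficient) and verifies log n = Σ_{d∣n} Λ(d):

```python
import math

def mangoldt(n):
    """Λ(n) = log p if n = p**k for a prime p, k ≥ 1; otherwise 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:          # p is the least prime factor of n
            while n % p == 0:
                n //= p
            # n is a prime power exactly when nothing is left after
            # dividing out its least prime factor
            return math.log(p) if n == 1 else 0.0

def divisor_sum(n):
    """Σ_{d|n} Λ(d), which should equal log(n)."""
    return sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)

for n in range(1, 200):
    assert abs(divisor_sum(n) - math.log(n)) < 1e-9   # log n = Σ Λ(d)
print(divisor_sum(12), math.log(12))   # both ≈ 2.4849
```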
If one has F ( s ) = ∑ n = 1 ∞ f ( n ) n s {\displaystyle F(s)=\sum _{n=1}^{\infty }{\frac {f(n)}{n^{s}}}} for a completely multiplicative function f(n), and the series converges for Re(s) > σ0, then F ′ ( s ) F ( s ) = − ∑ n = 1 ∞ f ( n ) Λ ( n ) n s {\displaystyle {\frac {F^{\prime }(s)}{F(s)}}=-\sum _{n=1}^{\infty }{\frac {f(n)\Lambda (n)}{n^{s}}}} converges for Re(s) > σ0. == Chebyshev function == The second Chebyshev function ψ(x) is the summatory function of the von Mangoldt function: ψ ( x ) = ∑ p k ≤ x log ⁡ p = ∑ n ≤ x Λ ( n ) . {\displaystyle \psi (x)=\sum _{p^{k}\leq x}\log p=\sum _{n\leq x}\Lambda (n)\ .} It was introduced by Pafnuty Chebyshev who used it to show that the true order of the prime counting function π ( x ) {\displaystyle \pi (x)} is x / log ⁡ x {\displaystyle x/\log x} . Von Mangoldt provided a rigorous proof of an explicit formula for ψ(x) involving a sum over the non-trivial zeros of the Riemann zeta function. This was an important part of the first proof of the prime number theorem. The Mellin transform of the Chebyshev function can be found by applying Perron's formula: ζ ′ ( s ) ζ ( s ) = − s ∫ 1 ∞ ψ ( x ) x s + 1 d x {\displaystyle {\frac {\zeta ^{\prime }(s)}{\zeta (s)}}=-s\int _{1}^{\infty }{\frac {\psi (x)}{x^{s+1}}}\,dx} which holds for Re(s) > 1. == Exponential series == Hardy and Littlewood examined the series F ( y ) = ∑ n = 2 ∞ ( Λ ( n ) − 1 ) e − n y {\displaystyle F(y)=\sum _{n=2}^{\infty }\left(\Lambda (n)-1\right)e^{-ny}} in the limit y → 0+. 
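The Chebyshev function just defined can be tabulated directly. The sketch below (naive and slow, for illustration only; `mangoldt` is a hypothetical helper using trial division) shows ψ(x)/x drifting toward 1, in line with the prime number theorem:

```python
import math

def mangoldt(n):
    """Λ(n): log p if n is a prime power p**k, else 0 (naive helper)."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def psi(x):
    """Second Chebyshev function ψ(x) = Σ_{n ≤ x} Λ(n)."""
    return sum(mangoldt(n) for n in range(2, int(x) + 1))

for x in (100, 1_000, 10_000):
    print(x, psi(x) / x)   # ratios approach 1 as x grows
```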
Assuming the Riemann hypothesis, they demonstrate that F ( y ) = O ( 1 y ) and F ( y ) = Ω ± ( 1 y ) {\displaystyle F(y)=O\left({\frac {1}{\sqrt {y}}}\right)\quad {\text{and}}\quad F(y)=\Omega _{\pm }\left({\frac {1}{\sqrt {y}}}\right)} In particular this function is oscillatory with diverging oscillations: there exists a value K > 0 such that both inequalities F ( y ) < − K y , and F ( z ) > K z {\displaystyle F(y)<-{\frac {K}{\sqrt {y}}},\quad {\text{ and }}\quad F(z)>{\frac {K}{\sqrt {z}}}} hold infinitely often in any neighbourhood of 0. The graphic to the right indicates that this behaviour is not at first numerically obvious: the oscillations are not clearly seen until the series is summed in excess of 100 million terms, and are only readily visible when y < 10−5. == Riesz mean == The Riesz mean of the von Mangoldt function is given by ∑ n ≤ λ ( 1 − n λ ) δ Λ ( n ) = − 1 2 π i ∫ c − i ∞ c + i ∞ Γ ( 1 + δ ) Γ ( s ) Γ ( 1 + δ + s ) ζ ′ ( s ) ζ ( s ) λ s d s = λ 1 + δ + ∑ ρ Γ ( 1 + δ ) Γ ( ρ ) Γ ( 1 + δ + ρ ) + ∑ n c n λ − n . {\displaystyle {\begin{aligned}\sum _{n\leq \lambda }\left(1-{\frac {n}{\lambda }}\right)^{\delta }\Lambda (n)&=-{\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }{\frac {\Gamma (1+\delta )\Gamma (s)}{\Gamma (1+\delta +s)}}{\frac {\zeta ^{\prime }(s)}{\zeta (s)}}\lambda ^{s}ds\\&={\frac {\lambda }{1+\delta }}+\sum _{\rho }{\frac {\Gamma (1+\delta )\Gamma (\rho )}{\Gamma (1+\delta +\rho )}}+\sum _{n}c_{n}\lambda ^{-n}.\end{aligned}}} Here, λ and δ are numbers characterizing the Riesz mean. One must take c > 1. The sum over ρ is the sum over the zeroes of the Riemann zeta function, and ∑ n c n λ − n {\displaystyle \sum _{n}c_{n}\lambda ^{-n}\,} can be shown to be a convergent series for λ > 1. == Approximation by Riemann zeta zeros == There is an explicit formula for the summatory Mangoldt function ψ ( x ) {\displaystyle \psi (x)} given by ψ ( x ) = x − ∑ ζ ( ρ ) = 0 x ρ ρ − log ⁡ ( 2 π ) . 
{\displaystyle \psi (x)=x-\sum _{\zeta (\rho )=0}{\frac {x^{\rho }}{\rho }}-\log(2\pi ).} If we separate out the trivial zeros of the zeta function, which are the negative even integers, we obtain ψ ( x ) = x − ∑ ζ ( ρ ) = 0 , 0 < ℜ ( ρ ) < 1 x ρ ρ − log ⁡ ( 2 π ) − 1 2 log ⁡ ( 1 − x − 2 ) . {\displaystyle \psi (x)=x-\sum _{\zeta (\rho )=0,\ 0<\Re (\rho )<1}{\frac {x^{\rho }}{\rho }}-\log(2\pi )-{\frac {1}{2}}\log(1-x^{-2}).} (The sum is not absolutely convergent, so we take the zeros in order of the absolute value of their imaginary part.) In the opposite direction, in 1911 E. Landau proved that for any fixed t > 1 ∑ 0 < γ ≤ T t ρ = − T 2 π Λ ( t ) + O ( log ⁡ T ) {\displaystyle \sum _{0<\gamma \leq T}t^{\rho }={\frac {-T}{2\pi }}\Lambda (t)+{\mathcal {O}}(\log T)} (We use the notation ρ = β + iγ for the non-trivial zeros of the zeta function.) Therefore, if we use Riemann notation α = −i(ρ − 1/2) we have that the sum over nontrivial zeta zeros expressed as lim T → + ∞ 1 T ∑ 0 < γ ≤ T cos ⁡ ( α log ⁡ t ) = − Λ ( t ) 2 π t {\displaystyle \lim _{T\rightarrow +\infty }{\frac {1}{T}}\sum _{0<\gamma \leq T}\cos(\alpha \log t)=-{\frac {\Lambda (t)}{2\pi {\sqrt {t}}}}} peaks at primes and powers of primes. The Fourier transform of the von Mangoldt function gives a spectrum with spikes at ordinates equal to the imaginary parts of the Riemann zeta function zeros. This is sometimes called a duality. == Generalized von Mangoldt function == The functions Λ k ( n ) = ∑ d ∣ n μ ( d ) log k ⁡ ( n / d ) , {\displaystyle \Lambda _{k}(n)=\sum \limits _{d\mid n}\mu (d)\log ^{k}(n/d),} where μ {\displaystyle \mu } denotes the Möbius function and k {\displaystyle k} denotes a positive integer, generalize the von Mangoldt function. The function Λ 1 {\displaystyle \Lambda _{1}} is the ordinary von Mangoldt function Λ {\displaystyle \Lambda } . == See also == Prime-counting function == References == Apostol, Tom M. 
(1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001 Hardy, G. H.; Wright, E. M. (2008) [1938]. Heath-Brown, D. R.; Silverman, J. H. (eds.). An Introduction to the Theory of Numbers (6th ed.). Oxford: Oxford University Press. ISBN 978-0-19-921985-8. MR 2445243. Zbl 1159.11001. Tenenbaum, Gérald (1995). Introduction to analytic and probabilistic number theory. Cambridge Studies in Advanced Mathematics. Vol. 46. Translated by C.B. Thomas. Cambridge: Cambridge University Press. ISBN 0-521-41261-7. Zbl 0831.11001. == External links == Allan Gut, Some remarks on the Riemann zeta distribution (2005) S.A. Stepanov (2001) [1994], "Mangoldt function", Encyclopedia of Mathematics, EMS Press Heike, How plot Riemann zeta zero spectrum in Mathematica? (2012)
Intuitively, an algorithmically random sequence (or random sequence) is a sequence of binary digits that appears random to any algorithm running on a (prefix-free or not) universal Turing machine. The notion can be applied analogously to sequences on any finite alphabet (e.g. decimal digits). Random sequences are key objects of study in algorithmic information theory. In measure-theoretic probability theory, introduced by Andrey Kolmogorov in 1933, there is no such thing as a random sequence. For example, consider flipping a fair coin infinitely many times. Any particular sequence, be it 0000 … {\displaystyle 0000\dots } or 011010 … {\displaystyle 011010\dots } , has the same probability: exactly zero. There is no way to state that one sequence is "more random" than another sequence, using the language of measure-theoretic probability. However, it is intuitively obvious that 011010 … {\displaystyle 011010\dots } looks more random than 0000 … {\displaystyle 0000\dots } . Algorithmic randomness theory formalizes this intuition. As different types of algorithms are sometimes considered, ranging from algorithms with specific bounds on their running time to algorithms which may ask questions of an oracle machine, there are different notions of randomness. The most common of these is known as Martin-Löf randomness (K-randomness or 1-randomness), but stronger and weaker forms of randomness also exist. When the term "algorithmically random" is used to refer to a particular single (finite or infinite) sequence without clarification, it is usually taken to mean "incompressible" or, in the case the sequence is infinite and prefix algorithmically random (i.e., K-incompressible), "Martin-Löf–Chaitin random".
Since its inception, Martin-Löf randomness has been shown to admit many equivalent characterizations—in terms of compression, randomness tests, and gambling—that bear little outward resemblance to the original definition, but each of which satisfies our intuitive notion of properties that random sequences ought to have: random sequences should be incompressible, they should pass statistical tests for randomness, and it should be difficult to make money betting on them. The existence of these multiple definitions of Martin-Löf randomness, and the stability of these definitions under different models of computation, give evidence that Martin-Löf randomness is natural and not an accident of Martin-Löf's particular model. It is important to disambiguate between algorithmic randomness and stochastic randomness. Unlike algorithmic randomness, which is defined for computable (and thus deterministic) processes, stochastic randomness is usually said to be a property of a sequence that is a priori known to be generated by (or is the outcome of) an independent identically distributed equiprobable stochastic process. Because infinite sequences of binary digits can be identified with real numbers in the unit interval, random binary sequences are often called (algorithmically) random real numbers. Additionally, infinite binary sequences correspond to characteristic functions of sets of natural numbers; therefore those sequences might be seen as sets of natural numbers. The class of all Martin-Löf random (binary) sequences is denoted by RAND or MLR. == History == === Richard von Mises === Richard von Mises formalized the notion of a test for randomness in order to define a random sequence as one that passed all tests for randomness. 
He defined a "collective" (kollektiv) to be an infinite binary string x 1 : ∞ {\displaystyle x_{1:\infty }} defined such that There exists a limit lim n 1 n ∑ i = 1 n x i = p ∈ ( 0 , 1 ) {\displaystyle \lim _{n}{\frac {1}{n}}\sum _{i=1}^{n}x_{i}=p\in (0,1)} . For any "admissible" rule, such that it picks out an infinite subsequence ( x m i ) i {\displaystyle (x_{m_{i}})_{i}} from the string, we still have lim n 1 n ∑ i = 1 n x m i = p {\displaystyle \lim _{n}{\frac {1}{n}}\sum _{i=1}^{n}x_{m_{i}}=p} . He called this principle "impossibility of a gambling system". To pick out a subsequence, first pick a binary function ϕ {\displaystyle \phi } , such that given any binary string x 1 : k {\displaystyle x_{1:k}} , it outputs either 0 or 1. If it outputs 1, then we add x k + 1 {\displaystyle x_{k+1}} to the subsequence, else we continue. In this definition, some admissible rules might abstain forever on some sequences, and thus fail to pick out an infinite subsequence. We only consider those that do pick an infinite subsequence. Stated in another way, each infinite binary string is a coin-flip game, and an admissible rule is a way for a gambler to decide when to place bets. A collective is a coin-flip game where there is no way for one gambler to do better than another over the long run. That is, there is no gambling system that works for the game. The definition generalizes from binary alphabet to countable alphabet: The frequency of each letter converges to a limit greater than zero. For any "admissible" rule, such that it picks out an infinite subsequence ( x m i ) i {\displaystyle (x_{m_{i}})_{i}} from the string, the frequency of each letter in the subsequence still converges to the same limit. Usually the admissible rules are defined to be rules computable by a Turing machine, and we require p = 1 / 2 {\displaystyle p=1/2} . With this, we have the Mises–Wald–Church random sequences. 
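The "impossibility of a gambling system" can be illustrated with a small simulation. The rule below, "select every position immediately following a 1", is one computable admissible rule: it depends only on the prefix seen so far. On a typical pseudorandom string the selected subsequence keeps the same frequency of ones (an illustration only; a seeded PRNG is of course not a true collective):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
flips = [random.randint(0, 1) for _ in range(100_000)]

# Admissible rule: looking only at the prefix seen so far, select the
# next position exactly when the most recent bit was a 1.
subseq = [flips[k] for k in range(1, len(flips)) if flips[k - 1] == 1]

freq_all = sum(flips) / len(flips)
freq_sub = sum(subseq) / len(subseq)
print(round(freq_all, 3), round(freq_sub, 3))  # both close to 0.5
```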
This is not a restriction, since given a sequence with p = 1 / 2 {\displaystyle p=1/2} , we can construct random sequences with any other computable p ∈ ( 0 , 1 ) {\displaystyle p\in (0,1)} . (Here, "Church" refers to Alonzo Church, whose 1940 paper proposed using Turing-computable rules.) However, this definition was found not to be strong enough. Intuitively, the long-time average of a random sequence should oscillate on both sides of p {\displaystyle p} , like how a random walk should cross the origin infinitely many times. However, Jean Ville showed that, even with countably many rules, there exists a binary sequence that tends towards p {\displaystyle p} fraction of ones, but, for every finite prefix, the fraction of ones is less than p {\displaystyle p} . === Per Martin-Löf === The Ville construction suggests that the Mises–Wald–Church sense of randomness is not good enough, because some random sequences do not satisfy some laws of randomness. For example, the Ville construction does not satisfy one of the laws of the iterated logarithm: lim sup n → ∞ − ∑ k = 1 n ( x k − 1 / 2 ) 2 n log ⁡ log ⁡ n ≠ 1 {\displaystyle \limsup _{n\to \infty }{\frac {-\sum _{k=1}^{n}(x_{k}-1/2)}{\sqrt {2n\log \log n}}}\neq 1} Naively, one can fix this by requiring a sequence to satisfy all possible laws of randomness, where a "law of randomness" is a property that is satisfied by all sequences with probability 1. However, for each infinite sequence y 1 : ∞ ∈ 2 N {\displaystyle y_{1:\infty }\in 2^{\mathbb {N} }} , we have a law of randomness that x 1 : ∞ ≠ y 1 : ∞ {\displaystyle x_{1:\infty }\neq y_{1:\infty }} , leading to the conclusion that there are no random sequences. (Per Martin-Löf, 1966) defined "Martin-Löf randomness" by only allowing laws of randomness that are Turing-computable. In other words, a sequence is random iff it passes all Turing-computable tests of randomness. 
The thesis that the definition of Martin-Löf randomness "correctly" captures the intuitive notion of randomness has been called the Martin-Löf–Chaitin Thesis; it is somewhat similar to the Church–Turing thesis, whereby the mathematical concept of "computable by Turing machines" captures the intuitive notion of a function being "computable". Just as Turing-computability has many equivalent definitions, Martin-Löf randomness also has many equivalent definitions. See next section. == Three equivalent definitions == Martin-Löf's original definition of a random sequence was in terms of constructive null covers; he defined a sequence to be random if it is not contained in any such cover. Gregory Chaitin, Leonid Levin and Claus-Peter Schnorr proved a characterization in terms of algorithmic complexity: a sequence is random if there is a uniform bound on the compressibility of its initial segments. Schnorr gave a third equivalent definition in terms of martingales. Li and Vitanyi's book An Introduction to Kolmogorov Complexity and Its Applications is the standard introduction to these ideas. Algorithmic complexity (Chaitin 1969, Schnorr 1973, Levin 1973): Algorithmic complexity (also known as (prefix-free) Kolmogorov complexity or program-size complexity) can be thought of as a lower bound on the algorithmic compressibility of a finite sequence (of characters or binary digits). It assigns to each such sequence w a natural number K(w) that, intuitively, measures the minimum length of a computer program (written in some fixed programming language) that takes no input and will output w when run. The complexity is required to be prefix-free: The program (a sequence of 0 and 1) is followed by an infinite string of 0s, and the length of the program (assuming it halts) includes the number of zeroes to the right of the program that the universal Turing machine reads.
The additional requirement is needed because we can choose a length such that the length codes information about the substring. Given a natural number c and a sequence w, we say that w is c-incompressible if K ( w ) ≥ | w | − c {\displaystyle K(w)\geq |w|-c} . An infinite sequence S is Martin-Löf random if and only if there is a constant c such that all of S's finite prefixes are c-incompressible. More succinctly, K ( w ) ≥ | w | − O ( 1 ) {\displaystyle K(w)\geq |w|-O(1)} . Constructive null covers (Martin-Löf 1966): This is Martin-Löf's original definition. For a finite binary string w we let Cw denote the cylinder generated by w. This is the set of all infinite sequences beginning with w, which is a basic open set in Cantor space. The product measure μ(Cw) of the cylinder generated by w is defined to be 2−|w|. Every open subset of Cantor space is the union of a countable sequence of disjoint basic open sets, and the measure of an open set is the sum of the measures of any such sequence. An effective open set is an open set that is the union of the sequence of basic open sets determined by a recursively enumerable sequence of binary strings. A constructive null cover or effective measure 0 set is a recursively enumerable sequence U i {\displaystyle U_{i}} of effective open sets such that U i + 1 ⊆ U i {\displaystyle U_{i+1}\subseteq U_{i}} and μ ( U i ) ≤ 2 − i {\displaystyle \mu (U_{i})\leq 2^{-i}} for each natural number i. Every effective null cover determines a G δ {\displaystyle G_{\delta }} set of measure 0, namely the intersection of the sets U i {\displaystyle U_{i}} . A sequence is defined to be Martin-Löf random if it is not contained in any G δ {\displaystyle G_{\delta }} set determined by a constructive null cover. 
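Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper bound on it, which is enough to illustrate the incompressibility intuition behind the complexity characterization above. This is a loose illustration, not a test of Martin-Löf randomness:

```python
import random
import zlib

def compressed_size(data):
    """Bytes after DEFLATE at maximum effort: an upper-bound proxy for K."""
    return len(zlib.compress(data, 9))

n = 10_000
regular = b"\x00" * n                                     # the 000... string
random.seed(0)
typical = bytes(random.getrandbits(8) for _ in range(n))  # a "typical" string

print(compressed_size(regular), compressed_size(typical))
# The highly regular string shrinks to a few dozen bytes; the pseudorandom
# one stays close to its original length.
assert compressed_size(regular) < 100 < compressed_size(typical)
```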
Constructive martingales (Schnorr 1971): A martingale is a function d : { 0 , 1 } ∗ → [ 0 , ∞ ) {\displaystyle d:\{0,1\}^{*}\to [0,\infty )} such that, for all finite strings w, d ( w ) = ( d ( w ⌢ 0 ) + d ( w ⌢ 1 ) ) / 2 {\displaystyle d(w)=(d(w^{\smallfrown }0)+d(w^{\smallfrown }1))/2} , where a ⌢ b {\displaystyle a^{\smallfrown }b} is the concatenation of the strings a and b. This is called the "fairness condition": if a martingale is viewed as a betting strategy, then the above condition requires that the bettor plays against fair odds. A martingale d is said to succeed on a sequence S if lim sup n → ∞ d ( S ↾ n ) = ∞ , {\displaystyle \limsup _{n\to \infty }d(S\upharpoonright n)=\infty ,} where S ↾ n {\displaystyle S\upharpoonright n} is the first n bits of S. A martingale d is constructive (also known as weakly computable, lower semi-computable) if there exists a computable function d ^ : { 0 , 1 } ∗ × N → Q {\displaystyle {\widehat {d}}:\{0,1\}^{*}\times \mathbb {N} \to {\mathbb {Q} }} such that, for all finite binary strings w d ^ ( w , t ) ≤ d ^ ( w , t + 1 ) < d ( w ) , {\displaystyle {\widehat {d}}(w,t)\leq {\widehat {d}}(w,t+1)<d(w),} for all positive integers t, lim t → ∞ d ^ ( w , t ) = d ( w ) . {\displaystyle \lim _{t\to \infty }{\widehat {d}}(w,t)=d(w).} A sequence is Martin-Löf random if and only if no constructive martingale succeeds on it. == Interpretations of the definitions == The Kolmogorov complexity characterization conveys the intuition that a random sequence is incompressible: no prefix can be produced by a program much shorter than the prefix. The null cover characterization conveys the intuition that a random real number should not have any property that is "uncommon". Each measure 0 set can be thought of as an uncommon property. It is not possible for a sequence to lie in no measure 0 sets, because each one-point set has measure 0. 
Martin-Löf's idea was to limit the definition to measure 0 sets that are effectively describable; the definition of an effective null cover determines a countable collection of effectively describable measure 0 sets and defines a sequence to be random if it does not lie in any of these particular measure 0 sets. Since the union of a countable collection of measure 0 sets has measure 0, this definition immediately leads to the theorem that there is a measure 1 set of random sequences. Note that if we identify the Cantor space of binary sequences with the interval [0,1] of real numbers, the measure on Cantor space agrees with Lebesgue measure. An effective measure 0 set can be interpreted as a Turing machine that is able to tell, given an infinite binary string, whether the string looks random at levels of statistical significance. The set is the intersection of shrinking sets U 1 ⊃ U 2 ⊃ U 3 ⊃ ⋯ {\displaystyle U_{1}\supset U_{2}\supset U_{3}\supset \cdots } , and since each set U n {\displaystyle U_{n}} is specified by an enumerable sequence of prefixes, given any infinite binary string, if it is in U n {\displaystyle U_{n}} , then the Turing machine can decide in finite time that the string does fall inside U n {\displaystyle U_{n}} . Therefore, it can "reject the hypothesis that the string is random at significance level 2 − n {\displaystyle 2^{-n}} ". If the Turing machine can reject the hypothesis at all significance levels, then the string is not random. A random string is one that, for each Turing-computable test of randomness, manages to remain forever un-rejected at some significance level. The martingale characterization conveys the intuition that no effective procedure should be able to make money betting against a random sequence. A martingale d is a betting strategy. d reads a finite string w and bets money on the next bit. It bets some fraction of its money that the next bit will be 0, and the remainder of its money that the next bit will be 1. 
d doubles the money it placed on the bit that actually occurred, and it loses the rest. d(w) is the amount of money it has after seeing the string w. Since the bet placed after seeing the string w can be calculated from the values d(w), d(w0), and d(w1), calculating the amount of money it has is equivalent to calculating the bet. The martingale characterization says that no betting strategy implementable by any computer (even in the weak sense of constructive strategies, which are not necessarily computable) can make money betting on a random sequence. == Properties and examples of Martin-Löf random sequences == === Universality === There is a universal constructive martingale d. This martingale is universal in the sense that, given any constructive martingale d′, if d′ succeeds on a sequence, then d succeeds on that sequence as well. Thus, d succeeds on every sequence in RANDc (but, since d is constructive, it succeeds on no sequence in RAND). (Schnorr 1971) There is a constructive null cover of RANDc. This means that all effective tests for randomness (that is, constructive null covers) are, in a sense, subsumed by this universal test for randomness, since any sequence that passes this single test for randomness will pass all tests for randomness. (Martin-Löf 1966) Intuitively, this universal test for randomness says: if the sequence has increasingly long prefixes that can be increasingly well-compressed on this universal Turing machine, then it is not random (see the next section). Construction sketch: Enumerate the effective null covers as ( ( U m , n ) n ) m {\displaystyle ((U_{m,n})_{n})_{m}} . The enumeration is also effective (enumerated by a modified universal Turing machine). Now we have a universal effective null cover by diagonalization: ( ∪ n U n , n + k + 1 ) k {\displaystyle (\cup _{n}U_{n,n+k+1})_{k}} . === Passing randomness tests === If a sequence fails an algorithmic randomness test, then it is algorithmically compressible. 
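The fairness condition and the betting interpretation can be checked concretely. The sketch below is an illustration, not a martingale from the literature: it bets a fixed fraction f of its capital that each next bit is 0, at double-or-lose odds. One can verify d(w) = (d(w0) + d(w1))/2 and see that the strategy's wealth grows without bound on the all-zeros sequence but collapses on its complement.

```python
def martingale(w: str, f: float = 0.5) -> float:
    """Capital after betting, at fair double-or-lose odds, a fixed
    fraction f of current capital that each successive bit is 0.
    Starting capital is 1. A toy computable martingale."""
    d = 1.0
    for bit in w:
        d *= (1 + f) if bit == "0" else (1 - f)
    return d

# Fairness condition d(w) = (d(w0) + d(w1)) / 2 holds for every w.
for w in ["", "0", "10", "0110"]:
    assert abs(martingale(w) - (martingale(w + "0") + martingale(w + "1")) / 2) < 1e-12

assert martingale("0" * 20) > 1000   # succeeds on the all-zeros sequence
assert martingale("1" * 20) < 1e-3   # ruined by the all-ones sequence
```

A sequence on which such a strategy's capital stays bounded for every constructive martingale is exactly a Martin-Löf random one.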
Conversely, if it is algorithmically compressible, then it fails an algorithmic randomness test. Construction sketch: Suppose the sequence fails a randomness test; then it can be compressed by lexicographically enumerating all sequences that fail the test, then coding the location of the sequence in the list of all such sequences. This is called "enumerative source encoding". Conversely, if the sequence is compressible, then by the pigeonhole principle, only a vanishingly small fraction of sequences are like that, so we can define a new test for randomness by "has a compression by this universal Turing machine". Incidentally, this is the universal test for randomness. For example, consider a binary sequence sampled IID from a Bernoulli distribution with parameter p. After taking a large number N {\displaystyle N} of samples, we should have about M ≈ p N {\displaystyle M\approx pN} ones. We can code for this sequence as "Generate all binary sequences with length N {\displaystyle N} , and M {\displaystyle M} ones. Of those, the i {\displaystyle i} -th sequence in lexicographic order.". By Stirling's approximation, log 2 ⁡ ( N p N ) ≈ N H ( p ) {\displaystyle \log _{2}{\binom {N}{pN}}\approx NH(p)} where H {\displaystyle H} is the binary entropy function. Thus, the number of bits in this description is: 2 ( 1 + ϵ ) log 2 ⁡ N + ( 1 + ϵ ) N H ( p ) + O ( 1 ) {\displaystyle 2(1+\epsilon )\log _{2}N+(1+\epsilon )NH(p)+O(1)} The first term is for prefix-coding the numbers N {\displaystyle N} and M {\displaystyle M} . The second term is for prefix-coding the number i {\displaystyle i} . (Use Elias omega coding.) The third term is for prefix-coding the rest of the description. When N {\displaystyle N} is large, this description has just ∼ H ( p ) N {\displaystyle \sim H(p)N} bits, and so it is compressible, with compression ratio ∼ H ( p ) {\displaystyle \sim H(p)} . In particular, the compression ratio is exactly one (incompressible) only when p = 1 / 2 {\displaystyle p=1/2} . 
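The Stirling estimate log2 C(N, pN) ≈ N·H(p) used in this example is easy to confirm numerically; the values of N and p below are arbitrary choices for illustration.

```python
from math import comb, log2

def H(p: float) -> float:
    """Binary entropy function, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

N, p = 100_000, 0.25          # arbitrary illustrative values
M = int(p * N)
exact = log2(comb(N, M))      # bits needed for the index i into the list
approx = N * H(p)             # the entropy estimate from Stirling's formula
assert abs(exact - approx) / approx < 1e-3   # agreement to within 0.1%
```

The residual gap is the O(log N) correction term in Stirling's formula, which vanishes relative to N·H(p) as N grows.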
(Example 14.2.8 ) === Impossibility of a gambling system === Consider a casino offering fair odds at a roulette table. The roulette table generates a sequence of random numbers. If this sequence is algorithmically random, then there is no lower semi-computable strategy to win, which in turn implies that there is no computable strategy to win. That is, for any gambling algorithm, the long-term log-payoff is zero (neither positive nor negative). Conversely, if this sequence is not algorithmically random, then there is a lower semi-computable strategy to win. === Examples === Chaitin's halting probability Ω is an example of a random sequence. No random sequence is computable. Every random sequence is normal, satisfies the law of large numbers, and satisfies all Turing-computable properties satisfied by an IID stream of uniformly random numbers. (Theorem 14.5.2 ) === Relation to the arithmetic hierarchy === RANDc (the complement of RAND) is a measure 0 subset of the set of all infinite sequences. This is implied by the fact that each constructive null cover covers a measure 0 set, there are only countably many constructive null covers, and a countable union of measure 0 sets has measure 0. This implies that RAND is a measure 1 subset of the set of all infinite sequences. The class RAND is a Σ 2 0 {\displaystyle \Sigma _{2}^{0}} subset of Cantor space, where Σ 2 0 {\displaystyle \Sigma _{2}^{0}} refers to the second level of the arithmetical hierarchy. This is because a sequence S is in RAND if and only if there is some open set in the universal effective null cover that does not contain S; this property can be seen to be definable by a Σ 2 0 {\displaystyle \Sigma _{2}^{0}} formula. There is a random sequence which is Δ 2 0 {\displaystyle \Delta _{2}^{0}} , that is, computable relative to an oracle for the Halting problem. (Schnorr 1971) Chaitin's Ω is an example of such a sequence. 
No random sequence is decidable, computably enumerable, or co-computably-enumerable. Since these correspond to the Δ 1 0 {\displaystyle \Delta _{1}^{0}} , Σ 1 0 {\displaystyle \Sigma _{1}^{0}} , and Π 1 0 {\displaystyle \Pi _{1}^{0}} levels of the arithmetical hierarchy, this means that Δ 2 0 {\displaystyle \Delta _{2}^{0}} is the lowest level in the arithmetical hierarchy where random sequences can be found. Every sequence is Turing reducible to some random sequence. (Kučera 1985/1989, Gács 1986). Thus there are random sequences of arbitrarily high Turing degree. == Relative randomness == As each of the equivalent definitions of a Martin-Löf random sequence is based on what is computable by some Turing machine, one can naturally ask what is computable by a Turing oracle machine. For a fixed oracle A, a sequence B which is not only random but in fact, satisfies the equivalent definitions for computability relative to A (e.g., no martingale which is constructive relative to the oracle A succeeds on B) is said to be random relative to A. Two sequences, while themselves random, may contain very similar information, and therefore neither will be random relative to the other. Any time there is a Turing reduction from one sequence to another, the second sequence cannot be random relative to the first, just as computable sequences are themselves nonrandom; in particular, this means that Chaitin's Ω is not random relative to the halting problem. An important result relating to relative randomness is van Lambalgen's theorem, which states that if C is the sequence composed from A and B by interleaving the first bit of A, the first bit of B, the second bit of A, the second bit of B, and so on, then C is algorithmically random if and only if A is algorithmically random, and B is algorithmically random relative to A. A closely related consequence is that if A and B are both random themselves, then A is random relative to B if and only if B is random relative to A. 
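On finite prefixes, the interleaving construction in van Lambalgen's theorem is just alternating bits; the helper names below are illustrative.

```python
def interleave(a: str, b: str) -> str:
    """C = first bit of A, first bit of B, second bit of A, second bit of B, ..."""
    return "".join(x + y for x, y in zip(a, b))

def deinterleave(c: str) -> tuple[str, str]:
    """Recover the prefixes of A and B from a prefix of C."""
    return c[0::2], c[1::2]

c = interleave("0000", "1111")
assert c == "01010101"
assert deinterleave(c) == ("0000", "1111")
```

The theorem itself is about the infinite sequences: C built this way is random exactly when A is random and B is random relative to A.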
== Stronger than Martin-Löf randomness == Relative randomness gives us the first notion which is stronger than Martin-Löf randomness, which is randomness relative to some fixed oracle A. For any oracle, this is at least as strong, and for most oracles, it is strictly stronger, since there will be Martin-Löf random sequences which are not random relative to the oracle A. Important oracles often considered are the halting problem, ∅ ′ {\displaystyle \emptyset '} , and the nth jump oracle, ∅ ( n ) {\displaystyle \emptyset ^{(n)}} , as these oracles are able to answer specific questions which naturally arise. A sequence which is random relative to the oracle ∅ ( n − 1 ) {\displaystyle \emptyset ^{(n-1)}} is called n-random; a sequence is 1-random, therefore, if and only if it is Martin-Löf random. A sequence which is n-random for every n is called arithmetically random. The n-random sequences sometimes arise when considering more complicated properties. For example, there are only countably many Δ 2 0 {\displaystyle \Delta _{2}^{0}} sets, so one might think that these should be non-random. However, the halting probability Ω is Δ 2 0 {\displaystyle \Delta _{2}^{0}} and 1-random; it is only after 2-randomness is reached that it is impossible for a random set to be Δ 2 0 {\displaystyle \Delta _{2}^{0}} . == Weaker than Martin-Löf randomness == Additionally, there are several notions of randomness which are weaker than Martin-Löf randomness. Some of these are weak 1-randomness, Schnorr randomness, computable randomness, partial computable randomness. Yongge Wang showed that Schnorr randomness is different from computable randomness. Additionally, Kolmogorov–Loveland randomness is known to be no stronger than Martin-Löf randomness, but it is not known whether it is actually weaker. At the opposite end of the randomness spectrum there is the notion of a K-trivial set. 
These sets are anti-random in that all of their initial segments are logarithmically compressible (i.e., K ( w ) ≤ K ( | w | ) + b {\displaystyle K(w)\leq K(|w|)+b} for each initial segment w), but they are not computable. == See also == Random sequence Gregory Chaitin Stochastics Monte Carlo method K-trivial set Universality probability Statistical randomness == References == == Further reading == Eagle, Antony (2021), "Chance versus Randomness", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2021 ed.), Metaphysics Research Lab, Stanford University, retrieved 2024-01-28 Downey, Rod; Hirschfeldt, Denis R.; Nies, André; Terwijn, Sebastiaan A. (2006). "Calibrating Randomness". The Bulletin of Symbolic Logic. 12 (3/4): 411–491. CiteSeerX 10.1.1.135.4162. doi:10.2178/bsl/1154698741. Archived from the original on 2016-02-02. Gács, Péter (1986). "Every sequence is reducible to a random one" (PDF). Information and Control. 70 (2/3): 186–192. doi:10.1016/s0019-9958(86)80004-3. Kučera, A. (1985). "Measure, Π01-classes and complete extensions of PA". Recursion Theory Week. Lecture Notes in Mathematics. Vol. 1141. Springer-Verlag. pp. 245–259. doi:10.1007/BFb0076224. ISBN 978-3-540-39596-6. Kučera, A. (1989). "On the use of diagonally nonrecursive functions". Studies in Logic and the Foundations of Mathematics. Vol. 129. North-Holland. pp. 219–239. Levin, L. (1973). "On the notion of a random sequence". Soviet Mathematics - Doklady. 14: 1413–1416. Li, M.; Vitanyi, P. M. B. (1997). An Introduction to Kolmogorov Complexity and its Applications (Second ed.). Berlin: Springer-Verlag. Martin-Löf, P. (1966). "The definition of random sequences". Information and Control. 9 (6): 602–619. doi:10.1016/s0019-9958(66)80018-9. Nies, André (2009). Computability and randomness. Oxford Logic Guides. Vol. 51. Oxford: Oxford University Press. ISBN 978-0-19-923076-1. Zbl 1169.03034. Schnorr, C. P. (1971). "A unified approach to the definition of a random sequence". 
Mathematical Systems Theory. 5 (3): 246–258. doi:10.1007/BF01694181. S2CID 8931514. Schnorr, Claus P. (1973). "Process complexity and effective random tests". Journal of Computer and System Sciences. 7 (4): 376–388. doi:10.1016/s0022-0000(73)80030-3. Chaitin, Gregory J. (1969). "On the Length of Programs for Computing Finite Binary Sequences: Statistical Considerations". Journal of the ACM. 16 (1): 145–159. doi:10.1145/321495.321506. S2CID 8209877. Ville, J. (1939). Etude critique de la notion de collectif. Paris: Gauthier-Villars.
Wikipedia/Algorithmically_random_set
In computability theory, many reducibility relations (also called reductions, reducibilities, and notions of reducibility) are studied. They are motivated by the question: given sets A {\displaystyle A} and B {\displaystyle B} of natural numbers, is it possible to effectively convert a method for deciding membership in B {\displaystyle B} into a method for deciding membership in A {\displaystyle A} ? If the answer to this question is affirmative then A {\displaystyle A} is said to be reducible to B {\displaystyle B} . The study of reducibility notions is motivated by the study of decision problems. For many notions of reducibility, if any noncomputable set is reducible to a set A {\displaystyle A} then A {\displaystyle A} must also be noncomputable. This gives a powerful technique for proving that many sets are noncomputable. == Reducibility relations == A reducibility relation is a binary relation on sets of natural numbers that is Reflexive: Every set is reducible to itself. Transitive: If a set A {\displaystyle A} is reducible to a set B {\displaystyle B} and B {\displaystyle B} is reducible to a set C {\displaystyle C} then A {\displaystyle A} is reducible to C {\displaystyle C} . These two properties imply that reducibility is a preorder on the powerset of the natural numbers. Not all preorders are studied as reducibility notions, however. The notions studied in computability theory have the informal property that A {\displaystyle A} is reducible to B {\displaystyle B} if and only if any (possibly noneffective) decision procedure for B {\displaystyle B} can be effectively converted to a decision procedure for A {\displaystyle A} . The different reducibility relations vary in the methods they permit such a conversion process to use. 
=== Degrees of a reducibility relation === Every reducibility relation (in fact, every preorder) induces an equivalence relation on the powerset of the natural numbers in which two sets are equivalent if and only if each one is reducible to the other. In computability theory, these equivalence classes are called the degrees of the reducibility relation. For example, the Turing degrees are the equivalence classes of sets of naturals induced by Turing reducibility. The degrees of any reducibility relation are partially ordered by the relation in the following manner. Let ≤ {\displaystyle \leq } be a reducibility relation and let C {\displaystyle C} and D {\displaystyle D} be two of its degrees. Then C ≤ D {\displaystyle C\leq D} if and only if there is a set A {\displaystyle A} in C {\displaystyle C} and a set B {\displaystyle B} in D {\displaystyle D} such that A ≤ B {\displaystyle A\leq B} . This is equivalent to the property that for every set A {\displaystyle A} in C {\displaystyle C} and every set B {\displaystyle B} in D {\displaystyle D} , A ≤ B {\displaystyle A\leq B} , because any two sets in C are equivalent and any two sets in D {\displaystyle D} are equivalent. It is common, as shown here, to use boldface notation to denote degrees. == Turing reducibility == The most fundamental reducibility notion is Turing reducibility. A set A {\displaystyle A} of natural numbers is Turing reducible to a set B {\displaystyle B} if and only if there is an oracle Turing machine that, when run with B {\displaystyle B} as its oracle set, will compute the indicator function (characteristic function) of A {\displaystyle A} . Equivalently, A {\displaystyle A} is Turing reducible to B {\displaystyle B} if and only if there is an algorithm for computing the indicator function for A {\displaystyle A} provided that the algorithm is provided with a means to correctly answer questions of the form "Is n {\displaystyle n} in B {\displaystyle B} ?". 
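As an illustration of the oracle formulation, the sketch below decides membership in a toy set A using only yes/no queries to an oracle for B. The particular sets are hypothetical choices for the example (both are decidable; real interest lies in noncomputable oracles).

```python
def turing_reduce(x: int, oracle) -> bool:
    """Decide x ∈ A = {x : both x and 2x are in B}, using only
    yes/no oracle queries of the form "is n in B?"."""
    return oracle(x) and oracle(2 * x)

B = {n for n in range(100) if n % 3 == 0}   # a sample oracle set

def in_B(n: int) -> bool:
    return n in B

assert turing_reduce(6, in_B) is True       # 6 and 12 are both in B
assert turing_reduce(4, in_B) is False      # 4 is not in B
```

Note that the reduction procedure never inspects B directly; it only asks membership questions, which is exactly what the oracle Turing machine model permits.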
Turing reducibility serves as a dividing line for other reducibility notions because, according to the Church-Turing thesis, it is the most general reducibility relation that is effective. Reducibility relations that imply Turing reducibility have come to be known as strong reducibilities, while those that are implied by Turing reducibility are weak reducibilities. Equivalently, a strong reducibility relation is one whose degrees form a finer equivalence relation than the Turing degrees, while a weak reducibility relation is one whose degrees form a coarser equivalence relation than Turing equivalence. == Reductions stronger than Turing reducibility == The strong reducibilities include One-one reducibility: A {\displaystyle A} is one-one reducible to B {\displaystyle B} if there is a computable one-to-one function f {\displaystyle f} with A ( x ) = B ( f ( x ) ) {\displaystyle A(x)=B(f(x))} for all x {\displaystyle x} . Many-one reducibility: A {\displaystyle A} is many-one reducible to B {\displaystyle B} if there is a computable function f {\displaystyle f} with A ( x ) = B ( f ( x ) ) {\displaystyle A(x)=B(f(x))} for all x {\displaystyle x} . Truth-table reducible: A {\displaystyle A} is truth-table reducible to B {\displaystyle B} if A {\displaystyle A} is Turing reducible to B {\displaystyle B} via a single (oracle) Turing machine which produces a total function relative to every oracle. Weak truth-table reducible: A {\displaystyle A} is weak truth-table reducible to B {\displaystyle B} if there is a Turing reduction from B {\displaystyle B} to A {\displaystyle A} and a computable function f {\displaystyle f} which bounds the use. Whenever A {\displaystyle A} is truth-table reducible to B {\displaystyle B} , A {\displaystyle A} is also weak truth-table reducible to B {\displaystyle B} , since one can construct a computable bound on the use by considering the maximum use over the tree of all oracles, which will exist if the reduction is total on all oracles. 
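A concrete many-one reduction (in fact one-one, since f is injective): taking A to be the even numbers, B the multiples of 4, and f(x) = 2x gives A(x) = B(f(x)) for all x. These sets are of course decidable; they merely illustrate the shape of the definition.

```python
def in_A(x: int) -> bool:          # A = even natural numbers
    return x % 2 == 0

def in_B(x: int) -> bool:          # B = multiples of 4
    return x % 4 == 0

def f(x: int) -> int:              # computable (and injective) reduction function
    return 2 * x

# Many-one reduction: A(x) == B(f(x)) for all x,
# since x is even exactly when 2x is divisible by 4.
assert all(in_A(x) == in_B(f(x)) for x in range(1000))
```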
Positive reducible: A {\displaystyle A} is positive reducible to B {\displaystyle B} if and only if A {\displaystyle A} is truth-table reducible to B {\displaystyle B} in a way that one can compute for every x {\displaystyle x} a formula consisting of atoms of the form B ( 0 ) , B ( 1 ) , . . . {\displaystyle B(0),B(1),...} such that these atoms are combined by and's and or's, where the and of a {\displaystyle a} and b {\displaystyle b} is 1 if a = 1 {\displaystyle a=1} and b = 1 {\displaystyle b=1} and so on. Enumeration reducibility: Similar to positive reducibility, relating to the effective procedure of enumerability from A {\displaystyle A} to B {\displaystyle B} . Disjunctive reducible: Similar to positive reducible with the additional constraint that only or's are permitted. Conjunctive reducibility: Similar to positive reducibility with the additional constraint that only and's are permitted. Linear reducibility: Similar to positive reducibility but with the constraint that all atoms of the form B ( n ) {\displaystyle B(n)} are combined by exclusive or's. In other words, A {\displaystyle A} is linear reducible to B {\displaystyle B} if and only if a computable function computes for each x {\displaystyle x} a finite set F ( x ) {\displaystyle F(x)} given as an explicit list of numbers such that x ∈ A {\displaystyle x\in A} if and only if F ( x ) {\displaystyle F(x)} contains an odd number of elements of B {\displaystyle B} . Many of these were introduced by Post (1944). Post was searching for a non-computable, computably enumerable set which the halting problem could not be Turing reduced to. As he could not construct such a set in 1944, he instead worked on the analogous problems for the various reducibilities that he introduced. These reducibilities have since been the subject of much research, and many relationships between them are known. === Bounded reducibilities === A bounded form of each of the above strong reducibilities can be defined. 
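The parity condition in linear reducibility can be made concrete with a toy example; the sets and the query function F below are hypothetical illustrations. With B the multiples of 3 and F(x) = {x, x+1}, the set A so reduced to B is {x : x ≡ 0 or 2 (mod 3)}, since exactly one of x, x+1 is a multiple of 3 in those cases and none is when x ≡ 1 (mod 3).

```python
def in_B(n: int) -> bool:              # B = multiples of 3 (the oracle set)
    return n % 3 == 0

def F(x: int) -> list:                 # computable finite query set, as an explicit list
    return [x, x + 1]

def in_A(x: int) -> bool:              # A = {x : x ≡ 0 or 2 (mod 3)}
    return x % 3 != 1

# Linear reduction: x ∈ A  iff  F(x) contains an odd number of elements of B.
assert all(in_A(x) == (sum(in_B(n) for n in F(x)) % 2 == 1) for x in range(1000))
```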
The most famous of these is bounded truth-table reduction, but there are also bounded Turing, bounded weak truth-table, and others. These first three are the most common ones and they are based on the number of queries. For example, a set A {\displaystyle A} is bounded truth-table reducible to B {\displaystyle B} if and only if the Turing machine M {\displaystyle M} computing A {\displaystyle A} relative to B {\displaystyle B} computes a list of up to n {\displaystyle n} numbers, queries B {\displaystyle B} on these numbers and then terminates for all possible oracle answers; the value n {\displaystyle n} is a constant independent of x {\displaystyle x} . The difference between bounded weak truth-table and bounded Turing reduction is that in the first case, the up to n {\displaystyle n} queries have to be made at the same time while in the second case, the queries can be made one after the other. For that reason, there are cases where A {\displaystyle A} is bounded Turing reducible to B {\displaystyle B} but not bounded weak truth-table reducible to B {\displaystyle B} . === Strong reductions in computational complexity === The strong reductions listed above restrict the manner in which oracle information can be accessed by a decision procedure but do not otherwise limit the computational resources available. Thus if a set A {\displaystyle A} is decidable then A {\displaystyle A} is reducible to any set B {\displaystyle B} under any of the strong reducibility relations listed above, even if A {\displaystyle A} is not polynomial-time or exponential-time decidable. This is acceptable in the study of computability theory, which is interested in theoretical computability, but it is not reasonable for computational complexity theory, which studies which sets can be decided under certain asymptotical resource bounds. 
The most common reducibility in computational complexity theory is polynomial-time reducibility; a set A is polynomial-time reducible to a set B {\displaystyle B} if there is a polynomial-time function f such that for every n {\displaystyle n} , n {\displaystyle n} is in A {\displaystyle A} if and only if f ( n ) {\displaystyle f(n)} is in B {\displaystyle B} . This reducibility is, essentially, a resource-bounded version of many-one reducibility. Other resource-bounded reducibilities are used in other contexts of computational complexity theory where other resource bounds are of interest. == Reductions weaker than Turing reducibility == Although Turing reducibility is the most general reducibility that is effective, weaker reducibility relations are commonly studied. These reducibilities are related to the relative definability of sets over arithmetic or set theory. They include: Arithmetical reducibility: A set A {\displaystyle A} is arithmetical in a set B {\displaystyle B} if A {\displaystyle A} is definable over the standard model of Peano arithmetic with an extra predicate for B {\displaystyle B} . Equivalently, according to Post's theorem, A is arithmetical in B {\displaystyle B} if and only if A {\displaystyle A} is Turing reducible to B ( n ) {\displaystyle B^{(n)}} , the n {\displaystyle n} th Turing jump of B {\displaystyle B} , for some natural number n {\displaystyle n} . The arithmetical hierarchy gives a finer classification of arithmetical reducibility. Hyperarithmetical reducibility: A set A {\displaystyle A} is hyperarithmetical in a set B {\displaystyle B} if A {\displaystyle A} is Δ 1 1 {\displaystyle \Delta _{1}^{1}} definable (see analytical hierarchy) over the standard model of Peano arithmetic with a predicate for B {\displaystyle B} . 
Equivalently, A {\displaystyle A} is hyperarithmetical in B {\displaystyle B} if and only if A {\displaystyle A} is Turing reducible to B ( α ) {\displaystyle B^{(\alpha )}} , the α {\displaystyle \alpha } th Turing jump of B {\displaystyle B} , for some B {\displaystyle B} -recursive ordinal α {\displaystyle \alpha } . Relative constructibility: A set A {\displaystyle A} is relatively constructible from a set B {\displaystyle B} if A {\displaystyle A} is in L ( B ) {\displaystyle L(B)} , the smallest transitive model of ZFC set theory containing B {\displaystyle B} and all the ordinals. == References == == External links == Stanford Encyclopedia of Philosophy: Recursive Functions
Wikipedia/Reduction_(recursion_theory)
Information and Computation is a closed-access computer science journal published by Elsevier (formerly Academic Press). The journal was founded in 1957 under its former name Information and Control and given its current title in 1987. As of July 2022, the current editor-in-chief is David Peleg. The journal publishes 12 issues a year. == History == Information and Computation was founded as Information and Control in 1957 at the initiative of Leon Brillouin and under the editorship of Leon Brillouin, Colin Cherry and Peter Elias. Murray Eden joined as editor in 1962 and became sole editor-in-chief in 1967. He was succeeded by Albert R. Meyer in 1981, under whose editorship the journal was rebranded Information and Computation in 1987 in response to the shifted focus of the journal towards theory of computation and away from control theory. In 2020, Albert Meyer was succeeded by David Peleg as editor-in-chief of the journal. == Indexing == All articles from the Information and Computation journal can be viewed on indexing services like Scopus and Science Citation Index. They are also reviewed cover-to-cover by the AMS Mathematical Reviews and zbMATH and included in the computer science database DBLP. According to the Journal Citation Reports, Information and Computation has a 2021 impact factor of 1.24. == Landmark publications == === On certain formal properties of grammars === Chomsky, N. (1959). "On certain formal properties of grammars". Information and Control. 2 (2): 137–167. doi:10.1016/S0019-9958(59)90362-6. Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages. === A formal theory of inductive inference === Solomonoff, R.J. (1964). "A formal theory of inductive inference. Part II". Information and Control. 7 (2): 224–254. doi:10.1016/s0019-9958(64)90131-7. ISSN 0019-9958. 
Description: This was the beginning of algorithmic information theory and Kolmogorov complexity. Note that though Kolmogorov complexity is named after Andrey Kolmogorov, he said that the seeds of that idea are due to Ray Solomonoff. Andrey Kolmogorov contributed a lot to this area but in later articles. === Fuzzy sets === Zadeh, L.A. (1965). "Fuzzy sets". Information and Control. 8 (3): 338–353. doi:10.1016/s0019-9958(65)90241-x. ISSN 0019-9958. Description: The seminal paper published in 1965 provides details on the mathematics of fuzzy set theory. As of July 2022, it is the most cited paper published in the journal. === On the translation of languages from left to right === Knuth, D. E. (July 1965). "On the translation of languages from left to right". Information and Control. 8 (6): 607–639. doi:10.1016/S0019-9958(65)90426-2. Description: LR parser, which does bottom up parsing for deterministic context-free languages. Later derived parsers, such as the LALR parser, have been and continue to be standard practice, such as in Yacc and descendants. === Language identification in the limit === Gold, E Mark (1967). "Language identification in the limit". Information and Control. 10 (5): 447–474. doi:10.1016/s0019-9958(67)91165-5. ISSN 0019-9958. Description: This paper created algorithmic learning theory. As of July 2022, it is the second most cited paper published in the journal. === A Calculus of Mobile Processes, I === Milner, Robin; Parrow, Joachim; Walker, David (1992-09-01). "A calculus of mobile processes, I". Information and Computation. 100 (1): 1–40. doi:10.1016/0890-5401(92)90008-4. hdl:20.500.11820/cdd6d766-14a5-4c3e-8956-a9792bb2c6d3. ISSN 0890-5401. Description: This paper first introduced the π-calculus. As of July 2022, it is the third most cited paper published in the journal and the most cited paper published since the journal assumed its current name. == References == == External links == Official website
Wikipedia/Information_and_Control
In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory. == Uses == Formal methods can be applied at various points through the development process. === Specification === Formal methods may be used to give a formal description of the system to be developed, at whatever level of detail desired. Further formal methods may depend on this specification to synthesize a program or to verify the correctness of a system. Alternatively, specification may be the only stage in which formal methods are used. By writing a specification, ambiguities in the informal requirements can be discovered and resolved. Additionally, engineers can use a formal specification as a reference to guide their development processes. The need for formal specification systems has been noted for years. In the ALGOL 58 report, John Backus presented a formal notation for describing programming language syntax, later named Backus normal form and then renamed Backus–Naur form (BNF). Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs was not completed in time for inclusion in the report, stating that it "will be included in a subsequent paper." However, no paper describing the formal semantics was ever released. === Synthesis === Program synthesis is the process of automatically creating a program that conforms to a specification.
Deductive synthesis approaches rely on a complete formal specification of the program, whereas inductive approaches infer the specification from examples. Synthesizers perform a search over the space of possible programs to find a program consistent with the specification. Because of the size of this search space, developing efficient search algorithms is one of the major challenges in program synthesis. === Verification === Formal verification is the use of software tools to prove properties of a formal specification, or to prove that a formal model of a system implementation satisfies its specification. Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification, and by inference, properties of the system implementation. ==== Sign-off verification ==== Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can replace traditional verification methods (the tool may even be certified). ==== Human-directed proof ==== Sometimes, the motivation for proving the correctness of a system is not the obvious need for reassurance of the correctness of the system, but a desire to understand the system better. Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good" proof is one that is readable and understandable by other human readers. Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise. 
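The enumerative search described under Synthesis can be sketched as a toy illustration (not any particular synthesizer): candidate arithmetic expressions over a variable `x` are enumerated bottom-up and checked against input–output examples standing in for the specification.

```python
import itertools

def synthesize(examples, max_depth=3):
    """Enumerate arithmetic expressions over x, smallest first, until
    one is consistent with every (input, output) example."""
    terms = ["x", "1", "2"]
    ops = ["+", "-", "*"]
    candidates = list(terms)
    for _ in range(max_depth):
        # Grow the search space by one operator application per round.
        candidates += ["(%s %s %s)" % (left, op, right)
                       for left, right in itertools.product(candidates, terms)
                       for op in ops]
        for expr in candidates:
            if all(eval(expr, {"x": i}) == out for i, out in examples):
                return expr
    return None

# Specification given by examples: f(1)=3, f(2)=5, f(3)=7 (i.e. 2x+1)
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # e.g. "((x + x) + 1)"
```

Real synthesizers prune this space aggressively; here the search is fully exhaustive, which is why `max_depth` must stay small.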
==== Automated proof ==== In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into three general categories: Automated theorem proving, in which a system attempts to produce a formal proof from scratch, given a description of the system, a set of logical axioms, and a set of inference rules. Model checking, in which a system verifies certain properties by means of an exhaustive search of all possible states that a system could enter during its execution. Abstract interpretation, in which a system verifies an over-approximation of a behavioural property of the program, using a fixpoint computation over a (possibly complete) lattice representing it. Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in checking millions of uninteresting states if not given a sufficiently abstract model. Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners. Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the program that aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each step in their proof, making it possible to perform, given suitable tools, independent verification. The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. 
no false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain representing the property to be analyzed, and by applying widening operators to get fast convergence. == Techniques == Formal methods includes a number of different techniques. === Specification languages === The design of a computing system can be expressed using a specification language, which is a formal language that includes a proof system. Using this proof system, formal verification tools can reason about the specification and establish that a system adheres to the specification. === Binary decision diagrams === A binary decision diagram is a data structure that represents a Boolean function. If a Boolean formula P {\displaystyle {\mathcal {P}}} expresses that an execution of a program conforms to the specification, a binary decision diagram can be used to determine if P {\displaystyle {\mathcal {P}}} is a tautology; that is, it always evaluates to TRUE. If this is the case, then the program always conforms to the specification. === SAT solvers === A SAT solver is a program that can solve the Boolean satisfiability problem, the problem of finding an assignment of variables that makes a given propositional formula evaluate to true. If a Boolean formula P {\displaystyle {\mathcal {P}}} expresses that a specific execution of a program conforms to the specification, then determining that ¬ P {\displaystyle \neg {\mathcal {P}}} is unsatisfiable is equivalent to determining that all executions conform to the specification. SAT solvers are often used in bounded model checking, but can also be used in unbounded model checking. == Applications == Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, security applications, and operating system microkernels such as seL4. 
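The SAT-based check described above can be made concrete with a brute-force satisfiability test (a sketch only; production SAT solvers use far better algorithms such as conflict-driven clause learning). The formula `P` below is an invented toy conformance property, not taken from the text:

```python
from itertools import product

def satisfiable(formula, variables):
    """Decide satisfiability by enumerating all 2^n truth assignments."""
    return any(formula(**dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

# P: "if the lock is held, the resource is not free" (toy property)
P = lambda held, free: (not held) or (not free)
not_P = lambda held, free: not P(held, free)

# All executions conform iff ¬P is unsatisfiable; here ¬P is satisfiable,
# so the assignment held=True, free=True is a counterexample to P.
print(satisfiable(not_P, ["held", "free"]))  # True
```

Equivalently, `P` is a tautology exactly when `not_P` is unsatisfiable, which is the reduction both BDD-based and SAT-based verification exploit.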
There are several examples in which they have been used to verify the functionality of the hardware and software used in data centres. AMD used ACL2, a theorem prover, in the development of its x86 processors. Intel uses such methods to verify its hardware and firmware (permanent software programmed into a read-only memory). Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product. NASA has applied formal methods in several projects, such as the Next Generation Air Transportation System, Unmanned Aircraft System integration in the National Airspace System, and Airborne Coordinated Conflict Resolution and Detection (ACCoRD). The B-Method with Atelier B is used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics. Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. There are many areas of hardware where Intel has used formal methods to verify its products, such as parameterized verification of cache coherence protocols, Intel Core i7 processor execution engine validation (using theorem proving, BDDs, and symbolic evaluation), optimization of the Intel IA-64 architecture using the HOL Light theorem prover, and verification of a high-performance dual-port gigabit Ethernet controller with support for the PCI Express protocol and Intel Active Management Technology using Cadence tools. Similarly, IBM has used formal methods in the verification of power gates, registers, and functional verification of the IBM Power7 microprocessor.
== In software development == In software development, formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards such as DO-178C allow the use of formal methods through supplementation, and the Common Criteria mandates formal methods at the highest levels of categorization. For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, and the Z notation. In functional programming, property-based testing has allowed the mathematical specification and testing (if not exhaustive testing) of the expected behaviour of individual functions. The Object Constraint Language (and specializations such as the Java Modeling Language) has allowed object-oriented systems to be formally specified, if not necessarily formally verified. For concurrent software and systems, Petri nets, process algebra, and finite-state machines (which are based on automata theory; see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behaviour. Another approach to formal methods in software development is to write a specification in some form of logic—usually a variation of first-order logic—and then to directly execute the logic as though it were a program. The OWL language, based on description logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, as well as executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax.
A feature of systems that support bidirectional English–logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level. == Semi-formal methods == Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test case generators. Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design. They contend that the expressiveness of the languages involved, as well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, various lightweight formal methods, which emphasize partial specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include the Alloy object modelling notation, Denney's synthesis of some aspects of the Z notation with use case driven development, and the CSK VDM Tools. == Formal methods and notations == There are a variety of formal methods and notations available.
=== Specification languages === Abstract State Machines (ASMs) A Computational Logic for Applicative Common Lisp (ACL2) Actor model Alloy ANSI/ISO C Specification Language (ACSL) Autonomic System Specification Language (ASSL) B-Method CADP Common Algebraic Specification Language (CASL) Esterel Java Modeling Language (JML) Knowledge Based Software Assistant (KBSA) Lustre mCRL2 Perfect Developer Petri nets Predicative programming Process calculi CSP LOTOS π-calculus RAISE Rebeca Modeling Language SPARK Ada Specification and Description Language TLA+ USL VDM VDM-SL VDM++ Z notation === Model checkers === ESBMC MALPAS Software Static Analysis Toolset – an industrial-strength model checker used for formal proof of safety-critical systems PAT – a free model checker, simulator and refinement checker for concurrent systems and CSP extensions (e.g., shared variables, arrays, fairness) SPIN UPPAAL == Solvers and competitions == Many problems in formal methods are NP-hard, but can be solved in cases arising in practice. For example, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, but SAT solvers can solve a variety of large instances. There are "solvers" for a variety of problems that arise in formal methods, and there are many periodic competitions to evaluate the state-of-the-art in solving such problems. The SAT competition is a yearly competition that compares SAT solvers. SAT solvers are used in formal methods tools such as Alloy. CASC is a yearly competition of automated theorem provers. SMT-COMP is a yearly competition of SMT solvers, which are applied to formal verification. CHC-COMP is a yearly competition of solvers of constrained Horn clauses, which have applications to formal verification. QBFEVAL is a biennial competition of solvers for true quantified Boolean formulas, which have applications to model checking. SV-COMP is an annual competition for software verification tools. 
SyGuS-COMP is an annual competition for program synthesis tools. == Organizations == BCS-FACS Formal Methods Europe Z User Group == See also == Abstract interpretation Automated theorem proving Design by contract Formal methods people Formal science Formal specification Formal verification Formal system Methodism Methodology Model checking Scientific method Software engineering Specification language == References == == Further reading == == External links == Formal Methods Europe (FME) Formal Methods Wiki Formal methods from Foldoc Archival material Formal method keyword on Microsoft Academic Search via Archive.org Evidence on Formal Methods uses and impact on Industry supported by the DEPLOY Archived 2012-06-08 at the Wayback Machine project (EU FP7) in Archive.org
Wikipedia/Formal_method
In recursion theory, α recursion theory is a generalisation of recursion theory to subsets of admissible ordinals α {\displaystyle \alpha } . An admissible set is closed under Σ 1 ( L α ) {\displaystyle \Sigma _{1}(L_{\alpha })} functions, where L ξ {\displaystyle L_{\xi }} denotes a rank of Gödel's constructible hierarchy. α {\displaystyle \alpha } is an admissible ordinal if L α {\displaystyle L_{\alpha }} is a model of Kripke–Platek set theory. In what follows α {\displaystyle \alpha } is considered to be fixed. == Definitions == The objects of study in α {\displaystyle \alpha } recursion are subsets of α {\displaystyle \alpha } . These sets are said to have some properties: A set A ⊆ α {\displaystyle A\subseteq \alpha } is said to be α {\displaystyle \alpha } -recursively-enumerable if it is Σ 1 {\displaystyle \Sigma _{1}} definable over L α {\displaystyle L_{\alpha }} , possibly with parameters from L α {\displaystyle L_{\alpha }} in the definition. A is α {\displaystyle \alpha } -recursive if both A and α ∖ A {\displaystyle \alpha \setminus A} (its relative complement in α {\displaystyle \alpha } ) are α {\displaystyle \alpha } -recursively-enumerable. Note that α {\displaystyle \alpha } -recursive sets are members of L α + 1 {\displaystyle L_{\alpha +1}} by definition of L {\displaystyle L} . Members of L α {\displaystyle L_{\alpha }} are called α {\displaystyle \alpha } -finite and play a similar role to the finite numbers in classical recursion theory. Members of L α + 1 {\displaystyle L_{\alpha +1}} are called α {\displaystyle \alpha } -arithmetic.
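For orientation, the special case α = ω recovers classical recursion theory: L_ω is the set HF of hereditarily finite sets, the ω-finite sets correspond to finite objects, and Σ1-definability over L_ω coincides with recursive enumerability (a standard observation, made precise in the section on the relationship to analysis below):

```latex
A \subseteq \omega \ \text{is } \omega\text{-recursively-enumerable}
  \iff A \ \text{is } \Sigma_1\text{-definable over } L_\omega = \mathrm{HF}
  \iff A \ \text{is recursively enumerable};
\qquad
A \ \text{is } \omega\text{-recursive} \iff A \ \text{is recursive}.
```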
There are also some similar definitions for functions mapping α {\displaystyle \alpha } to α {\displaystyle \alpha } : A partial function from α {\displaystyle \alpha } to α {\displaystyle \alpha } is α {\displaystyle \alpha } -recursively-enumerable, or α {\displaystyle \alpha } -partial recursive, iff its graph is Σ 1 {\displaystyle \Sigma _{1}} -definable on ( L α , ∈ ) {\displaystyle (L_{\alpha },\in )} . A partial function from α {\displaystyle \alpha } to α {\displaystyle \alpha } is α {\displaystyle \alpha } -recursive iff its graph is Δ 1 {\displaystyle \Delta _{1}} -definable on ( L α , ∈ ) {\displaystyle (L_{\alpha },\in )} . Like in the case of classical recursion theory, any total α {\displaystyle \alpha } -recursively-enumerable function f : α → α {\displaystyle f:\alpha \rightarrow \alpha } is α {\displaystyle \alpha } -recursive. Additionally, a partial function from α {\displaystyle \alpha } to α {\displaystyle \alpha } is α {\displaystyle \alpha } -arithmetical iff there exists some n ∈ ω {\displaystyle n\in \omega } such that the function's graph is Σ n {\displaystyle \Sigma _{n}} -definable on ( L α , ∈ ) {\displaystyle (L_{\alpha },\in )} . Additional connections between recursion theory and α recursion theory can be drawn, although explicit definitions may not have yet been written to formalize them: The functions Δ 0 {\displaystyle \Delta _{0}} -definable in ( L α , ∈ ) {\displaystyle (L_{\alpha },\in )} play a role similar to those of the primitive recursive functions. We say R is a reduction procedure if it is α {\displaystyle \alpha } recursively enumerable and every member of R is of the form ⟨ H , J , K ⟩ {\displaystyle \langle H,J,K\rangle } where H, J, K are all α-finite. 
A is said to be α-recursive in B if there exist R 0 , R 1 {\displaystyle R_{0},R_{1}} reduction procedures such that: K ⊆ A ↔ ∃ H : ∃ J : [ ⟨ H , J , K ⟩ ∈ R 0 ∧ H ⊆ B ∧ J ⊆ α / B ] , {\displaystyle K\subseteq A\leftrightarrow \exists H:\exists J:[\langle H,J,K\rangle \in R_{0}\wedge H\subseteq B\wedge J\subseteq \alpha /B],} K ⊆ α / A ↔ ∃ H : ∃ J : [ ⟨ H , J , K ⟩ ∈ R 1 ∧ H ⊆ B ∧ J ⊆ α / B ] . {\displaystyle K\subseteq \alpha /A\leftrightarrow \exists H:\exists J:[\langle H,J,K\rangle \in R_{1}\wedge H\subseteq B\wedge J\subseteq \alpha /B].} If A is recursive in B this is written A ≤ α B {\displaystyle \scriptstyle A\leq _{\alpha }B} . By this definition A is recursive in ∅ {\displaystyle \scriptstyle \varnothing } (the empty set) if and only if A is recursive. However A being recursive in B is not equivalent to A being Σ 1 ( L α [ B ] ) {\displaystyle \Sigma _{1}(L_{\alpha }[B])} . We say A is regular if ∀ β ∈ α : A ∩ β ∈ L α {\displaystyle \forall \beta \in \alpha :A\cap \beta \in L_{\alpha }} or in other words if every initial portion of A is α-finite. == Work in α recursion == Shore's splitting theorem: Let A be α {\displaystyle \alpha } recursively enumerable and regular. There exist α {\displaystyle \alpha } recursively enumerable B 0 , B 1 {\displaystyle B_{0},B_{1}} such that A = B 0 ∪ B 1 ∧ B 0 ∩ B 1 = ∅ ∧ A ≰ α B i ( i < 2 ) . {\displaystyle A=B_{0}\cup B_{1}\wedge B_{0}\cap B_{1}=\varnothing \wedge A\not \leq _{\alpha }B_{i}(i<2).} Shore's density theorem: Let A, C be α-regular recursively enumerable sets such that A < α C {\displaystyle \scriptstyle A<_{\alpha }C} then there exists a regular α-recursively enumerable set B such that A < α B < α C {\displaystyle \scriptstyle A<_{\alpha }B<_{\alpha }C} . 
Barwise has proved that the sets Σ 1 {\displaystyle \Sigma _{1}} -definable on L α + {\displaystyle L_{\alpha ^{+}}} are exactly the sets Π 1 1 {\displaystyle \Pi _{1}^{1}} -definable on L α {\displaystyle L_{\alpha }} , where α + {\displaystyle \alpha ^{+}} denotes the next admissible ordinal above α {\displaystyle \alpha } , and Σ {\displaystyle \Sigma } is from the Levy hierarchy. There is a generalization of limit computability to partial α → α {\displaystyle \alpha \to \alpha } functions. A computational interpretation of α {\displaystyle \alpha } -recursion exists, using " α {\displaystyle \alpha } -Turing machines" with a two-symbol tape of length α {\displaystyle \alpha } , that at limit computation steps take the limit inferior of cell contents, state, and head position. For admissible α {\displaystyle \alpha } , a set A ⊆ α {\displaystyle A\subseteq \alpha } is α {\displaystyle \alpha } -recursive iff it is computable by an α {\displaystyle \alpha } -Turing machine, and A {\displaystyle A} is α {\displaystyle \alpha } -recursively-enumerable iff A {\displaystyle A} is the range of a function computable by an α {\displaystyle \alpha } -Turing machine. A problem in α-recursion theory which is open (as of 2019) is the embedding conjecture for admissible ordinals, which is whether for all admissible α {\displaystyle \alpha } , the automorphisms of the α {\displaystyle \alpha } -enumeration degrees embed into the automorphisms of the α {\displaystyle \alpha } -enumeration degrees. == Relationship to analysis == Some results in α {\displaystyle \alpha } -recursion can be translated into similar results about second-order arithmetic. This is because of the relationship L {\displaystyle L} has with the ramified analytic hierarchy, an analog of L {\displaystyle L} for the language of second-order arithmetic, that consists of sets of integers. 
In fact, when dealing with first-order logic only, the correspondence can be close enough that for some results on L ω = HF {\displaystyle L_{\omega }={\textrm {HF}}} , the arithmetical and Levy hierarchies can become interchangeable. For example, a set of natural numbers is definable by a Σ 1 0 {\displaystyle \Sigma _{1}^{0}} formula iff it's Σ 1 {\displaystyle \Sigma _{1}} -definable on L ω {\displaystyle L_{\omega }} , where Σ 1 {\displaystyle \Sigma _{1}} is a level of the Levy hierarchy. More generally, definability of a subset of ω over HF with a Σ n {\displaystyle \Sigma _{n}} formula coincides with its arithmetical definability using a Σ n 0 {\displaystyle \Sigma _{n}^{0}} formula. == References == Gerald Sacks, Higher recursion theory, Springer Verlag, 1990 https://projecteuclid.org/euclid.pl/1235422631 Robert Soare, Recursively Enumerable Sets and Degrees, Springer Verlag, 1987 https://projecteuclid.org/euclid.bams/1183541465 Keith J. Devlin, An introduction to the fine structure of the constructible hierarchy (p.38), North-Holland Publishing, 1974 J. Barwise, Admissible Sets and Structures. 1975 == Inline references ==
Wikipedia/Alpha_recursion_theory
The Network News Transfer Protocol (NNTP) is an application protocol used for transporting Usenet news articles (netnews) between news servers, and for reading and posting articles by end-user client applications. Brian Kantor of the University of California, San Diego, and Phil Lapsley of the University of California, Berkeley, wrote RFC 977, the specification for the Network News Transfer Protocol, in March 1986. Other contributors included Stan O. Barber from the Baylor College of Medicine and Erik Fair of Apple Computer. Usenet was originally designed based on the UUCP network, with most article transfers taking place over direct point-to-point telephone links between news servers, which were powerful time-sharing systems. Readers and posters logged into these computers and read the articles directly from the local disk. As local area networks and Internet participation proliferated, it became desirable to allow newsreaders to be run on personal computers connected to local networks. The resulting protocol was NNTP, which resembled the Simple Mail Transfer Protocol (SMTP) but was tailored for exchanging newsgroup articles. A newsreader, also known as a news client, is a software application that reads articles on Usenet, either directly from the news server's disks or via NNTP. The well-known TCP port 119 is reserved for NNTP. Well-known TCP port 433 (NNSP) may be used when doing a bulk transfer of articles from one server to another. When clients connect to a news server with Transport Layer Security (TLS), TCP port 563 is often used. This is sometimes referred to as NNTPS. Alternatively, a plain-text connection over port 119 may be changed to use TLS via the STARTTLS command. In October 2006, the IETF released RFC 3977, which updates NNTP and codifies many of the additions made over the years since RFC 977. At the same time, the IETF also released RFC 4642, which specifies the use of Transport Layer Security (TLS) via NNTP over STARTTLS.
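Like SMTP, NNTP is a line-oriented text protocol: every server reply begins with a three-digit status code, e.g. `200` for a posting-allowed greeting and `211` for a successful GROUP command (per RFC 3977, the 211 reply carries an estimated article count, low and high article numbers, and the group name). A minimal parsing sketch (the host name in the sample greeting is a placeholder):

```python
def parse_nntp_reply(line):
    """Split an NNTP reply line into its 3-digit status code and text."""
    code, _, text = line.rstrip("\r\n").partition(" ")
    if len(code) != 3 or not code.isdigit():
        raise ValueError("malformed NNTP reply: %r" % line)
    return int(code), text

def parse_group_reply(line):
    """Parse a 211 reply to GROUP: (estimated count, low, high, group name)."""
    code, text = parse_nntp_reply(line)
    if code != 211:
        raise ValueError("unexpected status %d" % code)
    count, low, high, name = text.split()[:4]
    return int(count), int(low), int(high), name

print(parse_nntp_reply("200 news.example.com ready (posting allowed)"))
print(parse_group_reply("211 1234 3000 4234 misc.test\r\n"))
```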
== Network News Reader Protocol == During an abortive attempt to update the NNTP standard in the early 1990s, a specialized form of NNTP intended specifically for use by clients, NNRP, was proposed. This protocol was never completed or fully implemented, but the name persisted in InterNetNews's (INN) nnrpd program. As a result, the subset of standard NNTP commands useful to clients is sometimes still referred to as "NNRP". == NNTP server software == Leafnode InterNetNews C News Apache James Synchronet yProxy DIABLO, a backbone news transit system, designed to replace INND on backbone machines. == See also == List of Usenet newsreaders == References == == External links == Kantor, Brian and Phil Lapsley. RFC 977 "Network News Transfer Protocol: A Proposed Standard for the Stream-Based Transmission of News." 1986. Horton, Mark, and R. Adams. RFC 1036 "Standard for Interchange of USENET Messages." 1987. Barber, Stan, et al. RFC 2980 "Common NNTP Extensions." 2000 IETF nntpext Working Group Feather, Clive. RFC 3977 "Network News Transfer Protocol (NNTP)." 2006 Murchison, K., J. Vinocur, and C. Newman. RFC 4642 "Using Transport Layer Security (TLS) with Network News Transfer Protocol (NNTP)" 2006
Wikipedia/Network_News_Transfer_Protocol
A passive optical network (PON) is a fiber-optic telecommunications network that uses only unpowered devices to carry signals, as opposed to electronic equipment. In practice, PONs are typically used for the last mile between Internet service providers (ISP) and their customers. In this use, a PON has a point-to-multipoint topology in which an ISP uses a single device to serve many end-user sites using a system such as 10G-PON or GPON. In this one-to-many topology, a single fiber serving many sites branches into multiple fibers through a passive splitter, and those fibers can each serve multiple sites through further splitters. The light from the ISP is divided through the splitters to reach all the customer sites, and light from the customer sites is combined into the single fiber. Many fiber ISPs prefer this system. == Components and characteristics == A passive optical network consists of an optical line terminal (OLT) at the service provider's central office (hub), passive (non-power-consuming) optical splitters, and a number of optical network units (ONUs) or optical network terminals (ONTs), which are near end users. There may be amplifiers between the OLT and the ONUs. Several fibers from an OLT can be carried in a single cable. A PON reduces the amount of fiber and central office equipment required compared with point-to-point architectures with dedicated connections for every user. A passive optical network is a form of fiber-optic access network. Bandwidth is commonly shared among users of a PON. In most cases, downstream signals are broadcast to all premises sharing a fiber. Encryption can prevent eavesdropping. Upstream signals are combined using a multiple access protocol, usually time-division multiple access (TDMA). == History == Passive optical networks were first proposed by British Telecommunications in 1987.
Two major standard groups, the Institute of Electrical and Electronics Engineers (IEEE) and the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), develop standards along with a number of other industry organizations. The Society of Cable Telecommunications Engineers (SCTE) also specified radio frequency over glass for carrying signals over a passive optical network. CableLabs has developed coherent PON (CPON), which runs at 100 Gbit/s symmetrically and supports split ratios of up to 1:512. Here, coherent refers to coherent detection, in which the receiver recovers both the amplitude and the phase of the optical signal. === FSAN and ITU === Starting in 1995, work on fiber to the home architectures was done by the Full Service Access Network (FSAN) working group, formed by major telecommunications service providers and system vendors. The International Telecommunication Union (ITU) did further work, and standardized on two generations of PON. The older ITU-T G.983 standard was based on Asynchronous Transfer Mode (ATM), and has therefore been referred to as APON (ATM PON). Further improvements to the original APON standard – as well as the gradual falling out of favor of ATM as a protocol – led to the full, final version of ITU-T G.983 being referred to more often as broadband PON, or BPON. A typical APON/BPON provides 622 megabits per second (Mbit/s) (OC-12) of downstream bandwidth and 155 Mbit/s (OC-3) of upstream traffic, although the standard accommodates higher rates. The ITU-T G.984 Gigabit-capable Passive Optical Networks (GPON, G-PON) standard, first defined in 2003, represented an increase, compared to BPON, in both the total bandwidth and bandwidth efficiency through the use of larger, variable-length packets. Again, the standards permit several choices of bit rate, but the industry has converged on 2.488 gigabits per second (Gbit/s) of downstream bandwidth, and 1.244 Gbit/s of upstream bandwidth.
GPON Encapsulation Method (GEM) allows very efficient packaging of user traffic with frame segmentation. By mid-2008, Verizon had installed over 800,000 lines. British Telecom, BSNL, Saudi Telecom Company, Etisalat, and AT&T were in advanced trials in Britain, India, Saudi Arabia, the UAE, and the US, respectively. GPON networks have now been deployed in numerous networks across the globe, and the trends indicate higher growth in GPON than other PON technologies. G.987 defined 10G-PON in 2010 with 10 Gbit/s downstream and 2.5 Gbit/s upstream – framing is "G-PON like" and designed to coexist with GPON devices on the same network. XGS-PON is a related technology that can deliver upstream and downstream (symmetrical) speeds of up to 10 Gbit/s (gigabits per second), first approved in 2016 as G.9807.1. Asymmetrical 50G-PON was approved by the ITU in September 2021, and symmetrical 50G-PON was approved in September 2022. The first trial of 50G-PON took place in 2024 in Turkey. 100G-PON and 200G-PON have been demonstrated. The first demonstration of 100G-PON in a live network was done in Australia in 2024. === Security === Developed in 2009 by Cable Manufacturing Business to meet SIPRNet requirements of the U.S. Air Force, secure passive optical network (SPON) integrates gigabit passive optical network (GPON) technology and protective distribution system (PDS). Changes to the NSTISSI 7003 requirements for PDS and the mandate by the US federal government for GREEN technologies allowed for the US federal government consideration of the two technologies as an alternative to active Ethernet and encryption devices. The chief information officer of the United States Department of the Army issued a directive to adopt the technology by fiscal year 2013. It is marketed to the US military by companies such as Telos Corporation.
GPON used in fiber to the x deployments may be vulnerable to denial-of-service attacks via optical signal injection, a problem that remains unresolved with current commercially available technologies. === IEEE === In 2004, the Ethernet PON (EPON or GEPON) standard 802.3ah-2004 was ratified as part of the Ethernet in the first mile project of the IEEE 802.3. EPON is a "short haul" network using Ethernet packets, fiber optic cables, and a single protocol layer. EPON also uses standard 802.3 Ethernet frames with symmetric 1 gigabit per second upstream and downstream rates. EPON is applicable for data-centric networks, as well as full-service voice, data and video networks. 10 Gbit/s EPON or 10G-EPON was ratified as amendment IEEE 802.3av to IEEE 802.3. 10G-EPON supports 10/1 Gbit/s. The downstream wavelength plan supports simultaneous operation of 10 Gbit/s on one wavelength and 1 Gbit/s on a separate wavelength, allowing IEEE 802.3av and IEEE 802.3ah to operate concurrently on the same PON. The upstream channel can support simultaneous operation of IEEE 802.3av and 1 Gbit/s 802.3ah on a single shared (1310 nm) channel. In 2014, there were over 40 million installed EPON ports, making it the most widely deployed PON technology globally. EPON is also the foundation for cable operators' business services as part of the DOCSIS Provisioning of EPON (DPoE) specifications. 10G EPON is fully compatible with other Ethernet standards and requires no conversion or encapsulation to connect to Ethernet-based networks on either the upstream or downstream end. This technology connects seamlessly with any type of IP-based or packetized communications, and, thanks to the ubiquity of Ethernet installations in homes, workplaces, and elsewhere, EPON is generally very inexpensive to implement.
== Network elements == A PON takes advantage of wavelength-division multiplexing (WDM), using one wavelength for downstream traffic and another for upstream traffic on a single mode fiber (ITU-T G.652). BPON, EPON, GEPON, and GPON have the same basic wavelength plan and use the 1490 nanometer (nm) wavelength for downstream traffic and the 1310 nm wavelength for upstream traffic. 1550 nm is reserved for optional overlay services, typically RF (analog) video. As with bit rate, the standards describe several optical power budgets; the most common is 28 dB of loss budget for both BPON and GPON, but products have been announced using less expensive optics as well. 28 dB corresponds to about 20 km with a 32-way split. Forward error correction (FEC) may provide for another 2–3 dB of loss budget on GPON systems. As optics improve, the 28 dB budget will likely increase. Although both the GPON and EPON protocols permit large split ratios (up to 128 subscribers for GPON, up to 32,768 for EPON), in practice most PONs are deployed with a split ratio of 1:64, 1:32 or smaller. XGS-PON networks support split ratios of up to 1:128 and 50G-PON supports split ratios of at least 1:256 depending on the OLT. Splitters may be cascaded, such as in areas with a low population density and thus a low number of subscribers in a given area. This can also be done to facilitate reducing the number of subscribers in a PON in the future. Thus, PONs can have a tree network topology. In rural areas, remote OLTs with capacity for only a few users can be used. Splitters can be made with either planar lightwave circuit (PLC) or fused biconical taper (FBT) technologies: PLC creates optical waveguides in a flat substrate made of silica to split light, and FBT fuses optical fibers together to create a splitter.
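As an illustration of how such a loss budget is checked, the sketch below totals splitter, fiber, and connector losses for a candidate design. All per-component figures (0.35 dB/km fiber attenuation, 1 dB splitter excess loss, 0.75 dB per connector) are illustrative assumptions, not values taken from any standard:

```python
import math

def link_loss_db(split_ratio, fiber_km, atten_db_per_km=0.35,
                 splitter_excess_db=1.0, connector_db=0.75, n_connectors=2):
    """Estimate total ODN loss: ideal splitter loss (10*log10(N)) plus an
    assumed excess, fiber attenuation, and connector losses (all assumed)."""
    splitter_db = 10 * math.log10(split_ratio) + splitter_excess_db
    return splitter_db + fiber_km * atten_db_per_km + n_connectors * connector_db

# A 1:32 split over 20 km against the common 28 dB budget:
loss = link_loss_db(32, 20)
print(f"{loss:.1f} dB -", "within" if loss <= 28 else "exceeds", "the 28 dB budget")
```

With these assumed figures, the 1:32 split over 20 km comes to roughly 24.6 dB, comfortably inside the 28 dB budget, which is consistent with the rule of thumb above that 28 dB supports about 20 km at a 32-way split.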
A PON consists of a central office node, called an optical line terminal (OLT), one or more user nodes, called optical network units (ONUs) or optical network terminals (ONTs), and the fibers and splitters between them, called the optical distribution network (ODN). "ONT" is an ITU-T term to describe a single-tenant ONU. In multiple-tenant units, the ONU may be bridged to a customer premises device within the individual dwelling unit using technologies such as Ethernet over twisted pair, G.hn (a high-speed ITU-T standard that can operate over any existing home wiring - power lines, phone lines and coaxial cables) or DSL. An ONU is a device that terminates the PON and presents customer service interfaces to the user. Some ONUs implement a separate subscriber unit to provide services such as telephony, Ethernet data, or video. An OLT provides the interface between a PON and a service provider's core network. Typical core-network interfaces include: IP traffic over Fast Ethernet, Gigabit Ethernet, or 10 Gigabit Ethernet; standard TDM interfaces such as SDH/SONET; ATM UNI at 155–622 Mbit/s. The ONT or ONU terminates the PON and presents the native service interfaces to the user. These services can include voice (plain old telephone service (POTS) or voice over IP (VoIP)), data (typically Ethernet or V.35), video, and/or telemetry (TTL, ECL, RS530, etc.). Often the ONU functions are separated into two parts: The ONU, which terminates the PON and presents a converged interface—such as DSL, coaxial cable, or multiservice Ethernet—toward the user; Network termination equipment (NTE), which receives the converged interface and outputs native service interfaces to the user, such as Ethernet and POTS. A PON is a shared network, in that the OLT sends a single stream of downstream traffic that is seen by all ONUs. Each ONU reads the content of only those packets that are addressed to it. Encryption is used to prevent eavesdropping on downstream traffic.
An OLT can have several ports, and each port can drive a single PON network with split ratios or splitting factors of around 1:32 or 1:64, meaning that for each port on the OLT, up to 32 or 64 ONUs at customer sites can be connected. Several PON standards can co-exist on the same ODN (optical distribution network) by using different wavelengths. == Upstream bandwidth allocation == The OLT is responsible for allocating upstream bandwidth to the ONUs. Because the optical distribution network (ODN) is shared, ONU upstream transmissions could collide if they were transmitted at random times. ONUs can lie at varying distances from the OLT, meaning that the transmission delay from each ONU is unique. The OLT measures delay and sets a register in each ONU via PLOAM (physical layer operations, administrations and maintenance) messages to equalize its delay with respect to all of the other ONUs on the PON. Once the delay of all ONUs has been set, the OLT transmits so-called grants to the individual ONUs. A grant is permission to use a defined interval of time for upstream transmission. The grant map is dynamically re-calculated every few milliseconds. The map allocates bandwidth to all ONUs, such that each ONU receives timely bandwidth for its service needs. Some services – POTS, for example – require essentially constant upstream bandwidth, and the OLT may provide a fixed bandwidth allocation to each such service that has been provisioned. DS1 and some classes of data service may also require constant upstream bit rate. But much data traffic, such as browsing web sites, is bursty and highly variable. Through dynamic bandwidth allocation (DBA), a PON can be oversubscribed for upstream traffic, according to the traffic engineering concepts of statistical multiplexing. (Downstream traffic can also be oversubscribed, in the same way that any LAN can be oversubscribed. 
The only special feature in the PON architecture for downstream oversubscription is the fact that the ONU must be able to accept completely arbitrary downstream time slots, both in time and in size.) In GPON there are two forms of DBA, status-reporting (SR) and non-status reporting (NSR). In NSR DBA, the OLT continuously allocates a small amount of extra bandwidth to each ONU. If the ONU has no traffic to send, it transmits idle frames during its excess allocation. If the OLT observes that a given ONU is not sending idle frames, it increases the bandwidth allocation to that ONU. Once the ONU's burst has been transferred, the OLT observes a large number of idle frames from the given ONU, and reduces its allocation accordingly. NSR DBA has the advantage that it imposes no requirements on the ONU, and the disadvantage that there is no way for the OLT to know how best to assign bandwidth across several ONUs that need more. In SR DBA, the OLT polls ONUs for their backlogs. A given ONU may have several so-called transmission containers (T-CONTs), each with its own priority or traffic class. The ONU reports each T-CONT separately to the OLT. The report message contains a logarithmic measure of the backlog in the T-CONT queue. By knowledge of the service level agreement for each T-CONT across the entire PON, as well as the size of each T-CONT's backlog, the OLT can optimize allocation of the spare bandwidth on the PON. EPON systems use a DBA mechanism equivalent to GPON's SR DBA solution. The OLT polls ONUs for their queue status and grants bandwidth using the MPCP GATE message, while ONUs report their status using the MPCP REPORT message. == Variants == === TDM-PON === APON/BPON, EPON and GPON have been widely deployed. In November 2014, EPON had approximately 40 million deployed ports and ranks first in deployments. As of 2015, GPON had a smaller market share, but is anticipated to reach $10.5 billion US dollars by 2020. 
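The status-reporting DBA scheme described above can be illustrated with a toy allocator that serves T-CONT backlog reports in priority order. The function, its parameters, and the byte-based accounting are hypothetical simplifications for illustration, not the algorithm from G.984 or any vendor implementation:

```python
def allocate_upstream(frame_capacity, reports):
    """Grant upstream capacity to T-CONTs in priority order (lower number =
    higher priority), capping each grant by the reported backlog. Toy model."""
    grants, remaining = {}, frame_capacity
    for tcont_id, _priority, backlog in sorted(reports, key=lambda r: r[1]):
        grants[tcont_id] = min(backlog, remaining)
        remaining -= grants[tcont_id]
    return grants

# A 10000-byte frame shared by three T-CONTs reporting backlogs:
print(allocate_upstream(10000, [("A", 0, 4000), ("B", 1, 8000), ("C", 2, 3000)]))
# → {'A': 4000, 'B': 6000, 'C': 0}
```

A real OLT would additionally honor per-T-CONT service level agreements, fixed allocations for constant-bit-rate services, and fairness across ONUs; the sketch only shows the core idea of granting against reported backlogs.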
For TDM-PON, a passive optical splitter is used in the optical distribution network. In the upstream direction, each ONU (optical network unit) or ONT (optical network terminal) transmits in bursts during an assigned time slot (multiplexed in the time domain). In this way, the OLT receives signals from only one ONU or ONT at any point in time. In the downstream direction, the OLT (usually) transmits continuously (or may transmit in bursts). ONUs or ONTs see their own data through the address labels embedded in the signal. XGS-PON is popular among fiber ISPs in the US. === DOCSIS Provisioning of EPON (DPoE) === Data Over Cable Service Interface Specification (DOCSIS) Provisioning of Ethernet Passive Optical Network, or DPoE, is a set of CableLabs specifications that implement the DOCSIS service layer interface on existing Ethernet PON (EPON, GEPON or 10G-EPON) media access control (MAC) and physical layer (PHY) standards. In short, it implements the DOCSIS Operations Administration Maintenance and Provisioning (OAMP) functionality on existing EPON equipment. It makes the EPON OLT look and act like a DOCSIS Cable Modem Termination System (CMTS) platform (which is called a DPoE System in DPoE terminology). In addition to offering the same IP service capabilities as a CMTS, DPoE supports Metro Ethernet Forum (MEF) 9 and 14 services for the delivery of Ethernet services for business customers. Comcast Xfinity and Charter Spectrum use 10G-EPON with DPoE in newly deployed areas, including new construction and rural expansion. === Radio frequency over glass === Radio frequency over glass (RFoG) is a type of passive optical network that transports RF signals that were formerly transported over copper (principally over a hybrid fiber-coaxial cable) over PON. In the forward direction RFoG is either a stand-alone P2MP system or an optical overlay for existing PON such as GEPON/EPON.
The overlay for RFoG is based on wavelength-division multiplexing (WDM)—the passive combination of wavelengths on a single strand of glass. Reverse RF support is provided by transporting the upstream or return RF onto a separate wavelength from the PON return wavelength. The Society of Cable Telecommunications Engineers (SCTE) Interface Practices Subcommittee (IPS) Work Group 5 is currently working on IPS 910 RF over Glass. RFoG offers backwards compatibility with existing RF modulation technology, but offers no additional bandwidth for RF-based services. Although not yet completed, the RFoG standard is actually a collection of standardized options which are not compatible with each other (they cannot be mixed on the same PON). Some of the standards may interoperate with other PONs, others may not. It offers a means to support RF technologies in locations where only fiber is available or where copper is not permitted or feasible. This technology is targeted towards cable TV operators and their existing HFC networks, but is also used by Verizon, Frontier Communications and Ziply Fiber to deliver pay TV services over fiber despite these companies never having owned or deployed an HFC network.
It is difficult to point to an unbiased list of WDM-PON vendors when there is no such unanimous definition. PONs provide higher bandwidth than traditional copper-based access networks. WDM-PON has better privacy and better scalability because each ONU receives only its own wavelength. Advantages: The MAC layer is simplified because the P2P connections between OLT and ONUs are realized in the wavelength domain, so no P2MP media access control is needed. In WDM-PON each wavelength can run at a different speed and protocol, so there is an easy pay-as-you-grow upgrade. Challenges: High initial set-up cost, driven by the cost of the WDM components. Temperature control is another challenge, because wavelengths tend to drift with environmental temperature. === TWDM-PON === Time- and wavelength-division multiplexed passive optical network (TWDM-PON) is a primary solution for the next-generation passive optical network stage 2 (NG-PON2), selected by the full service access network (FSAN) group in April 2012. TWDM-PON coexists with commercially deployed Gigabit PON (G-PON) and 10 Gigabit PON (XG-PON) systems. While G-PON, XG-PON, and XGS-PON only support one wavelength per direction, NG-PON supports 4 or 8 wavelengths per direction, and 10 Gbit/s per wavelength for up to 80 Gbit/s of downstream and upstream bandwidth. === Long-Reach Optical Access Networks === The concept of the Long-Reach Optical Access Network (LROAN) is to replace the optical/electrical/optical conversion that takes place at the local exchange with a continuous optical path that extends from the customer to the core of the network. Work by Davey and Payne at BT showed that significant cost savings could be made by reducing the electronic equipment and real-estate required at the local exchange or wire center. A proof of concept demonstrator showed that it was possible to serve 1024 users at 10 Gbit/s with 100 km reach.
This technology has sometimes been termed Long-Reach PON; however, many argue that the term PON is no longer applicable as, in most instances, only the distribution remains passive. == Enabling technologies == Due to the topology of PON, the transmission modes for downstream (that is, from OLT to ONU) and upstream (that is, from ONU to OLT) are different. For the downstream transmission, the OLT broadcasts the optical signal to all the ONUs in continuous mode (CM), that is, the downstream channel always carries an optical data signal. However, in the upstream channel, ONUs cannot transmit in CM. Doing so would result in all of the signals transmitted from the ONUs converging (with attenuation) into one fiber by the power splitter (serving as power coupler) and overlapping. To solve this problem, burst mode (BM) transmission is adopted for the upstream channel. A given ONU transmits an optical packet only when it has been allocated a time slot and it needs to transmit, and all the ONUs share the upstream channel in time-division multiplexing (TDM) mode. The phases of the BM optical packets received by the OLT differ from packet to packet, since the ONUs are not synchronized to transmit in the same phase and the distances between the OLT and individual ONUs vary. Since these distances are not uniform, the optical packets received by the OLT may also have different amplitudes. In order to compensate for the phase variation and amplitude variation in a short time (for example, within 40 ns for GPON), burst mode clock and data recovery (BM-CDR) and a burst mode amplifier (for example, a burst mode TIA) need to be employed, respectively. Furthermore, the BM transmission mode requires the transmitter to work in burst mode. Such a burst mode transmitter is able to turn on and off in a short time.
The above three kinds of circuitry in PON are quite different from their counterparts in the point-to-point continuous mode optical communication link. == Fiber to the premises == Passive optical networks do not use electrically powered components to split the signal. Instead, the signal is distributed using beam splitters. Each splitter typically splits the signal from a single fiber into 16, 32, or up to 256 fibers, depending on the manufacturer, and several splitters can be aggregated in a single cabinet. A beam splitter cannot provide any switching or buffering capabilities and does not use any power supply; the resulting connection is called a point-to-multipoint link. For such a connection, the optical network terminals on the customer's end must perform some special functions which would not otherwise be required. For example, due to the absence of switching, each signal leaving the central office must be broadcast to all users served by that splitter (including to those for whom the signal is not intended). It is therefore up to the optical network terminal to filter out any signals intended for other customers. In addition, since splitters have no buffering, each individual optical network terminal must be coordinated in a multiplexing scheme to prevent signals sent by customers from colliding with each other. Two types of multiplexing are possible for achieving this: wavelength-division multiplexing and time-division multiplexing. With wavelength-division multiplexing, each customer transmits their signal using a unique wavelength. With time-division multiplexing (TDM), the customers "take turns" transmitting information. TDM equipment has been on the market longest. Because there is no single definition of "WDM-PON" equipment, various vendors claim to have released the first WDM-PON equipment, but there is no consensus on which product was first to market.
Passive optical networks have both advantages and disadvantages over active networks. They avoid the complexities involved in keeping electronic equipment operating outdoors. They also allow for analog broadcasts, which can simplify the delivery of analog television. However, because each signal must be pushed out to everyone served by the splitter (rather than to just a single switching device), the central office must be equipped with a particularly powerful piece of transmitting equipment called an optical line terminal (OLT). In addition, because each customer's optical network terminal must transmit all the way to the central office (rather than to just the nearest switching device), reach extenders would be needed to achieve the distance from central office that is possible with outside plant based active optical networks. Optical distribution networks can also be designed in a point-to-point "homerun" topology where splitters and/or active networking are all located at the central office, allowing users to be patched into whichever network is required from the optical distribution frame. == Passive optical components == The drivers behind the modern passive optical network are high reliability, low cost, and passive functionality. Single-mode, passive optical components include branching devices such as Wavelength-Division Multiplexer/Demultiplexers (WDMs), isolators, circulators, and filters. These components are used in interoffice, loop feeder, Fiber In The Loop (FITL), Hybrid Fiber-Coaxial Cable (HFC), Synchronous Optical Network (SONET), and Synchronous Digital Hierarchy (SDH) systems; and other telecommunications networks employing optical communications systems that utilize Optical Fiber Amplifiers (OFAs) and Dense Wavelength-Division Multiplexer (DWDM) systems. Proposed requirements for these components were published in 2010 by Telcordia Technologies. 
The broad variety of passive optical components applications include multichannel transmission, distribution, optical taps for monitoring, pump combiners for fiber amplifiers, bit-rate limiters, optical connects, route diversity, polarization diversity, interferometers, and coherent communication. WDMs are optical components in which power is split or combined based on the wavelength composition of the optical signal. Dense Wavelength-Division Multiplexers (DWDMs) are optical components that split power over at least four wavelengths. Wavelength insensitive couplers are passive optical components in which power is split or combined independently of the wavelength composition of the optical signal. A given component may combine and divide optical signals simultaneously, as in bidirectional (duplex) transmission over a single fiber. Passive optical components are data format transparent, combining and dividing optical power in some predetermined ratio (coupling ratio) regardless of the information content of the signals. WDMs can be thought of as wavelength splitters and combiners. Wavelength insensitive couplers can be thought of as power splitters and combiners. An optical isolator is a two-port passive component that allows light (in a given wavelength range) to pass through with low attenuation in one direction, while isolating (providing a high attenuation for) light propagating in the reverse direction. Isolators are used as both integral and in-line components in laser diode modules and optical amplifiers, and to reduce noise caused by multi-path reflection in high-bitrate and analog transmission systems. An optical circulator operates in a similar way to an optical isolator, except that the reverse propagating lightwave is directed to a third port for output, instead of being lost. 
An optical circulator can be used for bidirectional transmission, as a type of branching component that distributes (and isolates) optical power among fibers, based on the direction of the lightwave propagation. A fiber optic filter is a component with two or more ports that provides wavelength-sensitive loss, isolation and/or return loss. Fiber optic filters are in-line, wavelength-selective components that allow a specific range of wavelengths to pass through (or be reflected) with low attenuation. == See also == 10G-PON Higher Speed PON Bandwidth guaranteed polling Broadband fiber to the x GPON (gigabit-capable passive optical network) Interleaved polling with adaptive cycle time NG-PON2 == References == == Further reading == == External links == Media related to Passive optical network at Wikimedia Commons How Fiber-to-the-home Broadband Works, including an explanation of Active Optical Networks (AON), at Howstuffworks.com.
The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize participating computers to within a few milliseconds of Coordinated Universal Time (UTC).: 3  It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client–server model, but can as easily be used in peer-to-peer relationships where both peers consider the other to be a potential time source.: 20  Implementations send and receive timestamps using the User Datagram Protocol (UDP) on port number 123.: 16  They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted. The current protocol is version 4 (NTPv4), which is backward compatible with version 3. == Clock synchronization algorithm == A typical NTP client regularly polls one or more NTP servers. The client must compute its time offset and round-trip delay. The time offset θ is the positive or negative difference in absolute time between the two clocks.
It is defined by

θ = ((t1 − t0) + (t2 − t3)) / 2,

and the round-trip delay δ by

δ = (t3 − t0) − (t2 − t1),

where t0 is the client's timestamp of the request packet transmission, t1 is the server's timestamp of the request packet reception, t2 is the server's timestamp of the response packet transmission and t3 is the client's timestamp of the response packet reception.: 19  To derive the expression for the offset, note that for the request packet,

t0 + θ + δ/2 = t1,

and for the response packet,

t3 + θ − δ/2 = t2.

Solving for θ yields the definition of the time offset. The values for θ and δ are passed through filters and subjected to statistical analysis ("mitigation"). Outliers are discarded and an estimate of time offset is derived from the best three remaining candidates. The clock frequency is then adjusted to reduce the offset gradually ("discipline"), creating a feedback loop.: 20  Accurate synchronization is achieved when both the incoming and outgoing routes between the client and the server have symmetrical nominal delay. If the routes do not have a common nominal delay, a systematic bias exists of half the difference between the forward and backward travel times. A number of approaches have been proposed to measure asymmetry, but among practical implementations only chrony seems to have one included. == History == In 1979, network time synchronization technology was used in what was possibly the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York. The technology was later described in the 1981 Internet Engineering Note (IEN) 173 and a public protocol was developed from it that was documented in RFC 778.
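The offset and delay formulas translate directly into code. The timestamps in this sketch are hypothetical, chosen so that the computation recovers a known 0.5 s offset and 0.2 s round trip:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Clock offset theta and round-trip delay delta from the four NTP
    timestamps: t0/t3 are client-clock times, t1/t2 are server-clock times."""
    theta = ((t1 - t0) + (t2 - t3)) / 2
    delta = (t3 - t0) - (t2 - t1)
    return theta, delta

# Hypothetical exchange: server clock 0.5 s ahead, 0.1 s one-way delay each way.
theta, delta = ntp_offset_delay(10.0, 10.6, 10.7, 10.3)
print(round(theta, 6), round(delta, 6))  # 0.5 0.2
```

Note that the recovered offset is exact only because the forward and return delays are equal here; with asymmetric delays, the result is biased by half the difference, as the text describes.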
The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years. Other related network tools were available both then and now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp messages and IP Timestamp option (RFC 781). More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a server for all the clients; and the Digital Time Synchronization Service (DTSS), which uses a hierarchy of servers similar to the NTP stratum model. In 1985, NTP version 0 (NTPv0) was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in RFC 958. Despite the relatively slow computers and networks available at the time, accuracy of better than 100 milliseconds was usually obtained on Atlantic spanning links, with accuracy of tens of milliseconds on Ethernet networks. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in RFC 1059. It drew on the experimental results and clock filter algorithm documented in RFC 956 and was the first version to describe the client–server and peer-to-peer modes. In 1991, the NTPv1 architecture, protocol and algorithms were brought to the attention of a wider engineering community with the publication of an article by David L. Mills in the IEEE Transactions on Communications. In 1989, RFC 1119 was published defining NTPv2 by means of a state machine, with pseudocode to describe its operation. It introduced a management protocol and cryptographic authentication scheme which have both survived into NTPv4, along with the bulk of the algorithm. 
However the design of NTPv2 was criticized for lacking formal correctness by the DTSS community, and the clock selection procedure was modified to incorporate Marzullo's algorithm for NTPv3 onwards. In 1992, RFC 1305 defined NTPv3. The RFC included an analysis of all sources of error, from the reference clock down to the final client, which enabled the calculation of a metric that helps choose the best server where several candidates appear to disagree. Broadcast mode was introduced. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required. In 2010, RFC 5905 was published containing a proposed specification for NTPv4. Following the retirement of Mills from the University of Delaware, the reference implementation is currently maintained as an open source project led by Harlan Stenn. On the IETF side, an ntp (Network Time Protocols) working group is in charge of reviewing proposed drafts. The protocol has significantly progressed since NTPv4. As of 2022, three RFC documents describing updates to the protocol have been published, not counting the numerous peripheral standards such as Network Time Security. Mills had mentioned plans for a "NTPv5" on his page, but one was never published. An unrelated draft termed "NTPv5" by M. Lichvar of chrony was initiated in 2020 and includes security, accuracy, and scaling changes.
The current version of SNTPv4 was merged into the main NTPv4 standard in 2010. SNTP is fully interoperable with NTP since it does not define a new protocol.: §14  However, the simple algorithms provide times of reduced accuracy and thus it is inadvisable to sync time from an SNTP source. == Clock strata == NTP uses a hierarchical, semi-layered system of time sources. Each level of this hierarchy is termed a stratum and is assigned a number starting with zero for the reference clock at the top. A server synchronized to a stratum n server runs at stratum n + 1. The number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of quality or reliability; it is common to find stratum 3 time sources that are higher quality than other stratum 2 time sources. A brief description of strata 0, 1, 2 and 3 is provided below. Stratum 0 These are high-precision timekeeping devices such as atomic clocks, GNSS (including GPS) or other radio clocks, or a PTP-synchronized clock. They generate a very accurate pulse per second signal that triggers an interrupt and timestamp on a connected computer. Stratum 0 devices are also known as reference clocks. NTP servers cannot advertise themselves as stratum 0. A stratum field set to 0 in an NTP packet indicates an unspecified stratum.: 21  Stratum 1 These are computers whose system time is synchronized to within a few microseconds of their attached stratum 0 devices. Stratum 1 servers may peer with other stratum 1 servers for sanity checking and backup. They are also referred to as primary time servers. Stratum 2 These are computers that are synchronized over a network to stratum 1 servers. Often a stratum 2 computer queries several stratum 1 servers. Stratum 2 computers may also peer with other stratum 2 computers to provide more stable and robust time for all devices in the peer group.
Stratum 3 These are computers that are synchronized to stratum 2 servers. They employ the same algorithms for peering and data sampling as stratum 2, and can themselves act as servers for stratum 4 computers, and so on. The upper limit for stratum is 15; stratum 16 is used to indicate that a device is unsynchronized. The NTP algorithms on each computer interact to construct a Bellman–Ford shortest-path spanning tree, to minimize the accumulated round-trip delay to the stratum 1 servers for all the clients.: 20  In addition to stratum, the protocol is able to identify the synchronization source for each server in terms of a reference identifier (refid). For servers on stratum 2 and below, the refid is an encoded form of the upstream time server's IP address. For IPv4, this is simply the 32-bit address; for IPv6, it would be the first 32 bits of the MD5 hash of the source address. Refids serve to detect and prevent timing loops to the first degree. The refid field is filled with status words in the case of kiss-o'-death (KoD) packets, which tell the client to stop sending requests so that the server can rest. Some examples are INIT (initialization), STEP (step time change), and RATE (client requesting too fast). The program output may additionally use codes not transmitted in the packet to indicate error, such as XFAC to indicate a network disconnection. The IANA maintains a registry for refid source names and KoD codes. Informal assignments can still appear. == Software implementations == === Reference implementation === The NTP reference implementation, along with the protocol, has been continuously developed for over 20 years. Backwards compatibility has been maintained as new features have been added. It contains several sensitive algorithms, especially to discipline the clock, that can misbehave when synchronized to servers that use different algorithms. The software has been ported to almost every computing platform, including personal computers. 
It runs as a daemon called ntpd under Unix or as a service under Windows. Reference clocks are supported and their offsets are filtered and analysed in the same way as remote servers, although they are usually polled more frequently.: 15–19  This implementation was audited in 2017, finding 14 potential security issues. === Windows Time === All Microsoft Windows versions since Windows 2000 include the Windows Time service (W32Time), which has the ability to synchronize the computer clock to an NTP server. W32Time was originally implemented for the purpose of the Kerberos version 5 authentication protocol, which required time to be within 5 minutes of the correct value to prevent replay attacks. The network time server in Windows 2000 Server (and Windows XP) does not implement NTP disciplined synchronization, only locally disciplined synchronization with NTP/SNTP correction. Beginning with Windows Server 2003 and Windows Vista, the NTP provider for W32Time became compatible with a significant subset of NTPv3. Microsoft states that W32Time cannot reliably maintain time synchronization with one second accuracy. If higher accuracy is desired, Microsoft recommends using a newer version of Windows or different NTP implementation. Beginning with Windows 10 version 1607 and Windows Server 2016, W32Time can be configured to reach time accuracy of 1 s, 50 ms or 1 ms under certain specified operating conditions. === OpenNTPD === In 2004, Henning Brauer of OpenBSD presented OpenNTPD, an NTPv3/SNTPv4 implementation with a focus on security and encompassing a privilege separated design. Whilst it is aimed more closely at the simpler generic needs of OpenBSD users, it also includes some protocol security improvements while still being compatible with existing NTP servers. The simpler code base sacrifices accuracy, deemed unnecessary in this use case. A portable version is available in Linux package repositories. 
=== NTPsec === NTPsec is a fork of the reference implementation that has been systematically security-hardened. The fork point was in June 2015 and was in response to a series of compromises in 2014. The first production release shipped in October 2017. Between removal of unsafe features, removal of support for obsolete hardware, and removal of support for obsolete Unix variants, NTPsec has been able to pare away 75% of the original codebase, making the remainder easier to audit. A 2017 audit of the code showed eight security issues, including two that were not present in the original reference implementation, but NTPsec did not suffer from eight other issues that remained in the reference implementation. === chrony === chrony is an independent NTP implementation mainly sponsored by Red Hat, which uses it as the default time program in its distributions. Being written from scratch, chrony has a simpler codebase allowing for better security and lower resource consumption. It does not, however, compromise on accuracy, instead syncing faster and better than the reference ntpd in many circumstances. It is versatile enough for ordinary computers, which may be unstable, go into sleep mode, or have only an intermittent connection to the Internet. It is also designed for virtual machines, a more unstable environment. chrony has been evaluated as "trustworthy", with only a few incidents. It is able to achieve improved precision on LAN connections, using hardware timestamping on the network adapter. Support for Network Time Security (NTS) was added in version 4.0. chrony is available under GNU General Public License version 2, was created by Richard Curnow in 1997 and is currently maintained by Miroslav Lichvar. === ntpd-rs === ntpd-rs is a security-focused implementation of the NTP protocol, founded by the Internet Security Research Group as part of their Prossimo initiative for the creation of memory-safe Internet infrastructure. 
ntpd-rs is implemented in the Rust programming language, which offers memory safety guarantees in addition to the real-time computing capabilities required for an NTP implementation. ntpd-rs is used in security-sensitive environments such as the Let's Encrypt non-profit Certificate Authority. Support for NTS is available. ntpd-rs is part of the "Pendulum" project, which also includes a Precision Time Protocol implementation, "statime". Both projects are available under the Apache and MIT software licenses. === Others === Ntimed was started by Poul-Henning Kamp of FreeBSD in 2014 and abandoned in 2015. The implementation was sponsored by the Linux Foundation. systemd-timesyncd is the SNTP client built into systemd. It has been used by Debian since the "bookworm" release and by the downstream Ubuntu. == Leap seconds == On the day of a leap second event, ntpd receives notification from either a configuration file, an attached reference clock, or a remote server. Although the NTP clock is actually halted during the event, because of the requirement that time must appear to be strictly increasing, any processes that query the system time cause it to increase by a tiny amount, preserving the order of events. If a negative leap second should ever become necessary, it would be deleted with the sequence 23:59:58, 00:00:00, skipping 23:59:59. An alternative implementation, called leap smearing, consists of introducing the leap second incrementally during a period of 24 hours, from noon to noon in UTC time. This implementation is used by Google (both internally and on their public NTP servers), Amazon AWS, and Facebook. chrony supports leap smear in its smoothtime and leapsecmode configurations, but such use is not to be mixed with a public NTP pool, as leap smear is non-standard and will throw off client calculations when smeared and non-smeared servers are mixed. 
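The noon-to-noon smear described above can be sketched numerically. The following is an illustrative model of a 24-hour linear smear only; the function names and the strictly linear ramp are assumptions for illustration, not chrony's or Google's exact algorithm.

```python
# Illustrative model of a 24-hour linear leap smear: instead of stepping
# the clock by one second, the correction is ramped smoothly from 0 to 1 s
# over the smear window (noon to noon UTC for a positive leap second).

SMEAR_LEN = 86_400.0  # 24 hours, in seconds


def smear_offset(t: float, smear_start: float) -> float:
    """Seconds of the inserted leap second already absorbed at Unix time t."""
    if t <= smear_start:
        return 0.0
    if t >= smear_start + SMEAR_LEN:
        return 1.0
    return (t - smear_start) / SMEAR_LEN


def smeared_clock(t: float, smear_start: float) -> float:
    """A smeared clock runs slightly slow, absorbing the extra second."""
    return t - smear_offset(t, smear_start)
```

Halfway through the window a smeared clock reads 0.5 s behind a non-smeared one, which is why mixing smeared and non-smeared servers in one pool throws off client calculations.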
== Security concerns == Because adjusting system time is generally a privileged operation, part or all of the NTP code has to be run with some privileges in order to support its core functionality. Only a few other security problems have been identified in the reference implementation of the NTP codebase, but those that appeared in 2009 were cause for significant concern. The protocol has been undergoing revision and review throughout its history. The codebase for the reference implementation has undergone security audits from several sources for several years. A stack buffer overflow exploit was discovered and patched in 2014. Apple was concerned enough about this vulnerability that it used its auto-update capability for the first time. On systems using the reference implementation, which runs with the root user's credentials, this could allow unlimited access. Some other implementations, such as OpenNTPD, which have a smaller code base and have adopted other mitigation measures such as privilege separation, are not subject to this flaw. A 2017 security audit of three NTP implementations, conducted on behalf of the Linux Foundation's Core Infrastructure Initiative, suggested that both NTP and NTPsec were more problematic than chrony from a security standpoint. NTP servers can be susceptible to man-in-the-middle attacks unless packets are cryptographically signed for authentication. The computational overhead involved can make this impractical on busy servers, particularly during denial-of-service attacks. NTP message spoofing from a man-in-the-middle attack can be used to alter clocks on client computers and allow a number of attacks based on bypassing of cryptographic key expiration. Some of the services affected by fake NTP messages identified are TLS, DNSSEC, various caching schemes (such as DNS cache), Border Gateway Protocol (BGP), Bitcoin and a number of persistent login schemes. NTP has been used in distributed denial-of-service attacks. 
A small query is sent to an NTP server with the return IP address spoofed to be the target address. Similar to the DNS amplification attack, the server responds with a much larger reply that allows an attacker to substantially increase the amount of data being sent to the target. To avoid participating in an attack, NTP server software can be upgraded or servers can be configured to ignore external queries. === Secure extensions === NTP itself includes support for authenticating servers to clients. NTPv3 supports a symmetric key mode, which is not useful against MITM. The public key system known as "autokey" in NTPv4, adapted from IPsec, offers useful authentication, but is not practical for a busy server. Autokey was also later found to suffer from several design flaws, with no correction published, save for a change in the message authentication code. Autokey should no longer be used. Network Time Security (NTS) is a secure version of NTPv4 with TLS and AEAD. The main improvement over previous attempts is that a separate "key establishment" server handles the heavy asymmetric cryptography, which needs to be done only once. If the server goes down, previous users would still be able to fetch time without fear of MITM. NTS is supported by several NTP servers including Cloudflare and Netnod. It can be enabled on chrony, NTPsec, and ntpd-rs. Microsoft also has an approach to authenticate NTPv3/SNTPv4 packets using a Windows domain identity, known as MS-SNTP. This system is implemented in the reference ntpd and chrony, using samba for the domain connection. == NTP packet header format == LI (Leap Indicator): 2 bits Warning of leap second insertion or deletion: 0 = no warning 1 = last minute has 61 seconds 2 = last minute has 59 seconds 3 = unknown (clock unsynchronized) VN (Version Number): 3 bits NTP version number, typically 4. 
Mode: 3 bits Association mode: 0 = reserved 1 = symmetric active 2 = symmetric passive 3 = client 4 = server 5 = broadcast 6 = control 7 = private Stratum: 8 bits Indicates the distance from the reference clock. 0 = invalid 1 = primary server 2–15 = secondary 16 = unsynchronized Poll: 8 bits Maximum interval between successive messages, in log₂(seconds). Typical range is 6 to 10. Precision: 8 bits Signed log₂(seconds) of system clock precision (e.g., –18 ≈ 1 microsecond). Root Delay: 32 bits Total round-trip delay to the reference clock, in NTP short format. Root Dispersion: 32 bits Total dispersion to the reference clock, in NTP short format. Reference ID: 32 bits Identifies the specific server or reference clock; interpretation depends on Stratum. Reference Timestamp: 64 bits Time when the system clock was last set or corrected, in NTP timestamp format. Origin Timestamp (org): 64 bits Time at the client when the request departed, in NTP timestamp format. Receive Timestamp (rec): 64 bits Time at the server when the request arrived, in NTP timestamp format. Transmit Timestamp (xmt): 64 bits Time at the server when the response left, in NTP timestamp format. Extension Field: variable Optional field(s) for NTP extensions (see RFC 5905, Section 7.5). Key Identifier: 32 bits Unsigned integer designating an MD5 key shared by the client and server. Message Digest (MD5): 128 bits MD5 hash covering the packet header and extension fields, used for authentication. === Timestamps === The 64-bit binary fixed-point timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for the fractional second, giving a time scale that rolls over every 2³² seconds (136 years) and a theoretical resolution of 2⁻³² seconds (233 picoseconds). NTP uses an epoch of January 1, 1900. Therefore, the first rollover occurs on February 7, 2036. NTPv4 introduces a 128-bit date format: 64 bits for the second and 64 bits for the fractional second. 
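The 64-bit fixed-point layout can be made concrete with a small era-0 conversion to and from Unix time; the 2,208,988,800-second constant is the span between the 1900 NTP epoch and the 1970 Unix epoch. This sketch ignores eras and is therefore only valid until the 2036 rollover.

```python
NTP_UNIX_DELTA = 2_208_988_800  # seconds from 1900-01-01 to 1970-01-01


def ntp_to_unix(ntp64: int) -> float:
    """Convert an era-0 64-bit NTP timestamp to Unix time in seconds."""
    seconds = ntp64 >> 32          # upper 32 bits: whole seconds since 1900
    fraction = ntp64 & 0xFFFF_FFFF  # lower 32 bits: fraction of a second
    return (seconds - NTP_UNIX_DELTA) + fraction / 2**32


def unix_to_ntp(unix: float) -> int:
    """Convert Unix time to an era-0 64-bit NTP timestamp."""
    whole = int(unix)
    seconds = whole + NTP_UNIX_DELTA
    fraction = int(round((unix - whole) * 2**32))
    return (seconds << 32) | fraction
```

The fraction field's granularity, 1/2³² s, is the 233-picosecond theoretical resolution quoted above.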
The most significant 32 bits of the 128-bit format are the Era Number, which resolves rollover ambiguity in most cases. According to Mills, "The 64-bit value for the fraction is enough to resolve the amount of time it takes a photon to pass an electron at the speed of light. The 64-bit second value is enough to provide unambiguous time representation until the universe goes dim." == See also == Allan variance – Measure of frequency stability in clocks and oscillators Clock network – Set of clocks synchronized to the same time International Atomic Time – Time standard based on atomic clocks IRIG timecode – Standard formats for transferring time information NITZ – Mechanism for time synchronisation on mobile devices NTP pool – Networked computers providing time synchronization Ntpdate – Software to synchronize computer time Precision Time Protocol – Network time synchronization protocol == Notes == == References == == Further reading == Definitions of Managed Objects for Network Time Protocol Version 4 (NTPv4). doi:10.17487/RFC5907. RFC 5907. Network Time Protocol (NTP) Server Option for DHCPv6. doi:10.17487/RFC5908. RFC 5908. == External links == Official website Official Stratum One Time Servers list IETF NTP working group Microsoft Windows accurate time guide and more Time and NTP paper NTP Survey 2005 Current NIST leap seconds file compatible with ntpd David L. Mills, A Brief History of NTP Time: Confessions of an Internet Timekeeper (PDF), retrieved 7 February 2021
Wikipedia/Network_Time_Protocol
The Intelligent Network (IN) is the standard network architecture specified in the ITU-T Q.1200 series recommendations. It is intended for fixed as well as mobile telecom networks. It allows operators to differentiate themselves by providing value-added services in addition to the standard telecom services such as PSTN, ISDN on fixed networks, and GSM services on mobile phones or other mobile devices. The intelligence is provided by network nodes on the service layer, distinct from the switching layer of the core network, as opposed to solutions based on intelligence in the core switches or equipment. The IN nodes are typically owned by telecommunications service providers such as a telephone company or mobile phone operator. IN is supported by the Signaling System #7 (SS7) protocol between network switching centers and other network nodes owned by network operators. == Examples of IN services == Televoting Call screening Local number portability Toll-free calls/Freephone Prepaid calling Account card calling Virtual private networks (such as family group calling) Centrex service (Virtual PBX) Private-number plans (with numbers remaining unpublished in directories) Universal Personal Telecommunications service (a universal personal telephone number) Mass-calling service Prefix free dialing from cellphones abroad Seamless MMS message access from abroad Reverse charging Home Area Discount Premium Rate calls Call distribution based on various criteria associated with the call Location-based routing Time-based routing Proportional call distribution (such as between two or more call centres or offices) Call queueing Call transfer == History and key concepts == The IN concepts, architecture and protocols were originally developed as standards by the ITU-T which is the standardization committee of the International Telecommunication Union; prior to this a number of telecommunications providers had proprietary implementations. 
The primary aim of the IN was to enhance the core telephony services offered by traditional telecommunications networks, which usually amounted to making and receiving voice calls, sometimes with call divert. This core would then provide a basis upon which operators could build services in addition to those already present on a standard telephone exchange. A complete description of the IN emerged in a set of ITU-T standards named Q.1210 to Q.1219, or Capability Set One (CS-1) as they became known. The standards defined a complete architecture including the architectural view, state machines, physical implementation and protocols. They were universally embraced by telecom suppliers and operators, although many variants were derived for use in different parts of the world (see Variants below). Following the success of CS-1, further enhancements followed in the form of CS-2. Although the standards were completed, they were not as widely implemented as CS-1, partly because of the increasing power of the variants, but also partly because they addressed issues which pushed traditional telephone exchanges to their limits. The major driver behind the development of the IN was the need for a more flexible way of adding sophisticated services to the existing network. Before the IN was developed, all new features and/or services had to be implemented directly in the core switch systems. This made for long release cycles as the software testing had to be extensive and thorough to prevent the network from failing. With the advent of the IN, most of these services (such as toll-free numbers and geographical number portability) were moved out of the core switch systems and into self-contained nodes, creating a modular and more secure network that allowed the service providers themselves to develop variations and value-added services to their networks without submitting a request to the core switch manufacturer and waiting for the long development process. 
The initial use of IN technology was for number translation services, e.g. when translating toll-free numbers to regular PSTN numbers; much more complex services have since been built on the IN, such as Custom Local Area Signaling Services (CLASS) and prepaid telephone calls. == SS7 architecture == The main concepts (functional view) surrounding IN services or architecture are connected with SS7 architecture: Service Switching Function (SSF) or Service Switching Point (SSP) is co-located with the telephone exchange, and acts as the trigger point for further services to be invoked during a call. The SSP implements the Basic Call State Machine (BCSM) which is a finite-state machine that represents an abstract view of a call from beginning to end (off hook, dialing, answer, no answer, busy, hang up, etc.). As each state is traversed, the exchange encounters Detection Points (DPs) at which the SSP may invoke a query to the SCP to wait for further instructions on how to proceed. This query is usually called a trigger. Trigger criteria are defined by the operator and might include the subscriber calling number or the dialed number. The SSF is responsible for controlling calls requiring value added services. Service Control Function (SCF) or Service Control Point (SCP) is a separate set of platforms that receive queries from the SSP. The SCP contains service logic which implements the behaviour desired by the operator, i.e., the services. During service logic processing, additional data required to process the call may be obtained from the SDF. The logic on the SCP is created using the SCE. Service Data Function (SDF) or Service Data Point (SDP) is a database that contains additional subscriber data, or other data required to process a call. For example, the subscriber's remaining prepaid credit may be stored in the SDF to be queried in real-time during the call. The SDF may be a separate platform or co-located with the SCP. 
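The detection-point mechanism described above can be sketched as a toy state walk: the SSP traverses call states, and where an operator-defined trigger fires it suspends processing and queries the SCP. Everything here, including the state names, the trigger table, and the translated number, is an invented illustration, not the Q.1214-defined BCSM or real INAP signalling.

```python
# Toy sketch of a Basic Call State Machine with detection points.
# All names and numbers are hypothetical, for illustration only.

CALL_STATES = ["off_hook", "collecting_digits", "analysing",
               "routing", "alerting", "active"]

# Operator-defined trigger criteria: state -> predicate on call data.
# Here: numbers beginning 0800 (freephone) trigger an SCP query.
TRIGGERS = {
    "analysing": lambda call: call["dialed"].startswith("0800"),
}


def scp_query(call: dict) -> dict:
    """Stand-in for the SSP-to-SCP dialogue: translate a freephone number."""
    return dict(call, dialed="15551234")


def run_call(call: dict) -> dict:
    """Walk the call through each state, invoking the SCP at trigger points."""
    for state in CALL_STATES:
        predicate = TRIGGERS.get(state)
        if predicate and predicate(call):
            call = scp_query(call)  # suspend processing, await instructions
    return call
```

Calls that match no trigger pass through the state machine untouched, which mirrors how the SSP only involves the SCP for calls requiring value-added services.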
Service Management Function (SMF) or Service Management Point (SMP) is a platform or cluster of platforms that operators use to monitor and manage the IN services. It contains the management database which stores the services' configuration, collects the statistics and alarms, and stores the Call Data Reports and Event Data Reports. Service Creation Environment (SCE) is the development environment used to create the services present on the SCP. Although the standards permit any type of environment, it is fairly rare to see low level languages like C used. Instead, proprietary graphical languages are used to enable telecom engineers to create services directly. The languages are usually of the fourth-generation type, and the engineer may use a graphical interface to build or change a service. Specialized Resource Function (SRF) or Intelligent Peripheral (IP) is a node which can connect to both the SSP and the SCP and deliver special resources into the call, mostly related to voice communication, for example to play voice announcements or collect DTMF tones from the user. == Protocols == The core elements described above use standard protocols to communicate with each other. The use of standard protocols allows different manufacturers to concentrate on different parts of the architecture and be confident that they will all work together in any combination. The interfaces between the SSP and the SCP are SS7 based and have similarities with TCP/IP protocols. The SS7 protocols implement much of the OSI seven-layer model. This means that the IN standards only had to define the application layer, which is called the Intelligent Networks Application Part or INAP. The INAP messages are encoded using ASN.1. The interface between the SCP and the SDP is defined in the standards to be an X.500 Directory Access Protocol or DAP. 
A more lightweight interface called LDAP, which is considerably simpler to implement, has emerged from the IETF, so many SCPs have implemented that instead. == Variants == The core CS-1 specifications were adopted and extended by other standards bodies. European flavours were developed by ETSI, American flavours were developed by ANSI, and Japanese variants also exist. The main reason for producing variants in each region was to ensure interoperability between equipment manufactured and deployed locally (for example, different versions of the underlying SS7 protocols exist between the regions). New functionality was also added, which meant that variants diverged from each other and from the main ITU-T standard. The biggest variant was called Customised Applications for Mobile networks Enhanced Logic, or CAMEL for short. This allowed for extensions to be made for the mobile phone environment, and allowed mobile phone operators to offer subscribers the same IN services while roaming as they receive in the home network. CAMEL has become a major standard in its own right and is currently maintained by 3GPP. The last major release of the standard was CAMEL phase 4. It is the only IN standard currently being actively worked on. Bellcore (subsequently Telcordia Technologies) developed the Advanced Intelligent Network (AIN) as the variant of Intelligent Network for North America, and performed the standardization of the AIN on behalf of the major US operators. The original goal of AIN was AIN 1.0, which was specified in the early 1990s (AIN Release 1, Bellcore SR-NWT-002247, 1993). AIN 1.0 proved technically infeasible to implement, which led to the definition of simplified AIN 0.1 and AIN 0.2 specifications. In North America, the Telcordia SR-3511 (originally known as TA-1129+) and GR-1129-CORE protocols serve to link switches with IN systems such as Service Control Points (SCPs) or Service Nodes. 
SR-3511 details a TCP/IP-based protocol which directly connects the SCP and Service Node. GR-1129-CORE provides generic requirements for an ISDN-based protocol which connects the SCP to the Service Node via the SSP. == Future == While activity in the development of IN standards has declined in recent years, there are many systems deployed across the world which use this technology. The architecture has proved to be not only stable, but also a continuing source of revenue, with new services added all the time. Manufacturers continue to support the equipment and obsolescence is not an issue. Nevertheless, new technologies and architectures have emerged, especially in the area of VoIP and SIP. More attention is being paid to the use of APIs in preference to protocols like INAP, and new standards have emerged in the form of JAIN and Parlay. From a technical viewpoint, the SCE began to move away from its proprietary graphical origins towards a Java application server environment. The meaning of "intelligent network" is evolving over time, largely driven by breakthroughs in computation and algorithms: from networks enhanced by more flexible algorithms and more advanced protocols, to networks designed using data-driven models, to AI-enabled networks. == See also == IP Multimedia Subsystem Service layer Value-added service == Notes == == References == Ambrosch, Wolf D.; Maher, Anthony; Sasscer, Barry, eds. (1989). The Intelligent Network. Berlin Heidelberg: Springer. ISBN 3-540-50897-X. Also known as the green book due to the cover. Faynberg, Igor (1997). The Intelligent Network Standards. New York: McGraw-Hill Professional Publishing. ISBN 0-07-021422-0. Magedanz, Thomas (1996). Intelligent Networks. London Bonn: Van Nostrand Reinhold Company. ISBN 1-85032-293-7. Anderson, John R. (2002-10-30). Intelligent Networks. London: IET. ISBN 0-85296-977-5. == External links == Tutorial on Intelligent Networks (archived 24 July 2011)
Wikipedia/Intelligent_Network
Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that typically support SNMP include cable modems, routers, network switches, servers, workstations, printers, and more. SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base (MIB), which describes the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications. Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. == Overview and basic concepts == In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent that reports information via SNMP to the manager. An SNMP-managed network consists of three key components: Managed devices Agent – software that runs on managed devices Network management station (NMS) – software that runs on the manager A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional (read and write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. 
Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, cable modems, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers. An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form. A network management station executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network. == Management information base == SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an extensible design that allows applications to define their own hierarchies. These hierarchies are described as a management information base (MIB). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OID). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0 (SMIv2, RFC 2578), a subset of ASN.1. == Protocol details == SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via User Datagram Protocol (UDP). The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 in the agent. The agent response is sent back to the source port on the manager. The manager receives notifications (Traps and InformRequests) on port 162. 
The agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162. SNMPv1 specifies five core protocol data units (PDUs). Two other PDUs, GetBulkRequest and InformRequest were added in SNMPv2 and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed as follows: The seven SNMP PDU types as identified by the PDU-type field are as follows: GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings (the value field is not used). Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned. SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with (current) new values for the variables is returned. GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with variable binding for the lexicographically next variable in the MIB. The entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request. GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest. An optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU specific non-repeaters and max-repetitions fields are used to control response behavior. GetBulkRequest was introduced in SNMPv2. 
Response Returns variable bindings and acknowledgement from agent to manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest and InformRequest. Error reporting is provided by error-status and error-index fields. Although it was used as a response to both gets and sets, this PDU was called GetResponse in SNMPv1. Trap Asynchronous notification from agent to manager. While in other SNMP communication the manager actively requests information from the agent, these are PDUs that are sent from the agent to the manager without being explicitly requested. SNMP Traps enable an agent to notify the management station of significant events by way of an unsolicited SNMP message. Trap PDUs include the current sysUpTime value, an OID identifying the type of trap and optional variable bindings. Destination addressing for traps is determined in an application-specific manner, typically through trap configuration variables in the MIB. The format of the trap message was changed in SNMPv2 and the PDU was renamed SNMPv2-Trap. InformRequest Acknowledged asynchronous notification. This PDU was introduced in SNMPv2 and was originally defined as manager-to-manager communication. Later implementations have loosened the original definition to allow agent-to-manager communications. Manager-to-manager notifications were already possible in SNMPv1 using a Trap, but as SNMP commonly runs over UDP where delivery is not assured and dropped packets are not reported, delivery of a Trap was not guaranteed. InformRequest fixes this as an acknowledgement is returned on receipt. RFC 1157 specifies that an SNMP implementation must accept a message of at least 484 bytes in length. In practice, SNMP implementations accept longer messages. If implemented correctly, an SNMP message is discarded if the decoding of the message fails and thus malformed SNMP requests are ignored. A successfully decoded SNMP request is then authenticated using the community string.
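The PDU layout described above can be made concrete by BER-encoding a minimal SNMPv1 GetRequest for sysDescr.0 with only the Python standard library. This is a hedged sketch: it supports only short-form lengths and small non-negative integers, which happens to suffice for this 40-byte message; a real encoder must also handle long-form lengths, negative integers, and the full ASN.1 type set.

```python
def tlv(tag, payload):
    # Short-form definite length only; enough for this small message.
    assert len(payload) < 128
    return bytes([tag, len(payload)]) + payload

def ber_int(value):
    # Minimal encoding for small non-negative INTEGERs.
    return tlv(0x02, value.to_bytes(value.bit_length() // 8 + 1, "big"))

def ber_oid(arcs):
    # The first two arcs share one byte (40*x + y); later arcs use
    # base-128 with a continuation bit on every byte except the last.
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return tlv(0x06, bytes(out))

def get_request(community, oid, request_id=1):
    varbind = tlv(0x30, ber_oid(oid) + tlv(0x05, b""))   # OID + NULL value
    pdu = tlv(0xA0, ber_int(request_id) + ber_int(0) + ber_int(0)
              + tlv(0x30, varbind))                      # GetRequest-PDU
    return tlv(0x30, ber_int(0) + tlv(0x04, community) + pdu)  # version 0 = v1

msg = get_request(b"public", (1, 3, 6, 1, 2, 1, 1, 1, 0))  # sysDescr.0
print(msg.hex())
```

Sent as a UDP datagram to port 161 with a community string the agent accepts, a message like this would be answered by a GetResponse carrying the variable's value.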
If the authentication fails, a trap is generated indicating an authentication failure and the message is dropped. SNMPv1 and SNMPv2c use communities to establish trust between managers and agents. Most agents support three community names, one each for read-only, read-write and trap. These three community strings control different types of activities. The read-only community applies to get requests. The read-write community string applies to set requests. The trap community string applies to receipt of traps. SNMPv3 also uses community strings, but allows for secure authentication and communication between SNMP manager and agent. == Protocol versions == In practice, SNMP implementations often support multiple versions: typically SNMPv1, SNMPv2c, and SNMPv3. === Version 1 === SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. The design of SNMPv1 was done in the 1980s by a group of collaborators who viewed the officially sponsored OSI/IETF/NSF (National Science Foundation) effort (HEMS/CMIS/CMIP) as both unimplementable on the computing platforms of the time and potentially unworkable. SNMP was approved based on a belief that it was an interim protocol needed for taking steps towards large-scale deployment of the Internet and its commercialization.
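The community-based access check described above amounts to a small lookup. The sketch below is illustrative (the community names are invented placeholders; RFC 1157 leaves the strings themselves to the administrator):

```python
# Hedged sketch of SNMPv1/v2c community-based access control.
# Community names here are examples only.

COMMUNITIES = {
    "example-ro": "read-only",
    "example-rw": "read-write",
}

def authorize(community, operation):
    """Return True if `operation` ('get' or 'set') is allowed."""
    level = COMMUNITIES.get(community)
    if level is None:
        # A failed community check drops the message (and may raise
        # an authenticationFailure trap).
        return False
    if operation == "set":
        return level == "read-write"
    return True  # get/getnext/getbulk need only read access

print(authorize("example-ro", "get"), authorize("example-ro", "set"))  # True False
```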
The first Request for Comments (RFCs) for SNMP, now known as SNMPv1, appeared in 1988: RFC 1065 — Structure and identification of management information for TCP/IP-based internets RFC 1066 — Management information base for network management of TCP/IP-based internets RFC 1067 — A simple network management protocol In 1990, these documents were superseded by: RFC 1155 — Structure and identification of management information for TCP/IP-based internets RFC 1156 — Management information base for network management of TCP/IP-based internets RFC 1157 — A simple network management protocol In 1991, RFC 1156 (MIB-1) was replaced by the more often used: RFC 1213 — Version 2 of management information base (MIB-2) for network management of TCP/IP-based internets SNMPv1 is widely used and is the de facto network management protocol in the Internet community. SNMPv1 may be carried by transport layer protocols such as User Datagram Protocol (UDP), OSI Connectionless-mode Network Service (CLNS), AppleTalk Datagram Delivery Protocol (DDP), and Novell Internetwork Packet Exchange (IPX). Version 1 has been criticized for its poor security. The specification does, in fact, allow room for custom authentication to be used, but widely used implementations "support only a trivial authentication service that identifies all SNMP messages as authentic SNMP messages." The security of the messages, therefore, becomes dependent on the security of the channels over which the messages are sent. For example, an organization may consider their internal network to be sufficiently secure that no encryption is necessary for its SNMP messages. In such cases, the community name, which is transmitted in cleartext, tends to be viewed as a de facto password, in spite of the original specification. === Version 2 === SNMPv2, defined by RFC 1441 and RFC 1452, revises version 1 and includes improvements in the areas of performance, security and manager-to-manager communications. 
It introduced GetBulkRequest, an alternative to iterative GetNextRequests for retrieving large amounts of management data in a single request. The new party-based security system introduced in SNMPv2, viewed by many as overly complex, was not widely adopted. This version of SNMP reached the Proposed Standard level of maturity, but was deemed obsolete by later versions. Community-Based Simple Network Management Protocol version 2, or SNMPv2c, is defined in RFC 1901–RFC 1908. SNMPv2c comprises SNMPv2 without the controversial new SNMP v2 security model, using instead the simple community-based security scheme of SNMPv1. This version is one of relatively few standards to meet the IETF's Draft Standard maturity level, and was widely considered the de facto SNMPv2 standard. It was later restated as part of SNMPv3. User-Based Simple Network Management Protocol version 2, or SNMPv2u, is defined in RFC 1909–RFC 1910. This is a compromise that attempts to offer greater security than SNMPv1, but without incurring the high complexity of SNMPv2. A variant of this was commercialized as SNMP v2*, and the mechanism was eventually adopted as one of two security frameworks in SNMP v3. ==== 64-bit counters ==== SNMP version 2 introduces the option for 64-bit data counters. Version 1 was designed only with 32-bit counters, which can store integer values from zero to 4.29 billion (precisely 4294967295). A 32-bit version 1 counter cannot store the maximum speed of a 10 gigabit or larger interface, expressed in bits per second. Similarly, a 32-bit counter tracking statistics for a 10 gigabit or larger interface can roll over back to zero again in less than one minute, which may be a shorter time interval than a counter is polled to read its current state. This would result in lost or invalid data due to the undetected value rollover, and corruption of trend-tracking data. 
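The rollover arithmetic can be checked directly. As a hedged back-of-the-envelope sketch, take the worst case of a counter that increments once per bit transferred:

```python
def rollover_seconds(counter_bits, increments_per_second):
    """Seconds until an unsigned counter of the given width wraps."""
    return 2 ** counter_bits / increments_per_second

# 32-bit counter driven by a 10 Gbit/s interface: wraps in well under
# a minute, so a typical 1- or 5-minute polling interval misses wraps.
t32 = rollover_seconds(32, 10e9)

# 64-bit counter driven at 1.6 Tbit/s: wraps only after ~133 days.
t64 = rollover_seconds(64, 1.6e12)

print(f"32-bit @ 10 Gb/s : {t32:.2f} s")
print(f"64-bit @ 1.6 Tb/s: {t64 / 86400:.0f} days")
```

Octet counters such as ifInOctets increment once per byte rather than per bit, so they wrap eight times more slowly, but at these speeds a 32-bit octet counter still wraps within seconds.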
The 64-bit version 2 counter can store values from zero to 18.4 quintillion (precisely 18,446,744,073,709,551,615) and so is currently unlikely to experience a counter rollover between polling events. For example, 1.6 terabit Ethernet is predicted to become available by 2025. A 64-bit counter incrementing at a rate of 1.6 trillion bits per second would be able to retain information for such an interface without rolling over for 133 days. === SNMPv1 and SNMPv2c interoperability === SNMPv2c is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2c messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2c also uses two protocol operations that are not specified in SNMPv1. To overcome incompatibility, RFC 3584 defines two SNMPv1/v2c coexistence strategies: proxy agents and bilingual network-management systems. ==== Proxy agents ==== An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1-managed devices. When an SNMPv2 NMS issues a command intended for an SNMPv1 agent it sends it to the SNMPv2 proxy agent instead. The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged. GetBulk messages are converted by the proxy agent to GetNext messages and then are forwarded to the SNMPv1 agent. Additionally, the proxy agent receives and maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the NMS. ==== Bilingual network-management system ==== Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this dual-management environment, a management application examines information stored in a local database to determine whether the agent supports SNMPv1 or SNMPv2. Based on the information in the database, the NMS communicates with the agent using the appropriate version of SNMP. 
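The proxy-agent conversion described above can be sketched as follows. This is a hedged illustration: `snmpv1_getnext` stands in for a hypothetical transport function, and the toy MIB is invented. RFC 3584 describes the real conversion rules; degrading GetBulk to a single GetNext over the same variable bindings loses GetBulk's efficiency but preserves a valid Response.

```python
# Sketch: an SNMPv2 proxy agent degrading GetBulk to GetNext for an
# SNMPv1 agent. non_repeaters / max_repetitions are deliberately
# ignored, since the v1 agent understands only one GetNext iteration.

def proxy_getbulk(varbinds, non_repeaters, max_repetitions, snmpv1_getnext):
    return snmpv1_getnext(varbinds)

# A toy SNMPv1 agent holding a two-variable MIB (values invented):
MIB = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): b"router",  # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 12345,      # sysUpTime.0
}

def toy_getnext(varbinds):
    ordered = sorted(MIB)
    out = []
    for oid in varbinds:
        nxt = next((o for o in ordered if o > oid), None)
        out.append((nxt, MIB.get(nxt)))
    return out

print(proxy_getbulk([(1, 3, 6, 1, 2, 1, 1)], 0, 10, toy_getnext))
```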
=== Version 3 === Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks very different due to new textual conventions, concepts, and terminology. The most visible change was to define a secure version of SNMP, by adding security and remote configuration enhancements to SNMP. The security aspect is addressed by offering both strong authentication and data encryption for privacy. For the administration aspect, SNMPv3 focuses on two parts, namely notification originators and proxy forwarders. The changes also facilitate remote configuration and administration of the SNMP entities, as well as addressing issues related to the large-scale deployment, accounting, and fault management. Features and enhancements included: Identification of SNMP entities to facilitate communication only between known SNMP entities – Each SNMP entity has an identifier called the SNMPEngineID, and SNMP communication is possible only if an SNMP entity knows the identity of its peer. Traps and Notifications are exceptions to this rule. Support for security models – A security model may define the security policy within an administrative domain or an intranet. SNMPv3 contains the specifications for a user-based security model (USM). Definition of security goals where the goals of message authentication service include protection against the following: Modification of Information – Protection against some unauthorized SNMP entity altering in-transit messages generated by an authorized principal. Masquerade – Protection against attempting management operations not authorized for some principal by assuming the identity of another principal that has the appropriate authorizations. Message stream modification – Protection against messages getting maliciously re-ordered, delayed, or replayed to affect unauthorized management operations. Disclosure – Protection against eavesdropping on the exchanges between SNMP engines. 
Specification for USM – USM consists of the general definition of the following communication mechanisms available: Communication without authentication and privacy (NoAuthNoPriv). Communication with authentication and without privacy (AuthNoPriv). Communication with authentication and privacy (AuthPriv). Definition of different authentication and privacy protocols – MD5, SHA and HMAC-SHA-2 authentication protocols and the CBC_DES and CFB_AES_128 privacy protocols are supported in the USM. Definition of a discovery procedure – To find the SNMPEngineID of an SNMP entity for a given transport address and transport endpoint address. Definition of the time synchronization procedure – To facilitate authenticated communication between the SNMP entities. Definition of the SNMP framework MIB – To facilitate remote configuration and administration of the SNMP entity. Definition of the USM MIBs – To facilitate remote configuration and administration of the security module. Definition of the view-based access control model (VACM) MIBs – To facilitate remote configuration and administration of the access control module. Security was one of the biggest weaknesses of SNMP until v3. Authentication in SNMP Versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and agent. Each SNMPv3 message contains security parameters that are encoded as an octet string. The meaning of these security parameters depends on the security model being used. The security approach in v3 targets: Confidentiality – Encryption of packets to prevent snooping by an unauthorized source. Integrity – Message integrity to ensure that a packet has not been tampered with while in transit, including an optional packet replay protection mechanism. Authentication – Verification that the message is from a valid source.
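The authentication side of USM can be sketched with the standard library. This is a hedged illustration: the 24-octet truncation shown matches the HMAC-192-SHA-256 protocol of RFC 7860, the key below is a placeholder rather than a real localized key, and in the actual protocol the msgAuthenticationParameters field is zero-filled before the MAC is computed and then overwritten with the result.

```python
import hmac
import hashlib

def auth_param(localized_key, whole_message, size=24):
    """Compute a truncated HMAC over an outgoing SNMPv3 message."""
    mac = hmac.new(localized_key, whole_message, hashlib.sha256).digest()
    return mac[:size]

def verify(localized_key, whole_message, received_tag):
    """Receiver side: recompute and compare in constant time."""
    expected = auth_param(localized_key, whole_message, len(received_tag))
    return hmac.compare_digest(expected, received_tag)

key = b"\x00" * 32                       # placeholder localized key
tag = auth_param(key, b"example SNMPv3 message")
print(len(tag), verify(key, b"example SNMPv3 message", tag))   # 24 True
```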
v3 also defines the USM and VACM, which were later followed by a transport security model (TSM) that provided support for SNMPv3 over SSH and SNMPv3 over TLS and DTLS. USM (User-based Security Model) provides authentication and privacy (encryption) functions and operates at the message level. VACM (View-based Access Control Model) determines whether a given principal is allowed access to a particular MIB object to perform specific functions and operates at the PDU level. TSM (Transport Security Model) provides a method for authenticating and encrypting messages over external security channels. Two transports, SSH and TLS/DTLS, have been defined that make use of the TSM specification. As of 2004 the IETF recognizes Simple Network Management Protocol version 3 as defined by RFC 3411–RFC 3418 (also known as STD0062) as the current standard version of SNMP. The IETF has designated SNMPv3 a full Internet standard, the highest maturity level for an RFC. It considers earlier versions to be obsolete (designating them variously Historic or Obsolete). == Implementation issues == SNMP's powerful write capabilities, which would allow the configuration of network devices, are not being fully utilized by many vendors, partly because of a lack of security in SNMP versions before SNMPv3, and partly because many devices simply are not capable of being configured via individual MIB object changes. Some SNMP values (especially tabular values) require specific knowledge of table indexing schemes, and these index values are not necessarily consistent across platforms. This can cause correlation issues when fetching information from multiple devices that may not employ the same table indexing scheme (for example fetching disk utilization metrics, where a specific disk identifier is different across platforms.) Some major equipment vendors tend to over-extend their proprietary command line interface (CLI) centric configuration and control systems. 
In February 2002 the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC) issued an Advisory on SNMPv1, after the Oulu University Secure Programming Group conducted a thorough analysis of SNMP message handling. Most SNMP implementations, regardless of which version of the protocol they support, use the same program code for decoding protocol data units (PDU) and problems were identified in this code. Other problems were found with decoding SNMP trap messages received by the SNMP management station or requests received by the SNMP agent on the network device. Many vendors had to issue patches for their SNMP implementations. == Security implications == === Using SNMP to attack a network === Because SNMP is designed to allow administrators to monitor and configure network devices remotely, it can also be used to penetrate a network. A significant number of software tools can scan the entire network using SNMP, therefore mistakes in the configuration of the read-write mode can make a network susceptible to attacks. In 2001, Cisco released information that indicated that, even in read-only mode, the SNMP implementation of Cisco IOS is vulnerable to certain denial of service attacks. These security issues can be fixed through an IOS upgrade. If SNMP is not used in a network it should be disabled in network devices. When configuring SNMP read-only mode, close attention should be paid to the configuration of the access control and from which IP addresses SNMP messages are accepted. If the SNMP servers are identified by their IP addresses, SNMP is allowed to respond only to those addresses, and messages from other IP addresses are denied. However, IP address spoofing remains a security concern. === Authentication === SNMP is available in different versions, and each version has its own security issues. SNMP v1 sends passwords in plaintext over the network.
Therefore, passwords can be read with packet sniffing. SNMP v2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMP v1, but not necessarily SNMP v2 or v3. SNMP v2 was specifically developed to provide data security, that is, authentication, privacy and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMP v3 uses MD5, Secure Hash Algorithm (SHA) and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed the Data Encryption Standard (DES) can be optionally used in the cipher block chaining mode. SNMP v3 is implemented on Cisco IOS since release 12.0(3)T. SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication keys, or encryption keys, if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary. SNMPv3 allows both providing random uniformly distributed cryptographic keys and generating cryptographic keys from a password supplied by the user. The risk of guessing authentication strings from hash values transmitted over the network depends on the cryptographic hash function used and the length of the hash value. SNMPv3 uses the HMAC-SHA-2 authentication protocol for the User-based Security Model (USM). SNMP does not use a more secure challenge-handshake authentication protocol. SNMPv3 (like other SNMP protocol versions) is a stateless protocol, and it has been designed with a minimal amount of interactions between the agent and the manager. Thus introducing a challenge-response handshake for each command would impose a burden on the agent (and possibly on the network itself) that the protocol designers deemed excessive and unacceptable.
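The password-to-key derivation this discussion alludes to can be sketched from RFC 3414 (Appendix A): the password is cyclically repeated to one megabyte of hash input, which makes each guess marginally more expensive, and the result is then "localized" with the agent's engine ID so that one password yields different keys on different agents. This is a hedged illustration of the algorithm, not a vetted security implementation.

```python
import hashlib

ONE_MEGABYTE = 1024 * 1024

def password_to_key(password, engine_id, hash_name="md5"):
    """Derive a localized USM key from a password (RFC 3414, App. A)."""
    pw = password.encode()
    # Step 1: hash 1,048,576 bytes of the cyclically repeated password.
    stream = (pw * (ONE_MEGABYTE // len(pw) + 1))[:ONE_MEGABYTE]
    ku = hashlib.new(hash_name, stream).digest()
    # Step 2: localize with the authoritative engine ID:
    # Kul = H(Ku || engineID || Ku)
    return hashlib.new(hash_name, ku + engine_id + ku).digest()

key = password_to_key("maplesyrup", b"\x00" * 11 + b"\x02")
print(key.hex())
```

Because the only secret input is the password, short or dictionary passwords remain guessable, which is exactly the weakness described above.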
The security deficiencies of all SNMP versions can be mitigated by IPsec authentication and confidentiality mechanisms. SNMP also may be carried securely over Datagram Transport Layer Security (DTLS). Many SNMP implementations include a type of automatic discovery where a new network component, such as a switch or router, is discovered and polled automatically. In SNMPv1 and SNMPv2c this is done through a community string that is transmitted in clear-text to other devices. Clear-text passwords are a significant security risk. Once the community string is known outside the organization it could become the target for an attack. To alert administrators of other attempts to glean community strings, SNMP can be configured to pass community-name authentication failure traps. If SNMPv2 is used, the issue can be avoided by enabling password encryption on the SNMP agents of network devices. The common default community strings are "public" for read-only access and "private" for read-write. Because of the well-known defaults, SNMP topped the list of the SANS Institute's Common Default Configuration Issues and was number ten on the SANS Top 10 Most Critical Internet Security Threats for the year 2000. System and network administrators frequently do not change these configurations. Whether they run over TCP or UDP, SNMPv1 and SNMPv2 are vulnerable to IP spoofing attacks. With spoofing, attackers may bypass device access lists in agents that are implemented to restrict SNMP access. SNMPv3 security mechanisms such as USM or TSM can prevent spoofing attacks.
== See also == Agent Extensibility Protocol (AgentX) – Subagent protocol for SNMP Common Management Information Protocol (CMIP) – Management protocol by ISO/OSI used by telecommunications devices Common Management Information Service (CMIS) Comparison of network monitoring systems IEC 62379 – Control protocol based on Simple Network Management Protocol Net-SNMP – Open source reference implementation of SNMP NETCONF – XML-based configuration protocol for network equipment Remote Network Monitoring (RMON) Simple Gateway Monitoring Protocol (SGMP) – Obsolete protocol replaced by SNMP SNMP simulator – Software that simulates devices supporting SNMP == References == == Further reading == Douglas Mauro; Kevin Schmidt (2005). Essential SNMP (Second ed.). O'Reilly Media. ISBN 978-0596008406. William Stallings (1999). SNMP, SNMPv2, SNMPv3, and RMON 1 and 2. Addison Wesley Longman, Inc. ISBN 978-0201485349. Marshall T. Rose (1996). The Simple Book. Prentice Hall. ISBN 0-13-451659-1.
RFC 1155 (STD 16) — Structure and Identification of Management Information for the TCP/IP-based Internets RFC 1156 (Historic) — Management Information Base for Network Management of TCP/IP-based internets RFC 1157 (Historic) — A Simple Network Management Protocol (SNMP) RFC 1213 (STD 17) — Management Information Base for Network Management of TCP/IP-based internets: MIB-II RFC 1452 (Informational) — Coexistence between version 1 and version 2 of the Internet-standard Network Management Framework (Obsoleted by RFC 1908) RFC 1901 (Experimental) — Introduction to Community-based SNMPv2 RFC 1902 (Draft Standard) — Structure of Management Information for SNMPv2 (Obsoleted by RFC 2578) RFC 1908 (Standards Track) — Coexistence between Version 1 and Version 2 of the Internet-standard Network Management Framework RFC 2570 (Informational) — Introduction to Version 3 of the Internet-standard Network Management Framework (Obsoleted by RFC 3410) RFC 2578 (STD 58) — Structure of Management Information Version 2 (SMIv2) RFC 3410 (Informational) — Introduction and Applicability Statements for Internet Standard Management Framework STD 62 contains the following RFCs: RFC 3411 — An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks RFC 3412 — Message Processing and Dispatching for the Simple Network Management Protocol (SNMP) RFC 3413 — Simple Network Management Protocol (SNMP) Applications RFC 3414 — User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3) RFC 3415 — View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP) RFC 3416 — Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP) RFC 3417 — Transport Mappings for the Simple Network Management Protocol (SNMP) RFC 3418 — Management Information Base (MIB) for the Simple Network Management Protocol (SNMP) RFC 3430 (Experimental) — Simple Network Management Protocol (SNMP) over 
Transmission Control Protocol (TCP) Transport Mapping RFC 3584 (BCP 74) — Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework RFC 3826 (Proposed) — The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model RFC 4789 (Proposed) — Simple Network Management Protocol (SNMP) over IEEE 802 Networks RFC 5343 (STD 78) — Simple Network Management Protocol (SNMP) Context EngineID Discovery RFC 5590 (STD 78) — Transport Subsystem for the Simple Network Management Protocol (SNMP) RFC 5591 (STD 78) — Transport Security Model for the Simple Network Management Protocol (SNMP) RFC 5592 (Proposed) — Secure Shell Transport Model for the Simple Network Management Protocol (SNMP) RFC 5608 (Proposed) — Remote Authentication Dial-In User Service (RADIUS) Usage for Simple Network Management Protocol (SNMP) Transport Models. RFC 6353 (STD 78) — Transport Layer Security (TLS) Transport Model for the Simple Network Management Protocol (SNMP) RFC 7630 (Historic) — HMAC-SHA-2 Authentication Protocols in the User-based Security Model (USM) for SNMPv3 RFC 7860 (Proposed) — HMAC-SHA-2 Authentication Protocols in User-Based Security Model (USM) for SNMPv3 == External links ==
Wikipedia/Simple_Network_Management_Protocol
In IEEE 802 LAN/MAN standards, the medium access control (MAC), also called media access control, is the layer that controls the hardware responsible for interaction with the wired (electrical or optical) or wireless transmission medium. The MAC sublayer and the logical link control (LLC) sublayer together make up the data link layer. The LLC provides flow control and multiplexing for the logical link (e.g. EtherType, 802.1Q VLAN tag), while the MAC provides flow control and multiplexing for the transmission medium. These two sublayers together correspond to layer 2 of the OSI model. For compatibility reasons, LLC is optional for implementations of IEEE 802.3 (the frames are then "raw"), but compulsory for implementations of other IEEE 802 physical layer standards. Within the hierarchy of the OSI model and IEEE 802 standards, the MAC sublayer provides a control abstraction of the physical layer such that the complexities of physical link control are invisible to the LLC and upper layers of the network stack. Thus any LLC sublayer (and higher layers) may be used with any MAC. In turn, the medium access control block is formally connected to the PHY via a media-independent interface. Although the MAC block is today typically integrated with the PHY within the same device package, historically any MAC could be used with any PHY, independent of the transmission medium. When sending data to another device on the network, the MAC sublayer encapsulates higher-level frames into frames appropriate for the transmission medium (i.e. the MAC adds a syncword preamble and also padding if necessary), adds a frame check sequence to identify transmission errors, and then forwards the data to the physical layer as soon as the appropriate channel access method permits it. For topologies with a collision domain (bus, ring, mesh, point-to-multipoint topologies), controlling when data is sent and when to wait is necessary to avoid collisions.
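The frame-check-sequence step can be sketched with the standard library, since Ethernet's FCS uses the same CRC-32 polynomial that zlib implements. Framing and byte-order details are simplified here for illustration.

```python
import zlib

def append_fcs(frame):
    """Sender side: append a CRC-32 FCS (least-significant byte first)."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs):
    """Receiver side: recompute the CRC; a mismatch means a transmission error."""
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == fcs

wire = append_fcs(b"\x00\x1b\x44example payload")
corrupted = bytes([wire[0] ^ 0x01]) + wire[1:]   # single bit flip in transit
print(fcs_ok(wire), fcs_ok(corrupted))           # True False
```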
Additionally, the MAC is also responsible for compensating for collisions by initiating retransmission if a jam signal is detected. When receiving data from the physical layer, the MAC block ensures data integrity by verifying the sender's frame check sequences, and strips off the sender's preamble and padding before passing the data up to the higher layers. == Functions performed in the MAC sublayer == According to IEEE Std 802-2001 section 6.2.3 "MAC sublayer", the primary functions performed by the MAC layer are: Frame delimiting and recognition Addressing of destination stations (both as individual stations and as groups of stations) Conveyance of source-station addressing information Transparent data transfer of LLC PDUs, or of equivalent information in the Ethernet sublayer Protection against errors, generally by means of generating and checking frame check sequences Control of access to the physical transmission medium In the case of Ethernet, the functions required of a MAC are: receive/transmit normal frames half-duplex retransmission and backoff functions append/check FCS (frame check sequence) interframe gap enforcement discard malformed frames prepend(tx)/remove(rx) preamble, SFD (start frame delimiter), and padding half-duplex compatibility: append(tx)/remove(rx) MAC address == Addressing mechanism == The local network addresses used in IEEE 802 networks and FDDI networks are called MAC addresses; they are based on the addressing scheme that was used in early Ethernet implementations. A MAC address is intended as a unique serial number. MAC addresses are typically assigned to network interface hardware at the time of manufacture. The most significant part of the address identifies the manufacturer, who assigns the remainder of the address, thus providing a potentially unique address. 
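The structure of a 48-bit MAC address described above can be sketched directly: the first three octets carry the manufacturer's OUI, and two flag bits live in the first octet.

```python
# Illustrative sketch of 48-bit MAC address fields.

def describe_mac(mac):
    octets = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    first = octets[0]
    return {
        "oui": octets[:3].hex(":"),                   # manufacturer prefix
        "multicast": bool(first & 0x01),              # I/G (individual/group) bit
        "locally_administered": bool(first & 0x02),   # U/L (universal/local) bit
    }

# 01:00:5e:... is the IPv4 multicast mapping range, so the I/G bit is set:
print(describe_mac("01:00:5e:00:00:01"))
```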
This makes it possible for frames to be delivered on a network link that interconnects hosts by some combination of repeaters, hubs, bridges and switches, but not by network layer routers. Thus, for example, when an IP packet reaches its destination (sub)network, the destination IP address (a layer 3 or network layer concept) is resolved with the Address Resolution Protocol for IPv4, or by Neighbor Discovery Protocol (IPv6) into the MAC address (a layer 2 concept) of the destination host. Examples of physical networks are Ethernet networks and Wi-Fi networks, both of which are IEEE 802 networks and use IEEE 802 48-bit MAC addresses. A MAC layer is not required in full-duplex point-to-point communication, but address fields are included in some point-to-point protocols for compatibility reasons. == Channel access control mechanism == The channel access control mechanisms provided by the MAC layer are also known as a multiple access method. This makes it possible for several stations connected to the same physical medium to share it. Examples of shared physical media are bus networks, ring networks, hub networks, wireless networks and half-duplex point-to-point links. The multiple access method may detect or avoid data packet collisions if a packet mode contention based channel access method is used, or reserve resources to establish a logical channel if a circuit-switched or channelization-based channel access method is used. The channel access control mechanism relies on a physical layer multiplex scheme. The most widespread multiple access method is the contention-based CSMA/CD used in Ethernet networks. This mechanism is only utilized within a network collision domain, for example, an Ethernet bus network or a hub-based star topology network. An Ethernet network may be divided into several collision domains, interconnected by bridges and switches. 
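The collision handling in CSMA/CD mentioned above uses truncated binary exponential backoff, which can be sketched as follows (slot-time details and the physical carrier sensing are omitted):

```python
import random

# Sketch of IEEE 802.3 truncated binary exponential backoff: after the
# n-th successive collision a station waits a random number of slot
# times drawn from 0 .. 2**min(n, 10) - 1.

def backoff_slots(collisions):
    k = min(collisions, 10)           # the exponent is capped at 10
    return random.randrange(2 ** k)   # 0 .. 2**k - 1 slot times

# After 16 successive collisions the MAC gives up and reports an error;
# that abort path is not modeled here.
for n in (1, 2, 3, 10, 15):
    print(n, backoff_slots(n))
```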
A multiple access method is not required in a switched full-duplex network, such as today's switched Ethernet networks, but is often available in the equipment for compatibility reasons. === Channel access control mechanism for concurrent transmission === Use of directional antennas and millimeter-wave communication in a wireless personal area network increases the probability of concurrent scheduling of non‐interfering transmissions in a localized area, which results in an immense increase in network throughput. However, the optimum scheduling of concurrent transmission is an NP-hard problem. == Cellular networks == Cellular networks, such as GSM, UMTS or LTE networks, also use a MAC layer. The MAC protocol in cellular networks is designed to maximize the utilization of the expensive licensed spectrum. The air interface of a cellular network is at layers 1 and 2 of the OSI model; at layer 2, it is divided into multiple protocol layers. In UMTS and LTE, those protocols are the Packet Data Convergence Protocol (PDCP), the Radio Link Control (RLC) protocol, and the MAC protocol. The base station has absolute control over the air interface and schedules the downlink access as well as the uplink access of all devices. The MAC protocol is specified by 3GPP in TS 25.321 for UMTS, TS 36.321 for LTE and TS 38.321 for 5G. == See also == Isochronous media access controller List of channel access methods MAC-Forced Forwarding MACsec (IEEE 802.1AE) == References ==
Wikipedia/Medium_access_control
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes. In addition, a personal area network (PAN) is also by nature a decentralized peer-to-peer network, typically between two devices. Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources are divided. While P2P systems had previously been used in many application domains, the architecture was popularized by the Internet file sharing system Napster, originally released in 1999. P2P is used in many protocols such as BitTorrent file sharing over the Internet and in personal networks like Miracast displaying and Bluetooth radio. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. == Development == While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems". The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.
Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day, where two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing." Therefore, Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client-server relationship. In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".
== Architecture == A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol (FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests. === Routing and resource discovery === Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two). ==== Unstructured networks ==== Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols). Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. 
Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network. However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful. ==== Structured networks ==== In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file/resource, even if the resource is extremely rare. The most common type of structured P2P networks implement a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. 
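The (key, value) ownership rule described above can be illustrated with a toy consistent-hashing ring: each node hashes to a point on a circle, and a key is owned by the first node clockwise from the key's own hash. All names below are invented for the example, and real DHTs such as Chord or Kademlia use far larger identifier spaces and distributed routing tables instead of a global sorted list:

```python
import hashlib
from bisect import bisect_right

def ring_hash(s: str) -> int:
    # Map a string to a point on a 2^32 ring (an illustrative hash width).
    return int.from_bytes(hashlib.sha1(s.encode()).digest()[:4], "big")

class HashRing:
    """Toy consistent-hashing ring: each (key, value) pair is owned by the
    first node clockwise from the key's position on the ring."""

    def __init__(self, nodes):
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        # First node whose hash is >= the key's hash, wrapping around.
        i = bisect_right(self.points, (ring_hash(key),))
        return self.points[i % len(self.points)][1]
```

The property that matters under churn is locality of change: when a node leaves, only the keys it owned move to the next node clockwise, and every other assignment is untouched.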
This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network). More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions such as high cost of advertising/discovering resources and static and dynamic load imbalance. Notable distributed networks that use DHTs include Tixati, an alternative to BitTorrent's distributed tracker, the Kad network, the Storm botnet, and YaCy. Some prominent research projects include the Chord project, Kademlia, PAST storage utility, P-Grid, a self-organized and emerging overlay network, and CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery for grid computing systems, as they aid in resource management and scheduling of applications. ==== Hybrid models ==== Hybrid models are a combination of peer-to-peer and client–server models. A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks. ==== CoopNet content distribution system ==== CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University.
When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading load from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely to be in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are 'close in IP' (in the same prefix range) in an attempt to use locality. If multiple peers are found with the same file, the node is directed to choose the fastest of these neighbors. Streaming media is transmitted by having clients cache the previous stream, and then transmit it piece-wise to new nodes. === Security and trust === Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits. ==== Routing attacks ==== Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks. Examples of common routing attacks include "incorrect lookup routing" whereby malicious nodes deliberately forward requests incorrectly or return false results, "incorrect routing updates" where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information, and "incorrect routing network partition" where, when new nodes join, they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes. ==== Corrupted data and malware ==== The prevalence of malware varies between different peer-to-peer protocols.
Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of the 500,000-file sample taken were infected by one or more of the 365 different computer viruses that were tested for. Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase in their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts. === Resilient and scalable computer networks === The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources.
In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. === Distributed storage and search === There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. Because server-client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point. In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks.
In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems. == Applications == === Content delivery === In P2P networks, clients both provide and use resources. This means that unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor. === File-sharing networks === Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches to enhance performance. Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing of Linux distributions and various games through file sharing networks. ==== Copyright infringements ==== Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v.
Grokster, Ltd. In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement. === Multimedia === The P2PTV and PDTP protocols are used in various peer-to-peer applications. Some proprietary multimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients. Peercasting is employed for multicasting streams. Additionally, a project called LionShare, undertaken by Pennsylvania State University, MIT, and Simon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program, Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network. === Other P2P applications === Dat is a distributed version-controlled publishing platform. I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer; however, it can enable peer-to-peer applications to be built on top of it via onion services. The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system. Jami is a peer-to-peer chat and SIP app. JXTA is a peer-to-peer protocol designed for the Java platform. Netsukuku is a wireless community network designed to be independent from the Internet. Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth. Resilio Sync is a directory-syncing app. Research includes projects such as the Chord project, the PAST storage utility, the P-Grid, and the CoopNet content distribution system.
Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking. Syncthing is also a directory-syncing app. Tradepal and M-commerce applications are designed to power real-time marketplaces. The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks. WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges WebTorrent and BitTorrent serverless networks. Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs either on the local network or on other PCs. According to Microsoft's Channel 9, this led to a 30%-50% reduction in Internet bandwidth usage. Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously. Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of files and continues to operate today. Cryptocurrencies are peer-to-peer-based digital currencies that use blockchains. == Social implications == === Incentivizing resource sharing and cooperation === Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem").
Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance". Studying the social attributes of P2P networks is challenging due to large population turnover, asymmetry of interest and zero-cost identity. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources. Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction. ==== Privacy and anonymity ==== Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data/messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity. Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity. == Political implications == === Intellectual property law and illegal sharing === Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over the involvement with sharing copyrighted material.
Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both of the cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for the copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited use of copyrighted material without acquiring permission from the rights holders. These documents are usually news reporting or along the lines of research and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems. A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove effects of false and misremembered responses. === Network neutrality === Peer-to-peer applications present one of the core issues in the network neutrality controversy.
Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidth usage. Compared to Web browsing, e-mail or many other uses of the internet, where data is only transferred in short intervals and relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards a client–server-based application architecture. The client–server model provides financial barriers-to-entry to small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random. The ISP's solution to the high bandwidth is P2P caching, where an ISP stores the part of files most accessed by P2P clients in order to reduce the need for access to the wider Internet. == Current research == Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas.
An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work." If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments." Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe. In addition, work has been done with the ns-2 open-source network simulator. One research issue related to free-rider detection and punishment has been explored using the ns-2 simulator. == See also == == References == == External links ==
Wikipedia/P2P_network
In the IEEE 802 reference model of computer networking, the logical link control (LLC) data communication protocol layer is the upper sublayer of the data link layer (layer 2) of the seven-layer OSI model. The LLC sublayer acts as an interface between the medium access control (MAC) sublayer and the network layer. The LLC sublayer provides multiplexing mechanisms that make it possible for several network protocols (e.g. IP, IPX and DECnet) to coexist within a multipoint network and to be transported over the same network medium. It can also provide flow control and automatic repeat request (ARQ) error management mechanisms. == Operation == The LLC sublayer is primarily concerned with multiplexing protocols transmitted over the MAC layer (when transmitting) and demultiplexing them (when receiving). It can also provide node-to-node flow control and error management. The flow control and error management capabilities of the LLC sublayer are used by protocols such as the NetBIOS Frames protocol. However, most protocol stacks running atop 802.2 do not use LLC sublayer flow control and error management. In these cases flow control and error management are taken care of by a transport layer protocol such as TCP or by some application layer protocol. These higher layer protocols work in an end-to-end fashion, i.e. re-transmission is done from the original source to the final destination, rather than on individual physical segments. For these protocol stacks only the multiplexing capabilities of the LLC sublayer are used. == Application examples == === X.25 and LAPB === An LLC sublayer was a key component in early packet switching networks such as X.25 networks with the LAPB data link layer protocol, where flow control and error management were carried out in a node-to-node fashion, meaning that if an error was detected in a frame, the frame was retransmitted from one switch to the next instead. This extensive handshaking between the nodes made the networks slow.
=== Local area network === The IEEE 802.2 standard specifies the LLC sublayer for all IEEE 802 local area networks, such as IEEE 802.3/Ethernet (when Ethernet II frame format is not used), IEEE 802.5, and IEEE 802.11. IEEE 802.2 is also used in some non-IEEE 802 networks such as FDDI. ==== Ethernet ==== Since bit errors are very rare in wired networks, Ethernet does not provide flow control or automatic repeat request (ARQ), meaning that incorrect packets are detected but only discarded, not retransmitted (except in case of collisions detected by the CSMA/CD MAC layer protocol). Instead, retransmissions rely on higher-layer protocols. As the EtherType in an Ethernet frame using Ethernet II framing is used to multiplex different protocols on top of the Ethernet MAC header, it can be seen as an LLC identifier. However, Ethernet frames lacking an EtherType have no LLC identifier in the Ethernet header, and, instead, use an IEEE 802.2 LLC header after the Ethernet header to provide the protocol multiplexing function. ==== Wireless LAN ==== In wireless communications, bit errors are very common. In wireless networks such as IEEE 802.11, flow control and error management are part of the CSMA/CA MAC protocol, and not part of the LLC layer. The LLC sublayer follows the IEEE 802.2 standard. === HDLC === Some non-IEEE 802 protocols can be thought of as being split into MAC and LLC layers. For example, while HDLC specifies both MAC functions (framing of packets) and LLC functions (protocol multiplexing, flow control, detection, and error control through retransmission of dropped packets when indicated), some protocols such as Cisco HDLC can use HDLC-like packet framing and their own LLC protocol. === PPP and modems === Over telephone network modems, PPP link layer protocols can be considered as a LLC protocol, providing multiplexing, but it does not provide flow control and error management.
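The Ethernet multiplexing split described above (an EtherType in Ethernet II frames, an IEEE 802.2 LLC header otherwise) turns on the 16-bit type/length field that follows the two MAC addresses: values of 0x0600 and above are EtherTypes, while values up to 1500 are payload lengths. A minimal sketch, assuming a raw frame with no preamble or VLAN tag; the function name and return values are invented for illustration:

```python
ETHERTYPE_MIN = 0x0600  # type/length values >= 0x0600 denote an EtherType

def classify_frame(frame: bytes):
    """Classify a raw Ethernet frame as Ethernet II (EtherType multiplexing)
    or IEEE 802.3 with an 802.2 LLC header (DSAP/SSAP multiplexing)."""
    # The 2 bytes after the 6-byte destination and 6-byte source MAC addresses
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= ETHERTYPE_MIN:
        # Ethernet II: the EtherType itself identifies the payload protocol
        return ("ethernet_ii", type_or_len)
    # IEEE 802.3: the field is a length; an LLC header follows the MAC header
    dsap, ssap = frame[14], frame[15]
    return ("802.2_llc", dsap, ssap)
```

For example, a frame with 0x0800 in the type/length field is Ethernet II carrying IPv4, while a frame with a small length value and DSAP/SSAP of 0xAA is an 802.2 frame carrying a SNAP header.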
In a telephone network, bit errors might be common, meaning that error management is crucial, but that is today provided by modern modem protocols. Today's modem protocols have inherited LLC features from the older LAPM link layer protocol, made for modem communication in old X.25 networks. === Cellular systems === The GPRS LLC layer also does ciphering and deciphering of SN-PDU (SNDCP) packets. === Power lines === Another example of a data link layer which is split between LLC (for flow and error control) and MAC (for multiple access) is the ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables). == See also == Subnetwork Access Protocol (SNAP) Virtual Circuit Multiplexing (VC-MUX) == References ==
Wikipedia/Logical_link_control
An optical transport network (OTN) is a digital wrapper that encapsulates frames of data, to allow multiple data sources to be sent on the same channel. This creates an optical virtual private network for each client signal. ITU-T defines an optical transport network as a set of optical network elements (ONE) connected by optical fiber links, able to provide functionality of transport, multiplexing, switching, management, supervision and survivability of optical channels carrying client signals. An ONE may re-time, re-amplify and re-shape (3R) a signal, but it does not have to be 3R; it can be purely photonic. A network is not an OTN unless its elements are connected by optical fibre links; switching, management, and supervision functionality alone does not make a network an OTN unless the signals are carried over optical fibre. Unlike SONET/SDH, OTN provides a mechanism to manage multiplexed wavelengths in a DWDM system. == Comparing OTN and SONET/SDH == == Standards == OTN was designed to provide higher throughput (currently 400G) than its predecessor SONET/SDH, which stops at 40 Gbit/s per channel. ITU-T Recommendation G.709 is commonly called Optical Transport Network (OTN) (also called digital wrapper technology or optical channel wrapper). As of December 2009, OTN had standardized several line rates. The OTUk (k=1/2/2e/3/3e2/4) is an information structure into which another information structure called ODUk (k=1/2/2e/3/3e2/4) is mapped. The ODUk signal is the server layer signal for client signals.
The following ODUk information structures are defined in ITU-T Recommendation G.709. == Equipment == At a very high level, the typical signals processed by OTN equipment at the Optical Channel layer are: SONET/SDH Ethernet/FibreChannel Packets OTN A few of the key functions performed on these signals are: Protocol processing of all the signals: Mapping and de-mapping of non-OTN signals into and out of OTN signals Multiplexing and de-multiplexing of OTN signals Forward error correction (FEC) on OTN signals Packet processing in conjunction with mapping/de-mapping of packets into and out of OTN signals === Switch Fabric === The OTN signals at all data-rates have the same frame structure but the frame period reduces as the data-rate increases. As a result, the Time-Slot Interchange (TSI) technique of implementing SONET/SDH switch fabrics is not directly applicable to OTN switch fabrics. OTN switch fabrics are typically implemented using Packet Switch Fabrics. === FEC Latency === On a point-to-point OTN link there is latency due to forward error correction (FEC) processing. The Hamming distance of the RS(255,239) code is 17, so it can correct up to eight symbol errors per codeword. == See also == G.709 == References == == External links == Anritsu Poster - Details of all OTN areas including breakdown of the full frame Anritsu Poster - Details of all OTN areas including breakdown of the full frame at the Wayback Machine (archived 2014-05-17) Optical Transport Network (OTN) Tutorial, ITU-T, only covers G.709 (2003/03) Hot topics in Optical Transport Networks, Steve Trowbridge (Nokia), Chairman, ITU-T Study Group 15
Wikipedia/Optical_Transport_Network
In computer networking, the Datagram Congestion Control Protocol (DCCP) is a message-oriented transport layer protocol. DCCP implements reliable connection setup, teardown, Explicit Congestion Notification (ECN), congestion control, and feature negotiation. The IETF published DCCP as RFC 4340, a proposed standard, in March 2006. RFC 4336 provides an introduction. == Operation == DCCP provides a way to gain access to congestion-control mechanisms without having to implement them at the application layer. It allows for flow-based semantics like in Transmission Control Protocol (TCP), but does not provide reliable in-order delivery. Sequenced delivery within multiple streams as in the Stream Control Transmission Protocol (SCTP) is not available in DCCP. A DCCP connection contains acknowledgment traffic as well as data traffic. Acknowledgments inform a sender whether its packets have arrived, and whether they were marked by Explicit Congestion Notification (ECN). Acknowledgements are transmitted as reliably as the congestion control mechanism in use requires, possibly completely reliably. DCCP has the option for very long (48-bit) sequence numbers corresponding to a packet ID, rather than a byte ID as in TCP. The long length of the sequence numbers aims to guard against "some blind attacks, such as the injection of DCCP-Resets into the connection". == Applications == DCCP is useful for applications with timing constraints on the delivery of data. Such applications include streaming media, multiplayer online games and Internet telephony. In such applications, old messages quickly become useless, so that getting new messages is preferred to resending lost messages. As of 2017 such applications have often either settled for TCP or used User Datagram Protocol (UDP) and implemented their own congestion-control mechanisms, or have no congestion control at all. 
While being useful for these applications, DCCP can also serve as a general congestion-control mechanism for UDP-based applications, by adding, as needed, mechanisms for reliable or in-order delivery on top of UDP/DCCP. In this context, DCCP allows the use of different, but generally TCP-friendly congestion-control mechanisms. == Implementations == The following operating systems implement DCCP: FreeBSD, version 5.1 as patch Linux since version 2.6.14; the implementation was marked deprecated in version 6.4 due to lack of maintenance and was removed in Linux 6.16. Userspace library: DCCP-TP Archived 2008-07-23 at the Wayback Machine implementation is optimized for portability, but has had no changes since June 2008. GoDCCP, whose purpose is to provide a standardized, portable, NAT-friendly framework for peer-to-peer communications with flexible congestion control, depending on application. == Packet structure == The DCCP generic header takes different forms depending on the value of X, the Extended Sequence Numbers bit. If X is one, the Sequence Number field is 48 bits long, and the generic header takes 16 bytes, as follows. If X is zero, only the low 24 bits of the Sequence Number are transmitted, and the generic header is 12 bytes long. Source Port: 16 bits Identifies the sending port. Destination Port: 16 bits Identifies the receiving port. Data Offset: 8 bits The offset from the start of the packet's DCCP header to the start of its application data area, in 32-bit words. CCVal: 4 bits Used by the HC-Sender CCID. Checksum Coverage (CsCov): 4 bits Checksum Coverage determines the parts of the packet that are covered by the Checksum field. Checksum: 16 bits The Internet checksum of the packet's DCCP header (including options), a network-layer pseudoheader, and, depending on Checksum Coverage, all, some, or none of the application data.
Reserved (Res): 3 bits; Res == 0 Senders MUST set this field to all zeroes on generated packets, and receivers MUST ignore its value. Type: 4 bits The Type field specifies the type of the packet. Extended Sequence Numbers (X): 1 bit Set to one to indicate the use of an extended generic header with 48-bit Sequence and Acknowledgement Numbers. Sequence Number: 48 or 24 bits Identifies the packet uniquely in the sequence of all packets the source sent on this connection. == Current development == Similar to the multipath extension of TCP (MPTCP), a multipath feature for DCCP, correspondingly denoted MP-DCCP, is under discussion at the IETF. First implementations have already been developed, tested, and presented in a collaborative approach between operators and academia, and are available as an open source solution. == See also == Stream Control Transmission Protocol (SCTP) Transport layer § Comparison of transport layer protocols == References == == External links == IETF Datagram Congestion Control Protocol (dccp) Charter === Protocol specifications === RFC 4340 — Datagram Congestion Control Protocol RFC 5595 — The Datagram Congestion Control Protocol (DCCP) Service Codes RFC 5596 — DCCP Simultaneous-Open Technique to Facilitate NAT/Middlebox Traversal RFC 5762 — RTP and the DCCP RFC 5238 — Datagram Transport Layer Security (DTLS) over DCCP RFC 5634 — Quick-Start for DCCP RFC 6773 — A Datagram Congestion Control Protocol UDP Encapsulation for NAT Traversal === Congestion control IDs === RFC 4341 — Profile for DCCP Congestion Control ID 2: TCP-like Congestion Control RFC 4342 — Profile for DCCP Congestion Control ID 3: TCP-Friendly Rate Control (TFRC) RFC 5622 — Profile for DCCP Congestion Control ID 4: TCP-Friendly Rate Control for Small Packets (TFRC-SP) === Other information === RFC 4336 — Problem Statement for the Datagram Congestion Control Protocol (DCCP) DCCP page from one of DCCP authors DCCP support in Linux Datagram Congestion
Control Protocol (DCCP)
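The generic header layout described under "Packet structure" can be sketched with Python's struct module. This is an illustrative encoder for the X=1 (16-byte, 48-bit sequence number) form only, not a full DCCP implementation; the function name and argument names are chosen for this example:

```python
import struct

def build_dccp_generic_header(src_port, dst_port, data_offset_words,
                              ccval, cscov, checksum, pkt_type, seq, x=1):
    """Build the 16-byte DCCP generic header with X=1 (48-bit sequence number).

    Field order per RFC 4340: ports, data offset, CCVal/CsCov, checksum,
    then Res(3)/Type(4)/X(1), a reserved byte, and the 48-bit sequence number.
    """
    byte8 = (pkt_type << 1) | (x & 1)  # the 3 Res bits are zero on sent packets
    return (struct.pack("!HHBBHBB", src_port, dst_port, data_offset_words,
                        (ccval << 4) | cscov, checksum, byte8, 0)
            + seq.to_bytes(6, "big"))

# Type 0 is DCCP-Request; a 16-byte header is 4 words of data offset.
hdr = build_dccp_generic_header(5001, 5002, 4, 0, 0, 0,
                                pkt_type=0, seq=0x123456789ABC)
```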
Wikipedia/Datagram_Congestion_Control_Protocol
The Open Systems Interconnection (OSI) model is a reference model developed by the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection." In the OSI reference model, the components of a communication system are distinguished in seven abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. The model describes communications from the physical implementation of transmitting bits across a transmission medium to the highest-level representation of data of a distributed application. Each layer has well-defined functions and semantics and serves a class of functionality to the layer above it and is served by the layer below it. Established, well-known communication protocols are decomposed in software development into the model's hierarchy of function calls. The Internet protocol suite as defined in RFC 1122 and RFC 1123 is a model of networking developed contemporarily to the OSI model, and was funded primarily by the U.S. Department of Defense. It was the foundation for the development of the Internet. It assumed the presence of generic physical links and focused primarily on the software layers of communication, with a similar but much less rigorous structure than the OSI model. In comparison, several networking models have sought to create an intellectual framework for clarifying networking concepts and activities, but none have been as successful as the OSI reference model in becoming the standard model for discussing and teaching networking in the field of information technology. The model allows transparent communication through equivalent exchange of protocol data units (PDUs) between two parties, through what is known as peer-to-peer networking (also known as peer-to-peer communication). 
As a result, the OSI reference model has become an important reference among professionals and non-professionals alike and in networking generally, due in large part to its commonly accepted, user-friendly framework. == History == The development of the OSI model started in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world (see OSI protocols and Protocol Wars). In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model was not relied upon during the design of the Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF). In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s. The Experimental Packet Switched System in the UK c. 1973–1975 identified the need for defining higher-level protocols. The UK National Computing Centre publication, Why Distributed Computing, which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. Beginning in 1977, the ISO initiated a program to develop general standards and methods of networking.
A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The British Department of Trade and Industry acted as the secretariat, and universities in the United Kingdom developed prototypes of the standards. The OSI model was first defined in raw form in Washington, D.C., in February 1978 by French software engineer Hubert Zimmermann, and the refined but still draft standard was published by the ISO in 1980. The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined. In May 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software. The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. 
Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Network Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it. The OSI standards documents are available from the ITU-T as the X.200 series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge. OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking. The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach. == Definitions == Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. 
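The peer-entity exchange just described — each layer wrapping the data from the layer above on the way down and unwrapping it on the way up — can be illustrated with a toy model. The layer names and textual "headers" here are purely illustrative; real protocol headers are binary structures, and footers are omitted:

```python
# Each layer treats the unit handed down from above as opaque data and
# prepends its own header to form its protocol data unit (PDU).
LAYERS = ["transport", "network", "data-link"]  # illustrative subset

def encapsulate(app_data: bytes) -> bytes:
    pdu = app_data
    for layer in LAYERS:
        pdu = f"[{layer}-hdr]".encode() + pdu  # wrap for the layer below
    return pdu

def decapsulate(frame: bytes) -> bytes:
    for layer in reversed(LAYERS):  # strip the outermost header first
        prefix = f"[{layer}-hdr]".encode()
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame
```

Decapsulation must undo the headers in the reverse of the order they were added, which is why the receiving stack processes the frame from the lowest layer upward.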
Service definitions, like the OSI model, abstractly describe the functionality provided to a layer N by a layer N−1, where N is one of the seven layers of protocols operating in the local host. At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers. Data processing by two communicating OSI-compatible devices proceeds as follows: The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU). The PDU is passed to layer N−1, where it is known as the service data unit (SDU). At layer N−1 the SDU is concatenated with a header, a footer, or both, producing a layer N−1 PDU. It is then passed to layer N−2. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed. === Standards documents === The OSI model was defined in ISO/IEC 7498 which consists of the following parts: ISO/IEC 7498-1 The Basic Model ISO/IEC 7498-2 Security Architecture ISO/IEC 7498-3 Naming and addressing ISO/IEC 7498-4 Management framework ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200. == Layer architecture == The recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model. === Layer 1: Physical layer === The physical layer is responsible for the transmission and reception of unstructured raw data between a device, such as a network interface controller, Ethernet hub, or network switch, and a physical transmission medium. 
It converts the digital bits into electrical, radio, or optical signals (analogue signals). Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of the network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard. The physical layer also specifies how encoding occurs over a physical signal, such as electrical voltage or a light pulse. For example, a 1 bit might be represented on a copper wire by the transition from a 0-volt to a 5-volt signal, whereas a 0 bit might be represented by the transition from a 5-volt to a 0-volt signal. As a result, common problems occurring at the physical layer are often related to the incorrect media termination, EMI or noise scrambling, and NICs and hubs that are misconfigured or do not work correctly. === Layer 2: Data link layer === The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them. IEEE 802 divides the data link layer into two sublayers: Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data. 
Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization. The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 Zigbee operate at the data link layer. The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol. Security, specifically (authenticated) encryption, at this layer can be applied with MACsec. === Layer 3: Network layer === The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors. Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it does not need to do so. 
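The fragmentation behaviour described for the network layer can be sketched as follows. Real network-layer fragmentation also carries identification and offset fields so fragments can be reassembled out of order; this toy version assumes in-order, loss-free delivery:

```python
def fragment(message: bytes, link_mtu: int) -> list[bytes]:
    """Split a message into fragments no larger than the link MTU."""
    return [message[i:i + link_mtu] for i in range(0, len(message), link_mtu)]

def reassemble(fragments: list[bytes]) -> bytes:
    return b"".join(fragments)  # assumes in-order, loss-free delivery

# A 3000-byte message over a link with a 1500-byte MTU needs two fragments.
parts = fragment(b"A" * 3000, 1500)
```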
A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them. === Layer 4: Transport layer === The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source host to a destination host from one application to another across a network while maintaining the quality-of-service functions. Transport protocols may be connection-oriented or connectionless. This may require breaking large protocol data units or long data streams into smaller chunks called "segments", since the network layer imposes a maximum packet size called the maximum transmission unit (MTU), which depends on the maximum packet size imposed by all data link layers on the network path between the two hosts. The amount of data in a data segment must be small enough to allow for a network-layer header and a transport-layer header. For example, for data being transferred across Ethernet, the MTU is 1500 bytes, the minimum size of a TCP header is 20 bytes, and the minimum size of an IPv4 header is 20 bytes, so the maximum segment size is 1500−(20+20) bytes, or 1460 bytes. The process of dividing data into segments is called segmentation; it is an optional function of the transport layer. Some connection-oriented transport protocols, such as TCP and the OSI connection-oriented transport protocol (COTP), perform segmentation and reassembly of segments on the receiving side; connectionless transport protocols, such as UDP and the OSI connectionless transport protocol (CLTP), usually do not. 
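The worked Ethernet example above amounts to a one-line calculation:

```python
def max_segment_size(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    """Payload left for a TCP segment after minimum IPv4 and TCP headers."""
    return mtu - (ip_header + tcp_header)

mss = max_segment_size(1500)  # Ethernet MTU with minimum IPv4/TCP headers
```

With IP options or larger transport headers, the headroom shrinks accordingly, which is why the function takes the header sizes as parameters.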
The transport layer also controls the reliability of a given link between a source and destination host through flow control, error control, and acknowledgments of sequence and existence. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery through the acknowledgment hand-shake system. The transport layer also provides acknowledgement of successful data transmission and sends the next data if no errors occurred. Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem. The OSI connection-oriented transport protocol defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0–4 classes are shown in the following table: An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only.
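The retransmit-until-acknowledged behaviour of a reliable transport can be illustrated with a toy stop-and-wait loop over a simulated lossy channel. The channel function and loss rate here are illustrative, not part of any real protocol:

```python
import random

def send_reliably(segment: bytes, channel, max_tries: int = 10) -> int:
    """Stop-and-wait style delivery: retransmit until an acknowledgment arrives.

    `channel` models one transmission; returning True stands in for a
    received acknowledgment. Returns the number of attempts needed.
    """
    for attempt in range(1, max_tries + 1):
        if channel(segment):
            return attempt
    raise TimeoutError("no acknowledgment received")

rng = random.Random(42)                      # seeded for reproducibility
lossy = lambda seg: rng.random() > 0.5       # ~50% of transmissions get acked
tries = send_reliably(b"segment-1", lossy)
```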
Roughly speaking, tunnelling protocols operate at the transport layer, for example carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or providing end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer 4 protocols within OSI. Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers. === Layer 5: Session layer === The session layer sets up, controls, and tears down the connections between two or more computers; such a connection is called a "session". Common functions of the session layer include user logon (establishment) and user logoff (termination) functions. Authentication methods are also built into most client software, such as FTP Client and NFS Client for Microsoft Networks. Therefore, the session layer establishes, manages and terminates the connections between the local and remote applications. The session layer also provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session between two related streams of data, such as an audio and a video stream in a web-conferencing application. Therefore, the session layer is commonly implemented explicitly in application environments that use remote procedure calls.
=== Layer 6: Presentation layer === The presentation layer establishes data formatting and data translation into a format specified by the application layer during the encapsulation of outgoing messages while being passed down the protocol stack, and reverses this translation during the deencapsulation of incoming messages when being passed up the protocol stack. The presentation layer handles protocol conversion, data encryption, data decryption, data compression, data decompression, incompatibility of data representation between operating systems, and graphic commands. The presentation layer transforms data into the form that the application layer accepts, to be sent across a network. Since the presentation layer converts data and graphics into a display format for the application layer, the presentation layer is sometimes called the syntax layer. For this reason, the presentation layer negotiates the transfer of syntax structure through the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML. === Layer 7: Application layer === The application layer is the layer of the OSI model that is closest to the end user, which means both the OSI application layer and the user interact directly with a software application that implements a component of communication between the client and server, such as File Explorer and Microsoft Word. Such application programs fall outside the scope of the OSI model unless they are directly integrated into the application layer through the functions of communication, as is the case with applications such as web browsers and email programs.
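The EBCDIC-to-ASCII conversion mentioned for the presentation layer can be demonstrated with Python's built-in codecs, taking code page 500 as one common EBCDIC variant (the choice of codec here is an assumption for illustration — several EBCDIC code pages exist):

```python
# The same text "HELLO" has different byte encodings in EBCDIC and ASCII;
# translating between such character syntaxes is a layer-6 concern.
ebcdic_bytes = "HELLO".encode("cp500")            # EBCDIC representation
ascii_bytes = ebcdic_bytes.decode("cp500").encode("ascii")  # translated
```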
Other examples of software are Microsoft Network Software for File and Printer Sharing and Unix/Linux Network File System Client for access to shared file resources. Application-layer functions typically include file sharing, message handling, and database access, through the most common protocols at the application layer, known as HTTP, FTP, SMB/CIFS, TFTP, and SMTP. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application entity and the application. For example, a reservation website might have two application entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols have anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network. == Cross-layer functions == Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (See ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad—confidentiality, integrity, and availability—of the transmitted data. Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols. Specific examples of cross-layer functions include the following: Security service (telecommunication) as defined by ITU-T X.800 recommendation. Management functions, i.e. 
functions that permit configuring, instantiating, monitoring, and terminating the communications of two or more entities: there is a specific application-layer protocol, the Common Management Information Protocol (CMIP), and its corresponding service, the Common Management Information Service (CMIS); these need to interact with every layer in order to deal with their instances. OSI subdivides the Network Layer into three sublayers: 3a) Subnetwork Access, 3b) Subnetwork Dependent Convergence and 3c) Subnetwork Independent Convergence. Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames; MPLS is therefore sometimes described as operating at a "Layer 2.5". Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided. == Programming interfaces == Neither the OSI Reference Model, nor any OSI protocol specifications, outline any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific. For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3). == Comparison to other networking suites == The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches.
This correspondence is rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack in the modern Internet. === Comparison with TCP/IP model === The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network. Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner: The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer. The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer. The internet layer performs functions such as those in a subset of the OSI network layer. The link layer corresponds to the OSI data link layer and may include functions similar to those of the physical layer, as well as some protocols of the OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer. The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. 
In addition, the protocols included so many optional features that many vendors' implementations were not interoperable. Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142. == See also == == References == == Further reading == Day, John D. (2008). Patterns in Network Architecture: A Return to Fundamentals. Upper Saddle River, N.J.: Pearson Education. ISBN 978-0-13-225242-3. OCLC 213482801. Dickson, Gary; Lloyd, Alan (1992). Open Systems Interconnection. New York: Prentice Hall. ISBN 978-0-13-640111-7. OCLC 1245634475 – via Internet Archive. Piscitello, David M.; Chapin, A. Lyman (1993). Open systems networking : TCP/IP and OSI. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 978-0-201-56334-4. OCLC 624431223 – via Internet Archive. Rose, Marshall T. (1990). The Open Book: A Practical Perspective on OSI. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-643016-2. OCLC 1415988401 – via Internet Archive. Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press. ISBN 978-1-139-91661-5. OCLC 881237495. Partial preview at Google Books. Zimmermann, Hubert (April 1980). "OSI Reference Model — The ISO Model of Architecture for Open Systems Interconnection". IEEE Transactions on Communications. 28 (4): 425–432. CiteSeerX 10.1.1.136.9497. doi:10.1109/TCOM.1980.1094702. ISSN 0090-6778. OCLC 5858668034. S2CID 16013989. == External links == "Windows network architecture and the OSI model". Microsoft Learn. 2 February 2024. Retrieved 12 July 2024. 
"ISO/IEC standard 7498-1:1994 - Service definition for the association control service element". ISO Standards Maintenance Portal. Retrieved 12 July 2024. (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement) "ITU Recommendation X.200". International Telecommunication Union. 2 June 1998. Retrieved 12 July 2024. "INFormation CHanGe Architectures and Flow Charts powered by Google App Engine". infchg.appspot.com. Archived from the original on 26 May 2012. "Internetworking Technology Handbook". docwiki.cisco.com. 10 July 2015. Archived from the original on 6 September 2015. EdXD; Saikot, Mahmud Hasan (25 November 2021). "7 Layers of OSI Model Explained". ByteXD. Retrieved 12 July 2024.
Wikipedia/Open_Systems_Interconnection_model
The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability. SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms. == Formal oversight == The IETF Signaling Transport (SIGTRAN) working group defined the protocol (number 132) in October 2000, and the IETF Transport Area (TSVWG) working group maintains it. RFC 9260 defines the protocol. RFC 3286 provides an introduction. == Message-based multi-streaming == SCTP applications submit data for transmission in messages (groups of bytes) to the SCTP transport layer. SCTP places messages and control information into separate chunks (data chunks and control chunks), each identified by a chunk header. The protocol can fragment a message into multiple data chunks, but each data chunk contains data from only one user message. SCTP bundles the chunks into SCTP packets. The SCTP packet, which is submitted to the Internet Protocol, consists of a packet header, SCTP control chunks (when necessary), followed by SCTP data chunks (when available). SCTP may be characterized as message-oriented, meaning it transports a sequence of messages (each being a group of bytes), rather than transporting an unbroken stream of bytes as in TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. 
In contrast, TCP is a stream-oriented protocol, transporting streams of bytes reliably and in order. However, TCP does not allow the receiver to know how many times the sender application called on the TCP transport, passing it groups of bytes to be sent out. At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than having to keep a queue of individual separate outbound messages which must be preserved as such. The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images simultaneously with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages (or chunks) rather than bytes. TCP preserves byte order in the stream by including a byte sequence number with each segment. SCTP, on the other hand, assigns a sequence number or a message-id to each message sent in a stream. This allows independent ordering of messages in different streams. However, message ordering is optional in SCTP; a receiving application may choose to process messages in the order of receipt instead of in the order of sending. == Features == Features of SCTP include: Reliable transmission of both ordered and unordered data streams Multihoming support in which one or both endpoints of a connection can consist of more than one IP address, enabling transparent fail-over between redundant network paths Delivery of chunks within independent streams eliminates unnecessary head-of-line blocking, as opposed to TCP byte-stream delivery. Explicit partial reliability Path selection and monitoring to select a primary data transmission path and test the connectivity of the transmission path Validation and acknowledgment mechanisms protect against flooding attacks and provide notification of duplicated or missing data chunks. 
Improved error detection suitable for Ethernet jumbo frames The designers of SCTP originally intended it for the transport of telephony (i.e. Signaling System 7) over Internet Protocol, with the goal of duplicating some of the reliability attributes of the SS7 signaling network in IP. This IETF effort is known as SIGTRAN. In the meantime, other uses have been proposed, for example, the Diameter protocol and Reliable Server Pooling (RSerPool). == Motivation and adoption == TCP has provided the primary means to transfer data reliably across the Internet. However, TCP has imposed limitations on several applications. From RFC 4960: TCP provides both reliable data transfer and strict order-of-transmission delivery of data. Some applications need reliable transfer without sequence maintenance, while others would be satisfied with partial ordering of the data. In both of these cases, the head-of-line blocking property of TCP causes unnecessary delay. For applications exchanging distinct records or messages, the stream-oriented nature of TCP requires the addition of explicit markers or other encoding to delineate the individual records. In order to avoid sending many small IP packets where one single larger packet would have sufficed, the TCP implementation may delay transmitting data while waiting for possibly more data being queued by the application (Nagle's algorithm). Although many TCP implementations allow the disabling of Nagle's algorithm, this is not required by the specification. SCTP on the other hand allows undelayed transmission to be configured as a default for an association, eliminating any undesired delays, but at the cost of higher transfer overhead. The limited scope of TCP sockets complicates the task of providing highly-available data transfer capability using multihomed hosts. TCP is relatively vulnerable to denial-of-service attacks, such as SYN attacks. 
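The independent per-stream ordering that distinguishes SCTP from TCP's single byte stream can be modelled in a few lines of Python. This is an illustrative sketch with hypothetical class and method names, not an SCTP implementation: each stream keeps its own sequence counter, so a gap in one stream never delays delivery on another.

```python
# Illustrative model of SCTP-style multi-streaming: each stream has its own
# sequence numbers, so a missing message in one stream does not block
# delivery on another (no cross-stream head-of-line blocking).
from collections import defaultdict

class StreamReceiver:
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected sequence per stream
        self.pending = defaultdict(dict)   # buffered out-of-order messages

    def receive(self, stream_id, seq, data):
        """Buffer a message; return all messages now deliverable in order."""
        self.pending[stream_id][seq] = data
        delivered = []
        # Deliver consecutive messages starting at the expected sequence.
        while self.next_seq[stream_id] in self.pending[stream_id]:
            delivered.append(self.pending[stream_id].pop(self.next_seq[stream_id]))
            self.next_seq[stream_id] += 1
        return delivered

rx = StreamReceiver()
print(rx.receive(0, 1, "page text"))  # out of order: buffered, nothing delivered
print(rx.receive(1, 0, "image"))      # stream 1 is unaffected by stream 0's gap
print(rx.receive(0, 0, "header"))     # fills the gap: both stream-0 messages out
```

A TCP-like receiver would correspond to the special case of a single stream, where the first gap stalls everything behind it.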
Adoption has been slowed by lack of awareness, lack of implementations (particularly in Microsoft Windows), lack of application support and lack of network support. SCTP has seen adoption in the mobile telephony space as the transport protocol for several core network interfaces. == Multihoming == SCTP provides redundant paths to increase reliability. Each SCTP end point needs to check reachability of the primary and redundant addresses of the remote end point using a heartbeat. Each SCTP end point needs to acknowledge the heartbeats it receives from the remote end point. When SCTP sends a message to a remote address, the source interface will only be decided by the routing table of the host (and not by SCTP). In asymmetric multihoming, one of the two endpoints does not support multihoming. In local multihoming and remote single homing, if the remote primary address is not reachable, the SCTP association fails even if an alternate path is possible. == Packet structure == An SCTP packet consists of two basic sections: The common header, which occupies the first 12 bytes. The data chunks, which occupy the remaining portion of the packet. Each chunk starts with a one-byte type identifier, with 15 chunk types defined by RFC 9260, and at least 5 more defined by additional RFCs. Eight flag bits, a two-byte length field, and the data compose the remainder of the chunk. If the chunk does not form a multiple of 4 bytes (i.e., the length is not a multiple of 4), then it is padded with zeros, which are not included in the chunk length. The two-byte length field limits each chunk to a 65,535-byte length (including the type, flags and length fields). 
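The chunk layout described above (one-byte type, one-byte flags, two-byte length covering the header and value but not the padding, then zero-padding to a 4-byte boundary) can be sketched with `struct` packing. The helper name is hypothetical:

```python
import struct

def make_chunk(chunk_type: int, flags: int, value: bytes) -> bytes:
    """Build an SCTP-style chunk: 1-byte type, 1-byte flags, 2-byte length
    (covering type, flags, length and value, but not padding), then the
    value padded with zeros to a 4-byte boundary."""
    length = 4 + len(value)     # header + value; padding is excluded
    padding = (-length) % 4     # zeros needed to reach a 4-byte multiple
    return struct.pack("!BBH", chunk_type, flags, length) + value + b"\x00" * padding

chunk = make_chunk(0, 0, b"hello")   # 5 value bytes -> length field 9, 3 pad bytes
print(len(chunk), struct.unpack("!H", chunk[2:4])[0])  # 12 9
```

Note how the on-wire size (12 bytes) differs from the length field (9) exactly by the padding, as the text specifies.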
== Security == Although encryption was not part of the original SCTP design, SCTP was designed with features for improved security, such as 4-way handshake (compared to TCP 3-way handshake) to protect against SYN flooding attacks, and large "cookies" for association verification and authenticity. Reliability was also a key part of the security design of SCTP. Multihoming enables an association to stay open even when some routes and interfaces are down. This is of particular importance for SIGTRAN as it carries SS7 over an IP network using SCTP, and requires strong resilience during link outages to maintain telecommunication service even when enduring network anomalies. == Implementations == The SCTP reference implementation runs on FreeBSD, Mac OS X, Microsoft Windows, and Linux. The following operating systems implement SCTP: AIX Version 5 and newer NetBSD since 8.0 Cisco IOS 12 and above DragonFly BSD since version 1.4, however support is being deprecated in version 4.2 FreeBSD, version 7 and above, contains the reference SCTP implementation HP-UX, 11i v2 and above illumos Linux kernel 2.4 and above QNX Neutrino Realtime OS, 6.3.0 to 6.3.2, deprecated since 6.4.0 Tru64 with the Compaq SCTP add-on package Sun Solaris 10 and above VxWorks versions 6.2.x to 6.4.x, and 6.7 and newer Third-party drivers: Microsoft Windows: The SctpDrv kernel driver is a port of the BSD SCTP stack to Windows (Abandoned after 2012) MacOS: SCTP Network Kernel Extension for Mac OS X Userspace library: Portable SCTP userland stack The SCTP library Windows XP port Oracle Java SE 7 Erlang/OTP The following applications implement SCTP: WebRTC NetFlow === Tunneling over UDP === In the absence of native SCTP support in operating systems, it is possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP calls so existing applications can use SCTP without modification. 
== RFCs == RFC 9260 Stream Control Transmission Protocol RFC 8540 Stream Control Transmission Protocol: Errata and Issues in RFC 4960 (obsoleted by RFC 9260) RFC 7829 SCTP-PF: A Quick Failover Algorithm for the Stream Control Transmission Protocol RFC 7765 TCP and Stream Control Transmission Protocol (SCTP) RTO Restart RFC 7496 Additional Policies for the Partially Reliable Stream Control Transmission Protocol Extension RFC 7053 SACK-IMMEDIATELY Extension for the Stream Control Transmission Protocol (obsoleted by RFC 9260) RFC 6951 UDP Encapsulation of Stream Control Transmission Protocol (SCTP) Packets for End-Host to End-Host Communication RFC 6525 Stream Control Transmission Protocol (SCTP) Stream Reconfiguration RFC 6458 Sockets API Extensions for the Stream Control Transmission Protocol (SCTP) RFC 6096 Stream Control Transmission Protocol (SCTP) Chunk Flags Registration (obsoleted by RFC 9260) RFC 5062 Security Attacks Found Against the Stream Control Transmission Protocol (SCTP) and Current Countermeasures RFC 5061 Stream Control Transmission Protocol (SCTP) Dynamic Address Reconfiguration RFC 5043 Stream Control Transmission Protocol (SCTP) Direct Data Placement (DDP) Adaptation RFC 4960 Stream Control Transmission Protocol (obsoleted by RFC 9260) RFC 4895 Authenticated Chunks for the Stream Control Transmission Protocol (SCTP) RFC 4820 Padding Chunk and Parameter for the Stream Control Transmission Protocol (SCTP) RFC 4460 Stream Control Transmission Protocol (SCTP) Specification Errata and Issues (obsoleted by RFC 9260) RFC 3873 Stream Control Transmission Protocol (SCTP) Management Information Base (MIB) RFC 3758 Stream Control Transmission Protocol (SCTP) Partial Reliability Extension RFC 3554 On the Use of Stream Control Transmission Protocol (SCTP) with IPsec RFC 3436 Transport Layer Security over Stream Control Transmission Protocol RFC 3309 Stream Control Transmission Protocol (SCTP) Checksum Change (obsoleted by RFC 4960) RFC 3286 An Introduction to 
the Stream Control Transmission Protocol RFC 3257 Stream Control Transmission Protocol Applicability Statement RFC 2960 Stream Control Transmission Protocol (updated by RFC 3309 and obsoleted by RFC 4960) == See also == Transport layer § Comparison of transport layer protocols Session Initiation Protocol (SIP) – which may initiate multiple streams over SCTP, TCP, or UDP Multipath TCP – which allows a TCP connection to use multiple paths to maximize resource usage and increase redundancy Happy Eyeballs – originally designed for efficient selection of IPv4 or IPv6 for a connection; could also be adapted to select from different transport protocols such as TCP and SCTP == Notes == == References == == External links == sigtran (archived) "Signaling Transport (sigtran) Working Group". "Transport Area Working Group (tsvwg)". "OpenSS7 Project". SCTP workgroup for Linux "Michael Tüxen's SCTP Page". "Lode Coene's SCTP Page". "Thomas Dreibholz's SCTP Project Page".
Wikipedia/Stream_Control_Transmission_Protocol
Internetwork Packet Exchange (IPX) is the network-layer protocol in the IPX/SPX protocol suite. IPX is derived from Xerox Network Systems' IDP. It also has the ability to act as a transport layer protocol. The IPX/SPX protocol suite was very popular through the late 1980s and mid-1990s because it was used by Novell NetWare, a network operating system. Due to Novell NetWare's popularity, IPX became a prominent protocol for internetworking. A big advantage of IPX was the small memory footprint of the IPX driver, which was vital for DOS and Windows up to Windows 95 because of the limited size of conventional memory at that time. Another IPX advantage was easy configuration of its client computers. However, IPX does not scale well for large networks such as the Internet. As such, IPX usage decreased as the boom of the Internet made TCP/IP nearly universal. Computers and networks can run multiple network protocols, so almost all IPX sites also ran TCP/IP, to allow Internet connectivity. It was also possible to run later Novell products without IPX, with the beginning of full support for both IPX and TCP/IP by NetWare version 5 in late 1998. == Description == A big advantage of the IPX protocol is that it needs little or no configuration. At a time when protocols for dynamic host configuration did not exist and the BOOTP protocol for centralized assignment of addresses was not common, an IPX network could be configured almost automatically. A client computer uses the MAC address of its network card as the node address and learns what it needs to know about the network topology from the servers or routers – routes are propagated by the Routing Information Protocol, services by the Service Advertising Protocol. 
The administrator of a small IPX network only had to assign the same network number to all servers in the same network, assign different network numbers to different frame formats in the same network, assign different network numbers to the different interfaces of servers with multiple network cards (a Novell NetWare server with multiple network cards worked automatically as a router), assign different network numbers to servers in different interconnected networks, and, in more complex networks, start a router process on nodes with multiple network cards. == IPX packet structure == Each IPX packet begins with a header with the following structure: The Packet Type values are: == IPX addressing == An IPX address has the following structure: === Network number === The network number makes it possible to address (and communicate with) IPX nodes that do not belong to the same network or cabling system. The cabling system is a network in which a data link layer protocol can be used for communication. To allow communication between different networks, they must be connected with IPX routers. A set of interconnected networks is called an internetwork. Any Novell NetWare server may serve as an IPX router. Novell also supplied stand-alone routers. Other vendors' multiprotocol routers often support IPX routing. Using different frame formats in one cabling system is possible, but it works much as if separate cabling systems were used (i.e. different network numbers must be used for different frame formats even in the same cabling system, and a router must be used to allow communication between nodes using different frame formats in the same cabling system). Logical networks are assigned a unique 32-bit address in the range 0x1 to 0xFFFFFFFE (hexadecimal). Hosts have a 48-bit node address, which is by default set to the 6 bytes of the network interface card MAC address. 
Network addresses, which exist in addition to the node address but are not part of the MAC layer, are assigned only if an IPX router is present or by manual configuration in the network. The network address covers every network participant that can talk to another participant without the aid of an IPX router. In combination, both network and node address form an 80-bit unique identifier for each IPX node across connected logical networks. The node number itself is unique to the logical network only. Network number 00:00:00:00 refers to the current network, and is also used during router discovery. It is also the default in case no router is present, but can be changed by manual configuration, depending on the IPX implementation. The broadcast network number is FF:FF:FF:FF. === Node number === The node number is used to address an individual computer (or more exactly, a network interface) in the network. Client stations use their network interface card MAC address as the node number. The value FF:FF:FF:FF:FF:FF may be used as a node number in a destination address to broadcast a packet to "all nodes in the current network". === Socket number === The socket number serves to select a process or application in the destination node. The presence of a socket number in the IPX address allows IPX to act as a transport layer protocol, comparable with the User Datagram Protocol (UDP) in the Internet protocol suite. === Comparison with IP === The IPX network number is conceptually identical to the network part of the IP address (the parts with netmask bits set to 1); the node number has the same meaning as the bits of the IP address with netmask bits set to 0. The difference is that the boundary between the network and node parts of the address in IP is variable, while in IPX it is fixed. As the node address is usually identical to the MAC address of the network adapter, the Address Resolution Protocol is not needed in IPX. 
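The three-part addressing just described (32-bit network number, 48-bit node number, 16-bit socket number, with network and node together forming the 80-bit identifier) can be sketched with a small parser. The dotted-hex notation and the function name here are assumptions for illustration; the exact separator varies between tools:

```python
def parse_ipx_address(text: str):
    """Parse an address written as network.node.socket in hexadecimal,
    e.g. '00000001.00A0C9112233.0451' (a common notation; the separator
    is an assumption for this sketch)."""
    net_s, node_s, sock_s = text.split(".")
    network = int(net_s, 16)            # 32-bit network number
    node = int(node_s, 16)              # 48-bit node number (usually the MAC)
    sock = int(sock_s, 16)              # 16-bit socket number
    unique_id = (network << 48) | node  # the 80-bit network:node identifier
    return network, node, sock, unique_id

net, node, sock, uid = parse_ipx_address("00000001.00A0C9112233.0451")
print(hex(net), hex(sock))  # 0x1 0x451
```

The shift by 48 bits mirrors the fixed network/node boundary that distinguishes IPX from IP's variable netmask.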
For routing, the entries in the IPX routing table are similar to IP routing tables; routing is done by network address, and for each network address a network:node of the next router is specified in a similar fashion an IP address/netmask is specified in IP routing tables. There are three routing protocols available for IPX networks. In early IPX networks, a version of Routing Information Protocol (RIP) was the only available protocol to exchange routing information. Unlike RIP for IP, it uses delay time as the main metric, retaining the hop count as a secondary metric. Since NetWare 3, the NetWare Link Services Protocol (NLSP) based on IS-IS is available, which is more suitable for larger networks. Cisco routers implement an IPX version of EIGRP protocol as well. == Frame formats == IPX can be transmitted over Ethernet using one of the following 4 frame formats or encapsulation types: 802.3 (raw) encapsulation comprises an IEEE 802.3 frame header (destination MAC, source MAC, length) immediately followed by IPX data. It is used in legacy systems, and can be distinguished by the first two bytes of the IPX header always containing a value of 0xFFFF, which cannot be interpreted as valid LLC Destination and Source Service Access Points in this location of the frame. 802.2 (LLC or Novell) comprises an IEEE 802.3 frame header (destination MAC, source MAC, length) followed by an LLC header (DSAP 0xE0, SSAP 0xE0, control 0x03) followed by IPX data. The 0xE0 fields of the LLC header indicate "NetWare". 802.2 (SNAP) comprises an IEEE 802.3 frame header, an LLC header (DSAP 0xAA, SSAP 0xAA, control 0x03), a SNAP header (OUI 0x000000, type 0x8137), and IPX data. The 0xAA fields of the LLC header indicate "SNAP", and the OUI 0x000000 in the SNAP header indicates an encapsulated EtherType. Ethernet II encapsulation comprises an Ethernet II frame header (destination MAC, source MAC, EtherType 0x8137) followed by IPX data. 
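The four encapsulations above can be told apart by inspecting the bytes that follow the two MAC addresses, as the text explains (an EtherType of 0x8137, a raw-802.3 payload starting with the 0xFFFF checksum field, or the LLC service access points 0xE0/0xAA). A simplified classifier sketch, with a hypothetical function name:

```python
def classify_ipx_frame(after_macs: bytes) -> str:
    """Classify IPX-over-Ethernet encapsulation from the bytes that follow
    the destination and source MAC addresses (a simplified sketch of the
    rules described in the text)."""
    type_or_len = int.from_bytes(after_macs[:2], "big")
    if type_or_len == 0x8137:                   # EtherType registered for IPX
        return "Ethernet II"
    if type_or_len <= 1500:                     # a length field: IEEE 802.3 frame
        payload = after_macs[2:]
        if payload[:2] == b"\xff\xff":          # IPX checksum field, no LLC header
            return "802.3 raw"
        if payload[:3] == b"\xe0\xe0\x03":      # LLC DSAP/SSAP 0xE0: NetWare
            return "802.2 LLC"
        if payload[:3] == b"\xaa\xaa\x03":      # LLC 0xAA: a SNAP header follows
            return "802.2 SNAP"
    return "unknown"

print(classify_ipx_frame(b"\x81\x37" + b"\xff\xff"))      # Ethernet II
print(classify_ipx_frame(b"\x00\x30" + b"\xff\xff"))      # 802.3 raw
print(classify_ipx_frame(b"\x00\x30" + b"\xe0\xe0\x03"))  # 802.2 LLC
```

The 0xFFFF test works precisely because, as noted above, those two bytes can never be valid LLC service access points in that position.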
In non-Ethernet networks, only 802.2 and SNAP frame types are available. == References == == External links == RFC 1132 - A Standard for the Transmission of 802.2 Packets over IPX Networks Ethernet Frame Types: Don Provan's Definitive Answer
Wikipedia/Internetwork_Packet_Exchange
High-Level Data Link Control (HDLC) is a communication protocol used for transmitting data between devices in telecommunication and networking. Developed by the International Organization for Standardization (ISO), it is defined in the standard ISO/IEC 13239:2002. HDLC ensures reliable data transfer, allowing one device to understand data sent by another. It can operate with or without a continuous connection between devices, making it versatile for various network configurations. Originally, HDLC was used in multi-device networks, where one device acted as the master and others as slaves, through modes like Normal Response Mode (NRM) and Asynchronous Response Mode (ARM). These modes are now rarely used. Currently, HDLC is primarily employed in point-to-point connections, such as between routers or network interfaces, using a mode called Asynchronous Balanced Mode (ABM). == History == HDLC is based on IBM's SDLC protocol, which is the layer 2 protocol for IBM's Systems Network Architecture (SNA). It was extended and standardized by the ITU as LAP (Link Access Procedure), while ANSI named their essentially identical version ADCCP. The HDLC specification does not specify the full semantics of the frame fields. This allows other fully compliant standards to be derived from it, and derivatives have since appeared in innumerable standards. It was adopted into the X.25 protocol stack as LAPB, into the V.42 protocol as LAPM, into the Frame Relay protocol stack as LAPF and into the ISDN protocol stack as LAPD. The original ISO standards for HDLC are the following: ISO 3309-1979 – Frame Structure ISO 4335-1979 – Elements of Procedure ISO 6159-1980 – Unbalanced Classes of Procedure ISO 6256-1981 – Balanced Classes of Procedure ISO/IEC 13239:2002, the current standard, replaced all of these specifications. 
HDLC was the inspiration for the IEEE 802.2 LLC protocol, and it is the basis for the framing mechanism used with the PPP on synchronous lines, as used by many servers to connect to a WAN, most commonly the Internet. A similar version is used as the control channel for E-carrier (E1) and SONET multichannel telephone lines. Cisco HDLC uses low-level HDLC framing techniques but adds a protocol field to the standard HDLC header. == Framing == HDLC frames can be transmitted over synchronous or asynchronous serial communication links. Those links have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has to be identified. This is done by using a unique sequence of bits as a frame delimiter, or flag, and encoding the data to ensure that the flag sequence is never seen inside a frame. Each frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame may also mark the start of the next frame. On both synchronous and asynchronous links, the flag sequence is binary "01111110", or hexadecimal 0x7E, but the details are quite different. === Synchronous framing === Because a flag sequence consists of six consecutive 1-bits, other data is coded to ensure that it never contains more than five 1-bits in a row. This is done by bit stuffing: any time that five consecutive 1-bits appear in the transmitted data, the data is paused and a 0-bit is transmitted. The receiving device knows that this is being done, and after seeing five 1-bits in a row, a following 0-bit is stripped out of the received data. If instead the sixth bit is 1, this is either a flag (if the seventh bit is 0), or an error (if the seventh bit is 1). In the latter case, the frame receive procedure is aborted, to be restarted when a flag is next seen. This bit-stuffing serves a second purpose, that of ensuring a sufficient number of signal transitions. 
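The bit-stuffing rule described above (transmit a 0 after every run of five 1-bits; the receiver strips it again) can be sketched directly on bit strings. Function names are hypothetical:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1-bits, so the flag
    pattern 01111110 can never appear inside frame data."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")   # stuffed bit, stripped again by the receiver
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five 1-bits."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:              # drop the stuffed 0-bit
            skip, ones = False, 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

data = "0111111101111101"
print(bit_stuff(data))                       # 011111011011111001
print(bit_unstuff(bit_stuff(data)) == data)  # True
```

After stuffing, no run of six 1-bits remains, so the only place six 1-bits can appear on the wire is inside a genuine flag.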
On synchronous links, the data is NRZI encoded, so that a 0-bit is transmitted as a change in the signal on the line, and a 1-bit is sent as no change. Thus, each 0 bit provides an opportunity for a receiving modem to synchronize its clock via a phase-locked loop. If there are too many 1-bits in a row, the receiver can lose count. Bit-stuffing provides a minimum of one transition per six bit times during transmission of data, and one transition per seven bit times during transmission of a flag. When no frames are being transmitted on a simplex or full-duplex synchronous link, a frame delimiter is continuously transmitted on the link. This generates one of two continuous waveforms, depending on the initial state: The HDLC specification allows the 0-bit at the end of a frame delimiter to be shared with the start of the next frame delimiter, i.e. "011111101111110". Some hardware does not support this. For half-duplex or multi-drop communication, where several transmitters share a line, a receiver on the line will see continuous idling 1-bits in the inter-frame period when no transmitter is active. HDLC transmits bytes of data with the least significant bit first (not to be confused with little-endian order, which refers to byte ordering within a multi-byte field). === Asynchronous framing === When using asynchronous serial communication such as standard RS-232 serial ports, synchronous-style bit stuffing is inappropriate for several reasons: Bit stuffing is not needed to ensure an adequate number of transitions, as start and stop bits provide that, Because the data is NRZ encoded for transmission, rather than NRZI encoded, the encoded waveform is different, RS-232 sends bits in groups of 8, making adding single bits very awkward, and For the same reason, it is only necessary to specially code flag bytes; it is not necessary to worry about the bit pattern straddling multiple bytes. 
Instead, asynchronous framing uses "control-octet transparency", also called "byte stuffing" or "octet stuffing". The frame boundary octet is 01111110 (0x7E in hexadecimal notation). A "control escape octet" has the value 0x7D (bit sequence '10111110', as RS-232 transmits least-significant bit first). If either of these two octets appears in the transmitted data, an escape octet is sent, followed by the original data octet with bit 5 inverted. For example, the byte 0x7E would be transmitted as 0x7D 0x5E ("10111110 01111010"). Other reserved octet values (such as XON or XOFF) can be escaped in the same way if necessary. The "abort sequence" 0x7D 0x7E ends a packet with an incomplete byte-stuff sequence, forcing the receiver to detect an error. This can be used to abort packet transmission with no chance the partial packet will be interpreted as valid by the receiver. == Structure == The contents of an HDLC frame are shown in the following table: Note that the end flag of one frame may be (but does not have to be) the beginning (start) flag of the next frame. Data is usually sent in multiples of 8 bits, but only some variants require this; others theoretically permit data alignments on other than 8-bit boundaries. The frame check sequence (FCS) is a 16-bit CRC-CCITT or a 32-bit CRC-32 computed over the Address, Control, and Information fields. It provides a means by which the receiver can detect errors that may have been induced during the transmission of the frame, such as lost bits, flipped bits, and extraneous bits. However, given that the algorithms used to calculate the FCS are such that the probability of certain types of transmission errors going undetected increases with the length of the data being checked for errors, the FCS can implicitly limit the practical size of the frame. 
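The control-octet transparency described earlier in this section amounts to escaping 0x7E and 0x7D by sending 0x7D followed by the octet with bit 5 (0x20) inverted. A minimal sketch, with hypothetical function names:

```python
FLAG, ESCAPE = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    """Escape flag and escape octets: emit 0x7D, then the original octet
    with bit 5 (0x20) inverted, as in HDLC asynchronous framing."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESCAPE):
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Undo the escaping: a 0x7D means the next octet had bit 5 inverted."""
    out, it = bytearray(), iter(frame)
    for b in it:
        if b == ESCAPE:
            out.append(next(it) ^ 0x20)
        else:
            out.append(b)
    return bytes(out)

print(byte_stuff(b"\x7e\x01\x7d").hex())                             # 7d5e017d5d
print(byte_unstuff(byte_stuff(b"\x7e\x01\x7d")) == b"\x7e\x01\x7d")  # True
```

Escaping other reserved values such as XON/XOFF would only require extending the membership test in `byte_stuff`.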
If the receiver's calculation of the FCS does not match the sender's, indicating that the frame contains errors, the receiver can either send a negative acknowledge packet to the sender, or send nothing. After either receiving a negative acknowledge packet or timing out waiting for a positive acknowledge packet, the sender can retransmit the failed frame. The FCS was implemented because many early communication links had a relatively high bit error rate, and the FCS could readily be computed by simple, fast circuitry or software. More effective forward error correction schemes are now widely used by other protocols. == Types of stations (computers) and data transfer modes == Synchronous Data Link Control (SDLC) was originally designed to connect one computer with multiple peripherals via a multidrop bus. The original "normal response mode" is a primary-secondary mode where the computer (or primary terminal) gives each peripheral (secondary terminal) permission to speak in turn. Because all communication is either to or from the primary terminal, frames include only one address, that of the secondary terminal; the primary terminal is not assigned an address. There is a distinction between commands sent by the primary to a secondary, and responses sent by a secondary to the primary, but this is not reflected in the encoding; commands and responses are indistinguishable except for the difference in the direction in which they are transmitted. Normal response mode allows the secondary-to-primary link to be shared without contention, because the primary gives the secondaries permission to transmit one at a time. It also allows operation over half-duplex communication links, as long as the primary is aware that it may not transmit when it has permitted a secondary to do so. Asynchronous response mode is an HDLC addition for use over full-duplex links. While retaining the primary/secondary distinction, it allows the secondary to transmit at any time.
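As a sketch of the 16-bit FCS computation, the following implements the bit-reversed CRC-CCITT commonly used for HDLC-style framing (polynomial 0x1021 reflected to 0x8408, initial value and final XOR of 0xFFFF, as in RFC 1662's FCS-16). Treat it as illustrative rather than authoritative for any particular HDLC variant:

```python
def fcs16(data: bytes) -> int:
    """Bit-reversed CRC-CCITT over the Address, Control, and Information fields."""
    crc = 0xFFFF                      # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):            # process one bit at a time, LSB first
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF               # final one's complement
```

The receiver recomputes this over the received fields and compares it with the transmitted FCS; a mismatch leads to a negative acknowledgement or a timeout-driven retransmission, as described above.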
Thus, there must be some other mechanism to ensure that multiple secondaries do not try to transmit at the same time (or that only a single secondary exists). Asynchronous balanced mode adds the concept of a combined terminal which can act as both a primary and a secondary. Unfortunately, this mode of operation has some implementation subtleties. While for the most common frames it does not matter whether they are sent as commands or responses, some essential frames do make the distinction (notably most unnumbered frames, and any frame with the P/F bit set), and the address field of a received frame must be examined to determine whether it contains a command (the address received is ours) or a response (the address received is that of the other terminal). This means that the address field is not optional, even on point-to-point links where it is not needed to disambiguate the peer being talked to. Some HDLC variants extend the address field to include both source and destination addresses, or an explicit command/response bit. == HDLC operations and frame types == Three fundamental types of HDLC frames may be distinguished: Information frames, or I-frames, transport user data from the network layer. They can also include flow and error control information piggybacked on data. Supervisory frames, or S-frames, are used for flow and error control whenever piggybacking is impossible or inappropriate, such as when a station does not have data to send. S-frames do not have information fields. Unnumbered frames, or U-frames, are used for various miscellaneous purposes, including link management. Some U-frames contain an information field, depending on the type. === Control field === In the general format of the control field, I-frames have a 0 in the first (least significant) bit position, followed by N(S), the P/F bit, and N(R); S-frames begin with the bits "10", followed by a 2-bit type, the P/F bit, and N(R); U-frames begin with the bits "11", followed by two type bits, the P/F bit, and three more type bits. There are also extended (two-byte) forms of I and S frames. Again, the least significant bit is sent first. === P/F bit === Poll/Final is a single bit with two names.
It is called Poll when part of a command (set by the primary station to obtain a response from a secondary station), and Final when part of a response (set by the secondary station to indicate a response or the end of transmission). In all other cases, the bit is clear. The bit is used as a token that is passed back and forth between the stations. Only one token should exist at a time. The secondary only sends a Final when it has received a Poll from the primary. The primary only sends a Poll when it has received a Final back from the secondary, or after a timeout indicating that the bit has been lost. In NRM, possession of the poll token also grants the addressed secondary permission to transmit. The secondary sets the F-bit in its last response frame to give up permission to transmit. (It is equivalent to the word "Over" in radio voice procedure.) In ARM and ABM, the P bit forces a response. In these modes, the secondary need not wait for a poll to transmit, so the final bit may be included in the first response after the poll. If no response is received to a P bit in a reasonable period of time, the primary station times out and sends P again. The P/F bit is at the heart of the basic checkpoint retransmission scheme that is required to implement HDLC; all other variants (such as the REJ S-frame) are optional and only serve to increase efficiency. Whenever a station receives a P/F bit, it may assume that any frames that it sent before it last transmitted the P/F bit and not yet acknowledged will never arrive, and so should be retransmitted. When operating as a combined station, it is important to maintain the distinction between P and F bits, because there may be two checkpoint cycles operating simultaneously. A P bit arriving in a command from the remote station is not in response to our P bit; only an F bit arriving in a response is. === N(R), the receive sequence number === Both I and S frames contain a receive sequence number N(R). 
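The checkpoint retransmission scheme built on the P/F bit can be illustrated with a toy model (the class and method names here are invented for the sketch; this is not a conforming implementation): frames that were outstanding when the P/F bit was last sent, and are still unacknowledged when it returns, are assumed lost and retransmitted.

```python
class Sender:
    """Toy model of HDLC checkpoint retransmission."""
    def __init__(self):
        self.unacked = []      # N(S) values awaiting acknowledgment
        self.checkpoint = []   # frames outstanding when we last sent the P/F bit

    def send(self, ns):
        self.unacked.append(ns)

    def send_poll(self):
        # Transmitting the P/F bit records a checkpoint of outstanding frames.
        self.checkpoint = list(self.unacked)

    def ack(self, acked):
        self.unacked = [ns for ns in self.unacked if ns not in acked]

    def on_final(self):
        """P/F bit returned: anything from the checkpoint still unacked
        will never be acknowledged and must be retransmitted."""
        return [ns for ns in self.checkpoint if ns in self.unacked]
```

Frames sent after the checkpoint (like frame 2 below) are deliberately excluded; they will be covered by the next P/F cycle.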
N(R) provides a positive acknowledgement for the receipt of I-frames from the other side of the link. Its value is always the first frame not yet received; it acknowledges that all frames with N(S) values up to N(R)−1 (modulo 8 or modulo 128) have been received and indicates the N(S) of the next frame it expects to receive. N(R) operates the same way whether it is part of a command or response. A combined station only has one sequence number space. === N(S), the sequence number of the sent frame === This is incremented for successive I-frames, modulo 8 or modulo 128. Depending on the number of bits in the sequence number, up to 7 or 127 I-frames may be awaiting acknowledgment at any time. === I-Frames (user data) === Information frames, or I-frames, transport user data from the network layer. In addition they also include flow and error control information piggybacked on data. The sub-fields in the control field define these functions. The least significant bit (first transmitted) defines the frame type. 0 means an I-frame. Except for the interpretation of the P/F field, there is no difference between a command I frame and a response I frame; when P/F is 0, the two forms are exactly equivalent. === S-frames (control) === Supervisory Frames, or 'S-frames', are used for flow and error control whenever piggybacking is impossible or inappropriate, such as when a station does not have data to send. S-frames in HDLC do not have information fields, although some HDLC-derived protocols use information fields for "multi-selective reject". The S-frame control field includes a leading "10" indicating that it is an S-frame. This is followed by a 2-bit type, a poll/final bit, and a 3-bit sequence number. (Or a 4-bit padding field followed by a 7-bit sequence number.) The first (least significant) 2 bits mean it is an S-frame. All S frames include a P/F bit and a receive sequence number as described above. 
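The cumulative-acknowledgement semantics of N(R) amount to modular arithmetic, which can be sketched as follows (an illustration with invented names; MOD would be 128 in extended mode):

```python
MOD = 8  # 3-bit sequence numbers; extended mode uses 7-bit numbers, MOD = 128

def acked_frames(last_ack, nr):
    """Frames acknowledged when N(R) = nr arrives: every N(S) from the
    previous acknowledgment point up to nr - 1, modulo MOD."""
    count = (nr - last_ack) % MOD
    return [(last_ack + i) % MOD for i in range(count)]
```

For example, if frames up to 5 were previously acknowledged and N(R) = 1 arrives, frames 6, 7, and 0 are now acknowledged, and frame 1 is the next one expected.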
Except for the interpretation of the P/F field, there is no difference between a command S frame and a response S frame; when P/F is 0, the two forms are exactly equivalent. ==== Receive Ready (RR) ==== Bit value = 00 (0x00 to match above table type field bit order) Indicate that the sender is ready to receive more data (cancels the effect of a previous RNR). Send this packet if you need to send a packet but have no I frame to send. A primary station can send this with the P-bit set to solicit data from a secondary station. A secondary terminal can use this with the F-bit set to respond to a poll if it has no data to send. ==== Receive Not Ready (RNR) ==== Bit value = 01 (0x04 to match above table type field bit order) Acknowledge some packets but request no more be sent until further notice. Can be used like RR with P bit set to solicit the status of a secondary station Can be used like RR with F bit set to respond to a poll if the station is busy. ==== Reject (REJ) ==== Bit value = 10 (0x08 to match above table type field bit order) Requests immediate retransmission starting with N(R). Sent in response to an observed sequence number gap; e.g. after seeing I1/I2/I3/I5, send REJ4. Optional to generate; a working implementation may use only RR. ==== Selective Reject (SREJ) ==== Bit value = 11 (0x0c to match above table type field bit order) Requests retransmission of only the frame N(R). Not supported by all HDLC variants. Optional to generate; a working implementation may use only RR, or only RR and REJ. === U-Frames === Unnumbered frames, or U-frames, are primarily used for link management, although a few are used to transfer user data. They exchange session management and control information between connected devices, and some U-frames contain an information field, used for system management information or user data. The first 2 bits (11) mean it is a U-frame. The five type bits (2 before P/F bit and 3 bit after P/F bit) can create 32 different types of U-frame. 
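Putting the S-frame layout together: numbering bits from the least significant (first transmitted), the one-byte control field packs the "01" S-frame marker, the 2-bit type, the P/F bit, and the 3-bit N(R). A hypothetical encoder, consistent with the type values listed above for RR/RNR/REJ/SREJ:

```python
S_TYPES = {"RR": 0b00, "RNR": 0b01, "REJ": 0b10, "SREJ": 0b11}

def s_frame_control(s_type: str, pf: int, nr: int) -> int:
    """Encode a basic (3-bit sequence number) S-frame control field.
    Bit 0 = 1 and bit 1 = 0 mark an S-frame; bits 2-3 carry the type,
    bit 4 is the P/F bit, and bits 5-7 carry N(R)."""
    return 0b01 | (S_TYPES[s_type] << 2) | ((pf & 1) << 4) | ((nr & 7) << 5)
```

So a plain RR with N(R) = 0 encodes as 0x01, and REJ with the P bit set and N(R) = 4 encodes as 0x99.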
In a few cases, the same encoding is used for different things as a command and a response. ==== Mode setting ==== The various modes are described in § Link configurations. Briefly, there are two non-operational modes (initialization mode and disconnected mode) and three operational modes (normal response, asynchronous response, and asynchronous balanced modes) with 3-bit or 7-bit (extended) sequence numbers. Disconnected mode (DM) response When the secondary is disconnected (the default state on power-up), it sends this generic response to any poll (command frame with the poll flag set) except an acceptable mode setting command. It may alternatively give a FRMR response to an unacceptable mode set command. Unnumbered acknowledge (UA) response This is the secondary's response to an acceptable mode set command, indicating that it is now in the requested mode. Set ... mode (SNRM, SARM, SABM) command Place the secondary in the specified mode, with 3-bit sequence numbers (1-byte control field). The secondary acknowledges with UA. If the secondary does not implement the mode, it responds with DM or FRMR. Set ... mode extended (SNRME, SARME, SABME) command Place the secondary in the specified mode, with 7-bit sequence numbers (2-byte control field). Set mode (SM) command Generic mode set, new in ISO/IEC 13239, using an information field to select parameters. ISO/IEC 13239 added many additional options to HDLC, including 15- and 31-bit sequence numbers, which can only be selected with this command. Disconnect (DISC) command This command causes the secondary to acknowledge with UA and disconnect (enter disconnected mode). Any unacknowledged frames are lost. Request disconnect (RD) response This response requests the primary to send a DISC command. The primary should do so promptly, but may delay long enough to ensure all pending frames are acknowledged. 
Set initialization mode (SIM) command This rarely-implemented command is used to perform some secondary-specific initialization, such as downloading firmware. What happens in initialization mode is not otherwise specified in the HDLC standard. Request initialization mode (RIM) response This requests the primary to send SIM and initialize the secondary. It is sent in lieu of DM if the secondary requires initialization. ==== Information transfer ==== These frames may be used as part of normal information transfer. Unnumbered information (UI) This frame (command or response) communicates user data, but without acknowledgement or retransmission in case of error. UI with header check (UIH) This frame (command or response), an ISO/IEC 13239 addition and rarely used, is like UI but limits CRC protection: only a configurable-length prefix ("header") of the frame is covered by the CRC polynomial; errors in the rest of the frame are not detected. Unnumbered poll (UP) command This command solicits a response from the secondary. With the poll bit set, it acts like any other poll frame, without the acknowledgement that must be included in an I or S frame. With the poll bit clear, it has a special meaning in normal response mode: the secondary may respond, even though it has not received the poll bit. This is rarely used in HDLC, but was used in the original IBM SDLC as a substitute for the lack of asynchronous response mode; where the communication channel could handle simultaneous responses, the primary would periodically send UP to the broadcast address to collect any pending responses. ==== Error Recovery ==== Frame reject (FRMR) response The FRMR response contains a description of the unacceptable frame, in a standardized format.
The first 1 or 2 bytes are a copy of the rejected control field, the next 1 or 2 contain the secondary's current send and receive sequence numbers (and a flag indicating that the frame was a response, applicable only in balanced mode), and the following 4 or 5 bits are error flags indicating the reason for the rejection. The secondary repeats the same FRMR response to every poll until the error is cleared by a mode set command or RSET. The error flags are: W: the frame type (control field) is not understood or not implemented. X: the frame type does not permit an information field, but one was present. Y: the frame included an information field that is larger than the secondary can accept. Z: the frame included an invalid receive sequence number N(R), one which is not between the previously received value and the highest sequence number transmitted. (This error cannot be cleared by receiving RSET, but can be cleared by sending RSET.) V: the frame included an invalid send sequence number N(S), greater than the last number acknowledged plus the transmit window size. This error is only possible if a transmit window size smaller than the maximum has been negotiated. The error flags are normally padded with 0 bits to an 8-bit boundary, but HDLC permits frames which are not a multiple of a byte long. Reset (RSET) command The RSET command causes a secondary to reset its receive sequence number so the next expected frame is sequence number 0. This is a possible alternative to sending a new mode set command, which resets both sequence numbers. It is acknowledged with UA, like a mode set command. ==== Peer discovery ==== Exchange identification (XID) An XID command includes an information field specifying the primary's capabilities; the secondary responds with an XID response specifying its capabilities. This is normally done before sending a mode set command.
Systems Network Architecture defined one format for the information field, in which the most significant bit of the first byte is clear (0), but HDLC implementations normally implement the variant defined in ISO 8885, which has the most significant bit of the first byte set (1). TEST A TEST command is simply a ping command for debugging purposes. The payload of the TEST command is returned in the TEST response. ==== Defined in other standards ==== There are several U frames which are not part of HDLC, but defined in other related standards. Nonreserved (NR0, NR1, NR2, NR3) The "nonreserved" commands and responses are guaranteed by the HDLC standard to be available for other uses. Ack connectionless (AC0, AC1) These are defined in the IEEE 802.2 logical link control standard. Configure (CFGR) This command was defined in SDLC for debugging. It had a 1-byte payload which specified a non-standard test mode for the secondary. Even numbers disabled the mode, while odd numbers enabled it. A payload of 0 disabled all test modes. The secondary normally acknowledged a configure command by echoing it in response. Beacon (BCN) response This response was defined in SDLC to indicate a communications failure. A secondary which received no frames at all for a long time would begin sending a stream of beacon responses, allowing a unidirectional fault to be located. Note that ISO/IEC 13239 assigns UIH the same encoding as BCN. == Link configurations == Link configurations can be categorized as being either: Unbalanced, which consists of one primary terminal, and one or more secondary terminals. Balanced, which consists of two peer terminals. The three link configurations are: Normal Response Mode (NRM) is an unbalanced configuration in which only the primary terminal may initiate data transfer. The secondary terminals transmit data only in response to commands from the primary terminal. 
The primary terminal polls each secondary terminal to give it an opportunity to transmit any data it has. Asynchronous Response Mode (ARM) is an unbalanced configuration in which secondary terminals may transmit without permission from the primary terminal. However, there is still a distinguished primary terminal which retains responsibility for line initialization, error recovery, and logical disconnect. Asynchronous Balanced Mode (ABM) is a balanced configuration in which either station may initialize, supervise, recover from errors, and send frames at any time. There is no master/slave relationship. The DTE (data terminal equipment) and DCE (data circuit-terminating equipment) are treated as equals. The initiator for Asynchronous Balanced Mode sends an SABM. An additional link configuration is Disconnected mode. This is the mode that a secondary station is in before it is initialized by the primary, or when it is explicitly disconnected. In this mode, the secondary responds to almost every frame other than a mode set command with a "Disconnected mode" response. The purpose of this mode is to allow the primary to reliably detect a secondary being powered off or otherwise reset. == HDLC Command and response repertoire == The minimal set required for operation is: Commands: I, RR, RNR, DISC, and one of SNRM, SARM or SABM Responses: I, RR, RNR, UA, DM, FRMR === Basic operations === Initialization can be requested by either side. When the primary sends one of the six mode-set commands, it: Signals the other side that initialization is requested Specifies the mode (NRM, ABM, or ARM) Specifies whether 3- or 7-bit sequence numbers are in use. The HDLC module on the other end transmits a UA (unnumbered acknowledge) frame when the request is accepted. If the request is rejected, it sends a DM (disconnect mode) frame.
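The normal-response-mode discipline described above (the primary grants each secondary a turn, and a secondary speaks only when polled) can be modeled as a simple schedule. Everything in this sketch is an invented illustration of the polling order, not protocol encoding:

```python
def nrm_poll_cycle(secondaries, has_data):
    """One NRM polling pass: the primary polls each secondary in turn;
    a secondary transmits data only in response to its poll."""
    transcript = []
    for addr in secondaries:
        transcript.append(("poll", addr))          # e.g. RR command with the P bit set
        if has_data.get(addr):
            transcript.append(("data", addr))      # I-frames, the last carrying the F bit
        else:
            transcript.append(("rr_final", addr))  # RR response with the F bit, nothing to send
    return transcript
```

Because only the currently polled secondary may transmit, the shared secondary-to-primary channel is contention-free, which is what makes NRM workable on half-duplex and multidrop lines.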
=== Functional extensions (options) === For Switched Circuits Commands: ADD – XID Responses: ADD – XID, RD For 2-way Simultaneous commands & responses are ADD – REJ For Single Frame Retransmission commands & responses: ADD – SREJ For Information Commands & Responses: ADD – UI For Initialization Commands: ADD – SIM Responses: ADD – RIM For Group Polling Commands: ADD – UP Extended Addressing Delete Response I Frames Delete Command I Frames Extended Numbering For Mode Reset (ABM only) Commands are: ADD – RSET Data Link Test Commands & Responses are: ADD – TEST Request Disconnect. Responses are ADD – RD 32-bit FCS === Unnumbered frames === Unnumbered frames are identified by the low two bits being 1. With the P/F flag, that leaves 5 bits as a frame type. Even though fewer than 32 values are in use, some types have different meanings depending on the direction they are sent: as a command or as a response. The relationship between the DISC (disconnect) command and the RD (request disconnect) response seems clear enough, but the reason for making the SARM command numerically equal to the DM response is obscure. The UI, UIH, XID, and TEST frames contain a payload, and can be used as both commands and responses. The SM command and FRMR response also contain a payload. A UI frame contains user information, but unlike an I frame it is neither acknowledged nor retransmitted if lost. A UIH frame (an ISO/IEC 13239 addition) is like a UI frame, but additionally applies the frame check sequence only to a specified-length prefix of the frame; transmission errors after this prefix are not detected. The XID frame is used to exchange terminal capabilities. Systems Network Architecture defined one format, but the variant defined in ISO 8885 is more commonly used. A primary advertises its capabilities with an XID command, and a secondary returns its own capabilities in an XID response.
The TEST frame is simply a ping command for debugging purposes. The payload of the TEST command is returned in the TEST response. The SM command (an ISO/IEC 13239 addition) is a generic "set mode" command which includes an information field (in the same ISO 8885 format as XID) specifying parameters. This allows negotiation of parameters that cannot be expressed by the standard six mode-set commands, such as 15- and 31-bit sequence numbers, window sizes, and maximum frame sizes. The FRMR response contains a description of the unacceptable frame, in a standardized format. The first 1 or 2 bytes are a copy of the rejected control field, the next 1 or 2 contain the secondary's current send and receive sequence numbers, and the following 4 or 5 bits are error flags indicating the reason for the rejection. == See also == Point-to-Point Protocol Serial Line Internet Protocol Self-synchronizing code == Notes == == References == Friend, George E.; Fike, John L.; Baker, H. Charles; Bellamy, John C. (1988). Understanding Data Communications (2nd ed.). Indianapolis: Howard W. Sams & Company. ISBN 0-672-27270-9. Stallings, William (2004). Data and Computer Communications (7th ed.). Upper Saddle River: Pearson/Prentice Hall. ISBN 978-0-13-100681-2. Tanenbaum, Andrew S. (2005). Computer Networks (4th ed.). Delhi: Dorling Kindersley (India), licensee of Pearson Education in South Asia. ISBN 81-7758-165-1. == External links == PPP in a Real-time Oriented HDLC-like Framing. RFC 2687. PPP in HDLC-like Framing. RFC 1662. STD 51.
Data Communication Lectures of Manfred Lindner – Part HDLC HDLC packet format and other information ISO 3309:1984 Information Processing Systems—Data Communication—High Level Data Link Control Procedures—Frame Structure (archived) ISO 4335:1984 Data Communication—High Level Data Link Control Procedures—Consolidation of Elements of Procedures (archived) ISO/IEC 13239:2002
Wikipedia/High-Level_Data_Link_Control
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol. == Versions and variations == Sun used version 1 only for in-house experimental purposes. When the development team added substantial changes to NFS version 1 and released it outside of Sun, they decided to release the new version as v2, so that version interoperation and RPC version fallback could be tested. === NFSv2 === Version 2 of the protocol (defined in RFC 1094, March 1989) originally operated only over User Datagram Protocol (UDP). Its designers meant to keep the server side stateless, with locking (for example) implemented outside of the core protocol. People involved in the creation of NFS version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others. The Virtual File System interface allows a modular implementation, reflected in a simple protocol. By February 1986, implementations were demonstrated for operating systems such as System V release 2, DOS, and VAX/VMS using Eunice. NFSv2 only allows the first 2 GB of a file to be read due to 32-bit limitations. === NFSv3 === Version 3 (RFC 1813, June 1995) added: support for 64-bit file sizes and offsets, to handle files larger than 2 gigabytes (GB); support for asynchronous writes on the server, to improve write performance; additional file attributes in many replies, to avoid the need to re-fetch them; a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory; assorted other improvements. 
The first NFS Version 3 proposal within Sun Microsystems was created not long after the release of NFS Version 2. The principal motivation was an attempt to mitigate the performance issue of the synchronous write operation in NFS Version 2. By July 1992, implementation practice had solved many shortcomings of NFS Version 2, leaving only lack of large file support (64-bit file sizes and offsets) a pressing issue. At the time of introduction of Version 3, vendor support for TCP as a transport-layer protocol began increasing. While several vendors had already added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3. Using TCP as a transport made using NFS over a WAN more feasible, and allowed the use of larger read and write transfer sizes beyond the 8 KB limit imposed by User Datagram Protocol. ==== YANFS/WebNFS ==== YANFS (Yet Another NFS), formerly WebNFS, is an extension to NFSv2 and NFSv3 allowing it to function behind restrictive firewalls without the complexity of Portmap and MOUNT protocols. YANFS/WebNFS has a fixed TCP/UDP port number (2049), and instead of requiring the client to contact the MOUNT RPC service to determine the initial filehandle of every filesystem, it introduced the concept of a public filehandle (null for NFSv2, zero-length for NFSv3) which could be used as the starting point. Both of those changes were later incorporated into NFSv4. YANFS's post-WebNFS development has also included server-side integration. === NFSv4 === Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003 and again in RFC 7530, March 2015), influenced by Andrew File System (AFS) and Server Message Block (SMB), includes performance improvements, mandates strong security, and introduces a stateful protocol. 
Version 4 became the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over the development of the NFS protocols. NFS version 4.1 (RFC 5661, January 2010; revised in RFC 8881, August 2020) aims to provide protocol support to take advantage of clustered server deployments, including the ability to provide scalable parallel access to files distributed among multiple servers (pNFS extension). Version 4.1 includes a session trunking mechanism (also known as NFS multipathing), which is used in some enterprise solutions such as VMware ESXi. NFS version 4.2 (RFC 7862) was published in November 2016 with new features including: server-side clone and copy, application I/O advise, sparse files, space reservation, application data block (ADB), labeled NFS with sec_label that accommodates any MAC security system, and two new operations for pNFS (LAYOUTERROR and LAYOUTSTATS). One big advantage of NFSv4 over its predecessors is that only one UDP or TCP port, 2049, is used to run the service, which simplifies using the protocol across firewalls. === Other extensions === WebNFS, an extension to Version 2 and Version 3, allows NFS to integrate more easily into Web browsers and to enable operation through firewalls. In 2007 Sun Microsystems open-sourced their client-side WebNFS implementation. Various side-band protocols have become associated with NFS, including: the byte-range advisory Network Lock Manager (NLM) protocol (added to support UNIX System V file locking APIs); the remote quota-reporting (RQUOTAD) protocol, which allows NFS users to view their data-storage quotas on NFS servers; NFS over RDMA, an adaptation of NFS that uses remote direct memory access (RDMA) as a transport; and NFS-Ganesha, an NFS server running in user space and supporting various file systems like GPFS/Spectrum Scale and CephFS via respective FSAL (File System Abstraction Layer) modules.
The CephFS FSAL is supported using libcephfs. Trusted NFS (TNFS) is a further extension. == Platforms == NFS is available on: Unix-like operating systems (Solaris, AIX, HP-UX, FreeBSD, and Linux distros) AmigaOS ArcaOS Haiku IBM i, although the default networking protocol is OS/400 File Server (QFileSvr.400) macOS, although the default networking protocol is Apple Filing Protocol (AFP) Microsoft Windows, although the default networking protocol is Server Message Block (SMB) MS-DOS Novell NetWare, although the default networking protocol is NetWare Core Protocol (NCP) OpenVMS OS/2 RISC OS == Protocol development == During the development of the ONC protocol (called SunRPC at the time), only Apollo's Network Computing System (NCS) offered comparable functionality. Two competing groups formed over fundamental differences in the two remote procedure call systems. Arguments focused on the method for data encoding: ONC's External Data Representation (XDR) always rendered integers in big-endian order, even if both peers of the connection had little-endian machine architectures, whereas NCS's method attempted to avoid byte-swapping whenever two peers shared a common endianness in their machine architectures. An industry group called the Network Computing Forum formed (March 1987) in an (ultimately unsuccessful) attempt to reconcile the two network-computing environments. In 1987, Sun and AT&T announced they would jointly develop AT&T's UNIX System V Release 4. This caused many of AT&T's other UNIX System licensees to become concerned that this would put Sun in an advantaged position, and ultimately led to Digital Equipment, HP, IBM, and others forming the Open Software Foundation (OSF) in 1988. Ironically, Sun and AT&T had formerly competed over Sun's NFS versus AT&T's Remote File System (RFS), and the quick adoption of NFS over RFS by Digital Equipment, HP, IBM, and many other computer vendors tipped the majority of users in favor of NFS.
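The XDR convention at the center of the encoding argument (integers always on the wire in big-endian order, regardless of host architecture) looks like this in practice; the string layout follows RFC 4506's length-prefixed, 4-byte-padded form, and the function names here are illustrative:

```python
import struct

def xdr_int(n: int) -> bytes:
    """Encode a 32-bit integer in XDR: always big-endian ('network order'),
    no matter what byte order the host machine uses."""
    return struct.pack(">i", n)

def xdr_string(s: str) -> bytes:
    """XDR string: 4-byte big-endian length, the bytes, then zero padding
    up to the next 4-byte boundary."""
    data = s.encode()
    pad = (-len(data)) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad
```

A little-endian receiver must byte-swap every integer it decodes; NCS's scheme avoided that swap when both peers happened to share an endianness, which is exactly the trade-off the two camps argued over.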
NFS interoperability was aided by events called "Connectathons" starting in 1986 that allowed vendor-neutral testing of implementations with each other. OSF adopted the Distributed Computing Environment (DCE) and the DCE Distributed File System (DFS) over Sun/ONC RPC and NFS. DFS used DCE as the RPC, and DFS derived from the Andrew File System (AFS); DCE itself derived from a suite of technologies, including Apollo's NCS and Kerberos. === 1990s === Sun Microsystems and the Internet Society (ISOC) reached an agreement to cede "change control" of ONC RPC so that the ISOC's engineering-standards body, the Internet Engineering Task Force (IETF), could publish standards documents (RFCs) related to ONC RPC protocols and could extend ONC RPC. OSF attempted to make DCE RPC an IETF standard, but ultimately proved unwilling to give up change control. Later, the IETF chose to extend ONC RPC by adding a new authentication flavor based on Generic Security Services Application Program Interface (GSSAPI), RPCSEC GSS, to meet IETF requirements that protocol standards have adequate security. Later, Sun and ISOC reached a similar agreement to give ISOC change control over NFS, although writing the contract carefully to exclude NFS version 2 and version 3. Instead, ISOC gained the right to add new versions to the NFS protocol, which resulted in IETF specifying NFS version 4 in 2003. === 2000s === By the 21st century, neither DFS nor AFS had achieved any major commercial success as compared to SMB or NFS. IBM, which had formerly acquired the primary commercial vendor of DFS and AFS, Transarc, donated most of the AFS source code to the free software community in 2000. The OpenAFS project lives on. In early 2005, IBM announced end of sales for AFS and DFS. In January 2010, Panasas proposed changes to NFSv4.1 based on their Parallel NFS (pNFS) technology, claiming to improve data-access parallelism capability.
The NFSv4.1 protocol defines a method of separating the filesystem meta-data from file data location; it goes beyond the simple name/data separation by striping the data amongst a set of data servers. This differs from the traditional NFS server which holds the names of files and their data under the single umbrella of the server. Some products are multi-node NFS servers, but the participation of the client in separation of meta-data and data is limited. The NFSv4.1 pNFS server is a set of server resources or components; these are assumed to be controlled by the meta-data server. The pNFS client still accesses one meta-data server for traversal or interaction with the namespace; when the client moves data to and from the server it may directly interact with the set of data servers belonging to the pNFS server collection. The NFSv4.1 client can be enabled to be a direct participant in the exact location of file data and to avoid solitary interaction with one NFS server when moving data. In addition to pNFS, NFSv4.1 provides:
Sessions
Directory Delegation and Notifications
Multi-server Namespace
Access control lists and discretionary access control
Retention Attributes
SECINFO_NO_NAME
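The data/metadata separation described above lets a client that knows the file layout read from all data servers in parallel. As a sketch only, the mapping from a byte offset to the data server holding it can look like the round-robin striping below; the server names, stripe unit, and helper function are hypothetical simplifications, not the actual pNFS layout protocol:

```python
def data_server_for(offset: int, stripe_unit: int, servers: list):
    """Map a byte offset in a striped file to the data server holding it.
    Round-robin striping: stripe k lives on server k mod len(servers)."""
    stripe_index = offset // stripe_unit
    return servers[stripe_index % len(servers)]

servers = ["ds0", "ds1", "ds2"]          # hypothetical data servers
unit = 64 * 1024                          # 64 KiB stripe unit (illustrative)

# Consecutive stripes rotate across the data servers, so reads of a
# large file can be issued to several servers at once.
print([data_server_for(i * unit, unit, servers) for i in range(5)])
# → ['ds0', 'ds1', 'ds2', 'ds0', 'ds1']
```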
Wikipedia/Network_File_System
The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address. For example, an error is indicated when a requested service is not available or when a host or router could not be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications (with the exception of some diagnostic tools like ping and traceroute). A separate Internet Control Message Protocol (called ICMPv6) is used with IPv6. == Technical details == ICMP is part of the Internet protocol suite as defined in RFC 792. ICMP messages are typically used for diagnostic or control purposes or generated in response to errors in IP operations (as specified in RFC 1122). ICMP errors are directed to the source IP address of the originating packet. For example, every device (such as an intermediate router) forwarding an IP datagram first decrements the time to live (TTL) field in the IP header by one. If the resulting TTL is 0, the packet is discarded and an ICMP time exceeded message is sent to the datagram's source address. Many commonly used network utilities are based on ICMP messages. The traceroute command can be implemented by transmitting IP datagrams with specially set IP TTL header fields, and looking for ICMP time exceeded in transit and destination unreachable messages generated in response. The related ping utility is implemented using the ICMP echo request and echo reply messages. ICMP uses the basic support of IP as if it were a higher-level protocol; however, ICMP is actually an integral part of IP. Although ICMP messages are contained within standard IP packets, ICMP messages are usually processed as a special case, distinguished from normal IP processing.
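The TTL mechanism that traceroute exploits can be illustrated with a toy model: each router decrements the TTL and, on reaching zero, discards the packet and reports "time exceeded" back to the source. The router names and helper functions below are hypothetical; a real traceroute uses raw sockets, which this sketch deliberately avoids:

```python
def probe(path, ttl):
    """Send a probe with the given TTL along a list of routers.
    Returns ("time_exceeded", router) if TTL expires at that router,
    or ("echo_reply", destination) if the probe reaches the end."""
    for hop, router in enumerate(path, start=1):
        ttl -= 1                          # every forwarding device decrements TTL
        if ttl == 0 and hop < len(path):
            return ("time_exceeded", router)
    return ("echo_reply", path[-1])

def traceroute(path):
    """Discover hops by sending probes with increasing TTL values,
    collecting the router that reported each time-exceeded message."""
    hops = []
    for ttl in range(1, len(path) + 1):
        kind, node = probe(path, ttl)
        hops.append(node)
        if kind == "echo_reply":
            break
    return hops

route = ["r1", "r2", "r3", "dest"]        # hypothetical path
print(traceroute(route))  # → ['r1', 'r2', 'r3', 'dest']
```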
In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application responsible for transmitting the IP packet that prompted the ICMP message to be sent. ICMP is a network-layer protocol; this makes it a layer 3 protocol in the seven-layer OSI model. Based on the four-layer TCP/IP model, ICMP is an internet-layer protocol, which makes it a layer 2 protocol in the Internet Standard RFC 1122 TCP/IP four-layer model or a layer 3 protocol in the modern five-layer TCP/IP protocol definitions (by Kozierok, Comer, Tanenbaum, Forouzan, Kurose, Stallings). There is no port number associated with an ICMP packet, as these numbers are associated with protocols in the transport layer above, such as TCP and UDP. == Datagram structure == The ICMP packet is encapsulated in an IPv4 packet. The packet consists of header and data sections. === Header === The ICMP header starts after the IPv4 header and is identified by its protocol number, 1. All ICMP packets have an eight-byte header and a variable-sized data section. The first four bytes of the header have a fixed format, while the last four bytes depend on the type and code of the ICMP packet.
Type: 8 bits. ICMP type, see § Control messages.
Code: 8 bits. ICMP subtype, see § Control messages.
Checksum: 16 bits. Internet checksum for error checking, calculated from the ICMP header and data with value 0 substituted for this field.
Rest of Header: 32 bits. Four-byte field, contents vary based on the ICMP type and code.
=== Data === ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that caused the error message. The length of ICMP error messages should not exceed 576 bytes. This data is used by the host to match the message to the appropriate process.
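As a sketch of the header layout and checksum rule just described, the following builds an ICMP echo request (type 8, code 0) with Python's standard struct module. The identifier, sequence number, and payload are arbitrary illustrative values:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words,
    with carries folded back in, then complemented."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total > 0xFFFF:                     # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    """Pack the eight-byte header: type, code, checksum, then the
    type-specific "rest of header" (identifier and sequence for echo).
    The checksum is computed over header+data with the field zeroed,
    exactly as the text describes."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(0x1234, 1, b"ping")
# A correctly checksummed packet sums (one's-complement) to zero:
print(internet_checksum(packet))  # → 0
```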
If a higher level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data. The variable size of the ICMP packet data section has been exploited. In the "Ping of death", large or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert channels for communication. These channels are known as ICMP tunnels. == Control messages == Control messages are identified by the value in the type field. The code field gives additional context information for the message. Some control messages have been deprecated since the protocol was first introduced. === Source quench === Source Quench requests that the sender decrease the rate of messages sent to a router or host. This message may be generated if a router or host does not have sufficient buffer space to process the request, or may occur if the router or host buffer is approaching its limit. Data is sent at a very high speed from a host or from several hosts at the same time to a particular router on a network. Although a router has buffering capabilities, the buffering is limited to within a specified range. The router cannot queue any more data than the capacity of the limited buffering space. Thus if the queue gets filled up, incoming data is discarded until the queue is no longer full. But as no acknowledgement mechanism is present in the network layer, the client does not know whether the data has reached the destination successfully. Hence some remedial measures should be taken by the network layer to avoid these kinds of situations. These measures are referred to as source quench. In a source quench mechanism, the router sees that the incoming data rate is much faster than the outgoing data rate, and sends an ICMP message to the clients, informing them that they should slow down their data transfer speeds or wait for a certain amount of time before attempting to send more data.
When a client receives this message, it automatically slows down the outgoing data rate or waits for a sufficient amount of time, which enables the router to empty the queue. Thus the source quench ICMP message acts as flow control in the network layer. Since research suggested that "ICMP Source Quench [was] an ineffective (and unfair) antidote for congestion", routers' creation of source quench messages was deprecated in 1995 by RFC 1812. Furthermore, forwarding of and any kind of reaction to (flow control actions) source quench messages was deprecated from 2012 by RFC 6633. Where:
Type must be set to 4
Code must be set to 0
IP header and additional data is used by the sender to match the reply with the associated request
=== Redirect === Redirect requests that data packets be sent on an alternative route. ICMP Redirect is a mechanism for routers to convey routing information to hosts. The message informs a host to update its routing information (to send packets on an alternative route). If a host tries to send data through a router (R1) and R1 sends the data on to another router (R2) and a direct path from the host to R2 is available (that is, the host and R2 are on the same subnetwork), then R1 will send a redirect message to inform the host that the best route for the destination is via R2. The host should then change its route information and send packets for that destination directly to R2. The router will still send the original datagram to the intended destination. However, if the datagram contains routing information, this message will not be sent even if a better route is available. RFC 1122 states that redirects should only be sent by gateways and should not be sent by Internet hosts. Where:
Type must be set to 5.
Code specifies the reason for the redirection, and may be one of the following:
IP address is the 32-bit address of the gateway to which the redirection should be sent.
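The host-side effect of a redirect described above amounts to a routing-table update: future packets for the redirected destination go directly to the advertised gateway. The table layout, network prefixes, and router names in this sketch are hypothetical:

```python
# Toy host routing table mapping destination networks to next-hop routers.
routes = {"198.51.100.0/24": "R1", "203.0.113.0/24": "R1"}

def handle_redirect(routes, destination_net, better_gateway):
    """Apply an ICMP redirect: point the destination at the gateway
    (e.g. R2) that the redirecting router advertised."""
    routes[destination_net] = better_gateway

# R1 notices that R2 is a better first hop for 198.51.100.0/24 and
# sends a redirect; the host updates only that entry.
handle_redirect(routes, "198.51.100.0/24", "R2")
print(routes["198.51.100.0/24"])  # → R2
print(routes["203.0.113.0/24"])   # → R1 (unchanged)
```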
IP header and additional data is included to allow the host to match the reply with the request that caused the redirection reply. === Time exceeded === Time Exceeded is generated by a gateway to inform the source of a discarded datagram due to the time to live field reaching zero. A time exceeded message may also be sent by a host if it fails to reassemble a fragmented datagram within its time limit. Time exceeded messages are used by the traceroute utility to identify gateways on the path between two hosts. Where:
Type must be set to 11
Code specifies the reason for the time exceeded message, which may include the following:
IP header and first 64 bits of the original payload are used by the source host to match the time exceeded message to the discarded datagram. For higher-level protocols such as UDP and TCP the 64-bit payload will include the source and destination ports of the discarded packet.
=== Timestamp === Timestamp is used for time synchronization. The originating timestamp is set to the time (in milliseconds since midnight) the sender last touched the packet. The receive and transmit timestamps are not used. Where:
Type must be set to 13
Code must be set to 0
Identifier and Sequence Number can be used by the client to match the timestamp reply with the timestamp request
Originate timestamp is the number of milliseconds since midnight Universal Time (UT). If a UT reference is not available the most-significant bit can be set to indicate a non-standard time value.
=== Timestamp reply === Timestamp Reply replies to a Timestamp message. It consists of the originating timestamp sent by the sender of the Timestamp as well as a receive timestamp indicating when the Timestamp was received and a transmit timestamp indicating when the Timestamp reply was sent. Where:
Type must be set to 14
Code must be set to 0
Identifier and Sequence number can be used by the client to match the reply with the request that caused the reply.
Originate timestamp is the time the sender last touched the message before sending it. Receive timestamp is the time the echoer first touched it on receipt. Transmit timestamp is the time the echoer last touched the message on sending it. All timestamps are in units of milliseconds since midnight UT. If the time is not available in milliseconds or cannot be provided with respect to midnight UT then any time can be inserted in a timestamp provided the high order bit of the timestamp is also set to indicate this non-standard value. The use of Timestamp and Timestamp Reply messages to synchronize the clocks of Internet nodes has largely been replaced by the UDP-based Network Time Protocol and the Precision Time Protocol. === Address mask request === Address mask request is normally sent by a host to a router in order to obtain an appropriate subnet mask. Recipients should reply to this message with an Address mask reply message. Where:
Type must be set to 17
Code must be set to 0
Address mask can be set to 0
ICMP Address Mask Request may be used as a part of a reconnaissance attack to gather information on the target network; therefore ICMP Address Mask Reply is disabled by default on Cisco IOS. === Address mask reply === Address mask reply is used to reply to an address mask request message with an appropriate subnet mask. Where:
Type must be set to 18
Code must be set to 0
Address mask should be set to the subnet mask
=== Destination unreachable === Destination unreachable is generated by the host or its inbound gateway to inform the client that the destination is unreachable for some reason. Reasons for this message may include: the physical connection to the host does not exist (distance is infinite); the indicated protocol or port is not active; the data must be fragmented but the 'don't fragment' flag is on. Unreachable TCP ports notably respond with TCP RST rather than a destination unreachable type 3 as might be expected.
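The Timestamp and Timestamp Reply messages described above carry times as milliseconds since midnight UT. A minimal sketch of that computation (the function name is ours):

```python
from datetime import datetime, timezone

def icmp_timestamp(now: datetime) -> int:
    """Milliseconds since midnight Universal Time, as carried in the
    originate/receive/transmit fields of ICMP types 13 and 14."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    delta = now - midnight
    return delta.seconds * 1000 + delta.microseconds // 1000

# 01:02:03.004 UT is 1 h 2 min 3.004 s after midnight.
t = datetime(2024, 1, 1, 1, 2, 3, 4000, tzinfo=timezone.utc)
print(icmp_timestamp(t))  # → 3723004
```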
Destination unreachable is never reported for IP multicast transmissions. With the following field contents:
Type: 8 bits; Type == 3. A value of 3 indicates 'Destination unreachable'.
Code: 8 bits. This specifies the type of error, and can be any of the following:
Unused: 8 - 32 bits; Unused == 0. Unused, must be set to zero. If Length or Next-hop MTU are not used, they are considered part of this field.
Length: 8 bits. Optional. The Length field indicates the length of the original datagram data, in 32-bit words. This allows this ICMP message to be extended with extra information. If used, the original datagram data must be padded with zeroes to the nearest 32-bit boundary.
Next-hop MTU: 16 bits. Optional. Contains the MTU of the next-hop network if a code 4 error occurs.
IP header and data: 20 - 568 bytes. The IP header (20 bytes) and at most 548 bytes of the start of the original datagram (so as not to exceed the minimum IPv4 reassembly buffer size). If this message is extended then this field must contain at least 128 bytes of original datagram data (padded with zeroes if necessary). These data are included to allow the client to match the reply with the request that caused the Destination unreachable reply.
== Extensions == ICMP messages can be extended with extra information. This information is carried in one or more Extension Objects, which are preceded by an ICMP Extension Header.
Version: 4 bits; Version == 2. Extension header version.
Reserved: 12 bits; Reserved == 0. Reserved.
Checksum: 16 bits. Checksum over this header and all extension objects. This field itself is included, so it is set to zero while performing the calculation.
Extension objects have the following general structure:
Length: 16 bits. The length of the object in octets, including the header.
Class-Num: 8 bits. Identifies the object's class.
C-Type: 8 bits. Identifies the object's subtype.
Object payload: Variable. Optional payload.
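The rule that the embedded IP header plus the first eight bytes of original data carry the transport ports can be sketched by parsing a hand-built destination-unreachable message. The packet here is illustrative, and the sketch assumes the embedded IPv4 header carries no options:

```python
import struct

def extract_ports(icmp_error: bytes):
    """Given an ICMP destination-unreachable message (type 3), pull the
    source and destination ports out of the embedded original datagram.
    Assumes the embedded IPv4 header has IHL == 5 words (no options)."""
    icmp_type, code = icmp_error[0], icmp_error[1]
    assert icmp_type == 3
    inner = icmp_error[8:]                    # embedded original datagram
    ihl = (inner[0] & 0x0F) * 4               # inner IP header length in bytes
    # Ports sit in the first 4 bytes of the original transport payload.
    src_port, dst_port = struct.unpack("!HH", inner[ihl:ihl + 4])
    return code, src_port, dst_port

# Build a minimal fake message: 8-byte ICMP header (type 3, code 3 =
# "port unreachable", checksum left zero), a 20-byte inner IPv4 header,
# and 8 bytes of UDP header from the offending datagram.
inner_ip = bytes([0x45]) + bytes(19)          # version 4, IHL 5, rest zeroed
udp = struct.pack("!HHHH", 40000, 53, 8, 0)   # src 40000, dst 53 (DNS)
msg = struct.pack("!BBHI", 3, 3, 0, 0) + inner_ip + udp
print(extract_ports(msg))  # → (3, 40000, 53)
```

This is how a host demultiplexes the error back to the socket that sent the original packet.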
If nonempty, it contains a data structure whose size is a multiple of 32 bits.
== See also ==
ICMP hole punching
PathPing
Path MTU Discovery
Smurf attack
== References ==
== External links ==
IANA protocol numbers
Explanation of ICMP Redirect Behavior at the Wayback Machine (archived 2015-01-10)
Wikipedia/Internet_Control_Message_Protocol
The public switched telephone network (PSTN) is the aggregate of the world's telephone networks that are operated by national, regional, or local telephony operators. It provides infrastructure and services for public telephony. The PSTN consists of telephone lines, fiber-optic cables, microwave transmission links, cellular networks, communications satellites, and undersea telephone cables interconnected by switching centers, such as central offices, network tandems, and international gateways, which allow telephone users to communicate with each other. Originally a network of fixed-line analog telephone systems, the PSTN is now predominantly digital in its core network and includes terrestrial cellular, satellite, and landline systems. These interconnected networks enable global communication, allowing calls to be made to and from nearly any telephone worldwide. Many of these networks are progressively transitioning to Internet Protocol to carry their telephony traffic. The technical operation of the PSTN adheres to the standards internationally promulgated by the ITU-T. These standards have their origins in the development of local telephone networks, primarily in the Bell System in the United States and in the networks of European ITU members. The E.164 standard provides a single global address space in the form of telephone numbers. The combination of the interconnected networks and a global telephone numbering plan allows telephones around the world to connect with each other. == History == Commercialization of the telephone began shortly after its invention, with instruments operated in pairs for private use between two locations. Users who wanted to communicate with persons at multiple locations had as many telephones as necessary for the purpose. Alerting another user of the desire to establish a telephone call was accomplished by whistling loudly into the transmitter until the other party heard the alert. Bells were soon added to stations for signaling. 
Later telephone systems took advantage of the exchange principle already employed in telegraph networks. Each telephone was wired to a telephone exchange established for a town or area. For communication outside this exchange area, trunks were installed between exchanges. Networks were designed in a hierarchical manner until they spanned cities, states, and international distances. Automation introduced pulse dialing between the telephone and the exchange so that each subscriber could directly dial another subscriber connected to the same exchange, but long-distance calling across multiple exchanges required manual switching by operators. Later, more sophisticated address signaling, including multi-frequency signaling methods, enabled direct-dialed long-distance calls by subscribers, culminating in the Signalling System 7 (SS7) network that controlled calls between most exchanges by the end of the 20th century. The growth of the PSTN was enabled by teletraffic engineering techniques to deliver quality of service (QoS) in the network. The work of A. K. Erlang established the mathematical foundations of methods required to determine the capacity requirements and configuration of equipment and the number of personnel required to deliver a specific level of service. In the 1970s, the telecommunications industry began implementing packet-switched network data services using the X.25 protocol transported over much of the end-to-end equipment as was already in use in the PSTN. These became known as public data networks, or public switched data networks. In the 1980s, the industry began planning for digital services assuming they would follow much the same pattern as voice services and conceived end-to-end circuit-switched services, known as the Broadband Integrated Services Digital Network (B-ISDN). The B-ISDN vision was overtaken by the disruptive technology of the Internet. 
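Erlang's results, mentioned above, underpin this kind of capacity planning. As a sketch, the Erlang B formula gives the probability that a new call is blocked for a given offered load and number of circuits; the recursive form below is the standard numerically stable one, though the function and variable names are ours:

```python
def erlang_b(offered_load: float, circuits: int) -> float:
    """Blocking probability B(E, m): the chance a new call finds all
    m circuits busy, given offered traffic E in erlangs.
    Uses the recursion B(E, 0) = 1,
    B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
    b = 1.0
    for k in range(1, circuits + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# 2 erlangs of offered traffic on 2 circuits blocks 40% of calls;
# adding circuits drives blocking down, which is how trunk groups
# between exchanges were dimensioned for a target grade of service.
print(round(erlang_b(2.0, 2), 3))   # → 0.4
print(erlang_b(2.0, 10) < 0.01)     # → True
```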
At the turn of the 21st century, the oldest parts of the telephone network still used analog baseband technology to deliver audio-frequency connectivity over the last mile to the end-user. However, digital technologies such as DSL, ISDN, FTTx, and cable modems were progressively deployed in this portion of the network, primarily to provide high-speed Internet access. As of 2023, operators worldwide are in the process of retiring support for both last-mile analog telephony and ISDN, and transitioning voice service to Voice over IP via Internet access delivered either via DSL, cable modems or fiber-to-the-premises, eliminating the expense and complexity of running two separate technology infrastructures for PSTN and Internet access. Several large private telephone networks are not linked to the PSTN, usually for military purposes. There are also private networks run by large companies that are linked to the PSTN only through limited gateways, such as a large private branch exchange (PBX). == Operators == The task of building the networks and selling services to customers fell to the network operators. The first company to be incorporated to provide PSTN services was the Bell Telephone Company in the United States. In some countries, however, the job of providing telephone networks fell to government as the investment required was very large and the provision of telephone service was increasingly becoming an essential public utility. For example, the General Post Office in the United Kingdom brought together a number of private companies to form a single nationalized company. In more recent decades, these state monopolies were broken up or sold off through privatization. == Technology == === Network topology === The architecture of the PSTN evolved over time to support an increasing number of subscribers, call volume, destinations, features, and technologies. 
The principles developed in North America and in Europe were adopted by other nations, with adaptations for local markets. A key concept was that the telephone exchanges are arranged into hierarchies, so that if a call cannot be handled in a local cluster, it is passed to one higher up for onward routing. This reduced the number of connecting trunks required between operators over long distances, and also kept local traffic separate. Modern technologies have brought simplifications. === Digital channels === Most automated telephone exchanges use digital switching rather than mechanical or analog switching. The trunks connecting the exchanges are also digital, called circuits or channels. However, analog two-wire circuits are still used to connect the last mile from the exchange to the telephone in the home (also called the local loop). To carry a typical phone call from a calling party to a called party, the analog audio signal is digitized at an 8 kHz sample rate with 8-bit resolution using a special type of nonlinear pulse-code modulation known as G.711. The call is then transmitted from one end to the other via telephone exchanges. The call is switched using a call set-up protocol (usually ISUP) between the telephone exchanges under an overall routing strategy. The call is carried over the PSTN using a 64 kbit/s channel, originally designed by Bell Labs. The name given to this channel is Digital Signal 0 (DS0). The DS0 circuit is the basic granularity of circuit switching in a telephone exchange. A DS0 is also known as a timeslot because DS0s are aggregated in time-division multiplexing (TDM) equipment to form higher capacity communication links. A Digital Signal 1 (DS1) circuit carries 24 DS0s on a North American or Japanese T-carrier (T1) line, or 32 DS0s (30 for calls plus two for framing and signaling) on an E-carrier (E1) line used in most other countries.
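The channel arithmetic above can be checked directly; the 8 kbit/s of T1 framing overhead (one framing bit per 193-bit frame at 8,000 frames/s) is the standard figure:

```python
# A DS0 carries one G.711 call: 8,000 samples/s × 8 bits/sample.
ds0_bps = 8_000 * 8
print(ds0_bps)  # → 64000 (64 kbit/s)

# A North American/Japanese T1 multiplexes 24 DS0s plus one framing
# bit per 193-bit frame, i.e. 8 kbit/s of overhead.
t1_bps = 24 * ds0_bps + 8_000
print(t1_bps)  # → 1544000 (1.544 Mbit/s)

# A European E1 carries 32 timeslots of 64 kbit/s each
# (30 for calls, two for framing and signalling).
e1_bps = 32 * ds0_bps
print(e1_bps)  # → 2048000 (2.048 Mbit/s)
```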
In modern networks, the multiplexing function is moved as close to the end user as possible, usually into cabinets at the roadside in residential areas, or into large business premises. These aggregated circuits are conveyed from the initial multiplexer to the exchange over a set of equipment collectively known as the access network. The access network and inter-exchange transport use synchronous optical transmission, for example, SONET and Synchronous Digital Hierarchy (SDH) technologies, although some parts still use the older PDH technology. The access network defines a number of reference points. Most of these are of interest mainly to ISDN but one, the V reference point, is of more general interest. This is the reference point between a primary multiplexer and an exchange. The protocols at this reference point were standardized in ETSI areas as the V5 interface. === Impact on IP standards === Voice quality in PSTN networks was used as a benchmark for the development of the Telecommunications Industry Association's TIA-TSB-116 standard on voice-quality recommendations for IP telephony, to determine acceptable levels of audio latency and echo. == Regulation == In most countries, the government has a regulatory agency dedicated to the provisioning of PSTN services. The agency regulates technical standards and legal requirements, and sets service obligations; its tasks may include, for example, ensuring that end customers are not over-charged for services where monopolies may exist. These regulatory agencies may also regulate the prices charged between the operators to carry each other's traffic. == Service retirement == In the United Kingdom, the copper POTS and ISDN-based PSTN is being retired in favour of SIP telephony, with an original completion date of December 2025, although this has now been put back to January 2027. See United Kingdom PSTN switch-off.
Voice telephony will continue to follow the E.163 and E.164 standards, as with current mobile telephony, with the interface to end-users remaining the same. Several other European countries, including Estonia, Germany, Iceland, the Netherlands, Spain and Portugal, have also retired, or are planning to retire, their PSTN networks. Countries in other continents are also performing similar transitions. == See also ==
List of country calling codes
Managed facilities-based voice network
Phreaking
Plain old telephone service (POTS)
Via Net Loss
== References ==
Wikipedia/Public_switched_telephone_network
In information systems, applications architecture or application architecture is one of several architecture domains that form the pillars of an enterprise architecture (EA). == Scope == An applications architecture describes the behavior of applications used in a business, focused on how they interact with each other and with users. It is focused on the data consumed and produced by applications rather than their internal structure. For example, in application portfolio management, applications are mapped to business functions and processes as well as costs, functional quality and technical quality in order to assess the value provided. The applications architecture is specified on the basis of business and functional requirements. This involves defining the interaction between application packages, databases, and middleware systems in terms of functional coverage. This helps identify any integration problems or gaps in functional coverage. A migration plan can then be drawn up for systems which are at the end of the software life cycle or which have inherent technological risks, i.e. a potential to disrupt the business as a consequence of a technological failure. Applications architecture tries to ensure that the suite of applications being used by an organization to create the composite architecture is scalable, reliable, available and manageable. Applications architecture defines how multiple applications are poised to work together. It is different from software architecture, which deals with technical designs of how a system is built. One not only needs to understand and manage the dynamics of the functionalities the composite architecture is implementing but also help formulate the deployment strategy and keep an eye out for technological risks that could jeopardize the growth and/or operations of the organization. == Strategy == Applications architecture strategy involves ensuring the applications and the integration align with the growth strategy of the organization.
If an organization is a manufacturing organization with fast growth plans through acquisitions, the applications architecture should be nimble enough to encompass inherited legacy systems as well as other large competing systems. == Patterns == Applications can be classified in various types depending on the applications architecture pattern they follow. A "pattern" has been defined as: "an idea that has been useful in one practical context and will probably be useful in others". To create patterns, one needs building blocks. Building blocks are components of software, mostly reusable, which can be utilized to create certain functions. Patterns are a way of putting building blocks into context and describe how to use the building blocks to address one or multiple architectural concerns. An application is a compilation of various functionalities, all typically following the same pattern. This pattern defines the application's pattern. Application patterns can describe structural (deployment/distribution-related) or behavioural (process flow or interaction/integration-related) characteristics and an application architecture may leverage one or a mix of patterns. The idea of patterns has been around almost since the beginning of computer science, but it was most famously popularized by the "Gang of Four" (GoF), though many of their patterns are "software architecture" patterns rather than "application architecture" patterns. In addition to the GoF, Thomas Erl is a well-known author of various types of patterns, and most of the large software tools vendors, such as Microsoft, have published extensive pattern libraries. Despite the plethora of patterns that have been published, there are relatively few patterns that can be thought of as "industry standard". Some of the best-known of these include: single-tier/thick client/desktop application (structural pattern): an application that exists only on a single computer, typically a desktop.
One can, of course, have the same desktop application on many computers, but they do not interact with one another (with rare exceptions). client-server/2-tier (structural pattern): an application that consists of a front-end (user-facing) layer running as a rich client that communicates with a back-end (server) which provides business logic, workflow, integration and data services. In contrast to desktop applications (which are single-user), client-server applications are almost always multi-user applications. n-tier (structural pattern): an extension of the client-server pattern, where the server functions are split into multiple layers, which are distributed onto different computers across a local-area network (LAN). distributed (structural pattern): an extension of the n-tier pattern where the server functions are distributed across a wide-area network (WAN) or cloud. This pattern also includes some behavioural pattern attributes because the server functions must be designed to be more autonomous and function in an asynchronous dialog with the other functions in order to deal with potentially-significant latency that can occur in WAN and cloud deployment scenarios. horizontal scalability (structural pattern): a pattern for running multiple copies of server functions on multiple computers in such a way that increasing processing load can be spread across increasing numbers of instances of the functions rather than having to re-deploy the functions on larger, more powerful computers. Cloud-native applications are fundamentally based on horizontal scalability. event-driven architecture (behavioural pattern): Data events (which may have initially originated from a device, application, user, data store or clock) and event detection logic which may conditionally discard the event, initiate an event-related process, alert a user or device manager, or update a data store.
The event-driven pattern is fundamental to the asynchronous processing required by the distributed architecture pattern. ETL (behavioural pattern): An application process pattern for extracting data from an originating source, transforming that data according to some business rules, and then loading that data into a destination. Variations on the ETL pattern include ELT and ETLT. Request-Reply (behavioural pattern): An application integration pattern for exchanging data where the application requests data from another application and waits for a reply containing the requested data. This is the most prominent example of a synchronous pattern, in contrast to the asynchronous processing referred to in previous pattern descriptions. The right applications pattern depends on the organization's industry and use of the component applications. An organization could have a mix of multiple patterns if it has grown both organically and through acquisitions. == Application architect == TOGAF describes both the skills and the role expectations of an Application architect. These skills include an understanding of application modularization/distribution, integration, high availability, and scalability patterns, technology and trends. Increasingly, an understanding of application containers, serverless computing, storage, data and analytics, and other cloud-related technology and services are required application architect skills. While a software background is a great foundation for an application architect, programming and software design are not skills required of an application architect (these are actually skills for a Software Architect, who is a leader on the computer programming team). 
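The event-driven pattern described above can be illustrated with a minimal dispatcher; all names here are hypothetical, and a real system would use a message broker rather than in-process function calls:

```python
# Minimal event-driven sketch: detection logic routes events to
# registered handlers, which may update a data store; events with no
# handler are discarded, mirroring the "conditionally discard" case.
handlers = {}
store = {}

def on(event_type):
    """Decorator registering a handler for an event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type, payload):
    """Dispatch an event to every handler for its type (if any)."""
    for fn in handlers.get(event_type, []):
        fn(payload)

@on("order_placed")
def update_store(payload):
    store[payload["id"]] = payload

emit("order_placed", {"id": 1, "item": "widget"})
emit("heartbeat", {})            # no handler registered: discarded
print(store)  # → {1: {'id': 1, 'item': 'widget'}}
```

The asynchronous, fire-and-forget shape of emit is what makes this pattern a fit for the distributed architectures discussed earlier.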
=== Knowledge domains === Application modeling Employs modeling as a framework for the deployment and integration of new or enhanced applications; uses modeling to find problems, reduce risk, improve predictability, and reduce cost and time-to-market; tests various product scenarios, incorporating clients' nonfunctional needs/requirements; adds test design decisions to the development process as necessary; evaluates product design problems. Competitive intelligence, business modeling, strategic analysis Understanding of the global marketplace, consumers, industries and competition, and how global business models, strategies, finances, operations and structures interrelate. Understanding of the competitive environment, including current trends in the market, industry, competition and regulatory environment, as well as understanding of how the components of the business model (i.e. strategy, finances, operations) interrelate to make the organization competitive in the marketplace. Understanding of the organization's business processes, systems, tools, regulations and structure and how they interrelate to provide products and services that create value for customers, consumers and key stakeholders. Understanding of how the value created for customers, consumers and key stakeholders aligns with the organization's vision, business, culture, value proposition, brand promise and strategic imperatives. Understanding of the organization's past and present achievements and shortcomings to assess strengths, weaknesses, opportunities and risks in relation to the competitive environment. Technology Understanding of IT strategy, development lifecycle and application/infrastructure maintenance; understanding of IT service and support processes to promote competitive advantage, create efficiencies and add value to the business.
Technology standards Demonstrates a thorough understanding of the key technologies which form the infrastructure necessary to effectively support existing and future business requirements; ensures that all hardware and software comply with baseline requirements and standards before being integrated into the business environment; understands and is able to develop technical standards and procedures to facilitate the use of new technologies; develops useful guidelines for using and applying new technologies. === Tasks === An applications architect is a master of everything application-specific in an organization. An applications architect provides strategic guidelines to the applications maintenance teams by understanding all the applications from the following perspectives:
Interoperability capability
Performance and scalability
Reliability and availability
Application lifecycle stage
Technological risks
Number of instances
The above analysis will point out applications that need a range of changes – from a change in deployment strategy for fragmented applications to a total replacement for applications at the end of their technology or functionality lifecycle. ==== Functionality footprint ==== Understand the system process flow of the primary business processes. This gives a clear picture of the functionality map and the footprint of various applications across the map. Many organizations lack documentation discipline and hence lack detailed business process flows and system process flows; one may have to start an initiative to put those in place first. ==== Create solution architecture guidelines ==== Every organization has a core set of applications that are used across multiple divisions, either as a single instance or as a different instance per division. Create a solution architecture template for all the core applications so that all projects have a common starting ground for designing implementations.
The standards in the architecture world are defined in TOGAF; The Open Group Architecture Framework describes the four components of EA as BDAT (Business architecture, Data architecture, Application architecture and Technical architecture). There are also other standards to consider, depending on the level of complexity of the organization:
The Zachman Framework for EA
Federal enterprise architecture (FEA)
Gartner
== See also ==
ISO/IEC 42010 Systems and software engineering — Architecture description, an international standard for architecture descriptions of systems and software.
IEEE 1471, a superseded IEEE standard for describing the architecture of a "software-intensive system", also known as software architecture.
IBM Systems Application Architecture
Enterprise architecture planning
High-availability application architecture
== References == "Phase C: Information Systems Architectures - Application Architecture". TOGAF 9.1. Retrieved 2017-07-26. Hunter, Roy; Rasmussen, Brian. "Applications Architecture". Oracle. Retrieved 2017-07-26.
Wikipedia/Applications_architecture
An application layer is an abstraction layer that specifies the shared communication protocols and interface methods used by hosts in a communications network. An application layer abstraction is specified in both the Internet Protocol Suite (TCP/IP) and the OSI model. Although both models use the same term for their respective highest-level layer, the detailed definitions and purposes are different. == Internet protocol suite == In the Internet protocol suite, the application layer contains the communications protocols and interface methods used in process-to-process communications across an Internet Protocol (IP) computer network. The application layer only standardizes communication and depends upon the underlying transport layer protocols to establish host-to-host data transfer channels and manage the data exchange in a client–server or peer-to-peer networking model. Though the TCP/IP application layer does not describe specific rules or data formats that applications must consider when communicating, the original specification (in RFC 1123) does rely on and recommend the robustness principle for application design. == OSI model == In the OSI model, the definition of the application layer is narrower in scope. The OSI model defines the application layer as only the interface responsible for communicating with host-based and user-facing applications. OSI then explicitly distinguishes the functionality of two additional layers, the session layer and presentation layer, as separate levels below the application layer and above the transport layer. OSI specifies a strict modular separation of functionality at these layers and provides protocol implementations for each. In contrast, the Internet Protocol Suite compiles these functions into a single layer. === Sublayers === Originally the OSI model consisted of two kinds of application layer services with their related protocols. 
These two sublayers are the common application service element (CASE) and the specific application service element (SASE). Generally, an application layer protocol is realized by using the functionality of several application service elements. Some application service elements invoke different procedures based on the version of the session service available. ==== CASE ==== The common application service element sublayer provides services for the application layer and requests services from the session layer. It provides support for common application services, such as:
ACSE (Association Control Service Element)
ROSE (Remote Operation Service Element)
CCR (Commitment, Concurrency and Recovery)
RTSE (Reliable Transfer Service Element)
==== SASE ==== The specific application service element sublayer provides application-specific services (protocols), such as:
FTAM (File Transfer, Access and Management)
VT (Virtual Terminal)
MOTIS (Message Oriented Text Interchange Standard)
CMIP (Common Management Information Protocol)
JTM (Job Transfer and Manipulation)
MMS (Manufacturing Message Specification)
RDA (Remote Database Access)
DTP (Distributed Transaction Processing)
== Protocols == The IETF definition document for the application layer in the Internet Protocol Suite is RFC 1123.
It provided an initial set of protocols that covered the major aspects of the functionality of the early Internet:
Hypertext documents: Hypertext Transfer Protocol (HTTP)
Remote login to hosts: Telnet, Secure Shell
File transfer: File Transfer Protocol (FTP), Trivial File Transfer Protocol (TFTP)
Electronic mail transport: Simple Mail Transfer Protocol (SMTP)
Networking support: Domain Name System (DNS)
Host initialization: BOOTP
Remote host management: Simple Network Management Protocol (SNMP), Common Management Information Protocol over TCP (CMOT)
=== Examples === Additional notable application-layer protocols include the following: == References == == External links == Media related to Application layer protocols at Wikimedia Commons Learning materials related to Application layer at Wikiversity
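The layering described in this article can be made concrete with a small sketch: an application-layer protocol defines the message formats, while the transport layer (here TCP, via the operating system's socket interface) provides the host-to-host data channel. The one-line "PING"/"PONG" protocol below is invented purely for illustration:

```python
# Illustrative sketch: a toy application-layer protocol ("PING"/"PONG",
# an invented format) running over a TCP transport channel on loopback.
import socket
import threading

def server(sock: socket.socket) -> None:
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()   # application-layer message
        if request.strip() == "PING":
            conn.sendall(b"PONG\n")          # application-layer reply

listener = socket.socket()
listener.bind(("127.0.0.1", 0))              # TCP supplies host-to-host transfer
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"PING\n")                # the application defines the format
    reply = client.recv(1024).decode().strip()
print(reply)  # PONG
```

Note that nothing in the transport layer interprets "PING" or "PONG"; only the two application endpoints assign meaning to the bytes, which is exactly the division of responsibility the TCP/IP application layer standardizes.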
Wikipedia/Application_layer
A dumb pipe or dumb network, in relation to a mobile network operator (MNO), is a simple network that has high enough bandwidth to transfer bytes between the customer's device and the Internet without the need to prioritize content. This means it can afford to be completely neutral with regard to the services and applications the customer accesses. This is in contrast to a smart pipe, where the operator affects the customer's accessibility of the Internet by either limiting the available services or applications to its own proprietary portal (like a walled garden) or offering additional capabilities and services beyond simple connectivity. A dumb pipe primarily provides simple bandwidth and network capacity greater than the maximum expected network loads, thus avoiding the need to discriminate between packet types. Among the commonly understood operational models for an MNO are dumb pipes, smart pipes, and walled gardens. == Description == A dumb network is marked by using intelligent devices (e.g. computers) at the periphery that make use of a network that does not interfere with or manage an application's operation or communication. The dumb network concept is the natural outcome of the end-to-end principle. The Internet was originally designed to operate as a dumb network. In some circles the dumb network is regarded as a natural culmination of progress in network technology. With the justification that the dumb network uniquely satisfies the requirements of the end-to-end principle for application creation, supporters see the dumb network as uniquely qualified for this purpose, as – by design – it is not sensitive to the needs of applications. The dumb network model can, in some ways, allow for flexibility and ease of innovation in the development of applications that is not matched by other models. == Opinions == === Criticism === Critics of dumb network architecture posit two arguments in favor of intelligent networks.
The first is that the transmission needs of certain users and applications are more important than others and thus should be granted greater network priority or quality of service. For example, real-time video applications are more time-sensitive than, say, text applications. Thus video transmissions would receive network priority to prevent picture skips, while text transmissions could be delayed without significantly affecting their application performance. The second is that networks should be able to defend against malware and cyberattacks. === Support === Advocates of dumb networks counter the first argument by pointing out that prioritizing network traffic is very expensive in terms of money, technology, and network performance. Dumb network advocates also consider that the real purpose of prioritizing network traffic is to overcome insufficient bandwidth to handle the traffic, not a network protocol issue. The security counterargument is that malware is an end-to-end problem and thus should be dealt with at the endpoints, and that attempting to adapt the network to counter attacks is both cumbersome and inefficient. "In a world of dumb terminals and telephones, networks had to be smart. But in a world of smart terminals, networks have to be dumb." == See also == Closed platform Quality of service Series of tubes Smart pipe == References == == External links == The Operators vs. the Media Brands Archived 2015-03-29 at the Wayback Machine The Pipe Is Only Dumb If You Make It That Way Juniper Research Report: Business Models for Mobile Content Players, Strategic Options & Scenarios 2007-2012 (Paywall) Study About MNOs Share of Mobile Content Market Rise of the Stupid Network, original release May 1997, by David S. Isenberg of AT&T Labs Research that explains several dumb network concepts.
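The contrast at the heart of this debate, a neutral first-in-first-out pipe versus one that inspects and prioritizes traffic classes, can be sketched in a few lines. The packet tuples and the "video before text" policy below are invented for illustration only:

```python
# Toy contrast between a "dumb" FIFO pipe (no packet inspection) and a
# "smart" pipe that prioritizes a traffic class.  Packets are invented
# (class, id) tuples; ids reflect arrival order.
import heapq
from collections import deque

packets = [("text", 1), ("video", 2), ("text", 3), ("video", 4)]

# Dumb pipe: first-in, first-out, completely neutral to packet type.
dumb = deque(packets)
dumb_order = [dumb.popleft()[1] for _ in range(len(packets))]

# Smart pipe: video is granted priority (lower key = served first).
priority = {"video": 0, "text": 1}
heap = [(priority[cls], pid) for cls, pid in packets]
heapq.heapify(heap)
smart_order = [heapq.heappop(heap)[1] for _ in range(len(packets))]

print(dumb_order)   # [1, 2, 3, 4] - arrival order preserved
print(smart_order)  # [2, 4, 1, 3] - video (2, 4) served before text (1, 3)
```

The dumb-pipe advocates' point is visible in the first half: if capacity exceeds load, the FIFO queue drains fast enough that the reordering logic in the second half buys nothing.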
Wikipedia/Dumb_network
Spawning networks are a class of programmable networks that automate the life cycle process for the creation, deployment, and management of virtual network architectures. This approach replaces the traditionally manual and ad hoc process of network deployment, allowing distinct "child" virtual networks, each with its own transport, control, and management systems, to be spawned dynamically. Spawning networks are capable of operating on a subset of their "parent's" network resources and in isolation from other spawned networks, offering controlled access to communities of users with specific connectivity, security, and quality of service requirements. By automating the life cycle process for network architectures, spawning networks address the limitations of existing network architectures, enable rapid adaptation to new user needs and requirements, and represent a significant advancement in open network control, network programmability, and distributed systems technology. == Genesis Kernel == The Genesis Kernel plays a pivotal role in enabling the creation, deployment, and management of spawning networks. As a virtual network operating system, the Genesis Kernel can spawn child network architectures that support alternative distributed network algorithms and services.
It acts as a resource allocator, arbitrating between conflicting requests made by spawned virtual networks and thereby facilitating the efficient utilization of network resources. The Genesis Kernel supports a virtual network life cycle process, which includes the dynamic creation, deployment, and management of virtual network architectures. This process is realized through the interaction of the transport, programming, and life cycle environments, all of which are integral components of the Genesis Kernel framework. Overall, the Genesis Kernel provides the foundational framework and infrastructure necessary for the automated and systematic realization of spawning networks. == The virtual network life cycle == The virtual network life cycle process involves the dynamic creation, deployment, and management of virtual network architectures. It comprises three key phases:
1. Profiling: captures the blueprint of the virtual network architecture, including addressing, routing, signaling, security, control, and management requirements, and generates an executable profiling script that automates the deployment of programmable virtual networks.
2. Spawning: systematically sets up the network topology, allocates resources, and binds transport, routing, and network management objects to the physical network infrastructure. Based on the profiling script and the available network resources, network objects are created and dispatched to network nodes, dynamically creating a new virtual network architecture.
3. Management: supports virtual network resource management based on per-virtual-network policy to exert control over multiple spawned network architectures, and facilitates virtual network architecting, allowing the network designer to analyze and refine the network objects that characterize the spawned network architecture.
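The three phases above might be sketched as follows. The profile fields, node names, resource figures, and function signatures are all invented for illustration; the actual Genesis Kernel interfaces are described in the papers cited later in this article:

```python
# Hypothetical sketch of the profiling / spawning / management phases of
# the virtual network life cycle.  All names and numbers are invented.

def profile(name: str, nodes: list[str], qos: str) -> dict:
    """Profiling: capture the blueprint of the child virtual network."""
    return {"name": name, "nodes": nodes, "qos": qos, "routing": "shortest-path"}

def spawn(parent_resources: dict, blueprint: dict) -> dict:
    """Spawning: allocate a subset of the parent's resources per the blueprint."""
    alloc = {n: parent_resources[n] // 2 for n in blueprint["nodes"]}
    return {"blueprint": blueprint, "allocated": alloc, "state": "running"}

def manage(child: dict, policy: str) -> dict:
    """Management: apply a per-virtual-network policy to the spawned child."""
    child["policy"] = policy
    return child

parent = {"routerA": 100, "routerB": 80}   # made-up capacity per parent node
child = manage(spawn(parent, profile("child-net", ["routerA", "routerB"], "gold")),
               policy="rate-limit")
print(child["allocated"])  # {'routerA': 50, 'routerB': 40}
```

The key property the sketch tries to show is isolation: the child operates only on the slice of parent resources handed to it at spawn time, under a policy applied per virtual network.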
Through these phases, the virtual network life cycle process enables the automated and systematic creation, deployment, and management of virtual network architectures, providing a flexible and scalable approach to network customization and adaptation. === Potential impacts === Spawning networks have the potential to significantly impact the field of programmable networks by addressing key limitations in existing network architectures. By automating the creation, deployment, and management of virtual network architectures, spawning networks offer several benefits:
1. Flexibility and adaptability: spawning networks enable rapid adaptation to new user needs and requirements, allowing for the dynamic spawning of distinct virtual networks with specific connectivity, security, and quality of service requirements.
2. Efficient resource utilization: the automated life cycle process for network architectures facilitates the efficient utilization of network resources, optimizing resource allocation and network performance.
3. Scalability: spawning networks provide a scalable solution for network customization, allowing controlled access to communities of users with diverse connectivity and service needs.
4. Automation: by automating the network deployment process, spawning networks reduce the manual effort and time required for architecting and deploying new network architectures, leading to improved operational efficiency.
==== Implementation challenges ==== The implementation of spawning networks presents several challenges and considerations, encompassing both engineering and research issues:
1. Computational efficiency: addressing the computational efficiency and performance of spawning networks is crucial, especially in the context of increasing transmission rates. Balancing the computational power needed for routing and congestion control with the requirements of programmable networks is a significant challenge.
2. Performance optimization: implementing fast-track and cut-through techniques to offset potential performance costs associated with nested virtual networks is essential. This involves optimizing packet forwarding and hierarchical link sharing design to maintain network performance.
3. Complexity of profiling: profiling network architectures, and addressing the complexity associated with this process, is a key consideration. Developing efficient profiling mechanisms and tools to capture the blueprint of virtual network architectures is a significant engineering challenge.
4. Inheritance and provisioning: leveraging existing network objects and architectural components when constructing new child networks introduces challenges related to inheritance and provisioning characteristics. Ensuring efficient inheritance and provisioning of architectural components is a critical research issue.
5. Scalability and flexibility: ensuring that spawning networks are scalable, flexible, and capable of meeting the diverse communication needs of distinct communities is a significant engineering consideration. This involves designing spawning networks to efficiently support a wide range of network architectures and services.
6. Resource management: efficiently managing network resources to support the introduction and architecting of spawned virtual networks is a critical challenge. This includes addressing resource partitioning, isolation, and the allocation of resources to spawned virtual networks.
Addressing these challenges and considerations is essential for the successful implementation of spawning networks, requiring a combination of engineering innovation and research advancements in the field of programmable networks.
==== Research and development ==== The field of programmable networks has seen significant research and development, with several related works and advancements:
1. Open Signaling (Opensig) community: the Opensig community has been actively involved in designing and developing programmable network prototypes. Their work focuses on modeling communication hardware using open programmable network interfaces to enable third-party software providers to enter the market for telecommunications software.
2. Active Network Program: the DARPA Active Network Program has contributed to the development of active network technologies, exploring the dynamic deployment of network protocols and services.
3. Cellular IP: research on Cellular IP has addressed the challenges of mobility and programmability in wireless networks, aiming to provide seamless mobility support and efficient network management.
4. NetScript: the NetScript project has explored a language-based approach to active networks, focusing on the development of programming languages and tools for active network environments.
5. Smart packets for active networks: this work has focused on developing smart packets for active networks, aiming to enhance the programmability and intelligence of network packets to support dynamic network services.
6. Survey of programmable networks: a comprehensive survey of programmable networks has been conducted, providing insights into the state of the art, challenges, and future directions in the field.
These research efforts and developments have contributed to the advancement of programmable networks, addressing challenges related to network programmability, transportable software, distributed systems technology, and open network control. They have laid the foundation for the development of spawning networks and other innovative approaches to network customization and management.
Overall, spawning networks have the potential to revolutionize the field of programmable networks by offering a systematic and automated approach to network customization, resource control, and adaptation to evolving user demands. The concept was introduced in a paper titled "Spawning Networks", published in IEEE Network by a group of researchers from Columbia University, the University of Hamburg, Intel Corporation, Hitachi Limited, and Nortel Networks. The authors are Andrew T. Campbell, Michael E. Kounavis and Daniel A. Villela of Columbia University; John B. Vicente of Intel Corporation; Hermann G. De Meer of the University of Hamburg; Kazuho Miki of Hitachi Limited; and Kalai S. Kalaichelvan of Nortel Networks. A related paper, "The Genesis Kernel: A Programming System for Spawning Network Architectures", was written by Michael E. Kounavis, Andrew T. Campbell, Stephen Chou, Fabien Modoux, John Vicente and Hao Zhuang. A first implementation of spawning networks was realized at Columbia University as part of the Ph.D. thesis work of Michael Kounavis. This implementation is based on the design of the Genesis Kernel, a programming system consisting of three layers: a transport environment, which is a collection of programmable virtual routers; a programming environment, which offers open access to the programmable data path; and a life cycle environment, which is responsible for spawning and managing network architectures. One of the concepts used in the design of the Genesis Kernel is the creation of a network architecture based on a profiling script specifying the architecture components and their interaction. == References ==
Wikipedia/Spawning_networks
Adobe Inc. (ə-DOH-bee), formerly Adobe Systems Incorporated, is an American computer software company based in San Jose, California. It offers a wide range of programs, from web design tools, photo manipulation and vector creation through to video/audio editing, mobile app development, print layout and animation software. It has historically specialized in software for the creation and publication of a wide range of content, including graphics, photography, illustration, animation, multimedia/video, motion pictures, and print. Its flagship products include Adobe Photoshop image editing software; Adobe Illustrator vector-based illustration software; Adobe Acrobat Reader and the Portable Document Format (PDF); and a host of tools primarily for audio-visual content creation, editing and publishing. Adobe offered a bundled solution of its products named Adobe Creative Suite, which evolved into a subscription-based offering named Adobe Creative Cloud. The company also expanded into digital marketing software and in 2021 was considered one of the top global leaders in Customer Experience Management (CXM). Adobe was founded in December 1982 by John Warnock and Charles Geschke, who established the company after leaving Xerox PARC to develop and sell the PostScript page description language. In 1985, Apple Computer licensed PostScript for use in its LaserWriter printers, which helped spark the desktop publishing revolution. Adobe later developed animation and multimedia through its acquisition of Macromedia, from which it acquired Macromedia Flash; video editing and compositing software with Adobe Premiere, later known as Adobe Premiere Pro; low-code web development with Adobe Muse; and a suite of software for digital marketing management. As of 2022, Adobe had more than 26,000 employees worldwide. Adobe also has major development operations in the United States in Newton, New York City, Arden Hills, Lehi, Seattle, Austin and San Francisco.
It also has major development operations in Noida and Bangalore in India. The company has long been the dominant tech firm in design and creative software, despite attracting criticism for its policies and practices, particularly around Adobe Creative Cloud's switch to subscription-only pricing and its early termination fees for its most promoted Creative Cloud plan, the latter of which attracted a joint civil lawsuit from the U.S. Federal Trade Commission and the U.S. Department of Justice in 2024. == History == === PostScript (1982–1986) === The company was started in John Warnock's garage. The name of the company, Adobe, comes from Adobe Creek in Los Altos, California, a stream which ran behind Warnock's house. The creek is so named because of the type of clay found there (adobe being a Spanish word for mudbrick). Adobe's corporate logo features a stylized "A" and was designed by graphic designer Marva Warnock, John Warnock's wife. Steve Jobs attempted to buy the company for $5 million in 1982, but Warnock and Geschke refused. Their investors urged them to work something out with Jobs, so they agreed to sell him shares worth 19 percent of the company. Jobs paid a five-times multiple of their company's valuation at the time, plus a five-year license fee for PostScript, in advance. The purchase and advance made Adobe the first company in the history of Silicon Valley to become profitable in its first year. Warnock and Geschke considered various business options, including a copy-service business and a turnkey system for office printing. They then chose to focus on developing specialized printing software and created the Adobe PostScript page description language. PostScript was the first international standard for computer printing, as it included algorithms describing the letter-forms of many languages. Adobe added kanji printer products in 1988. Warnock and Geschke were also able to bolster the credibility of PostScript by connecting with a typesetting manufacturer.
They were not able to work with Compugraphic, but later worked with Linotype to license the Helvetica and Times Roman fonts (through the Linotron 100). By 1987, PostScript had become the industry-standard printer language, with more than 400 third-party software programs and licensing agreements with 19 printer companies. Adobe's first products after PostScript were digital fonts, which they released in a proprietary format called Type 1, worked on by Bill Paxton after he left Stanford. Apple subsequently developed a competing standard, TrueType, which provided full scalability and precise control of the pixel pattern created by the font's outlines, and licensed it to Microsoft. === Introduction of creative software (1986–1996) === In the mid-1980s, Adobe entered the consumer software market, starting with Adobe Illustrator, a vector-based drawing program for the Apple Macintosh. Illustrator, which grew out of the firm's in-house font-development software, helped popularize PostScript-enabled laser printers. By the mid-1990s, Adobe had either developed or acquired Photoshop from John and Thomas Knoll, FrameMaker from Frame Technology Corporation, and After Effects and PageMaker from Aldus, as well as developing Adobe Premiere, later known as Premiere Pro, in-house, initially releasing it in 1991. Around the same time as the development of Illustrator, Adobe entered the NASDAQ Composite index in August 1986. === PDFs and file formats (1993–1999) === In 1993, Adobe introduced the Portable Document Format, commonly shortened to the initialism PDF, and its Adobe Acrobat and Reader software. Warnock originally developed the PDF under a code name, "The Camelot Project", using PostScript technology to create a widely available digital document format, able to display text, raster graphics, vector graphics, and fonts.
Adobe kept the PDF as a proprietary file format from its introduction until 2008, when the PDF became an ISO international standard under ISO number ISO 32000-1:2008, though viewing PDF files had been free since the format's introduction. With its acquisition of Aldus, in addition to gaining PageMaker and After Effects, Adobe gained control over the TIFF file format for images. === Creative Suite and the Macromedia acquisition (2000–2009) === The 2000s saw various developments for the company. Its first notable acquisition in the decade was in 2002, when Adobe acquired Canadian company Accelio, also known as JetForm. In May 2003, Adobe purchased audio editing and multitrack recording software Cool Edit Pro from Syntrillium Software for $16.5 million, as well as a large loop library called "Loopology". Adobe then renamed Cool Edit Pro to Adobe Audition. It was in 2003 that the company introduced the first version of Adobe Creative Suite, bundling its creative software into a single package. The first version of Creative Suite introduced InDesign (the successor to PageMaker), Illustrator, Photoshop, ImageReady and InCopy, with the 2005 second edition of Creative Suite including an updated version of Adobe Acrobat, Premiere Pro, GoLive, the file manager Adobe Bridge, and Adobe Dreamweaver, the latter of which was most notably acquired through the $3.4 billion acquisition of Macromedia. In addition to bringing in Dreamweaver, the Macromedia acquisition, completed as a stock swap, added ColdFusion, Contribute, Captivate, Breeze (rebranded as Adobe Connect), Director, Fireworks, Flash, FlashPaper, Flex, FreeHand, HomeSite, JRun, Presenter, and Authorware to Adobe's product line. In April 2008, Adobe released Adobe Media Player. On April 27, Adobe discontinued the development and sales of its older HTML/web development software, GoLive, in favor of Dreamweaver.
Adobe offered a discount on Dreamweaver for GoLive users and supported those who still used GoLive with online tutorials and migration assistance. On June 1, Adobe launched Acrobat.com, a series of web applications geared for collaborative work. Creative Suite 4, which includes the Design, Web, Production Premium, and Master Collection editions, came out in October 2008 in six configurations at prices from about US$1,700 to $2,500, or by individual application. The Windows version of Photoshop includes 64-bit processing. On December 3, 2008, Adobe laid off 600 of its employees (8% of the worldwide staff), citing the weak economic environment. On September 15, 2009, Adobe Systems announced that it would acquire online marketing and web analytics company Omniture for $1.8 billion. The deal was completed on October 23, 2009. Former Omniture products were integrated into the Adobe Marketing Cloud. On November 10, 2009, the company laid off a further 680 employees. === End of Flash, security breach, and employee compensation class action (2010–2014) === Adobe's 2010 was marked by continuing arguments with Apple over the latter's non-support for Adobe Flash on its iPhone, iPad and other products. Former Apple CEO Steve Jobs claimed that Flash was not reliable or secure enough, while Adobe executives argued that Apple wished to maintain control over the iOS platform. In April 2010, Steve Jobs published a post titled Thoughts on Flash in which he outlined his thoughts on Flash and the rise of HTML5. In July 2010, Adobe bought Day Software, integrating their line of CQ products: WCM, DAM, SOCO, and Mobile. In January 2011, Adobe acquired DemDex, Inc. with the intent of adding DemDex's audience-optimization software to its online marketing suite. At Photoshop World 2011, Adobe unveiled a new mobile photo service. Carousel was a new application for iPhone, iPad, and Mac that used Photoshop Lightroom technology to allow users to adjust and fine-tune images on all platforms.
Carousel also allowed users to automatically sync, share and browse photos. The service was later renamed "Adobe Revel". Later that same year in October, Adobe acquired Nitobi Software, the maker of the mobile application development framework PhoneGap. As part of the acquisition, the source code of PhoneGap was submitted to the Apache Foundation, where it became Apache Cordova. In November 2011, Adobe announced that it would cease development of Flash for mobile devices following version 11.1, focusing instead on HTML5 for mobile devices. In December 2011, Adobe announced that it had entered into a definitive agreement to acquire privately held Efficient Frontier. In December 2012, Adobe opened a new 280,000-square-foot (26,000 m2) corporate campus in Lehi, Utah. In 2013, Adobe endured a major security breach. Vast portions of the source code for the company's software were stolen and posted online, and over 150 million records of Adobe's customers were made readily available for download. In 2012, about 40 million sets of payment card information were compromised by a hack at Adobe. A class-action lawsuit alleging that the company suppressed employee compensation was filed against Adobe and three other Silicon Valley–based companies in a California federal district court in 2013. In May 2014, it was revealed that the four companies, Adobe, Apple, Google, and Intel, had reached an agreement with the plaintiffs, 64,000 employees of the four companies, to pay a sum of $324.5 million to settle the suit. === Adobe Creative Cloud (Since 2011) === 2011 saw the company first introduce Adobe Creative Cloud, a $600-per-year subscription plan for its creative software, as opposed to a one-time perpetual license payment, which could often top $2,000 for creative professionals.
The initial launch of Creative Cloud alongside Creative Suite 5 came at a time when Adobe ran into controversy with users of its creative software, who stated that the original perpetual and subscription pricing plans would be unaffordable not only for individuals but also for businesses, and who criticized Adobe for refusing to extend a Creative Suite 6 discount to non-CS5 users. The original announcement of Adobe Creative Cloud was met with a positive reception from CNET journalists as a much more enticing plan, and Creative Cloud was first released in 2012, though a later CNET survey showed that more users viewed subscription-based creative software negatively than positively. The original pricing plan for Creative Cloud was $75 per month for the entire suite of software, though Adobe discounted the monthly cost to $50 for users willing to commit to at least one year of continuous subscription, and down to $30 per month for former CS users with the one-year commitment. By 2013, Adobe decided that CS6 would be the last version of Creative Suite software sold through a perpetual licensing option, and in May announced that a Creative Cloud subscription would be the only way to get the newest versions of Photoshop, Illustrator, and other Adobe creative software. Reception to the mandatory subscriptions for future Adobe software was mostly negative, despite some positive customer testimonies and Adobe's attraction of 500,000 Creative Cloud subscribers within the service's first year. The switch to subscription-only licensing also did not deter software piracy of Creative Cloud services; within a day of the release of the first version of Photoshop made exclusively for Creative Cloud, cracked versions of Adobe Photoshop CC 2013 were found on The Pirate Bay, a website used for distributing pirated software.
=== Further acquisitions and failed buyout of Figma (2018–2023) === In March 2018, at Adobe Summit, the company and Nvidia announced a partnership to improve their AI and deep learning technologies. They planned to optimize the Adobe Sensei AI and machine learning framework for Nvidia GPUs. Adobe and Nvidia had cooperated for 10 years on GPU acceleration. This includes Sensei-powered features, e.g. auto lip-sync in Adobe Character Animator CC and face-aware editing in Photoshop CC, as well as cloud-based AI/ML products and features, such as image analysis for Adobe Stock and Lightroom CC and auto-tagging in Adobe Experience Manager. Adobe further spent its time from 2018 to 2023 acquiring more companies to boost both Creative Cloud and the Adobe Experience Cloud, its suite of marketing and analytics software for businesses. These included e-commerce services provider Magento Commerce from private equity firm Permira for $1.68 billion in June 2018, Marketo for $4.75 billion in 2018, Allegorithmic in 2019 for just under $160 million, and Workfront in December 2020 for $1.5 billion. 2021 additionally saw Adobe add payment services to its e-commerce platforms in an attempt to compete with Shopify, accepting both credit cards and PayPal. In July 2020, as the United States presidential election approached, the software giant imposed a ban on political ads features on its digital advertising sales platform. On November 9, 2020, Adobe announced it would spend US$1.5 billion to acquire Workfront, a provider of marketing collaboration software. The acquisition was completed in early December 2020. On August 19, 2021, Adobe announced it had entered into a definitive agreement to acquire Frame.io, a leading cloud-based video collaboration platform. The transaction was valued at $1.275 billion and closed during the fourth quarter of Adobe's 2021 fiscal year. Adobe announced a $20 billion acquisition of Figma, an Adobe XD competitor, in September 2022, its largest to date.
Regulatory scrutiny from the US and European Union began shortly after, due to concerns that Adobe, already a major player in the design software market with XD, would have too much control if it also owned Figma. At the time of the announcement, Adobe held a near-monopoly share of the creative and design software markets. In December 2023, the two companies called off their merger, citing regulatory challenges that both took as a sign the deal was unlikely to be approved. Adobe paid Figma a $1 billion termination fee per their merger agreement. === FTC lawsuit and terms of service update (2024–present) === On June 17, 2024, the US Federal Trade Commission together with the US Department of Justice filed a lawsuit against Adobe over its subscription business model practices, citing hidden termination fees and the company pushing customers towards more expensive plans. In June 2024, after facing backlash for its changes to the terms of service, Adobe updated them to explicitly pledge that it will not use customer data to train its AI models. == Products == Adobe's currently supported roster of software, online services and file formats comprises the following (as of October 2022): === Formats === Portable Document Format (PDF), PDF's predecessor PostScript, ActionScript, Shockwave Flash (SWF), Flash Video (FLV), and Filmstrip (.flm) === Web-hosted services === Adobe Color, Photoshop Express, Acrobat.com, Behance and Adobe Express. === Adobe Renderer === Adobe Media Encoder === Adobe Stock === A microstock agency that provides over 57 million high-resolution, royalty-free images and videos available to license (via subscription or credit purchase methods). In 2015, Adobe acquired Fotolia, a stock content marketplace founded in 2005 by Thibaud Elziere, Oleg Tscheltzoff, and Patrick Chassany which operated in 23 countries. It was run as a stand-alone website until 2019, but has since been integrated into Adobe Stock.
=== Adobe Experience Platform === A family of content, development, and customer relationship management products, with what Adobe calls the "next generation" of its Sensei artificial intelligence and machine learning framework, introduced in March 2019. == Criticisms == === Pricing === Adobe has been criticized for its pricing practices, with retail prices being up to twice as high in non-US countries. After Adobe revealed the pricing for the Creative Suite 3 Master Collection, which was £1,000 higher for European customers, a petition protesting "unfair pricing" was published and signed by 10,000 users. In June 2009, Adobe further increased its prices in the UK by 10% in spite of the weakening of the pound against the dollar, and UK users were not allowed to buy from the US store. Adobe's Reader and Flash programs were listed in "The 10 most hated programs of all time" article by TechRadar. === Security === Hackers have exploited vulnerabilities in Adobe programs, such as Adobe Reader, to gain unauthorized access to computers. Adobe's Flash Player has also been criticized for, among other things, suffering from performance, memory usage and security problems. A report by security researchers from Kaspersky Lab criticized Adobe for producing products with top-10 security vulnerabilities. Observers noted that Adobe was spying on its customers by including spyware in the Creative Suite 3 software and quietly sending user data to a firm named Omniture. When users became aware, Adobe explained what the suspicious software did and admitted that they "could and should do a better job taking security concerns into account". When a security flaw was later discovered in Photoshop CS5, Adobe sparked outrage by saying it would leave the flaw unpatched, so anyone who wanted to use the software securely would have to pay for an upgrade. Following a fierce backlash, Adobe decided to provide the software patch.
Adobe has been criticized for pushing unwanted software, including third-party browser toolbars and free virus scanners, usually as part of the Flash update process, and for pushing a third-party scareware program designed to scare users into paying for unneeded system repairs. === Customer data breach === On October 3, 2013, the company initially revealed that 2.9 million customers' sensitive and personal data had been stolen in a security breach, which included encrypted credit card information. Adobe later admitted that 38 million active users had been affected and that the attackers had obtained access to their IDs and encrypted passwords, as well as to many inactive Adobe accounts. The company did not make it clear whether all the personal information was encrypted, such as email addresses and physical addresses, though data privacy laws in 44 states require this information to be encrypted. In late 2013, a 3.8 GB file stolen from Adobe and containing 152 million usernames, reversibly encrypted passwords and unencrypted password hints was posted on AnonNews.org. LastPass, a password security firm, said that Adobe had failed to use best practices for securing the passwords and had not salted them. Another security firm, Sophos, showed that Adobe used a weak encryption method permitting the recovery of a lot of information with very little effort. According to IT expert Simon Bain, Adobe failed its customers and 'should hang their heads in shame'. Many of the credit cards were tied to the Creative Cloud software-by-subscription service. Adobe offered its affected US customers a free membership in a credit monitoring service, but no similar arrangements were made for non-US customers. When a data breach occurs in the US, penalties depend on the state where the victim resides, not where the company is based. After stealing the customers' data, cyber-thieves also accessed Adobe's source code repository, likely in mid-August 2013.
Because hackers acquired copies of the source code of Adobe proprietary products, they could find and exploit any potential weaknesses in its security, computer experts warned. Security researcher Alex Holden, chief information security officer of Hold Security, characterized this Adobe breach, which affected Acrobat, ColdFusion and numerous other applications, as "one of the worst in US history". Adobe also announced that hackers stole parts of the source code of Photoshop, which according to commentators could allow programmers to copy its engineering techniques and would make it easier to pirate Adobe's expensive products. Published on a server of a Russian-speaking hacker group, the "disclosure of encryption algorithms, other security schemes, and software vulnerabilities can be used to bypass protections for individual and corporate data" and may have opened the gateway to new generation zero-day attacks. Hackers had already used ColdFusion exploits to make off with usernames and encrypted passwords of PR Newswire's customers, which has been tied to the Adobe security breach. They also used a ColdFusion exploit to breach Washington state court and expose up to 200,000 Social Security numbers. === Anti-competitive practices === In 1994, Adobe acquired Aldus Corp., a software vendor that sold FreeHand, a competing product. FreeHand was direct competition to Adobe Illustrator, Adobe's flagship vector-graphics editor. The Federal Trade Commission (FTC) intervened and forced Adobe to sell FreeHand back to Altsys, and also banned Adobe from buying back FreeHand or any similar program for the next 10 years (1994–2004). Altsys was then bought by Macromedia, which released versions 5 to 11. After Adobe acquired Macromedia in December 2005, it halted development of FreeHand in 2007, effectively rendering it obsolete.
With FreeHand and Illustrator, Adobe controlled the only two products that competed in the professional illustration program market for Macintosh operating systems. In 2011, a group of 5,000 FreeHand graphic designers convened under the banner Free FreeHand and filed a civil antitrust complaint in the US District Court for the Northern District of California against Adobe. The suit alleged that: Adobe has violated federal and state antitrust laws by abusing its dominant position in the professional vector graphic illustration software market [...] Adobe has engaged in a series of exclusionary and anti-competitive acts and strategies designed to kill FreeHand, the dominant competitor to Adobe's Illustrator software product, instead of competing on the basis of product merit according to the principles of free market capitalism. Adobe had no response to the claims and the lawsuit was eventually settled. The FreeHand community believes Adobe should release the product to an open-source community if it cannot update it internally. As of 2010, on its FreeHand product page, Adobe stated, "While we recognize FreeHand has a loyal customer base, we encourage users to migrate to the new Adobe Illustrator CS4 software which supports both PowerPC and Intel–based Macs and Microsoft Windows XP and Windows Vista." As of 2016, the FreeHand page no longer exists; instead, it simply redirects to the Illustrator page. Adobe's software FTP server still contains a directory for FreeHand, but it is empty. === Cancellation fees === In April 2021, Adobe received criticism from Twitter users for the company's cancellation fees after a customer shared a tweet showing they had been charged a $291.45 cancellation fee for their Adobe Creative Cloud subscription. Many others shared their cancellation fees for Adobe Creative Cloud, leading many to encourage piracy of Adobe products and/or the purchase of lower-priced alternatives or the use of free and open-source software instead.
Furthermore, there have been reports that it is possible to avoid paying this fee by changing subscriptions. The U.S. Department of Justice and the FTC filed a lawsuit against Adobe and two of its executives in June 2024, alleging that the company's deceptive subscription practices and cancellation policies violated the Restore Online Shoppers' Confidence Act. According to the lawsuit, the company purportedly used small text disclosures, optional input fields, and a complex web of links to conceal an early termination fee. This fee reportedly amounted to fifty percent of the remaining value of annual contracts for users who chose to cancel early in the first year, resulting in significant penalties. Customers who tried to cancel services by contacting customer service faced obstacles, including dropped calls and multiple transfers between representatives; others continued to be billed by Adobe under the mistaken belief that they had successfully ended their subscriptions. === 2024 terms of service update === On June 5, 2024, Adobe updated their terms of service (TOS) for Photoshop, stating "we may access your content through both manual and automated methods, such as for content review." This sparked outrage among Adobe users, as the new terms implied that users' work would be used to train Adobe's generative AI, even if the work was under a non-disclosure agreement (NDA). Adobe responded the following day, clarifying that it would not use user data to train generative AI or take users' work as its own; however, it neglected to respond to the part of the TOS that gives Adobe the ability to view or use work that is contracted under an NDA. == See also == Adobe MAX Digital rights management (DRM) List of acquisitions by Adobe United States v. Elcom Ltd. == References == == External links == Official website Business data for Adobe Inc.: "Patents owned by Adobe Inc". US Patent & Trademark Office. Retrieved December 8, 2005.
Deep learning is a subset of machine learning that focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised or unsupervised. Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks are not intended to model the brain function of organisms, and are generally seen as low-quality models for that purpose. == Overview == Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation.
For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which level on its own. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction. The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. 
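The layer-by-layer structure and the credit assignment path (CAP) count described above can be made concrete with a minimal NumPy sketch (illustrative only; the layer sizes, weights, and inputs are arbitrary assumptions, not from any particular model): with two hidden layers plus a parameterized output layer, the CAP depth is three, i.e. "deep" by the CAP > 2 criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy feedforward network: 4 inputs -> 8 -> 8 -> 2 outputs.
# Two hidden layers + one output layer gives a CAP depth of 3.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                  # hidden layers: affine map + nonlinearity
    return h @ weights[-1] + biases[-1]      # output layer: affine map only

x = rng.normal(size=(5, 4))                  # a batch of 5 input vectors
y = forward(x)
print(y.shape)                               # (5, 2)
cap_depth = len(layer_sizes) - 1
print(cap_depth)                             # 3
```

Each hidden layer re-represents the output of the layer below it, which is the "hierarchy of representations" the surrounding text describes; in a trained network, the weights would be learned rather than random.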
Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively. Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance. Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks. The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons, although the history of its appearance is apparently more complicated. == Interpretations == Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference. The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work also showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit. The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width where the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.
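The classic single-hidden-layer case can be illustrated numerically. In the sketch below (an illustration of the idea, not a proof of the theorem), a one-hidden-layer ReLU network with random hidden weights and least-squares-fitted output weights closely matches a smooth target function on a grid of points; the target function, unit count, and weight distributions are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: approximate sin(x) on [0, pi] with one hidden ReLU layer.
x = np.linspace(0.0, np.pi, 200)
y = np.sin(x)

# Random hidden layer: 300 ReLU units with fixed random weights and biases.
n_hidden = 300
w = rng.normal(0, 2.0, n_hidden)
b = rng.uniform(-np.pi, np.pi, n_hidden)
H = np.maximum(np.outer(x, w) + b, 0.0)    # hidden activations, shape (200, 300)

# Only the output weights are fitted, here by ordinary least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

max_err = np.max(np.abs(y_hat - y))
print(max_err)                             # small on the training grid
```

Increasing the number of hidden units generally shrinks the achievable error, which is the intuition behind the approximation theorems; the theorems themselves concern what is representable, not how the weights are found.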
The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop. == History == === Before 1980 === There are two types of artificial neural network (ANN): feedforward neural networks (FNN) or multilayer perceptrons (MLP) and recurrent neural networks (RNN). RNNs have cycles in their connectivity structure; FNNs don't. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model, which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was republished by John Hopfield in 1982. Other early recurrent neural networks were published by Kaoru Nakano in 1971. Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime, containing "ideas related to artificial evolution and learning RNNs". Frank Rosenblatt (1958) proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight) (section 16). The book cites an earlier network by R. D.
Joseph (1960) "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Joseph might therefore be considered the originator of proper adaptive multilayer perceptrons with learning hidden units; unfortunately, his learning algorithm was not a functional one, and it fell into oblivion. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates". The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning. Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
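The two layer types the Neocognitron introduced, a convolutional layer followed by a downsampling (pooling) layer, can be sketched minimally in NumPy (illustrative only; the image, filter, and sizes are arbitrary, and this is not the Neocognitron's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((6, 6))                 # a toy 6x6 grayscale "image"
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])      # a hand-picked vertical-edge filter

# Convolutional layer (valid mode): slide the 3x3 kernel over the image -> 4x4 map.
conv = np.array([[np.sum(image[i:i+3, j:j+3] * kernel)
                  for j in range(4)] for i in range(4)])

# Downsampling layer: 2x2 max pooling -> 2x2 map.
pooled = conv.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(conv.shape, pooled.shape)            # (4, 4) (2, 2)
```

Sliding one small filter across the whole image is the weight sharing that makes CNNs efficient, and the pooling step gives a degree of translation tolerance; in a trained CNN the filter values would be learned rather than hand-picked.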
Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673, to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was first published in Seppo Linnainmaa's master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. === 1980s-2000s === The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. Recurrent neural networks (RNN) were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward network. Consequently, they have similar properties and issues, and their developments had mutual influences.
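The chain-rule mechanics behind backpropagation, described at the start of this section, can be sketched on a one-hidden-unit network with made-up numbers (illustrative, not any historical implementation); the analytic gradient is checked against a finite difference:

```python
import numpy as np

# Tiny network: scalar input -> one tanh hidden unit -> scalar output.
# Loss L(w1, w2) = (w2 * tanh(w1 * x) - t)^2 for a single example (x, t).
x, t = 0.7, 0.3
w1, w2 = 0.5, -1.2

# Forward pass, keeping intermediates for the backward pass.
a = w1 * x
h = np.tanh(a)
y = w2 * h
L = (y - t) ** 2

# Backward pass: repeated application of the chain rule, output to input.
dL_dy = 2.0 * (y - t)
dL_dw2 = dL_dy * h
dL_dh = dL_dy * w2
dL_da = dL_dh * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
dL_dw1 = dL_da * x

# Check the analytic gradient for w1 against a central finite difference.
eps = 1e-6
def loss(w1_):
    return (w2 * np.tanh(w1_ * x) - t) ** 2
numeric = (loss(w1 + eps) - loss(w1 - eps)) / (2 * eps)
print(abs(dL_dw1 - numeric) < 1e-8)        # True: the two gradients agree
```

The same backward sweep, applied matrix-by-matrix, is what makes gradient computation in deep networks cost roughly the same as a forward pass.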
In RNNs, two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to study problems in cognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning, where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below. This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher-level chunker network into a lower-level automatizer network. In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. The "P" in ChatGPT refers to such pre-training. Sepp Hochreiter's diploma thesis (1991) implemented the neural history compressor, and identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem. This led to the long short-term memory (LSTM), published in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999; with this addition, LSTM became the standard RNN architecture. In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns.
The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity". In 2014, this principle was used in generative adversarial networks (GANs). During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive compared to backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986 (p. 112). A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics. Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark. It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.
The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of the deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the mel-cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results. === 2000s === Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks. In 2003, LSTM became competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTMs. In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition. In 2006, publications by Geoffrey Hinton, Ruslan Salakhutdinov, Osindero and Teh introduced deep belief networks, developed for generative modeling. They are trained by fitting one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionally fine-tuning with supervised backpropagation. They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.
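The greedy layer-wise procedure described above for deep belief networks — train one restricted Boltzmann machine, freeze it, train the next on its hidden activations — can be sketched as follows. This is a minimal illustration using single-step contrastive divergence (CD-1), with biases omitted and function names invented for the sketch; it is not the published training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, lr=0.1, epochs=5):
    """Train one RBM on binary data V (samples x visibles) with CD-1."""
    n_visible = V.shape[1]
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    for _ in range(epochs):
        ph = sigmoid(V @ W)                      # positive-phase hidden probs
        h = (rng.random(ph.shape) < ph) * 1.0    # sample hidden units
        v_recon = sigmoid(h @ W.T)               # mean-field reconstruction
        ph_recon = sigmoid(v_recon @ W)          # negative-phase hidden probs
        W += lr * (V.T @ ph - v_recon.T @ ph_recon) / len(V)
    return W

def greedy_pretrain(V, layer_sizes):
    """Stack RBMs: train one, freeze it, and use its hidden activations
    as the training data for the next layer, as in DBN pretraining."""
    weights = []
    for n_hidden in layer_sizes:
        W = train_rbm(V, n_hidden)
        weights.append(W)
        V = sigmoid(V @ W)   # propagate the data up to the next level
    return weights
```

The frozen weight stack can then be used to initialize a feedforward network for supervised fine-tuning with backpropagation.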
The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and by the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation, when using DNNs with large, context-dependent output layers, produced error rates dramatically lower than those of the then state-of-the-art Gaussian mixture model (GMM)/hidden Markov model (HMM) systems, and also lower than those of more advanced generative-model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (less than 1.5% difference in error rate) between discriminative DNNs and generative models. In 2010, researchers extended deep learning from TIMIT to large-vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees. === Deep learning revolution === The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, faster implementations of CNNs on GPUs were needed to progress on computer vision.
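The core operations of a CNN — sliding a small filter over an image and then max-pooling the responses — can be sketched in a few lines of plain Python. This is an illustrative toy (loop-based, single channel), nothing like the fast GPU implementations that drove the revolution.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep learning code):
    slide the kernel over every position and take the weighted sum."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling: keep only the strongest response
    in each window, giving a smaller, translation-tolerant feature map."""
    H, W = x.shape
    return np.array([[x[i:i+size, j:j+size].max()
                      for j in range(0, W - size + 1, size)]
                     for i in range(0, H - size + 1, size)])
```

Stacking such convolution and pooling stages, with learned kernels, is the basic recipe behind the max-pooling CNNs discussed below.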
Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning. A key advance for the deep learning revolution was hardware, especially GPUs. Some early work dated back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a 100-million-parameter deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning. They reported up to 70 times faster training. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPUs improved performance significantly. In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos. In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3. The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. In 2014, the state of the art was training a "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art.
Early examples included Google DeepDream (2015) and neural style transfer (2015), both of which were based on pretrained image classification neural networks such as VGG-19. The generative adversarial network (GAN) (Ian Goodfellow et al., 2014), based on Jürgen Schmidhuber's principle of artificial curiosity, became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2015, Google's speech recognition improved by 49% with an LSTM-based model, which they made available through Google Voice Search on smartphones. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks were superseded for ASR by LSTM, but are more successful in computer vision. Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing". == Neural networks == Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal sent downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections.
Despite this number being several orders of magnitude smaller than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces or playing "Go"). === Deep neural networks === A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers. There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks. Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets. DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back.
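The feedforward flow just described — each layer transforming its input and passing the result on, with no connections looping back — can be sketched as follows. The sigmoid activation and the layer sizes are illustrative choices, not a claim about any particular network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate a signal from the input layer to the output layer.
    `layers` is a list of (weights, biases) pairs; data never loops back."""
    for W, b in layers:
        x = sigmoid(W @ x + b)   # weighted sum of inputs, squashed to (0, 1)
    return x
```

With two layers of random weights, the output is a vector of unit activations between 0 and 1, as in the description of neuron states above.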
At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR). ==== Challenges ==== As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning or weight decay (ℓ2-regularization) or sparsity (ℓ1-regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. Another development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied to multivariate time series prediction tasks such as traffic prediction. Finally, data can be augmented via methods such as cropping and rotating, such that smaller training sets can be increased in size to reduce the chances of overfitting.
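Two of the regularizers mentioned above can be sketched directly: inverted dropout (the common modern formulation) and the gradient form of ℓ2 weight decay. The probability and decay constant below are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    """Randomly omit hidden units with probability p during training,
    scaling the survivors so expected activations match test time."""
    if not training:
        return h
    mask = (rng.random(h.shape) >= p).astype(h.dtype)
    return h * mask / (1.0 - p)

def weight_decay_grad(W, lam=1e-4):
    """ℓ2 regularization adds lam * W to the gradient of the loss,
    steadily shrinking weights toward zero during training."""
    return lam * W
```

At test time dropout is disabled and the layer output passes through unchanged, while weight decay simply augments whatever gradient backpropagation produces.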
DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples), speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for matrix and vector computations. Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It does not require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved. == Hardware == Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months. Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms.
Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2). Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications. == Applications == === Automatic speech recognition === Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks. The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. 
Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991. The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas:
Scale-up/out and accelerated DNN training and decoding
Sequence discriminative training
Feature processing by deep models with solid understanding of the underlying mechanisms
Adaptation of DNNs and related deep models
Multi-task and transfer learning by DNNs and related deep models
CNNs and how to design them to best exploit domain knowledge of speech
RNNs and their rich LSTM variants
Other types of deep models, including tensor-based models and integrated deep generative/discriminative models
All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning. === Image recognition === A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available. Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces. Deep learning-trained vehicles now interpret 360° camera views.
Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes. === Visual art processing === Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of:
identifying the style period of a given painting
Neural Style Transfer – capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video
generating striking imagery based on random visual input fields
=== Natural language processing === Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others. Recent developments generalize word embedding to sentence embedding.
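The description of word embeddings as points in a vector space can be made concrete with a toy lookup table and cosine similarity. The three vectors below are invented for illustration; they are not trained word2vec output.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: how closely two embedding vectors point
    in the same direction, regardless of their length."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embedding table (hypothetical 3-dimensional vectors).
emb = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.75, 0.20]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def nearest(word, emb):
    """Return the other word whose vector is most similar — i.e., the
    nearest neighbor of `word` in the embedding space."""
    return max((w for w in emb if w != word),
               key=lambda w: cosine(emb[word], emb[w]))
```

In a trained embedding, semantically related words end up near each other in exactly this sense, which is what makes the representation useful as an input layer.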
Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples". It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs. === Drug discovery and toxicology === A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored the use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis. In 2017, graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice. === Customer relationship management === Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value. === Recommendation systems === Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains.
The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. === Bioinformatics === An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships. In medical informatics, deep learning was used to predict sleep quality based on data from wearables and to predict health complications from electronic health record data. Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of the amino acids that make it up. In 2020, AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods. === Deep Neural Network Estimations === Deep neural networks can be used to estimate the entropy of a stochastic process, in a method called the Neural Joint Entropy Estimator (NJEE). Such an estimation provides insights on the effects of input random variables on an independent random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, such that the conditions for the universal approximation theorem hold. It is shown that this method provides a strongly consistent estimator and outperforms other methods in the case of large alphabet sizes. === Medical image analysis === Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement.
Modern deep learning tools demonstrate high accuracy in detecting various diseases, and their use by specialists can improve diagnostic efficiency. === Mobile advertising === Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, high-dimensional advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. === Image restoration === Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration. === Financial fraud detection === Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering. === Materials science === In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%.
The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds. === Military === The United States Department of Defense applied deep learning to train robots in new tasks through observation. === Partial differential equations === Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner. One example is reconstructing fluid flow governed by the Navier–Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on. === Deep backward stochastic differential equation method === The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions.
Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. In addition, the integration of physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. === Image reconstruction === Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging and ultrasound imaging. === Weather prediction === Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning based model, trained on a long history of weather data, that predicts how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state-of-the-art systems. === Epigenetic clock === An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples. The clock uses information from 1000 CpG sites and predicts that people with certain conditions are older than healthy controls: IBD, frontotemporal dementia, ovarian cancer, obesity. The aging clock was planned to be released for public use in 2021 by an Insilico Medicine spinoff company, Deep Longevity.
== Relation to human cognitive and brain development == Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support the self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature". A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex. 
Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels. == Commercial activity == Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them. Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories. As of 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job". 
== Criticism and comment == Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. === Theory === A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear. (e.g., Does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically, rather than theoretically. In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained demonstrates a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article on The Guardian's website. === Errors === Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014) and misclassifying minuscule perturbations of correctly classified images (2013). Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. 
Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI). === Cyber threat === As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack". In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken. Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them. ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. 
ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target. In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)". In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery. === Data collection ethics === The deep learning systems that are trained using supervised learning often rely on data that is created or annotated by humans, or both. It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork. == See also == Applications of artificial intelligence Comparison of deep learning software Compressed sensing Differentiable programming Echo state network List of artificial intelligence projects Liquid state machine List of datasets for machine-learning research Reservoir computing Scale space and deep learning Sparse coding Stochastic parrot Topological deep learning == References == == Further reading ==
Wikipedia/Deep_neural_networks
A raster graphics editor (also called bitmap graphics editor) is a computer program that allows users to create and edit images interactively on the computer screen and save them in one of many raster graphics file formats (also known as bitmap images) such as JPEG, PNG, and GIF. == Comparison to vector graphic editors == Vector graphics editors are often contrasted with raster graphics editors, yet their capabilities complement each other. The technical difference between vector and raster editors stems from the difference between vector and raster images. Vector graphics are created mathematically, using geometric formulas. Each element is created and manipulated numerically; essentially using Cartesian coordinates for the placement of key points, and then a mathematical algorithm to connect the dots and define the colors. Raster images include digital photos. A raster image is made up of rows and columns of dots, called pixels, and is generally more photo-realistic. This is the standard form for digital cameras; whether it be a .raw file or .jpg file, the concept is the same. The image is represented pixel by pixel, like a microscopic jigsaw puzzle. Vector editors tend to be better suited for graphic design, page layout, typography, logos, sharp-edged artistic illustrations, e.g., cartoons, clip art, complex geometric patterns, technical illustrations, diagramming and flowcharting. Advanced raster editors, like GIMP and Adobe Photoshop, use vector methods (mathematics) for general layout and elements such as text, but are equipped to deal with raster images down to the pixel and often have special capabilities in doing so, such as brightness/contrast, and even adding "lighting" to a raster image or photograph. 
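The raster/vector distinction described above can be sketched in a few lines of code. This is an illustrative sketch only (the shape description format and the `rasterize` helper are invented for the example): a raster image stores a value for every pixel, while a vector description stores the shape's parameters and can be re-rasterized at any resolution.

```python
# A 4x4 raster image: one stored value per pixel (here, a 2x2 filled square).
raster = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

# A vector description of the same picture: parameters, not pixels.
vector = {"shape": "rect", "x": 1, "y": 1, "width": 2, "height": 2, "fill": 1}

def rasterize(shape, size):
    """Regenerate pixels from the vector description at an arbitrary grid size."""
    scale = size // 4  # the description was authored on a 4x4 canvas
    grid = [[0] * size for _ in range(size)]
    for row in range(shape["y"] * scale, (shape["y"] + shape["height"]) * scale):
        for col in range(shape["x"] * scale, (shape["x"] + shape["width"]) * scale):
            grid[row][col] = shape["fill"]
    return grid

# At the original resolution the two representations agree...
assert rasterize(vector, 4) == raster
# ...but the vector form scales cleanly: at 8x8 the square is still sharp.
```

Scaling the raster form instead would require interpolating pixels, which is why vector art stays sharp at any size while bitmaps do not.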
== Popular editors ==
Adobe Photoshop: Industry standard for photography, design, and digital art
GIMP: Free, open-source alternative with similar features to Photoshop
Corel Painter: Focuses on digital painting with traditional art simulation
Affinity Photo: Professional-grade tools with a one-time purchase model
Procreate (iOS): Popular app for digital painting on iPad
Krita: A popular free and open-source painting program
== Common features ==
Select a region for editing
Draw lines with simulated brushes of different color, size, shape and pressure
Fill a region with a single color, gradient of colors, or a texture
Select a color using different color models, e.g., RGB, HSV, or by using a color dropper
Edit and convert between various color models
Add typed letters in various font styles
Remove imperfections from photo images
Composite editing using layers
Apply filters for effects including sharpening and blurring
Convert between various image file formats
== See also ==
Comparison of raster graphics editors
Vector graphics editor
Texture mapping
Text editor
3D modeling
== References ==
== External links ==
Media related to Raster graphics software at Wikimedia Commons
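One of the common features listed above, filling a region with a single color, is often implemented as a flood fill over the pixel grid. The sketch below is illustrative only (the function name and queue-based approach are one common choice, not any particular editor's implementation): it recolors every pixel connected to the starting point that shares its original color.

```python
from collections import deque

def flood_fill(image, x, y, new_color):
    """Breadth-first flood fill: recolor the 4-connected region containing (x, y)."""
    old_color = image[y][x]
    if old_color == new_color:
        return image  # nothing to do; also avoids an infinite loop
    h, w = len(image), len(image[0])
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < w and 0 <= cy < h and image[cy][cx] == old_color:
            image[cy][cx] = new_color
            # enqueue the four orthogonal neighbours
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return image
```

A "paint bucket" tool in a real editor typically adds a tolerance threshold so that nearly-equal colors are treated as part of the same region.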
Wikipedia/Raster_graphics_editor
Lance J. Williams (September 25, 1949 – August 20, 2017) was a prominent graphics researcher who made major contributions to texture map prefiltering, shadow rendering algorithms, facial animation, and antialiasing techniques. Williams was one of the first people to recognize the potential of computer graphics to transform film and video making. Williams died at 67 years old on August 20, 2017, after a battle with cancer. He is survived by his wife and two children. == Education == Williams was an Honors student majoring in English with a minor in Asian Studies at the University of Kansas and graduated with a B.A. in 1972. While a student at KU he competed in collegiate chess tournaments and is said to have had a rating of 1800. He was drawn to the University of Utah by a "Humanistic Computation" summer seminar held by Jef Raskin at KU. He joined the graduate Computer Science program at the University of Utah in 1973 and studied computer graphics and animation under Ivan Sutherland, David Evans, and Steven Coons. At this time in the early 1970s, the University of Utah was the hub for much of the pioneering work being done in computer graphics. Lance left Utah (having completed his PhD course work and exams except the writing of a thesis) in 1977 to join the New York Institute of Technology (NYIT). While at NYIT, Williams invented the mipmapping technique for texture filtering, which is ubiquitously used today by graphics hardware for PCs and video games, and wrote and directed the abandoned project The Works which would have been the first entirely 3D CGI film had it been finished in the early 1980s as intended. Williams was awarded his PhD in 2000 from the University of Utah based on a rule allowing someone who published three seminal papers in his field to bind them together as his thesis. The three papers are Casting Curved Shadows on Curved Surfaces (1978), Pyramidal Parametrics (1983) and View Interpolation for Image Synthesis (1993). 
== Professional career == Williams worked at the New York Institute of Technology (NYIT) from 1976-1986 on research and commercial animation, and the development of shadow mapping and "mip" texture mapping. Williams, acting as lab director, was also the creator of The Works, a feature film project in development at the lab from roughly 1978-1982. The film was eventually canceled due to the lack of sufficient technology. Subsequently Williams consulted for Jim Henson Associates, independently developed facial tracking for computer animation, and worked for six years in Apple Computer's Advanced Technology Group starting in 1987. While there he collaborated with Eric Chen to pioneer early image based rendering work, developed "Virtual Integral Holography" (with Dan Venolia), created 3D paint systems and contributed to QuickTime VR. He pioneered work in motion capture facial animation systems for over 20 years. In 1997, Williams joined DreamWorks SKG. In 2002 he became Chief Scientist at Walt Disney Animation Studios. In 2006, Williams joined Google and worked with the Google Geo Group (Maps and Earth). In 2008 he became a Principal Member of Research Staff at Nokia, and in 2012 he joined NVIDIA Research. == Publications == • "Shadows for Cel Animation," (with Adam Finkelstein et al.) Computer Graphics (SIGGRAPH 2000 Proceedings) 511-516. • "Motion Signal Processing," (with Armin Bruderlin) Computer Graphics (SIGGRAPH '95 Proceedings) 97-104. • "Animating Images with Line Drawings," (with Pete Litwinowicz) Computer Graphics (SIGGRAPH '94 Proceedings) 409-412. • "View Interpolation for Image Synthesis," (with Shenchang Eric Chen) Computer Graphics (SIGGRAPH '93 Proceedings) 279-288. • "Living Pictures," (invited paper) Computer Animation '93, Switzerland, 1993. • "Shading in Two Dimensions," Graphics Interface '91, Calgary, Alberta, 1991. • "3D Paint," Computer Graphics 24, 2, 1990 Symposium on Interactive 3D Graphics, 1990. 
• "Performance-Driven Facial Animation," Computer Graphics (SIGGRAPH '90 Proceedings) vol. 24, no. 4, 235-242. • "Pyramidal Parametrics," Computer Graphics (SIGGRAPH '83 Proceedings) vol. 17, no 3, 1-11. • "Casting Curved Shadows on Curved Surfaces," Computer Graphics (SIGGRAPH '78 Proceedings) vol. 12, no. 3, 270-274. == Recognition == 1971 - Five State Intercollegiate Chess Championship On August 15, 2001, Williams won the ACM SIGGRAPH Coons Award for Outstanding Creative Contributions to computer graphics. On March 2, 2002, Williams was awarded a 2001 Technical Achievement Award by the Academy of Motion Picture Arts and Sciences for "his pioneering influence in the field of computer-generated animation and effects for motion pictures." 2002 - Honorary Doctorate of Fine Arts, Columbus College of Art and Design. == References == == External links == University of Utah Computer Graphics History New York Institute of Technology Computer Graphics History 2001 Coons Award Announcement The 74th Scientific & Technical Awards of the Academy of Motion Picture Arts and Sciences 2001 | 2002 Lance Williams Obituary - Lawrence Journal World
Wikipedia/Lance_Williams_(graphics_researcher)
Bresenham's line algorithm is a line drawing algorithm that determines the points of an n-dimensional raster that should be selected in order to form a close approximation to a straight line between two points. It is commonly used to draw line primitives in a bitmap image (e.g. on a computer screen), as it uses only integer addition, subtraction, and bit shifting, all of which are very cheap operations in historically common computer architectures. It is an incremental error algorithm, and one of the earliest algorithms developed in the field of computer graphics. An extension to the original algorithm called the midpoint circle algorithm may be used for drawing circles. While algorithms such as Wu's algorithm are also frequently used in modern computer graphics because they can support antialiasing, Bresenham's line algorithm is still important because of its speed and simplicity. The algorithm is used in hardware such as plotters and in the graphics chips of modern graphics cards. It can also be found in many software graphics libraries. Because the algorithm is very simple, it is often implemented in either the firmware or the graphics hardware of modern graphics cards. The label "Bresenham" is used today for a family of algorithms extending or modifying Bresenham's original algorithm. == History == Bresenham's line algorithm is named after Jack Elton Bresenham who developed it in 1962 at IBM. In 2001 Bresenham wrote: I was working in the computation lab at IBM's San Jose development lab. A Calcomp plotter had been attached to an IBM 1401 via the 1407 typewriter console. [The algorithm] was in production use by summer 1962, possibly a month or so earlier. Programs in those days were freely exchanged among corporations so Calcomp (Jim Newland and Calvin Hefte) had copies. When I returned to Stanford in Fall 1962, I put a copy in the Stanford comp center library. 
A description of the line drawing routine was accepted for presentation at the 1963 ACM national convention in Denver, Colorado. It was a year in which no proceedings were published, only the agenda of speakers and topics in an issue of Communications of the ACM. A person from the IBM Systems Journal asked me after I made my presentation if they could publish the paper. I happily agreed, and they printed it in 1965. == Method == The following conventions will be applied: the top-left is (0,0) such that pixel coordinates increase in the right and down directions (e.g. the pixel at (7,4) is directly above the pixel at (7,5)), and the pixel centers have integer coordinates. The endpoints of the line are the pixels at {\displaystyle (x_{0},y_{0})} and {\displaystyle (x_{1},y_{1})}, where the first coordinate of the pair is the column and the second is the row. The algorithm will be initially presented only for the octant in which the segment goes down and to the right ({\displaystyle x_{0}\leq x_{1}} and {\displaystyle y_{0}\leq y_{1}}), and its horizontal projection {\displaystyle x_{1}-x_{0}} is longer than the vertical projection {\displaystyle y_{1}-y_{0}} (the line has a positive slope less than 1). In this octant, for each column x between {\displaystyle x_{0}} and {\displaystyle x_{1}}, there is exactly one row y (computed by the algorithm) containing a pixel of the line, while each row between {\displaystyle y_{0}} and {\displaystyle y_{1}} may contain multiple rasterized pixels. Bresenham's algorithm chooses the integer y corresponding to the pixel center that is closest to the ideal (fractional) y for the same x; on successive columns y can remain the same or increase by 1. The general equation of the line through the endpoints is given by: {\displaystyle {\frac {y-y_{0}}{y_{1}-y_{0}}}={\frac {x-x_{0}}{x_{1}-x_{0}}}}. 
Since we know the column, x, the pixel's row, y, is given by rounding this quantity to the nearest integer: {\displaystyle y={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}(x-x_{0})+y_{0}}. The slope {\displaystyle (y_{1}-y_{0})/(x_{1}-x_{0})} depends on the endpoint coordinates only and can be precomputed, and the ideal y for successive integer values of x can be computed starting from {\displaystyle y_{0}} and repeatedly adding the slope. In practice, the algorithm does not keep track of the y coordinate, which increases by m = ∆y/∆x each time the x increases by one; it keeps an error bound at each stage, which represents the negative of the distance from (a) the point where the line exits the pixel to (b) the top edge of the pixel. This value is first set to {\displaystyle y_{0}-0.5} (due to using the pixel's center coordinates), and is incremented by m each time the x coordinate is incremented by one. If the error becomes greater than 0.5, we know that the line has moved upwards one pixel, and that we must increment our y coordinate and readjust the error to represent the distance from the top of the new pixel – which is done by subtracting one from the error. == Derivation == To derive Bresenham's algorithm, two steps must be taken. The first is transforming the equation of a line from the typical slope-intercept form into something different; the second is using this new equation to draw a line based on the idea of accumulation of error. === Line equation === The slope-intercept form of a line is written as {\displaystyle y=f(x)=mx+b} where {\displaystyle m} is the slope and {\displaystyle b} is the y-intercept. Because this is a function of only {\displaystyle x}, it cannot represent a vertical line. 
Therefore, it would be useful to rewrite this equation as a function of both {\displaystyle x} and {\displaystyle y}, to be able to draw lines at any angle. The angle (or slope) of a line can be stated as "rise over run", or {\displaystyle \Delta y/\Delta x}. Then, using algebraic manipulation, {\displaystyle {\begin{aligned}y&=mx+b\\y&={\frac {\Delta y}{\Delta x}}x+b\\(\Delta x)y&=(\Delta y)x+(\Delta x)b\\0&=(\Delta y)x-(\Delta x)y+(\Delta x)b\end{aligned}}} Letting this last equation be a function of {\displaystyle x} and {\displaystyle y}, it can be written as {\displaystyle f(x,y):=Ax+By+C=0} where the constants are {\displaystyle A=\Delta y=y_{1}-y_{0}}, {\displaystyle B=-\Delta x=-(x_{1}-x_{0})}, and {\displaystyle C=(\Delta x)b=(x_{1}-x_{0})b}. The line is then defined for some constants {\displaystyle A}, {\displaystyle B}, and {\displaystyle C} anywhere {\displaystyle f(x,y)=0}. That is, for any {\displaystyle (x,y)} not on the line, {\displaystyle f(x,y)\neq 0}. This form involves only integers if {\displaystyle x} and {\displaystyle y} are integers, since the constants {\displaystyle A}, {\displaystyle B}, and {\displaystyle C} are defined as integers. As an example, the line {\textstyle y={\frac {1}{2}}x+1} could be written as {\displaystyle f(x,y)=x-2y+2}. 
The point (2,2) is on the line: {\displaystyle f(2,2)=x-2y+2=(2)-2(2)+2=2-4+2=0}. The point (2,3) is not on the line: {\displaystyle f(2,3)=(2)-2(3)+2=2-6+2=-2}, and neither is the point (2,1): {\displaystyle f(2,1)=(2)-2(1)+2=2-2+2=2}. Notice that the points (2,1) and (2,3) are on opposite sides of the line, and that {\displaystyle f(x,y)} evaluates to positive for one and negative for the other. A line splits a plane into halves; the half-plane with a negative {\displaystyle f(x,y)} can be called the negative half-plane, and the other half the positive half-plane. This observation is very important in the remainder of the derivation. === Algorithm === The starting point is on the line, {\displaystyle f(x_{0},y_{0})=0}, only because the line is defined to start and end on integer coordinates (though it is entirely reasonable to want to draw a line with non-integer end points). Keeping in mind that the slope is at most {\displaystyle 1}, the problem now presents itself as to whether the next point should be at {\displaystyle (x_{0}+1,y_{0})} or {\displaystyle (x_{0}+1,y_{0}+1)}. Perhaps intuitively, the point should be chosen based upon which is closer to the line at {\displaystyle x_{0}+1}: if it is closer to the former, include the former point on the line; if the latter, the latter. To answer this, evaluate the line function at the midpoint between these two points: {\displaystyle f(x_{0}+1,y_{0}+{\tfrac {1}{2}})}. If the value of this is positive then the ideal line is below the midpoint and closer to the candidate point {\displaystyle (x_{0}+1,y_{0}+1)}; i.e. the y coordinate should increase. 
Otherwise, the ideal line passes through or above the midpoint, and the y coordinate should stay the same; in which case the point {\displaystyle (x_{0}+1,y_{0})} is chosen. The value of the line function at this midpoint is the sole determinant of which point should be chosen. The adjacent image shows the blue point (2,2) chosen to be on the line with two candidate points in green (3,2) and (3,3). The black point (3, 2.5) is the midpoint between the two candidate points. === Algorithm for integer arithmetic === Alternatively, the difference between points can be used instead of evaluating f(x,y) at midpoints. This alternative method allows for integer-only arithmetic, which is generally faster than using floating-point arithmetic. To derive the other method, define the difference to be as follows: {\displaystyle D_{i}=f(x_{i}+1,y_{i}+{\tfrac {1}{2}})-f(x_{0},y_{0})} For the first decision, this formulation is equivalent to the midpoint method since {\displaystyle f(x_{0},y_{0})=0} at the starting point. Simplifying this expression yields: {\displaystyle {\begin{array}{rclcl}D_{0}&=&\left[A(x_{0}+1)+B\left(y_{0}+{\frac {1}{2}}\right)+C\right]&-&\left[Ax_{0}+By_{0}+C\right]\\&=&\left[Ax_{0}+By_{0}+C+A+{\frac {1}{2}}B\right]&-&\left[Ax_{0}+By_{0}+C\right]\\&=&A+{\frac {1}{2}}B=\Delta y-{\frac {1}{2}}\Delta x\end{array}}} Just as with the midpoint method, if {\displaystyle D_{0}} is positive, then choose {\displaystyle (x_{0}+1,y_{0}+1)}, otherwise choose {\displaystyle (x_{0}+1,y_{0})}. 
If {\displaystyle (x_{0}+1,y_{0})} is chosen, the change in {\displaystyle D_{i}} will be: {\displaystyle \Delta D=f(x_{0}+2,y_{0}+{\tfrac {1}{2}})-f(x_{0}+1,y_{0}+{\tfrac {1}{2}})=A=\Delta y} If {\displaystyle (x_{0}+1,y_{0}+1)} is chosen, the change in {\displaystyle D_{i}} will be: {\displaystyle \Delta D=f(x_{0}+2,y_{0}+{\tfrac {3}{2}})-f(x_{0}+1,y_{0}+{\tfrac {1}{2}})=A+B=\Delta y-\Delta x} If the new D is positive then {\displaystyle (x_{0}+2,y_{0}+1)} is chosen, otherwise {\displaystyle (x_{0}+2,y_{0})}. This decision can be generalized by accumulating the error on each subsequent point. All of the derivation for the algorithm is done. One performance issue is the 1/2 factor in the initial value of D. Since all of this is about the sign of the accumulated difference, everything can be multiplied by 2 with no consequence. This results in an algorithm that uses only integer arithmetic.

plotLine(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    D = 2*dy - dx
    y = y0

    for x from x0 to x1
        plot(x, y)
        if D > 0
            y = y + 1
            D = D - 2*dx
        end if
        D = D + 2*dy

Running this algorithm for {\displaystyle f(x,y)=x-2y+2} from (0,1) to (6,4) yields the following differences with dx=6 and dy=3:

D = 2*3 - 6 = 0
Loop from 0 to 6
* x=0: plot(0, 1), D≤0: D=0+6=6
* x=1: plot(1, 1), D>0: D=6-12=-6, y=1+1=2, D=-6+6=0
* x=2: plot(2, 2), D≤0: D=0+6=6
* x=3: plot(3, 2), D>0: D=6-12=-6, y=2+1=3, D=-6+6=0
* x=4: plot(4, 3), D≤0: D=0+6=6
* x=5: plot(5, 3), D>0: D=6-12=-6, y=3+1=4, D=-6+6=0
* x=6: plot(6, 4), D≤0: D=0+6=6

The result of this plot is shown to the right. 
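The worked trace above can be reproduced by a direct transcription of the pseudocode. This is an illustrative sketch (the function names are mine), covering only the first octant as the text does so far; the floating-point variant implements the error-accumulation description from the Method section for comparison.

```python
def plot_line(x0, y0, x1, y1):
    """Integer-only Bresenham for the first octant (0 <= slope <= 1, x0 <= x1)."""
    points = []
    dx = x1 - x0
    dy = y1 - y0
    D = 2 * dy - dx          # decision variable, scaled by 2 to stay integral
    y = y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if D > 0:            # midpoint lies below the ideal line: step up
            y += 1
            D -= 2 * dx
        D += 2 * dy
    return points

def plot_line_float(x0, y0, x1, y1):
    """Equivalent floating-point version: accumulate the slope m = dy/dx as an error term."""
    points = []
    m = (y1 - y0) / (x1 - x0)
    y, error = y0, 0.0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        error += m
        if error > 0.5:      # the ideal line has left the current pixel row
            y += 1
            error -= 1.0
    return points
```

Running either routine from (0,1) to (6,4) reproduces the pixels of the trace above: (0,1), (1,1), (2,2), (3,2), (4,3), (5,3), (6,4). Multiplying the decision variable by 2 is what lets the integer version avoid the 1/2 factor (and floating point) entirely.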
The plotting can be viewed by plotting at the intersection of lines (blue circles) or filling in pixel boxes (yellow squares). Regardless, the plotting is the same. === All cases === However, as mentioned above this only works for octant zero, that is lines starting at the origin with a slope between 0 and 1 where x increases by exactly 1 per iteration and y increases by 0 or 1. The algorithm can be extended to cover slopes between 0 and -1 by checking whether y needs to increase or decrease (i.e. dy < 0):

plotLineLow(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    yi = 1
    if dy < 0
        yi = -1
        dy = -dy
    end if
    D = (2 * dy) - dx
    y = y0

    for x from x0 to x1
        plot(x, y)
        if D > 0
            y = y + yi
            D = D + (2 * (dy - dx))
        else
            D = D + 2*dy
        end if

By switching the x and y axis an implementation for positive or negative steep slopes can be written as

plotLineHigh(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    xi = 1
    if dx < 0
        xi = -1
        dx = -dx
    end if
    D = (2 * dx) - dy
    x = x0

    for y from y0 to y1
        plot(x, y)
        if D > 0
            x = x + xi
            D = D + (2 * (dx - dy))
        else
            D = D + 2*dx
        end if

A complete solution would need to detect whether x1 > x0 or y1 > y0 and reverse the input coordinates before drawing, thus

plotLine(x0, y0, x1, y1)
    if abs(y1 - y0) < abs(x1 - x0)
        if x0 > x1
            plotLineLow(x1, y1, x0, y0)
        else
            plotLineLow(x0, y0, x1, y1)
        end if
    else
        if y0 > y1
            plotLineHigh(x1, y1, x0, y0)
        else
            plotLineHigh(x0, y0, x1, y1)
        end if
    end if

In low level implementations which access the video memory directly, it would be typical for the special cases of vertical and horizontal lines to be handled separately as they can be highly optimized. Some versions use Bresenham's principles of integer incremental error to perform all octant line draws, balancing the positive and negative error between the x and y coordinates.

plotLine(x0, y0, x1, y1)
    dx = abs(x1 - x0)
    sx = x0 < x1 ? 1 : -1
    dy = -abs(y1 - y0)
    sy = y0 < y1 ?
        1 : -1
    error = dx + dy

    while true
        plot(x0, y0)
        e2 = 2 * error
        if e2 >= dy
            if x0 == x1 break
            error = error + dy
            x0 = x0 + sx
        end if
        if e2 <= dx
            if y0 == y1 break
            error = error + dx
            y0 = y0 + sy
        end if
    end while

== Similar algorithms == The Bresenham algorithm can be interpreted as a slightly modified digital differential analyzer (using 0.5 as error threshold instead of 0, which is required for non-overlapping polygon rasterizing). The principle of using an incremental error in place of division operations has other applications in graphics. It is possible to use this technique to calculate the U,V co-ordinates during raster scan of texture mapped polygons. The voxel heightmap software-rendering engines seen in some PC games also used this principle. Bresenham also published a Run-Slice computational algorithm: while the above described Run-Length algorithm runs the loop on the major axis, the Run-Slice variation loops the other way. This method has been represented in a number of US patents:

US patent 5815163, "Method and apparatus to draw line slices during calculation"
US patent 5740345, "Method and apparatus for displaying computer graphics data stored in a compressed format with an efficient color indexing system"
US patent 5657435, "Run slice line draw engine with non-linear scaling capabilities"
US patent 5627957, "Run slice line draw engine with enhanced processing capabilities"
US patent 5627956, "Run slice line draw engine with stretching capabilities"
US patent 5617524, "Run slice line draw engine with shading capabilities"
US patent 5611029, "Run slice line draw engine with non-linear shading capabilities"
US patent 5604852, "Method and apparatus for displaying a parametric curve on a video display"
US patent 5600769, "Run slice line draw engine with enhanced clipping techniques"

The algorithm has been extended to: Draw lines of arbitrary thickness, an algorithm created by Alan Murphy at IBM. 
Draw multiple kinds of curves (circles, ellipses, cubic, quadratic, and rational Bézier curves) and antialiased lines and curves; a set of algorithms by Alois Zingl. == See also == Digital differential analyzer (graphics algorithm), a simple and general method for rasterizing lines and triangles Xiaolin Wu's line algorithm, a similarly fast method of drawing lines with antialiasing Midpoint circle algorithm, a similar algorithm for drawing circles == Notes == == References == Bresenham, J. E. (1965). "Algorithm for computer control of a digital plotter" (PDF). IBM Systems Journal. 4 (1): 25–30. doi:10.1147/sj.41.0025. Archived from the original (PDF) on May 28, 2008. "The Bresenham Line-Drawing Algorithm", by Colin Flanagan Abrash, Michael (1997). Michael Abrash's graphics programming black book. Albany, NY: Coriolis. pp. 654–678. ISBN 978-1-57610-174-2. A very optimized version of the algorithm in C and assembly for use in video games with complete details of its inner workings Zingl, Alois (2016) [2012]. "A Rasterizing Algorithm for Drawing Curves" (PDF). The Beauty of Bresenham's Algorithms
– also Technical Report 1964 Jan-27 -11- Circle Algorithm TR-02-286 IBM San Jose Lab == External links == Michael Abrash's Graphics Programming Black Book Special Edition: Chapter 35: Bresenham Is Fast, and Fast Is Good The Bresenham Line-Drawing Algorithm by Colin Flanagan National Institute of Standards and Technology page on Bresenham's algorithm Calcomp 563 Incremental Plotter Information Bresenham Algorithm in several programming languages The Beauty of Bresenham’s Algorithm — A simple implementation to plot lines, circles, ellipses and Bézier curves
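For reference, the all-octant integer version given in pseudocode above translates directly into a short runnable Python sketch; replacing the plot call with a generator that yields pixel coordinates is an assumption made here for testability, not part of the original formulation:

```python
def bresenham_line(x0, y0, x1, y1):
    """Yield the integer pixel coordinates of a line from (x0, y0) to (x1, y1).

    Direct translation of the all-octant incremental-error pseudocode:
    a single signed error term balances the x and y steps, so all eight
    octants (and horizontal/vertical lines) are handled by one loop.
    """
    dx = abs(x1 - x0)
    sx = 1 if x0 < x1 else -1
    dy = -abs(y1 - y0)
    sy = 1 if y0 < y1 else -1
    error = dx + dy
    while True:
        yield (x0, y0)
        e2 = 2 * error
        if e2 >= dy:
            if x0 == x1:
                break
            error += dy
            x0 += sx
        if e2 <= dx:
            if y0 == y1:
                break
            error += dx
            y0 += sy
```

For example, `list(bresenham_line(0, 0, 3, 3))` produces the diagonal `[(0, 0), (1, 1), (2, 2), (3, 3)]`.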
Wikipedia/Bresenham's_line_algorithm
An image file format is a file format for a digital image. There are many formats that can be used, such as JPEG, PNG, and GIF. Most formats up until 2022 were for storing 2D images, not 3D ones. The data stored in an image file format may be compressed or uncompressed. If the data is compressed, either lossy or lossless compression may be used. For graphic design applications, vector formats are often used. Some image file formats support transparency. Raster formats are for 2D images. A 3D image can be represented within a 2D format, as in a stereogram or autostereogram, but this 3D image will not be a true light field, and thereby may cause the vergence-accommodation conflict. Image files are composed of digital data in one of these formats so that the data can be displayed on a digital (computer) display or printed out using a printer. A common method for displaying digital image information has historically been rasterization. == Image file sizes == The size of raster image files is positively correlated with the number of pixels in the image and the color depth (bits per pixel). Images can be compressed in various ways, however. A compression algorithm stores either an exact representation or an approximation of the original image in a smaller number of bytes that can be expanded back to its uncompressed form with a corresponding decompression algorithm. Images with the same number of pixels and color depth can have very different compressed file sizes. Considering exactly the same compression, number of pixels, and color depth for two images, different graphical complexity of the original images may also result in very different file sizes after compression due to the nature of compression algorithms. With some compression formats, images that are less complex may result in smaller compressed file sizes. This characteristic sometimes results in a smaller file size for some lossless formats than lossy formats.
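The size relationship described above (pixel count times color depth, converted to bytes) can be sketched in a few lines of Python; the helper name is illustrative, not a standard API:

```python
def uncompressed_size_bytes(width, height, bits_per_pixel):
    """Raw size of an image: number of pixels times color depth, in bytes."""
    return width * height * bits_per_pixel // 8

# A 640 x 480 image at 24-bit color depth:
size = uncompressed_size_bytes(640, 480, 24)
print(size, size // 1024)  # 921600 bytes = 900 KiB
```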
For example, graphically simple images (i.e., images with large continuous regions like line art or animation sequences) may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy JPEG format. For example, a 640 × 480 pixel image with 24-bit color would occupy almost a megabyte of space: 640 × 480 × 24 = 7,372,800 bits = 921,600 bytes = 900 KiB With vector images, the file size increases only with the addition of more vectors. == Image file compression == There are two types of image file compression algorithms: lossless and lossy. Lossless compression algorithms reduce file size while preserving a perfect copy of the original uncompressed image. Lossless compression generally, but not always, results in larger files than lossy compression. Lossless compression should be used to avoid accumulating stages of re-compression when editing images. Lossy compression algorithms preserve a representation of the original uncompressed image that may appear to be a perfect copy, but is not a perfect copy. Often lossy compression is able to achieve smaller file sizes than lossless compression. Most lossy compression algorithms allow for variable compression that trades image quality for file size. == Major graphic file formats == Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the Internet. Some of these graphic formats are listed and briefly described below, separated into the two main families of graphics: raster and vector. Raster images are further divided into formats primarily aimed at (web) delivery (i.e., supporting relatively strong compression) versus formats primarily aimed at authoring or interchange (uncompressed or only relatively weak compression). In addition to straight image formats, Metafile formats are portable formats that can include both raster and vector information. 
Examples are application-independent formats such as WMF and EMF. The metafile format is an intermediate format. Most applications open metafiles and then save them in their own native format. Page description language refers to formats used to describe the layout of a printed page containing text, objects, and images. Examples are PostScript, PDF, and PCL. === Raster formats (2D) === ==== Delivery formats ==== ===== JPEG ===== JPEG (Joint Photographic Experts Group) is a lossy compression method; JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) or the Exif (Exchangeable Image File Format) file format. The JPEG filename extension is JPG or JPEG. Nearly every digital camera can save images in the JPEG format, which supports eight-bit grayscale images and 24-bit color images (eight bits each for red, green, and blue). JPEG applies lossy compression to images, which can result in a significant reduction of the file size. Applications can determine the degree of compression to apply, and the amount of compression affects the visual quality of the result. When not too great, the compression does not noticeably affect or detract from the image's quality, but JPEG files suffer generational degradation when repeatedly edited and saved. (JPEG also provides lossless image storage, but the lossless version is not widely supported.) ===== GIF ===== The GIF (Graphics Interchange Format) is in normal use limited to an 8-bit palette, or 256 colors (while 24-bit color depth is technically possible). GIF is most suitable for storing graphics with few colors, such as simple diagrams, shapes, logos, and cartoon-style images, as it uses LZW lossless compression, which is more effective when large areas have a single color and less effective for photographic or dithered images. Due to GIF's simplicity and age, it achieved almost universal software support. 
Due to its animation capabilities, it is still widely used to provide image animation effects, despite its low compression ratio compared to modern video formats. ===== PNG ===== The PNG (Portable Network Graphics) file format was created as a free, open-source alternative to GIF. The PNG file format supports 8-bit (256 colors) paletted images (with optional transparency for all palette colors) and 24-bit truecolor (16 million colors) or 48-bit truecolor with and without an alpha channel – while GIF supports only 8-bit palettes with a single transparent color. Compared to JPEG, PNG excels when the image has large, uniformly colored areas. Even for photographs – where JPEG is often the choice for final distribution since its lossy compression typically yields smaller file sizes – PNG is still well-suited to storing images during the editing process because of its lossless compression. PNG provides a patent-free replacement for GIF (though GIF is itself now patent-free) and can also replace many common uses of TIFF. Indexed-color, grayscale, and truecolor images are supported, plus an optional alpha channel. The Adam7 interlacing allows an early preview, even when only a small percentage of the image data has been transmitted—useful in online viewing applications like web browsers. PNG can store gamma and chromaticity data, as well as ICC profiles, for accurate color matching on heterogeneous platforms. Animated formats derived from PNG are MNG and APNG, which is backwards compatible with PNG and supported by most browsers. ===== JPEG 2000 ===== JPEG 2000 is a compression standard enabling both lossless and lossy storage. The compression methods used are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also require more computational power to process. JPEG 2000 also adds features that are missing in JPEG. 
It is not nearly as common as JPEG but it is used currently in professional movie editing and distribution (some digital cinemas, for example, use JPEG 2000 for individual movie frames). ===== WebP ===== WebP is an open image format released in 2010 that uses both lossless and lossy compression. It was designed by Google to reduce image file size to speed up web page loading: its principal purpose is to supersede JPEG as the primary format for photographs on the web. WebP is based on VP8's intra-frame coding and uses a container based on RIFF. In 2011, Google added an "Extended File Format" allowing WebP support for animation, ICC profile, XMP and Exif metadata, and tiling. The support for animation allowed for converting older animated GIFs to animated WebP. The WebP container (i.e., RIFF container for WebP) allows feature support over and above the basic use case of WebP (i.e., a file containing a single image encoded as a VP8 key frame). The WebP container provides additional support for: Lossless compression – An image can be losslessly compressed, using the WebP Lossless Format. Metadata – An image may have metadata stored in EXIF or XMP formats. Transparency – An image may have transparency, i.e., an alpha channel. Color Profile – An image may have an embedded ICC profile as described by the International Color Consortium. Animation – An image may have multiple frames with pauses between them, making it an animation. ===== HDR raster formats ===== Most typical raster formats cannot store HDR data (32 bit floating point values per pixel component), which is why some relatively old or complex formats are still predominant here, and worth mentioning separately. Newer alternatives are showing up, though. RGBE is the format for HDR images originating from Radiance and also supported by Adobe Photoshop. JPEG-HDR is a file format from Dolby Labs similar to RGBE encoding, standardized as JPEG XT Part 2. 
JPEG XT Part 7 includes support for encoding floating point HDR images in the base 8-bit JPEG file using enhancement layers encoded with four profiles (A-D); Profile A is based on the RGBE format and Profile B on the XDepth format from Trellis Management. ===== HEIF ===== The High Efficiency Image File Format (HEIF) is an image container format that was standardized by MPEG on the basis of the ISO base media file format. While HEIF can be used with any image compression format, the HEIF standard specifies the storage of HEVC intra-coded images and HEVC-coded image sequences taking advantage of inter-picture prediction. ===== AVIF ===== AVIF is an image container format used to store AV1-encoded images. It was created by the Alliance for Open Media (AOMedia) and is completely open source and royalty-free. It supports encoding images in 8-, 10- and 12-bit depth. ===== JPEG XL ===== JPEG XL is a royalty-free raster-graphics file format that supports both lossy and lossless compression. It supports reversible recompression of existing JPEG files, as well as high-precision HDR (up to 32-bit floating point values per pixel component). It is designed to be usable for both delivery and authoring use cases. ==== Authoring / Interchange formats ==== ===== TIFF ===== The TIFF (Tag Image File Format) format is a flexible format usually using either the TIFF or TIF filename extension. The tag structure was designed to be easily extendible, and many vendors have introduced proprietary special-purpose tags – with the result that no one reader handles every flavor of TIFF file. TIFFs can be lossy or lossless, depending on the technique chosen for storing the pixel data. Some offer relatively good lossless compression for bi-level (black&white) images. Some digital cameras can save images in TIFF format, using the LZW compression algorithm for lossless storage.
TIFF image format is not widely supported by web browsers, but it remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color spaces, such as the CMYK defined by a particular set of printing press inks. OCR (Optical Character Recognition) software packages commonly generate some form of TIFF image (often monochromatic) for scanned text pages. ===== BMP ===== The BMP file format (Windows bitmap) is a raster-based, device-independent file type designed in the early days of computer graphics. It handles graphic files within the Microsoft Windows OS. Typically, BMP files are uncompressed and therefore large and lossless; their advantage is their simple structure and wide acceptance in Windows programs. ===== PPM, PGM, PBM, and PNM ===== Netpbm format is a family including the portable pixmap file format (PPM), the portable graymap file format (PGM), and the portable bitmap file format (PBM). These are either pure ASCII files or raw binary files with an ASCII header that provide very basic functionality and serve as a lowest common denominator for converting pixmap, graymap, or bitmap files between different platforms. Several applications refer to them collectively as PNM ("Portable aNy Map"). ===== Container formats of raster graphics editors ===== These image formats contain various images, layers and objects, out of which the final image is to be composed AFPhoto (Affinity Photo Document) CD5 (Chasys Draw Image) CLIP (Clip Studio Paint) CPT (Corel Photo Paint) KRA (Krita) MDP (Medibang and FireAlpaca) PDN (Paint Dot Net) PLD (PhotoLine Document) PSD (Adobe PhotoShop Document) PSP (Corel Paint Shop Pro) SAI (Paint Tool SAI) XCF (eXperimental Computing Facility format)—native GIMP format ==== Other raster formats ==== BPG (Better Portable Graphics)—an image format from 2014. Its purpose is to replace JPEG when quality or file size is an issue. 
To that end, it features a high data compression ratio, based on a subset of the HEVC video compression standard, including lossless compression. In addition, it supports various meta data (such as EXIF). DEEP—IFF-style format used by TVPaint DRW (Drawn File) ECW (Enhanced Compression Wavelet) FITS (Flexible Image Transport System) FLIF (Free Lossless Image Format)—a discontinued lossless image format which claims to outperform PNG, lossless WebP, lossless BPG and lossless JPEG 2000 in terms of compression ratio. It uses the MANIAC (Meta-Adaptive Near-zero Integer Arithmetic Coding) entropy encoding algorithm, a variant of the CABAC (context-adaptive binary arithmetic coding) entropy encoding algorithm. ICO—container for one or more icons (subsets of BMP and/or PNG) ILBM—IFF-style format for up to 32 bit in planar representation, plus optional 64 bit extensions IMG (ERDAS IMAGINE Image) IMG (Graphics Environment Manager (GEM) image file)—planar, run-length encoded JPEG XR—JPEG standard based on Microsoft HD Photo Nrrd (Nearly raw raster data) PAM (Portable Arbitrary Map)—late addition to the Netpbm family PCX (PiCture eXchange)—obsolete PGF (Progressive Graphics File) SGI (Silicon Graphics Image)—native raster graphics file format for Silicon Graphics workstations SID (multiresolution seamless image database, MrSID) Sun Raster—obsolete TGA (TARGA)—obsolete VICAR file format—NASA/JPL image transport format XISF (Extensible Image Serialization Format) === Vector formats === As opposed to the raster image formats above (where the data describes the characteristics of each individual pixel), vector image formats contain a geometric description which can be rendered smoothly at any desired display size. At some point, all vector graphics must be rasterized in order to be displayed on digital monitors. 
Vector images may also be displayed with analog CRT technology such as that used in some electronic test equipment, medical monitors, radar displays, laser shows and early video games. Plotters are printers that use vector data rather than pixel data to draw graphics. ==== CGM ==== CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster graphics, and text, and is defined by ISO/IEC 8632. All graphical elements can be specified in a textual source file that can be compiled into a binary file or one of two text representations. CGM provides a means of graphics data interchange for computer representation of 2D graphical information independent from any particular application, system, platform, or device. It has been adopted to some extent in the areas of technical illustration and professional design, but has largely been superseded by formats such as SVG and DXF. ==== Gerber format (RS-274X) ==== The Gerber format (aka Extended Gerber, RS-274X) is a 2D bi-level image description format developed by Ucamco. It is the de facto standard format for printed circuit board or PCB software. ==== SVG ==== SVG (Scalable Vector Graphics) is an open standard created and developed by the World Wide Web Consortium to address the need (and attempts of several corporations) for a versatile, scriptable and all-purpose vector format for the web and otherwise. The SVG format does not have a compression scheme of its own, but due to the textual nature of XML, an SVG graphic can be compressed using a program such as gzip. Because of its scripting potential, SVG is a key component in web applications: interactive web pages that look and act like applications. 
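Because SVG is plain XML text, a generic compressor such as gzip works well on it, particularly on repetitive markup (gzip-compressed SVG files are conventionally named .svgz). A small illustrative sketch, with a made-up document for the example:

```python
import gzip

# A contrived SVG document with many similar elements;
# repetitive XML markup compresses very well with gzip.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">'
    + ''.join(f'<circle cx="{i}" cy="50" r="4"/>' for i in range(200))
    + '</svg>'
)
raw = svg.encode("utf-8")
compressed = gzip.compress(raw)  # essentially the contents of an .svgz file
print(len(raw), len(compressed))  # the gzipped form is a fraction of the original
```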
==== Other 2D vector formats ==== AFDesign (Affinity Designer document) AI (Adobe Illustrator Artwork)— proprietary file format developed by Adobe Systems CDR—proprietary format for CorelDRAW vector graphics editor !DRAW—a native vector graphic format (in several backward compatible versions) for the RISC-OS computer system begun by Acorn in the mid-1980s and still present on that platform today DrawingML—used in Office Open XML documents GEM—metafiles interpreted and written by the Graphics Environment Manager VDI subsystem GLE (Graphics Layout Engine)—graphics scripting language HP-GL (Hewlett-Packard Graphics Language)—introduced on Hewlett-Packard plotters, but generalized into a printer language HVIF (Haiku Vector Icon Format) Lottie—format for vector graphics animation MathML (Mathematical Markup Language)—an application of XML for describing mathematical notations NAPLPS (North American Presentation Layer Protocol Syntax) ODG (OpenDocument Graphics) PGML (Precision Graphics Markup Language)—a W3C submission that was not adopted as a recommendation PSTricks and PGF/TikZ are languages for creating graphics in TeX documents QCC—used by Quilt Manager (by Quilt EZ) for designing quilts ReGIS (Remote Graphic Instruction Set)—used by DEC computer terminals Remote imaging protocol—system for sending vector graphics over low-bandwidth links TinyVG—binary, simpler alternative to SVG VML (Vector Markup Language)—obsolete XML-based format Xar—format used in vector applications from Xara XPS (XML Paper Specification)—page description language and a fixed-document format ==== 3D vector formats ==== AMF – Additive Manufacturing File Format Asymptote – A language that lifts TeX to 3D. 
.blend – Blender COLLADA DGN .dwf .dwg .dxf eDrawings .flt – OpenFlight FVRML – and FX3D, function-based extensions of VRML and X3D glTF - 3D asset delivery format (.glb binary version) HSF IGES JT .MA (Maya ASCII format) .MB (Maya Binary format) .OBJ Wavefront OpenGEX – Open Game Engine Exchange PLY POV-Ray scene description language PRC STEP SKP STL – A stereolithography format U3D – Universal 3D file format VRML – Virtual Reality Modeling Language XAML XGL XVL xVRML X3D 3DF .3DM .3ds – Autodesk 3D Studio 3DXML X3D – Vector format used in 3D applications from Xara === Compound formats === These are formats containing both pixel and vector data, and possibly other data, e.g. the interactive features of PDF. EPS (Encapsulated PostScript) MODCA (Mixed Object:Document Content Architecture) PDF (Portable Document Format) PostScript, a page description language with strong graphics capabilities PICT (Classic Macintosh QuickDraw file) WMF / EMF (Windows Metafile / Enhanced Metafile) SWF (Shockwave Flash) XAML, a user interface language using vector graphics for images. === Stereo formats === MPO The Multi Picture Object (.mpo) format consists of multiple JPEG images (Camera & Imaging Products Association) (CIPA). PNS The PNG Stereo (.pns) format consists of a side-by-side image based on PNG (Portable Network Graphics). JPS The JPEG Stereo (.jps) format consists of a side-by-side image format based on JPEG. == See also == Display resolution Display aspect ratio List of common display resolutions Display resolution standards == References ==
Wikipedia/Graphics_standards
Graphics (from Ancient Greek γραφικός (graphikós) 'pertaining to drawing, painting, writing, etc.') are visual images or designs on some surface, such as a wall, canvas, screen, paper, or stone, to inform, illustrate, or entertain. In contemporary usage, the term includes a pictorial representation of data, as in design and manufacture, in typesetting and the graphic arts, and in educational and recreational software. Images that are generated by a computer are called computer graphics. Examples are photographs, drawings, line art, mathematical graphs, line graphs, charts, diagrams, typography, numbers, symbols, geometric designs, maps, engineering drawings, or other images. Graphics often combine text, illustration, and color. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flyer, poster, web site, or book without any other element. The objective can be clarity or effective communication, association with other cultural elements, or merely the creation of a distinctive style. Graphics can be functional or artistic. The latter can be a recorded version, such as a photograph, or an interpretation by a scientist to highlight essential features, or an artist, in which case the distinction with imaginary graphics may become blurred. It can also be used for architecture. == History == The earliest graphics known to anthropologists studying prehistoric periods are cave paintings and markings on boulders, bone, ivory, and antlers, which were created during the Upper Palaeolithic period from 40,000 to 10,000 B.C. or earlier. Many of these were found to record astronomical, seasonal, and chronological details. Some of the earliest graphics and drawings known to the modern world, from almost 6,000 years ago, are those of engraved stone tablets and ceramic cylinder seals, marking the beginning of the historical periods and the keeping of records for accounting and inventory purposes.
Records from Egypt predate these, and papyrus was used by the Egyptians as a material on which to plan the building of pyramids; they also used slabs of limestone and wood. From 600 to 250 BC, the Greeks played a major role in geometry. They used graphics to represent their mathematical theories such as the Circle Theorem and the Pythagorean theorem. In art, "graphics" is often used to distinguish work in a monotone and made up of lines, as opposed to painting. === Drawing === Drawing generally involves making marks on a surface by applying pressure from a tool or moving a tool across a surface. A tool is always used; marks made without a tool would be considered art rather than graphical drawing. Graphical drawing is thus instrument-guided drawing. === Printmaking === Woodblock printing, including the printing of images, first appeared in China after paper was invented (about A.D. 105). In the West, the main techniques have been woodcut, engraving and etching, but there are many others. ==== Etching ==== Etching is an intaglio method of printmaking in which the image is incised into the surface of a metal plate using an acid. The acid eats the metal, leaving behind roughened areas, or, if the surface exposed to the acid is very thin, burning a line into the plate. The use of the process in printmaking is believed to have been invented by Daniel Hopfer (c. 1470–1536) of Augsburg, Germany, who decorated armour in this way. Etching is also used in the manufacturing of printed circuit boards and semiconductor devices. === Line art === Line art is a rather non-specific term sometimes used for any image that consists of distinct straight and curved lines placed against a (usually plain) background, without gradations in shade (darkness) or hue (color) to represent two-dimensional or three-dimensional objects. Line art is usually monochromatic, although lines may be of different colors.
=== Illustration === An illustration is a visual representation such as a drawing, painting, photograph or other work of art that stresses the subject more than form. The aim of an illustration is to elucidate or decorate a story, poem or piece of textual information (such as a newspaper article), traditionally by providing a visual representation of something described in the text. The editorial cartoon, also known as a political cartoon, is an illustration containing a political or social message. Illustrations can be used to display a wide range of subject matter and serve a variety of functions, such as: giving faces to characters in a story displaying a number of examples of an item described in an academic textbook (e.g. a typology) visualizing step-wise sets of instructions in a technical manual communicating subtle thematic tone in a narrative linking brands to the ideas of human expression, individuality, and creativity making a reader laugh or smile === Graphs === A graph or chart is a graphic that represents tabular or numeric data. Charts are often used to make it easier to understand large quantities of data and the relationships between different parts of the data. === Diagrams === A diagram is a simplified and structured visual representation of concepts, ideas, constructions, relations, statistical data, etc., used to visualize and clarify the topic. === Symbols === A symbol, in its basic sense, is a representation of a concept or quantity; i.e., an idea, object, concept, quality, etc. In more psychological and philosophical terms, all concepts are symbolic in nature, and representations for these concepts are simply token artifacts that are allegorical to (but do not directly codify) a symbolic meaning, or symbolism. === Maps === A map is a simplified depiction of a space, a navigational aid which highlights relations between objects within that space.
Usually, a map is a two-dimensional, geometrically accurate representation of a three-dimensional space. One of the first 'modern' maps was made by Waldseemüller. === Photography === One difference between photography and other forms of graphics is that a photographer, in principle, just records a single moment in reality, with seemingly no interpretation. However, a photographer can choose the field of view and angle, and may also use other techniques, such as various lenses to choose the view or filters to change the colors. In recent times, digital photography has opened the way to an infinite number of fast, but strong, manipulations. Even in the early days of photography, there was controversy over photographs of enacted scenes that were presented as 'real life' (especially in war photography, where it can be very difficult to record the original events). Shifting the viewer's eyes ever so slightly with simple pinpricks in the negative could have a dramatic effect. The choice of the field of view can have a strong effect, effectively 'censoring out' other parts of the scene, accomplished by cropping them out or simply not including them in the photograph. This even touches on the philosophical question of what reality is. The human brain processes information based on previous experience, making us see what we want to see or what we were taught to see. Photography does the same, although the photographer interprets the scene for their viewer. === Engineering drawings === An engineering drawing is a type of drawing and is technical in nature, used to fully and clearly define requirements for engineered items. It is usually created in accordance with standardized conventions for layout, nomenclature, interpretation, appearance (such as typefaces and line styles), size, etc. 
=== Computer graphics === There are two types of computer graphics: raster graphics, where each pixel is separately defined (as in a digital photograph), and vector graphics, where mathematical formulas are used to draw lines and shapes, which are then interpreted at the viewer's end to produce the graphic. Using vectors results in infinitely sharp graphics and often smaller files, but complex vector graphics take time to render and may have larger file sizes than a raster equivalent. In 1950, the first computer-driven display was attached to MIT's Whirlwind I computer to generate simple pictures. This was followed by MIT's TX-0 and TX-2; such interactive computing increased interest in computer graphics during the late 1950s. In 1962, Ivan Sutherland invented Sketchpad, an innovative program that influenced alternative forms of interaction with computers. In the mid-1960s, large computer graphics research projects were begun at MIT, General Motors, Bell Labs, and Lockheed Corporation. Douglas T. Ross of MIT developed an advanced compiler language for graphics programming. S. A. Coons, also at MIT, and J. C. Ferguson at Boeing, began work in sculptured surfaces. GM developed their DAC-1 system, and other companies, such as Douglas, Lockheed, and McDonnell, also made significant developments. In 1968, ray tracing was first described by Arthur Appel of the IBM Research Center, Yorktown Heights, N.Y. During the late 1970s, home computers became more powerful, capable of drawing both basic and complex shapes and designs. In the 1980s, artists and graphic designers began to see the personal computer as a serious design tool, one that could save time and draw more accurately than other methods. 3D computer graphics began being used in video games in the 1970s with Spasim for the PLATO system in 1974 and FS1 Flight Simulator in 1979. Atari, Inc.'s Battlezone (1980) exposed 3D graphics to a wide audience.
Other wireframe and flat-shaded 3D games appeared throughout the 1980s. Ultima Underworld: The Stygian Abyss (1992) was one of the first major video games with texture-mapped polygons. Computer systems dating from the 1980s and onwards often use a graphical user interface (GUI) to present data and information with symbols, icons, and pictures, rather than text. 3D computer graphics and creation tools became more accessible to video game and film developers in the late 1980s with SGI computers, which were later used to create some of the first fully computer-generated short films at Pixar. 3D graphics became more popular in the 1990s in video games, multimedia, and animation. In 1995, Toy Story, the first full-length computer-generated animation film, was released in cinemas. Since then, computer graphics have become more accurate and detailed, due to more advanced computers and better 3D modeling software applications, such as Maya, 3D Studio Max, and Cinema 4D. Consumer-level 3D graphics acceleration hardware became common in IBM PC compatibles near the end of the decade. Another use of computer graphics is screensavers, originally intended to prevent the layout of much-used GUIs from 'burning into' the computer screen. They have since evolved into true pieces of art, their practical purpose obsolete; modern screens are not susceptible to such artifacts. === Web graphics === In the 1990s, Internet speeds increased, and web browsers capable of viewing images were released, the first being Mosaic. Websites began to use the GIF format to display small graphics, such as banners, advertisements, and navigation buttons, on web pages. Modern web browsers can now display JPEG, PNG and, increasingly, SVG images in addition to GIFs on web pages. Support for SVG, and to some extent VML, in some modern web browsers has made it possible to display vector graphics that are clear at any size.
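To make the raster/vector contrast concrete, here is an illustrative sketch (not from any particular library; the helper names are invented for this example) that stores the same circle both ways: as a resolution-independent SVG description and as a fixed grid of pixels.

```python
# Vector form: a formula the viewer's renderer evaluates at any resolution.
def circle_svg(cx, cy, r):
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{2*cx}" height="{2*cy}">'
            f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="black"/></svg>')

# Raster form: every pixel is stored explicitly; enlarging it loses sharpness.
def circle_raster(size, cx, cy, r):
    return [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0
             for x in range(size)] for y in range(size)]

svg = circle_svg(16, 16, 10)            # one short, scale-free description
pixels = circle_raster(32, 16, 16, 10)  # a fixed 32x32 grid of samples
```

The vector string stays tiny and sharp at any zoom, while the raster version's size and quality are fixed at creation time, matching the trade-off described above.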
Plugins expand web browser functions to display animated, interactive and 3-D graphics contained within file formats such as SWF and X3D. Modern web graphics can be made with software such as Adobe Photoshop, the GIMP, or Corel Paint Shop Pro. Users of Microsoft Windows have MS Paint, which many find to be lacking in features; this is because MS Paint is a drawing package, not a graphics package. Numerous platforms and websites have been created to cater to web graphics artists and to host their communities. A growing number of people create internet forum signatures—generally appearing after a user's post—and other digital artwork, such as photo manipulations and large graphics. With computer game developers creating their own communities around their products, many more websites are being developed to offer graphics for the fans and to enable them to show their appreciation of such games in their own gaming profiles. == Uses == Graphics are visual elements often used to point readers and viewers to particular information. They are also used to supplement text in an effort to aid readers in their understanding of a particular concept or to make the concept clearer or more interesting. Popular magazines, such as Time, Wired and Newsweek, usually contain graphic material in abundance to attract readers, unlike the majority of scholarly journals. In computing, they are used to create a graphical interface for the user, and graphics are one of the five key elements of multimedia technology. Graphics are among the primary ways of advertising the sale of goods or services. === Business === Graphics are commonly used in business and economics to create financial charts and tables. The term business graphics came into use in the late 1970s, when personal computers became capable of drawing graphs and charts instead of using a tabular format. Business graphics can be used to highlight changes over time.
=== Advertising === Advertising is one of the most profitable uses of graphics; artists often do advertising work or take advertising potential into account when creating art, to increase the chances of selling the artwork. === Political === The use of graphics for overtly political purposes—cartoons, graffiti, poster art, flag design, etc.—is a centuries-old practice which thrives today in every part of the world. The Northern Irish murals are one such example. A more recent example is Shepard Fairey's 2008 U.S. presidential election Barack Obama "Hope" poster. It was first published on the web, but soon found its way onto streets throughout the United States. === Education === Graphics are heavily used in textbooks, especially those concerning subjects such as geography, science, and mathematics, in order to illustrate theories and concepts, such as the human anatomy. Diagrams are also used to label photographs and pictures. Educational animation is an important emerging field of graphics. Animated graphics have obvious advantages over static graphics when explaining subject matter that changes over time. The Oxford Illustrated Dictionary uses graphics and technical illustrations to make reading material more interesting and easier to understand. In an encyclopedia, graphics are used to illustrate concepts and show examples of the particular topic being discussed. In order for a graphic to function effectively as an educational aid, the learner must be able to interpret it successfully. This interpretative capacity is one aspect of graphicacy. === Film and animation === Computer graphics are often used in the majority of new feature films, especially those with a large budget. Films that heavily use computer graphics include The Lord of the Rings film trilogy, the Harry Potter films, Spider-Man and War of the Worlds. 
== Graphics education == The majority of schools, colleges, and universities around the world educate students on the subject of graphic design and art. The subject is taught in a broad variety of ways, each course teaching its own distinctive balance of craft skills and intellectual response to the client's needs. Some graphics courses prioritize traditional craft skills—drawing, printmaking, and typography—over modern craft skills. Other courses may place an emphasis on teaching digital craft skills. Still other courses may downplay the crafts entirely, concentrating on training students to generate novel intellectual responses that engage with the brief. Despite these apparent differences in training and curriculum, the staff and students on any of these courses will generally consider themselves to be graphic designers. The typical pedagogy of a graphic design course (or graphic communication, visual communication, graphic arts or any number of synonymous course titles) will be broadly based on the teaching models developed in the Bauhaus school in Germany or Vkhutemas in Russia. The teaching model will tend to expose students to a variety of craft skills (currently everything from drawing to motion capture), combined with an effort to engage the student with the world of visual culture. == Noted graphic designers == Aldus Manutius designed the first italic type style, which is often used in desktop publishing and graphic design. April Greiman is known for her influential poster design. Paul Rand is well known as a design pioneer for designing many popular corporate logos, including the logos for IBM, NeXT and UPS. William Caslon, during the mid-18th century, designed many typefaces, including ITC Founder's Caslon, ITC Founder's Caslon Ornaments, Caslon Graphique, ITC Caslon No. 224, Caslon Old Face and Big Caslon.
== See also == Editorial cartoon Visualization (graphics) Semiotics == References == == External links == A Historical Timeline of Computer Graphics and Animation
Wikipedia/Graphics
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later (possibly as an animation) or displayed in real time. 3D computer graphics, contrary to what the name suggests, are most often displayed on two-dimensional displays. Unlike 3D film and similar techniques, the result is two-dimensional, without visual depth. Increasingly, however, 3D graphics are also displayed on 3D displays, as in virtual reality systems. 3D graphics stand in contrast to 2D computer graphics, which typically use completely different methods and formats for creation and rendering. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and similarly, 3D may use some 2D rendering techniques. The objects in 3D computer graphics are often referred to as 3D models. Unlike the rendered image, a model's data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model. == History == William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing.
An early example of interactive 3-D computer graphics was explored in 1963 by the Sketchpad program at Massachusetts Institute of Technology's Lincoln Laboratory. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke. 3-D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3-D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II. Virtual reality is a notable application of 3D computer graphics. Although the first headset appeared in the late 1950s, VR did not become popular until the 2000s. In 2012 the Oculus Rift was announced, and the 3D VR headset market has expanded since. == Overview == 3D computer graphics production workflow falls into three basic phases: 3D modeling – the process of forming a computer model of an object's shape Layout and CGI animation – the placement and movement of objects (models, lights etc.) within a scene 3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate (rasterize the scene into) an image === Modeling === Modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects (polygonal modeling, patch modeling and NURBS modeling are some popular techniques used in 3D modeling). Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle).
A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons. === Layout and animation === Before rendering into an image, objects must be laid out in a 3D scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time). Popular methods include keyframing, inverse kinematics, and motion capture; these techniques are often used in combination. As with animation, physical simulation also specifies motion. Stop motion has multiple categories, such as Claymation, cutout, silhouette, Lego, puppets, and pixelation. Claymation animates models made of clay; some examples are Clay Fighter and Clay Jam. Lego animation is one of the more common types of stop motion; Lego stop motion moves the figures themselves between frames. Some examples of this are Lego Island and Lego Harry Potter. === Materials and textures === Materials and textures are properties that the render engine uses to render the model. One can give the model materials to tell the render engine how to treat light when it hits the surface. Textures are used to give the material color using a color or albedo map, to give the surface features using a bump map or normal map, or to deform the model itself using a displacement map. === Rendering === Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3-D computer graphics software or a 3-D graphics API.
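The vertex-and-polygon representation described in the modeling section can be sketched minimally. This is an illustrative example, not any particular package's API: a mesh is just a list of 3D vertices plus faces that index into it, and a face normal (used later by shading) can be derived from three of a face's vertices.

```python
# Cross product and vector subtraction on (x, y, z) tuples.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def face_normal(vertices, face):
    # An n-gon's normal from its first three vertices (assumes a planar face).
    a, b, c = (vertices[i] for i in face[:3])
    return cross(sub(b, a), sub(c, a))

# The smallest possible polygon: a 3-gon (triangle) in the z = 0 plane.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tri = (0, 1, 2)                  # a face is a tuple of vertex indices
normal = face_normal(verts, tri)  # points along +z for this winding order
```

Storing faces as index tuples rather than repeated coordinates is the same idea used by mesh file formats such as Wavefront .obj.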
Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3-D modeling and CAD software may perform 3-D rendering as well (e.g., Autodesk 3ds Max or Blender), dedicated 3-D rendering software also exists (e.g., OTOY's Octane Rendering Engine, Maxon's Redshift). == Software == 3-D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering, or produces 3-D models for analytical, scientific and industrial purposes. === File formats === There are many varieties of files supporting 3-D graphics, for example, Wavefront .obj files, .fbx files and DirectX .x files. Each file type generally tends to have its own unique data structure. Each file format can be accessed through its respective applications (for example, DirectX or Quake files), through third-party standalone programs, or via manual decompilation. === Modeling === 3-D modeling software is a class of 3-D computer graphics software used to produce 3-D models. Individual programs of this class are called modeling applications or modelers. 3-D modeling builds on three display primitives: points, lines, and triangles or other polygonal patches. 3-D modelers allow users to create and alter models via their 3-D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out. 3-D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged in, so they can read and write data in the native formats of other applications.
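The 3D projection step mentioned above can be illustrated with the simplest case, a pinhole (perspective) projection of a camera-space point onto an image plane. This is a hedged sketch of the general idea, not a specific renderer's implementation; the `focal_length` parameter and function name are invented for this example.

```python
def project(point, focal_length=1.0):
    """Project a camera-space (x, y, z) point onto a 2D image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    # Similar triangles: screen coordinates shrink with distance from the camera.
    return (focal_length * x / z, focal_length * y / z)

near = project((2.0, 1.0, 2.0))  # -> (1.0, 0.5)
far = project((2.0, 1.0, 4.0))   # -> (0.5, 0.25): same point, twice as distant
```

Doubling the depth halves the projected coordinates, which is exactly the foreshortening that makes a 2D image read as three-dimensional.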
Most 3-D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation). === Computer-aided design (CAD) === Computer-aided design software may employ the same fundamental 3-D modeling techniques that 3-D modeling software uses, but its goal differs. It is used in computer-aided engineering, computer-aided manufacturing, finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design. === Complementary tools === After producing a video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, or Shake at the high end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves. Use of real-time computer graphics engines to create a cinematic production is called machinima. == Other types of 3D appearance == === Photorealistic 2D graphics === Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photo-realistic effects without the use of filters. === 2.5D === Some video games use 2.5D graphics, involving restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine or for stylistic and gameplay concerns.
By contrast, games using 3D computer graphics without such restrictions are said to use true 3D. === Other forms of animation === Cutout animation uses flat materials such as paper; everything, including the environment, characters, and even some props, is cut out of paper. An example of this is Paper Mario. Silhouette animation is similar to cutout animation, except the shapes are a single solid color, black; Limbo is an example of this. Puppet animation uses dolls and other puppets. An example of this would be Yoshi's Woolly World. Pixelation gives the entire game a pixelated appearance, including the characters and the environment around them. One example of this is seen in Shovel Knight. == See also == Graphics processing unit (GPU) List of 3D computer graphics software 3D data acquisition and object reconstruction 3D projection on 2D planes Geometry processing Isometric graphics in video games and pixel art List of stereoscopic video games Medical animation Render farm == References == == External links == A Critical History of Computer Graphics and Animation (Wayback Machine copy) How Stuff Works - 3D Graphics History of Computer Graphics series of articles (Wayback Machine copy) How 3D Works - Explains 3D modeling for an illuminated manuscript
Wikipedia/3D_graphics
A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuations by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated. Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography". == Types == On the basis of image acquisition and procedures, various types of scanners are available on the market. === Sequential CT === Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops, which is followed by the X-ray tube rotation and acquisition of a slice. The table then increments again, and another slice is taken. Because the table movement stops while each slice is taken, the overall scan time is longer. === Spiral CT === Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned.
These are the dominant type of scanner on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (X-ray tube assembly and detector array on the opposite side of the circle), which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle, as a technique to improve temporal resolution. === Electron beam tomography === Electron beam tomography (EBT) is a specific form of CT in which the X-ray tube is constructed large enough that only the electron beam, travelling between the cathode and anode of the tube, is swept using deflection coils. This type has a major advantage in that sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced when compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array and limited anatomical coverage. === Dual energy CT === Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT scanner may employ dual sources, a single source with a dual-layer detector, or a single source with energy switching to acquire the two data sets. Dual source CT is an advanced scanner with a two X-ray tube detector system, unlike conventional single-tube systems. The two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for shorter breath-hold times.
This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication. Single source with energy switching is another mode of dual energy CT, in which a single tube is operated at two different energies, switching between them frequently. === CT perfusion imaging === CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan, making it better for stroke diagnosis than other CT types. === PET CT === Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body, can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning. PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer. == Medical use == Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography.
It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied. The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015. === Head === CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage, and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer. === Neck === Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scanning often incidentally finds thyroid abnormalities, and so it is often the preferred investigation modality for them. === Lungs === A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality.
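The hypodense/hyperdense readings and "bone windows" mentioned in the head-scan paragraph come from windowing: CT voxel values in Hounsfield units (HU) are mapped to display grey levels through a window chosen for the tissue of interest. The following is an illustrative sketch; the center/width values are conventional radiology defaults, not taken from this article.

```python
def window(hu, center, width):
    """Map a Hounsfield-unit value to an 8-bit display grey level."""
    lo, hi = center - width / 2, center + width / 2
    if hu <= lo:
        return 0        # below the window: rendered black
    if hu >= hi:
        return 255      # above the window: rendered white
    return round(255 * (hu - lo) / (hi - lo))

# A typical bone window (center ~400 HU, width ~1800 HU) keeps dense bone
# visible while compressing soft tissue into dark greys.
bone = window(1000, 400, 1800)   # dense bone -> bright
water = window(0, 400, 1800)     # water (0 HU by definition) -> dark grey
```

Choosing a narrower window (e.g. for brain tissue) spreads the 256 grey levels over a smaller HU range, which is what makes subtle hypodense edema visible at the cost of clipping bone to pure white.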
For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high spatial frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique is called high-resolution CT; it produces a sampling of the lung rather than continuous images. Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of that recommended by established guidelines. === Angiography === Computed tomography angiography (CTA) is a type of contrast CT used to visualize the arteries and veins throughout the body, ranging from arteries serving the brain to those bringing blood to the lungs, kidneys, arms and legs. An example of this type of exam is CT pulmonary angiogram (CTPA), used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure.
=== Cardiac === A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves. The main forms of cardiac CT scanning are: Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease. Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can possibly be done from contrast-enhanced images as well. To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding. === Abdomen and pelvis === CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease, GIT bleeding, and diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. 
They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment; with size being especially important in predicting the time to spontaneous passage of a stone. === Axial skeleton and extremities === For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout. === Biomechanical use === CT is used in biomechanics to quickly reveal the geometry, anatomy, density and elastic moduli of biological tissues. == Other uses == === Industrial use === Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts. === Aviation security === CT scanning has also found an application in transport security (predominantly airport security) where it is currently used in a materials analysis context for explosives detection CTX (explosive-detection device) and is also under consideration for automated baggage/parcel security scanning using computer vision based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). 
Its usage in airport security, pioneered at Shannon Airport in March 2022, ended that airport's ban on liquids over 100 ml; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA has spent $781.2 million on an order for over 1,000 scanners, ready to go live in the summer. === Geological use === X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull in CT images. === Paleontological use === Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation. X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages. For example, fragile structures that could never otherwise be studied can be examined. In addition, one can freely move around models of fossils in virtual 3D space to inspect them without damaging the fossil. === Cultural heritage use === X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts like the Herculaneum papyri in which the material composition has very little variation along the inside of the object.
After scanning these objects, computational methods can be employed to examine the insides of these objects, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts) that provided a "tamper-evident locking mechanism". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics. Recently, CWI in Amsterdam collaborated with the Rijksmuseum to investigate the interior details of art objects in a framework called IntACT. === Microorganism research === Different types of fungus can degrade wood to different degrees; a Belgian research group used three-dimensional X-ray CT with sub-micron resolution to show that fungi can penetrate micropores of 0.6 μm under certain conditions. === Timber sawmill === Sawmills use industrial CT scanners to detect internal defects such as knots, improving the total value of timber production. Many sawmills plan to incorporate this detection tool to improve productivity in the long run, although the initial investment cost is high. == Interpretation of results == === Presentation === The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories: Slices (of varying thickness). Thin slice is generally regarded as planes representing a thickness of less than 3 mm. Thick slice is generally regarded as planes representing a thickness between 3 mm and 5 mm. Projection, including maximum intensity projection and average intensity projection Volume rendering (VR) Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings a bit vague.
The most advanced volume rendering models combine techniques such as coloring and shading to create realistic and readable representations. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. ==== Grayscale ==== Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while steel can completely block the X-ray beam and is therefore responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. ==== Windowing ==== CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU.
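Such a window/level mapping can be sketched as follows (an illustrative Python snippet, not taken from any scanner software; the function name and parameters are hypothetical):

```python
def window_image(hu_values, level, width):
    """Map Hounsfield units to 8-bit grayscale using a window level/width.

    Values below (level - width/2) clip to black (0); values above
    (level + width/2) clip to white (255); values in between are scaled
    linearly across the grayscale ramp.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    out = []
    for hu in hu_values:
        if hu <= lo:
            out.append(0)
        elif hu >= hi:
            out.append(255)
        else:
            out.append(round(255 * (hu - lo) / (hi - lo)))
    return out

# A typical "brain window": level 40 HU, width 80 HU, i.e. 0 HU to 80 HU.
# Air (-1000 HU) and water (0 HU) clip to black; dense bone clips to white.
pixels = window_image([-1000, 0, 40, 80, 500], level=40, width=80)
```

Changing only `level` and `width` re-displays the same underlying data for different tissues, which is why a single scan can be viewed with separate "bone", "lung", or "soft tissue" windows.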
Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan. ==== Multiplanar reconstruction and projections ==== Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because present CT scanners provide almost isotropic resolution. MPR is used in almost every scan. The spine is frequently examined with it. An image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to other vertebral bones. By reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes. New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes; oblique reconstruction is, for instance, better suited for visualizing the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan. Curved-plane reconstruction (or curved planar reformation, CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made. This is helpful in preoperative assessment of a surgical procedure.
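At its core, multiplanar reformatting of an isotropic volume is a re-indexing of the voxel array along a different axis. A minimal sketch (illustrative Python; the tiny volume and function names are hypothetical):

```python
# A tiny volume indexed as volume[z][y][x], i.e. a stack of axial slices.
volume = [
    [[1, 2], [3, 4]],   # axial slice at z=0
    [[5, 6], [7, 8]],   # axial slice at z=1
]
nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

def coronal_slice(vol, y):
    # Coronal reformat: fix y, vary z (rows) and x (columns).
    return [[vol[z][y][x] for x in range(nx)] for z in range(nz)]

def sagittal_slice(vol, x):
    # Sagittal reformat: fix x, vary z (rows) and y (columns).
    return [[vol[z][y][x] for y in range(ny)] for z in range(nz)]

coronal0 = coronal_slice(volume, 0)    # [[1, 2], [5, 6]]
sagittal0 = sagittal_slice(volume, 0)  # [[1, 3], [5, 7]]
```

Because modern scanners acquire nearly isotropic voxels, slices reformatted this way have resolution comparable to the originally acquired axial plane; oblique and curved reformats generalize the same idea by interpolating along non-axis-aligned paths.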
For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view. ==== Volume rendering ==== A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to get multiple models; each anatomical component, such as muscle, bone and cartilage, can be differentiated by assigning it a different colour. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique as it displays only the surfaces that meet a particular threshold density and that face the viewer. In volume rendering, however, transparency, colours and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even at an oblique viewing angle, one part of the image does not hide another. === Image quality === ==== Dose versus image quality ==== An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the risk of adverse side effects, including radiation-induced cancer – a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods that can reduce the exposure to ionizing radiation during a CT scan exist. New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose.
The examination can also be individualized, adjusting the radiation dose to the body type and the organ examined. Different body types and organs require different amounts of radiation, and higher resolution is not always needed, for example in the detection of small pulmonary masses. ==== Artifacts ==== Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following: Streak artifact: Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging). Partial volume effect: This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage). The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. This can be partially overcome by scanning using thinner slices, or an isotropic acquisition on a modern scanner. Ring artifact: Probably the most common mechanical artifact, the image of one or many "rings" appears within an image.
These rings are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defects or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts. Noise: This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy. Windmill artifact: Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch. Beam hardening: This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum, and photons of higher energy are typically attenuated less. Because of this, the mean energy of the spectrum increases when passing through the object, often described as the beam getting "harder". If not corrected, this leads to an increasing underestimation of material thickness. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods. == Advantages == CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less.
Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task. The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose. CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocols, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation dose. CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion. == Adverse effects == === Cancer === The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can involve 100 to 1,000 times the dose of conventional X-rays. However, a lumbar spine X-ray delivers a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation. Large-scale population-based studies have consistently demonstrated that low-dose radiation from CT scans affects cancer incidence in a variety of cancers.
For example, in a large population-based Australian cohort, it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers will result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years. Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received a dose of more than 100 mSv from CT scans in a single day, which addresses criticisms some investigators have raised regarding the effects of protracted versus acute exposure. There are contrarian views and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007.
Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic. A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT of a one-year-old is 0.1%, or 1:1000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly. The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting however the existence of limitations in the evidence on which the review is based. CT scans can be performed with different settings for lower exposure in children, with most manufacturers of CT scanners having this function built in as of 2007. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask. === Contrast reactions === In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in 3% of people. The old radiocontrast agents caused anaphylaxis in 1% of cases while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases.
Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly or in poor health, usually secondary to either anaphylaxis or acute kidney injury. The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast. In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. 
Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought. === Scan dose === The table reports average radiation exposures; however, there can be a wide variation in radiation doses between similar scan types, where the highest dose could be as much as 22 times higher than the lowest dose. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year as background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays. ==== Radiation dose units ==== The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double-strand breaks) of X-ray radiation on the cells' chemical bonds is proportional to that energy. The sievert unit is used in the report of the effective dose.
In the context of CT scans, the sievert does not correspond to the actual radiation dose absorbed by the scanned body part. Instead, it expresses an equivalent scenario: a whole-body dose of a magnitude estimated to have the same probability of inducing cancer as the CT scan in question. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose is the effective dose of a case in which the whole body would actually absorb the same radiation dose, and the sievert unit is used in its report. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism. ==== Effects of radiation ==== Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or about 1 in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
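In practice, the scanner-reported CTDI is combined with the scan length to give a dose–length product (DLP), from which an effective dose in sieverts is roughly estimated using published, region-specific conversion coefficients. A minimal sketch (illustrative Python; the coefficient used here is a commonly quoted adult-abdomen figure and should be checked against current published tables before any real use):

```python
def effective_dose_mSv(ctdi_vol_mGy, scan_length_cm, k_mSv_per_mGy_cm):
    """Rough effective-dose estimate: E ≈ k × DLP, where DLP = CTDIvol × length."""
    dlp = ctdi_vol_mGy * scan_length_cm  # dose-length product, in mGy·cm
    return k_mSv_per_mGy_cm * dlp

# Example: an abdominal scan with CTDIvol of 12 mGy over 30 cm of anatomy,
# using a commonly quoted adult-abdomen coefficient of ~0.015 mSv/(mGy·cm).
dose = effective_dose_mSv(12.0, 30.0, 0.015)   # ≈ 5.4 mSv
background_years = dose / 2.4                  # vs ~2.4 mSv/yr background
```

This kind of estimate illustrates why the effective dose of a routine abdominal CT is often compared to a few years of average background radiation.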
==== Excess doses ==== In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameter has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error. == Procedure == The CT scan procedure varies according to the type of the study and the organ being imaged. The patient lies on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the proper amount and rate of contrast are selected on the pressure injector, a scout image is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data are processed according to the study, and proper windowing is done to make the scans easy to diagnose. === Preparation === Patient preparation may vary according to the type of scan. General patient preparation includes: Signing the informed consent. Removal of metallic objects and jewelry from the region of interest. Changing into a hospital gown according to hospital protocol. Checking of kidney function, especially creatinine and urea levels (in the case of CECT). == Mechanism == Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source.
As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, yet it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units called pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while steel can completely block the X-ray beam and is therefore responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa.
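The sinogram and tomographic reconstruction mentioned above can be illustrated with a toy example: a small 2D "phantom" is summed along columns and rows (projections at two angles, i.e. two rows of a sinogram), and the projections are then smeared back across the image to form an unfiltered backprojection (illustrative Python; real scanners acquire hundreds of angles and use filtered backprojection or iterative reconstruction to remove the blur this naive method leaves):

```python
# A tiny 2D "phantom": a dense square on an empty background.
phantom = [
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 5, 5, 0],
    [0, 0, 0, 0],
]
n = len(phantom)

# Projection at 0°: sum each column; projection at 90°: sum each row.
# Each projection is one row of the sinogram (a line-integral profile).
proj_0 = [sum(phantom[r][c] for r in range(n)) for c in range(n)]
proj_90 = [sum(phantom[r][c] for c in range(n)) for r in range(n)]
sinogram = [proj_0, proj_90]

# Unfiltered backprojection: smear each projection back across the image
# along its acquisition direction and average the contributions.
recon = [[(proj_0[c] + proj_90[r]) / 2 for c in range(n)] for r in range(n)]
```

Even with only two angles, the reconstruction is brightest where the dense square sits; adding more angles (and a ramp filter) sharpens this into a faithful cross-sectional image.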
This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy. === Contrast === Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast. == History == The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972. It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners. === Etymology === The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. 
It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term CAT scan is no longer in technical use because current CT scans allow multiplanar reconstructions. This makes CT scan the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975. == Society and culture == === Campaigns === In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely.
The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Prevalence === Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, a seven- to eightfold increase from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly, from 1.8% to 25%, depending on the emergency physician who saw them. In the emergency department in the United States, CT or MRI imaging is done in 15% of people who present with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been the greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily. Some estimates place this number at 30%. There are a number of reasons for this, including legal concerns, financial incentives, and demand from the public. For example, some healthy people eagerly pay to receive full-body CT scans as screening. In that case, it is not at all clear that the benefits outweigh the risks and costs.
Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost. == Manufacturers == Major manufacturers of CT scanning devices and equipment are: Canon Medical Systems Corporation Fujifilm Healthcare GE HealthCare Neusoft Medical Systems Philips Siemens Healthineers United Imaging == Research == Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors; photons are measured as a voltage on a capacitor which is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage to X-ray intensity relationship. Photon-counting detectors (PCDs) are also affected by noise, but the noise does not change the measured photon counts. PCDs have several potential advantages, including improving signal (and contrast) to noise ratios, reducing doses, improving spatial resolution, and, through use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon counting CT is in use at three sites. Some early research has found the dose reduction potential of photon counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-milliSievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal. == See also == == References == == External links == Development of CT imaging CT Artefacts—PPT by David Platten Filler A (2009-06-30).
"The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357. Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717.
Wikipedia/Computed_axial_tomography
The Blinn–Phong reflection model, also called the modified Phong reflection model, is a modification developed by Jim Blinn to the Phong reflection model in 1977. Blinn–Phong is a shading model used in OpenGL and Direct3D's fixed-function pipeline (before Direct3D 10 and OpenGL 3.1), and is carried out on each vertex as it passes down the graphics pipeline; pixel values between vertices are interpolated by Gouraud shading by default, rather than the more computationally-expensive Phong shading. == Description == In Phong shading, one must continually recalculate the dot product R ⋅ V between a viewer (V) and the beam from a light-source (L) reflected (R) on a surface. If, instead, one calculates a halfway vector between the viewer and light-source vectors, H = (L + V) / ‖L + V‖, then R ⋅ V can be replaced with N ⋅ H, where N is the normalized surface normal. In the above equation, L and V are both normalized vectors, and H is a solution to the equation V = P_H(−L), where P_H is the Householder matrix that reflects a point in the hyperplane that contains the origin and has the normal H. This dot product represents the cosine of an angle that is half of the angle represented by Phong's dot product if V, L, N and R all lie in the same plane. This relation between the angles remains approximately true when the vectors don't lie in the same plane, especially when the angles are small. The angle between N and H is therefore sometimes called the halfway angle.
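As a brief illustration (a sketch added here, not taken from the article's original shader samples), the halfway-vector substitution above can be written in Python; the light, view, and normal directions are invented example values:

```python
import math

def normalize(v):
    # Scale a 3-component vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def halfway(L, V):
    # H = (L + V) / ||L + V||, the Blinn-Phong halfway vector.
    return normalize(tuple(l + v for l, v in zip(L, V)))

# Invented example directions (all normalized): light, viewer, surface normal.
L = normalize((1.0, 1.0, 0.0))
V = normalize((0.0, 1.0, 1.0))
N = (0.0, 1.0, 0.0)

H = halfway(L, V)
# The Blinn-Phong specular term uses N . H in place of Phong's R . V,
# clamped to zero so light heading away from the surface contributes nothing.
spec = max(dot(N, H), 0.0)
```

In a real shader this computation runs per fragment (or per vertex), with `spec` then raised to the shininess exponent α′ described above.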
Considering that the angle between the halfway vector and the surface normal is likely to be smaller than the angle between R and V used in Phong's model (unless the surface is viewed from a very steep angle, for which it is likely to be larger), and since Phong is using (R ⋅ V)^α, an exponent can be set α′ > α such that (N ⋅ H)^α′ is closer to the former expression. For front-lit surfaces (specular reflections on surfaces facing the viewer), α′ = 4α will result in specular highlights that very closely match the corresponding Phong reflections. However, while the Phong reflections are always round for a flat surface, the Blinn–Phong reflections become elliptical when the surface is viewed from a steep angle. This can be compared to the case where the sun is reflected in the sea close to the horizon, or where a far away street light is reflected in wet pavement, where the reflection will always be much more extended vertically than horizontally. Additionally, while it can be seen as an approximation to the Phong model, it produces more accurate models of empirically determined bidirectional reflectance distribution functions than Phong for many types of surfaces. == Efficiency == Blinn–Phong will be faster than Phong in the case where the viewer and light are treated as being very remote, such as approaching or at infinity. This is the case for directional lights and orthographic/isometric cameras. In this case, the halfway vector is independent of position and surface curvature simply because the halfway vector is dependent on the direction to the viewer's position and the direction to the light's position, which individually converge at this remote distance, hence the halfway vector can be thought of as constant in this case.
H therefore can be computed once for each light and then used for the entire frame, or indeed while light and viewpoint remain in the same relative position. The same is not true with Phong's method of using the reflection vector, which depends on the surface curvature and must be recalculated for each pixel of the image (or for each vertex of the model in the case of vertex lighting). In 3D scenes with perspective cameras, this optimization is not possible. == Code samples == === High-Level Shading Language code sample === This sample in High-Level Shading Language is a method of determining the diffuse and specular light from a point light. The light structure, position in space of the surface, view direction vector and the normal of the surface are passed through. A Lighting structure is returned. The code below also needs to clamp certain dot products to zero in the case of negative answers. Without that, light heading away from the camera is treated the same way as light heading towards it. For the specular calculation, an incorrect "halo" of light glancing off the edges of an object and away from the camera might appear as bright as the light directly being reflected towards the camera. === OpenGL Shading Language code sample === This sample in the OpenGL Shading Language consists of two code files, or shaders. The first one is a so-called vertex shader and implements Phong shading, which is used to interpolate the surface normal between vertices. The second shader is a so-called fragment shader and implements the Blinn–Phong shading model in order to determine the diffuse and specular light from a point light source. ==== Vertex shader ==== This vertex shader implements Phong shading: ==== Fragment shader ==== This fragment shader implements the Blinn–Phong shading model and gamma correction: The colors ambientColor, diffuseColor and specColor are not supposed to be gamma corrected.
If they are colors obtained from gamma-corrected image files (JPEG, PNG, etc.), they need to be linearized before working with them, which is done by scaling the channel values to the range [0, 1] and raising them to the gamma value of the image, which for images in the sRGB color space can be assumed to be about 2.2 (even though for this specific color space, a simple power relation is just an approximation of the actual transformation). Modern graphics APIs have the ability to perform this gamma correction automatically when sampling from a texture or writing to a framebuffer. == See also == List of common shading algorithms Phong reflection model for Phong's corresponding model Gamma correction Specular highlight == References ==
Wikipedia/Blinn–Phong_reflection_model
A graphing calculator (also graphics calculator or graphic display calculator) is a handheld computer that is capable of plotting graphs, solving simultaneous equations, and performing other tasks with variables. Most popular graphing calculators are programmable calculators, allowing the user to create customized programs, typically for scientific, engineering or education applications. They have large screens that display several lines of text and calculations. == History == An early graphing calculator was designed in 1921 by electrical engineer Edith Clarke. The calculator was used to solve problems with electrical power line transmission. Casio produced the first commercially available graphing calculator in 1985. Sharp produced its first graphing calculator in 1986, with Hewlett Packard following in 1988, and Texas Instruments in 1990. == Features == === Computer algebra systems === Some graphing calculators have a computer algebra system (CAS), which means that they are capable of producing symbolic results. These calculators can manipulate algebraic expressions, performing operations such as factor, expand, and simplify. In addition, they can give answers in exact form without numerical approximations. Calculators that have a computer algebra system are called symbolic or CAS calculators. === Laboratory usage === Many graphing calculators can be attached to devices like electronic thermometers, pH gauges, weather instruments, decibel and light meters, accelerometers, and other sensors and therefore function as data loggers, as well as WiFi or other communication modules for monitoring, polling and interaction with the teacher. Student laboratory exercises with data from such devices enhance learning of math, especially statistics and mechanics.
=== Games and utilities === Since graphing calculators are typically user-programmable, they are also widely used for utilities and calculator gaming, with a sizable body of user-created game software on most popular platforms. The ability to create games and utilities has spurred the creation of calculator application sites (e.g., Cemetech) which, in some cases, may offer programs created using calculators' assembly language. Even though handheld gaming devices fall in a similar price range, graphing calculators offer superior math programming capability for math-based games. However, due to poor display resolution, slow processor speed and lack of a dedicated keyboard, they are mostly preferred only by high school students. For developers and advanced users like researchers, analysts and gamers, third-party software development involving firmware modifications, whether for powerful gaming or exploiting capabilities beyond the published data sheet and programming language, is a contentious issue with manufacturers and education authorities as it might incite unfair calculator use during standardized high school and college tests where these devices are targeted. == Software Graphing Calculators == There are many graphing calculators that do not require dedicated hardware, but run on a device in a web browser or as an app. Notable graphing calculators of this type include Desmos and GeoGebra. == Graphing calculators in education == North America – high school mathematics teachers allow and even encourage their students to use graphing calculators in class. In some cases (especially in calculus courses) they are required. College Board of the United States – permits the use of most graphing or CAS calculators that do not have a QWERTY-style keyboard for parts of its AP and SAT exams, but the ACT exam and IB schools do not permit the use of calculators with computer algebra systems.
United Kingdom – graphing calculators are allowed for A-level maths courses, however they are not required and the exams are designed to be broadly 'calculator neutral'. Similarly, at GCSE, all current courses include one paper where no calculator of any kind can be used, but students are permitted to use graphical calculators for other papers. The use of graphical calculators at GCSE is not widespread, with cost being a likely factor. The use of CAS is not allowed for either A-level or GCSE. Calculators with a QWERTY keyboard layout are likewise not allowed. The Scottish SQA allows the use of graphic calculators in maths exams (excluding paper 1, which is exclusively non-calculator), however these should either be checked before exams by invigilators or handed out by the exam centre, as certain functions and information are not allowed to be stored on a calculator in the exam. Finland and Slovenia – in these and certain other countries, it is forbidden to use calculators with symbolic calculation (CAS) or 3D graphics features in the matriculation exam. This changed in the case of Finland, however, as symbolic calculators were allowed from spring 2012 onwards. Norway – calculators with wireless communication capabilities, such as IR links, have been banned at some technical universities. Australia – policies vary from state to state. Victoria – the VCE specifies approved calculators as applicable for its mathematics exams. For Further Mathematics an approved graphics calculator (for example TI-83/84, Casio 9860, HP-39G) or CAS (for example TI-89, the ClassPad series, HP-40G) can be used. Mathematical Methods (CAS) has a technology-free examination consisting of short answer and some extended answer questions. It then also has a technology-active examination consisting of extended response and multiple choice questions: a CAS is the assumed technology for Mathematical Methods (CAS).
Specialist Mathematics has a technology-free examination and a technology-active examination where either an approved graphics calculator or CAS may be used. Calculator memories are not required to be cleared. In subjects like Physics and Chemistry, students are only allowed a standard scientific calculator. Western Australia – all tertiary entrance examinations in Mathematics involve a calculator section which assumes the student has a graphics calculator; CAS-enabled calculators are also permitted. In subjects such as Physics, Chemistry and Accounting only non-programmable calculators are permitted. New South Wales – graphics calculators are allowed for the General Mathematics Higher School Certificate exam, but disallowed in the higher level Mathematics courses. China – Only the Shanghai College Entrance Examination allows the use of calculators without graphing and memory. Except for Shanghai, the other provinces and cities do not allow the use of calculators, so calculators in general are banned in primary and secondary education in most parts of China. India – Calculators are prohibited in primary and secondary education. (ICSE allows the Casio fx-82MS, or equivalent scientific calculator in 12th boards). University degree and diploma courses have their own rules on use of permitted models of calculators in exams. Casio's fx-991MS, fx-991ES, fx-100MS, and fx-350MS scientific calculators are used in many university degree and diploma courses. These calculators are permitted for university exams because they are non-programmable; programmable calculators are not allowed. During the online GATE examinations and other competitive examinations, candidates are provided with a virtual scientific calculator as physical calculators of any type are not permitted.
New Zealand – Calculators identified as having high-level algebraic manipulation capability are prohibited in NCEA examinations unless specifically allowed by a standard or subject prescription. This includes calculators such as the TI-89 series. Turkey – any type of calculator whatsoever is prohibited in all primary and high schools. Singapore – graphing calculators are used in junior colleges; one is required for the Mathematics paper of the GCE 'A' Levels, and most schools use the TI-84 Plus or TI-84 Plus Silver Edition. Netherlands – high school students are obliged to use graphing calculators during tests and exams in their final three years. Most students use the TI-83 Plus or TI-84 Plus, but other graphing calculators are allowed, including the Casio fx-9860G and HP-39G. Graphing calculators are almost always allowed to be used during tests instead of normal calculators, which sometimes results in cheat sheets being made beforehand and exchanged before the test starts using link cables. Israel – Graphing calculators are forbidden in the Bagrut (equivalent to the British A-Levels) math exam, in addition to programmable calculators. University degree and diploma courses have their own rules on use and permitted models of calculators in exams. == Programming == Most graphing calculators, as well as some non-graphing scientific calculators and programmer's calculators, can be programmed to automate complex and frequently used series of calculations and those inaccessible from the keyboard. The actual programming can often be done on a computer and later uploaded to the calculators. The most common tools for this include the PC link cable and software for the given calculator, configurable text editors or hex editors, and specialized programming tools such as the below-mentioned implementation of various languages on the computer side.
Earlier calculators stored programs on magnetic cards and the like; increased memory capacity has made storage on the calculator the most common implementation. Some of the newer machines can also use memory cards. Many graphing and scientific calculators will tokenize the program text, replacing textual programming elements with short numerical tokens. For example, take this line of TI-BASIC code: Disp [A]. In a conventional programming language, this line of code would be nine characters long (eight not including a newline character). For a system as slow as a graphing calculator, this is too inefficient for an interpreted language. Tokenized to increase program speed and coding efficiency, the above line of code occupies only three characters: "Disp_" as a single token, "[A]" as a single token, and a newline character. This normally means that single-byte characters are looked up in the standard ASCII table, while two-byte characters (Disp_, for example) are displayed by building a string of single-byte characters but are retained as two-byte tokens in program memory. Many graphical calculators work much like computers and use versions of 7-bit, 8-bit or 9-bit ASCII-derived character sets or even UTF-8 and Unicode. Many of them have a tool similar to the character map on Windows. They also have BASIC-like functions such as chr$, chr, char, asc, and so on, which sometimes may be more Pascal- or C-like. One example may be use of ord, as in Pascal, instead of the asc of many Basic variants, to return the code of a character, i.e. the position of the character in the collating sequence of the machine. A cable and/or IrDA transceiver connecting the calculator to a computer makes the process easier and expands other possibilities such as on-board spreadsheet, database, graphics, and word processing programs. The second option is being able to code the programs on board the calculator itself. 
This option is facilitated by the inclusion of full-screen text editors and other programming tools in the default feature set of the calculator or as optional items. Some calculators have QWERTY keyboards and others can be attached to an external keyboard which can be close to the size of a regular 102-key computer keyboard. Programming is a major use for the software and cables used to connect calculators to computers. The most common programming languages used for calculators are similar to keystroke-macro languages and variants of BASIC. The latter can have a large feature set—approaching that of BASIC as found in computers—including character and string manipulation, advanced conditional and branching statements, sound, graphics, and more including, of course, the huge spectrum of mathematical, string, bit-manipulation, number base, I/O, and graphics functions built into the machine. Languages for programming calculators fall into all of the main groups, i.e. machine code, low-level, mid-level, and high-level languages for systems and application programming, scripting, macro, and glue languages; procedural, functional, imperative and object-oriented programming can be achieved in some cases. Most calculators capable of being connected to a computer can be programmed in assembly language and machine code, although on some calculators this is only possible by using exploits. The most common assembly and machine languages are for TMS9900, SH-3, Zilog Z80, and various Motorola chips (e.g. a modified 68000) which serve as the main processors of the machines, although many (not all) are modified to some extent from their use elsewhere. Some manufacturers do not document, and even mildly discourage, assembly language programming of their machines; as a result, such programs must be put together on a PC and then forced into the calculator by various improvised methods.
Other on-board programming languages include purpose-made languages, variants of Eiffel, Forth, and Lisp, and Command Script facilities which are similar in function to batch/shell programming and other glue languages on computers but generally not as full featured. Ports of other languages like BBC BASIC and development of on-board interpreters for Fortran, REXX, AWK, Perl, Unix shells (e.g., bash, zsh), other shells (DOS/Windows 9x, OS/2, and Windows NT family shells as well as the related 4DOS, 4NT and 4OS2 as well as DCL), COBOL, C, Python, Tcl, Pascal, Delphi, ALGOL, and other languages are at various levels of development. Some calculators, especially those with other PDA-like functions have actual operating systems including the TI proprietary OS for its more recent machines, DOS, Windows CE, and rarely Windows NT 4.0 Embedded et seq, and Linux. Experiments with the TI-89, TI-92, TI-92 Plus and Voyage 200 machines show the possibility of installing some variants of other systems such as a chopped-down variant of CP/M-68K, an operating system which has been used for portable devices in the past. Tools which allow for programming the calculators in C/C++ and possibly Fortran and assembly language are used on the computer side, such as HPGCC, TIGCC and others. Flash memory is another means of conveyance of information to and from the calculator. The on-board BASIC variants in TI graphing calculators and the languages available on the HP-48 series can be used for rapid prototyping by developers, professors, and students, often when a computer is not close at hand. Most graphing calculators have on-board spreadsheets which usually integrate with Microsoft Excel on the computer side. At this time, spreadsheets with macro and other automation facilities on the calculator side are not on the market. 
In some cases, the list, matrix, and data grid facilities can be combined with the native programming language of the calculator to have the effect of a macro- and scripting-enabled spreadsheet. == Gallery == == See also == Personal digital assistant Category:Graphing calculators Category:Plotting software Scientific calculator == References == == Further reading == Dick, Thomas P. (1996). Much More than a Toy: Graphing Calculators in Secondary School Calculus. In P. Gómez and B. Waits (Eds.), Roles of Calculators in the Classroom (pp. 31–46). Una Empresa Docente. Ellington, A. J. (2003). A meta-analysis of the effects of calculators on students' achievement and attitude levels in precollege mathematics classes. Journal for Research in Mathematics Education, 34(5), 433–463. Heller, J. L., Curtis, D. A., Jaffe, R., & Verboncoeur, C. J. (2005). Impact of handheld graphing calculator use on student achievement in algebra 1: Heller Research Associates. Khoju, M., Jaciw, A., & Miller, G. I. (2005). Effectiveness of graphing calculators in K-12 mathematics achievement: A systematic review. Palo Alto, CA: Empirical Education, Inc. National Center for Education Statistics. (2001). The nation's report card: Mathematics 2000 (No. NCES 2001-571). Washington DC: U.S. Department of Education.
Wikipedia/Graphing_calculator
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion. Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience. == Principles of real-time 3D computer graphics == The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame. Real-time graphics systems must render each image in less than 1/30th of a second. 
Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources. === Video game graphics === Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real-time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are typically rendered in real-time—and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate. === Advantages === Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time graphics are used in films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions. 
In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response-time is far slower than the input device—this is justified by the immense difference between the (fast) response time of a human being's motion and the (slow) perceptual speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current Wii remote) typically take much longer to achieve than comparable advancements in display devices. Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism. Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal-generating software may be made while viewing changes to the image in real time. == Rendering pipeline == The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (an object that has width, length, and depth), light sources, lighting models, textures and more. === Architecture === The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
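As an illustrative sketch (invented for this description, not taken from a real renderer), the three conceptual stages can be mocked up in Python with a single hard-coded triangle; real pipelines run the geometry and rasterizer stages on GPU hardware and interpolate depth per fragment rather than using one flat value:

```python
# Toy three-stage pipeline: application -> geometry -> rasterizer.
# The triangle and screen size are invented values for illustration only.
WIDTH, HEIGHT = 16, 16

def application_stage():
    # Produce primitives: here a single triangle with per-vertex depth.
    # Each vertex is (x, y, z) in normalized device coordinates [-1, 1].
    return [((-0.8, -0.8, 0.5), (0.8, -0.8, 0.5), (0.0, 0.8, 0.5))]

def geometry_stage(triangles):
    # Screen mapping: normalized device coordinates -> pixel coordinates.
    def to_screen(v):
        x, y, z = v
        return ((x + 1) * 0.5 * (WIDTH - 1), (y + 1) * 0.5 * (HEIGHT - 1), z)
    return [tuple(to_screen(v) for v in tri) for tri in triangles]

def edge(a, b, p):
    # Signed-area test used for point-in-triangle rasterization.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterizer_stage(triangles):
    # Z-buffer rasterization: keep the nearest fragment per pixel.
    zbuf = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for a, b, c in triangles:
        for y in range(HEIGHT):
            for x in range(WIDTH):
                p = (x + 0.5, y + 0.5)
                w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
                inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                         (w0 <= 0 and w1 <= 0 and w2 <= 0)
                if inside:
                    z = (a[2] + b[2] + c[2]) / 3  # flat depth for simplicity
                    if z < zbuf[y][x]:
                        zbuf[y][x] = z
                        frame[y][x] = 1  # "shade" the fragment
    return frame

frame = rasterizer_stage(geometry_stage(application_stage()))
covered = sum(map(sum, frame))
```

The application stage here only emits scene data; in practice it also handles input, collision detection and animation, as the following sections describe.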
=== Application stage === The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input. Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline. === Geometry stage === The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages. ==== Model and view transformation ==== Before the final model is shown on the output device, the model is transformed onto multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways that manipulate the shape or position of a point, line or shape. ==== Lighting ==== In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. 
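The model/view transformation and lighting steps of the geometry stage can be sketched with plain matrix arithmetic. This is an illustrative fragment, not any particular graphics API: the vertex position, rotation angle, and light direction are made-up values, and lighting is reduced to a simple Lambertian (diffuse) term.

```python
import math

# Sketch of the geometry stage's model transform plus simple diffuse
# (Lambertian) lighting.  All vertex data and light directions are
# invented illustrative values.

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rotate_y(angle):
    """Rotation matrix about the y-axis (a typical model transform)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def lambert(normal, light_dir):
    """Diffuse intensity: the dot product N·L clamped to [0, 1]."""
    d = sum(a * b for a, b in zip(normalize(normal), normalize(light_dir)))
    return max(0.0, d)

# Model transform: rotate a vertex 90 degrees about the y-axis.
vertex = [1.0, 0.0, 0.0]
world = mat_vec(rotate_y(math.pi / 2), vertex)   # ends up near [0, 0, -1]

# With the light pointing along -z, the rotated direction now faces the
# light head-on, giving full diffuse intensity.
intensity = lambert(world, [0.0, 0.0, -1.0])
```

In a full pipeline the same matrix machinery (extended to 4x4 homogeneous matrices) chains the model, view, and projection transforms before lighting is evaluated.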
However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right. ==== Projection ==== Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight. ==== Clipping ==== Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Once those primitives are removed, the primitives that remain will be drawn into new triangles that reach the next stage. ==== Screen mapping ==== The purpose of screen mapping is to find out the coordinates of the primitives determined during the clipping stage. ==== Rasterizer stage ==== The rasterizer stage applies color and turns the graphic elements into pixels or picture elements. == See also == == References == == Bibliography == Möller, Tomas; Haines, Eric (1999). Real-Time Rendering (1st ed.). Natick, MA: A K Peters, Ltd. Salvator, Dave (21 June 2001). "3D Pipeline". Extremetech.com. Extreme Tech. Archived from the original on 17 May 2008. Retrieved 2 Feb 2007. Malhotra, Priya (July 2002). Issues involved in Real-Time Rendering of Virtual Environments (Master's). Blacksburg, VA: Virginia Tech. pp. 20–31. hdl:10919/35382. Retrieved 31 January 2007. Haines, Eric (1 February 2007). "Real-Time Rendering Resources". Retrieved 12 Feb 2007.
== External links == RTR Portal – a trimmed-down "best of" set of links to resources
Wikipedia/Interactive_computer_graphics
A text-to-image model is a machine learning model which takes an input natural language description and produces an image matching that description. Text-to-image models began to be developed in the mid-2010s during the beginnings of the AI boom, as a result of advances in deep neural networks. In 2022, the output of state-of-the-art text-to-image models—such as OpenAI's DALL-E 2, Google Brain's Imagen, Stability AI's Stable Diffusion, and Midjourney—began to be considered to approach the quality of real photographs and human-drawn art. Text-to-image models are generally latent diffusion models, which combine a language model, which transforms the input text into a latent representation, and a generative image model, which produces an image conditioned on that representation. The most effective models have generally been trained on massive amounts of image and text data scraped from the web. == History == Before the rise of deep learning, attempts to build text-to-image models were limited to collages made by arranging existing component images, such as from a database of clip art. The inverse task, image captioning, was more tractable, and a number of image captioning deep learning models came prior to the first text-to-image models. The first modern text-to-image model, alignDRAW, was introduced in 2015 by researchers from the University of Toronto. alignDRAW extended the previously-introduced DRAW architecture (which used a recurrent variational autoencoder with an attention mechanism) to be conditioned on text sequences. Images generated by alignDRAW were low-resolution (32×32 pixels, attained by resizing) and were considered to be 'low in diversity'. The model was able to generalize to objects not represented in the training data (such as a red school bus) and appropriately handled novel prompts such as "a stop sign is flying in blue skies", demonstrating that it was not merely "memorizing" data from the training set. In 2016, Reed, Akata, Yan et al.
became the first to use generative adversarial networks for the text-to-image task. With models trained on narrow, domain-specific datasets, they were able to generate "visually plausible" images of birds and flowers from text captions like "an all black bird with a distinct thick, rounded bill". A model trained on the more diverse COCO (Common Objects in Context) dataset produced images which were "from a distance... encouraging", but which lacked coherence in their details. Later systems include VQGAN-CLIP, XMC-GAN, and GauGAN2. One of the first text-to-image models to capture widespread public attention was OpenAI's DALL-E, a transformer system announced in January 2021. A successor capable of generating more complex and realistic images, DALL-E 2, was unveiled in April 2022, followed by Stable Diffusion, which was publicly released in August 2022. In August 2022, text-to-image personalization was introduced, allowing a model to be taught a new concept using a small set of images of a new object that was not included in the training set of the text-to-image foundation model. This is achieved by textual inversion, namely, finding a new text term that corresponds to these images. Following other text-to-image models, language model-powered text-to-video platforms such as Runway, Make-A-Video, Imagen Video, Midjourney, and Phenaki can generate video from text and/or text/image prompts. == Architecture and training == Text-to-image models have been built using a variety of architectures. The text encoding step may be performed with a recurrent neural network such as a long short-term memory (LSTM) network, though transformer models have since become a more popular option. For the image generation step, conditional generative adversarial networks (GANs) have been commonly used, with diffusion models also becoming a popular option in recent years.
Rather than directly training a model to output a high-resolution image conditioned on a text embedding, a popular technique is to train a model to generate low-resolution images, and use one or more auxiliary deep learning models to upscale it, filling in finer details. Text-to-image models are trained on large datasets of (text, image) pairs, often scraped from the web. With their 2022 Imagen model, Google Brain reported positive results from using a large language model trained separately on a text-only corpus (with its weights subsequently frozen), a departure from the theretofore standard approach. == Datasets == Training a text-to-image model requires a dataset of images paired with text captions. One dataset commonly used for this purpose is the COCO dataset. Released by Microsoft in 2014, COCO consists of around 123,000 images depicting a diversity of objects with five captions per image, generated by human annotators. Originally, the main focus of COCO was on the recognition of objects and scenes in images. Oxford 102 Flowers and CUB-200 Birds are smaller datasets of around 10,000 images each, restricted to flowers and birds, respectively. It is considered less difficult to train a high-quality text-to-image model with these datasets because of their narrow range of subject matter. One of the largest open datasets for training text-to-image models is LAION-5B, containing more than 5 billion image-text pairs. This dataset was created using web scraping and automatic filtering based on similarity to high-quality artwork and professional photographs. Because of this, however, it also contains controversial content, which has led to discussions about the ethics of its use. Some modern AI platforms not only generate images from text but also create synthetic datasets to improve model training and fine-tuning. These datasets help avoid copyright issues and expand the diversity of training data.
== Quality evaluation == Evaluating and comparing the quality of text-to-image models is a problem involving assessing multiple desirable properties. A desideratum specific to text-to-image models is that generated images semantically align with the text captions used to generate them. A number of schemes have been devised for assessing these qualities, some automated and others based on human judgement. A common algorithmic metric for assessing image quality and diversity is the Inception Score (IS), which is based on the distribution of labels predicted by a pretrained Inceptionv3 image classification model when applied to a sample of images generated by the text-to-image model. The score is increased when the image classification model predicts a single label with high probability, a scheme intended to favour "distinct" generated images. Another popular metric is the related Fréchet inception distance, which compares the distribution of generated images and real training images according to features extracted by one of the final layers of a pretrained image classification model. == Impact and applications == == List of notable text-to-image models == == Explanatory notes == == See also == Artificial intelligence art Text-to-video model AI slop == References ==
Wikipedia/Text-to-image_model
Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals including peptides and especially therapeutic antibodies are an increasingly important class of drugs and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed. == Definition == The phrase "drug design" is similar to ligand design (i.e., design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that first must be optimized before a ligand can become a safe and effective drug. These other characteristics are often difficult to predict with rational design techniques. 
Due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computation methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles. == Drug targets == A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology or to the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets) since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross reactivity and hence highest side effect potential. Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly more common. 
In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease. === Drug discovery === ==== Phenotypic drug discovery ==== Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses the process of phenotypic screening on collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. In this approach, the in vivo or in vitro functional activity of substances (such as extracts or natural products) is discovered first, and target identification is performed afterwards. Phenotypic discovery uses a practical and target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention. ==== Rational drug discovery ==== Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying.
This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule. Once a suitable target has been identified, the target is normally cloned, produced, and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined. The search for small molecules that bind to the target is begun by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen may be performed of candidate drugs. Ideally, the candidate drug compounds should be "drug-like", that is they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature. Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally because of the limitations in the current methods for prediction of activity, drug design is still very much reliant on serendipity and bounded rationality. == Computer-aided drug design == The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and if so how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target.
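Druglikeness filters such as Lipinski's Rule of Five reduce to a handful of threshold checks, which can be sketched directly. The molecular property records below are illustrative values (roughly those of aspirin, plus an invented failing molecule), not real assay data; the "at most one violation" tolerance is the commonly used reading of the rule.

```python
# Sketch of a Lipinski "Rule of Five" druglikeness filter.  The four rules:
#   - no more than 5 hydrogen-bond donors
#   - no more than 10 hydrogen-bond acceptors
#   - molecular weight under 500 daltons
#   - calculated logP (octanol-water partition coefficient) under 5

def lipinski_violations(mol):
    """Count how many of the four rules the molecule breaks."""
    rules = [
        mol["h_bond_donors"] <= 5,
        mol["h_bond_acceptors"] <= 10,
        mol["mol_weight"] < 500,
        mol["logp"] < 5,
    ]
    return sum(not ok for ok in rules)

def is_druglike(mol):
    # A common reading of the rule tolerates at most one violation.
    return lipinski_violations(mol) <= 1

# Illustrative property values (roughly those of aspirin).
aspirin = {"h_bond_donors": 1, "h_bond_acceptors": 4,
           "mol_weight": 180.2, "logp": 1.2}

# A made-up large, lipophilic molecule that fails every rule.
greasy = {"h_bond_donors": 7, "h_bond_acceptors": 12,
          "mol_weight": 890.0, "logp": 7.5}
```

In practice such filters are applied with cheminformatics toolkits that compute the descriptors from molecular structure rather than taking them as given.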
These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity. Molecular mechanics methods may also be used to provide semi-quantitative prediction of the binding affinity. Also, knowledge-based scoring function may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural nets or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target. Ideally, the computational method will be able to predict affinity before a compound is synthesized and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures. Computer-aided drug design may be used at any of the following stages of drug discovery: hit identification using virtual screening (structure- or ligand-based design) hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.) 
lead optimization of other pharmaceutical properties while maintaining affinity In order to overcome the insufficient prediction of binding affinity calculated by recent scoring functions, the protein-ligand interaction and compound 3D structure information are used for analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interactions have been developed for improving enrichment and effectively mining potential candidates:
Consensus scoring – selecting candidates by the vote of multiple scoring functions; may lose the relationship between protein-ligand structural information and the scoring criterion.
Cluster analysis – representing and clustering candidates according to protein-ligand 3D information; needs a meaningful representation of protein-ligand interactions.
== Types == There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design. === Ligand-based === Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. A model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), a correlation between calculated properties of molecules and their experimentally determined biological activity, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
=== Structure-based === Structure-based drug design (or direct drug design) relies on knowledge of the three dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates. Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening. A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity. ==== Binding site identification ==== Binding site identification is the first step in structure based design. If the structure of the target or a sufficiently similar homolog is determined in the presence of a bound ligand, then the ligand should be observable in the structure in which case location of the binding site is trivial. 
However, there may be unoccupied allosteric binding sites that may be of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on identification of concave surfaces on the protein that can accommodate drug sized molecules that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding. ==== Scoring functions ==== Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition. Selective high affinity binding to the target is generally desirable since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection. One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form: {\displaystyle \Delta G_{\text{bind}}=\Delta G_{\text{0}}+\Delta G_{\text{hb}}\Sigma _{h-bonds}+\Delta G_{\text{ionic}}\Sigma _{ionic-int}+\Delta G_{\text{lipophilic}}\left\vert A\right\vert +\Delta G_{\text{rot}}{\mathit {NROT}}} where:
ΔG0 – empirically derived offset that in part corresponds to the overall loss of translational and rotational entropy of the ligand upon binding.
ΔGhb – contribution from hydrogen bonding
ΔGionic – contribution from ionic interactions
ΔGlipophilic – contribution from lipophilic interactions, where |A| is the surface area of lipophilic contact between the ligand and receptor
ΔGrot – entropy penalty due to freezing a rotatable bond in the ligand upon binding
A more general thermodynamic "master" equation is as follows: {\displaystyle {\begin{array}{lll}\Delta G_{\text{bind}}=-RT\ln K_{\text{d}}\\[1.3ex]K_{\text{d}}={\dfrac {[{\text{Ligand}}][{\text{Receptor}}]}{[{\text{Complex}}]}}\\[1.3ex]\Delta G_{\text{bind}}=\Delta G_{\text{desolvation}}+\Delta G_{\text{motion}}+\Delta G_{\text{configuration}}+\Delta G_{\text{interaction}}\end{array}}} where:
desolvation – enthalpic penalty for removing the ligand from solvent
motion – entropic penalty for reducing the degrees of freedom when a ligand binds to its receptor
configuration – conformational strain energy required to put the ligand in its "active" conformation
interaction – enthalpic gain for "resolvating" the ligand with its receptor
The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The Master Equation is the linear combination of these components. According to the Gibbs free energy equation, the relation between the dissociation equilibrium constant, Kd, and the components of free energy was built. Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy.
The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally the interaction energy can be estimated using methods such as the change in non polar surface, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model or a more restricted set of ligands and receptors to produce a more accurate but less general "local" model. == Examples == A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995. Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic for Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues. 
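The multiple-linear-regression fit described above can be sketched with ordinary least squares via the normal equations. Everything below is synthetic: the component matrix, the "experimental" affinities, and the generating weights are invented so that the fit has a known answer to recover.

```python
# Sketch: fit the weights of a master-equation-style scoring function by
# multiple linear regression.  Rows are ligand-receptor complexes; columns
# are computed free-energy components (e.g. desolvation, motion,
# configuration, interaction).  All numbers are synthetic.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def solve(a, b):
    """Gauss-Jordan elimination for a small square system a·x = b."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_weights(components, affinities):
    """Normal equations: w = (X^T X)^-1 X^T y."""
    xt = transpose(components)
    xtx = matmul(xt, components)
    xty = [sum(x * y for x, y in zip(row, affinities)) for row in xt]
    return solve(xtx, xty)

# Synthetic training data generated from known weights [1.0, 2.0, -0.5, 3.0].
X = [[1.0, 0.0, 2.0, 1.0],
     [0.0, 1.0, 1.0, 2.0],
     [2.0, 1.0, 0.0, 1.0],
     [1.0, 2.0, 1.0, 0.0],
     [0.5, 0.5, 0.5, 0.5]]
true_w = [1.0, 2.0, -0.5, 3.0]
y = [sum(w * x for w, x in zip(true_w, row)) for row in X]

weights = fit_weights(X, y)   # should recover the generating weights
```

A "global" model would fit such weights over a diverse set of complexes, while a "local" model would restrict the training set to a narrower family of ligands and receptors, trading generality for accuracy.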
Additional examples include: == Drug screening == Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is performed by computer, enabling a large number of molecules to be screened in a short cycle and at low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries into more manageable sizes. == Case studies == == Criticism == It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery. == See also == == References == == External links == Drug+Design at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Drug Design Org: https://www.drugdesign.org/chapters/drug-design/
Wikipedia/Rational_drug_design
In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or approximating their surfaces using polygon meshes. Polygonal modeling is well suited to scanline rendering and is therefore the method of choice for real-time computer graphics. Alternate methods of representing 3D objects include NURBS surfaces, subdivision surfaces, and equation-based (implicit surface) representations used in ray tracers. == Geometric theory and polygons == The basic object used in mesh modeling is a vertex, a point in three-dimensional space. Two vertices connected by a straight line become an edge. Three vertices, connected to each other by three edges, define a triangle, which is the simplest polygon in Euclidean space. More complex polygons can be created out of multiple triangles, or as a single object with more than 3 vertices. Four sided polygons (generally referred to as quads) and triangles are the most common shapes used in polygonal modeling. A group of polygons, connected to each other by shared vertices, is generally referred to as an element. Each of the polygons making up an element is called a face. In Euclidean geometry, any three non-collinear points determine a plane. For this reason, triangles always inhabit a single plane. This is not necessarily true of more complex polygons, however. The flat nature of triangles makes it simple to determine their surface normal, a three-dimensional vector perpendicular to the triangle's surface. Surface normals are useful for determining light transport in ray tracing, and are a key component of the popular Phong shading model. Some rendering systems use vertex normals instead of face normals to create a better-looking lighting system at the cost of more processing. Note that every triangle has two face normals, which point in opposite directions.
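The surface normal of a triangle described above comes directly from the cross product of two of its edge vectors; the vertex coordinates below are arbitrary examples.

```python
# Sketch: face normal of a triangle as the cross product of two edges.
# The winding order of the vertices decides which of the two opposite
# normals you get (the other side is the "backface").

def sub(a, b):
    return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def face_normal(v0, v1, v2):
    """Unit normal of the triangle (v0, v1, v2)."""
    n = cross(sub(v1, v0), sub(v2, v0))
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

# A triangle lying in the xy-plane: counter-clockwise winding yields a
# normal along +z; listing the same vertices in the opposite order would
# yield the opposite (backface) normal along -z.
n = face_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])
```

Vertex normals, by contrast, are typically computed by averaging the face normals of all faces sharing the vertex.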
In many systems only one of these normals is considered valid – the other side of the polygon is referred to as a backface, and can be made visible or invisible depending on the programmer’s desires. Many modeling programs do not strictly enforce geometric theory; for example, it is possible for two vertices to have two distinct edges connecting them, occupying exactly the same spatial location. It is also possible for two vertices to exist at the same spatial coordinates, or two faces to exist at the same location. Situations such as these are usually not desired and many packages support an auto-cleanup function. If auto-cleanup is not present, however, they must be deleted manually. A group of polygons which are connected by shared vertices is referred to as a mesh. In order for a mesh to appear attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge passes through a polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also desirable that the mesh not contain any errors such as doubled vertices, edges, or faces. For some purposes it is important that the mesh be a manifold – that is, that it does not contain holes or singularities (locations where two distinct sections of the mesh are connected by a single vertex). == Construction of polygonal meshes == Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are available for use in constructing polygon meshes. One of the more popular methods of constructing meshes is box modeling, which uses two simple tools: The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the center and one on each edge, creating four smaller squares. The extrude tool is applied to a face or a group of faces. 
It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Thus, performing the extrude operation on a square face would create a cube connected to the surface at the location of the face. A second common modeling method is sometimes referred to as inflation modeling or extrusion modeling. In this method, the user creates a 2D shape which traces the outline of an object from a photograph or a drawing. The user then uses a second image of the subject from a different angle and extrudes the 2D shape into 3D, again following the shape’s outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head and then duplicate the vertices, invert their location relative to some plane, and connect the two pieces together. This ensures that the model will be symmetrical. Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modeling environment. Common primitives include: Cubes Pyramids Cylinders 2D primitives, such as squares, triangles, and disks Specialized or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot. Spheres - Spheres are commonly represented in one of two ways: Icospheres are icosahedrons which possess a sufficient number of triangles to resemble a sphere. UV spheres are composed of quads, and resemble the grid seen on some globes - quads are larger near the "equator" of the sphere and smaller near the "poles," eventually terminating in a single vertex. Finally, some specialized methods of constructing high or low detail meshes exist. Sketch based modeling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high detail meshes based on existing real-world objects in an almost automatic way. 
These devices are very expensive, and are generally only used by researchers and industry professionals but can generate high-accuracy, sub-millimetric digital representations. == Operations == There are a very large number of operations which may be performed on polygonal meshes. Some of these roughly correspond to real-world manipulations of 3D objects, while others do not. Polygonal mesh operations include: Creations - Create new geometry from some other mathematical object Loft - Generate a mesh by creating a shape along two or more profile curves Extrude - Create a surface by sweeping a profile curve or polygon surface along a straight line Revolve - Generate a mesh by revolving (rotating) a shape around an axis Marching cubes - Algorithm to construct a mesh from an implicit function Binary Creations - Create a new mesh from a binary operation of two other meshes Add - Boolean addition of two or more meshes Subtract - Boolean subtraction of two or more meshes Intersect - Boolean intersection Union - Boolean union of two or more meshes Attach - Attach one mesh to another (removing the interior surfaces) Chamfer - Create a beveled surface which smoothly connects two surfaces Deformations - Move only the vertices of a mesh Deform - Systematically move vertices (according to certain functions or rules) Weighted Deform - Move vertices based on localized weights per vertex Morph - Move vertices smoothly between a source and target mesh Bend - Move vertices to "bend" the object Twist - Move vertices to "twist" the object Manipulations - Modify the geometry of the mesh, but not necessarily topology Displace - Introduce additional geometry based on a "displacement map" from the surface Simplify - Systematically remove and average vertices Subdivide - Introduce new vertices into a mesh by subdividing each face. In the case of Catmull-Clark subdivision, for instance, this can also have a smoothing effect on the meshes it is applied to.
Convex Hull - Generate a convex mesh which minimally encloses a given mesh Cut - Create a hole in a mesh surface Stitch - Close a hole in a mesh surface Measurements - Compute some value of the mesh Volume - Compute the 3D volume of a mesh (discrete volumetric integral) Surface Area - Compute the surface area of a mesh (discrete surface integral) Collision Detection - Determine if two complex meshes in motion have collided Fitting - Construct a parametric surface (NURBS, bicubic spline) by fitting it to a given mesh Point-Surface Distance - Compute distance from a point to the mesh Line-Surface Distance - Compute distance from a line to the mesh Line-Surface Intersection - Compute intersection of line and the mesh Cross Section - Compute the curves created by a cross-section of a plane through a mesh Centroid - Compute the centroid, geometric center, of the mesh Center-of-Mass - Compute the center of mass, balance point, of the mesh Circumcenter - Compute the center of a circle or sphere enclosing an element of the mesh Incenter - Compute the center of a circle or sphere enclosed by an element of the mesh == Extensions == Once a polygonal mesh has been constructed, further steps must be taken before it is useful for games, animation, etc. The model must be texture mapped to add colors and texture to the surface and it must be given a skeleton for animation. Meshes can also be assigned weights and centers of gravity for use in physical simulation. To display a model on a computer screen outside of the modeling environment, it is necessary to store that model in one of the file formats listed below, and then use or write a program capable of loading from that format. The two main methods of displaying 3D polygon models are OpenGL and Direct3D. Both of these methods can be used with or without a 3D accelerated graphics card. == Advantages and disadvantages == There are many disadvantages to representing an object using polygons. 
Polygons are incapable of accurately representing curved surfaces, so a large number of them must be used to approximate curves in a visually appealing manner. The use of complex models has a cost in lowered speed. In scanline conversion, each polygon must be converted and displayed, regardless of size, and there are frequently a large number of models on the screen at any given time. Often, programmers must use multiple models at varying levels of detail to represent the same object in order to cut down on the number of polygons being rendered. The main advantage of polygons is that they are faster than other representations. While a modern graphics card can show a highly detailed scene at a frame rate of 60 frames per second or higher, surface modelers, the main way of displaying non-polygonal models, are incapable of achieving an interactive frame rate (10 frame/s or higher) with a similar amount of detail. With sprites, another alternative to polygons, every required pose must be created individually, while a single polygonal model can perform any movement if the appropriate motion data is applied, and can be viewed from any angle. == File formats == A variety of formats are available for storing 3D polygon data. The most popular are: .3ds, .max, which is associated with 3D Studio Max .blend, which is associated with Blender .c4d associated with Cinema 4D .dae (COLLADA) .dxf, .dwg, .dwf, associated with AutoCAD .fbx (Autodesk; formerly Kaydara Filmbox) .jt originally developed by Siemens Digital Industries Software; now an ISO standard.
.lwo, which is associated with Lightwave .lxo, which is associated with MODO .mb and .ma, which are associated with Maya .md2, .md3, associated with the Quake series of games .mdl used with Valve's Source Engine .nif (NetImmerse/gamebryo) .obj (Wavefront's "The Advanced Visualizer") .ply used to store data from 3D scanners .rwx (Renderware) .stl used in rapid prototyping .u3d (Universal 3D) .wrl (VRML 2.0) == See also == Finite element method Mesh generation Polygon (computer graphics) Polygon mesh Vector graphics Geometry processing 3D modeling == References == == Bibliography == OpenGL SuperBible (3rd ed.), by Richard S Wright and Benjamin Lipchak ISBN 0-672-32601-9 OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.4, Fourth Edition by OpenGL Architecture Review Board ISBN 0-321-17348-1 OpenGL(R) Reference Manual : The Official Reference Document to OpenGL, Version 1.4 (4th Edition) by OpenGL Architecture Review Board ISBN 0-321-17383-X Blender documentation: https://web.archive.org/web/20051212074804/http://blender.org/cms/Documentation.628.0.html Maya documentation: packaged with Alias Maya, http://www.alias.com/eng/index.shtml
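As a worked example of the measurement operations listed in the Operations section above, the volume of a closed, consistently wound triangle mesh (the "discrete volumetric integral") can be computed as a sum of signed tetrahedron volumes. A minimal Python sketch, with illustrative names:

```python
def mesh_volume(vertices, triangles):
    """Volume of a closed triangle mesh via the divergence theorem.

    Each face contributes the signed volume of the tetrahedron it forms
    with the origin: v0 . (v1 x v2) / 6. Assumes consistent face winding.
    """
    total = 0.0
    for i, j, k in triangles:
        (x0, y0, z0) = vertices[i]
        (x1, y1, z1) = vertices[j]
        (x2, y2, z2) = vertices[k]
        # Scalar triple product divided by 6 gives the tetrahedron volume.
        total += (x0 * (y1 * z2 - z1 * y2)
                  - y0 * (x1 * z2 - z1 * x2)
                  + z0 * (x1 * y2 - y1 * x2)) / 6.0
    return abs(total)
```

For a unit right tetrahedron the result is 1/6, matching the analytic volume; a non-manifold or inconsistently wound mesh would give a meaningless result, which is one reason the validity conditions discussed earlier matter.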
Wikipedia/Polygonal_modeling
Clipping, in the context of computer graphics, is a method to selectively enable or disable rendering operations within a defined region of interest. Mathematically, clipping can be described using the terminology of constructive geometry. A rendering algorithm only draws pixels in the intersection between the clip region and the scene model. Lines and surfaces outside the view volume (aka. frustum) are removed. Clip regions are commonly specified to improve render performance. A well-chosen clip allows the renderer to save time and energy by skipping calculations related to pixels that the user cannot see. Pixels that will be drawn are said to be within the clip region. Pixels that will not be drawn are outside the clip region. More informally, pixels that will not be drawn are said to be "clipped." == In 2D graphics == In two-dimensional graphics, a clip region may be defined so that pixels are only drawn within the boundaries of a window or frame. Clip regions can also be used to selectively control pixel rendering for aesthetic or artistic purposes. In many implementations, the final clip region is the composite (or intersection) of one or more application-defined shapes, as well as any system hardware constraints. In one example application, consider an image editing program. A user application may render the image into a viewport. As the user zooms and scrolls to view a smaller portion of the image, the application can set a clip boundary so that pixels outside the viewport are not rendered. In addition, GUI widgets, overlays, and other windows or frames may obscure some pixels from the original image. In this sense, the clip region is the composite of the application-defined "user clip" and the "device clip" enforced by the system's software and hardware implementation. Application software can take advantage of this clip information to save computation time, energy, and memory, avoiding work related to pixels that aren't visible.
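The composite clip region described above, the intersection of an application-defined clip with the device clip, can be sketched for axis-aligned rectangles as follows. This is a minimal Python illustration; the (x0, y0, x1, y1) rectangle convention is an assumption, not any particular API:

```python
def intersect_rects(a, b):
    """Intersection of two (x0, y0, x1, y1) rectangles; None if they do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    if x0 >= x1 or y0 >= y1:
        return None  # empty clip region: nothing would be drawn
    return (x0, y0, x1, y1)

def in_clip(x, y, clip):
    """True if pixel (x, y) falls inside the composite clip region."""
    return clip is not None and clip[0] <= x < clip[2] and clip[1] <= y < clip[3]
```

A renderer would compute the composite once (user clip ∩ device clip) and then test or, better, restrict its loops to that rectangle, skipping all work for clipped pixels.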
== In 3D graphics == In three-dimensional graphics, the terminology of clipping can be used to describe many related features. Typically, "clipping" refers to operations in the plane that work with rectangular shapes, and "culling" refers to more general methods to selectively process scene model elements. This terminology is not rigid, and exact usage varies among many sources. Scene model elements include geometric primitives: points or vertices; line segments or edges; polygons or faces; and more abstract model objects such as curves, splines, surfaces, and even text. In complicated scene models, individual elements may be selectively disabled (clipped) for reasons including visibility within the viewport (frustum culling); orientation (backface culling), obscuration by other scene or model elements (occlusion culling, depth- or "z" clipping). Sophisticated algorithms exist to efficiently detect and perform such clipping. Many optimized clipping methods rely on specific hardware acceleration logic provided by a graphics processing unit (GPU). The concept of clipping can be extended to higher dimensionality using methods of abstract algebraic geometry. === Near clipping === Beyond projection of vertices & 2D clipping, near clipping is required to correctly rasterise 3D primitives; this is because vertices may have been projected behind the eye. Near clipping ensures that all the vertices used have valid 2D coordinates. Together with far-clipping it also helps prevent overflow of depth-buffer values. Some early texture mapping hardware (using forward texture mapping) in video games suffered from complications associated with near clipping and UV coordinates. === Occlusion clipping (Z- or depth clipping) === In 3D computer graphics, "Z" often refers to the depth axis in the system of coordinates centered at the viewport origin: "Z" is used interchangeably with "depth", and conceptually corresponds to the distance "into the virtual screen." 
In this coordinate system, "X" and "Y" therefore refer to a conventional cartesian coordinate system laid out on the user's screen or viewport. This viewport is defined by the geometry of the viewing frustum, and parameterizes the field of view. Z-clipping, or depth clipping, refers to techniques that selectively render certain scene objects based on their depth relative to the screen. Most graphics toolkits allow the programmer to specify a "near" and "far" clip depth, and only portions of objects between those two planes are displayed. A creative application programmer can use this method to render visualizations of the interior of a 3D object in the scene. For example, a medical imaging application could use this technique to render the organs inside a human body. A video game programmer can use clipping information to accelerate game logic. For example, a tall wall or building that occludes other game entities can save GPU time that would otherwise be spent transforming and texturing items in the rear areas of the scene; and a tightly integrated software program can use this same information to save CPU time by optimizing out game logic for objects that aren't seen by the player. == Algorithms == Line clipping algorithms: Cohen–Sutherland Liang–Barsky Fast-clipping Cyrus–Beck Nicholl–Lee–Nicholl Skala O(lg N) algorithm Polygon clipping algorithms: Greiner–Hormann Sutherland–Hodgman Weiler–Atherton Vatti Rendering methodologies Painter's algorithm == See also == Boolean operations on polygons Bounding volume Clip space Distance fog Guard-band clipping Hidden-surface determination Pruning (decision trees) Visibility (geometry) == Further reading == GPU Gems: Efficient Occlusion Culling Clipping in Java AWT: java.awt.Graphics.clipRect JavaDoc Clipping in UIKit for iOS (2D): UIRectClip Clipping in SceneKit for iOS (3D): SCNCamera (Adjusting Camera Perspective) Clipping in OpenGL: OpenGL Technical FAQs: Clipping, Culling, and Visibility Testing == References ==
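The Cohen–Sutherland line-clipping algorithm named in the Algorithms list above is compact enough to sketch. This minimal Python version (variable names are illustrative) classifies each endpoint with a 4-bit outcode and repeatedly clips the segment against one rectangle boundary at a time:

```python
# Region outcodes: each bit marks one side of the clip rectangle.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0, y0)-(x1, y1) to the rectangle; None if fully outside."""
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):      # both endpoints inside: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1:            # both endpoints share an outside zone: trivially reject
            return None
        c = c0 or c1           # pick an endpoint that lies outside
        if c & TOP:
            x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
        elif c & RIGHT:
            x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
        else:                  # LEFT
            x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
        if c == c0:
            x0, y0 = x, y
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
```

The trivial accept/reject tests are what make the method cheap: most segments in a typical scene never reach the intersection arithmetic.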
Wikipedia/Clipping_(computer_graphics)
In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound), matter waves such as electrons in electron microscopes, and electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler. The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech. Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude. == Mathematical description == If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law. 
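The inverse-square law just mentioned is easy to state numerically: for an isotropic point source with no absorption, the power P spreads evenly over a sphere of area 4πr². A minimal Python sketch (the function name is illustrative):

```python
import math

def intensity_at(power_watts, radius_m):
    """Intensity (W/m^2) at distance radius_m from an isotropic point source."""
    return power_watts / (4 * math.pi * radius_m ** 2)
```

Doubling the distance quarters the intensity, as the inverse-square law requires.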
Applying the law of conservation of energy, if the net power emanating is constant,

P = \int \mathbf{I} \cdot d\mathbf{A},

where P is the net power radiated; \mathbf{I} is the intensity vector as a function of position; the magnitude |\mathbf{I}| is the intensity as a function of position; and d\mathbf{A} is a differential element of a closed surface that contains the source. If one integrates a uniform intensity, |I| = const., over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes

P = |I| \cdot A_{\mathrm{surf}} = |I| \cdot 4\pi r^2,

where |I| is the intensity at the surface of the sphere, r is the radius of the sphere, and A_{\mathrm{surf}} = 4\pi r^2 is the surface area of a sphere. Solving for |I| gives

|I| = \frac{P}{A_{\mathrm{surf}}} = \frac{P}{4\pi r^2}.

If the medium is damped, then the intensity drops off more quickly than the above equation suggests. Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam, if E is the complex amplitude of the electric field, then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by

\langle U \rangle = \frac{n^2 \varepsilon_0}{2} |E|^2,

and the local intensity is obtained by multiplying this expression by the wave velocity c/n:

I = \frac{c n \varepsilon_0}{2} |E|^2,

where n is the refractive index, c is the speed of light in vacuum, and \varepsilon_0 is the vacuum permittivity.
For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector. == Electron beams == For electron beams, intensity is the probability of electrons reaching some particular position on a detector (e.g. a charge-coupled device) which is used to produce images that are interpreted in terms of both microstructure of inorganic or biological materials, as well as atomic scale structure. The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography. == Alternative definitions == In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer. == See also == Field strength Sound intensity Magnitude (astronomy) == Footnotes == == References ==
Wikipedia/Intensity_(physics)
GeForce is a brand of graphics processing units (GPUs) designed by Nvidia and marketed for the performance market. As of the GeForce 50 series, there have been nineteen iterations of the design. In August 2017, Nvidia stated that "there are over 200 million GeForce gamers". The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive GPUs integrated on motherboards to mainstream add-in retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets. With respect to discrete GPUs, found in add-in graphics-boards, Nvidia's GeForce and AMD's Radeon GPUs are the only remaining competitors in the high-end market. GeForce GPUs are very dominant in the general-purpose graphics processor unit (GPGPU) market thanks to their proprietary Compute Unified Device Architecture (CUDA). GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, to turn it into a high-performance computing device able to execute arbitrary programming code in the same way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code). == Name origin == The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip". The company called out to the public to name the successor to the RIVA TNT2 line of graphics boards. There were over 12,000 entries received and seven winners received a RIVA TNT2 Ultra graphics card as a reward. 
Brian Burke, senior PR manager at Nvidia, told Maximum PC in 2002 that "GeForce" originally stood for "Geometry Force" since GeForce 256 was the first GPU for personal computers to calculate the transform-and-lighting geometry, offloading that function from the CPU. == Graphics processor generations == === GeForce 256 === === GeForce 2 series === Launched in March 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series. === GeForce 3 series === Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts although it never hit the midrange price point. The NV2A developed for the Microsoft Xbox game console is a derivative of the GeForce 3. === GeForce 4 series === Launched in February 2002, the then-high-end GeForce4 Ti (NV25) was mostly a refinement to the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. Another member of the GeForce 4 family, the budget GeForce4 MX was based on the GeForce2, with the addition of some features from the GeForce4 Ti. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the AGP 4× interface, but a few began the transition to AGP 8×. 
=== GeForce FX series === Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak floating point shader performance and excessive heat which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of the GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering". === GeForce 6 series === Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating point shader performance of its predecessor. It also implemented high-dynamic-range imaging and introduced SLI (Scalable Link Interface) and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing). === GeForce 7 series === The seventh generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series that could support the AGP bus. The design was a refined version of GeForce 6, with the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offers new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). These new anti-aliasing modes were later enabled for the GeForce 6 series as well. The GeForce 7950GT featured the highest performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI-Express interface. A 128-bit, eight render output unit (ROP) variant of the 7800 GTX, called the RSX Reality Synthesizer, is used as the main GPU in the Sony PlayStation 3. 
=== GeForce 8 series === Released on November 8, 2006, the eighth-generation GeForce (originally called G80) was the first ever GPU to fully support Direct3D 10. Manufactured using a 90 nm process and built around the new Tesla microarchitecture, it implemented the unified shader model. Initially just the 8800GTX model was launched, while the GTS variant was released months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. The die shrink down to 65 nm and a revision to the G80 design, codenamed G92, were implemented into the 8 series with the 8800GS, 8800GT and 8800GTS-512, first released on October 29, 2007, almost one whole year after the initial G80 release. === GeForce 9 series and 100 series === The first product was released on February 21, 2008. Less than four months after the initial G92 release, all 9-series designs are simply revisions to existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual PCB configuration while still only requiring a single PCI-Express 16x slot. The 9800GX2 utilizes two separate 256-bit memory busses, one for each GPU and its respective 512 MB of memory, which equates to an overall 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between the two chips, thus effectively halving the memory performance of a 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, 256-bit data bus, and 512 MB of GDDR3 memory. Prior to the release, no concrete information was known except that the officials claimed the next generation products had close to 1 TFLOPS processing power with the GPU cores still being manufactured in the 65 nm process, and reports about Nvidia downplaying the significance of Direct3D 10.1.
In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, namely the GeForce 100 Series, which consists of rebadged 9 Series parts. GeForce 100 series products were not available for individual purchase. === GeForce 200 series and 300 series === Based on the GT200 graphics processor consisting of 1.4 billion transistors, codenamed Tesla, the 200 series was launched on June 16, 2008. The next generation of the GeForce series takes the card-naming scheme in a new direction, by replacing the series number (such as 8800 for 8-series cards) with the GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among other similar models), and then adding model-numbers such as 260 and 280 after that. The series features the new GT200 core on a 65nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310 was released on November 27, 2009, which is a rebrand of GeForce 210. The 300 series cards are rebranded DirectX 10.1 compatible GPUs from the 200 series, which were not available for individual purchase. === GeForce 400 series and 500 series === On April 7, 2010, Nvidia released the GeForce GTX 470 and GTX 480, the first cards based on the new Fermi architecture, codenamed GF100; they were the first Nvidia GPUs to utilize 1 GB or more of GDDR5 memory. The GTX 470 and GTX 480 were heavily criticized due to high power use, high temperatures, and very loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card as of its introduction. In November 2010, Nvidia released a new flagship GPU based on an enhanced GF100 architecture (GF110) called the GTX 580. It featured higher performance, less power utilization, heat and noise than the preceding GTX 480. This GPU received much better reviews than the GTX 480. Nvidia later also released the GTX 590, which packs two GF110 GPUs on a single card. 
=== GeForce 600 series, 700 series and 800M series === In September 2010, Nvidia announced that the successor to Fermi microarchitecture would be the Kepler microarchitecture, manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply their top-end GK110 cores for use in Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched their own annual refresh in early 2012, the Radeon HD 7000 series, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for their mid-range segment of their lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured only a slightly cut-down GK104 core and was very close in performance to the GTX 680. With the GTX Titan, Nvidia also released GPU Boost 2.0, which would allow the GPU clock speed to increase indefinitely until a user-set temperature limit was reached without passing a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST based on the GK106 core, in response to AMD's Radeon HD 7790 release. At the end of May 2013, Nvidia announced the 700 series, which was still based on the Kepler architecture, however it featured a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down Titan that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but did not have the unlocked double-precision cores and was equipped with 3 GB of memory. At the same time, Nvidia announced ShadowPlay, a screen capture solution that used an integrated H.264 encoder built into the Kepler architecture that Nvidia had not revealed previously. 
It could be used to record gameplay without a capture card, and with negligible performance decrease compared to software recording solutions, and was available even on the previous generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and would not be released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770, a rebrand of the GTX 680. It was followed shortly after by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another feature of the Kepler architecture that Nvidia had left unmentioned, which allowed the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) in order to combat tearing and judder. However, in October, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia slashed the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a fully enabled GK110 core with 2,880 CUDA cores, even more powerful than the GTX Titan, along with enhancements to the power delivery system which improved overclocking, and managed to pull ahead of AMD's new release. The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture. === GeForce 900 series === In March 2013, Nvidia announced that the successor to Kepler would be the Maxwell microarchitecture. It was released in September 2014, with the GM10x series chips emphasizing the new power-efficiency architectural improvements in OEM and low-TDP products: the desktop GTX 750/750 Ti and the mobile GTX 850M/860M. Later that year, Nvidia pushed the TDP up with the GM20x chips for power users, skipping the 800 series for desktop entirely, with the 900 series of GPUs.
This was the last GeForce series to support analog video output through DVI-I. However, analog display adapters exist that can convert a digital DisplayPort, HDMI, or DVI-D signal. === GeForce 10 series === In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture; the first cards based on it, the GTX 1080 and GTX 1070, were announced on May 6, 2016, and were released several weeks later on May 27 and June 10, respectively. Architectural improvements include the following: In Pascal, an SM (streaming multiprocessor) consists of 128 CUDA cores. Kepler packed 192, Fermi 32 and Tesla only 8 CUDA cores into an SM; the GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA cores, an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units. GDDR5X – a new memory standard supporting 10 Gbit/s data rates and an updated memory controller. Only the Nvidia Titan X (and Titan Xp), GTX 1080, GTX 1080 Ti, and GTX 1060 (6 GB version) support GDDR5X. The GTX 1070 Ti, GTX 1070, GTX 1060 (3 GB version), GTX 1050 Ti, and GTX 1050 use GDDR5. Unified memory – a memory architecture where the CPU and GPU can access both main system memory and memory on the graphics card with the help of a technology called "Page Migration Engine". NVLink – a high-bandwidth bus between the CPU and GPU, and between multiple GPUs, allowing much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s. 16-bit (FP16) floating-point operations can be executed at twice the rate of 32-bit floating-point operations ("single precision"), and 64-bit floating-point operations ("double precision") execute at half the rate of 32-bit floating-point operations (versus Maxwell's 1/32 rate). A more advanced process node, TSMC 16 nm, instead of the older TSMC 28 nm. === GeForce 20 series and 16 series === In August 2018, Nvidia announced the GeForce successor to Pascal.
The new microarchitecture name was revealed as "Turing" at the Siggraph 2018 conference. This new GPU microarchitecture is aimed at accelerating real-time ray tracing and AI inferencing. It features new ray-tracing units (RT Cores) which dedicate processors to ray tracing in hardware. It supports the DXR extension in Microsoft DirectX 12. Nvidia claims the new architecture is up to 6 times faster than the older Pascal architecture. A whole new Tensor core design since Volta introduces AI deep learning acceleration, which allows the utilisation of DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide crisper imagery with less impact on performance. Turing also changes its integer execution unit, which can execute in parallel with the floating point data path. A new unified cache architecture which doubles its bandwidth compared with previous generations was also announced. The new GPUs were revealed as the Quadro RTX 8000, Quadro RTX 6000 and Quadro RTX 5000. The high-end Quadro RTX 8000 features 4,608 CUDA cores and 576 Tensor cores with 48 GB of VRAM. Later, during the Gamescom press conference, Nvidia's CEO Jensen Huang unveiled the new GeForce RTX series with the RTX 2080 Ti, 2080, and 2070, which use the Turing architecture. The first Turing cards were slated to ship to consumers on September 20, 2018. Nvidia announced the RTX 2060 on January 6, 2019, at CES 2019. On July 2, 2019, Nvidia announced the GeForce RTX Super line of cards, a 20 series refresh which comprises higher-spec versions of the RTX 2060, 2070 and 2080. The RTX 2070 and 2080 were discontinued. In February 2019, Nvidia announced the GeForce 16 series. It is based on the same Turing architecture used in the GeForce 20 series, but disables the Tensor (AI) and RT (ray tracing) cores to provide more affordable graphics cards for gamers while still achieving higher performance than the respective cards of previous GeForce generations.
Like the RTX Super refresh, on October 29, 2019, Nvidia announced the GTX 1650 Super and 1660 Super cards, which replaced their non-Super counterparts. On June 28, 2022, Nvidia quietly released their GTX 1630 card, which was meant for low-end gamers. === GeForce 30 series === Nvidia officially announced at the GeForce Special Event that the successor to the GeForce 20 series would be the 30 series, built on the Ampere microarchitecture. The GeForce Special Event took place on September 1, 2020, and set September 17 as the official release date for the RTX 3080 GPU, September 24 for the RTX 3090 GPU and October 29 for the RTX 3070 GPU. The final GPU launch of the series was the RTX 3090 Ti, the highest-end Nvidia GPU on the Ampere microarchitecture. It features a fully unlocked GA102 die built on the Samsung 8 nm node due to supply shortages with TSMC. The RTX 3090 Ti has 10,752 CUDA cores, 336 Tensor cores and texture mapping units, 112 ROPs, 84 RT cores, and 24 gigabytes of GDDR6X memory with a 384-bit bus. Compared to the RTX 2080 Ti, the 3090 Ti has 6,400 more CUDA cores. Due to the global chip shortage, the 30 series was controversial, as scalping and high demand meant that GPU prices skyrocketed for the 30 series and the AMD RX 6000 series. === GeForce 40 series === On September 20, 2022, Nvidia announced its GeForce 40 series graphics cards. These came out as the RTX 4090 on October 12, 2022, the RTX 4080 on November 16, 2022, the RTX 4070 Ti on January 3, 2023, the RTX 4070 on April 13, 2023, the RTX 4060 Ti on May 24, 2023, and the RTX 4060 on June 29, 2023. These were built on the Ada Lovelace architecture, with part numbers "AD102", "AD103", "AD104", "AD106" and "AD107". These parts are manufactured using the TSMC N4 process node, a custom-designed process for Nvidia.
At the time, the RTX 4090 was the fastest mainstream-market chip that had been released by a major company, with 16,384 CUDA cores, boost clocks of 2.2 / 2.5 GHz, 24 GB of GDDR6X, a 384-bit memory bus, 128 3rd-gen RT cores, 512 4th-gen Tensor cores, DLSS 3.0 and a TDP of 450 W. From October to December 2024, the RTX 4090, 4080, 4070 and related variants were officially discontinued, marking the end of a two-year production run, in order to free up production space for the coming RTX 50 series. Notably, a China-only edition of the RTX 4090 was released, named the RTX 4090D (Dragon). The RTX 4090D features a shaved-down AD102 die with 14,592 CUDA cores, down from the 16,384 cores of the original 4090. This was primarily owing to the United States Department of Commerce enacting restrictions in 2023 on exporting the Nvidia RTX 4090 to certain countries. This was targeted mainly towards China as an attempt to halt its AI development. The 40 series saw Nvidia re-releasing the 'Super' variant of graphics cards, not seen since the 20 series, as well as being the first generation in Nvidia's lineup to combine both 'Super' and 'Ti' brandings together. This began with the release of the RTX 4070 Super on January 17, 2024, followed by the RTX 4070 Ti Super on January 24, 2024, and the RTX 4080 Super on January 31, 2024. === GeForce 50 series (Current) === The GeForce 50 series, based on the Blackwell microarchitecture, was announced at CES 2025, with availability starting in January. Nvidia CEO Jensen Huang presented prices for the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090. == Variants == === Mobile GPUs === Since the GeForce 2 series, Nvidia has produced a number of graphics chipsets for notebook computers under the GeForce Go branding. Most of the features present in the desktop counterparts are present in the mobile ones.
These GPUs are generally optimized for lower power consumption and less heat output in order to be used in notebook PCs and small desktops. Beginning with the GeForce 8 series, the GeForce Go brand was discontinued and the mobile GPUs were integrated with the main line of GeForce GPUs, but with their names suffixed with an M. This ended in 2016 with the launch of the laptop GeForce 10 series – Nvidia dropped the M suffix, opting to unify the branding between their desktop and laptop GPU offerings, as notebook Pascal GPUs are almost as powerful as their desktop counterparts (something Nvidia tested with their "desktop-class" notebook GTX 980 GPU back in 2015). The GeForce MX brand, previously used by Nvidia for their entry-level desktop GPUs, was revived in 2017 with the release of the GeForce MX150 for notebooks. The MX150 is based on the same Pascal GP108 GPU as used on the desktop GT 1030, and was quietly released in June 2017. === Small form factor GPUs === Similar to the mobile GPUs, Nvidia also released a few GPUs in "small form factor" format, for use in all-in-one desktops. These GPUs are suffixed with an S, similar to the M used for mobile products. === Integrated desktop motherboard GPUs === Beginning with the nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009. After the nForce range was discontinued, Nvidia released their Ion line in 2009, which consisted of an Intel Atom CPU partnered with a low-end GeForce 9 series GPU, fixed on the motherboard. Nvidia released an upgraded Ion 2 in 2010, this time containing a low-end GeForce 300 series GPU. == Nomenclature == From the GeForce 4 series until the GeForce 9 series, the naming scheme below is used. Since the release of the GeForce 100 series of GPUs, Nvidia changed their product naming scheme to the one below.
Earlier cards such as the GeForce4 follow a similar pattern; cf. Nvidia's performance graph. == Graphics device drivers == === Official proprietary === Nvidia develops and publishes GeForce drivers for Windows 10 x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, OS X 10.5 and later, Solaris x86/x86-64 and FreeBSD x86/x86-64. A current version can be downloaded from Nvidia, and most Linux distributions contain it in their own repositories. Nvidia GeForce driver 340.24, from 8 July 2014, supports the EGL interface, enabling Wayland support in conjunction with this driver. This may be different for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. On the same day the Vulkan graphics API was publicly released, Nvidia released drivers that fully supported it. Since 2014, Nvidia has released drivers with optimizations for specific video games concurrent with their release, having released 150 drivers supporting 400 games by April 2022. Basic support for the DRM mode-setting interface, in the form of a new kernel module named nvidia-modeset.ko, has been available since version 358.09 beta. Support for Nvidia's display controller on the supported GPUs is centralized in nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, G-Sync, etc.) initiate from the various user-mode driver components and flow to nvidia-modeset.ko. In May 2022, Nvidia announced that it would release a partially open-source driver for the (GSP-enabled) Turing architecture and newer, in order to enhance the ability for it to be packaged as part of Linux distributions. At launch, Nvidia considered the driver to be alpha quality for consumer GPUs, and production-ready for datacenter GPUs. Currently, the userspace components of the driver (including OpenGL, Vulkan, and CUDA) remain proprietary.
In addition, the open-source components of the driver are only a wrapper (CPU-RM) for the GPU System Processor (GSP) firmware, a RISC-V binary blob that is now required for running the open-source driver. The GPU System Processor is a RISC-V coprocessor codenamed "Falcon" that is used to offload GPU initialization and management tasks. The driver itself is still split into the host CPU portion (CPU-RM) and the GSP portion (GSP-RM). The proprietary Windows 11 and Linux drivers also support enabling GSP, which can improve performance even in gaming. CUDA supports GSP since version 11.6. Linux kernel 6.7 added support for GSP in Nouveau. === Third-party free and open-source === Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, however there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source nouveau graphics device driver. Nvidia has publicly stated that it will not provide any support for such additional device drivers, although Nvidia has contributed code to the Nouveau driver. Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, as of January 2014, the nouveau driver lacked support for GPU and memory clock frequency adjustments, and for associated dynamic power management. Also, Nvidia's proprietary drivers consistently perform better than nouveau in various benchmarks. However, as of August 2014 and version 3.16 of the Linux kernel mainline, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented. === Licensing and privacy issues === The license has common terms against reverse engineering and copying, and it disclaims warranties and liability.
Starting in 2016 the GeForce license says Nvidia "SOFTWARE may access, collect non-personally identifiable information about, update, and configure Customer's system in order to properly optimize such system for use with the SOFTWARE." The privacy notice goes on to say, "We are not able to respond to "Do Not Track" signals set by a browser at this time. We also permit third party online advertising networks and social media companies to collect information... We may combine personal information that we collect about you with the browsing and tracking information collected by these [cookies and beacons] technologies." The software configures the user's system to optimize its use, and the license says, "NVIDIA will have no responsibility for any damage or loss to such system (including loss of data or access) arising from or relating to (a) any changes to the configuration, application settings, environment variables, registry, drivers, BIOS, or other attributes of the system (or any part of such system) initiated through the SOFTWARE". === GeForce Experience === GeForce Experience is a software suite developed by Nvidia that served as a companion application for PCs equipped with Nvidia graphics cards. Initially released in 2013, it was designed to enhance the gaming experience by providing performance optimization tools, driver management, and various capture and streaming features. One of its core functions was the ability to optimize game settings automatically based on the user's hardware configuration, helping to strike a balance between visual quality and performance. It also allowed users to manage driver updates seamlessly, particularly through the distribution of "Game Ready Drivers," which were released in sync with major game launches to ensure optimal performance from day one. GeForce Experience included Nvidia ShadowPlay, a popular feature that enabled gameplay recording and live streaming with minimal performance impact. 
It also featured Nvidia Ansel, a tool for capturing high-resolution, 360-degree, and HDR in-game screenshots, as well as Nvidia Freestyle, which allowed gamers to apply real-time visual filters. Laptop users benefited from features like Battery Boost, which helped conserve battery life while gaming by intelligently adjusting system performance. By August 2017, the software had been installed on over 90 million PCs, making it one of the most widely used applications among gamers. Despite its broad adoption, GeForce Experience faced ongoing criticism for its resource usage, mandatory login requirement, and occasional user experience issues. One major controversy stemmed from a critical security vulnerability that was patched on March 26, 2019; the vulnerability exposed users to remote code execution, denial of service, and privilege escalation attacks. Additionally, the software was known to force a system restart after installing new drivers, initiating a 60-second countdown that offered no option to cancel or postpone. On November 12, 2024, Nvidia officially retired GeForce Experience and launched its successor, the Nvidia App, with version 1.0. The new application was designed to modernize the user interface and streamline the experience, offering faster performance, better integration of features, and a more intuitive layout. It consolidated key tools like game optimization, driver updates, and hardware monitoring into a single platform, while also enhancing support for content creators through deeper integration with Nvidia Studio technologies. === Nvidia App === The Nvidia App is a program intended to replace both GeForce Experience and the Nvidia Control Panel; it can be downloaded from Nvidia's website.
In August 2024, it was in a beta version. On November 12, 2024, version 1.0 was released, marking its stable release. New features include an overhauled user interface, a new in-game overlay, support for ShadowPlay at 120 fps, as well as RTX HDR and RTX Dynamic Vibrance, which are AI-based in-game filters that enable HDR and increase color saturation in any DirectX 9 (and newer) or Vulkan game, respectively. The Nvidia App also features Auto Tuning, which adjusts the GPU's clock rate based on regular hardware scans to ensure optimal performance. According to Nvidia, this feature will not cause any damage to the GPU and retains its warranty; however, it might cause instability issues. The feature is similar to GeForce Experience's "Enable automatic tuning" option, released in 2021, with the difference being that that was a one-off overclocking feature which did not adjust the GPU's clock speed on a regular basis. In January 2025, Nvidia added Smooth Motion to the Nvidia App, a feature similar to Frame Generation which generates an extra frame between two natively rendered frames. Because the feature is driver-based, it also works in games that do not support DLSS's Frame Generation option. As of its release, the feature is only available on GeForce 50 series GPUs, though Nvidia stated they will add support for GeForce 40 series GPUs in the future as well. == References == == External links == GeForce product page on Nvidia's website GeForce powered games on Nvidia's website TechPowerUp GPU Specs Database
Wikipedia/Nvidia_GeForce
Sega is a video game developer, publisher, and hardware development company headquartered in Tokyo, Japan, with multiple offices around the world. The company's involvement in the arcade game industry began as a Japan-based distributor of coin-operated machines, including pinball games and jukeboxes. Sega imported second-hand machines that required frequent maintenance. This necessitated the construction of replacement guns, flippers, and other parts for the machines. According to former Sega director Akira Nagai, this is what led the company to develop its own games. Sega released Pong-Tron, its first video-based game, in 1973. The company prospered from the arcade game boom of the late 1970s, with revenues climbing to over US$100 million by 1979. Nagai has stated that Hang-On and Out Run helped to pull the arcade game market out of the 1983 downturn and created new genres of video games. Sega is the world's most prolific arcade game producer, having developed more than 500 games, 70 franchises, and 20 arcade system boards since 1981. It has been recognized by Guinness World Records for this achievement. The following list comprises the various arcade system boards developed and used by Sega in their arcade games. == Arcade system boards == == Additional arcade hardware == Sega has developed and released additional arcade games that use technology other than their dedicated arcade system boards. The first arcade game manufactured by Sega was Periscope, an electromechanical game. This was followed by Missile in 1969. Subsequent video-based games such as Pong-Tron (1973), Fonz (1976), and Monaco GP (1979) used discrete logic boards without a CPU microprocessor. Frogger (1981) used a system powered by two Z80 CPU microprocessors. Some titles, such as Zaxxon (1982), were developed outside Sega, a practice that was not uncommon at the time.
== See also == Sega R360 List of Sega pinball machines List of Sega video game consoles == References ==
Wikipedia/Sega_Model_3
Blender is a free and open-source 3D computer graphics software tool set that runs on Windows, macOS, BSD, Haiku, IRIX and Linux. It is used for creating animated films, visual effects, art, 3D-printed models, motion graphics, interactive 3D applications, and virtual reality. It is also used in creating video games. Blender was used to produce the Academy Award-winning film Flow (2024). == History == Blender was initially developed as an in-house application by the Dutch animation studio NeoGeo (no relation to the video game brand), and was officially launched on January 2, 1994. Version 1.00 was released in January 1995, with the primary author being the company co-owner and software developer Ton Roosendaal. The name Blender was inspired by a song by the Swiss electronic band Yello, from the album Baby, which NeoGeo used in its showreel. Some design choices and experiences for Blender were carried over from an earlier software application, called Traces, that Roosendaal developed for NeoGeo on the Commodore Amiga platform during the 1987–1991 period. On January 1, 1998, Blender was released publicly online as SGI freeware. NeoGeo was later dissolved, and its client contracts were taken over by another company. After NeoGeo's dissolution, Ton Roosendaal founded Not a Number Technologies (NaN, a reference to the computing term of the same name) in June 1998 to further develop Blender, initially distributing it as shareware until NaN went bankrupt in 2002. This also resulted in the discontinuation of Blender's development. In May 2002, Roosendaal started the non-profit Blender Foundation, with the first goal to find a way to continue developing and promoting Blender as a community-based open-source project. On July 18, 2002, Roosendaal started the "Free Blender" campaign, a crowdfunding precursor. The campaign aimed at open-sourcing Blender for a one-time payment of €100,000 (USD 100,670 at the time), with the money being collected from the community. 
On September 7, 2002, it was announced that they had collected enough funds and would release the Blender source code. Today, Blender is free and open-source software, largely developed by its community as well as 26 full-time employees and 12 freelancers employed by the Blender Institute. The Blender Foundation initially reserved the right to use dual licensing so that, in addition to GPL 2.0-or-later, Blender would have been available also under the "Blender License", which did not require disclosing source code but required payments to the Blender Foundation. However, this option was never exercised and was suspended indefinitely in 2005. Blender is solely available under "GNU GPLv2 or any later" and was not updated to the GPLv3, as "no evident benefits" were seen. The binary releases of Blender are under GNU GPLv3 or later because of the incorporated Apache libraries. In 2019, with the release of version 2.80, the integrated game engine for making and prototyping video games was removed; Blender's developers recommended that users migrate to more powerful open source game engines such as Godot instead. == Suzanne == In February 2002, the fate of the Blender software company, NaN, became evident as it faced imminent closure in March. Nevertheless, one more release was pushed out, Blender 2.25. As a sort of Easter egg and last personal tag, the artists and developers decided to add a 3D model of a chimpanzee head (called a "monkey" in the software). It was created by Willem-Paul van Overbruggen (SLiD3), who named it Suzanne after the orangutan in the Kevin Smith film Jay and Silent Bob Strike Back. Suzanne is Blender's alternative to more common test models such as the Utah Teapot and the Stanford Bunny. A low-polygon model with only 500 faces, Suzanne is included in Blender and often used as a quick and easy way to test materials, animations, rigs, textures, and lighting setups. It is included as a primitive, alongside other meshes such as cubes and planes. 
The largest Blender contest gives out an award called the Suzanne Award, underscoring the significance of this unique 3D model in the Blender community. == Features == === Modeling === Blender has support for a variety of geometric primitives, including polygon meshes, Bézier curves, NURBS surfaces, metaballs, icospheres, text, and an n-gon modeling system called B-mesh. There is also an advanced polygonal modelling system which can be accessed through an edit mode. It supports features such as extrusion, bevelling, and subdividing. ==== Modifiers ==== Modifiers apply non-destructive effects which can be applied upon rendering or exporting, such as subdivision surfaces. ==== Sculpting ==== Blender has multi-resolution digital sculpting, which includes dynamic topology, "baking", remeshing, re-symmetrization, and decimation. The latter is used to simplify models for exporting purposes (an example being game assets). ==== Geometry nodes ==== Blender has a node graph system for procedurally and non-destructively creating and manipulating geometry. It was first added in Blender 2.92, initially focusing on object scattering and instancing. It takes the form of a modifier, so it can be stacked with other modifiers. The system uses object attributes, which can be modified and overridden with string inputs. Attributes can include positions, normals and UV maps. All attributes can be viewed in an attribute spreadsheet editor. The Geometry Nodes utility also has the capability of creating primitive meshes. In Blender 3.0, support for creating and modifying curve objects was added to Geometry Nodes; in the same release, the Geometry Nodes workflow was completely redesigned with fields, in order to make the system more intuitive and work like shader nodes. === Simulation === Blender can be used to simulate smoke, rain, dust, cloth, fluids, hair, and rigid bodies.
==== Fluid simulation ==== The fluid simulator can be used for simulating liquids, like water being poured into a cup. It uses Lattice Boltzmann methods (LBM) to simulate fluids and allows for extensive adjustment of particle counts and resolution. The particle physics fluid simulation creates particles that follow the smoothed-particle hydrodynamics method. Blender has simulation tools for soft-body dynamics, including mesh collision detection, LBM fluid dynamics, smoke simulation, Bullet rigid-body dynamics, an ocean generator with waves, a particle system that includes support for particle-based hair, and real-time control during physics simulation and rendering. In Blender 2.82, a new fluid simulation system called Mantaflow was added, replacing the old FLIP system. In Blender 2.92, another fluid simulation system called APIC, which builds on Mantaflow, was added; it improves vortex preservation and calculation stability compared to the FLIP system. ==== Cloth Simulation ==== Cloth simulation is done by simulating vertices with a rigid-body simulation. If done on a 3D mesh, it will produce effects similar to the soft-body simulation. === Animation === Blender's keyframed animation capabilities include inverse kinematics, armatures, hooks, curve- and lattice-based deformations, shape keys, non-linear animation, constraints, and vertex weighting. In addition, its Grease Pencil tools allow for 2D animation within a full 3D pipeline. === Rendering === Blender includes three render engines since version 2.80: EEVEE, Workbench and Cycles. Cycles is a path tracing render engine. It supports rendering through both the CPU and the GPU. Cycles has supported the Open Shading Language since Blender 2.65. Cycles hybrid rendering is possible in version 2.92 with OptiX; tiles are calculated on the GPU in combination with the CPU. EEVEE is a new physically based real-time renderer.
While it is capable of driving Blender's real-time viewport for creating assets thanks to its speed, it can also work as a renderer for final frames. Workbench is a real-time render engine designed for fast rendering during modelling and animation preview. It is not intended for final rendering. Workbench supports assigning colors to objects for visual distinction. ==== Cycles ==== Cycles is a path-tracing render engine that is designed to be interactive and easy to use, while still supporting many features. It has been included with Blender since 2011, with the release of Blender 2.61. Cycles supports CPU acceleration using the AVX, AVX2 and AVX-512 extensions on modern hardware. ===== GPU rendering ===== Cycles supports GPU rendering, which is used to speed up rendering times. There are four GPU rendering modes: CUDA, which is the preferred method for older Nvidia graphics cards; OptiX, which utilizes the hardware ray-tracing capabilities of Nvidia's Turing and Ampere architectures; HIP, which supports rendering on AMD Radeon graphics cards; and oneAPI, for Intel and Intel Arc GPUs. The toolkit software associated with these rendering modes does not come with Blender and needs to be separately installed and configured as per the respective source instructions. Multiple GPUs are also supported (with the notable exception of the EEVEE render engine), which can be used to create a render farm to speed up rendering by processing frames or tiles in parallel; having multiple GPUs, however, does not increase the available memory, since each GPU can only access its own memory. Since version 2.90, this limitation can be overcome using Nvidia's NVLink, which allows linked cards to share memory. Apple's Metal API got an initial implementation in Blender 3.1 for Apple computers with M1 chips and AMD graphics cards. ===== Integrator ===== The integrator is the core rendering algorithm used for lighting computations. Cycles currently supports a path tracing integrator with direct light sampling.
It works well for a variety of lighting setups, but it is not as suitable for caustics and certain other complex lighting situations. Rays are traced from the camera into the scene, bouncing around until they find a light source (a lamp, an object material emitting light, or the world background), or until they are simply terminated based on the maximum number of bounces determined in the light path settings for the renderer. To find lamps and surfaces emitting light, both indirect light sampling (letting the ray follow the surface bidirectional scattering distribution function, or BSDF) and direct light sampling (picking a light source and tracing a ray towards it) are used. The default path tracing integrator is a "pure" path tracer. This integrator works by sending several light rays that act as photons from the camera out into the scene. These rays will eventually hit either a light source, an object, or the world background. If these rays hit an object, they will bounce based on the angle of impact, and continue bouncing until a light source has been reached or until the maximum number of bounces, as determined by the user, is exceeded, at which point the ray terminates and results in a black, unlit pixel. Multiple rays are calculated and averaged out for each pixel, a process known as "sampling". This sampling number is set by the user and greatly affects the final image. Lower sampling often results in more noise and has the potential to create "fireflies" (uncharacteristically bright pixels), while higher sampling greatly reduces noise, but also increases render times. The alternative is a branched path tracing integrator, which works mostly the same way. Branched path tracing splits the light rays at each intersection with an object according to different surface components, and takes all lights into account for shading instead of just one.
This added complexity makes computing each ray slower but reduces noise in the render, especially in scenes dominated by direct (one-bounce) lighting. This was removed in Blender 3.0 with the advent of Cycles X, as improvements to the pure path tracing integrator made the branched path tracing integrator redundant. ===== Open Shading Language ===== Blender users can create their own nodes using the Open Shading Language (OSL); this allows users to create materials that are entirely procedural, which allows them to be used on any object without stretching the texture, as opposed to image-based textures, which need to be made to fit a certain object. ===== Materials ===== Materials define the look of meshes, NURBS curves, and other geometric objects. They consist of three shaders to define the mesh's surface appearance, volume inside, and surface displacement. The surface shader defines the light interaction at the surface of the mesh. One or more bidirectional scattering distribution functions, or BSDFs, can specify if incoming light is reflected, refracted into the mesh, or absorbed. The alpha value is one measure of translucency. When the surface shader does not reflect or absorb light, it enters the volume (light transmission). If no volume shader is specified, it will pass straight through (or be refracted, see refractive index or IOR) to another side of the mesh. If one is defined, a volume shader describes the light interaction as it passes through the volume of the mesh. Light may be scattered, absorbed, or even emitted at any point in the volume. The shape of the surface may be altered by displacement shaders. In this way, textures can be used to make the mesh surface more detailed.
Depending on the settings, the displacement may be virtual (only modifying the surface normals to give the impression of displacement, a technique known as bump mapping), real, or a combination of real displacement with bump mapping. ==== EEVEE ==== EEVEE (or Eevee) is a real-time PBR renderer included in Blender from version 2.8. This render engine was given the nickname EEVEE, after the Pokémon species. The name was later made into the backronym "Extra Easy Virtual Environment Engine" or EEVEE. With the release of Blender 4.2 LTS in July 2024, EEVEE received an overhaul by its lead developer, Clément Foucault, called EEVEE Next. EEVEE Next boasts a variety of new features for Blender's real-time and rasterised renderer, including screen-space global illumination (SSGI), virtual shadowmapping, sunlight extraction from HDRIs, and a rewritten system for reflections and indirect lighting via light probe volumes and cubemaps. EEVEE Next also brings improved volumetric rendering, along with support for displacement shaders and an improved depth of field system similar to Cycles. Plans for future releases of EEVEE include support for hardware-accelerated ray-tracing and continued improvements to performance and shader compilation. ==== Workbench ==== Workbench uses the default 3D viewport drawing system for tasks such as modeling and texturing.
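The "virtual" displacement mode described above (bump mapping) perturbs the shading normal from a height function without moving any geometry. The following is a minimal illustrative sketch, not Blender's shader internals; the `height` callable and `strength` parameter are assumptions introduced for the example.

```python
def bump_normal(height, x, y, strength=1.0, eps=1e-3):
    """Derive a perturbed shading normal from a height function via
    central differences, giving the impression of surface detail
    without displacing any geometry."""
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
    # Unnormalized normal of the height field z = h(x, y), scaled by strength.
    n = (-strength * dhdx, -strength * dhdy, 1.0)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

# A flat height field leaves the normal pointing straight up...
flat = bump_normal(lambda x, y: 0.0, 0.5, 0.5)
# ...while a sloped one tilts it, creating the illusion of relief.
sloped = bump_normal(lambda x, y: 0.25 * x, 0.5, 0.5)
```

Real displacement would instead move the vertices themselves; combining both applies coarse real displacement plus fine normal perturbation.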
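The trade-off between samples per pixel and noise described for the path tracing integrator can be demonstrated with a toy Monte Carlo experiment. This is not Cycles code: the uniform random number below merely stands in for one light path's radiance contribution, and `pixel_noise` measures how much the averaged pixel value varies between repeated renders.

```python
import random
import statistics

def sample_pixel(num_samples, rng):
    """Average num_samples random light-path contributions for one pixel.
    Each 'path' here is a stand-in returning a random radiance in [0, 1)."""
    return sum(rng.random() for _ in range(num_samples)) / num_samples

def pixel_noise(num_samples, trials=500, seed=0):
    """Standard deviation of the pixel estimate across repeated renders."""
    rng = random.Random(seed)
    estimates = [sample_pixel(num_samples, rng) for _ in range(trials)]
    return statistics.pstdev(estimates)

# More samples per pixel -> lower variance (less noise), at higher render cost.
noise_low_samples = pixel_noise(4)
noise_high_samples = pixel_noise(64)
```

For independent samples the standard deviation shrinks roughly as one over the square root of the sample count, which is why quadrupling the samples only halves the noise.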
==== External renderers ==== Free and open-source: Mitsuba Renderer YafaRay (previously Yafray) LuxCoreRender (previously LuxRender) Appleseed Renderer POV-Ray NOX Renderer Armory3D – a free and open source game engine for Blender written in Haxe Radeon ProRender – Radeon ProRender for Blender Malt Render – a non-photorealistic renderer with GLSL shading capabilities Proprietary: Pixar RenderMan – Blender render addon for RenderMan Octane Render – OctaneRender plugin for Blender Indigo Renderer – Indigo for Blender V-Ray – V-Ray for Blender, V-Ray Standalone is needed for rendering Maxwell Render – B-Maxwell addon for Blender Thea Render – Thea for Blender Corona Renderer – Blender To Corona exporter, Corona Standalone is needed for rendering ==== Texturing and shading ==== Blender allows procedural and node-based textures, as well as texture painting, projective painting, vertex painting, weight painting and dynamic painting. === Post-production === Blender has a node-based compositor within the rendering pipeline, which is accelerated with OpenCL and, since version 4.0, supports GPU acceleration. It also includes a non-linear video editor called the Video Sequence Editor (VSE), with support for effects like Gaussian blur, color grading, fade and wipe transitions, and other video transformations. However, there is no built-in multi-core support for rendering video with the VSE. === Plugins/addons and scripts === Blender supports Python scripting for the creation of custom tools, prototyping, importing/exporting from other formats, and task automation. This allows for integration with several external render engines through plugins/addons. Blender itself can also be compiled and imported as a Python library for further automation and development. === Deprecated features === ==== Blender Game Engine ==== The Blender Game Engine was a built-in real-time graphics and logic engine with features such as collision detection, a dynamics engine, and programmable logic.
It also allowed the creation of stand-alone, real-time applications ranging from architectural visualization to video games. In April 2018, the engine was removed from the upcoming Blender 2.8 release series, due to updates and revisions to the engine lagging behind other game engines such as Unity and the open-source Godot. In the 2.8 announcements, the Blender team specifically mentioned the Godot engine as a suitable replacement for migrating Blender Game Engine users. ==== Blender Internal ==== Blender Internal, a biased rasterization engine and scanline renderer used in previous versions of Blender, was also removed for the 2.80 release in favor of the new "EEVEE" renderer, a realtime physically based renderer. === File format === Blender features an internal file system that can pack multiple scenes into a single ".blend" file. Most of Blender's ".blend" files are forward, backward, and cross-platform compatible with other versions of Blender, with the following exceptions: Loading animations stored in post-2.5 files in Blender pre-2.5. This is due to the reworked animation subsystem introduced in Blender 2.5 being inherently incompatible with older versions. Loading meshes stored in post-2.63 files in Blender pre-2.63. This is due to the introduction of BMesh, a more versatile mesh format. Blender 2.8 ".blend" files are no longer fully backward compatible, causing errors when opened in previous versions. Many 3.x ".blend" files are likewise not completely backwards-compatible, and may cause errors with previous versions. All scenes, objects, materials, textures, sounds, images, and post-production effects for an entire animation can be packaged and stored in a single ".blend" file. Data loaded from external sources, such as images and sounds, can also be stored externally and referenced through either an absolute or relative file path. Likewise, ".blend" files themselves can also be used as libraries of Blender assets. Interface configurations are retained in ".blend" files.
A wide variety of import/export scripts that extend Blender capabilities (accessing the object data via an internal API) make it possible to interoperate with other 3D tools. Blender organizes data as various kinds of "data blocks" (akin to glTF), such as Objects, Meshes, Lamps, Scenes, Materials, Images, and so on. An object in Blender consists of multiple data blocks – for example, what the user would describe as a polygon mesh consists of at least an Object and a Mesh data block, and usually also a Material and many more, linked together. This allows various data blocks to refer to each other. There may be, for example, multiple Objects that refer to the same Mesh, and subsequent editing of the shared mesh results in shape changes in all Objects using this Mesh. Objects, meshes, materials, textures, etc. can also be linked to other .blend files, which is what allows the use of .blend files as reusable resource libraries. == File formats supported == === Import === ==== 2D ==== .bmp, .dxf, .sgi, .hdr, .jpg, .jpeg, JPEG 2000, .png, .tif, .tiff, .tga, .exr, .cin, .dpx, .svg, .webp ==== 3D ==== .3ds, .abc, .blend, .bvh, .dae, .dxf, .fbx, .gltf, .glb, .lwo, .obj, .ply, .stl, .usd, .wrl, .x3d ==== Video ==== .avi, .mkv, .mov, .mp4, .ogv, .webm === Export === ==== 2D ==== .bmp, .dxf, .exr, .jpg, .png, .svg, .tif, .tga, .webp ==== 3D ==== .abc, .blend, .bvh, .dae, .dxf, .fbx, .gltf, .glb, .obj, .ply, .stl, .usd, .wrl, .x3d ==== Video ==== .avi, .mkv, .mp4, .ogv, .webm == User interface == === Commands === Most of the commands are accessible via hotkeys. There are also comprehensive graphical menus. Numeric buttons can be "dragged" to change their value directly without the need to aim at a particular widget, as well as being set using the keyboard. Both sliders and number buttons can be constrained to various step sizes with modifiers like the Ctrl and Shift keys.
Python expressions can also be typed directly into number entry fields, allowing mathematical expressions to specify values. === Modes === Blender includes many modes for interacting with objects, the two primary ones being Object Mode and Edit Mode, which are toggled with the Tab key. Object Mode is used to manipulate individual objects as a unit, while Edit Mode is used to manipulate the actual object data. For example, Object Mode can be used to move, scale, and rotate entire polygon meshes, and Edit Mode can be used to manipulate the individual vertices of a single mesh. There are also several other modes, such as Vertex Paint, Weight Paint, and Sculpt Mode. === Workspaces === The Blender GUI builds its tiled windowing system on top of one or multiple windows provided by the underlying platform. One platform window (often sized to fill the screen) is divided into sections and subsections that can be of any type of Blender's views or window types. The user can define multiple layouts of such Blender windows, called screens, and switch quickly between them by selecting from a menu or with keyboard shortcuts. Each window type's own GUI elements can be controlled with the same tools that manipulate the 3D view. For example, one can zoom in and out of GUI buttons using controls similar to those used to zoom in and out of the 3D viewport. The GUI viewport and screen layout are fully user-customizable. It is possible to set up the interface for specific tasks such as video editing, UV mapping, or texturing by hiding features not used for the task. == Development == Since the opening of the source code, Blender has experienced significant refactoring of the initial codebase and major additions to its feature set.
Improvements include an animation system refresh; a stack-based modifier system; an updated particle system (which can also be used to simulate hair and fur); fluid dynamics; soft-body dynamics; GLSL shader support in the game engine; advanced UV unwrapping; a fully recoded render pipeline, allowing separate render passes and "render to texture"; node-based material editing and compositing; and projection painting. Part of this development was fostered by Google's Summer of Code program, in which the Blender Foundation has participated since 2005. Historically, Blender used Phabricator to manage its development, but after the 2021 announcement that Phabricator would be discontinued, the Blender Institute began work on migrating to another system in early 2022. After extensive debate over which software to choose, it was decided to migrate to Gitea. The migration from Phabricator to Gitea is currently a work in progress. === Blender 2.8 === Official planning for the next major revision of Blender after the 2.7 series began in the latter half of 2015, with potential targets including a more configurable UI (dubbed "Blender 101"), support for physically based rendering (PBR) (dubbed EEVEE for "Extra Easy Virtual Environment Engine") to bring improved realtime 3D graphics to the viewport, allowing the use of C++11 and C99 in the codebase, moving to a newer version of OpenGL and dropping support for versions before 3.2, and a possible overhaul of the particle and constraint systems. The Blender Internal renderer was removed from 2.8. Code Quest was a project started in April 2018, set in Amsterdam at the Blender Institute. The goal of the project was to get a large development team working in one place, in order to speed up the development of Blender 2.8. By June 29, 2018, the Code Quest project ended, and on July 2, the alpha version was completed. Beta testing commenced on November 29, 2018, and was anticipated to take until July 2019.
Blender 2.80 was released on July 30, 2019. === Cycles X === On April 23, 2021, the Blender Foundation announced the Cycles X project, where they improved the Cycles architecture for future development. Key changes included a new kernel, removal of default tiled rendering (replaced by progressive refine), removal of branched path tracing, and the removal of OpenCL support. Volumetric rendering was also replaced with better algorithms. Cycles X had only been accessible in an experimental branch until September 21, 2021, when it was merged into the Blender 3.0 alpha. == Support == Blender is extensively documented on its website. There are also a number of online communities dedicated to support, such as the Blender Stack Exchange. == Modified versions == Due to Blender's open-source nature, other programs have tried to take advantage of its success by repackaging and selling cosmetically modified versions of it. Examples include IllusionMage, 3DMofun, 3DMagix, and Fluid Designer, the latter being recognized as Blender-based. == Use in industry == Blender started as an in-house tool for NeoGeo, a Dutch commercial animation company. The first large professional project that used Blender was Spider-Man 2, where it was primarily used to create animatics and pre-visualizations for the storyboard department. The French-language film Friday or Another Day (Vendredi ou un autre jour) was the first 35 mm feature film to use Blender for all the special effects, made on Linux workstations. It won a prize at the Locarno International Film Festival. The special effects were by Digital Graphics of Belgium. Tomm Moore's The Secret of Kells, which was partly produced in Blender by the Belgian studio Digital Graphics, has been nominated for an Oscar in the category "Best Animated Feature Film". Blender has also been used for shows on the History Channel, alongside many other professional 3D graphics programs. 
Plumíferos, a commercial animated feature film created entirely in Blender, premiered in February 2010 in Argentina. Its main characters are anthropomorphic talking animals. Special effects for episode 6 of Red Dwarf season X, screened in 2012, were created using Blender, as confirmed by Ben Simonds of Gecko Animation. Blender was used for previsualization in Captain America: The Winter Soldier. Some promotional artwork for Super Smash Bros. for Nintendo 3DS and Wii U was partially created using Blender. The alternative hip-hop group Death Grips has used Blender to produce music videos. A screenshot from the program is briefly visible in the music video for Inanimate Sensation. The visual effects for the TV series The Man in the High Castle were done in Blender, with some of the particle simulations relegated to Houdini. NASA used Blender to develop an interactive web application, Experience Curiosity, to celebrate the 3rd anniversary of the Curiosity rover landing on Mars. This app makes it possible to operate the rover, control its cameras and the robotic arm, and reproduce some of the prominent events of the Mars Science Laboratory mission. The application was presented at the beginning of the WebGL section at SIGGRAPH 2015. Blender is also used by NASA for many publicly available 3D models. Many 3D models on NASA's 3D resources page are in a native .blend format. The 2015 animated short film Alike was developed on the Linux operating system using Blender as the primary tool for modeling, animation, rendering, compositing and editing. Blender was used for both CGI and compositing for the movie Hardcore Henry. The visual effects in the feature film Sabogal were done in Blender. VFX supervisor Bill Westenhofer used Blender to create the character "Murloc" in the 2016 film Warcraft. Director David F. Sandberg used Blender for multiple shots in Lights Out and Annabelle: Creation. Blender was used for parts of the credit sequences in Wonder Woman.
Blender was used for the animation in the film Cinderella the Cat. VFX artist Ian Hubert used Blender for the science fiction film Prospect. The 2018 film Next Gen was fully created in Blender by Tangent Animation. A team of developers worked on improving Blender for internal use, with the plan of eventually adding those improvements to the official Blender build. The 2019 film I Lost My Body was largely animated using Blender's Grease Pencil tool by drawing over CGI animation, allowing for a real sense of camera movement that is harder to achieve in purely traditionally drawn animation. Ubisoft Animation Studio announced it would use Blender to replace its internal content creation software starting in 2020. Khara and its child company Project Studio Q are trying to replace their main tool, 3ds Max, with Blender. They started "field verification" of Blender during their ongoing production of Evangelion: 3.0+1.0. They also signed up as Corporate Silver and Bronze members of the Development Fund. The 2020 film Wolfwalkers was partially created using Blender. The 2021 Netflix production Maya and the Three was created using Blender. In 2021, SPA Studios started hiring Blender artists and, as of 2022, contributes to Blender development. Warner Bros. Animation started hiring Blender artists in 2022. VFX company Makuta VFX used Blender for the VFX for the Indian blockbuster RRR. Blender was used in several cases for the 2023 film Spider-Man: Across the Spider-Verse. Sony Pictures Imageworks, the primary studio behind the film's animation, used Blender's Grease Pencil for adding line-work and 2D FX animation alongside 3D models. At 14 years old, Canadian animator Preston Mutanga used Blender to create the Lego-style sequence in the film. Mutanga was recruited after his fan-made Lego-style recreation of the film's teaser caught the attention of the filmmakers. The 2024 Latvian film Flow (Straume) was made entirely in Blender using the EEVEE render engine.
It received two nominations at the 97th Academy Awards, winning for Best Animated Feature. == Use in education and academia == Due to its free and open source nature, Blender has become the primary software for introductory 3D art, animation, visualization, and 3D printing courses at institutions including the University of Michigan, Ann Arbor, where it has been made widely available in campus laboratories. Blender has also been used to generate synthetic images for computer vision and AI training, in applications ranging from crop monitoring to additive manufacturing. == Open projects == Since 2005, every one to two years the Blender Foundation has announced a new creative project to help drive innovation in Blender. In response to the success of the first open movie project, Elephants Dream, in 2006, the Blender Foundation founded the Blender Institute to be in charge of additional projects, such as the films Big Buck Bunny, Sintel, and Tears of Steel, and Yo Frankie! (Project Apricot), an open game utilizing the Crystal Space game engine that reused some of the assets created for Big Buck Bunny. == Online services == === Blender Foundation === ==== Blender Studio ==== The Blender Studio platform, launched in March 2014 as Blender Cloud, is a subscription-based cloud computing platform where members can access Blender add-ons and courses and keep track of the production of Blender Studio's open movies. It is currently operated by the Blender Studio, formerly a part of the Blender Institute. It was launched to promote and raise funds for Project: Gooseberry, and is intended to replace the selling of DVDs by the Blender Foundation with a subscription-based model for file hosting, asset sharing and collaboration. Blender add-ons included in Blender Studio are CloudRig, Blender Kitsu, Contact Sheet Add-on, Blender Purge and Shot Builder. It was rebranded from Blender Cloud to Blender Studio on 22 October 2021.
==== The Blender Development Fund ==== The Blender Development Fund is a subscription through which individuals and companies can fund Blender's development. Corporate members include Epic Games, Nvidia, Microsoft, Apple, Unity, Intel, Decentraland, Amazon Web Services, Meta, AMD, Adobe and many more. Individual users can also provide one-time donations to Blender via payment card, PayPal, wire transfer, and some cryptocurrencies. ==== Blender ID ==== The Blender ID is a unified login for Blender software and service users, providing a login for Blender Studio, the Blender Store, the Blender Conference, Blender Network, the Blender Development Fund, and the Blender Foundation Certified Trainer Program. ==== Blender Open Data ==== Blender Open Data is a platform to collect, display, and query benchmark data produced by the Blender community with the related Blender Benchmark software. ==== Blender Network ==== The Blender Network was an online platform to enable professionals to conduct business with Blender and provide online support. It was terminated on 31 March 2021. ==== Blender Store ==== A store to buy Blender merchandise, such as shirts, socks, beanies, etc. ==== Blender Extensions ==== Blender Extensions acts as the main repository for extensions, introduced in Blender 4.2, which include both add-ons and themes. Users can then install and update extensions directly in Blender itself. == Release history == The following table lists notable developments during Blender's release history: green indicates the current version, yellow indicates currently supported versions, and red indicates versions that are no longer supported (though many later versions can still be used on modern systems). As of 2021, official releases of Blender for Microsoft Windows, macOS and Linux, as well as a port for FreeBSD, are available in 64-bit versions. Blender is available for Windows 8.1 and above, and macOS 10.13 and above.
Blender 2.80 was the last release that had a version for 32-bit systems (x86). Blender 2.76b was the last supported release for Windows XP, and version 2.63 was the last supported release for PowerPC. Blender 2.83 LTS and 2.92 were the last supported versions for Windows 7. In 2013, Blender was released on Android as a demo, but has not been updated since. == See also == CAD library MB-Lab, a Blender add-on for the parametric 3D modeling of photorealistic humanoid characters MakeHuman List of free and open-source software packages List of video editing software List of 3D printing software == References == == Further reading == == External links == Official website
In vector computer graphics, CAD systems, and geographic information systems, a geometric primitive (or prim) is the simplest (i.e. 'atomic' or irreducible) geometric shape that the system can handle (draw, store). Sometimes the subroutines that draw the corresponding objects are called "geometric primitives" as well. The most "primitive" primitives are points and straight line segments, which were all that early vector graphics systems had. In constructive solid geometry, primitives are simple geometric shapes such as a cube, cylinder, sphere, cone, pyramid, or torus. Modern 2D computer graphics systems may operate with primitives which are curves (segments of straight lines, circles and more complicated curves), as well as shapes (boxes, arbitrary polygons, circles). A common set of two-dimensional primitives includes lines, points, and polygons, although some people prefer to consider triangles primitives, because every polygon can be constructed from triangles. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves (such as Bézier curves, circles, etc.) may be considered primitives; in other cases, curves are complex forms created from many straight, primitive shapes. == Common primitives == The set of geometric primitives is based on the dimension of the region being represented: Point (0-dimensional), a single location with no height, width, or depth. Line or curve (1-dimensional), having length but no width, although a linear feature may curve through a higher-dimensional space. Planar surface or curved surface (2-dimensional), having length and width. Volumetric region or solid (3-dimensional), having length, width, and depth. In GIS, the terrain surface is often spoken of colloquially as "2 1/2 dimensional," because only the upper surface needs to be represented.
Thus, elevation can be conceptualized as a scalar field property or function of two-dimensional space, affording it a number of data modeling efficiencies over true 3-dimensional objects. A shape of any of these dimensions greater than zero consists of an infinite number of distinct points. Because digital systems are finite, only a sample set of the points in a shape can be stored. Thus, vector data structures typically represent geometric primitives using a strategic sample, organized in structures that facilitate the software interpolating the remainder of the shape at the time of analysis or display, using the algorithms of Computational geometry. A Point is a single coordinate in a Cartesian coordinate system. Some data models allow for Multipoint features consisting of several disconnected points. A Polygonal chain or Polyline is an ordered list of points (termed vertices in this context). The software is expected to interpolate the intervening shape of the line between adjacent points in the list as a parametric curve, most commonly a straight line, but other types of curves are frequently available, including circular arcs, cubic splines, and Bézier curves. Some of these curves require additional points to be defined that are not on the line itself, but are used for parametric control. A Polygon is a polyline that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior. Some data models allow for a single feature to consist of multiple polylines, which could collectively connect to form a single closed boundary, could represent a set of disjoint regions (e.g., the state of Hawaii), or could represent a region with holes (e.g., a lake with an island). 
A Parametric shape is a standardized two-dimensional or three-dimensional shape defined by a minimal set of parameters, such as an ellipse defined by two points at its foci, or three points at its center, vertex, and co-vertex. A Polyhedron or Polygon mesh is a set of polygon faces in three-dimensional space that are connected at their edges to completely enclose a volumetric region. In some applications, closure may not be required or may be implied, such as modeling terrain. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior. A triangle mesh is a subtype of polyhedron in which all faces must be triangles, the only polygon that will always be planar, including the Triangulated irregular network (TIN) commonly used in GIS. A parametric mesh represents a three-dimensional surface by a connected set of parametric functions, similar to a spline or Bézier curve in two dimensions. The most common structure is the Non-uniform rational B-spline (NURBS), supported by most CAD and animation software. == Application in GIS == A wide variety of vector data structures and formats have been developed during the history of Geographic information systems, but they share a fundamental basis of storing a core set of geometric primitives to represent the location and extent of geographic phenomena. Locations of points are almost always measured within a standard Earth-based coordinate system, whether the spherical Geographic coordinate system (latitude/longitude), or a planar coordinate system, such as the Universal Transverse Mercator. They also share the need to store a set of attributes of each geographic feature alongside its shape; traditionally, this has been accomplished using the data models, data formats, and even software of relational databases. 
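The parametric curve interpolation described for polylines, such as Bézier curves, can be illustrated with de Casteljau's algorithm, the standard way to evaluate a Bézier curve by repeated linear interpolation of its control points. The control points below are arbitrary example data.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t (0..1) by repeatedly
    linearly interpolating between adjacent control points until
    a single point remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# A cubic Bézier: the curve starts at the first control point,
# ends at the last, and is pulled toward the middle two.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
start = de_casteljau(ctrl, 0.0)   # (0.0, 0.0)
end = de_casteljau(ctrl, 1.0)     # (4.0, 0.0)
mid = de_casteljau(ctrl, 0.5)     # (2.0, 1.5)
```

This is why such curves need "additional points to be defined that are not on the line itself": the two interior control points shape the curve without lying on it.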
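The software's job of using a polygon boundary to partition 2-dimensional space into an interior and exterior is commonly implemented with the even-odd ray-casting rule, sketched here for a polygon with straight edges only (curved segments and holes would need more machinery).

```python
def point_in_polygon(point, polygon):
    """Even-odd rule: cast a horizontal ray from the point and count
    how many polygon edges it crosses; an odd count means inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap to close the boundary
        crosses = (y1 > y) != (y2 > y)
        # x-coordinate where the edge meets the ray's height y.
        if crosses and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Each toggle of `inside` corresponds to the ray passing through the boundary once, which is exactly the partition into interior and exterior described above.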
Early vector formats, such as POLYVRT, the ARC/INFO Coverage, and the Esri shapefile support a basic set of geometric primitives: points, polylines, and polygons, only in two dimensional space and the latter two with only straight line interpolation. TIN data structures for representing terrain surfaces as triangle meshes were also added. Since the mid 1990s, new formats have been developed that extend the range of available primitives, generally standardized by the Open Geospatial Consortium's Simple Features specification. Common geometric primitive extensions include: three-dimensional coordinates for points, lines, and polygons; a fourth "dimension" to represent a measured attribute or time; curved segments in lines and polygons; text annotation as a form of geometry; and polygon meshes for three-dimensional objects. Frequently, a representation of the shape of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road imply a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines). == In 3D modelling == In CAD software or 3D modelling, the interface may present the user with the ability to create primitives which may be further modified by edits. 
For example, in the practice of box modelling the user will start with a cuboid, then use extrusion and other operations to create the model. In this use the primitive is just a convenient starting point, rather than the fundamental unit of modelling. A 3D package may also include a list of extended primitives which are more complex shapes that come with the package. For example, a teapot is listed as a primitive in 3D Studio Max. == In graphics hardware == Various graphics accelerators exist with hardware acceleration for rendering specific primitives such as lines or triangles, frequently with texture mapping and shaders. Modern 3D accelerators typically accept sequences of triangles as triangle strips. == See also == 2D geometric model Sculpted prim Simplex == References == == External links == Peachpit.com Info On 3D Primitives
Wikipedia/Primitives_(computer_graphics)
In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based representation of a surface of an object (inanimate or living) in three dimensions via specialized software by manipulating edges, vertices, and polygons in a simulated 3D space. Three-dimensional (3D) models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created manually, algorithmically (procedural modeling), or by scanning. Their surfaces may be further defined with texture mapping. == Outline == The product is called a 3D model, while someone who works with 3D models may be referred to as a 3D artist or a 3D modeler. A 3D model can also be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. 3D models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. The 3D model can be physically created using 3D printing devices that form 2D layers of the model with three-dimensional material, one layer at a time. Without a 3D model, a 3D print is not possible. 3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications. == History == 3D models are now widely used anywhere in 3D graphics and CAD but their history predates the widespread use of 3D graphics on personal computers. In the past, many computer games used pre-rendered images of 3D models as sprites before computers could render them in real-time. The designer can then see the model from various directions and views, which can help the designer judge whether the object is created as intended compared to their original vision. 
Seeing the design this way can help the designer or company figure out changes or improvements needed to the product. === Representation === Almost all 3D models can be divided into two categories: Solid – These models define the volume of the object they represent (like a rock). Solid models are mostly used for engineering and medical simulations, and are usually built with constructive solid geometry. Shell or boundary – These models represent the surface, i.e., the boundary of the object, not its volume (like an infinitesimally thin eggshell). Almost all visual models used in games and film are shell models. Solid and shell modeling can create functionally identical objects. Differences between them are mostly variations in the way they are created and edited and conventions of use in various fields and differences in types of approximations between the model and reality. Shell models must be manifold (having no holes or cracks in the shell) to be meaningful as a real object. In a shell model of a cube, the bottom and top surfaces of the cube must have a uniform thickness with no holes or cracks in the first and last layers printed. Polygonal meshes (and to a lesser extent, subdivision surfaces) are by far the most common representation. Level sets are a useful representation for deforming surfaces that undergo many topological changes, such as fluids. The process of transforming representations of objects, such as the middle point coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of e.g., squares) are popular as they have proven to be easy to rasterize (the surface described by each triangle is planar, so the projection is always convex). 
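Tessellation as described above can be sketched for the sphere example (a center point plus a radius) by generating a latitude/longitude grid of vertices and joining them into triangles. This is a minimal, hypothetical sketch, not the algorithm used by any particular renderer; the parameter names are invented for the example.

```python
import math

def tessellate_sphere(center, radius, stacks=8, slices=16):
    """Approximate a sphere by a triangle mesh (a simple UV tessellation)."""
    cx, cy, cz = center
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks            # latitude, 0..pi
        for j in range(slices):
            theta = 2 * math.pi * j / slices  # longitude, 0..2pi
            verts.append((cx + radius * math.sin(phi) * math.cos(theta),
                          cy + radius * math.sin(phi) * math.sin(theta),
                          cz + radius * math.cos(phi)))
    tris = []
    for i in range(stacks):
        for j in range(slices):
            # Indices of the four corners of one grid cell, split into
            # two triangles (wrapping around in longitude).
            a = i * slices + j
            b = i * slices + (j + 1) % slices
            c = (i + 1) * slices + j
            d = (i + 1) * slices + (j + 1) % slices
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, tris

verts, tris = tessellate_sphere((0, 0, 0), 1.0)
# Every generated vertex lies exactly on the sphere, so the triangle mesh
# converges to the abstract surface as stacks and slices grow.
assert all(abs(math.dist(v, (0, 0, 0)) - 1.0) < 1e-9 for v in verts)
```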
Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene. == Process == There are three popular ways to represent a model: Polygonal modeling – Points in 3D space, called vertices, are connected by line segments to form a polygon mesh. The vast majority of 3D models today are built as textured polygonal models because they are flexible and because computers can render them so quickly. However, polygons are planar and can only approximate curved surfaces using many polygons. Curve modeling – Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight for a point pulls the curve closer to that point. Curve types include nonuniform rational B-spline (NURBS), splines, patches, and geometric primitives. Digital sculpting – There are three types of digital sculpting: Displacement, which is the most widely used among applications at this moment, uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of an image map that stores the adjusted locations. Volumetric, loosely based on voxels, has similar capabilities as displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation, which is similar to voxel, divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for artistic exploration as the model has new topology created over it once the models form and possibly details have been sculpted. The new mesh usually has the original high-resolution mesh information transferred into displacement data or normal map data if it is for a game engine. 
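Displacement sculpting as described above can be sketched as sampling an image map and moving each vertex along its normal. The function below is a simplified, hypothetical illustration (nearest-texel sampling, no bilinear filtering), with all names invented for the example.

```python
def displace(vertices, normals, heightmap, uvs, scale=1.0):
    """Displacement sketch: move each vertex along its normal by a height
    sampled from an image map (here a plain list of rows of floats)."""
    h, w = len(heightmap), len(heightmap[0])
    out = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        # Nearest-texel lookup of the stored displacement at (u, v).
        d = scale * heightmap[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
        out.append((x + d * nx, y + d * ny, z + d * nz))
    return out

# A flat patch pushed upward by a 2x2 height map:
flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
up = [(0, 0, 1)] * 4
uvs = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(displace(flat, up, [[0.0, 0.5], [0.5, 1.0]], uvs))
# -> [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (0.0, 1.0, 0.5), (1.0, 1.0, 1.0)]
```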
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of modeling techniques, including constructive solid geometry, implicit surfaces, and subdivision surfaces. Modeling can be performed by means of a dedicated program (e.g., 3D modeling software like Adobe Substance, Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, SketchUp, Rhinoceros 3D, and others) or an application component (Shaper, Lofter in 3ds Max) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases, modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D). 3D models can also be created using the technique of Photogrammetry with dedicated programs such as RealityCapture, Metashape and 3DF Zephyr. Cleanup and further processing can be performed with applications such as MeshLab, the GigaMesh Software Framework, netfabb or MeshMixer. Photogrammetry creates models using algorithms to interpret the shape and texture of real-world objects and environments based on photographs taken from many angles of the subject. Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems, and are a mass of 3D coordinates which have either points, polygons, texture splats or sprites assigned to them. == 3D modeling software == There are a variety of 3D modeling programs that can be used in the industries of engineering, interior design, film and others. Each 3D modeling software has specific capabilities and can be utilized to fulfill demands for the industry. === G-code === Many programs include export options to form a g-code, applicable to additive or subtractive manufacturing machinery. G-code (computer numerical control) works with automated technology to form a real-world rendition of 3D models. 
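A minimal sketch of the kind of G-code such exporters emit, generated here in Python: a single perimeter traced at one layer height. This is illustrative only; real slicer output also manages extrusion amounts (E values), temperatures, retraction, and machine-specific setup.

```python
def gcode_for_layer(points, z, feed=1200):
    """Emit a minimal G-code fragment tracing a closed polygon at height z.
    G0 is a rapid (non-printing) move, G1 a controlled move at feed rate F."""
    lines = [f"G1 Z{z:.2f} F{feed}"]              # move to the layer height
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")       # rapid move to the start
    for x, y in points[1:] + [points[0]]:         # trace and close the loop
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    return "\n".join(lines)

print(gcode_for_layer([(0, 0), (10, 0), (10, 10), (0, 10)], z=0.2))
```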
This code is a specific set of instructions to carry out steps of a product's manufacturing. === Human models === The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web site. The human virtual models were created by the company My Virtual Mode Inc. and enabled users to create a model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual human models (Poser being one example). === 3D clothing === The development of cloth simulation software such as Marvelous Designer, CLO3D and Optitex, has enabled artists and fashion designers to model dynamic 3D clothing on the computer. Dynamic 3D clothing is used for virtual fashion catalogs, as well as for dressing 3D characters for video games, 3D animation movies, for digital doubles in movies, as a creation tool for digital fashion brands, as well as for making clothes for avatars in virtual worlds such as SecondLife. == Comparison with 2D methods == 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Advantages of wireframe 3D modeling over exclusively 2D methods include: Flexibility, ability to change angles or animate images with quicker rendering of the changes; Ease of rendering, automatic calculation and rendering photorealistic effects rather than mentally visualizing or estimating; Accurate photorealism, less chance of human error in misplacing, overdoing, or forgetting to include a visual effect. Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters included in the 3D modeling software. 
For the best of both worlds, some artists use a combination of 3D modeling followed by editing the 2D computer-rendered images from the 3D model. == 3D model market == A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) exists—either for individual models or large collections. Several online marketplaces for 3D content allow individual artists to sell content that they have created, including TurboSquid, MyMiniFactory, Sketchfab, CGTrader, and Cults. Often, the artists' goal is to get additional value out of assets they have previously created for projects. By doing so, artists can earn more money out of their old content, and companies can save money by buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically split the sale between themselves and the artist that created the asset; artists receive 40% to 95% of the sale, depending on the marketplace. In most cases, the artist retains ownership of the 3D model while the customer only buys the right to use and present the model. Some artists sell their products directly in their own stores, offering their products at a lower price by not using intermediaries. The architecture, engineering and construction (AEC) industry is the biggest market for 3D modeling, with an estimated value of $12.13 billion by 2028. This is due to the increasing adoption of 3D modeling in the AEC industry, which helps to improve design accuracy, reduce errors and omissions and facilitate collaboration among project stakeholders. Over the last several years numerous marketplaces specializing in 3D rendering and printing models have emerged. Some of the 3D printing marketplaces are a combination of model-sharing sites, with or without a built-in e-commerce capability. Some of those platforms also offer 3D printing services on demand, software for model rendering and dynamic viewing of items. 
== 3D printing == 3D printing, or three-dimensional printing, is a form of additive manufacturing technology in which a three-dimensional object is created from successive layers of material. Objects can be created without the need for complex expensive molds or assembly with multiple parts. 3D printing allows ideas to be prototyped and tested without having to go through a production process. 3D models can be purchased from online markets and printed by individuals or companies using commercially available 3D printers, enabling the home-production of objects such as spare parts and even medical equipment. == Uses == 3D modeling is used in many industries. The medical industry uses detailed models of organs created from multiple two-dimensional image slices from an MRI or CT scan. Other scientific fields can use 3D models to visualize and communicate information such as models of chemical compounds. The movie industry uses 3D models for computer-generated characters and objects in animated and real-life motion pictures. Similarly, the video game industry uses 3D models as assets for computer and video games. The source of the geometry for the shape of an object can be a designer, industrial engineer, or artist using a 3D CAD system; an existing object that has been reverse engineered or copied using a 3D shape digitizer or scanner; or mathematical data based on a numerical description or calculation of the object. The architecture industry uses 3D models to demonstrate proposed buildings and landscapes in lieu of traditional, physical architectural models. Additionally, the use of Level of Detail (LOD) in 3D models is becoming increasingly important in architecture, engineering, and construction. Archeologists create 3D models of cultural heritage items for research and visualization. For example, the International Institute of MetaNumismatics (INIMEN) studies the applications of 3D modeling for the digitization and preservation of numismatic artifacts. 
In recent decades, the earth science community has started to construct 3D geological models as a standard practice. 3D models are also used in constructing digital representations of mechanical parts before they are manufactured. Using CAD- and CAM-related software, an engineer can test the functionality of assemblies of parts, then use the same data to create toolpaths for CNC machining or 3D printing. 3D modeling is used in industrial design, wherein products are 3D modeled before representing them to the clients. In media and event industries, 3D modeling is used in stage and set design. The OWL 2 translation of the vocabulary of X3D can be used to provide semantic descriptions for 3D models, which is suitable for indexing and retrieval of 3D models by features such as geometry, dimensions, material, texture, diffuse reflection, transmission spectra, transparency, reflectivity, opalescence, glazes, varnishes and enamels (as opposed to unstructured textual descriptions or 2.5D virtual museums and exhibitions using Google Street View on Google Arts & Culture, for example). The RDF representation of 3D models can be used in reasoning, which enables intelligent 3D applications which, for example, can automatically compare two 3D models by volume. == See also == == References == == External links == Media related to 3D modeling at Wikimedia Commons
Wikipedia/3D_model
Image resolution is the level of detail of an image. The term applies to digital images, film images, and other types of images. "Higher resolution" means more image detail. Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense. Instead of single lines, line pairs are often used, composed of a dark line and an adjacent light line; for example, a resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter (5 LP/mm). Photographic lenses are most often quoted in line pairs per millimeter. == Types == The resolution of digital cameras can be described in many different ways. === Pixel count === The term resolution is often considered equivalent to pixel count in digital imaging, though international standards in the digital camera field specify it should instead be called "Number of Total Pixels" in relation to image sensors, and as "Number of Recorded Pixels" for what is fully captured. Hence, CIPA DCG-001 calls for notation such as "Number of Recorded Pixels 1000 × 1500". According to the same standards, the "Number of Effective Pixels" that an image sensor or digital camera has is the count of pixel sensors that contribute to the final image (including pixels not in said image but that nevertheless support the image filtering process), as opposed to the number of total pixels, which includes unused or light-shielded pixels around the edges. An image of N pixels height by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. 
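The lines versus line-pairs convention described above amounts to a factor of two, which a one-line helper makes explicit (the function name is hypothetical, shown in Python):

```python
def lines_to_line_pairs(lines_per_mm):
    """N alternating dark and light lines per millimeter equal
    N/2 line pairs per millimeter."""
    return lines_per_mm / 2

print(lines_to_line_pairs(10))  # -> 5.0, i.e. 10 lines/mm is 5 LP/mm
```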
But when the pixel counts are referred to as "resolution", the convention is to describe the pixel resolution with the set of two positive integer numbers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example as 7680 × 6876. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. Other conventions include describing pixels per length unit or pixels per area unit, such as pixels per inch or per square inch. None of these pixel resolutions are true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution. Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction from pixels would be preferred, but for illustration of pixels, the sharp squares make the point better). An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048×1536 = 3,145,728 pixels or 3.1 megapixels. One could refer to it as 2048 by 1536 or a 3.1-megapixel image. The image would be a very low quality image (72ppi) if printed at about 28.5 inches wide, but a very good quality (300ppi) image if printed at about 7 inches wide. The number of photodiodes in a color digital camera image sensor is often a multiple of the number of pixels in the image it produces, because information from an array of color image sensors is used to reconstruct the color of a single pixel. The image has to be interpolated or demosaiced to produce all three colors for each output pixel. === Spatial resolution === The terms blurriness and sharpness are used for digital images but other descriptors are used to reference the hardware capturing and displaying the images. 
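The megapixel and print-size arithmetic described above can be sketched as follows (a minimal illustration; the function names are my own):

```python
def megapixels(width_px, height_px):
    """Pixel count expressed in megapixels: columns * rows / 1,000,000."""
    return width_px * height_px / 1_000_000

def print_width_inches(width_px, ppi):
    """Physical print width for a given pixel width and pixel density."""
    return width_px / ppi

w, h = 2048, 1536
print(f"{megapixels(w, h):.1f} MP")            # 3.1 MP
print(f"{print_width_inches(w, 72):.1f} in")   # about 28.4 in at 72 ppi (low quality)
print(f"{print_width_inches(w, 300):.1f} in")  # about 6.8 in at 300 ppi (good quality)
```

This reproduces the figures quoted in the text: 2048 × 1536 = 3,145,728 pixels, i.e. about 3.1 megapixels.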
Spatial resolution in radiology is the ability of the imaging modality to differentiate two objects. Low spatial resolution techniques will be unable to differentiate between two objects that are relatively close together. The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image, not just the pixel resolution in pixels per inch (ppi). For practical purposes the clarity of the image is decided by its spatial resolution, not the number of pixels in an image. In effect, spatial resolution is the number of independent pixel values per unit length. The spatial resolution of consumer displays ranges from 50 to 800 pixel lines per inch. With scanners, optical resolution is sometimes used to distinguish spatial resolution from the number of pixels per inch. In remote sensing, spatial resolution is typically limited by diffraction, as well as by aberrations, imperfect focus, and atmospheric distortion. The ground sample distance (GSD) of an image, the pixel spacing on the Earth's surface, is typically considerably smaller than the resolvable spot size. In astronomy, one often measures spatial resolution in data points per arcsecond subtended at the point of observation, because the physical distance between objects in the image depends on their distance away and this varies widely with the object of interest. On the other hand, in electron microscopy, line or fringe resolution is the minimum separation detectable between adjacent parallel lines (e.g. between planes of atoms), whereas point resolution is instead the minimum separation between adjacent points that can be both detected and interpreted, e.g. as adjacent columns of atoms. The former often helps one detect periodicity in specimens, whereas the latter (although more difficult to achieve) is key to visualizing how individual atoms interact.
In Stereoscopic 3D images, spatial resolution could be defined as the spatial information recorded or captured by two viewpoints of a stereo camera (left and right camera). === Spectral resolution === Pixel encoding limits the information stored in a digital image, and the term color profile is used for digital images but other descriptors are used to reference the hardware capturing and displaying the images. Spectral resolution is the ability to resolve spectral features and bands into their separate components. Color images distinguish light of different spectra. Multispectral images can resolve even finer differences of spectrum or wavelength by measuring and storing more than the traditional 3 of common RGB color images. === Temporal resolution === Temporal resolution (TR) is the precision of a measurement with respect to time. Movie cameras and high-speed cameras can resolve events at different points in time. The time resolution used for movies is usually 24 to 48 frames per second (frames/s), whereas high-speed cameras may resolve 50 to 300 frames/s, or even more. The Heisenberg uncertainty principle describes the fundamental limit on the maximum spatial resolution of information about a particle's coordinates imposed by the measurement or existence of information regarding its momentum to any degree of precision. This fundamental limitation can, in turn, be a factor in the maximum imaging resolution at subatomic scales, as can be encountered using scanning electron microscopes. === Radiometric resolution === Radiometric resolution determines how finely a system can represent or distinguish differences of intensity, and is usually expressed as a number of levels or a number of bits, for example 8 bits or 256 levels that is typical of computer image files. The higher the radiometric resolution, the better subtle differences of intensity or reflectivity can be represented, at least in theory. 
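The bits-to-levels relationship for radiometric resolution is just a power of two; a minimal sketch:

```python
def radiometric_levels(bit_depth):
    """Number of distinguishable intensity levels for a given bit depth,
    e.g. 8 bits -> 256 levels (the typical case for computer image files)."""
    return 2 ** bit_depth

for bits in (1, 8, 12, 16):
    print(bits, "bits ->", radiometric_levels(bits), "levels")
```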
In practice, the effective radiometric resolution is typically limited by the noise level, rather than by the number of bits of representation. == Resolution in various media == This is a list of traditional, analogue horizontal resolutions for various media. The list only includes popular formats, not rare formats, and all values are approximate, because the actual quality can vary machine-to-machine or tape-to-tape. For ease-of-comparison, all values are for the NTSC system. (For PAL systems, replace 480 with 576.) Analog formats usually had less chroma resolution.

Analogue and early digital

Many cameras and displays offset the color components relative to each other or mix up temporal with spatial resolution:

Narrowscreen 4:3 computer display resolutions
320 × 200: MCGA
320 × 240: QVGA
640 × 350: EGA
640 × 480: VGA
800 × 600: Super VGA
1024 × 768: XGA / EVGA
1600 × 1200: UXGA

Analog
320 × 200: CRT monitors
333 × 480: VHS, Video8, Umatic
350 × 480: Betamax
420 × 480: Super Betamax, Betacam
460 × 480: Betacam SP, Umatic SP, NTSC (over-the-air TV)
580 × 480: Super VHS, Hi8, LaserDisc
700 × 480: Enhanced Definition Betamax, Analog broadcast limit (NTSC)
768 × 576: Analog broadcast limit (PAL, SECAM)

Digital
352 × 240: Video CD
500 × 480: Digital8
720 × 480: D-VHS, DVD, miniDV, Digital Betacam (NTSC)
720 × 480: Widescreen DVD (anamorphic) (NTSC)
854 × 480: EDTV (Enhanced Definition Television)
720 × 576: D-VHS, DVD, miniDV, Digital8, Digital Betacam (PAL/SECAM)
720 × 576 or 1024 × 576: Widescreen DVD (anamorphic) (PAL/SECAM)
1280 × 720: D-VHS, HD DVD, Blu-ray, HDV (miniDV)
1440 × 1080: HDV (miniDV)
1920 × 1080: HDV (miniDV), AVCHD, HD DVD, Blu-ray, HDCAM SR
1998 × 1080: 2K Flat (1.85:1)
2048 × 1080: 2K Digital Cinema
2560 × 1440: QHD (Quad HD), i.e. 4× the pixels of 1280 × 720 HD
3840 × 2160: 4K UHDTV, Ultra HD Blu-ray
4096 × 2160: 4K Digital Cinema
7680 × 4320: 8K UHDTV
15360 × 8640: 16K Digital Cinema
30720 × 17280: 32K

Sequences from newer films are scanned at 2,000, 4,000, or even 8,000 columns, called 2K, 4K, and 8K, for quality visual-effects editing on computers. IMAX, including IMAX HD and OMNIMAX: approximately 10,000 × 7,000 (7,000 lines) resolution. That is about 70 MP, more than the highest-resolution single-sensor digital cinema camera (as of January 2012).

Film

35 mm film is scanned for release on DVD at 1080 or 2000 lines as of 2005. The actual resolution of 35 mm original camera negatives is the subject of much debate. Measured resolutions of negative film have ranged from 25 to 200 LP/mm, which equates to a range of 325 lines for 2-perf, to (theoretically) over 2300 lines for 4-perf shot on T-Max 100. According to a Senior Vice President of IMAX, Kodak states that 35 mm film has the equivalent of 6K horizontal resolution.

Print

Modern digital camera resolutions

Digital medium format camera – single, not combined one large digital sensor – 80 MP (starting from 2011, current as of 2013) – 10320 × 7752 or 10380 × 7816 (81.1 MP).
Mobile phone – Nokia 808 PureView – 41 MP (7728 × 5368), Nokia Lumia 1020 – also 41 MP (7712 × 5360)
Digital still camera – Canon EOS 5DS – 51 MP (8688 × 5792)

== See also ==
Display resolution
Dots per inch
Multi-exposure HDR capture
High-resolution picture transmission
Image scaling
Image scanner
Kell factor, which typically limits the number of visible lines to 0.7× of the device resolution
Pixel density

== References ==
Wikipedia/High-resolution
In computer graphics, the rendering equation is an integral equation that expresses the amount of light leaving a point on a surface as the sum of emitted light and reflected light. It was independently introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The equation is important in the theory of physically based rendering, describing the relationships between the bidirectional reflectance distribution function (BRDF) and the radiometric quantities used in rendering. The rendering equation is defined at every point on every surface in the scene being rendered, including points hidden from the camera. The incoming light quantities on the right side of the equation usually come from the left (outgoing) side at other points in the scene (ray casting can be used to find these other points). The radiosity rendering method solves a discrete approximation of this system of equations. In distributed ray tracing, the integral on the right side of the equation may be evaluated using Monte Carlo integration by randomly sampling possible incoming light directions. Path tracing improves and simplifies this method. The rendering equation can be extended to handle effects such as fluorescence (in which some absorbed energy is re-emitted at different wavelengths) and can support transparent and translucent materials by using a bidirectional scattering distribution function (BSDF) in place of a BRDF. The theory of path tracing sometimes uses a path integral (integral over possible paths from a light source to a point) instead of the integral over possible incoming directions. 
== Equation form == The rendering equation may be written in the form

L_{\text{o}}(\mathbf{x}, \omega_{\text{o}}, \lambda, t) = L_{\text{e}}(\mathbf{x}, \omega_{\text{o}}, \lambda, t) + L_{\text{r}}(\mathbf{x}, \omega_{\text{o}}, \lambda, t)

L_{\text{r}}(\mathbf{x}, \omega_{\text{o}}, \lambda, t) = \int_{\Omega} f_{\text{r}}(\mathbf{x}, \omega_{\text{i}}, \omega_{\text{o}}, \lambda, t) \, L_{\text{i}}(\mathbf{x}, \omega_{\text{i}}, \lambda, t) \, (\omega_{\text{i}} \cdot \mathbf{n}) \, \mathrm{d}\omega_{\text{i}}

where
L_o(x, ω_o, λ, t) is the total spectral radiance of wavelength λ directed outward along direction ω_o at time t, from a particular position x
x is the location in space
ω_o is the direction of the outgoing light
λ is a particular wavelength of light
t is time
L_e(x, ω_o, λ, t) is emitted spectral radiance
L_r(x, ω_o, λ, t) is reflected spectral radiance
∫_Ω … dω_i is an integral over Ω
Ω is the unit hemisphere centered around n containing all possible values for ω_i where ω_i · n > 0
f_r(x, ω_i, ω_o, λ, t) is the bidirectional reflectance distribution function, the proportion of light reflected from ω_i to ω_o at position x, time t, and wavelength λ
ω_i is the negative direction of the incoming light
L_i(x, ω_i, λ, t) is the spectral radiance of wavelength λ coming inward toward x from direction ω_i at time t
n is the surface normal at x
ω_i · n is the weakening factor of outward irradiance due to incident angle, as the light flux is smeared across a surface whose area is larger than the projected area perpendicular to the ray; this is often written as cos θ_i

Two noteworthy features are: its linearity (it is composed only of multiplications and additions) and its spatial homogeneity (it is the same in all positions and orientations). These mean a wide range of factorings and rearrangements of the equation are possible. It is a Fredholm integral equation of the second kind, similar to those that arise in quantum field theory. Note this equation's spectral and time dependence: L_o may be sampled at or integrated over sections of the visible spectrum to obtain, for example, a trichromatic color sample. A pixel value for a single frame in an animation may be obtained by fixing t; motion blur can be produced by averaging L_o over some given time interval (by integrating over the time interval and dividing by the length of the interval).
Note that a solution to the rendering equation is the function L_o. The function L_i is related to L_o via a ray-tracing operation: the incoming radiance from some direction at one point is the outgoing radiance at some other point in the opposite direction. == Applications == Solving the rendering equation for any given scene is the primary challenge in realistic rendering. One approach to solving the equation is based on finite element methods, leading to the radiosity algorithm. Another approach using Monte Carlo methods has led to many different algorithms including path tracing, photon mapping, and Metropolis light transport, among others. == Limitations == Although the equation is very general, it does not capture every aspect of light reflection. Some missing aspects include the following: Transmission, which occurs when light is transmitted through the surface, such as when it hits a glass object or a water surface, Subsurface scattering, where the spatial locations for incoming and departing light are different.
Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque — however, it is not necessary to account for this if transmission is included in the equation, since that will effectively include also light scattered under the surface, Polarization, where different light polarizations will sometimes have different reflection distributions, for example when light bounces at a water surface, Phosphorescence, which occurs when light or other electromagnetic radiation is absorbed at one moment and emitted at a later moment, usually with a longer wavelength (unless the absorbed electromagnetic radiation is very intense), Interference, where the wave properties of light are exhibited, Fluorescence, where the absorbed and emitted light have different wavelengths, Non-linear effects, where very intense light can increase the energy level of an electron with more energy than that of a single photon (this can occur if the electron is hit by two photons at the same time), and emission of light with higher frequency than the frequency of the light that hit the surface suddenly becomes possible, and Doppler effect, where light that bounces off an object moving at a very high speed will get its wavelength changed: if the light bounces off an object that is moving towards it, the light will be blueshifted and the photons will be packed more closely so the photon flux will be increased; if it bounces off an object moving away from it, it will be redshifted and the photon flux will be decreased. This effect becomes apparent only at speeds comparable to the speed of light, which is not the case for most rendering applications. 
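To make the Monte Carlo approach mentioned under Applications concrete, the reflected-radiance integral can be estimated by randomly sampling incoming directions over the hemisphere. The sketch below is my own simplification, not code from any renderer: it assumes a Lambertian BRDF (f_r = albedo/π) and constant incoming radiance, a case where the analytic answer is known to be albedo × L_i, so the estimate can be checked.

```python
import math
import random

def estimate_reflected_radiance(albedo, incoming_radiance,
                                n_samples=200_000, seed=1):
    """Monte Carlo estimate of the reflected term L_r of the rendering
    equation at one point, under two simplifying assumptions:
      * Lambertian BRDF: f_r = albedo / pi (direction-independent)
      * constant incoming radiance L_i from every hemisphere direction
    Directions are sampled uniformly over the hemisphere (pdf = 1/(2*pi));
    for a uniform hemisphere sample, cos(theta) = omega_i . n is uniform
    on [0, 1].  The analytic result is albedo * incoming_radiance."""
    rng = random.Random(seed)
    f_r = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n_samples):
        cos_theta = rng.random()  # omega_i . n for a uniform hemisphere sample
        total += f_r * incoming_radiance * cos_theta / pdf
    return total / n_samples

print(estimate_reflected_radiance(0.8, 1.0))  # close to 0.8
```

A path tracer applies this kind of estimator recursively: the incoming radiance L_i at one point is itself the outgoing radiance L_o of whatever surface the sampled ray hits.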
For scenes that are either not composed of simple surfaces in a vacuum or for which the travel time for light is an important factor, researchers have generalized the rendering equation to produce a volume rendering equation suitable for volume rendering and a transient rendering equation for use with data from a time-of-flight camera. == References == == External links == Lecture notes from Stanford University course CS 348B, Computer Graphics: Image Synthesis Techniques
Wikipedia/Rendering_equation
A raster graphics editor (also called bitmap graphics editor) is a computer program that allows users to create and edit images interactively on the computer screen and save them in one of many raster graphics file formats (also known as bitmap images) such as JPEG, PNG, and GIF. == Comparison to vector graphic editors == Vector graphics editors are often contrasted with raster graphics editors, yet their capabilities complement each other. The technical difference between vector and raster editors stems from the difference between vector and raster images. Vector graphics are created mathematically, using geometric formulas. Each element is created and manipulated numerically; essentially using Cartesian coordinates for the placement of key points, and then a mathematical algorithm to connect the dots and define the colors. Raster images include digital photos. A raster image is made up of rows and columns of dots, called pixels, and is generally more photo-realistic. This is the standard form for digital cameras; whether it be a .raw file or .jpg file, the concept is the same. The image is represented pixel by pixel, like a microscopic jigsaw puzzle. Vector editors tend to be better suited for graphic design, page layout, typography, logos, sharp-edged artistic illustrations, e.g., cartoons, clip art, complex geometric patterns, technical illustrations, diagramming and flowcharting. Advanced raster editors, like GIMP and Adobe Photoshop, use vector methods (mathematics) for general layout and elements such as text, but are equipped to deal with raster images down to the pixel and often have special capabilities in doing so, such as brightness/contrast, and even adding "lighting" to a raster image or photograph.
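Because a raster image is just rows and columns of pixel values, per-pixel adjustments such as brightness/contrast are straightforward to express. The sketch below is illustrative only (the transform and function name are my own, not any particular editor's implementation), operating on a grayscale image stored as nested lists of 0-255 values:

```python
def adjust_brightness_contrast(pixels, brightness=0, contrast=1.0):
    """Apply a simple per-pixel brightness/contrast transform to a
    grayscale raster image (list of rows of 0-255 values):
    out = clamp(contrast * (in - 128) + 128 + brightness).
    Contrast pivots around mid-gray (128); brightness is a flat offset."""
    def clamp(v):
        return max(0, min(255, int(round(v))))
    return [[clamp(contrast * (p - 128) + 128 + brightness) for p in row]
            for row in pixels]

img = [[0, 64], [128, 255]]
print(adjust_brightness_contrast(img, brightness=10, contrast=1.2))
```

Vector elements, by contrast, would be re-rendered from their geometric description rather than transformed pixel by pixel.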
== Popular editors ==
Adobe Photoshop: industry standard for photography, design, and digital art
GIMP: free, open-source alternative with similar features to Photoshop
Corel Painter: focuses on digital painting with traditional art simulation
Affinity Photo: professional-grade tools with a one-time purchase model
Procreate (iOS): popular app for digital painting on iPad
Krita: popular free and open-source digital painting software for Windows and other platforms

== Common features ==
Select a region for editing
Draw lines with simulated brushes of different color, size, shape and pressure
Fill a region with a single color, gradient of colors, or a texture
Select a color using different color models, e.g., RGB, HSV, or by using a color dropper
Edit and convert between various color models
Add typed letters in various font styles
Remove imperfections from photo images
Composite editing using layers
Apply filters for effects including sharpening and blurring
Convert between various image file formats

== See also ==
Comparison of raster graphics editors
Vector graphics editor
Texture mapping
Text editor
3D modeling

== References == == External links == Media related to Raster graphics software at Wikimedia Commons
Wikipedia/Graphics_application
Industrial Light & Magic (ILM) is an American motion picture visual effects, computer animation and stereo conversion digital studio founded by George Lucas on May 26, 1975. It is a division of the film production company Lucasfilm, which Lucas founded, and was created when he began production on the original Star Wars, now the fourth episode of the Skywalker Saga. ILM originated in Van Nuys, California, then later moved to San Rafael in 1978, and since 2005 it has been based at the Letterman Digital Arts Center in the Presidio of San Francisco. In 2012, The Walt Disney Company acquired ILM as part of its purchase of Lucasfilm. As of 2025, Industrial Light & Magic has won 15 Academy Awards for Best Visual Effects. == History == Lucas wanted his 1977 film Star Wars to include visual effects that had never been seen on film before. After discovering that the in-house effects department at 20th Century Fox was no longer operational, Lucas approached Douglas Trumbull, best known for the effects on 2001: A Space Odyssey (1968) and Silent Running (1972). Trumbull declined as he was already committed to working on Steven Spielberg's film Close Encounters of the Third Kind (1977), but suggested his assistant John Dykstra to Lucas. Dykstra brought together a small team of college students, artists, and engineers and set them up in a warehouse in Van Nuys, California. After seeing that the location was zoned as light industrial on the map, Lucas named the group Industrial Light and Magic, which became the Special Visual Effects department on Star Wars. Alongside Dykstra, other leading members of the original ILM team were Ken Ralston, Richard Edlund, Dennis Muren, Robert Blalack, Joe Johnston, Phil Tippett, Steve Gawley, Lorne Peterson, and Paul Huston. In late 1978, when in pre-production for The Empire Strikes Back, Lucas reformed most of the team into Industrial Light & Magic in Marin County, California.
From here on, the company expanded and has since gone on to produce special effects for over three hundred films, including the entire Star Wars saga, the Indiana Jones series, and the Jurassic Park series. After the success of the first Star Wars movie, Lucas became interested in using computer graphics on the sequel. He contacted Triple-I, known for their early computer effects in movies like Westworld (1973), Futureworld (1976), Tron (1982), and The Last Starfighter; Triple-I ended up making a computer-generated test of five X-wing fighters flying in formation. He found it to be too expensive and returned to handmade models. Nevertheless, the test had shown him it was possible, and he decided he would create his own computer graphics department instead. As a result, they started investing in Apple and SGI computers. One of Lucas' employees was given the task of finding the right people to hire. His search would lead him to NYIT, where he found Edwin Catmull and his colleagues. Catmull and others accepted Lucas' job offer, and a new computer division at Lucasfilm, named The Graphics Group, was created in 1979; it technically belonged to a division separate from ILM, and Catmull was the first of the NYIT employees to join the company. Lucas' list for them was a digital film editing system, a digital sound editing system, a laser film printer, and further exploration of computer graphics. John Lasseter, who was hired a few years later, worked on computer animation as part of ILM's contribution to Young Sherlock Holmes. The Graphics Group was later sold to Steve Jobs, renamed Pixar Animation Studios, and created the first CGI-animated feature, Toy Story. In 2000, ILM created the OpenEXR format for high-dynamic-range imaging. ILM operated from an inconspicuous property in San Rafael, California until 2005.
The company was known to locals as The Kerner Company, a name that did not draw any attention, allowing the company to operate in secret, thus preventing the compromise of sensitive information on its productions to the media or fans. In 2005, when Lucas decided to move locations to the Presidio of San Francisco and focus on digital effects, a management-led team bought the five physical and practical effects divisions and formed a new company that included the George Lucas Theater, retained the "Kerner" name as Kerner Technologies, Inc. and provided physical effects for major motion pictures, often working with ILM, until its Chapter 7 bankruptcy in 2011. In 2005, ILM extended its operations to Lucasfilm Singapore, which also includes the Singapore arm of Lucasfilm Animation. In 2006, ILM invented IMoCap (Image Based Motion Capture Technology). By 2007, ILM was one of the largest visual effects vendors in the motion picture industry and had one of the largest render farms (named Death Star). In 2011, it was announced the company was considering a project-based facility in Vancouver. ILM first opened a temporary facility in Vancouver before relocating to a new 30,000-square-foot studio on Water Street in the Gastown district in 2014. In October 2012, Disney bought ILM's parent company, Lucasfilm, acquiring ILM, Skywalker Sound, and LucasArts in the process. Disney stated that it had no immediate plans to change ILM's operations, but began to lay off employees by April of the next year. Following the restructuring of LucasArts in April 2013, ILM was left overstaffed and the faculty was reduced to serve only ILM's visual effects department. ILM opened a London studio headquartered in the city's Soho district on October 15, 2014. On November 7, 2018, ILM opened a new division targeted at television series called ILM TV. It will be based in ILM's new 47,000-square-foot London studio with support from the company's locations in San Francisco, Vancouver and Singapore. 
In July 2019, ILM announced the opening of a new facility in Sydney, Australia. In the same year, ILM introduced StageCraft. Also known as "The Volume", it uses high-definition LED video walls to generate virtual sceneries and was first used in The Mandalorian. Following Disney's acquisition of 21st Century Fox, Fox VFX Lab was folded into ILM, including the Technoprops division. In October 2022, ILM opened a new studio in Mumbai. In May 2023, ILMxLAB was rebranded as ILM Immersive. In August 2023, Lucasfilm announced it would close the ILM studio in Singapore due to economic factors affecting the industry and the 2023 Hollywood labor disputes. The closure affected 340 Singapore-based jobs. Employees continued working until the end of the year. Disney confirmed that it would be helping employees to either find work with local companies with similar skills requirements or relocate to ILM's other studios in London, Vancouver, Sydney and Mumbai. An ILM Singapore employee confirmed that the closure of the Singaporean studio was linked to the strike.

== Milestones ==
1975: ILM used VistaVision for Star Wars: Episode IV - A New Hope
1980: ILM's first use of go motion was to animate the Tauntaun creatures and AT-ATs of Star Wars: Episode V - The Empire Strikes Back
1982: ILM's first in-house completely computer-generated sequence was the "Genesis sequence" in Star Trek II: The Wrath of Khan. (Former computer graphics in Star Wars - Episode IV: A New Hope were done outside of ILM.)
1985: ILM's first completely computer-generated character, the "stained glass man", featured in Young Sherlock Holmes
1988: ILM did their first morphing sequence in Willow
1989: The first digital compositing of a full-screen live-action image was done by ILM during the final sequence in Indiana Jones and the Last Crusade
1989: ILM created their first computer-generated 3-D character to show emotion, the pseudopod creature in The Abyss
1991: ILM created their first dimensional matte painting – where a traditional matte painting was mapped onto 3-D geometry, allowing for camera parallax – in Hook
1991: ILM created their first computer-generated main character, the T-1000 in Terminator 2: Judgment Day
1992: ILM generated the texture of human skin for the first time in Death Becomes Her
1993: The first time digital technology was used to create complete and detailed living creatures, the dinosaurs in Jurassic Park, earned ILM its thirteenth Oscar
1994: The first extensive use of digital manipulation of historical and stock footage was done to integrate characters in Forrest Gump
1995: ILM created their first fully synthetic speaking computer-generated character, with a distinct personality and emotion, to take a leading role in Casper
1995: ILM created their first computer-generated photo-realistic hair and fur (used for the digital lion and monkeys) in Jumanji
1996: ILM's first completely computer-generated main character, Draco, was featured in Dragonheart
1999: ILM's first computer-generated character to have a full human anatomy, Imhotep, was featured in The Mummy
1999: ILM's first fully computer-generated character in a live-action film using motion-capture, Jar Jar Binks, was featured in Star Wars: Episode I - The Phantom Menace
2000: ILM created the OpenEXR imaging format
2006: ILM developed the iMocap system, which uses computer vision techniques to track live-action performers on set.
It was used in the creation of Davy Jones and ship's crew in the film Pirates of the Caribbean: Dead Man's Chest.
2011: The first animated feature produced by ILM, Rango, was released
2019: ILM used real-time rendering (with Unreal Engine) and digital LED displays as a virtual set (known as StageCraft or The Volume) for the first time in The Mandalorian
2025: Rob Bredow unveiled Star Wars test footage using a text-to-video model to generate fictional creatures. This was ILM's first implementation of generative artificial intelligence.

== Notable employees and clients ==
Photoshop was first used at Industrial Light & Magic as an image-processing program. Photoshop was created by ILM Visual Effects Supervisor John Knoll and his brother Thomas as a summer project. It was used on The Abyss. The Knoll brothers sold the program to Adobe in 1989. Thomas Knoll continues to work on Photoshop at Adobe and is featured in the billing on the Photoshop splash screen. John Knoll continues to be ILM's top visual effects supervisor, and was one of the executive producers and writers of Rogue One: A Star Wars Story. In addition to their work for George Lucas, ILM also collaborates with Steven Spielberg on many films that he directs and produces. Dennis Muren has acted as Computer Animation Supervisor on many of these films. For Jurassic Park in 1993, ILM used the program Viewpaint, which allowed the visual effects artists to paint color and texture directly onto the surface of the computer models. Former ILM CG Animator Steve "Spaz" Williams said that it took nearly a year for the shots that involved computer-generated dinosaurs to be completed. The film is noted for its groundbreaking use of computer-generated imagery, and is regarded as a landmark for visual effects.
The company also works on more subtle special effects—such as widening streets, digitally adding more extras to a shot, and inserting the film's actors into preexisting footage—in films such as Forrest Gump in 1994. Adam Savage, Grant Imahara and Tory Belleci of MythBusters fame have all worked at ILM. ILM is also famous for their commercial work. Their clients include Energizer and Oldsmobile. They also animated Yoda for a series of 2012 commercials for Vodafone, which were broadcast in the UK. Actor Masi Oka worked on several major ILM productions as a programmer, including Revenge of the Sith, before joining the cast of the NBC show Heroes as Hiro Nakamura. American film director David Fincher worked at ILM for four years in the early 1980s. Film director Joe Johnston was a visual effects artist and an art director. Film director Mark A.Z. Dippé was a visual effects animator who directed Spawn, released in 1997. Sound editor and film producer James "Jim" Nelson served as an associate producer of the original Star Wars and helped build Industrial Light & Magic alongside George Lucas, overseeing the company's administration and management. == Live-action films == === 1970s–1980s === === 1990s === === 2000s === === 2010s === === 2020s === === Upcoming === == Animated films == == Television == === 1980s === === 1990s === === 2010s === === 2020s === === Upcoming === === Television films and specials === == Live concerts == == Commercials == == See also == == Notes == == References == == External links == Official website (with detailed information in PDF format) Small entry at Lucasfilm's site
Wikipedia/Industrial_Light_&_Magic
Motion graphics (sometimes mograph) are pieces of animation or digital footage that create the illusion of motion or rotation, and are usually combined with audio for use in multimedia projects. Motion graphics are usually displayed via electronic media technology, but may also be displayed via manually powered technology (e.g. thaumatrope, phenakistoscope, stroboscope, zoetrope, praxinoscope, flip book). The term distinguishes static graphics from those with a transforming appearance over time, without over-specifying the form. While any form of experimental or abstract animation can be called motion graphics, the term typically more explicitly refers to the commercial application of animation and effects to video, film, TV, and interactive applications. == History of the term == Since there is no universally accepted definition of motion graphics, the official beginning of the art form is disputed. There have been presentations that could be classified as motion graphics as early as the 19th century. Michael Betancourt wrote the first in-depth historical survey of the field, arguing for its foundations in visual music and the historical abstract films of the 1920s by Walter Ruttmann, Hans Richter, Viking Eggeling and Oskar Fischinger. The history of motion graphics is closely related to the history of computer graphics, as the new developments of computer-generated graphics led to wider use of motion design not based on optical film animation. The term motion graphics originated with digital video editing in computing, perhaps to keep pace with newer technology. Graphics for television were originally referred to as Broadcast Design. === 1887-1941 === Walter Ruttmann was a German cinematographer and film director who worked mainly in experimental film. His films were experiments in new forms of film expression and featured shapes of different colors flowing back and forth and in and out of the lens.
He began his film career in the early 1920s with the abstract films Lichtspiel: Opus I (1921), the first publicly screened abstract film, and Opus II (1923). The animations were painted with oil on glass plates, so the wet paint could be wiped away and modified easily. === 1917-1995 === John Whitney was one of the first users of the term "motion graphics" and founded a company called Motion Graphics Inc. in 1960. One of his most famous works was the animated title sequence for Alfred Hitchcock’s film “Vertigo” (1958), created in collaboration with Saul Bass, which featured swirling graphics growing from small to large. === 1920-1996 === Saul Bass was a major pioneer in the development of feature film title sequences. His work included title sequences for popular films such as The Man with the Golden Arm (1955), Vertigo (1958), Anatomy of a Murder (1959), North by Northwest (1959), Psycho (1960), and Advise & Consent (1962). His designs were simple, but effectively communicated the mood of the film. === 1933-2003 === Stan Brakhage was one of the most important figures in 20th-century experimental film. He explored a variety of formats, creating a large, diverse body of work. His influence can be seen in the credits of the film Seven (1995), designed by Kyle Cooper, with their scratched emulsion, rapid cutaways, and bursts of light in his style. == Computer-generated motion graphics == Computer-generated animations "are more controllable than other, more physically based processes, like constructing miniatures for effects shots, or hiring extras for crowd scenes, because it allows the creation for images that would not be feasible using any other technology." Before computers were widely available, motion graphics were costly and time-consuming, limiting their use to high-budget filmmaking and television production. Computers began to be used as early as the late 1960s as supercomputers were capable of rendering crude graphics.
John Whitney and Charles Csuri can be considered early pioneers of computer aided animation. In the late 1980s to mid-1990s, expensive proprietary graphics systems such as those from British-based Quantel were quite commonplace in many television stations. Quantel workstations such as the Hal, Henry, Harry, Mirage, and Paintbox were the broadcast graphics standard of the time. Many other real-time graphics systems were used such as Ampex ADO, Abekas A51 and Grass Valley Group Kaleidoscope for live digital video effects. Early proprietary 3D computer systems were also developed specifically for broadcast design such as the Bosch FGS-4000 which was used in the music video for Dire Straits' Money for Nothing. The advent of more powerful desktop computers running Photoshop in the mid-90s drastically lowered the costs for producing digital graphics. With the reduced cost of producing motion graphics on a computer, the discipline has seen more widespread use. With the availability of desktop programs such as Adobe After Effects, Adobe Premiere Pro and Apple Motion, motion graphics have become increasingly accessible. Modern character generators (CG) from Vizrt and Ross Video incorporate motion graphics. Motion graphics continued to evolve as an art form with the incorporation of sweeping camera paths and 3D elements, enabled by tools such as Maxon's Cinema 4D, its plugins such as MoGraph, and Adobe After Effects. Despite their relative complexity, Autodesk's Maya and 3D Studio Max are also widely used for the animation and design of motion graphics; Maya offers a node-based particle system generator similar to Cinema 4D's Thinking Particles plugin. There are also open-source packages that are gaining features and users for motion graphics workflows; Blender, for example, integrates several of the functions of its commercial counterparts.
Many motion graphics animators learn several 3D graphics packages for use according to each program's strengths. Although many trends in motion graphics tend to be based on a specific software's capabilities, the software is only a tool the broadcast designer uses while bringing the vision to life. Borrowing heavily from techniques such as collage and pastiche, motion graphics have begun to integrate many traditional animation techniques as well, including stop-motion animation, frame-by-frame animation, or a combination of both. == Motion design and digital compositing software packages == Motion design applications include Adobe After Effects, Blackmagic Fusion, Nuke, Apple Motion, Max/MSP, various VJ programs, Moho, Adobe Animate, Natron. 3D programs used in motion graphics include Adobe Substance, Maxon Cinema 4D and Blender. Motion graphics plug-ins include Video Copilot's products, Red Giant Software and The Foundry Visionmongers. == Methods of animation == Elements of a motion graphics project can be animated by various means, depending on the capabilities of the software. These elements may be in the form of art, text, photos, and video clips, to name a few. The most popular form of animation is keyframing, in which properties of an object can be specified at certain points in time by setting a series of keyframes so that the properties of the object can be automatically altered (or tweened) in the frames between keyframes. Another method involves a behavior system such as is found in Apple Motion that controls these changes by simulating natural forces without requiring the more rigid but precise keyframing method. Yet another method involves the use of formulas or scripts, such as the expressions function in Adobe After Effects or the creation of ActionScripts within Adobe Flash. Computers are capable of calculating and randomizing changes in imagery to create the illusion of motion and transformation.
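The keyframing and tweening process described above can be sketched in Python. This is a minimal illustration, not from the original article; the function name and the linear easing are assumptions, since real tools such as After Effects offer many easing curves:

```python
def tween(keyframes, t):
    """Linearly interpolate an animated property at time t.

    keyframes: list of (time, value) pairs sorted by time, as set
    by the animator; values between keyframes are tweened.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # normalized position in the interval
            return v0 + u * (v1 - v0)  # linear tween between the two keyframes

# Fading an object in over 24 frames: opacity at frame 12 is 0.5.
opacity = tween([(0, 0.0), (24, 1.0)], 12)
```

Only the keyframes need to be stored, which is why tweened animation uses less memory than storing every frame explicitly.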
Computer animations can use less information space (computer memory) by automatically tweening, a process of rendering the key changes of an image at a specified or calculated time. These key poses or frames are commonly referred to as keyframes. Adobe Flash uses computer animation tweening as well as frame-by-frame animation and video. == Notable filmmakers who have informed the motion graphics industry == Saul Bass John Whitney Maurice Binder Stan Brakhage Robert Abel Kyle Cooper Pablo Ferro Oskar Fischinger Martin Lambie-Nairn Len Lye Norman McLaren == Studios == Early groundbreaking motion design studios include: Charlex Aerodrome Broadway Video Rushes Postproduction Sogitech Robert Abel and Associates Marks & Marks Pacific Data Images Pittard Sullivan Japan Computer Graphics Lab Cranston/Csuri Productions == See also == Audiovisual art Live event support Scanimate Video art Video synthesizer Motion graphic design Music visualization User Experience Design After Effects == References ==
Wikipedia/Motion_graphics
Computer graphics lighting encompasses the range of techniques used to simulate light within computer graphics. These methods vary in computational complexity, offering artists flexibility in both visual detail and performance. Graphics professionals can select from a wide array of light sources, lighting models, shading techniques, and effects to meet the specific requirements of each project. == Light sources == Light sources allow for different ways to introduce light into graphics scenes. === Point === Point sources emit light from a single point in all directions, with the intensity of the light decreasing with distance. An example of a point source is a standalone light bulb. === Directional === A directional source (or distant source) uniformly lights a scene from one direction. Unlike a point source, the intensity of light produced by a directional source does not change with distance over the scale of the scene, as the directional source is treated as though it is extremely far away. An example of a directional source is sunlight on Earth. === Spotlight === A spotlight produces a directed cone of light. The light becomes more intense as the viewer gets closer to the spotlight source and to the center of the light cone. An example of a spotlight is a flashlight. === Area === Area lights are 3D objects which emit light. Whereas point light and spotlight sources are considered infinitesimally small points, area lights are treated as physical shapes. Area lights produce softer shadows and more realistic lighting than point lights and spotlights. === Ambient === Ambient light sources illuminate objects even when no other light source is present. The intensity of ambient light is independent of direction, distance, and other objects, meaning the effect is completely uniform throughout the scene. This source ensures that objects are visible even in complete darkness.
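As an illustration of the point-source falloff described above (not part of the original article), the physically based inverse-square law can be sketched in Python. The function name and the use of radiant power are assumptions; real engines often add constant or linear terms to the denominator for artistic control:

```python
import math

def point_light_intensity(light_pos, surface_pos, power):
    """Irradiance at a surface point from an ideal point source.

    The emitted power spreads over a sphere of radius d, so the
    received intensity falls off with the inverse square of distance.
    """
    offsets = [s - l for s, l in zip(surface_pos, light_pos)]
    d_squared = sum(c * c for c in offsets)   # squared distance to the light
    return power / (4 * math.pi * d_squared)  # inverse-square falloff
```

Doubling the distance to the light quarters the received intensity, matching the intuition that a standalone bulb lights nearby objects far more strongly than distant ones.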
=== Lightwarp === A lightwarp is a technique in which an object in the geometrical world refracts light based on the direction and intensity of the light. The light is then warped using an ambient diffuse term with a range of the color spectrum. The light may then be reflectively scattered to produce a higher depth of field, and refracted. The technique is used to produce a unique rendering style and can be used to limit overexposure of objects. Games such as Team Fortress 2 use the rendering technique to create a cartoonish, cel-shaded stylized look. === HDRI === HDRI stands for high-dynamic-range image and is a 360° image that is wrapped around a 3D model as an outdoor setting and uses the sun typically as a light source in the sky. The textures from the model can reflect the direct and ambient light and colors from the HDRI. == Lighting interactions == In computer graphics, the overall effect of a light source on an object is determined by the combination of the object's interactions with it, usually described by at least three main components. The three primary lighting components (and subsequent interaction types) are diffuse, ambient, and specular. === Diffuse === Diffuse lighting (or diffuse reflection) is the direct illumination of an object by an even amount of light interacting with a light-scattering surface. After light strikes an object, it is reflected as a function of the surface properties of the object as well as the angle of incoming light. This interaction is the primary contributor to the object's brightness and forms the basis for its color. === Ambient === As ambient light is directionless, it interacts uniformly across all surfaces, with its intensity determined by the strength of the ambient light sources and the properties of objects' surface materials, namely their ambient reflection coefficients. === Specular === The specular lighting component gives objects shine and highlights.
This is distinct from mirror effects because other objects in the environment are not visible in these reflections. Instead, specular lighting creates bright spots on objects based on the intensity of the specular lighting component and the specular reflection coefficient of the surface. == Illumination models == Lighting models are used to replicate lighting effects in rendered environments where light is approximated based on the physics of light. Without lighting models, replicating lighting effects as they occur in the natural world would require more processing power than is practical for computer graphics. The purpose of this lighting, or illumination, model is to compute the color of every pixel or the amount of light reflected for different surfaces in the scene. There are two main illumination models: object oriented lighting and global illumination. They differ in that object oriented lighting considers each object individually, whereas global illumination maps how light interacts between objects. Currently, researchers are developing global illumination techniques to more accurately replicate how light interacts with its environment. === Object oriented lighting === Object oriented lighting, also known as local illumination, is defined by mapping a single light source to a single object. This technique is fast to compute, but is often an incomplete approximation of how light would behave in the scene in reality. It is often approximated by summing a combination of specular, diffuse, and ambient light of a specific object. The two predominant local illumination models are the Phong and the Blinn-Phong illumination models. ==== Phong illumination model ==== One of the most common reflection models is the Phong model. The Phong model assumes that the intensity of each pixel is the sum of the intensity due to diffuse, specular, and ambient lighting.
This model takes into account the location of a viewer to determine specular light using the angle of light reflecting off an object. The cosine of the angle is taken and raised to a power decided by the designer. With this, the designer can decide how wide a highlight they want on an object; because of this, the power is called the shininess value. The shininess value is determined by the roughness of the surface, where a mirror would have a value of infinity and the roughest surface might have a value of one. This model creates a more realistic looking white highlight based on the perspective of the viewer. ==== Blinn-Phong illumination model ==== The Blinn-Phong illumination model is similar to the Phong model, as it uses specular light to create a highlight on an object based on its shininess. The Blinn-Phong model differs from the Phong illumination model in that it uses the vector halfway between the light source and the viewer, compared against the surface normal. This model is used in order to have accurate specular lighting and reduced computation time. The process takes less time because finding the reflected light vector's direction is a more involved computation than calculating the halfway vector. While this is similar to the Phong model, it produces different visual results, and the specular reflection exponent or shininess might need modification in order to produce a similar specular reflection. === Global illumination === Global illumination differs from local illumination because it calculates light as it would travel throughout the entire scene. This lighting is based more heavily in physics and optics, with light rays scattering, reflecting, and indefinitely bouncing throughout the scene. There is still active research being done on global illumination as it requires more computational power than local illumination.
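The Phong and Blinn-Phong specular terms discussed above can be sketched in Python. This is an illustrative sketch, not from the original article; all direction vectors are assumed to be unit length and to point away from the surface:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(light, view, normal, shininess):
    """Phong: reflect the light direction about the normal,
    then compare the reflection with the view direction."""
    d = dot(light, normal)
    reflected = tuple(2 * d * n - l for l, n in zip(light, normal))
    return max(0.0, dot(reflected, view)) ** shininess

def blinn_phong_specular(light, view, normal, shininess):
    """Blinn-Phong: compare the halfway vector (between the light
    and view directions) with the surface normal instead."""
    half = normalize(tuple(l + v for l, v in zip(light, view)))
    return max(0.0, dot(half, normal)) ** shininess
```

For the same exponent the Blinn-Phong highlight is wider, which is why the shininess may need to be raised to match a given Phong result, as noted above.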
==== Ray tracing ==== Light sources emit rays that interact with various surfaces through absorption, reflection, or refraction. An observer of the scene would see any light that reaches their eyes; a ray that does not reach the observer goes unnoticed. It is possible to simulate this by having all of the light sources emit rays and then computing how each of them interacts with all of the objects in the scene. However, this process is inefficient, as most of the light rays would not reach the observer and would waste processing time. Ray tracing solves this problem by reversing the process, instead sending view rays from the observer and calculating how they interact until they reach a light source. Although this way more effectively uses processing time and produces a light simulation closely imitating natural lighting, ray tracing still has high computation costs due to the high amounts of light that reach the viewer's eyes. ==== Radiosity ==== Radiosity takes into account the energy given off by surrounding objects and the light source. Unlike ray tracing, which is dependent on the position and orientation of the observer, radiosity lighting is independent of view position. Radiosity requires more computational power than ray tracing, but can be more useful for scenes with static lighting because it would only have to be computed once. The surfaces of a scene can be divided into a large amount of patches; each patch radiates some light and affects the other patches, then a large set of equations needs to be solved simultaneously in order to get the final radiosity of each patch. ==== Photon mapping ==== Photon mapping was created as a two-pass global illumination algorithm that is more efficient than ray tracing. Its basic principle is the tracking of photons released from a light source through a series of stages.
The first pass includes the photons being released from a light source and bouncing off their first object; this map of where the photons are located is then recorded. The photon map contains both the position and direction of each photon, which either bounces or is absorbed. The second pass happens with rendering, where the reflections are calculated for different surfaces. In this process, the photon map is decoupled from the geometry of the scene, meaning rendering can be calculated separately. It is a useful technique because it can simulate caustics, and pre-processing steps do not need to be repeated if the view or objects change. == Polygonal shading == Polygonal shading is part of the rasterization process where 3D models are drawn as 2D pixel images. Shading applies a lighting model, in conjunction with the geometric attributes of the 3D model, to determine how lighting should be represented at each fragment (or pixel) of the resulting image. The polygons of the 3D model store the geometric values needed for the shading process. This information includes vertex positional values and surface normals, but can contain optional data, such as texture and bump maps. === Flat shading === Flat shading is a simple shading model with a uniform application of lighting and color per polygon. The color and normal of one vertex is used to calculate the shading of the entire polygon. Flat shading is inexpensive, as lighting for each polygon only needs to be calculated once per render. === Gouraud shading === Gouraud shading is a type of interpolated shading where the values inside of each polygon are a blend of its vertex values. Each vertex is given its own normal consisting of the average of the surface normals of the surrounding polygons. The lighting and shading at that vertex is then calculated using the average normal and the lighting model of choice. This process is repeated for all the vertices in the 3D model.
Next, the shading of the edges between the vertices is calculated by interpolating between the vertex values. Finally, the shading inside of the polygon is calculated as an interpolation of the surrounding edge values. Gouraud shading generates a smooth lighting effect across the 3D model's surface. === Phong shading === Phong shading, similar to Gouraud shading, is another type of interpolative shading that blends between vertex values to shade polygons. The key difference between the two is that Phong shading interpolates the vertex normal values over the whole polygon before it calculates its shading. This contrasts with Gouraud shading which interpolates the already shaded vertex values over the whole polygon. Once Phong shading has calculated the normal of a fragment (pixel) inside the polygon, it can then apply a lighting model, shading that fragment. This process is repeated until each polygon of the 3D model is shaded. == Lighting effects == === Caustics === Caustics are an effect of light reflected and refracted in a medium with curved interfaces or reflected off a curved surface. They appear as ribbons of concentrated light and are often seen when looking at bodies of water or glass. Caustics can be implemented in 3D graphics by blending a caustic texture map with the texture map of the affected objects. The caustics texture can either be a static image that is animated to mimic the effects of caustics, or a real-time calculation of caustics onto a blank image. The latter is more complicated and requires backwards ray tracing to simulate photons moving through the environment of the 3D render. In a photon mapping illumination model, Monte Carlo sampling is used in conjunction with the ray tracing to compute the intensity of light caused by the caustics. === Reflection mapping === Reflection mapping (also known as environment mapping) is a technique which uses 2D environment maps to create the effect of reflectivity without using ray tracing.
Since the appearances of reflective objects depend on the relative positions of the viewers, the objects, and the surrounding environments, graphics algorithms produce reflection vectors to determine how to color the objects based on these elements. Using 2D environment maps rather than fully rendered 3D objects to represent surroundings, reflections on objects can be determined using simple, computationally inexpensive algorithms. === Particle systems === Particle systems use collections of small particles to model chaotic, high-complexity events, such as fire, moving liquids, explosions, and moving hair. Particles that make up the complex animation are distributed by an emitter, which gives each particle its properties, such as speed, lifespan, and color. Over time, these particles may move, change color, or vary other properties, depending on the effect. Typically, particle systems incorporate randomness, such as in the initial properties the emitter gives each particle, to make the effect realistic and non-uniform. == See also == Per-pixel lighting Computer graphics == References ==
Wikipedia/Computer_graphics_lighting
Reflection in computer graphics is used to render reflective objects like mirrors and shiny surfaces. Accurate reflections are commonly computed using ray tracing whereas approximate reflections can usually be computed faster by using simpler methods such as environment mapping. Reflections on shiny surfaces like wood or tile can add to the photorealistic effects of a 3D rendering. == Approaches to reflection rendering == For rendering environment reflections there exist many techniques that differ in precision, computational cost, and implementation complexity. Combinations of these techniques are also possible. Image order rendering algorithms based on tracing rays of light, such as ray tracing or path tracing, typically compute accurate reflections on general surfaces, including multiple reflections and self reflections. However these algorithms are generally still too computationally expensive for real time rendering (even though specialized hardware exists, such as Nvidia RTX) and require a different rendering approach from typically used rasterization. Reflections on planar surfaces, such as planar mirrors or water surfaces, can be computed simply and accurately in real time with two-pass rendering: one pass for the viewer and one for the view in the mirror, usually with the help of the stencil buffer. Some older video games used a trick to achieve this effect with one pass rendering by putting the whole mirrored scene behind a transparent plane representing the mirror. Reflections on non-planar (curved) surfaces are more challenging for real time rendering. Main approaches that are used include: Environment mapping (e.g. cube mapping): a technique that has been widely used e.g. in video games, offering a reflection approximation that is mostly sufficient to the eye, but lacking self-reflections and requiring pre-rendering of the environment map. The precision can be increased by using a spatial array of environment maps instead of just one.
It is also possible to generate cube map reflections in real time, at the cost of memory and computational requirements. Screen space reflections (SSR): a more expensive technique that traces reflection rays using per-pixel data. It requires surface normals and either a depth buffer (local space) or a position buffer (world space). The disadvantage is that objects not captured in the rendered frame cannot appear in the reflections, which results in unresolved or false intersections, causing artefacts such as vanishing reflections and false virtual images. SSR was originally introduced as Real Time Local Reflections in CryENGINE 3. == Types of reflection == Polished - A polished reflection is an undisturbed reflection, like a mirror or chrome surface. Blurry - A blurry reflection means that tiny random bumps on the surface of the material cause the reflection to be blurry. Metallic - A reflection is metallic if the highlights and reflections retain the color of the reflective object. Glossy - This term can be misused: sometimes, it is a setting which is the opposite of blurry (e.g. when "glossiness" has a low value, the reflection is blurry). Sometimes the term is used as a synonym for "blurred reflection". Used in this context, glossy means that the reflection is actually blurred. === Polished or mirror reflection === Mirrors are usually almost 100% reflective. === Metallic reflection === Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic objects reflect lights and colors altered by the color of the metallic object itself. === Blurry reflection === Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface roughness that scatters the rays of the reflections. === Glossy reflection === Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.
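The reflection vector at the heart of these techniques (used, for example, when looking up a cube map) can be sketched in Python. This is the standard mirror-reflection formula rather than code from any particular engine:

```python
def reflect(incident, normal):
    """Mirror-reflect an incident direction about a unit surface normal.

    Implements R = I - 2 (I . N) N, the same formula exposed by shading
    languages such as GLSL's reflect(). `incident` points from the eye
    toward the surface; R indexes the surrounding environment map.
    """
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

# A ray looking straight down at an upward-facing surface bounces straight up.
up = reflect((0, 0, -1), (0, 0, 1))
```

Because the lookup depends only on this direction, environment mapping avoids tracing rays through the scene, which is what makes it cheap compared with ray-traced reflections.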
== Examples of reflections == === Wet floor reflections === The wet floor effect is a graphic effects technique popular in conjunction with Web 2.0 style pages, particularly in logos. The effect can be created manually or generated automatically with an auxiliary tool. Unlike a standard computer reflection (and the Java water effect popular in first-generation web graphics), the wet floor effect involves a gradient and often a slant in the reflection, so that the mirrored image appears to be hovering over or resting on a wet floor. == See also == Illumination model Lambertian reflectance Ray tracing Reflection mapping Rendering (computer graphics) Specular reflection (optics) == References ==
Wikipedia/Reflection_(computer_graphics)
SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) is an annual conference centered on computer graphics organized by ACM, first held in 1974 in Boulder, Colorado. The main conference has always been held in North America; SIGGRAPH Asia, a second conference held annually, has been held since 2008 in countries throughout Asia. == Overview == The conference incorporates both academic presentations and an industry trade show. Other events at the conference include educational courses and panel discussions on recent topics in computer graphics and interactive techniques. === SIGGRAPH Proceedings === The SIGGRAPH conference proceedings, which are published in the ACM Transactions on Graphics, have one of the highest impact factors among academic publications in the field of computer graphics. The paper acceptance rate for SIGGRAPH has historically been between 17% and 29%, with an average acceptance rate between 2015 and 2019 of 27%. The submitted papers are peer-reviewed under a process that was historically single-blind, but was changed in 2018 to double-blind. Since 2003, the papers accepted for presentation at SIGGRAPH have been printed in a special issue of the ACM Transactions on Graphics journal. Prior to 1992, SIGGRAPH papers were printed as part of the Computer Graphics publication; between 1993 and 2001, there was a dedicated SIGGRAPH Conference Proceedings series of publications. === Awards programs === SIGGRAPH has several awards programs to recognize contributions to computer graphics. The most prestigious is the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics. It has been awarded every two years since 1983 to recognize an individual's lifetime achievement in computer graphics. == Conference == The SIGGRAPH conference experienced significant growth starting in the 1970s, peaking around the turn of the century. A second conference, SIGGRAPH Asia, started in 2008.
=== SIGGRAPH === === SIGGRAPH Asia === === Sponsored Conference === SIGGRAPH has sponsored a number of conferences related to the field of computer graphics, including the ACM SIGGRAPH/Eurographics Symposium on Computer Animation and the ACM SIGGRAPH Conference on Motion, Interaction and Games (formerly known as Motion in Games). == See also == Association for Computing Machinery ACM SIGGRAPH ACM Transactions on Graphics Computer Graphics, a publication of ACM SIGGRAPH The list of computer science conferences contains other academic conferences in computer science. == References == == External links == ACM SIGGRAPH website ACM SIGGRAPH conference publications (ACM Digital Library) ACM SIGGRAPH YouTube SIGGRAPH 2017 Conference, Los Angeles, CA SIGGRAPH Asia 2017 Conference, Bangkok, Thailand
Wikipedia/SIGGRAPH
Computer Graphics: Principles and Practice is a textbook written by James D. Foley, Andries van Dam, Steven K. Feiner, John Hughes, Morgan McGuire, David F. Sklar, and Kurt Akeley and published by Addison–Wesley. First published in 1982 as Fundamentals of Interactive Computer Graphics, it is widely considered a classic standard reference book on the topic of computer graphics. It is sometimes known as the bible of computer graphics (due to its size). == Editions == === First Edition === The first edition, published in 1982 and titled Fundamentals of Interactive Computer Graphics, discussed the SGP library, which was based on the ACM SIGGRAPH CORE 1979 graphics standard, and focused on 2D vector graphics. === Second Edition === The second edition, published in 1990, was completely rewritten and covered 2D and 3D raster and vector graphics, user interfaces, geometric modeling, anti-aliasing, advanced rendering algorithms, and an introduction to animation. The SGP library was replaced by SRGP (Simple Raster Graphics Package), a library for 2D raster primitives and interaction handling, and SPHIGS (Simple PHIGS), a library for 3D primitives, both of which were specifically written for the book. === Second Edition in C === In the second edition in C, all examples were converted from Pascal to C. New implementations of the SRGP and SPHIGS graphics packages in C were also provided. === Third Edition === A third edition covering modern GPU architecture was released in July 2013. Examples in the third edition are written in C++, C#, WPF, GLSL, OpenGL, G3D, or pseudocode. == Awards == The book won a Front Line Award (Hall of Fame) in 1998. == References ==
Wikipedia/Computer_Graphics:_Principles_and_Practice
A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuation by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of the body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated. Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography". == Types == On the basis of image acquisition and procedure, various types of scanners are available on the market. === Sequential CT === Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops; this is followed by rotation of the X-ray tube and acquisition of a slice. The table then increments again, and another slice is taken. Because the table movement stops while each slice is taken, the total scanning time is increased. === Spiral CT === Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned.
These are the dominant type of scanners on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (the X-ray tube assembly and the detector array on the opposite side of the circle), which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle as a technique to improve temporal resolution. === Electron beam tomography === Electron beam tomography (EBT) is a specific form of CT in which the X-ray tube is constructed large enough that only the beam of electrons, travelling between the cathode and anode of the tube, is swept using deflection coils. This type has a major advantage in that sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and limited anatomical coverage. === Dual energy CT === Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT may employ a dual-source, single-source with dual detector layer, or single-source with energy-switching method to obtain two different sets of data. Dual source CT is an advanced scanner with two X-ray tube–detector systems, unlike conventional single tube systems. The two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for shorter breath-hold times.
This is particularly useful for ill patients who have difficulty holding their breath or are unable to take heart-rate-lowering medication. Single source with energy switching is another mode of dual energy CT, in which a single tube is operated at two different energies by switching between them rapidly. === CT perfusion imaging === CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan, making it better suited for stroke diagnosis than other CT types. === PET CT === Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body, can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning. PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer. == Medical use == Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography.
It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied. The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015. === Head === CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage, and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer. === Neck === Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scanning often incidentally finds thyroid abnormalities, and so is often the preferred investigation modality for them. === Lungs === A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality.
For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high-spatial-frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique, called high-resolution CT, produces a sampling of the lung rather than continuous images. Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of that recommended by established guidelines. === Angiography === Computed tomography angiography (CTA) is a type of contrast CT used to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms, and legs. An example of this type of exam is the CT pulmonary angiogram (CTPA), used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure.
=== Cardiac === A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves. The main forms of cardiac CT scanning are: Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease. Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can possibly be done from contrast-enhanced images as well. To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding. === Abdomen and pelvis === CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease, GIT bleeding, and diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. 
They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment; size is especially important in predicting the time to spontaneous passage of a stone. === Axial skeleton and extremities === For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout. === Biomechanical use === CT is used in biomechanics to quickly reveal the geometry, anatomy, density, and elastic moduli of biological tissues. == Other uses == === Industrial use === Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods, and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts. === Aviation security === CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection (the CTX explosive-detection device) and is also under consideration for automated baggage/parcel security scanning using computer-vision-based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers).
Its use in airport security was pioneered at Shannon Airport in March 2022, ending the ban there on liquids over 100 ml; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners, ready to go live in the summer. === Geological use === X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull, in CT images. === Paleontological use === Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation. X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages: for example, fragile structures that might otherwise never be studied can be examined, and models of fossils can be freely moved around in virtual 3D space and inspected without damaging the fossil. === Cultural heritage use === X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts, like the Herculaneum papyri, in which the material composition has very little variation along the inside of the object.
After scanning these objects, computational methods can be employed to examine their insides, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts), which provided a "tamper-evident locking mechanism". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics. Recently, CWI in Amsterdam has collaborated with the Rijksmuseum to investigate the interior details of art objects in a framework called IntACT. === Microorganism research === Various types of fungi can degrade wood to different degrees. One Belgian research group, using three-dimensional X-ray CT with sub-micron resolution, has shown that fungi can penetrate micropores of 0.6 μm under certain conditions. === Timber sawmill === Sawmills use industrial CT scanners to detect round defects, for instance knots, to improve the total value of timber production. Most sawmills plan to incorporate this robust detection tool to improve productivity in the long run; however, the initial investment cost is high. == Interpretation of results == === Presentation === The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories: slices of varying thickness (a thin slice is generally regarded as a plane representing a thickness of less than 3 mm, and a thick slice as a plane representing a thickness between 3 mm and 5 mm); projections, including maximum intensity projection and average intensity projection; and volume rendering (VR). Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings a bit vague.
Advanced volume-rendering models combine, for example, coloring and shading to create realistic and readable representations. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. ==== Grayscale ==== Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. Each pixel is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while steel can completely block the X-ray beam and is therefore responsible for well-known line artifacts in computed tomograms. Such artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. ==== Windowing ==== CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU.
Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan. ==== Multiplanar reconstruction and projections ==== Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because present CT scanners provide almost isotropic resolution. MPR is used in almost every scan. The spine is frequently examined with it: an image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to the other vertebral bones. By reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes. New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes; it is better suited, for example, to visualization of the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan. Curved-plane reconstruction (or curved planar reformation, CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made. This is helpful in preoperative assessment of a surgical procedure.
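The window/level mapping described under Windowing can be sketched in a few lines of Python. This is a minimal illustration rather than any scanner's actual implementation; the function name, the choice of an 8-bit output range, and the example brain-window parameters (level 40 HU, width 80 HU, matching the 0–80 HU window mentioned above) are assumptions for the example:

```python
import numpy as np

def window_image(hu, level=40, width=80):
    """Map Hounsfield units to 8-bit grayscale using a window defined
    by its center (window level) and width (window width).

    Values at or below level - width/2 map to 0 (black), values at or
    above level + width/2 map to 255 (white), and values in between
    are scaled linearly."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    clipped = np.clip(hu, lo, hi)          # clamp to the window
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# A brain window (level 40, width 80) spans 0 HU to 80 HU:
# air and water render black, anything at 80 HU or denser renders white.
slice_hu = np.array([-1000, 0, 40, 80, 400])  # air, water, mid-window, upper bound, bone
print(window_image(slice_hu).tolist())  # -> [0, 0, 127, 255, 255]
```

Narrow windows spread few HU values over the full grayscale ramp (high soft-tissue contrast), while wide windows compress a large HU range into it, which is why bone and lung are viewed with much wider windows than brain.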
For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view. ==== Volume rendering ==== A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to get multiple models; each anatomical component, such as muscle, bone, and cartilage, can be differentiated on the basis of the different colours given to them. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique, as it displays only the surfaces that meet a particular threshold density and which face the viewer. In volume rendering, however, transparency, colours, and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even when viewing at an oblique angle, one part of the image does not hide another. === Image quality === ==== Dose versus image quality ==== An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the risk of adverse side effects, including radiation-induced cancer: a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan. New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose.
The examination can also be individualized, adjusting the radiation dose to the body type and body organ examined; different body types and organs require different amounts of radiation, and higher resolution is not always needed, as in the detection of small pulmonary masses. ==== Artifacts ==== Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following: Streak artifact Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging). Partial volume effect This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage). The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. This can be partially overcome by scanning using thinner slices, or an isotropic acquisition on a modern scanner. Ring artifact Probably the most common mechanical artifact, the image of one or many "rings" appears within an image.
They are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defects or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts. Noise This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy. Windmill Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch. Beam hardening This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum, and photons of higher photon energy levels are typically attenuated less. Because of this, the mean energy of the spectrum increases when passing through the object, often described as getting "harder". If not corrected, this leads to an effect that increasingly underestimates material thickness. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods. == Advantages == CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less.
Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task. The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose. CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocol, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation. CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion. == Adverse effects == === Cancer === The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can have a 100 to 1,000 times higher dose than conventional X-rays. However, a lumbar spine X-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation. Large-scale population-based studies have consistently demonstrated that low-dose radiation from CT scans has an impact on cancer incidence across a variety of cancers.
For example, in a large population-based Australian cohort it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years. Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant and previously unreported finding is that some patients received a dose of more than 100 mSv from CT scans in a single day, which counteracts criticisms some investigators have raised about the effects of protracted versus acute exposure. There are contrarian views, and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007.
Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic. A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT in a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly. The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting, however, limitations in the evidence on which the review was based. CT scans can be performed with different settings for lower exposure in children, with most manufacturers of CT scanners having this function built in as of 2007. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask. === Contrast reactions === In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% with ionic contrast. Skin rashes may appear within a week in 3% of people. The old radiocontrast agents caused anaphylaxis in 1% of cases, while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases.
Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly or in poor health, usually secondary to either anaphylaxis or acute kidney injury. The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast. In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. 
Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought. === Scan dose === The table reports average radiation exposures; however, there can be a wide variation in radiation doses between similar scan types, where the highest dose could be as much as 22 times higher than the lowest dose. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year as background radiation. As of 2007, medical imaging accounted for half of the radiation exposure of people in the United States, with CT scans making up two-thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays. ==== Radiation dose units ==== The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double-strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy. The sievert unit is used in the report of the effective dose.
In the context of CT scans, the sievert does not correspond to the actual radiation dose that the scanned body part absorbs; rather, it expresses a hypothetical whole-body radiation dose of a magnitude estimated to have the same probability of inducing cancer as the CT scan. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose is the effective dose for a case in which the whole body would actually absorb the same radiation dose, and the sievert unit is used in its report. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism. ==== Effects of radiation ==== Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or one in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
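The gray-to-sievert relationship described above can be illustrated numerically: in simplified form, the effective dose is a weighted sum of organ doses, with weights reflecting each tissue's sensitivity to radiation-induced cancer. The sketch below uses a subset of the ICRP Publication 103 tissue weighting factors; the organ doses themselves are invented example values, not measurements from any real scanner.

```python
# Simplified effective-dose sketch: a weighted sum of organ doses.
# Tissue weighting factors are a subset of ICRP Publication 103;
# the organ doses below are invented example values.
# For X-rays the radiation weighting factor is 1, so each organ's
# equivalent dose in mSv equals its absorbed dose in mGy.
TISSUE_WEIGHTS = {
    "stomach": 0.12,
    "colon": 0.12,
    "liver": 0.04,
    "bladder": 0.04,
    "remainder": 0.12,
}

def effective_dose(organ_doses_mgy):
    """Risk-weighted, whole-body-equivalent dose in mSv."""
    return sum(TISSUE_WEIGHTS[organ] * dose
               for organ, dose in organ_doses_mgy.items())

# Hypothetical abdominal scan: directly irradiated organs absorb
# 10-20 mGy each, yet the effective dose comes out far smaller.
scan = {"stomach": 20.0, "colon": 18.0, "liver": 15.0,
        "bladder": 12.0, "remainder": 10.0}
print(f"Effective dose: {effective_dose(scan):.1f} mSv")  # 6.8 mSv
```

This mirrors the point made above: the absorbed dose in the scanned region (tens of mGy) is much larger than the effective dose (a few mSv) used for whole-body risk comparisons.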
==== Excess doses ==== In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameter has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error. == Procedure == The CT scan procedure varies according to the type of study and the organ being imaged. The patient lies on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After selecting the proper amount and rate of contrast on the pressure injector, a scout image is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data is processed according to the study, and proper windowing is done to make the scans easy to diagnose. === Preparation === Patient preparation may vary according to the type of scan. General patient preparation includes: signing the informed consent, removal of metallic objects and jewelry from the region of interest, changing into a hospital gown according to hospital protocol, and checking of kidney function, especially creatinine and urea levels (in the case of CECT). == Mechanism == Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source.
As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, but it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units called pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually produces values of around +1,000 HU, while steel can completely extinguish the X-rays and is therefore responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at the patient from the feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image is also the patient's anterior and vice versa.
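The attenuation and Hounsfield-scale arithmetic just described can be sketched in a few lines. The Hounsfield rescaling, HU = 1,000 × (μ − μ_water)/μ_water, reproduces the fixed points given above (water at 0 HU, air near −1,000 HU); the numeric attenuation coefficients used here are rough illustrative values, not calibrated data.

```python
import math

MU_WATER = 0.19  # linear attenuation coefficient of water (cm^-1);
                 # an approximate, illustrative value for CT energies

def to_hounsfield(mu):
    """Rescale a linear attenuation coefficient to Hounsfield units."""
    return 1000.0 * (mu - MU_WATER) / MU_WATER

# The scale's fixed points fall out by construction:
print(to_hounsfield(MU_WATER))  # water ->     0.0 HU
print(to_hounsfield(0.0))       # air   -> -1000.0 HU

# Beer-Lambert law for a single ray: the transmitted intensity falls
# exponentially with the summed attenuation of the tissues traversed,
# which is what each detector reading (one sinogram sample) encodes.
def transmitted_fraction(segments):
    """segments: (mu in cm^-1, thickness in cm) pairs along the ray."""
    return math.exp(-sum(mu * t for mu, t in segments))

# Hypothetical ray crossing 2 cm of soft tissue and 1 cm of bone:
print(round(transmitted_fraction([(0.20, 2.0), (0.48, 1.0)]), 3))  # 0.415
```

Tomographic reconstruction inverts this relationship: from many such ray measurements at many angles, it recovers the per-voxel μ values, which are then reported on the Hounsfield scale.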
This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy. === Contrast === Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast. == History == The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972. It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners. === Etymology === The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. 
It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term CAT scan is no longer in technical use because current CT scans enable multiplanar reconstructions. This makes CT scan the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975. == Society and culture == === Campaigns === In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available for pediatric patients. This initiative has been endorsed and applied by a growing list of professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population, called Image Wisely.
The World Health Organization and the International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Prevalence === Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of these CT scans, six to eleven percent are done in children, a seven- to eightfold increase from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly among the emergency physicians who saw them, from 1.8% to 25%. In emergency departments in the United States, CT or MRI imaging was done in 15% of people who presented with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, a proportion of CT scans in the United States were performed unnecessarily; some estimates place this number at 30%. There are a number of reasons for this, including legal concerns, financial incentives, and demand from the public. For example, some healthy people eagerly pay to receive full-body CT scans as screening, even though it is not at all clear that the benefits outweigh the risks and costs.
Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost. == Manufacturers == Major manufacturers of CT scanning devices and equipment are Canon Medical Systems Corporation, Fujifilm Healthcare, GE HealthCare, Neusoft Medical Systems, Philips, Siemens Healthineers, and United Imaging. == Research == Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors; photons are measured as a voltage on a capacitor that is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage-to-X-ray-intensity relationship. Photon-counting detectors (PCDs) are still affected by noise, but noise does not change the measured photon counts. PCDs have several potential advantages, including improving signal (and contrast) to noise ratios, reducing doses, improving spatial resolution, and, through use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon-counting CT was in use at three sites. Some early research has found the dose-reduction potential of photon-counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal. == See also == == References == == External links == Development of CT imaging CT Artefacts—PPT by David Platten Filler A (2009-06-30).
"The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357. Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717.
Wikipedia/Computed_tomography
A graphics card (also called a video card, display card, graphics accelerator, graphics adapter, VGA card/VGA, video adapter, display adapter, or colloquially GPU) is a computer expansion card that generates a feed of graphics output to a display device such as a monitor. Graphics cards are sometimes called discrete or dedicated graphics cards to emphasize their distinction from an integrated graphics processor on the motherboard or the central processing unit (CPU). A graphics processing unit (GPU) that performs the necessary computations is the main component in a graphics card, but the acronym "GPU" is sometimes also used, erroneously, to refer to the graphics card as a whole. Most graphics cards are not limited to simple display output. The graphics processing unit can be used for additional processing, which reduces the load on the CPU. Additionally, computing platforms such as OpenCL and CUDA allow using graphics cards for general-purpose computing. Applications of general-purpose computing on graphics cards include AI training, cryptocurrency mining, and molecular simulation. Usually, a graphics card comes in the form of a printed circuit board (expansion board) which is to be inserted into an expansion slot. Others may have dedicated enclosures, and they are connected to the computer via a docking station or a cable. These are known as external GPUs (eGPUs). Graphics cards are often preferred over integrated graphics for increased performance. A more powerful graphics card will be able to render more frames per second. == History == Graphics cards, also known as video cards or graphics processing units (GPUs), have historically evolved alongside computer display standards to accommodate advancing technologies and user demands. In the realm of IBM PC compatibles, the early standards included Monochrome Display Adapter (MDA), Color Graphics Adapter (CGA), Hercules Graphics Card, Enhanced Graphics Adapter (EGA), and Video Graphics Array (VGA).
Each of these standards represented a step forward in the ability of computers to display more colors, higher resolutions, and richer graphical interfaces, laying the foundation for the development of modern graphical capabilities. In the late 1980s, advancements in personal computing led companies like Radius to develop specialized graphics cards for the Apple Macintosh II. These cards were unique in that they incorporated discrete 2D QuickDraw capabilities, enhancing the graphical output of Macintosh computers by accelerating 2D graphics rendering. QuickDraw, a core part of the Macintosh graphical user interface, allowed for the rapid rendering of bitmapped graphics, fonts, and shapes, and the introduction of such hardware-based enhancements signaled an era of specialized graphics processing in consumer machines. The evolution of graphics processing took a major leap forward in the mid-1990s with 3dfx Interactive's introduction of the Voodoo series, one of the earliest consumer-facing GPUs that supported 3D acceleration. The Voodoo's architecture marked a major shift in graphical computing by offloading the demanding task of 3D rendering from the CPU to the GPU, significantly improving gaming performance and graphical realism. The development of fully integrated GPUs that could handle both 2D and 3D rendering came with the introduction of the NVIDIA RIVA 128. Released in 1997, the RIVA 128 was one of the first consumer-facing GPUs to integrate both 3D and 2D processing units on a single chip. This innovation simplified the hardware requirements for end-users, as they no longer needed separate cards for 2D and 3D rendering, thus paving the way for the widespread adoption of more powerful and versatile GPUs in personal computers. In contemporary times, the majority of graphics cards are built using chips sourced from two dominant manufacturers: AMD and Nvidia. 
These modern graphics cards are multifunctional and support various tasks beyond rendering 3D images for gaming. They also provide 2D graphics processing, video decoding, TV output, and multi-monitor setups. Additionally, many graphics cards now have integrated sound capabilities, allowing them to transmit audio alongside video output to connected TVs or monitors with built-in speakers, further enhancing the multimedia experience. Within the graphics industry, these products are often referred to as graphics add-in boards (AIBs). The term "AIB" emphasizes the modular nature of these components, as they are typically added to a computer's motherboard to enhance its graphical capabilities. The evolution from the early days of separate 2D and 3D cards to today's integrated and multifunctional GPUs reflects the ongoing technological advancements and the increasing demand for high-quality visual and multimedia experiences in computing. == Discrete vs integrated graphics == As an alternative to the use of a graphics card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip as integrated graphics. Motherboard-based implementations are sometimes called "on-board video". Some motherboards support using both integrated graphics and a graphics card simultaneously to feed separate displays. The main advantages of integrated graphics are: low cost, compactness, simplicity, and low energy consumption. Integrated graphics often have less performance than a graphics card because the graphics processing unit inside integrated graphics needs to share system resources with the CPU. On the other hand, a graphics card has a separate random access memory (RAM), cooling system, and dedicated power regulators. A graphics card can offload work and reduce memory-bus-contention from the CPU and system RAM, therefore, the overall performance for a computer could improve, in addition to increased performance in graphics processing. 
Such improvements to performance can be seen in video gaming, 3D animation, and video editing. Both AMD and Intel have introduced CPUs and motherboard chipsets that support the integration of a GPU into the same die as the CPU. AMD advertises CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel brands similar technology under "Intel Graphics Technology". == Power demand == As the processing power of graphics cards has increased, so has their demand for electrical power. Current high-performance graphics cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts. When tested with video games, the GeForce RTX 2080 Ti Founder's Edition averaged 300 watts of power consumption. While CPU and power supply manufacturers have recently aimed toward higher efficiency, the power demands of graphics cards have continued to rise, and the graphics card is often the individual component with the largest power consumption in a computer. Although power supplies have also increased their power output, the bottleneck occurs in the PCI-Express connection, which is limited to supplying 75 watts. Modern graphics cards with a power consumption of over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple graphics cards may require power supplies over 750 watts. Heat extraction becomes a major design consideration for computers with two or more high-end graphics cards. As of the Nvidia GeForce RTX 30 series (Ampere architecture), a custom-flashed RTX 3090 named "Hall of Fame" has been recorded to reach a peak power draw as high as 630 watts. A standard RTX 3090 can peak at up to 450 watts. The RTX 3080 can reach up to 350 watts, while a 3070 can reach a similar, if not slightly lower, peak power draw.
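The slot and connector limits quoted above (75 W from the PCIe slot, 75 W per six-pin socket, 150 W per eight-pin socket) make a card's maximum deliverable power a simple sum. A minimal sketch; the example card configurations are hypothetical:

```python
# Rough power-budget sketch using the limits mentioned above:
# 75 W from the PCIe x16 slot, 75 W per 6-pin auxiliary connector,
# 150 W per 8-pin auxiliary connector.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def max_board_power(aux_connectors):
    """Total power (W) deliverable to a card with the given aux sockets.

    The PCIe slot itself always contributes; extra sockets add on top.
    """
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c]
                                         for c in aux_connectors)

# A hypothetical card with one 8-pin and one 6-pin socket:
print(max_board_power(["8-pin", "6-pin"]))  # 300
# A card powered by the slot alone:
print(max_board_power([]))                  # 75
```

This is why any card rated above 75 W must carry auxiliary power sockets: the slot alone cannot supply it.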
Ampere cards of the Founders Edition variant feature a "dual axial flow through" cooler design, which includes fans above and below the card to dissipate as much heat as possible towards the rear of the computer case. A similar design was used by the Sapphire Radeon RX Vega 56 Pulse graphics card. == Size == Graphics cards for desktop computers have different size profiles, which allows graphics cards to be added to smaller-sized computers. Some graphics cards are not of the usual size and are termed "low profile". Graphics card profiles are based on height only, with low-profile cards taking up less than the full height of a PCIe slot; some can be as low as "half-height". Length and thickness can vary greatly, with high-end cards usually occupying two or three expansion slots and modern high-end graphics cards such as the RTX 4090 exceeding 300 mm in length. A lower-profile card is preferred when trying to fit multiple cards or when graphics cards run into clearance issues with other motherboard components like the DIMM or PCIe slots. This can be addressed with a larger computer case such as a mid-tower or full tower. Full towers are usually able to fit larger motherboards in sizes like ATX and micro-ATX. === GPU sag === In the late 2010s and early 2020s, some high-end graphics card models have become so heavy that it is possible for them to sag downwards after installation without proper support, which is why many manufacturers provide additional support brackets. GPU sag can damage a GPU in the long term. == Multicard scaling == Some graphics cards can be linked together to allow scaling graphics processing across multiple cards. This is done using either the PCIe bus on the motherboard or, more commonly, a data bridge. Usually, the cards must be of the same model to be linked, and most low-end cards are not able to be linked in this way.
AMD and Nvidia both have proprietary scaling methods: CrossFireX for AMD, and SLI (superseded by NVLink since the Turing generation) for Nvidia. Cards from different chip-set manufacturers or architectures cannot be used together for multi-card scaling. If graphics cards have different sizes of memory, the lowest value will be used, with the higher values disregarded. Currently, scaling on consumer-grade cards can be done using up to four cards. The use of four cards requires a large motherboard with a proper configuration. Nvidia's GeForce GTX 590 graphics card can be used in a four-card configuration. As stated above, users will want to stick to cards with the same performance for optimal use. Motherboards including the ASUS Maximus 3 Extreme and Gigabyte GA EX58 Extreme are certified to work with this configuration. A large power supply is necessary to run the cards in SLI or CrossFireX. Power demands must be known before a proper supply is installed. For a four-card configuration, a 1000+ watt supply is needed. With any relatively powerful graphics card, thermal management cannot be ignored. Graphics cards require well-vented chassis and good thermal solutions. Air or water cooling is usually required, though low-end GPUs can use passive cooling. Larger configurations use water solutions or immersion cooling to achieve proper performance without thermal throttling. SLI and CrossFire have become increasingly uncommon, as most games do not fully utilize multiple GPUs and most users cannot afford them. Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video and 3D rendering, visual effects, and simulations, and for training artificial intelligence. == 3D graphics APIs == A graphics driver usually supports one or multiple cards by the same vendor and has to be written for a specific operating system.
Additionally, the operating system or an extra software package may provide certain programming APIs for applications to perform 3D rendering. === Specific usage === Some GPUs are designed with specific usage in mind: gaming (GeForce GTX, GeForce RTX, Nvidia Titan, Radeon HD, Radeon RX, Intel Arc); cloud gaming (Nvidia Grid, Radeon Sky); workstation (Nvidia Quadro, AMD FirePro, Radeon Pro, Intel Arc Pro); cloud workstation (Nvidia Tesla, AMD FireStream); artificial intelligence in the cloud (Nvidia Tesla, Radeon Instinct); and automated/driverless cars (Nvidia Drive PX). == Industry == As of 2016, the primary suppliers of the GPUs (graphics chips or chipsets) used in graphics cards are AMD and Nvidia. In the third quarter of 2013, AMD had a 35.5% market share while Nvidia had 64.5%, according to Jon Peddie Research. In economics, this industry structure is termed a duopoly. AMD and Nvidia also build and sell graphics cards, which are termed graphics add-in-boards (AIBs) in the industry. (See Comparison of Nvidia graphics processing units and Comparison of AMD graphics processing units.) In addition to marketing their own graphics cards, AMD and Nvidia sell their GPUs to authorized AIB suppliers, which AMD and Nvidia refer to as "partners". The fact that Nvidia and AMD compete directly with their customer/partners complicates relationships in the industry. AMD and Intel being direct competitors in the CPU industry is also noteworthy, since AMD-based graphics cards may be used in computers with Intel CPUs. Intel's integrated graphics may weaken AMD, which derives a significant portion of its revenue from its APUs. As of the second quarter of 2013, there were 52 AIB suppliers. These AIB suppliers may market graphics cards under their own brands, produce graphics cards for private label brands, or produce graphics cards for computer manufacturers. Some AIB suppliers such as MSI build both AMD-based and Nvidia-based graphics cards.
Others, such as EVGA, build only Nvidia-based graphics cards, while XFX now builds only AMD-based graphics cards. Several AIB suppliers are also motherboard suppliers. Most of the largest AIB suppliers are based in Taiwan and they include ASUS, MSI, GIGABYTE, and Palit. Hong Kong–based AIB manufacturers include Sapphire and Zotac. Sapphire and Zotac also sell graphics cards exclusively for AMD and Nvidia GPUs respectively. == Market == Graphics card shipments peaked at a total of 114 million in 1999. By contrast, they totaled 14.5 million units in the third quarter of 2013, a 17% fall from Q3 2012 levels. Shipments reached an annual total of 44 million in 2015. The sales of graphics cards have trended downward due to improvements in integrated graphics technologies; high-end, CPU-integrated graphics can provide competitive performance with low-end graphics cards. At the same time, graphics card sales have grown within the high-end segment, as manufacturers have shifted their focus to prioritize the gaming and enthusiast market. Beyond the gaming and multimedia segments, graphics cards have been increasingly used for general-purpose computing, such as big data processing. The growth of cryptocurrency has placed severe demand on high-end graphics cards, especially in large quantities, due to their advantages in the process of cryptocurrency mining. In January 2018, mid- to high-end graphics cards experienced a major surge in price, with many retailers having stock shortages due to the significant demand in this market. Graphics card companies released mining-specific cards designed to run 24 hours a day, seven days a week, and without video output ports. The graphics card industry suffered a setback due to the 2020–21 chip shortage. == Parts == A modern graphics card consists of a printed circuit board on which the components are mounted.
These include: === Graphics processing unit === A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. Because of the large degree of programmable computational complexity for such a task, a modern graphics card is also a computer unto itself. === Heat sink === A heat sink is mounted on most modern graphics cards. A heat sink spreads out the heat produced by the graphics processing unit evenly throughout the heat sink and unit itself. The heat sink commonly has a fan mounted to cool the heat sink and the graphics processing unit. Not all cards have heat sinks; for example, some cards are liquid-cooled and instead have a water block; additionally, cards from the 1980s and early 1990s did not produce much heat, and did not require heat sinks. Most modern graphics cards need proper thermal solutions. They can be cooled by water or by heat sinks with additional connected heat pipes, usually made of copper for the best thermal transfer. === Video BIOS === The video BIOS or firmware contains a minimal program for the initial setup and control of the graphics card. It may contain information on the memory and memory timing, operating speeds and voltages of the graphics processor, and other details which can sometimes be changed. Modern Video BIOSes do not support the full functionality of graphics cards; they are only sufficient to identify and initialize the card to display one of a few frame buffer or text display modes. The BIOS does not support YUV to RGB translation, video scaling, pixel copying, compositing or any of the multitude of other 2D and 3D features of the graphics card, which must be accessed by software drivers. === Video memory === The memory capacity of most modern graphics cards ranges from 2 to 24 GB.
Capacities of up to 32 GB appeared by the late 2010s, as graphics applications became more demanding and widespread. Since video memory needs to be accessed by the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, the video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, GDDR5, GDDR5X, and GDDR6. The effective memory clock rate in modern cards is generally between 2 and 15 GHz. Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, as well as textures, vertex buffers, and compiled shader programs. === RAMDAC === The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs such as cathode-ray tube (CRT) displays. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC-data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, to minimize flicker. (This is not a problem with LCD displays, as they have little to no flicker.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD/plasma monitors and TVs and projectors with only digital connections work in the digital domain and do not require a RAMDAC for those connections. There are displays that feature analog inputs (VGA, component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
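As a rough numerical illustration of how the RAMDAC data-transfer rate bounds the supportable refresh rate, the sketch below divides the pixel clock by the pixel count; the ~25% blanking overhead and the 400 MHz clock are assumed figures for illustration, not quoted specifications.

```python
def max_refresh_hz(pixel_clock_hz, width, height, blanking_overhead=1.25):
    """Estimate the highest refresh rate a RAMDAC can drive.

    The RAMDAC emits one pixel per clock tick; horizontal and vertical
    blanking intervals consume extra ticks, modeled here by an assumed
    ~25% overhead factor.
    """
    return pixel_clock_hz / (width * height * blanking_overhead)

# An assumed 400 MHz RAMDAC driving a 1600x1200 CRT:
print(round(max_refresh_hz(400e6, 1600, 1200)))  # 167
```

Under these assumptions the converter comfortably clears the 75 Hz flicker threshold mentioned above; a slower RAMDAC or a higher resolution lowers the achievable rate proportionally.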
With the VGA standard being phased out in favor of digital formats, RAMDACs have started to disappear from graphics cards. === Output interfaces === The most common connection systems between the graphics card and the computer display are: ==== Video Graphics Array (VGA) (DE-15) ==== Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s designed for CRT displays, also called VGA connector. Today, the VGA analog interface is used for high definition video resolutions including 1080p and higher. Some problems of this standard are electrical noise, image distortion and sampling error in evaluating pixels. While the VGA transmission bandwidth is high enough to support even higher resolution playback, the picture quality can degrade depending on cable quality and length. The extent of quality difference depends on the individual's eyesight and the display; when compared with a DVI or HDMI connection, especially on larger-sized LCD/LED monitors or TVs, quality degradation, if present, is prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image Constraint Token (ICT) is not enabled on the Blu-ray disc. ==== Digital Visual Interface (DVI) ==== Digital Visual Interface is a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. There were also some rare high-end CRT monitors that used DVI. It avoids image distortion and electrical noise, mapping each pixel from the computer to a display pixel, using its native resolution. Most manufacturers include a DVI-I connector, allowing (via simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.
They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out plus composite video in and out), or 6 connectors (S-Video in and out, component YPBPR out and composite in and out). ==== High-Definition Multimedia Interface (HDMI) ==== HDMI is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device ("the source device") to a compatible digital audio device, computer monitor, video projector, or digital television. HDMI is a digital replacement for existing analog video standards. HDMI supports copy protection through HDCP. ==== DisplayPort ==== DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data. The VESA specification is royalty-free. VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility to VGA and DVI by using adapter dongles enables consumers to use DisplayPort-fitted video sources without replacing existing display devices. Although DisplayPort offers much of the same functionality as HDMI with greater throughput, it is expected to complement the interface, not replace it. ==== USB-C ==== ==== Other types of connection systems ==== === Motherboard interfaces === Chronologically, connection systems between graphics card and motherboard were, mainly:
S-100 bus: Designed in 1974 as a part of the Altair 8800, it is the first industry-standard bus for the microcomputer industry.
ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It is an 8- or 16-bit bus clocked at 8 MHz.
NuBus: Used in Macintosh II, it is a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: Introduced in 1987 by IBM, it is a 32-bit bus clocked at 10 MHz.
EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It is a 32-bit bus clocked at 8.33 MHz.
VLB: An extension of ISA, it is a 32-bit bus clocked at 33 MHz. Also referred to as VESA.
PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It is a 64-bit bus clocked at 67 or 83 MHz.
USB: Although mostly used for miscellaneous devices, such as secondary storage devices or peripherals and toys, USB displays and display adapters exist. It was first used in 1996.
AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the width of the bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: Abbreviated as PCIe, it is a point-to-point interface released in 2004. In 2006, it provided a data-transfer rate that is double that of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification. This is the standard for most modern graphics cards.
The following table is a comparison between features of some interfaces listed above. == See also == List of computer hardware List of graphics card manufacturers List of computer display standards – a detailed list of standards like SVGA, WXGA, WUXGA, etc. AMD (ATI), Nvidia – quasi duopoly of 3D chip GPU and graphics card designers GeForce, Radeon, Intel Arc – examples of graphics card series GPGPU (i.e.: CUDA, AMD FireStream) Framebuffer – the computer memory used to store a screen image Capture card – the inverse of a graphics card == References == == Sources == Mueller, Scott (2005) Upgrading and Repairing PCs. 16th edition.
Que Publishing. ISBN 0-7897-3173-8 == External links == How Graphics Cards Work at HowStuffWorks
Wikipedia/Graphics_cards
In 3D computer graphics, a wire-frame model (also spelled wireframe model) is a visual representation of a three-dimensional (3D) physical object. It is based on a polygon mesh or a volumetric mesh, created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using (straight) lines or curves. The object is projected into screen space and rendered by drawing lines at the location of each edge. The term "wire frame" comes from designers using metal wire to represent the three-dimensional shape of solid objects. 3D wireframe computer models allow for the construction and manipulation of solids and solid surfaces. 3D solid modeling efficiently draws higher quality representations of solids than conventional line drawing. Using a wire-frame model allows for the visualization of the underlying design structure of a 3D model. Traditional two-dimensional views and drawings/renderings can be created by the appropriate rotation of the object, and the selection of hidden-line removal via cutting planes. Since wire-frame renderings are relatively simple and fast to calculate, they are often used in cases where a relatively high screen frame rate is needed (for instance, when working with a particularly complex 3D model, or in real-time systems that model exterior phenomena). When greater graphical detail is desired, surface textures can be added automatically after the completion of the initial rendering of the wire frame. This allows a designer to quickly review solids, or rotate objects to different views without the long delays associated with more realistic rendering, or even the processing of faces and simple flat shading. The wire frame format is also well-suited and widely used in programming tool paths for direct numerical control (DNC) machine tools. Hand-drawn wire-frame-like illustrations date back as far as the Italian Renaissance. 
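The pipeline described above — list the vertices, list the edges, project each vertex into screen space, and draw one line per edge — can be sketched as follows; the cube geometry and the perspective setup (eye at an assumed distance d on the z axis) are invented for illustration:

```python
# Vertex table: 3D coordinates of a unit cube's corners, relative to the origin.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Edge table: index pairs; two cube corners share an edge when they
# differ in exactly one coordinate.
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]

def project(v, d=4.0):
    """Perspective projection onto the z = 0 screen plane, with the eye
    on the z axis at an assumed distance d."""
    x, y, z = v
    s = d / (d + z)
    return (x * s, y * s)

# "Rendering" here is just one 2D line segment per entry in the edge table.
segments = [(project(vertices[i]), project(vertices[j])) for i, j in edges]
print(len(vertices), len(edges))  # 8 12
```

Because only the vertex and edge tables are stored, no face or shading information is available, which is exactly why this representation is so fast to compute.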
Wire-frame models were also used extensively in video games to represent 3D objects during the 1980s and early 1990s, when "properly" filled 3D objects would have been too complex to calculate and draw with the computers of the time. Wire-frame models are also used as the input for computer-aided manufacturing (CAM). There are three main types of 3D computer-aided design (CAD) models; wire frame is the most abstract and least realistic. The other types are surface and solid. The wire-frame method of modelling consists of only lines and curves that connect the points or vertices and thereby define the edges of an object. == Simple example of wireframe model == An object is specified by two tables: (1) a vertex table and (2) an edge table. The vertex table consists of three-dimensional coordinate values for each vertex with reference to the origin. The edge table specifies the start and end vertices for each edge. A naive interpretation could create a wire-frame representation by simply drawing straight lines between the screen coordinates of the appropriate vertices using the edge list. Unlike representations designed for more detailed rendering, face information is not specified (it must be calculated if required for solid rendering). Appropriate calculations have to be performed to transform the 3D coordinates of the vertices into 2D screen coordinates. == See also == Animation 3D computer graphics Computer animation Computer-generated imagery (CGI) Mockup Polygon mesh Vector graphics Virtual cinematography == References == Principles of Engineering Graphics by Maxwell Macmillan International Editions ASME Engineer's Data Book by Clifford Matthews Engineering Drawing by N.D. Bhatt Texturing and Modeling by Davis S. Ebert 3D Computer Graphics by Alan Watt
Wikipedia/Wire_frame_model
In computer graphics and computer vision, image-based modeling and rendering (IBMR) methods rely on a set of two-dimensional images of a scene to generate a three-dimensional model and then render some novel views of this scene. The traditional approach of computer graphics is to create a geometric model in 3D and then reproject it onto a two-dimensional image. Computer vision, conversely, is mostly focused on detecting, grouping, and extracting features (edges, faces, etc.) present in a given picture and then trying to interpret them as three-dimensional clues. Image-based modeling and rendering allows the use of multiple two-dimensional images to directly generate novel two-dimensional images, skipping the manual modeling stage. == Light modeling == Instead of considering only the physical model of a solid, IBMR methods usually focus more on light modeling. The fundamental concept behind IBMR is the plenoptic illumination function which is a parametrisation of the light field. The plenoptic function describes the light rays contained in a given volume. It can be represented with seven dimensions: a ray is defined by its position ( x , y , z ) {\displaystyle (x,y,z)} , its orientation ( θ , ϕ ) {\displaystyle (\theta ,\phi )} , its wavelength ( λ ) {\displaystyle (\lambda )} and its time ( t ) {\displaystyle (t)} : P ( x , y , z , θ , ϕ , λ , t ) {\displaystyle P(x,y,z,\theta ,\phi ,\lambda ,t)} . IBMR methods try to approximate the plenoptic function to render a novel set of two-dimensional images from another. Given the high dimensionality of this function, practical methods place constraints on the parameters in order to reduce this number (typically to 2 to 4).
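To make the dimensionality reduction concrete, the toy sketch below fixes wavelength and time and assumes radiance is constant along each ray, leaving a 4D two-plane parameterization of the light field sampled on a discrete grid; the scene function and grid resolution are invented for illustration:

```python
import math

def toy_radiance(u, v, s, t):
    """Invented stand-in for the radiance along the ray that crosses
    plane 1 at (u, v) and plane 2 at (s, t)."""
    return math.sin(u + 2 * v) * math.cos(s - t)

# "Capture": sample the 4D light field on a coarse regular grid.
N = 8
grid = {(i, j, k, l): toy_radiance(i / N, j / N, k / N, l / N)
        for i in range(N) for j in range(N)
        for k in range(N) for l in range(N)}

def render_ray(u, v, s, t):
    """Approximate the (reduced) plenoptic function for a novel ray by
    nearest-neighbor lookup in the sampled light field."""
    key = tuple(min(N - 1, round(c * N)) for c in (u, v, s, t))
    return grid[key]

# Querying exactly at a grid point reproduces the captured sample.
print(render_ray(0.25, 0.5, 0.5, 0.25) == toy_radiance(0.25, 0.5, 0.5, 0.25))  # True
```

A rendered novel view is then just a bundle of such ray queries, one per output pixel; practical systems replace the nearest-neighbor lookup with quadrilinear interpolation.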
== IBMR methods and algorithms == View morphing generates a transition between images Panoramic imaging renders panoramas using image mosaics of individual still images Lumigraph relies on a dense sampling of a scene Space carving generates a 3D model based on a photo-consistency check == See also == View synthesis 3D reconstruction Structure from motion == References == == External links == Quan, Long. Image-based modeling. Springer Science & Business Media, 2010. Ce Zhu; Shuai Li (2016). "Depth Image Based View Synthesis: New Insights and Perspectives on Hole Generation and Filling". IEEE Transactions on Broadcasting. 62 (1): 82–93. doi:10.1109/TBC.2015.2475697. S2CID 19100077. Mansi Sharma; Santanu Chaudhury; Brejesh Lall; M.S. Venkatesh (2014). "A flexible architecture for multi-view 3DTV based on uncalibrated cameras". Journal of Visual Communication and Image Representation. 25 (4): 599–621. doi:10.1016/j.jvcir.2013.07.012. Mansi Sharma; Santanu Chaudhury; Brejesh Lall (2014). Kinect-Variety Fusion: A Novel Hybrid Approach for Artifacts-Free 3DTV Content Generation. In 22nd International Conference on Pattern Recognition (ICPR), Stockholm, 2014. doi:10.1109/ICPR.2014.395. Mansi Sharma; Santanu Chaudhury; Brejesh Lall (2012). 3DTV view generation with virtual pan/tilt/zoom functionality. Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing, ACM New York, NY, USA. doi:10.1145/2425333.2425374.
Wikipedia/Image-based_modeling_and_rendering
In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsistent, but three major types can be distinguished: A generative model is a statistical model of the joint probability distribution P ( X , Y ) {\displaystyle P(X,Y)} on a given observable variable X and target variable Y; A generative model can be used to "generate" random instances (outcomes) of an observation x. A discriminative model is a model of the conditional probability P ( Y ∣ X = x ) {\displaystyle P(Y\mid X=x)} of the target Y, given an observation x. It can be used to "discriminate" the value of the target variable Y, given an observation x. Classifiers computed without using a probability model are also referred to loosely as "discriminative". The distinction between these last two classes is not consistently made; Jebara (2004) refers to these three classes as generative learning, conditional learning, and discriminative learning, but Ng & Jordan (2002) only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model. Standard examples of each, all of which are linear classifiers, are: generative classifiers: naive Bayes classifier and linear discriminant analysis discriminative model: logistic regression In application to classification, one wishes to go from an observation x to a label y (or probability distribution on labels). 
One can compute this directly, without using a probability distribution (distribution-free classifier); one can estimate the probability of a label given an observation, P ( Y | X = x ) {\displaystyle P(Y|X=x)} (discriminative model), and base classification on that; or one can estimate the joint distribution P ( X , Y ) {\displaystyle P(X,Y)} (generative model), from that compute the conditional probability P ( Y | X = x ) {\displaystyle P(Y|X=x)} , and then base classification on that. These are increasingly indirect, but increasingly probabilistic, allowing more domain knowledge and probability theory to be applied. In practice different approaches are used, depending on the particular problem, and hybrids can combine strengths of multiple approaches. == Definition == An alternative division defines these symmetrically as: a generative model is a model of the conditional probability of the observable X, given a target y, symbolically, P ( X ∣ Y = y ) {\displaystyle P(X\mid Y=y)} a discriminative model is a model of the conditional probability of the target Y, given an observation x, symbolically, P ( Y ∣ X = x ) {\displaystyle P(Y\mid X=x)} Regardless of precise definition, the terminology is constitutive because a generative model can be used to "generate" random instances (outcomes), either of an observation and target ( x , y ) {\displaystyle (x,y)} , or of an observation x given a target value y, while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variable Y, given an observation x. The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. (The term "discriminative classifier" becomes a pleonasm when "discrimination" is equivalent to "classification".)
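The "generate" direction can be illustrated by ancestral sampling: draw y from P(Y), then x from P(X given Y = y). The distributions below are made up for illustration:

```python
import random

random.seed(0)  # deterministic for illustration

# A generative model specified as P(Y) and P(X | Y = y); numbers are made up.
p_y = {0: 0.6, 1: 0.4}
p_x_given_y = {0: {"a": 0.9, "b": 0.1},
               1: {"a": 0.2, "b": 0.8}}

def sample_pair():
    """Ancestral sampling: draw y ~ P(Y), then x ~ P(X | Y = y)."""
    y = random.choices(list(p_y), weights=list(p_y.values()))[0]
    x = random.choices(list(p_x_given_y[y]),
                       weights=list(p_x_given_y[y].values()))[0]
    return x, y

samples = [sample_pair() for _ in range(10_000)]
frac_y0 = sum(1 for _, y in samples if y == 0) / len(samples)
print(frac_y0)  # close to 0.6
```

A discriminative model of P(Y | X = x) alone could not produce such (x, y) pairs, since it says nothing about how observations x are distributed.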
The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers. === Relationships between models === In application to classification, the observable X is frequently a continuous variable, the target Y is generally a discrete variable consisting of a finite set of labels, and the conditional probability P ( Y ∣ X ) {\displaystyle P(Y\mid X)} can also be interpreted as a (non-deterministic) target function f : X → Y {\displaystyle f\colon X\to Y} , considering X as inputs and Y as outputs. Given a finite set of labels, the two definitions of "generative model" are closely related. A model of the conditional distribution P ( X ∣ Y = y ) {\displaystyle P(X\mid Y=y)} is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values P ( Y ) {\displaystyle P(Y)} , together with the distribution of observations given a label, P ( X ∣ Y ) {\displaystyle P(X\mid Y)} ; symbolically, P ( X , Y ) = P ( X ∣ Y ) P ( Y ) . {\displaystyle P(X,Y)=P(X\mid Y)P(Y).} Thus, while a model of the joint probability distribution is more informative than a model of the distribution of label (but without their relative frequencies), it is a relatively small step, hence these are not always distinguished. 
Given a model of the joint distribution, P ( X , Y ) {\displaystyle P(X,Y)} , the distribution of the individual variables can be computed as the marginal distributions P ( X ) = ∑ y P ( X , Y = y ) {\displaystyle P(X)=\sum _{y}P(X,Y=y)} and P ( Y ) = ∫ x P ( Y , X = x ) {\displaystyle P(Y)=\int _{x}P(Y,X=x)} (considering X as continuous, hence integrating over it, and Y as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: P ( X ∣ Y ) = P ( X , Y ) / P ( Y ) {\displaystyle P(X\mid Y)=P(X,Y)/P(Y)} and P ( Y ∣ X ) = P ( X , Y ) / P ( X ) {\displaystyle P(Y\mid X)=P(X,Y)/P(X)} . Given a model of one conditional probability, and estimated probability distributions for the variables X and Y, denoted P ( X ) {\displaystyle P(X)} and P ( Y ) {\displaystyle P(Y)} , one can estimate the opposite conditional probability using Bayes' rule: P ( X ∣ Y ) P ( Y ) = P ( Y ∣ X ) P ( X ) . {\displaystyle P(X\mid Y)P(Y)=P(Y\mid X)P(X).} For example, given a generative model for P ( X ∣ Y ) {\displaystyle P(X\mid Y)} , one can estimate: P ( Y ∣ X ) = P ( X ∣ Y ) P ( Y ) / P ( X ) , {\displaystyle P(Y\mid X)=P(X\mid Y)P(Y)/P(X),} and given a discriminative model for P ( Y ∣ X ) {\displaystyle P(Y\mid X)} , one can estimate: P ( X ∣ Y ) = P ( Y ∣ X ) P ( X ) / P ( Y ) . {\displaystyle P(X\mid Y)=P(Y\mid X)P(X)/P(Y).} Note that Bayes' rule (computing one conditional probability in terms of the other) and the definition of conditional probability (computing conditional probability in terms of the joint distribution) are frequently conflated as well. == Contrast with discriminative classifiers == A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal. 
So, discriminative algorithms try to learn p ( y | x ) {\displaystyle p(y|x)} directly from the data and then try to classify data. On the other hand, generative algorithms try to learn p ( x , y ) {\displaystyle p(x,y)} which can be transformed into p ( y | x ) {\displaystyle p(y|x)} later to classify the data. One of the advantages of generative algorithms is that you can use p ( x , y ) {\displaystyle p(x,y)} to generate new data similar to existing data. On the other hand, it has been proved that some discriminative algorithms give better performance than some generative algorithms in classification tasks. Despite the fact that discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables. But in general, they don't necessarily perform better than generative models at classification and regression tasks. The two classes are seen as complementary or as different views of the same procedure. == Deep generative models == With the rise of deep learning, a new family of methods, called deep generative models (DGMs), is formed through the combination of generative models and deep neural networks. An increase in the scale of the neural networks is typically accompanied by an increase in the scale of the training data, both of which are required for good performance. Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and auto-regressive models. Recently, there has been a trend to build very large deep generative models. For example, GPT-3, and its precursor GPT-2, are auto-regressive neural language models that contain billions of parameters, BigGAN and VQ-VAE which are used for image generation that can have hundreds of millions of parameters, and Jukebox is a very large generative model for musical audio that contains billions of parameters. 
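The transformation of a joint distribution into marginals and conditionals described above can be checked numerically on a small made-up joint table:

```python
# A made-up joint distribution P(X, Y) over X in {0, 1, 2} and Y in {0, 1}.
joint = {(0, 0): 0.10, (0, 1): 0.20,
         (1, 0): 0.25, (1, 1): 0.05,
         (2, 0): 0.15, (2, 1): 0.25}

# Marginal distributions: sum the joint over the other variable.
p_x = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1, 2)}
p_y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

# Conditional distributions, from the definition of conditional probability.
p_y_given_x = {(x, y): p / p_x[x] for (x, y), p in joint.items()}  # P(Y=y | X=x)
p_x_given_y = {(x, y): p / p_y[y] for (x, y), p in joint.items()}  # P(X=x | Y=y)

# Bayes' rule: P(X|Y) P(Y) = P(Y|X) P(X); both sides equal the joint P(x, y).
for (x, y), p in joint.items():
    assert abs(p_x_given_y[(x, y)] * p_y[y] - p_y_given_x[(x, y)] * p_x[x]) < 1e-12

print(round(p_x[0], 2), round(p_y[1], 2))  # 0.3 0.5
```

This is the "generative" route to classification in miniature: given the joint table, the conditional p_y_given_x needed for prediction falls out by division.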
== Types == === Generative models === Types of generative models are: Gaussian mixture model (and other types of mixture model) Hidden Markov model Probabilistic context-free grammar Bayesian network (e.g. Naive bayes, Autoregressive model) Averaged one-dependence estimators Latent Dirichlet allocation Boltzmann machine (e.g. Restricted Boltzmann machine, Deep belief network) Variational autoencoder Generative adversarial network Flow-based generative model Energy based model Diffusion model If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model (see below), although application-specific details will ultimately dictate which approach is most suitable in any particular case. 
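As a sketch of fitting a generative model by maximizing the data likelihood: for one Gaussian per class, the maximum-likelihood estimates have closed forms (sample mean and variance), and classification then follows Bayes' rule. The data below are made up for illustration:

```python
import math

def fit_gaussian(xs):
    """Closed-form maximum-likelihood estimates for a 1-D Gaussian."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def log_density(x, mu, var):
    """Log of the Gaussian density N(x; mu, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Made-up training data: observations grouped by class label.
data = {0: [1.0, 1.2, 0.8, 1.1], 1: [3.0, 2.8, 3.1, 3.3]}
params = {y: fit_gaussian(xs) for y, xs in data.items()}
n = sum(len(xs) for xs in data.values())
log_prior = {y: math.log(len(xs) / n) for y, xs in data.items()}

def classify(x):
    """Generative classification: argmax over y of log P(x | y) + log P(y)."""
    return max(params, key=lambda y: log_density(x, *params[y]) + log_prior[y])

print(classify(1.0), classify(3.0))  # 0 1
```

If the Gaussian assumption is badly wrong for the true data-generating process, this is exactly the situation described above where modeling the conditional directly can be more accurate.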
=== Discriminative models === k-nearest neighbors algorithm Logistic regression Support Vector Machines Decision Tree Learning Random Forest Maximum-entropy Markov models Conditional random fields == Examples == === Simple example === Suppose the input data is x ∈ { 1 , 2 } {\displaystyle x\in \{1,2\}} , the set of labels for x {\displaystyle x} is y ∈ { 0 , 1 } {\displaystyle y\in \{0,1\}} , and there are the following 4 data points: ( x , y ) = { ( 1 , 0 ) , ( 1 , 1 ) , ( 2 , 0 ) , ( 2 , 1 ) } {\displaystyle (x,y)=\{(1,0),(1,1),(2,0),(2,1)\}} For the above data, estimating the joint probability distribution p ( x , y ) {\displaystyle p(x,y)} from the empirical measure gives p ( x , y ) = 1 / 4 {\displaystyle p(x,y)=1/4} for each of the four points, since each occurs exactly once, while the conditional distribution p ( y | x ) {\displaystyle p(y|x)} is p ( y | x ) = 1 / 2 {\displaystyle p(y|x)=1/2} for every pair, since each value of x {\displaystyle x} appears with both labels equally often. === Text generation === Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good"; which is not proper English but which will increasingly approximate it as the table is moved from word pairs to word triplets etc. == See also == Discriminative model Graphical model == Notes == == References == == External links ==
Wikipedia/Generative_model
Computer graphics are graphics created by computers and, more generally, the representation and manipulation of pictorial data by a computer. Computer graphics may also refer to: 2D computer graphics, the application of computer graphics to generating 2D imagery 3D computer graphics, the application of computer graphics to generating 3D imagery Computer animation, the art of creating moving images via the use of computers Computer-generated imagery, the application of the field of computer graphics to special effects in films, television programs, commercials, simulators and simulation generally, and printed media Computer graphics (computer science), a subfield of computer science studying mathematical and computational representations of visual objects Computer Graphics (publication), the journal by ACM SIGGRAPH Computer Graphics: Principles and Practice, the classic textbook by James D. Foley, Andries van Dam, Steven K. Feiner and John Hughes Computer Graphic (advertisement), a controversial television advertisement for Pot Noodle == See also == Display device, the hardware used to present computer graphics Graphics hardware, the computer hardware used to accelerate the creation of images
Wikipedia/Computer_graphics_(disambiguation)
A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to displays that are not flat screens, such as volumetric displays, because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center. == GUI and interaction design == Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold.
The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin or theme at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool. A GUI may be designed for the requirements of a vertical market as application-specific GUIs. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS). Cell phones and handheld game systems also employ application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations. == Examples == Sample graphical environments == Components == A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information. A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.
The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, as well as the positioning of the pointer. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism. Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon. Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS property and parameter display: inline-block;. A waterfall layout, found on Imgur and TweetDeck, with fixed width but variable height per item, is usually implemented by specifying column-width:. == Post-WIMP interface == Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.
As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse. == Interaction == Human interface devices for efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts; pointing devices for cursor (or rather pointer) control, such as a mouse, pointing stick, touchpad, trackball or joystick; virtual keyboards; and head-up displays (translucent information devices at eye level). There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs. == History == === Early efforts === Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos".) In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system. The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes.
The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production. The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the ideas from the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which presented the concept of menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM, and Commodore Amiga in 1985. Visi On was released in 1983 for the IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows. Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms. === Popularization === GUIs were a hot topic in the early 1980s. 
The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUI's advantages, many reviewers questioned the value of the entire concept, citing hardware limits and problems in finding compatible software. In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems, and becoming a signature representation of Apple products. In 1985, Commodore released the Amiga 1000, along with Workbench and Kickstart 1.0 (which contained Intuition). This interface ran as a separate task, meaning it was very responsive and, unlike other GUIs of the time, it didn't freeze up when a program was busy. Additionally, it was the first GUI to introduce something resembling Virtual Desktops. Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system. In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices. The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices. 
== Comparison to other interfaces == People said it's more of a right-brain machine and all that—I think there is some truth to that. I think there is something to dealing in a graphical interface and a more kinetic interface—you're really moving information around, you're seeing it move as though it had substance. And you don't see that on a PC. The PC is very much of a conceptual machine; you move information around the way you move formulas, elements on either side of an equation. I think there's a difference. === Command-line interfaces === Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions. Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands. GUIs can be made quite hard when dialogs are buried deep in a system or moved about to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script. WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables. 
Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. === GUI wrappers === GUI wrappers find a way around the command-line interface versions (CLI) of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command-line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters, through graphical icons and visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script. == Three-dimensional graphical user interface == Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (ex. Windows Aero, and Aqua (macOS)) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). 
In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox, and GopherVR. Zooming (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006, Hillcrest Labs introduced the first ZUI for television. Other innovations include the menus on the PlayStation 2; the menus on the Xbox; Sun's Project Looking Glass; Metisse, which was similar to Project Looking Glass; BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents; Croquet OS, which is built for collaboration; and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements. === In science fiction === 3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use. In prose fiction, 3D GUIs have been portrayed as immersible environments, coined as William Gibson's "cyberspace" and Neal Stephenson's "metaverse" and "avatars". The 1993 American film Jurassic Park features Silicon Graphics' 3D file manager File System Navigator, a real-life file manager for Unix operating systems. The film Minority Report has scenes of police officers using specialized 3D data systems. 
== See also == == Notes == == References == == External links == Evolution of Graphical User Interface in last 50 years by Raj Lal The men who really invented the GUI by Clive Akass Graphical User Interface Gallery, screenshots of various GUIs Marcin Wichary's GUIdebook, Graphical User Interface gallery: over 5500 screenshots of GUI, application and icon history The Real History of the GUI by Mike Tuck In The Beginning Was The Command Line by Neal Stephenson 3D Graphical User Interfaces (PDF) by Farid BenHajji and Erik Dybner, Department of Computer and Systems Sciences, Stockholm University Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data). Including a Thermodinamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis – University of Alicante (Reyes-Labarta et al. 2015–18) Innovative Ways to Use Information Visualization across a Variety of Fields Archived 2024-06-20 at the Wayback Machine by Ryan Erwin Digital marketing specialist (CLLAX) (2022-05)
Wikipedia/Graphical_user_interfaces
Isometric video game graphics are graphics employed in video games and pixel art that use a parallel projection, but which angle the viewpoint to reveal facets of the environment that would otherwise not be visible from a top-down perspective or side view, thereby producing a three-dimensional (3D) effect. Despite the name, isometric computer graphics are not necessarily truly isometric—i.e., the x, y, and z axes are not necessarily oriented 120° to each other. Instead, a variety of angles are used, with dimetric projection and a 2:1 pixel ratio being the most common. The terms "3/4 perspective", "3/4 view", "2.5D", and "pseudo 3D" are also sometimes used, although these terms can bear slightly different meanings in other contexts. Once common, isometric projection became less so with the advent of more powerful 3D graphics systems, and as video games began to focus more on action and individual characters. However, video games using isometric projection—especially computer role-playing games—have seen a resurgence in recent years within the indie gaming scene. == Overview == === Advantages === In video game development and pixel art, the technique has become popular because of the ease with which 2D sprite- and tile-based graphics can be made to represent 3D gaming environments. Because parallel projected objects do not change in size as they move about an area, there is no need for the computer to scale sprites or do the complex calculations necessary to simulate visual perspective. This allowed 8-bit and 16-bit game systems (and, more recently, handheld and mobile systems) to portray large game areas quickly and easily. And, while the depth confusion problems of parallel projection can sometimes be a problem, good game and level design can alleviate this. 
Though not limited strictly to isometric video game graphics, pre-rendered 2D graphics can possess a higher fidelity and use more advanced graphical techniques than may be possible on commonly available computer hardware, even with 3D hardware acceleration. Similarly to modern CGI used in motion pictures, graphics can be rendered one time on a powerful supercomputer or render farm, and then displayed many times on less powerful consumer hardware, such as television sets, tablet computers and smartphones. This means that static pre-rendered isometric graphics often look better compared to their contemporary real-time-rendered counterparts, and may age better over time compared to their peers. However, this advantage may be less pronounced today than it was in the past, as developments in graphical technology equalize or produce diminishing returns, and current levels of graphical fidelity become "good enough" for many people. There are also gameplay advantages to using an isometric or near-isometric perspective in video games. For instance, compared to a purely top-down game, they add a third dimension, opening up new avenues for aiming and platforming. Compared to a first- or third-person video game, they allow a player to more easily field and control a large number of units, such as a full party of characters in a computer role-playing game, or an army of minions in a real-time strategy game. Further, they may alleviate situations where a player may become distracted from a game's core mechanics by having to constantly manage an unwieldy 3D camera. That is, the player can focus on playing the game itself, and not on manipulating the game's camera. In the present day, rather than being purely a source of nostalgia, the revival of isometric projection is the result of tangible design benefits.
=== Disadvantages === Some disadvantages of pre-rendered isometric graphics are that, as display resolutions and display aspect ratios continue to evolve, static 2D images need to be re-rendered each time in order to keep pace, or potentially suffer from the effects of pixelation and require anti-aliasing. Re-rendering a game's graphics is not always possible, however; as was the case in 2012, when Beamdog remade BioWare's Baldur's Gate (1998). Beamdog were lacking the original developers' creative art assets (the original data was lost in a flood) and opted for simple 2D graphics scaling with "smoothing", without re-rendering the game's sprites. The results were a certain "fuzziness", or lack of "crispness", compared to the original game's graphics. This does not affect real-time rendered polygonal isometric video games, however, as changing their display resolutions or aspect ratios is trivial, in comparison. === Differences from "true" isometric projection === The projection commonly used in video games deviates slightly from "true" isometric due to the limitations of raster graphics. Lines in the x and y directions would not follow a neat pixel pattern if drawn in the required 30° to the horizontal. While modern computers can eliminate this problem using anti-aliasing, earlier computer graphics did not support enough colors or possess enough CPU power to accomplish this. Instead, a 2:1 pixel pattern ratio would be used to draw the x and y axis lines, resulting in these axes following a ≈26.565° (arctan(1/2)) angle to the horizontal. (Game systems that do not use square pixels could, however, yield different angles, including "true" isometric.) Therefore, this form of projection is more accurately described as a variation of dimetric projection, since only two of the three angles between the axes are equal to each other, i.e., (≈116.565°, ≈116.565°, ≈126.870°). 
== History of isometric video games == Some three-dimensional games were released as early as the 1970s, but the first video games to use the distinct visual style of isometric projection in the meaning described above were arcade games in the early 1980s. === 1980s === The use of isometric graphics in video games began with Data East's arcade game Treasure Island, released in Japan in September 1981, but it was not released internationally until June 1982. The first isometric game to be released internationally was Sega's Zaxxon, which was significantly more popular and influential; it was released in Japan in December 1981 and internationally in April 1982. Zaxxon is an isometric shooter where the player flies a space plane through scrolling levels. It is also one of the first video games to display shadows. Another early isometric game is Q*bert. Warren Davis and Jeff Lee began programming the concept around April 1982. The game's production began in the summer and then released in October or November 1982. Q*bert shows a static pyramid in an isometric perspective, with the player controlling a character which can jump around on the pyramid. In February 1983, the isometric platform game arcade game Congo Bongo was released, running on the same hardware as Zaxxon. It allows the player character to traverse non-scrolling isometric levels, including three-dimensional climbing and falling. The same is possible in the arcade title Marble Madness, released in 1984. In 1983, isometric games were no longer exclusive to the arcade market and also entered home computers, with the release of Blue Max for the Atari 8-bit computers and Ant Attack for the ZX Spectrum. In Ant Attack, the player can move forward in any direction of the scrolling game, offering complete free movement rather than fixed to one axis as with Zaxxon. The views can also be changed around a 90 degrees axis. 
The ZX Spectrum magazine, Crash, consequently awarded it 100% in the graphics category for this new technique, known as "Soft Solid 3-D". A year later, the ZX Spectrum game Knight Lore was released. It was generally regarded as a revolutionary title that defined the subsequent genre of isometric adventure games. Following Knight Lore, many isometric titles were seen on home computers – to an extent that it once was regarded as being the second most cloned piece of software after WordStar, according to researcher Jan Krikke. Other examples out of those were Highway Encounter (1985), Batman (1986), Head Over Heels (1987) and La Abadía del Crimen (1987). Isometric perspective was not limited to action and adventure games. For example, the 1989 strategy game Populous uses isometric perspective. === 1990s === Throughout the 1990s, a number of successful computer games used a fixed isometric perspective, such as A-Train III (1990), Syndicate (1993), SimCity 2000 (1994), Civilization II (1996), X-COM (1994), and Diablo (1996). But with the advent of 3D acceleration on personal computers and gaming consoles, games previously using a 2D perspective generally started switching to true 3D (and perspective projection) instead. This can be seen in the successors to the above games: for instance SimCity (2013), Civilization VI (2016), XCOM: Enemy Unknown (2012) and Diablo III (2012) all use 3D polygonal graphics; and while Diablo II (2000) used fixed-perspective 2D perspective like its predecessor, it optionally allowed for perspective scaling of the sprites in the distance to lend it a "pseudo-3D" appearance. Also during the 1990s, isometric graphics began being used for Japanese role-playing video games (JRPGs) on console systems, particularly tactical role-playing games, many of which still use isometric graphics today. 
Examples include Front Mission (1995), Tactics Ogre (1995) and Final Fantasy Tactics (1997)—the latter of which used 3D graphics to create an environment where the player could freely rotate the camera. Other titles such as Vandal Hearts (1996) and Breath of Fire III (1997) carefully emulated an isometric or parallel view, but actually used perspective projection. Isometric, or similar, perspectives became popular in role-playing video games, such as Fallout and Baldur's Gate. In some cases, these role-playing games became defined by their isometric perspective, which allows for larger-scale battles. === 2010s === Isometric projection has seen continued relevance in the new millennium with the release of several newly crowdfunded role-playing games on Kickstarter. These include the Shadowrun Returns series (2013–2015) by Harebrained Schemes; the Pillars of Eternity series (2015–2018) and Tyranny (2016) by Obsidian Entertainment; and Torment: Tides of Numenera (2017) by inXile Entertainment. Both Obsidian Entertainment and inXile Entertainment have employed, or were founded by, former members of Black Isle Studios and Interplay Entertainment. Obsidian Entertainment in particular wanted to "bring back the look and feel of the Infinity Engine games like Baldur's Gate, Icewind Dale, and Planescape: Torment". Lastly, several pseudo-isometric 3D RPGs, such as Divinity: Original Sin (2014), Wasteland 2 (2014) and Dead State (2014), have been crowdfunded using Kickstarter. These titles differ from the above games, however, in that they use perspective projection instead of parallel projection. === Use of related projections and techniques === The term "isometric perspective" is frequently misapplied to any game with an—usually fixed—angled, overhead view that appears at first to be "isometric".
These include the aforementioned dimetrically projected video games; games that use trimetric projection, such as Fallout (1997) and SimCity 4 (2003); games that use oblique projection, such as Ultima Online (1997) and Divine Divinity (2002); and games that use a combination of perspective projection and a bird's eye view, such as Silent Storm (2003), Torchlight (2009) and Divinity: Original Sin (2014). Also, not all "isometric" video games rely solely on pre-rendered 2D sprites. There are, for instance, titles which use polygonal 3D graphics completely, but render their graphics using parallel projection instead of perspective projection, such as Syndicate Wars (1996), Dungeon Keeper (1997) and Depths of Peril (2007); games which use a combination of pre-rendered 2D backgrounds and real-time rendered 3D character models, such as The Temple of Elemental Evil (2003) and Torment: Tides of Numenera (2017); and games which combine real-time rendered 3D backgrounds with hand-drawn 2D character sprites, such as Final Fantasy Tactics (1997) and Disgaea: Hour of Darkness (2003). One advantage of top-down oblique projection over other near-isometric perspectives is that objects fit more snugly within non-overlapping square graphical tiles, thereby potentially eliminating the need for an additional Z-order in calculations, and requiring fewer pixels. == Mapping screen to world coordinates == One of the most common problems with programming games that use isometric (or more likely dimetric) projections is the ability to map between events that happen on the 2D plane of the screen and the actual location in the isometric space, called world space. A common example is picking the tile that lies right under the cursor when a user clicks. One such method is to use the same rotation matrices that originally produced the isometric view, in reverse, to turn a point in screen coordinates into a point that would lie on the game board surface before it was rotated.
Then, the world x and y values can be calculated by dividing by the tile width and height. Another way that is less computationally intensive, and can have good results if the method is called on every frame, rests on the assumption that a square board was rotated by 45 degrees and then squashed to be half its original height. A virtual grid is overlaid on the projection as shown on the diagram, with axes virtual-x and virtual-y. Clicking any tile on the central axis of the board, where (x, y) = (tileMapWidth / 2, y), will produce the same tile value for both world-x and world-y, which in this example is 3 (0 indexed). Selecting the tile that lies one position to the right on the virtual grid actually moves one tile less on the world-y and one tile more on the world-x. World-x is therefore calculated by taking the virtual-y and adding the virtual-x measured from the center of the board; likewise, world-y is calculated by taking the virtual-y and subtracting the virtual-x. These calculations measure from the central axis, as shown, so the results must be translated by half the board. For example, in the C programming language: This method might seem counterintuitive at first since the coordinates of a virtual grid are taken, rather than the original isometric world, and there is no one-to-one correspondence between virtual tiles and isometric tiles. A tile on the grid will contain more than one isometric tile, and depending on where it is clicked it should map to different coordinates. The key in this method is that the virtual coordinates are floating point numbers rather than integers. A virtual-x and y value can be (3.5, 3.5), which means the center of the third tile. In the diagram on the left, this falls within the 3rd tile on the y axis. When the virtual-x and virtual-y add up to 4, the world-x will also be 4.
== Examples == === Dimetric projection === === Oblique projection === === Perspective projection === == See also == Clipping Filmation engine Category:Video games with isometric graphics: listing of isometric video games Category:Video games with oblique graphics: listing of oblique video games Commons:Category:Isometric video game screenshots: gallery of isometric video game screenshots == References == == External links == The classic 8-bit isometric games that tried to break the mould at Eurogamer.com The Best-Looking Isometric Games at Kotaku.com The Best Isometric Video Games at Kotaku.com
Wikipedia/Isometric_video_game_graphics
Eurographics is a Europe-wide professional computer graphics association. The association supports its members in advancing the state of the art in computer graphics and related fields such as multimedia, scientific visualization and human–computer interaction. == Overview == Eurographics organizes many events and services, which are open to everyone. Eurographics has a broad membership, including researchers and developers, educators and industrialists, and users and providers of computer graphics hardware, software, and applications. Eurographics organizes events including the Eurographics Symposium on Rendering and High-Performance Graphics. Eurographics publishes Computer Graphics Forum, a quarterly journal, among other publications. == Conferences and symposiums == Annual Conference 3D Object Retrieval Computer Animation EuroVis EXPRESSIVE Geometry Processing Graphics and Cultural Heritage High-Performance Graphics Intelligent Cinematography and Editing Material Appearance Modeling Parallel Graphics and Visualization Rendering (EGSR) Urban Data Modeling and Visualization Virtual Environments Visual Computing in Biology and Medicine == Related organizations == ACM SIGGRAPH hosts SIGGRAPH, the world's largest computer graphics conference. Russian Computer Graphics Society hosts Graphicon, the former Soviet Union's largest computer graphics conference, in cooperation with Eurographics. == References == == External links == Eurographics website Eurographics Digital Library Eurographics 2019 conference website Eurographics 2018 conference website Eurographics 2017 conference website Eurographics 2016 conference website Eurographics 2015 conference website Eurographics 2014 conference website Eurographics 2013 conference website Eurographics 2012 conference website Ke-Sen Huang page contains a directory of Eurographics publications.
Wikipedia/Eurographics
A number of vector graphics editors exist for various platforms. Potential users will compare vector graphics editors based on factors such as availability for the user's platform, the software license, the feature set, the merits of the user interface (UI) and the focus of the program. Some programs are more suitable for artistic work while others are better for technical drawings. Another important factor is the application's support of various vector and bitmap image formats for import and export. The tables in this article compare general and technical information for a number of vector graphics editors. See the article on each editor for further information. This article is neither all-inclusive nor necessarily up-to-date. == Some editors in detail == Adobe Fireworks (formerly Macromedia Fireworks) is a vector editor with bitmap editing capabilities whose main purpose is the creation of graphics for the Web and screen. Fireworks supports the RGB color scheme and has no CMYK support. This means it is mostly used for screen design. The native Fireworks file format is editable PNG (FWPNG or PNG). Adobe Fireworks has a competitive price, but its features can seem limited in comparison with other products. It is easier to learn than other products and can produce complex vector artwork. The Fireworks editable PNG file format is not supported by other Adobe products. Fireworks can manage the PSD and AI file formats, which enables it to be integrated with other Adobe apps. Fireworks can also open FWPNG/PNG, PSD, AI, EPS, JPG, GIF, BMP, TIFF file formats, and save/export to FWPNG/PNG, PSD, AI (v.8), FXG (v.2.0), JPG, GIF, PDF, SWF and some others. Some support for exporting to SVG is available via a free Export extension. On May 6, 2013, Adobe announced that Fireworks would be phased out. Adobe Flash (formerly a Macromedia product) has straightforward vector editing tools that make it easy for designers and illustrators to use.
The most important of these tools are vector lines and fills with bitmap-like selectable areas, simple modification of curves via the "selection" tool, and editing of control points/handles through the "direct selection" tool. Flash uses ActionScript for OOP, and has full XML functionality through E4X support. Adobe FreeHand (formerly Macromedia FreeHand and Aldus FreeHand) is mainly used by professional graphic designers. FreeHand's functionality includes flexibility across a wide design environment, catering to the output needs of both traditional image reproduction methods and contemporary print and digital media with its page-layout capabilities and text attribute controls. Specific functions of FreeHand include a superior image-tracing operation for vector editing, page layout features within multiple-page documents, and embedding custom print settings (such as variable halftone-screen specifications within a single graphic) in each document, independent of auxiliary printer drivers. The application is considered better suited to designers with an artistic background than to those with a technical background. While marketed, FreeHand lacked the promotional backing, development and PR support given to other similar products. FreeHand was transferred to the classic print group after Macromedia was purchased by Adobe in 2005. On May 16, 2007, Adobe announced that no further updates to FreeHand would be developed but continues to sell FreeHand MX as a Macromedia product. FreeHand continues to run on Mac OS X Snow Leopard (using an Adobe fix) and on Windows 7. For macOS, Affinity Designer is able to open version 10 & MX FreeHand files. Adobe Illustrator is a commonly used editor because of Adobe's market dominance, but is more expensive than other similar products. It is developed consistently in line with other Adobe products and is best integrated with Adobe's Creative Suite packages.
The ai file format is proprietary, but some vector editors can open and save in that format. Illustrator imports over two dozen formats, including PSD, PDF and SVG, and exports AI, PDF, SVG, SVGZ, GIF, JPG, PNG, WBMP, and SWF. However, users must uncheck the "Preserve Illustrator Editing Capabilities" option if they want to generate interoperable SVG files. Affinity Designer by Serif Europe (the successor to their previous product, DrawPlus) is non-subscription-based software that is often described as an alternative to Adobe Illustrator. The application can open Portable Document Format (PDF), Adobe Photoshop, and Adobe Illustrator files, as well as export to those formats and to the Scalable Vector Graphics (SVG) and Encapsulated PostScript (EPS) formats. It also supports import from some Adobe FreeHand files (specifically versions 10 & MX). Apache OpenOffice Draw is the vector graphics editor of the Apache OpenOffice open source office suite. It supports many import and export file formats and is available for multiple desktop operating systems. Boxy SVG is a Chromium-based vector graphics editor for creating illustrations, as well as logos, icons, and other elements of graphic design. It is primarily focused on editing drawings in the SVG file format. The program is available as both a web app and a desktop application for Windows, macOS, ChromeOS, and Linux-based operating systems. Collabora Online Draw is the vector graphics editor of the Collabora Online open source office suite. It supports many import and export file formats, is accessible via any modern web browser, and also supports desktop editing features. Collabora Office, the enterprise-ready version of LibreOffice, is available for desktop and mobile operating systems. ConceptDraw PRO is a business diagramming tool and vector graphics editor available for both Windows and macOS. It supports multi-page documents, and includes an integrated presentation mode.
ConceptDraw PRO imports and exports several formats, including Microsoft Visio and Microsoft PowerPoint. Corel Designer (originally Micrografx Designer) is one of the earliest vector-based graphics editors for the Microsoft Windows platform. The product is mainly used for the creation of engineering drawings and is shipped with extensive libraries for the needs of engineers. It is also flexible enough for most vector graphics design applications. CorelDRAW is an editor used in the graphic design, sign making and fashion design industries. CorelDRAW is capable of limited interoperation by reading file formats from Adobe Illustrator. CorelDRAW has over 50 import and export filters, on-screen and dialog box editing and the ability to create multi-page documents. It can also generate TrueType and Type 1 fonts, although refined typographic control is better suited to a more specific application. Some other features of CorelDRAW include the creation and execution of VBA macros, viewing of colour separations in print preview mode and integrated professional imposing options. Dia is a free and open-source diagramming and vector graphics editor available for Windows, Linux and other Unix-based computer operating systems. Dia has a modular design and several shape packages for flowcharting, network diagrams and circuit diagrams. Its design was inspired by Microsoft Visio, although it uses a Single Document Interface similar to other GNOME software (such as GIMP). DrawPlus, first built for the Windows platform in 1993, has matured into a full-featured vector graphics editor for home and professional users. It is also available as a feature-limited free 'starter edition': DrawPlus SE. DrawPlus developer Serif Europe has now ceased its development in order to focus on its successor, Affinity Designer. Edraw Max is a cross-platform diagram software and vector graphics editor available for Windows, Mac and Linux. It supports many kinds of diagrams.
It imports and exports SVG, PDF, HTML, multi-page TIFF, Microsoft Visio and Microsoft PowerPoint formats. Embroidermodder is a free machine embroidery software tool that supports a variety of formats and allows the user to add custom modifications to their embroidery designs. Fatpaint is a free, light-weight, browser-based graphic design application with built-in vector drawing tools. It can be accessed through any browser with Flash 9 installed. Its integration with Zazzle makes it particularly suitable for people who want to create graphics for custom printed products such as T-shirts, mugs, iPhone cases, flyers and other promotional products. Figma is a collaborative web-based online vector graphics editor, used primarily for UX design and prototyping. GIMP, which works mainly with raster images, offers a limited set of features to create and record SVG files. It can also load and handle SVG files created with other software like Inkscape. Inkscape is a free and open-source vector editor with the primary native format being SVG. Inkscape is available for Linux, Windows, Mac OS X, and other Unix-based systems. Inkscape can import SVG, SVGZ, AI, PDF, JPEG, PNG, GIF (and other raster graphics formats), WMF, CDR (CorelDRAW), VSD (Visio) file formats and export SVG, SVGZ, PNG, PDF, PostScript, EPS, EPSi, LaTeX, HPGL, SIF (Synfig Animation Studio), HTML5 Canvas, FXG (Flash XML Graphics) and POVRay file formats. Some formats have additional support through Inkscape extensions, including PDF, EPS, Adobe Illustrator, Dia, Xfig, CGM, sK1 and Sketch. The predecessor of Inkscape was Sodipodi. Ipe lets users draw geometric objects such as polylines, arcs, spline curves and text. Ipe supports use of layers and multiple pages. It can paste bitmap images from the clipboard or import from JPEG or BMP, and, through conversion software, it can import PDF figures generated by other software.
It differentiates itself from similar programs by including advanced snapping tools and the ability to directly include LaTeX text and equations. Ipe is extensible by use of ipelets, which are plugins written in C++ or Lua. LibreOffice Draw is the vector graphics editor of the LibreOffice open source office suite. It supports many import and export file formats and is available for multiple desktop operating systems. The Document Foundation, with the help of others, is currently developing Android and online versions of the LibreOffice office suite, including Draw. Microsoft Expression Design is a commercial vector and bitmap graphics editor based on Creature House Expression, which was acquired by Microsoft in 2003. It was part of the Microsoft Expression Studio suite. Expression Design is discontinued, and is no longer available for download from Microsoft. It runs on Windows XP, Vista, 7 and 8, as well as on Windows 8.1 and 10, which were released after it was discontinued. Microsoft Visio is a diagramming, flow chart, floor plan and vector graphics editor available for Windows. It is commonly used by small and medium-sized businesses, and by Microsoft in their corporate documentation. OmniGraffle, by The Omni Group, is a vector graphics editor available for Macintosh. It is principally used for creating flow charts and other diagrams. OmniGraffle imports and exports several formats, including Microsoft Visio, SVG, and PDF. PhotoLine is mainly a raster graphics editor but also offers a comprehensive set of vector drawing tools including multiple paths per layer, layer groups, color management and full color space support including CMYK and Lab color spaces, and multipage documents. PhotoLine can import and export PDF and SVG files as well as all major bitmap formats. sK1 is a free and open-source cross-platform vector editor for Linux and Windows which is oriented toward "prepress ready" PostScript & PDF output.
The major sK1 features are CMYK colorspace support; CMYK support in Postscript; Cairo-based engine; color management; multiple document interface; Pango-based text engine; Universal CDR importer (7-X4 versions); native wxWidgets based user interface. sK1 can import postscript-based AI files, CDR, CDT, CCX, CDRX, CMX, XAR, PS, EPS, CGM, WMF, XFIG, SVG, SK, SK1, AFF, PLT, CPL, ASE, ACO, JCW, GPL, SOC, SKP file formats. It can export AI, SVG, SK, SK1, CGM, WMF, PDF, PS, PLT, CPL, ASE, ACO, JCW, GPL, SOC, SKP file formats. SaviDraw, by Silicon Beach Software, is a modern vector drawing program for Windows 10. It is available only from the Microsoft app store. It is designed to work well with touch screens - no functions require keyboard modifiers. It features a new way to draw vector curves (very different from the traditional Pen tool) and has voice-command shortcuts. Sketch is a commercial vector graphics application for macOS used primarily for user interface and web design. It offers features such as vector editing, prototyping, and collaboration tools. SVG-edit is a FOSS web-based, JavaScript-driven SVG editor that works in any modern browser. Synfig Studio (also known as Synfig) is a free and open-source 2D vector graphics and timeline-based computer animation program created by Robert Quattlebaum. Synfig is available for Linux, Windows, macOS. Synfig stores its animations in its own XML file format, SIF (uncompressed) or SIFZ (compressed) and can import SVG. VectorStyler by Numeric Path is a professional vector graphics app, currently in advanced beta, available for both macOS and Windows 10 systems. It offers a comprehensive set of vector drawing tools, vector-based brushes, shape and image effects, corner shapes, mesh and shape-based gradients, collision snapping, multi-page documents, and full color space support including CMYK. 
The application can open Portable Document Format (PDF), Scalable Vector Graphics (SVG), Adobe Illustrator, EPS and also Adobe Photoshop files, as well as export to those formats. WinFIG is a cross-platform shareware editor that uses the Xfig file format. Xara Photo & Graphic Designer and Designer Pro (formerly Xara Xtreme and Xtreme Pro) are vector graphics editors for Windows developed by Xara. Xara Photo & Graphic Designer has high usability compared to other similar products and has very fast rendering. Xara Photo & Graphic Designer (and the earlier product ArtWorks) was the first vector graphics software product to provide fully antialiased display, advanced gradient fill and transparency tools. The current version supports multi-page documents, and includes a capable integrated photo tool, making it an option for any sort of DTP work. The Pro version includes extra features such as Pantone and color separation support, as well as comprehensive web page design features. Xara Xtreme LX is a partially open source version of Xara Photo & Graphic Designer for Linux. Xfig is an open-source editor based on Xlib, started by Supoj Sutanthavibul in 1985 and maintained by various people. It includes a library of technical symbols. Its advantage is support for exporting TeX-friendly files containing code for LaTeX (pict2e and epic/eepic macro packages), PGF/TikZ, PSTricks, graphs and picture-drawing scripts, which allows inclusion of complicated graphics in various document formats (e.g. PDF).
== General information == This table gives basic general information about the different vector graphics editors: == Operating system support == This table lists the operating systems that different editors can run on without emulation: == Basic features == === Notes === == File format support == === Import === Notes == See also == Comparison of 3D computer graphics software Comparison of graphics file formats Raster graphics Comparison of raster-to-vector conversion software Comparison of raster graphics editors List of 2D graphics software Vector graphics == Notes ==
Wikipedia/Comparison_of_vector_graphics_editors
Visualization (or visualisation), also known as graphics visualization, is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics (and 3D computer graphics) may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. == Overview == The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles. Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics.
Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic, and special areas in the field, for example volume visualization. Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer-drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time. Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. The abstract visualizations show completely conceptual constructs in 2D or 3D. These generated shapes are completely arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality or make a digital construction of a real object directly from the scientific data. Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, very often originating in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools.
Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and the VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images. == Applications == === Scientific visualization === As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. Scientific visualization focuses on and emphasizes the representation of higher-order data using primarily graphics and animation techniques. It is a very important part of visualization, and perhaps the first, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common. === Data and information visualization === Data visualization is a related subcategory of visualization dealing with statistical graphics and geospatial data (as in thematic cartography) that is abstracted in schematic form. Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC and included Jock Mackinlay.
Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are the dynamics of visual representation and interactivity. Strong techniques enable the user to modify the visualization in real-time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question. === Educational visualization === Educational visualization uses a simulation to create an image of something so that it can be taught. This is very useful when teaching about a topic that is difficult to otherwise see, for example, atomic structure, because atoms are far too small to be studied easily without expensive and difficult-to-use scientific equipment. === Knowledge visualization === The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer and non-computer-based visualization methods complementarily. Thus properly designed visualization is an important part not only of data analysis but of the knowledge transfer process. Knowledge transfer may be significantly improved using hybrid designs, as it enhances information density but may decrease clarity as well. For example, visualization of a 3D scalar field may be implemented using iso-surfaces for field distribution and textures for the gradient of the field. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups.
Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and estimates in different fields by using various complementary visualizations. See also: picture dictionary, visual dictionary === Product visualization === Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawing and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD-drawings and models have several advantages over hand-made drawings such as the possibility of 3-D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming. === Visual communication === Visual communication is the communication of ideas through the visual display of information. Primarily associated with two dimensional images, it includes: alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability. === Visual analytics === Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. 
Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface". Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces. Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security. == Interactivity == Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient. For a visualization to be considered interactive it must satisfy two criteria: Human input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human, and Response time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task. One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it was remote), sized appropriately (where instead it was on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract). 
Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (e.g., telephone), video (e.g., a video-conference), or text (e.g., IRC) messages. === Human control of visualization === The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can: Pick some part of an existing visual representation; Locate a point of interest (which may not have an existing representation); Stroke a path; Choose an option from a list of options; Valuate by inputting a number; and Write by inputting text. All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills. These input actions can be used to control both the information being represented and the way that it is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering.
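A computational steering loop like the one just described can be sketched in a few lines of Python. This is a purely illustrative toy, not any particular system's API: the cooling simulation, the steer_simulation function, and the queued "user input" are all invented for the example.

```python
import queue

def steer_simulation(steps, inputs):
    """Toy computational-steering loop: a simulation advances while a
    viewer, watching its rendered output, adjusts a parameter mid-run."""
    temperature = 100.0
    cooling_rate = 0.10              # the parameter the viewer can steer
    history = []
    for step in range(steps):
        temperature *= (1.0 - cooling_rate)          # advance the simulation
        # "render" the current state as a crude text visualization
        frame = f"step {step} | {'#' * int(temperature / 5)} {temperature:.1f}"
        history.append((cooling_rate, frame))
        try:                         # apply any steering input immediately
            cooling_rate = inputs.get_nowait()
        except queue.Empty:
            pass
    return history

# The "viewer" slows the cooling after seeing the first frame.
user_inputs = queue.Queue()
user_inputs.put(0.01)
run = steer_simulation(6, user_inputs)
```

The essential point is the feedback loop: the rendering informs the viewer, and the viewer's input changes the still-running computation rather than a finished result.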
More frequently, the representation of the information is changed rather than the information itself. === Rapid response to human input === Experiments have shown that a delay of more than 20 ms between when input is provided and a visual representation is updated is noticeable by most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology. Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading, however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s, but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive to a person. The rapid response time required for interactive visualization is a difficult constraint to meet and there are several approaches that have been explored to provide people with rapid visual feedback based on their input. Some include: Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor.
This requires each computer to hold a copy of all the information to be rendered and increases bandwidth, but also increases latency. Also, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images plus the associated depth buffer can then be sent across the network and merged with the images from other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of the information. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively. Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing. Level-of-detail (LOD) rendering – where simplified representations of information are rendered to achieve a desired framerate while a person is providing input and then the full representation is used to generate a still image once the person is through manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower resolution version can easily be generated by skipping n points for each 1 point rendered. Subsampling can also be used to accelerate rendering techniques such as volume visualization that require more than twice the computations for an image twice the size. 
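The subsampling just described amounts to keeping every n-th point along each axis of the grid. A minimal pure-Python sketch (the field array below is a made-up stand-in for real gridded data such as a photo, an MRI slice, or finite difference simulation output):

```python
def subsample(grid, n):
    """Level-of-detail subsampling: keep one point for every n points
    along each axis of a topologically rectangular array."""
    return [row[::n] for row in grid[::n]]

# A 4x4 scalar field standing in for one slice of gridded data.
field = [[ 0,  1,  2,  3],
         [10, 11, 12, 13],
         [20, 21, 22, 23],
         [30, 31, 32, 33]]
coarse = subsample(field, 2)   # half resolution: [[0, 2], [20, 22]]
```

Rendering coarse instead of field touches only a quarter of the points; once interaction stops, the full-resolution field can be used to generate the final still image.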
By rendering a smaller image and then scaling the image to fill the requested screen space, much less time is required to render the same data. Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time. == See also == Graphical perception Spatial visualization ability Visual language == References == == Further reading == Battiti, Roberto; Mauro Brunato (2011). Reactive Business Intelligence. From Data to Models to Insight. Trento, Italy: Reactive Search Srl. ISBN 978-88-905795-0-9. Bederson, Benjamin B., and Ben Shneiderman. The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann, 2003, ISBN 1-55860-915-6. Cleveland, William S. (1993). Visualizing Data. Cleveland, William S. (1994). The Elements of Graphing Data. Charles D. Hansen, Chris Johnson. The Visualization Handbook, Academic Press (June 2004). Kravetz, Stephen A. and David Womble, eds. Introduction to Bioinformatics. Totowa, N.J.: Humana Press, 2003. Mackinlay, Jock D. (1999). Readings in information visualization: using vision to think. Card, S. K., Ben Shneiderman (eds.). Morgan Kaufmann Publishers Inc. pp. 686. ISBN 1-55860-533-9. Will Schroeder, Ken Martin, Bill Lorensen. The Visualization Toolkit, August 2004. Spence, Robert. Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, ISBN 0-13-206550-9. Edward R. Tufte (1992). The Visual Display of Quantitative Information. Edward R. Tufte (1990). Envisioning Information. Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Matthew Ward, Georges Grinstein, Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. (May 2010). Wilkinson, Leland.
The Grammar of Graphics, Springer ISBN 0-387-24544-8 == External links == National Institute of Standards and Technology Scientific Visualization Tutorials, Georgia Tech Scientific Visualization Studio (NASA) Visual-literacy.org, (e.g. Periodic Table of Visualization Methods) Conferences Many conferences occur where interactive visualization academic papers are presented and published. Amer. Soc. of Information Science and Technology (ASIS&T SIGVIS) Special Interest Group in Visualization Information and Sound ACM SIGCHI ACM SIGGRAPH ACM VRST Eurographics IEEE Visualization ACM Transactions on Graphics IEEE Transactions on Visualization and Computer Graphics
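The subsampling variant of level-of-detail rendering discussed earlier (keep one point, skip n, in each dimension of a rectangular array) can be sketched in a few lines. This is an illustrative sketch, not code from any particular visualization system; the grid contents are invented for the example:

```python
def subsample(grid, n):
    """Level-of-detail subsampling: keep one sample, then skip n samples,
    along each dimension of a topologically rectangular 2D array."""
    step = n + 1
    return [row[::step] for row in grid[::step]]

# A hypothetical 8x8 "image"; skipping one point per point kept
# yields a 4x4 lower-resolution version suitable for fast preview.
full = [[x + 8 * y for x in range(8)] for y in range(8)]
low = subsample(full, 1)
```

The full-resolution grid would be rendered once interaction stops, as described above.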
Wikipedia/Visualization_(graphic)
This is a list of models and meshes commonly used in 3D computer graphics for testing and demonstrating rendering algorithms and visual effects. Their use is important for comparing results, similar to the way standard test images are used in image processing. == Modeled == Designed using CAD software; sorted by year of modeling. == Scanned == Includes photogrammetric methods; sorted by year of scanning. == Gallery == == See also == Standard test image – Digital image used to test image algorithms A Computer Animated Hand – 1972 film by Edwin Catmull Sutherland's Volkswagen – 3D test model == References == == External links == Standard test models The Stanford 3D Scanning Repository hosted by Stanford University Large Geometric Models Archive hosted by the Georgia Institute of Technology Other repositories The Utah 3D Animation Repository, a small collection of animated 3D models Scene collection by the Physically Based Rendering Toolkit: a number of interesting scenes to render with global illumination MGF Example Scenes, a small collection of some indoor 3D scenes archive3D, a collection of 3D models 3DModels, a collection of vehicle 3D models 3DBar, a collection of free 3D models NASA 3D Models, NASA 3D models to use for educational or informational purposes VRML Models from ORC Incorporated, 3D models in VRML format 3dRender.com: Lighting Challenges, regularly held lighting challenges, complete with scene and models for each challenge MPI Informatics Building Model, a virtual reconstruction of the Max Planck Institute for Informatics building in Saarbrücken Princeton shape-based 3D model search engine Keenan's 3D Model Repository hosted by Carnegie Mellon University HeiCuBeDa Hilprecht – Heidelberg Cuneiform Benchmark Dataset for the Hilprecht Collection, a collection of almost 2,000 cuneiform tablets for bulk download acquired with a high-resolution 3D scanner.
Available under a CC BY license and citable via digital object identifiers. Datasets cleaned using the GigaMesh Software Framework. HeiCu3Da Hilprecht – Heidelberg Cuneiform 3D Database – Hilprecht Collection, a browsable version of HeiCuBeDa allowing users to download and cite individual 3D models.
Wikipedia/3D_test_model
A vertex (plural vertices) in computer graphics is a data structure that describes certain attributes, like the position of a point in 2D or 3D space, or multiple points on a surface. == Application to 3D models == 3D models are most often represented as triangulated polyhedra forming a triangle mesh. Non-triangular surfaces can be converted to an array of triangles through tessellation. Attributes from the vertices are typically interpolated across mesh surfaces. == Vertex attributes == The vertices of triangles are associated not only with spatial position but also with other values used to render the object correctly. Most attributes of a vertex represent vectors in the space to be rendered. These vectors are typically 1 (x), 2 (x, y), or 3 (x, y, z) dimensional and can include a fourth homogeneous coordinate (w). These values are given meaning by a material description. In real-time rendering these properties are used by a vertex shader or vertex pipeline. Such attributes can include: Position 2D or 3D coordinates representing a position in space Color Typically diffuse or specular RGB values, either representing surface colour or precomputed lighting information. Reflectance of the surface at the vertex, e.g. specular exponent, metallicity, Fresnel values. Texture coordinates Also known as UV coordinates, these control the texture mapping of the surface, possibly for multiple layers. Normal vectors These define an approximated curved surface at the location of the vertex, used for lighting calculations (such as Phong shading), normal mapping, or displacement mapping, and to control subdivision. Tangent vectors Together with the normal vector, these define a tangent space at the vertex, used for techniques such as normal mapping and anisotropic shading. Blend weights (bone weights) Weighting for assignment to bones to control deformation in skeletal animation.
Blend shapes Multiple position vectors may be specified to be blended over time, especially for facial animation. == See also == For how vertices are processed on 3D graphics cards, see shader. == References ==
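As a rough sketch, the per-vertex attributes listed above might be grouped into a structure like the following. The field names and default values are illustrative only, not taken from any particular graphics API:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Vertex:
    # 3D position; a fourth homogeneous coordinate w is implied as 1.0
    position: Tuple[float, float, float]
    # Diffuse surface colour as RGB
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    # Texture (UV) coordinates controlling texture mapping
    uv: Tuple[float, float] = (0.0, 0.0)
    # Unit normal used for lighting calculations such as Phong shading
    normal: Tuple[float, float, float] = (0.0, 0.0, 1.0)
    # Bone weights for skeletal animation, keyed by bone index
    bone_weights: Dict[int, float] = field(default_factory=dict)

# A vertex of a unit triangle with explicit texture coordinates.
v = Vertex(position=(0.0, 1.0, 0.0), uv=(0.5, 0.5))
```

In a real engine such a structure would be flattened into a packed vertex buffer consumed by the vertex shader.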
Wikipedia/Vertex_(computer_graphics)
A wireless network interface controller (WNIC) is a network interface controller which connects to a wireless network, such as Wi-Fi, Bluetooth, LTE (4G) or 5G, rather than a wired network such as Ethernet. A WNIC, just like other NICs, operates at layers 1 and 2 of the OSI model and uses an antenna to communicate via radio waves. A wireless network interface controller may be implemented as an expansion card and connected using the PCI or PCIe bus, or connected via USB, PC Card, ExpressCard, Mini PCIe or M.2. The low cost and ubiquity of the Wi-Fi standard means that many newer mobile computers have a wireless network interface built into the motherboard. The term is usually applied to IEEE 802.11 adapters; it may also apply to a NIC using protocols other than 802.11, such as one implementing Bluetooth connections. == Modes of operation == An 802.11 WNIC can operate in two modes known as infrastructure mode and ad hoc mode: Infrastructure mode In an infrastructure mode network the WNIC needs a wireless access point: all data is transferred using the access point as the central hub. All wireless nodes in an infrastructure mode network connect to an access point. All nodes connecting to the access point must have the same service set identifier (SSID) as the access point. If wireless security is enabled on the access point (such as WEP or WPA), the NIC must have valid authentication parameters in order to connect to the access point. Ad hoc mode In an ad hoc mode network the WNIC does not require an access point, but rather can interface with all other wireless nodes directly. All the nodes in an ad hoc network must use the same channel and SSID. == Specifications == The IEEE 802.11 standard sets out low-level specifications for how all 802.11 wireless networks operate. Earlier 802.11 interface controllers are usually only compatible with earlier variants of the standard, while newer cards support both current and old standards.
Specifications commonly used in marketing materials for WNICs include: Wireless data transfer rates (measured in Mbit/s) Wireless transmit power (measured in dBm) Wireless network standards supported, such as 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ax Most WNICs support one or more of 802.11, Bluetooth and 3GPP (2G, 3G, 4G, 5G) network standards. == Range == Wireless range may be substantially affected by objects in the way of the signal and by the quality of the antenna. Large electrical appliances, such as refrigerators, fuse boxes, metal plumbing, and air conditioning units can impede a wireless network signal. The theoretical maximum range of IEEE 802.11 is only reached under ideal circumstances and true effective range is typically about half of the theoretical range. Specifically, the maximum throughput speed is only achieved at extremely close range (less than 25 feet (7.6 m) or so); at the outer reaches of a device's effective range, speed may decrease to around 1 Mbit/s before it drops out altogether. The reason is that wireless devices dynamically negotiate the top speed at which they can communicate without dropping too many data packets. == FullMAC and SoftMAC devices == In an 802.11 WNIC, the MAC Sublayer Management Entity (MLME) can be implemented either in the NIC's hardware or firmware, or in host-based software that is executed on the main CPU. A WNIC that implements the MLME function in hardware or firmware is called a FullMAC WNIC or a HardMAC NIC and a NIC that implements it in host software is called a SoftMAC NIC. A FullMAC device hides the complexity of the 802.11 protocol from the main CPU, instead providing an 802.3 (Ethernet) interface; a SoftMAC design implements only the timing-critical part of the protocol in hardware/firmware and the rest on the host. 
FullMAC chips are typically used in mobile devices because they are easier to integrate into complete products; power is saved by having a specialized CPU perform the 802.11 processing; and the chip vendor has tighter control of the MLME. A popular example of a FullMAC chip is the one used in the Raspberry Pi 3. The Linux kernel's mac80211 framework provides MLME capabilities for SoftMAC devices, with additional capabilities (such as mesh networking, standardized as IEEE 802.11s) for devices with limited functionality. FreeBSD also supports SoftMAC drivers. == See also == List of device bandwidths Wi-Fi operating system support == References ==
Wikipedia/Wireless_network_interface_controller
A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of Internet content, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media sites. CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays Internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers. CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, Multi CDN switching and analytics and cloud intelligence. CDN vendors may cross over into other industries like security, DDoS protection and web application firewalls (WAF), and WAN optimization. Content delivery service providers include Akamai Technologies, Cloudflare, Amazon CloudFront, Qwilt (Cisco), Fastly, and Google Cloud CDN. == Technology == CDN nodes are usually deployed in multiple locations, often over multiple Internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. 
The number of nodes and servers making up a CDN varies depending on the architecture: some reach thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs), while others build a global network with a small number of geographical PoPs. Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops away, that offer the shortest time to the requesting client, or that have the highest server performance, so as to optimize delivery across local networks. When optimizing for cost, locations that are the least expensive may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers that are close to the end user at the edge of the network may have an advantage in performance or cost. Most CDN providers will provide their services over a varying, defined set of PoPs, depending on the coverage desired, such as United States, International or Global, Asia-Pacific, etc. These sets of PoPs can be called "edges", "edge nodes", "edge servers", or "edge networks" as they would be the closest edge of CDN assets to the end user.
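As a toy illustration of performance-based request routing, a selector might simply direct a client to the PoP with the lowest measured round-trip time. The PoP names and latency figures below are invented for the example; real CDNs combine many signals (hops, load, cost) rather than a single RTT measurement:

```python
def pick_pop(rtt_ms):
    """Return the name of the PoP with the lowest round-trip time (ms)."""
    return min(rtt_ms, key=rtt_ms.get)

# Hypothetical RTT measurements from one client to three edge PoPs.
measurements = {"frankfurt": 18.0, "virginia": 95.0, "singapore": 210.0}
best = pick_pop(measurements)
```

A cost-optimizing CDN would instead minimize a price metric over the same candidate set, and in practice the two choices often coincide for nearby edges.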
CDN concepts: Content Provider Origin Server: the web server providing the source content CDN entry point(s): the servers within the CDN that fetch the content from the origin CDN Origin Shield: the CDN service helping to protect the origin server in case of heavy traffic CDN Edge Servers: the CDN servers serving the content request from the clients CDN footprint: the geographic areas where the CDN Edge Servers can effectively serve clients requests CDN selector: in the context of multi-CDN, a decision making service to choose among multiple CDNs CDN offloading: in the context of Peer-to-Peer CDN, a mechanism to help deliver the content between clients who are consuming it, in addition to CDN Edge Server delivery == Security and privacy == CDN providers profit either from direct fees paid by content providers using their network, or profit from the user analytics and tracking data collected as their scripts are being loaded onto customers' websites inside their browser origin. As such these services are being pointed out as potential privacy intrusions for the purpose of behavioral targeting and solutions are being created to restore single-origin serving and caching of resources. In particular, a website using a CDN may violate the EU's General Data Protection Regulation (GDPR). For example, in 2021 a German court forbade the use of a CDN on a university website, because this caused the transmission of the user's IP address to the CDN, which violated the GDPR. CDNs serving JavaScript have also been targeted as a way to inject malicious content into pages using them. Subresource Integrity mechanism was created in response to ensure that the page loads a script whose content is known and constrained to a hash referenced by the website author. == Content networking techniques == The Internet was designed according to the end-to-end principle. 
This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets. Content Delivery Networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services. Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching). Server-load balancing uses one or more techniques including service-based (global load balancing) or hardware-based (i.e. layer 4–7 switches, also known as a web switch, content switch, or multilayer switch) to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantage of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks. A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network. Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. 
A variety of algorithms are used to route the request. These include Global Server Load Balancing, DNS-based request routing, Dynamic metafile generation, HTML rewriting, and anycasting. Proximity—choosing the closest service node—is estimated using a variety of techniques including reactive probing, proactive probing, and connection monitoring. CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers. === Content service protocols === Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol. This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a Callout Server. Edge Side Includes or ESI is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content. It could be because of changing content like catalogs or forums, or because of personalization. This creates a problem for caching systems. To overcome this problem, a group of companies created ESI. === Peer-to-peer CDNs === In peer-to-peer (P2P) content-delivery networks, clients provide resources as well as use them. This means that, unlike client–server systems, the content-centric networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.
To incentivize peers to participate in the P2P network, web3 and blockchain technologies can be used: participating nodes receive crypto tokens in exchange for their involvement. === Private CDNs === If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that serve content only for their owner. These PoPs can be caching servers, reverse proxies or application delivery controllers. It can be as simple as two caching servers, or large enough to serve petabytes of content. When a private CDN is deployed within a company network, it is also referred to as an enterprise CDN (eCDN). Large content distribution networks may even build and set up their own private network to distribute copies of content across cache locations. Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or there is a failure which leads to capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity. == CDN trends == === Emergence of telco CDNs === The rapid growth of streaming video traffic required large capital expenditures by broadband providers in order to meet this demand and retain subscribers by delivering a sufficiently good quality of experience. To address this, telecommunications service providers have begun to launch their own content delivery networks as a means to lessen the demands on the network backbone and reduce infrastructure investments.
=== Telco CDN advantages === Because they own the networks over which video content is transmitted, telco CDNs have advantages over traditional CDNs. They own the last mile and can deliver content closer to the end-user because it can be cached deep in their networks. This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably. Telco CDNs also have a built-in cost advantage since traditional CDNs must lease bandwidth from them and build the operator's margin into their own cost model. In addition, by operating their own content delivery infrastructure, telco operators have better control over the utilization of their resources. Content management operations performed by CDNs are usually applied without (or with very limited) information about the network (e.g., topology, utilization etc.) of the telco-operators with which they interact or have business relationships. These pose a number of challenges for the telco-operators who have a limited sphere of action in face of the impact of these operations on the utilization of their resources. In contrast, the deployment of telco-CDNs allows operators to implement their own content management operations, which enables them to have a better control over the utilization of their resources and, as such, provide better quality of service and experience to their end users. === Federated CDNs and Open Caching === In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX) to interconnect their networks and compete more directly against large traditional CDNs like Akamai and Limelight Networks, which have extensive PoPs worldwide. This way, telcos are building a Federated CDN offering, which is more interesting for a content provider willing to deliver its content to the aggregated audience of this federation. It is likely that in a near future, other telco CDN federations will be created. 
They will grow by enrollment of new telcos joining the federation and bringing network presence and their Internet subscriber bases to the existing ones. The Open Caching specification by the Streaming Video Technology Alliance defines a set of APIs that allows a Content Provider to deliver its content through several CDNs in a consistent way, seeing each CDN provider the same way through these APIs. === Multi CDN and CDN selection === Combining several CDN services allows Content Providers to avoid relying on a single CDN service, a particular concern when dealing with high peak audiences during live events. There are several ways to allocate traffic among the CDNs: client-side CDN selection, server-side selection (at the Content Provider's origin), or cloud-side selection (in the middle, between the content origin and the audience). CDN selection criteria can be performance, availability and cost. === Improving CDN performance using Extension Mechanisms for DNS === Traditionally, CDNs have used the IP of the client's recursive DNS resolver to geo-locate the client. While this is a sound approach in many situations, this leads to poor client performance if the client uses a non-local recursive DNS resolver that is far away. For instance, a CDN may route requests from a client in India to its edge server in Singapore, if that client uses a public DNS resolver in Singapore, causing poor performance for that client. Indeed, a recent study showed that in many countries where public DNS resolvers are in popular use, the median distance between the clients and their recursive DNS resolvers can be as high as a thousand miles. In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the edns-client-subnet IETF Internet Draft, which is intended to accurately localize DNS resolution responses.
The initiative involves a limited number of leading DNS service providers, such as Google Public DNS, and CDN service providers as well. With the edns-client-subnet EDNS0 option, CDNs can now utilize the IP address of the requesting client's subnet when resolving DNS requests. This approach, called end-user mapping, has been adopted by CDNs and it has been shown to drastically reduce the round-trip latencies and improve performance for clients who use public DNS or other non-local resolvers. However, the use of EDNS0 also has drawbacks as it decreases the effectiveness of caching resolutions at the recursive resolvers, increases the total DNS resolution traffic, and raises a privacy concern of exposing the client's subnet. === Virtual CDN (vCDN) === Virtualization technologies are being used to deploy virtual CDNs (vCDNs) (also known as a software-defined CDN or sd-CDN) with the goal of reducing content provider costs while increasing elasticity and decreasing service delay. With vCDNs, it is possible to avoid traditional CDN limitations in performance, reliability and availability, since virtual caches are deployed dynamically (as virtual machines or containers) in physical servers distributed across the provider's geographical coverage. As the virtual cache placement is based on both the content type and server or end-user geographic location, the vCDNs have a significant impact on service delivery and network congestion. === CDN using non-HTTP delivery === To boost performance, delivery to clients from servers can use alternate non-HTTP protocols such as WebRTC and WebSockets. === Image Optimization and Delivery (Image CDNs) === In 2017, Addy Osmani of Google started referring to software solutions that could integrate naturally with the Responsive Web Design paradigm (with particular reference to the <picture> element) as Image CDNs.
The expression referred to the ability of a web architecture to serve multiple versions of the same image through HTTP, depending on the properties of the browser requesting it, as determined by either the browser or the server-side logic. The purpose of Image CDNs was, in Google's vision, to serve high-quality images (or, better, images perceived as high-quality by the human eye) while preserving download speed, thus contributing to a great User experience (UX). Arguably, the Image CDN term was originally a misnomer, as neither Cloudinary nor Imgix (the examples quoted by Google in the 2017 guide by Addy Osmani) were, at the time, a CDN in the classical sense of the term. Shortly afterwards, though, several companies offered solutions that allowed developers to serve different versions of their graphical assets according to several strategies. Many of these solutions were built on top of traditional CDNs, such as Akamai, CloudFront, Fastly, Edgecast and Cloudflare. At the same time, other solutions that already provided an image multi-serving service joined the Image CDN definition by either offering CDN functionality natively (ImageEngine) or integrating with one of the existing CDNs (Cloudinary/Akamai, Imgix/Fastly). While providing a universally agreed-on definition of what an Image CDN is may not be possible, generally speaking, an Image CDN supports the following three components: A Content Delivery Network (CDN) for the fast serving of images. Image manipulation and optimization, either on-the-fly through URL directives, in batch mode (through manual upload of images) or fully automatic (or a combination of these). Device Detection (also known as Device Intelligence), i.e. the ability to determine the properties of the requesting browser and/or device through analysis of the User-Agent string, HTTP Accept headers, Client-Hints or JavaScript. 
The following table summarizes the current situation with the main software CDNs in this space: == Content delivery service and technology providers == === Commercial or free software vendors (build your own CDN) === Ateme BlazingCDN Broadpeak Gcore Go-Fast CDN Nginx Varnish Software Vecima Networks Velocix (spin off Nokia) === Free-as-a-Service === === Commercial-as-a-Service === === Telco CDNs === === Commercial using P2P for delivery === === Multi === === In-house === TF1 BBC Netflix == See also == == References == == Further reading ==
Wikipedia/Content_delivery_network
Network address translation (NAT) is a method of mapping an IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. The technique was initially used to bypass the need to assign a new address to every host when a network was moved, or when the upstream Internet service provider was replaced but could not route the network's address space. It is a popular and essential tool in conserving global address space in the face of IPv4 address exhaustion. One Internet-routable IP address of a NAT gateway can be used for an entire private network. As network address translation modifies the IP address information in packets, NAT implementations may vary in their specific behavior in various addressing cases and their effect on network traffic. Vendors of equipment containing NAT implementations do not commonly document the specifics of NAT behavior. == History == Internet Protocol version 4 (IPv4) uses 32-bit addresses, capable of uniquely addressing about 4.3 billion devices on the network. By 1992, it became evident that this would not be enough. The 1994 RFC 1631 describes NAT as a "short-term solution" to the two most compelling problems facing the Internet Protocol at that time: IP address depletion and scaling in routing. By 2004, NAT had become widespread. The technique also became known as IP masquerading, which suggests a technique that hides an entire IP address space, usually consisting of private IP addresses, behind a single IP address in another, usually public, address space. Because of the popularity of this technique to conserve IPv4 address space, the term NAT became virtually synonymous with IP masquerading. In 1996, port-address translation (PAT) was introduced, which expanded the translation of addresses to include port numbers. == Basic NAT == The simplest type of NAT provides a one-to-one translation of IP addresses (RFC 1631).
RFC 2663 refers to this type of NAT as basic NAT, also called a one-to-one NAT. In this type of NAT, only the IP addresses, IP header checksum, and any higher-level checksums that include the IP address are changed. Basic NAT can be used to interconnect two IP networks with incompatible addresses. == One-to-many NAT == Most network address translators map multiple private hosts to one publicly exposed IP address. In a typical configuration, a local network uses one of the designated private IP address subnets (RFC 1918). The network has a router having network interfaces on both the private and the public network. The public address is typically assigned by an Internet service provider. As traffic passes from the private network to the Internet, NAT translates the source address in each packet from a private address to the router's public address. The NAT facility tracks each active connection. When the router receives inbound traffic from the Internet, it uses the connection tracking data obtained during the outbound phase to determine to which private address it should forward the reply. Packets passing from the private network to the public network will have their source address modified, while packets passing from the public network back to the private network will have their destination address modified. To avoid ambiguity in how replies are translated, further modifications to the packets are required. The vast bulk of Internet traffic uses Transmission Control Protocol (TCP) or User Datagram Protocol (UDP). For these protocols, the port numbers are changed so that the combination of IP address (within the IP header) and port number (within the Transport Layer header) on the returned packet can be unambiguously mapped to the corresponding private network destination. RFC 2663 uses the term network address and port translation (NAPT) for this type of NAT. Other names include port address translation (PAT), IP masquerading, NAT overload, and many-to-one NAT. 
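The connection tracking just described can be sketched as a pair of lookup tables. This is a toy model (the class name, addresses, and starting port are invented for the example): it rewrites the source of outbound packets and reverses the mapping for replies, while a real implementation would also track the protocol, the remote endpoint, and entry timeouts.

```python
class Napt:
    """Toy many-to-one NAT (NAPT) table. A real implementation also
    tracks the protocol, the remote endpoint, and entry timeouts."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Rewrite the source of a packet leaving the private network."""
        key = (src_ip, src_port)
        if key not in self.out:              # first packet of a connection
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def inbound(self, dst_port):
        """Find the private destination of a reply from the Internet."""
        return self.back[dst_port]           # KeyError: unsolicited packet

nat = Napt("192.0.2.1")
mapped = nat.outbound("10.0.0.5", 51515)     # -> ("192.0.2.1", 40000)
```

Subsequent packets from the same internal endpoint reuse the same external port, which is what allows replies to find their way back.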
This is the most common type of NAT and has become synonymous with the term NAT in common usage. This method allows communication through the router only when the conversation originates in the private network, since the initial originating transmission establishes the required information in the translation tables. Thus, a web browser within the private network is able to browse websites that are outside the network, whereas web browsers outside the network are unable to browse a website hosted within. Protocols not based on TCP and UDP require other translation techniques. The primary benefit of one-to-many NAT is mitigation of IPv4 address exhaustion by allowing entire networks to be connected to the Internet using a single public IP address. == Methods of translation == Network address and port translation may be implemented in several ways. Some applications that use IP address information may need to determine the external address of a network address translator. This is the address that its communication peers in the external network detect. Furthermore, it may be necessary to examine and categorize the type of mapping in use, for example when it is desired to set up a direct communication path between two clients both of which are behind separate NAT gateways. For this purpose, RFC 3489 specified the protocol Simple Traversal of UDP over NATs (STUN) in 2003. It classified NAT implementations as full-cone NAT, (address) restricted-cone NAT, port-restricted cone NAT or symmetric NAT, and proposed a methodology for testing a device accordingly. However, these procedures have since been deprecated from standards status, as the methods are inadequate to correctly assess many devices. RFC 5389 standardized new methods in 2008 and the acronym STUN since represents the new title of the specification: Session Traversal Utilities for NAT. 
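As an illustration only, the filtering side of the four classic STUN types can be sketched as follows. The definitions are the commonly cited simplifications, not the (deprecated) RFC 3489 test procedure itself; in particular, a symmetric NAT additionally allocates a separate mapping per destination, which this single-mapping sketch does not model.

```python
def accepts_inbound(nat_type, contacted, src_ip, src_port):
    """Illustrative filtering rules for the classic STUN NAT types.
    `contacted` is the set of external (ip, port) endpoints that the
    internal host has already sent packets to through this mapping."""
    if nat_type == "full-cone":
        return True                               # any external host
    if nat_type == "restricted-cone":
        return any(ip == src_ip for ip, _ in contacted)
    if nat_type in ("port-restricted-cone", "symmetric"):
        return (src_ip, src_port) in contacted    # exact endpoint only
    raise ValueError("unknown NAT type: %r" % nat_type)
```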
As many NAT implementations combine multiple types, it is better to refer to specific individual NAT behavior instead of using the Cone/Symmetric terminology. RFC 4787 attempts to alleviate confusion by introducing standardized terminology for observed behaviors. In terms of mapping behavior, the RFC characterizes Full-Cone, Restricted-Cone, and Port-Restricted Cone NATs as having an Endpoint-Independent Mapping, whereas it characterizes a Symmetric NAT as having an Address- and Port-Dependent Mapping. In terms of filtering behavior, RFC 4787 labels Full-Cone NAT as having an Endpoint-Independent Filtering, Restricted-Cone NAT as having an Address-Dependent Filtering, Port-Restricted Cone NAT as having an Address and Port-Dependent Filtering, and Symmetric NAT as having either an Address-Dependent Filtering or Address and Port-Dependent Filtering. Other classifications of NAT behavior mentioned in the RFC include whether they preserve ports, when and how mappings are refreshed, whether external mappings can be used by internal hosts (i.e., their hairpinning behavior), and the level of determinism NATs exhibit when applying all these rules. Specifically, most NATs combine symmetric NAT for outgoing connections with static port mapping, where incoming packets addressed to the external address and port are redirected to a specific internal address and port. === NAT mapping vs NAT filtering === RFC 4787 distinguishes between NAT mapping and NAT filtering. Section 4.1 of the RFC covers NAT mapping and specifies the translation of an internal IP address and port number to an external IP address and port number.
It defines endpoint-independent mapping, address-dependent mapping and address and port-dependent mapping, explains that these three possible choices do not relate to the security of the NAT, as security is determined by the filtering behavior, and then specifies "A NAT MUST have an 'Endpoint-Independent Mapping' behavior." Section 5 of the RFC covers NAT filtering and describes the criteria used by the NAT to filter packets originating from specific external endpoints. The options are endpoint-independent filtering, address-dependent filtering and address and port-dependent filtering. Endpoint-independent filtering is recommended where maximum application transparency is required, while address-dependent filtering is recommended where more stringent filtering behavior is most important. Some NAT devices are not compliant with RFC 4787 as they treat NAT mapping and filtering in the same way, so that their configuration option for changing the NAT filtering method also changes the NAT mapping method (e.g. Netgate TNSR). == Type of NAT and NAT traversal, role of port preservation for TCP == NAT traversal problems arise when peers behind different NATs try to communicate. One way to solve this problem is to use port forwarding. Another way is to use various NAT traversal techniques. The most popular technique for TCP NAT traversal is TCP hole punching. TCP hole punching requires the NAT to follow the port preservation design for TCP. For a given outgoing TCP communication, the same port numbers are used on both sides of the NAT. NAT port preservation for outgoing TCP connections is crucial for TCP NAT traversal because, under TCP, one port can only be used for one communication at a time. Programs that bind distinct TCP sockets to ephemeral ports for each TCP communication make NAT port prediction impossible for TCP. On the other hand, for UDP, NATs do not need port preservation.
Indeed, multiple UDP communications (each with a distinct endpoint) can occur on the same source port, and applications usually reuse the same UDP socket to send packets to distinct hosts. This makes port prediction straightforward, as it is the same source port for each packet. Furthermore, port preservation in NAT for TCP allows P2P protocols to offer less complexity and less latency because there is no need to use a third party (like STUN) to discover the NAT port since the application itself already knows the NAT port. However, if two internal hosts attempt to communicate with the same external host using the same port number, the NAT may attempt to use a different external IP address for the second connection or may need to forgo port preservation and remap the port. As of 2006, roughly 70% of the clients in peer-to-peer (P2P) networks employed some form of NAT. == Implementation == === Establishing two-way communication === Every TCP and UDP packet contains a source port number and a destination port number. Each of those packets is encapsulated in an IP packet, whose IP header contains a source IP address and a destination IP address. The IP address/protocol/port number triple defines an association with a network socket. For publicly accessible services such as web and mail servers the port number is important. For example, port 443 connects through a socket to the web server software and port 465 to a mail server's SMTP daemon. The IP address of a public server is also important, similar in global uniqueness to a postal address or telephone number. Both IP address and port number must be correctly known by all hosts wishing to successfully communicate. Private IP addresses as described in RFC 1918 are usable only on private networks not directly connected to the internet. Ports are endpoints of communication unique to that host, so a connection through the NAT device is maintained by the combined mapping of port and IP address.
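The UDP source-port reuse mentioned above can be demonstrated directly; here both "remote hosts" are simulated as listeners on the loopback interface, and both observe the sender arriving from the same source endpoint.

```python
import socket

# Two "remote hosts", simulated as UDP listeners on the loopback interface.
peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_a.bind(("127.0.0.1", 0))
peer_a.settimeout(2)
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b.bind(("127.0.0.1", 0))
peer_b.settimeout(2)

# One client socket reuses a single source port for both destinations,
# which is what makes UDP NAT port prediction straightforward.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.sendto(b"to-a", peer_a.getsockname())
client.sendto(b"to-b", peer_b.getsockname())

_, seen_by_a = peer_a.recvfrom(64)
_, seen_by_b = peer_b.recvfrom(64)
# both peers observe the same source (address, port) endpoint
```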
A private address on the inside of the NAT is mapped to an external public address. Port address translation (PAT) resolves conflicts that arise when multiple hosts happen to use the same source port number to establish different external connections at the same time. === Translation process === With NAT, all communications sent to external hosts actually contain the external IP address and port information of the NAT device instead of internal host IP addresses or port numbers. NAT only translates IP addresses and ports of its internal hosts, hiding the true endpoint of an internal host on a private network. When a computer on the private (internal) network sends an IP packet to the external network, the NAT device replaces the internal source IP address in the packet header with the external IP address of the NAT device. PAT may then assign the connection a port number from a pool of available ports, inserting this port number in the source port field. The packet is then forwarded to the external network. The NAT device then makes an entry in a translation table containing the internal IP address, original source port, and the translated source port. Subsequent packets from the same internal source IP address and port number are translated to the same external source IP address and port number. The computer receiving a packet that has undergone NAT establishes a connection to the port and IP address specified in the altered packet, oblivious to the fact that the supplied address is being translated. Upon receiving a packet from the external network, the NAT device searches the translation table based on the destination port in the packet header. If a match is found, the destination IP address and port number is replaced with the values found in the table and the packet is forwarded to the inside network. 
Otherwise, if the destination port number of the incoming packet is not found in the translation table, the packet is dropped or rejected because the PAT device doesn't know where to send it. == Applications == === Routing === Network address translation can be used to mitigate IP address overlap. Address overlap occurs when hosts in different networks with the same IP address space try to reach the same destination host. This is most often a misconfiguration and may result from the merger of two networks or subnets, especially when using RFC 1918 private network addressing. The destination host experiences traffic apparently arriving from the same network, and intermediate routers have no way to determine where reply traffic should be sent. The solution is either renumbering to eliminate overlap or network address translation. === Load balancing === In client–server applications, load balancers forward client requests to a set of server computers to manage the workload of each server. Network address translation may be used to map a representative IP address of the server cluster to specific hosts that service the request. == Related techniques == IEEE Reverse Address and Port Translation (RAPT or RAT) allows a host whose real IP address changes from time to time to remain reachable as a server via a fixed home IP address. Cisco's RAPT implementation is PAT or NAT overloading and maps multiple private IP addresses to a single public IP address. Multiple addresses can be mapped to a single address because each private address is tracked by a port number. PAT uses unique source port numbers on the inside global IP address to distinguish between translations. PAT attempts to preserve the original source port. If this source port is already used, PAT assigns the first available port number starting from the beginning of the appropriate port group 0–511, 512–1023, or 1024–65535.
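The port-selection policy just described can be sketched for a single external address (the function and variable names here are invented for the example):

```python
# Port groups used by the PAT allocation policy described above.
PORT_GROUPS = [(0, 511), (512, 1023), (1024, 65535)]

def pat_port(src_port, in_use):
    """Sketch of PAT source-port selection: keep the original source
    port when it is free, otherwise take the first available port
    from the same port group. `in_use` is the set of allocated ports."""
    if src_port not in in_use:
        return src_port
    lo, hi = next(g for g in PORT_GROUPS if g[0] <= src_port <= g[1])
    for port in range(lo, hi + 1):
        if port not in in_use:
            return port
    raise RuntimeError("port group exhausted")
```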
When there are no more ports available and there is more than one external IP address configured, PAT moves to the next IP address to try to allocate the original source port again. This process continues until it runs out of available ports and external IP addresses. Mapping of Address and Port is a Cisco proposal that combines Address plus Port translation with tunneling of the IPv4 packets over an ISP's internal IPv6 network. In effect, it is an (almost) stateless alternative to carrier-grade NAT and DS-Lite that pushes the IPv4 address/port translation function (and the maintenance of NAT state) entirely into the existing customer premises equipment NAT implementation, thus avoiding the NAT444 and statefulness problems of carrier-grade NAT while also providing a transition mechanism for the deployment of native IPv6 with very little added complexity. == Issues and limitations == Hosts behind NAT-enabled routers do not have end-to-end connectivity and cannot participate in some internet protocols. Services that require the initiation of TCP connections from the outside network, or that use stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destination. Some protocols can accommodate one instance of NAT between participating hosts ("passive mode" FTP, for example), sometimes with the assistance of an application-level gateway (see § Applications affected by NAT), but fail when both systems are separated from the internet by NAT. The use of NAT also complicates tunneling protocols such as IPsec because NAT modifies values in the headers which interfere with the integrity checks done by IPsec and other tunneling protocols. End-to-end connectivity has been a core principle of the Internet, supported, for example, by the Internet Architecture Board.
Current Internet architectural documents observe that NAT is a violation of the end-to-end principle, but that NAT does have a valid role in careful design. There is considerably more concern with the use of IPv6 NAT, and many IPv6 architects believe IPv6 was intended to remove the need for NAT. An implementation that only tracks ports can be quickly depleted by internal applications that use multiple simultaneous connections such as an HTTP request for a web page with many embedded objects. This problem can be mitigated by tracking the destination IP address in addition to the port, thus sharing a single local port with many remote hosts. This additional tracking increases implementation complexity and computing resources at the translation device. Because the internal addresses are all disguised behind one publicly accessible address, it is impossible for external hosts to directly initiate a connection to a particular internal host. Applications such as VoIP, videoconferencing, and other peer-to-peer applications must use NAT traversal techniques to function. === Fragmentation and checksums === Pure NAT, operating on IP alone, may or may not correctly parse protocols with payloads containing information about IP, such as ICMP. This depends on whether the payload is interpreted by a host on the inside or outside of the translation. Basic protocols such as TCP and UDP cannot function properly unless NAT takes action beyond the network layer. IP packets have a checksum in each packet header, which provides error detection only for the header. IP datagrams may become fragmented and it is necessary for a NAT to reassemble these fragments to allow correct recalculation of higher-level checksums and correct tracking of which packets belong to which connection.
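The recalculation of a higher-level checksum can be sketched for UDP. The UDP checksum covers a pseudo-header containing the source and destination IP addresses, so rewriting either address forces the NAT to recompute it. This sketch ignores UDP's zero-checksum special case; computing the checksum over a segment whose checksum field is zero yields the value to insert, and recomputing over a segment carrying a correct checksum yields 0.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header (addresses, protocol 17,
    UDP length) plus the UDP header and payload -- the part a NAT
    must recompute after rewriting an address."""
    pseudo = (bytes(int(o) for o in src_ip.split(".")) +
              bytes(int(o) for o in dst_ip.split(".")) +
              struct.pack("!BBH", 0, 17, len(udp_segment)))
    return inet_checksum(pseudo + udp_segment)

segment = struct.pack("!HHHH", 5353, 53, 8, 0)   # header-only UDP segment
before = udp_checksum("192.168.1.100", "198.51.100.7", segment)
after = udp_checksum("203.0.113.1", "198.51.100.7", segment)
# translating the source address changes the required checksum
```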
TCP and UDP have a checksum that covers all the data they carry, as well as the TCP or UDP header, plus a pseudo-header that contains the source and destination IP addresses of the packet carrying the TCP or UDP header. For an originating NAT to pass TCP or UDP successfully, it must recompute the TCP or UDP header checksum based on the translated IP addresses, not the original ones, and put that checksum into the TCP or UDP header of the first packet of the fragmented set of packets. Alternatively, the originating host may perform path MTU Discovery to determine the packet size that can be transmitted without fragmentation and then set the don't fragment (DF) bit in the appropriate packet header field. This is only a one-way solution, because the responding host can send packets of any size, which may be fragmented before reaching the NAT. == Variant terms == === DNAT === Destination network address translation (DNAT) is a technique for transparently changing the destination IP address of a routed packet and performing the inverse function for any replies. Any router situated between two endpoints can perform this transformation of the packet. DNAT is commonly used to publish a service located in a private network on a publicly accessible IP address. This use of DNAT is also called port forwarding, or DMZ when used on an entire server, which becomes exposed to the WAN, becoming analogous to an undefended military demilitarized zone (DMZ). === SNAT === The meaning of the term SNAT varies by vendor: source NAT is a common expansion and is the counterpart of destination NAT (DNAT). This is used to describe one-to-many NAT; NAT for outgoing connections to public services. 
stateful NAT is used by Cisco Systems
static NAT is used by WatchGuard
secure NAT is used by F5 and by Microsoft (in regard to the ISA Server)
Secure network address translation (SNAT) is part of Microsoft's Internet Security and Acceleration Server and is an extension to the NAT driver built into Microsoft Windows Server. It provides connection tracking and filtering for the additional network connections needed for the FTP, ICMP, H.323, and PPTP protocols as well as the ability to configure a transparent HTTP proxy server. === Dynamic network address translation === Dynamic NAT, just like static NAT, is not common in smaller networks but is found within larger corporations with complex networks. Where static NAT provides a one-to-one internal to public static IP address mapping, dynamic NAT uses a group of public IP addresses. == NAT hairpinning == NAT hairpinning, also known as NAT loopback or NAT reflection, is a feature in many consumer routers where a machine on the LAN is able to access another machine on the LAN via the external IP address of the LAN/router (with port forwarding set up on the router to direct requests to the appropriate machine on the LAN). This behavior was formally described in 2008 in RFC 5128. The following describes an example network:
Public address: 203.0.113.1. This is the address of the WAN interface on the router.
Internal address of router: 192.168.1.1
Address of the server: 192.168.1.2
Address of a local computer: 192.168.1.100
If a packet is sent to 203.0.113.1 by a computer at 192.168.1.100, the packet would normally be routed to the default gateway (the router). A router with the NAT loopback feature detects that 203.0.113.1 is the address of its WAN interface, and treats the packet as if coming from that interface. It determines the destination for that packet, based on DNAT (port forwarding) rules for the destination.
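Using the example network above, the loopback decision can be sketched as follows (the function name and rule table are invented for the example):

```python
def hairpin_route(src_ip, dst_ip, dst_port, wan_ip, dnat_rules):
    """Sketch of the NAT loopback decision: a LAN packet addressed to
    the router's own WAN address is DNATed using the port-forwarding
    rules, and its source is rewritten so the reply flows back through
    the router. Returns (src, dst, port), or None if the packet drops."""
    if dst_ip != wan_ip:
        return src_ip, dst_ip, dst_port       # ordinary forwarding
    if dst_port not in dnat_rules:
        return None                           # no DNAT rule: drop
    return wan_ip, dnat_rules[dst_port], dst_port

rules = {80: "192.168.1.2"}                   # forward port 80 to the server
pkt = hairpin_route("192.168.1.100", "203.0.113.1", 80, "203.0.113.1", rules)
# the server receives the request as coming from 203.0.113.1
```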
If the data were sent to port 80 and a DNAT rule exists for port 80 directed to 192.168.1.2, then the host at that address receives the packet. If no applicable DNAT rule is available, the router drops the packet. An ICMP Destination Unreachable reply may be sent. If any DNAT rules were present, address translation is still in effect; the router still rewrites the source IP address in the packet. The local computer (192.168.1.100) sends the packet as coming from 192.168.1.100, but the server (192.168.1.2) receives it as coming from 203.0.113.1. When the server replies, the process is identical to that for an external sender. Thus, two-way communication is possible between hosts inside the LAN network via the public IP address. == NAT in IPv6 == Network address translation is not commonly used in IPv6 because one of the design goals of IPv6 is to restore end-to-end network connectivity. The large addressing space of IPv6 obviates the need to conserve addresses and every device can be given a unique globally routable address. Use of unique local addresses in combination with network prefix translation can achieve results similar to NAT. The large addressing space of IPv6 can still be defeated depending on the actual prefix length given by the carrier. It is not uncommon to be handed a /64 prefix – the smallest recommended subnet – for an entire home network, requiring a variety of techniques to be used to manually subdivide the range for all devices to remain reachable. Even actual IPv6-to-IPv6 NAT, NAT66, can turn out to be useful at times: the APNIC blog outlines a case where the author was only provided a single address (/128). == Applications affected by NAT == Some application layer protocols, such as File Transfer Protocol (FTP) and Session Initiation Protocol (SIP), send explicit network addresses within their application data. File Transfer Protocol in active mode, for example, uses separate connections for control traffic (commands) and for data traffic (file contents).
When requesting a file transfer, the host making the request identifies the corresponding data connection by its network layer and transport layer addresses. If the host making the request lies behind a simple NAT firewall, the translation of the IP address or TCP port number makes the information received by the server invalid. SIP commonly controls voice over IP calls, and suffers from the same problem. SIP and its accompanying Session Description Protocol may use multiple ports to set up a connection and transmit voice streams via Real-time Transport Protocol. IP addresses and port numbers are encoded in the payload data and must be known before the traversal of NATs. Without special techniques, such as STUN, NAT behavior is unpredictable and communications may fail. Application Layer Gateway (ALG) software or hardware may correct these problems. An ALG software module running on a NAT firewall device updates any payload data made invalid by address translation. ALGs need to understand the higher-layer protocol that they need to fix, and so each protocol with this problem requires a separate ALG. For example, on many Linux systems, there are kernel modules called connection trackers that serve to implement ALGs. However, ALG cannot work if the protocol data is encrypted. Another possible solution to this problem is to use NAT traversal techniques using protocols such as STUN or Interactive Connectivity Establishment (ICE), or proprietary approaches in a session border controller. NAT traversal is possible in both TCP- and UDP-based applications, but the UDP-based technique is simpler, more widely understood, and more compatible with legacy NATs. In either case, the high-level protocol must be designed with NAT traversal in mind, and it does not work reliably across symmetric NATs or other poorly behaved legacy NATs.
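As a toy illustration of the payload fix-up an ALG performs, the following rewrites the connection address in an SDP body. The function name and SDP snippet are invented for this example; a real SIP ALG must also rewrite ports, the SIP headers themselves, and the media ports negotiated for RTP.

```python
import re

def sdp_alg(payload: str, internal_ip: str, external_ip: str) -> str:
    """Toy ALG: rewrite the connection address (c= line) that an
    internal client placed in its SDP body, replacing the private
    address with the NAT's public one."""
    return re.sub("c=IN IP4 %s" % re.escape(internal_ip),
                  "c=IN IP4 %s" % external_ip, payload)

fixed = sdp_alg("v=0\r\nc=IN IP4 192.168.1.100\r\nm=audio 49170 RTP/AVP 0\r\n",
                "192.168.1.100", "203.0.113.1")
```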
Other possibilities are Port Control Protocol (PCP), NAT Port Mapping Protocol (NAT-PMP), or Internet Gateway Device Protocol but these require the NAT device to implement that protocol. Most client–server protocols (FTP being the main exception), however, do not send layer 3 contact information and do not require any special treatment by NATs. In fact, avoiding NAT complications is practically a requirement when designing new higher-layer protocols today. NATs can also cause problems where IPsec encryption is applied and in cases where multiple devices such as SIP phones are located behind a NAT. Phones that encrypt their signaling with IPsec encapsulate the port information within an encrypted packet, meaning that NAT devices cannot access and translate the port. In these cases, the NAT devices revert to simple NAT operations. This means that all traffic returning to the NAT is mapped onto one client, causing service to more than one client behind the NAT to fail. There are a couple of solutions to this problem: one is to use TLS, which operates at layer 4 and does not mask the port number; another is to encapsulate the IPsec within UDP – the latter being the solution chosen by TISPAN to achieve secure NAT traversal, or a NAT with "IPsec Passthru" support; another is to use a session border controller to help traverse the NAT. Interactive Connectivity Establishment (ICE) is a NAT traversal technique that does not rely on ALG support. The DNS protocol vulnerability announced by Dan Kaminsky on July 8, 2008, is indirectly affected by NAT port mapping. To avoid DNS cache poisoning, it is highly desirable not to translate UDP source port numbers of outgoing DNS requests from a DNS server behind a firewall that implements NAT. The recommended workaround for the DNS vulnerability is to make all caching DNS servers use randomized UDP source ports. If the NAT function de-randomizes the UDP source ports, the DNS server becomes vulnerable. 
== Examples of NAT software ==
Internet Connection Sharing (ICS): NAT & DHCP implementation included with Windows desktop operating systems
IPFilter: included with OpenSolaris, FreeBSD and NetBSD, available for many other Unix-like operating systems
ipfirewall (ipfw): FreeBSD-native packet filter
Netfilter with iptables/nftables: the Linux packet filter
NPF: NetBSD-native packet filter
PF: OpenBSD-native packet filter
Routing and Remote Access Service (RRAS): routing implementation included with Windows Server operating systems
VPP: user space packet forwarding implementation for Linux
WinGate: third-party routing implementation for Windows
== See also ==
Anything In Anything (AYIYA) – IPv6 over IPv4 UDP, thus working IPv6 tunneling over most NATs
Carrier-grade NAT – NAT behind NAT within ISP
Gateway (telecommunications) – Connection between two network systems
Internet Gateway Device Protocol (UPnP IGD) – NAT-traversal method
Middlebox – Intermediary box on the data path between a source host and destination host
NAT Port Mapping Protocol (NAT-PMP) – NAT-traversal method
Port Control Protocol (PCP) – NAT-traversal method
Port triggering – NAT traversal mechanism
Subnetwork – Logical subdivision of an IP network
Teredo tunneling – NAT traversal using IPv6
== Notes ==
== References ==
== External links ==
Characterization of different TCP NATs at the Wayback Machine (archived 2006-01-11) – Paper discussing the different types of NAT
Anatomy: A Look Inside Network Address Translators – Volume 7, Issue 3, September 2004
Jeff Tyson, HowStuffWorks: How Network Address Translation Works
Routing with NAT at archive.today (archived 2013-01-03) (Part of the documentation for the IBM iSeries)
Network Address Translation (NAT) FAQ – Cisco Systems
Wikipedia/Network_address_translator
A network interface controller (NIC, also known as a network interface card, network adapter, LAN adapter and physical network interface) is a computer hardware component that connects a computer to a computer network. Early network interface controllers were commonly implemented on expansion cards that plugged into a computer bus. The low cost and ubiquity of the Ethernet standard mean that most newer computers have a network interface built into the motherboard or contained in a USB-connected dongle, although network cards remain available. Modern network interface controllers offer advanced features such as interrupt and DMA interfaces to the host processors, support for multiple receive and transmit queues, partitioning into multiple logical interfaces, and on-controller network traffic processing such as the TCP offload engine. == Purpose == The network controller implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or Wi-Fi. This provides a base for a full network protocol stack, allowing communication among computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP). The NIC allows computers to communicate over a computer network, either by using cables or wirelessly. The NIC is both a physical layer and data link layer device, as it provides physical access to a networking medium and, for IEEE 802 and similar networks, provides a low-level addressing system through the use of MAC addresses that are uniquely assigned to network interfaces. == Implementation == Network controllers were originally implemented as expansion cards that plugged into a computer bus. The low cost and ubiquity of the Ethernet standard mean that most new computers have a network interface controller built into the motherboard. Newer server motherboards may have multiple network interfaces built-in.
The Ethernet capabilities are either integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip. A separate network card is typically no longer required unless additional independent network connections are needed or some non-Ethernet type of network is used. A general trend in computer hardware is towards integrating the various components of systems on a chip, and this is also applied to network interface cards. An Ethernet network controller typically has an 8P8C socket where the network cable is connected. Older NICs also supplied BNC or AUI connectors. Ethernet network controllers typically support 10 Mbit/s Ethernet, 100 Mbit/s Ethernet, and 1000 Mbit/s Ethernet varieties. Such controllers are designated as 10/100/1000, meaning that they can support data rates of 10, 100 or 1000 Mbit/s. 10 Gigabit Ethernet NICs are also available, and, as of November 2014, are beginning to be available on computer motherboards. Modular designs like SFP and SFP+ are highly popular, especially for fiber-optic communication. These define a standard receptacle for media-dependent transceivers, so users can easily adapt the network interface to their needs. LEDs adjacent to or integrated into the network connector inform the user of whether the network is connected, and when data activity occurs. The NIC may include ROM to store its factory-assigned MAC address. The NIC may use one or more of the following techniques to indicate the availability of packets to transfer:
Polling is where the CPU examines the status of the peripheral under program control.
Interrupt-driven I/O is where the peripheral alerts the CPU that it is ready to transfer data.
NICs may use one or more of the following techniques to transfer packet data:
Programmed input/output, where the CPU moves the data to or from the NIC to memory.
Direct memory access (DMA), where a device other than the CPU assumes control of the system bus to move data to or from the NIC to memory.
This removes load from the CPU but requires more logic on the card. In addition, a packet buffer on the NIC may not be required and latency can be reduced. == Performance and advanced functionality == Multiqueue NICs provide multiple transmit and receive queues, allowing packets received by the NIC to be assigned to one of its receive queues. The NIC may distribute incoming traffic between the receive queues using a hash function. Each receive queue is assigned to a separate interrupt; by routing each of those interrupts to different CPUs or CPU cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed, improving performance. The hardware-based distribution of the interrupts, described above, is referred to as receive-side scaling (RSS). Purely software implementations also exist, such as receive packet steering (RPS) and receive flow steering (RFS); Intel Flow Director provides a further hardware-assisted variant. Further performance improvements can be achieved by routing the interrupt requests to the CPUs or cores executing the applications that are the ultimate destinations for the network packets that generated the interrupts. This technique improves locality of reference and results in higher overall performance, reduced latency and better hardware utilization because of the higher utilization of CPU caches and fewer required context switches. With multi-queue NICs, additional performance improvements can be achieved by distributing outgoing traffic among different transmit queues. By assigning different transmit queues to different CPUs or CPU cores, internal operating system contentions can be avoided. This approach is usually referred to as transmit packet steering (XPS).
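The hash-based queue assignment behind receive-side scaling can be sketched in a few lines. Real NICs compute a Toeplitz hash over the packet's flow fields; the CRC32 used below is a stand-in so the sketch stays self-contained, and the queue count and function names are illustrative assumptions rather than any particular NIC's interface:

```python
import zlib

NUM_RX_QUEUES = 4  # hypothetical number of hardware receive queues

def rx_queue_for_packet(src_ip: str, dst_ip: str,
                        src_port: int, dst_port: int) -> int:
    """Map a flow's 4-tuple to a receive queue index.

    Real NICs use a Toeplitz hash over these header fields; CRC32
    stands in here purely for illustration.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_RX_QUEUES

# Packets of the same flow always hash to the same queue, so one CPU
# core services that flow's interrupts end to end.
q1 = rx_queue_for_packet("10.0.0.1", "10.0.0.2", 40000, 443)
q2 = rx_queue_for_packet("10.0.0.1", "10.0.0.2", 40000, 443)
assert q1 == q2
```

Because the hash depends only on the flow's 4-tuple, every packet of a given connection lands on the same queue, preserving per-flow ordering while still spreading distinct flows across cores.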
Some products feature NIC partitioning (NPAR, also known as port partitioning) that uses SR-IOV virtualization to divide a single 10 Gigabit Ethernet NIC into multiple discrete virtual NICs with dedicated bandwidth, which are presented to the firmware and operating system as separate PCI device functions. Some NICs provide a TCP offload engine to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as Gigabit Ethernet and 10 Gigabit Ethernet, for which the processing overhead of the network stack becomes significant. Some NICs offer integrated field-programmable gate arrays (FPGAs) for user-programmable processing of network traffic before it reaches the host computer, allowing for significantly reduced latencies in time-sensitive workloads. Moreover, some NICs offer complete low-latency TCP/IP stacks running on integrated FPGAs in combination with userspace libraries that intercept networking operations usually performed by the operating system kernel; Solarflare's open-source OpenOnload network stack that runs on Linux is an example. This kind of functionality is usually referred to as user-level networking. == See also == Converged network adapter (CNA) Host adapter Intel Data Direct I/O (DDIO) Loopback interface Network monitoring interface card (NMIC) Virtual network interface (VIF) Wireless network interface controller (WNIC) == Notes == == References == == External links == "Physical Network Interface". Microsoft. "Predictable Network Interface Names". Freedesktop.org. Multi-queue network interfaces with SMP on Linux
Wikipedia/Network_interface_card
A web application (or web app) is application software that is created with web technologies and runs via a web browser. Web applications emerged during the late 1990s and allowed the server to build a response to each request dynamically, in contrast to static web pages. Web applications are commonly distributed via a web server. There are several tiered architectures that web applications use to communicate between the web browser, the client interface, and server data; each has its own uses, as they function in different ways. However, there are many security risks that developers must be aware of during development; proper measures to protect user data are vital. Web applications are often constructed with the use of a web application framework. Single-page applications (SPAs) and progressive web apps (PWAs) are two architectural approaches to creating web applications that provide a user experience similar to native apps, including features such as smooth navigation, offline support, and faster interactions. == History == The concept of a "web application" was first introduced in the Java language in the Servlet Specification version 2.2, which was released in 1999. At that time, both JavaScript and XML had already been developed, but the XMLHttpRequest object had only been recently introduced on Internet Explorer 5 as an ActiveX object. Beginning around the early 2000s, applications such as "Myspace (2003), Gmail (2004), Digg (2004), [and] Google Maps (2005)," started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page. The practice became known as Ajax in 2005. In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally.
In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. Additionally, both the client and server components of the application were bound tightly to a particular computer architecture and operating system, which made porting them to other systems prohibitively expensive for all but the largest applications. Later, in 1995, Netscape introduced the client-side scripting language called JavaScript, which allowed programmers to add dynamic elements to the user interface that ran on the client side. Essentially, instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page. "Progressive web apps", the term coined by designer Frances Berriman and Google Chrome engineer Alex Russell in 2015, refers to apps taking advantage of new features supported by modern browsers, which initially run inside a web browser tab but later can run completely offline and can be launched without entering the app URL in the browser. == Structure == Traditional PC applications are typically single-tiered, residing solely on the client machine. In contrast, web applications inherently facilitate a multi-tiered architecture. Though many variations are possible, the most common structure is the three-tiered application. In its most common form, the three tiers are called presentation, application and storage. The first tier, presentation, refers to a web browser itself. The second tier refers to any engine using dynamic web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails). 
The third tier refers to a database that stores the application's data. Essentially, when using the three-tiered system, the web browser sends requests to the engine, which then services them by making queries and updates against the database and generating a user interface. The 3-tier solution may fall short when dealing with more complex applications, and may need to be replaced with the n-tiered approach; the greatest benefit of this is that the business logic (which resides on the application tier) is broken down into a more fine-grained model. Another benefit is the option to add an integration tier, which separates the data tier from the rest and provides an easy-to-use interface to access the data. For example, client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table on the database. This allows the underlying database to be replaced without making any change to the other tiers. Some view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server. The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both. While this increases the scalability of the applications and separates the display and the database, it still does not allow for true specialization of layers, so most applications will outgrow this model. == Security == Security breaches on these kinds of applications are a major concern because they can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process.
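The "list_clients()" integration-tier idea can be sketched with an in-memory SQLite database standing in for the storage tier; the table layout, names, and sample data are hypothetical:

```python
import sqlite3

def make_storage_tier() -> sqlite3.Connection:
    # An in-memory SQLite database stands in for the storage tier.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO clients (name) VALUES (?)",
                     [("Ada",), ("Grace",)])
    return conn

def list_clients(conn: sqlite3.Connection) -> list[str]:
    """Integration-tier accessor: callers never see the SQL or the
    schema, so the database can be replaced (or the query rewritten)
    without touching the other tiers."""
    rows = conn.execute("SELECT name FROM clients ORDER BY name")
    return [name for (name,) in rows]

conn = make_storage_tier()
print(list_clients(conn))  # ['Ada', 'Grace']
```

The application tier calls `list_clients(conn)` and never constructs SQL itself; swapping SQLite for another store only requires reimplementing the accessor functions.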
This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into the applications from the beginning is sometimes more effective and less disruptive in the long run. == Development == Writing web applications is simplified with the use of web application frameworks. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management. In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model. == See also == Web API Software as a service (SaaS) Web 2.0 Web engineering Web GIS Web services Web sciences Web widget == References == == External links == HTML5 Draft recommendation, changes to HTML and related APIs to ease authoring of web-based applications. Web Applications Working Group at the World Wide Web Consortium (W3C) PWAs on Web.dev by Google Developers.
Wikipedia/Web_application
In number theory, the first Hardy–Littlewood conjecture states an asymptotic formula for the number of prime k-tuples less than a given magnitude, generalizing the prime number theorem. It was first proposed by G. H. Hardy and John Edensor Littlewood in 1923.

== Statement ==

Let $m_1, m_2, \ldots, m_k$ be positive even integers such that the numbers of the sequence $P = (p, p + m_1, p + m_2, \ldots, p + m_k)$ do not form a complete residue class with respect to any prime, and let $\pi_P(n)$ denote the number of primes $p$ less than $n$ such that $p + m_1, p + m_2, \ldots, p + m_k$ are all prime. Then

$$\pi_P(n) \sim C_P \int_2^n \frac{dt}{\log^{k+1} t},$$

where

$$C_P = 2^k \prod_{\substack{q \text{ prime} \\ q \geq 3}} \frac{1 - \frac{w(q; m_1, m_2, \ldots, m_k)}{q}}{\left(1 - \frac{1}{q}\right)^{k+1}}$$

is a product over odd primes and $w(q; m_1, m_2, \ldots, m_k)$ denotes the number of distinct residues of $0, m_1, m_2, \ldots, m_k$ modulo $q$.

The case $k = 1$ and $m_1 = 2$ is related to the twin prime conjecture. Specifically, if $\pi_2(n)$ denotes the number of twin primes less than $n$, then

$$\pi_2(n) \sim C_2 \int_2^n \frac{dt}{\log^2 t},$$

where

$$C_2 = 2 \prod_{\substack{q \text{ prime} \\ q \geq 3}} \left(1 - \frac{1}{(q - 1)^2}\right) \approx 1.320323632\ldots$$

is the twin prime constant.
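The twin prime case can be checked numerically. The sketch below (plain Python, no external libraries) truncates the product defining the twin prime constant at the primes below 10^6, then compares the actual twin prime count with the conjectured asymptotic, approximating the integral by a midpoint sum:

```python
from math import log

def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

# Truncate the product for the twin prime constant at q <= 10**6;
# the neglected tail contributes less than 1e-7.
C2 = 2.0
for q in primes_up_to(10 ** 6):
    if q >= 3:
        C2 *= 1 - 1 / (q - 1) ** 2
print(round(C2, 6))  # 1.320324

# Compare pi_2(n) with C2 * integral_2^n dt / log^2 t for n = 10**5.
n = 10 ** 5
ps = set(primes_up_to(n + 2))
pi2 = sum(1 for p in ps if p <= n and p + 2 in ps)
li2 = sum(1 / log(t + 0.5) ** 2 for t in range(2, n))  # midpoint sum
print(pi2, round(C2 * li2))  # counts agree to within a few percent
```

At this small range the conjectured value overshoots the true count slightly; the relative error shrinks as n grows.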
== Skewes' number ==

The Skewes numbers for prime k-tuples are an extension of the definition of Skewes' number to prime k-tuples, based on the first Hardy–Littlewood conjecture. The first prime $p$ that violates the Hardy–Littlewood inequality for the k-tuple $P$, i.e., such that

$$\pi_P(p) > C_P \operatorname{li}_P(p)$$

(if such a prime exists), is the Skewes number for $P$.

== Consequences ==

The conjecture has been shown to be inconsistent with the second Hardy–Littlewood conjecture.

== Generalizations ==

The Bateman–Horn conjecture generalizes the first Hardy–Littlewood conjecture to polynomials of degree higher than 1.

== Notes ==

== References ==

Aletheia-Zomlefer, Soren Laing; Fukshansky, Lenny; Garcia, Stephan Ramon (2020). "The Bateman–Horn conjecture: Heuristic, history, and applications". Expositiones Mathematicae. 38 (4): 430–479. doi:10.1016/j.exmath.2019.04.005. ISSN 0723-0869. Tóth, László (January 2019). "On the Asymptotic Density of Prime k-tuples and a Conjecture of Hardy and Littlewood". Computational Methods in Science and Technology. 25 (3): 143–148. arXiv:1910.02636. doi:10.12921/cmst.2019.0000033.
Wikipedia/First_Hardy–Littlewood_conjecture