In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables.[1] Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables. Negative binomial regression is a popular generalization of Poisson regression because it loosens the highly restrictive assumption, made by the Poisson model, that the variance equals the mean. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution; this model is popular because it models the Poisson heterogeneity with a gamma distribution. Poisson regression models are generalized linear models with the logarithm as the (canonical) link function, and the Poisson distribution function as the assumed probability distribution of the response. If x ∈ R^n is a vector of independent variables, then the model takes the form

log(E[Y | x]) = α + β′x,

where α ∈ R and β ∈ R^n. Sometimes this is written more compactly as

log(E[Y | x]) = θ′x,

where x is now an (n + 1)-dimensional vector consisting of the n independent variables concatenated to the number one, and θ is β concatenated to α. Thus, when given a Poisson regression model θ and an input vector x, the predicted mean of the associated Poisson distribution is given by

E[Y | x] = e^(θ′x).

If Y_i are independent observations with corresponding values x_i of the predictor variables, then θ can be estimated by maximum likelihood.
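The prediction step of the model above is just an exponentiated dot product; a minimal sketch (the coefficient and feature values below are invented for illustration):

```python
import math

# Hypothetical fitted parameters: intercept alpha and slope vector beta.
alpha = 0.5
beta = [0.3, -0.2]

def predicted_mean(x, alpha, beta):
    """Predicted Poisson mean E[Y | x] = exp(alpha + beta . x)."""
    linear = alpha + sum(b * xi for b, xi in zip(beta, x))
    return math.exp(linear)

# Equivalent compact form: theta = (alpha, beta), x augmented with a leading 1.
theta = [alpha] + beta
x_aug = [1.0, 2.0, 1.0]          # (1, x_1, x_2)
mu = math.exp(sum(t * xi for t, xi in zip(theta, x_aug)))
assert abs(mu - predicted_mean([2.0, 1.0], alpha, beta)) < 1e-12
```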
The maximum-likelihood estimates lack a closed-form expression and must be found by numerical methods. The probability surface for maximum-likelihood Poisson regression is always concave, making Newton-Raphson or other gradient-based methods appropriate estimation techniques. Suppose we have a model with a single predictor, that is, n = 1:

log(E[Y | x]) = α + βx.

Computing the predicted values at points (Y_2, x_2) and (Y_1, x_1) and subtracting the first from the second gives

log(E[Y_2 | x_2]) − log(E[Y_1 | x_1]) = β(x_2 − x_1).

Suppose now that x_2 = x_1 + 1. We obtain

log(E[Y_2 | x_2]) − log(E[Y_1 | x_1]) = β.

So the coefficient of the model is to be interpreted as the increase in the logarithm of the count of the outcome variable when the independent variable increases by 1. By applying the rules of logarithms,

E[Y_2 | x_2] = e^β · E[Y_1 | x_1].

That is, when the independent variable increases by 1, the outcome variable is multiplied by the exponentiated coefficient. The exponentiated coefficient is also called the incidence rate ratio. Often, the object of interest is the average partial effect or average marginal effect ∂E(Y|x)/∂x, which is interpreted as the change in the outcome Y for a one-unit change in the independent variable x. The average partial effect in the Poisson model for a continuous x can be shown to be[2]

(1/m) ∑_i ∂E(Y|x_i)/∂x = β · (1/m) ∑_i e^(α + βx_i),

which can be estimated using the coefficient estimates from the Poisson model, θ̂ = (α̂, β̂), with the observed values of x.
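The incidence rate ratio interpretation can be verified numerically; a small sketch with made-up values for the fitted α and β:

```python
import math

alpha, beta = 1.0, 0.25          # hypothetical fitted intercept and slope

def mean(x):
    """Predicted mean E[Y | x] = exp(alpha + beta*x)."""
    return math.exp(alpha + beta * x)

x1 = 3.0
irr = mean(x1 + 1) / mean(x1)    # ratio for a one-unit increase in x
# The ratio equals exp(beta) regardless of the starting point x1.
assert abs(irr - math.exp(beta)) < 1e-12
```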
Given a set of parameters θ and an input vector x, the mean of the predicted Poisson distribution, as stated above, is given by

λ := E[Y | x] = e^(θ′x),

and thus the Poisson distribution's probability mass function is given by

p(y | x; θ) = λ^y e^(−λ) / y! = e^(y·θ′x − e^(θ′x)) / y!.

Now suppose we are given a data set consisting of m vectors x_i ∈ R^(n+1), i = 1, …, m, along with a set of m values y_1, …, y_m ∈ N. Then, for a given set of parameters θ, the probability of attaining this particular set of data is given by

p(y_1, …, y_m | x_1, …, x_m; θ) = ∏_{i=1}^{m} e^(y_i·θ′x_i − e^(θ′x_i)) / y_i!.

By the method of maximum likelihood, we wish to find the set of parameters θ that makes this probability as large as possible. To do this, the equation is first rewritten as a likelihood function in terms of θ:

L(θ | X, Y) = ∏_{i=1}^{m} e^(y_i·θ′x_i − e^(θ′x_i)) / y_i!.

Note that the expression on the right-hand side has not actually changed. A formula in this form is typically difficult to work with; instead, one uses the log-likelihood:

ℓ(θ | X, Y) = ∑_{i=1}^{m} (y_i·θ′x_i − e^(θ′x_i) − log(y_i!)).

Notice that the parameters θ appear only in the first two terms of each summand. Therefore, given that we are only interested in finding the best value for θ, we may drop the y_i! and simply write

ℓ(θ | X, Y) = ∑_{i=1}^{m} (y_i·θ′x_i − e^(θ′x_i)).

To find a maximum, we need to solve the equation ∂ℓ(θ | X, Y)/∂θ = 0, which has no closed-form solution. However, the negative log-likelihood, −ℓ(θ | X, Y), is a convex function, and so standard convex optimization techniques such as gradient descent can be applied to find the optimal value of θ. Poisson regression may be appropriate when the dependent variable is a count, for instance of events such as the arrival of telephone calls at a call centre.[3] The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day.
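The concave log-likelihood makes Newton-Raphson practical; below is a minimal sketch fitting a one-predictor model on a tiny made-up data set (a hand-rolled illustration, not any standard library's implementation):

```python
import math

# Tiny made-up data set: each row of X is (1, x_i); Y holds observed counts.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Y = [1, 2, 4, 9]

def score_and_hessian(theta):
    """Score (gradient of the log-likelihood) and its Hessian."""
    g = [0.0, 0.0]
    h = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in zip(X, Y):
        mu = math.exp(theta[0] * x[0] + theta[1] * x[1])
        for j in range(2):
            g[j] += (y - mu) * x[j]
            for k in range(2):
                h[j][k] -= mu * x[j] * x[k]
    return g, h

# Newton-Raphson: theta <- theta - H^{-1} g, with the 2x2 H inverted directly.
# Starting from (log of the mean count, 0) keeps the first step well-behaved.
theta = [math.log(sum(Y) / len(Y)), 0.0]
for _ in range(25):
    g, h = score_and_hessian(theta)
    det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
    theta[0] -= (h[1][1] * g[0] - h[0][1] * g[1]) / det
    theta[1] -= (h[0][0] * g[1] - h[1][0] * g[0]) / det

# At the MLE the score equations sum_i (y_i - mu_i) x_i = 0 hold.
g, _ = score_and_hessian(theta)
assert all(abs(gj) < 1e-8 for gj in g)
```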
Poisson regression may also be appropriate for rate data, where the rate is a count of events divided by some measure of that unit's exposure (a particular unit of observation).[4] For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person-years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person-years, and unit time. In Poisson regression this is handled as an offset. If the rate is count/exposure, multiplying both sides of the equation by exposure moves it to the right side of the equation. When both sides of the equation are then logged, the final model contains log(exposure) as a term that is added to the regression coefficients. This logged variable, log(exposure), is called the offset variable and enters on the right-hand side of the equation with a parameter estimate (for log(exposure)) constrained to 1:

log(E[Y | x]) = log(exposure) + θ′x,

which implies

log(E[Y | x]) − log(exposure) = log(E[Y | x] / exposure) = θ′x.

Offset in the case of a GLM in R can be achieved using the offset() function:

glm(y ~ offset(log(exposure)) + x, family = poisson(link = log))

A characteristic of the Poisson distribution is that its mean is equal to its variance. In certain circumstances, it will be found that the observed variance is greater than the mean; this is known as overdispersion and indicates that the model is not appropriate. A common reason is the omission of relevant explanatory variables or dependent observations.
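Outside R, the offset mechanic can be reproduced by hand; a small sketch (the parameters and exposure value are invented) showing that adding log(exposure) with a coefficient fixed at 1 converts a rate model into a count model:

```python
import math

theta = (-2.0, 0.5)    # hypothetical rate-model parameters (alpha, beta)
x = 1.2                # covariate value (made up)
exposure = 50.0        # e.g. person-years observed for this unit

# Rate model: log(rate) = alpha + beta*x.
log_rate = theta[0] + theta[1] * x
# Count model: log(E[count]) = log(exposure) + alpha + beta*x, i.e. the
# offset log(exposure) enters with its coefficient constrained to 1.
log_count = math.log(exposure) + log_rate

# Equivalently: expected count = rate * exposure.
assert abs(math.exp(log_count) - exposure * math.exp(log_rate)) < 1e-9
```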
Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead.[5][6] Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: if E(Y) = μ, the quasi-Poisson model assumes var(Y) = θμ, while the gamma-Poisson assumes var(Y) = μ(1 + κμ), where θ is the quasi-Poisson overdispersion parameter and κ is the shape parameter of the negative binomial distribution. For both models, parameters are estimated using iteratively reweighted least squares. For quasi-Poisson, the weights are μ/θ. For negative binomial, the weights are μ/(1 + κμ). With large μ and substantial extra-Poisson variation, the negative binomial weights are capped at 1/κ. Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals against the mean.[7] Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are zero events or any events, and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers. Other generalized linear models such as the negative binomial model or zero-inflated model may function better in these cases. Conversely, underdispersion may also pose an issue for parameter estimation.[8] Poisson regression creates proportional hazards models, one class of survival analysis: see proportional hazards models for descriptions of Cox models.
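The two variance assumptions diverge as the mean grows; a small numeric sketch (the θ and κ values are arbitrary):

```python
theta = 2.0      # quasi-Poisson overdispersion parameter (arbitrary)
kappa = 0.1      # negative binomial shape parameter (arbitrary)

def var_quasi(mu):
    return theta * mu              # quasi-Poisson: var(Y) = theta * mu

def var_negbin(mu):
    return mu * (1 + kappa * mu)   # gamma-Poisson: var(Y) = mu * (1 + kappa*mu)

# Quasi-Poisson variance is linear in the mean, gamma-Poisson quadratic;
# with these values they agree at mu = 10 and diverge beyond it.
assert abs(var_quasi(10) - 20) < 1e-9
assert abs(var_negbin(10) - 20) < 1e-9
assert var_negbin(100) > var_quasi(100)

# The negative binomial IRLS weight mu/(1 + kappa*mu) is capped near 1/kappa:
assert abs(1e5 / (1 + kappa * 1e5) - 1 / kappa) < 0.01
```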
When estimating the parameters for Poisson regression, one typically tries to find values for θ that maximize the likelihood of an expression of the form

∑_{i=1}^{m} log p(y_i; e^(θ′x_i)),

where m is the number of examples in the data set and p(y_i; e^(θ′x_i)) is the probability mass function of the Poisson distribution with the mean set to e^(θ′x_i). Regularization can be added to this optimization problem by instead maximizing[9]

∑_{i=1}^{m} log p(y_i; e^(θ′x_i)) − λ‖θ‖²₂,

for some positive constant λ. This technique, similar to ridge regression, can reduce overfitting.
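The effect of the ridge-style penalty can be seen on a one-parameter toy model: the penalized maximizer is pulled toward zero. A minimal sketch with made-up data, using a simple grid search rather than a proper optimizer:

```python
import math

# One-predictor, no-intercept toy data (counts grow with x).
X = [1.0, 2.0, 3.0]
Y = [3, 7, 21]

def log_lik(b):
    # Log-likelihood with the y! term dropped: sum(y*b*x - exp(b*x)).
    return sum(y * b * x - math.exp(b * x) for x, y in zip(X, Y))

def penalized(b, lam):
    return log_lik(b) - lam * b * b   # ridge-style penalty on the parameter

def argmax(f):
    grid = [i / 10_000.0 for i in range(0, 30_000)]   # search b in [0, 3)
    return max(grid, key=f)

b_mle = argmax(log_lik)
b_ridge = argmax(lambda b: penalized(b, lam=5.0))
# The penalty shrinks the estimate toward zero:
assert 0.0 < b_ridge < b_mle
```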
https://en.wikipedia.org/wiki/Poisson_regression
In survey methodology, Poisson sampling (sometimes denoted as PO sampling[1]: 61) is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample.[1]: 85[2] Each element of the population may have a different probability of being included in the sample (π_i). The probability of being included in a sample during the drawing of a single sample is denoted as the first-order inclusion probability of that element (p_i). If all first-order inclusion probabilities are equal, Poisson sampling becomes equivalent to Bernoulli sampling, which can therefore be considered to be a special case of Poisson sampling. Mathematically, the first-order inclusion probability of the i-th element of the population is denoted by the symbol π_i, and the second-order inclusion probability, the probability that a pair consisting of the i-th and j-th elements of the population is included in a sample during the drawing of a single sample, is denoted by π_ij. The following relation is valid during Poisson sampling when i ≠ j:

π_ij = π_i · π_j,

and π_ii is defined to be π_i.
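A Poisson sample is drawn with one independent Bernoulli trial per element; a minimal sketch with invented inclusion probabilities:

```python
import random

random.seed(42)

# First-order inclusion probabilities pi_i for a toy population of 6 elements.
pi = [0.5, 0.1, 0.9, 0.3, 0.7, 0.2]

def poisson_sample(pi):
    """One independent Bernoulli trial per element; returns selected indices."""
    return [i for i, p in enumerate(pi) if random.random() < p]

sample = poisson_sample(pi)
assert all(0 <= i < len(pi) for i in sample)

# The sample size is random; its long-run average is sum(pi_i).
n_draws = 20_000
avg_size = sum(len(poisson_sample(pi)) for _ in range(n_draws)) / n_draws
assert abs(avg_size - sum(pi)) < 0.05
```

Because inclusions are independent, the realized sample size varies from draw to draw, which is the main practical difference from fixed-size designs.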
https://en.wikipedia.org/wiki/Poisson_sampling
In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term "Poisson wavelet" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A. Kosanovich, Allan R. Moser and Michael J. Piovoso in 1995–96.[1][2] In another context, the term refers to a certain wavelet which involves a form of the Poisson integral kernel.[3] In still another context, the terminology is used to describe a family of complex wavelets indexed by positive integers which are connected with the derivatives of the Poisson integral kernel.[4] For each positive integer n, the Poisson wavelet ψ_n(t) is defined by

ψ_n(t) = ((t − n)/n!) t^(n−1) e^(−t) for t ≥ 0, and ψ_n(t) = 0 for t < 0.

To see the relation between the Poisson wavelet and the Poisson distribution, let X be a discrete random variable having the Poisson distribution with parameter (mean) t and, for each non-negative integer n, let Prob(X = n) = p_n(t). Then we have

p_n(t) = e^(−t) t^n / n!.

The Poisson wavelet ψ_n(t) is now given by

ψ_n(t) = −(d/dt) p_n(t).

The Poisson wavelet family can be used to construct the family of Poisson wavelet transforms of functions defined in the time domain. Since the Poisson wavelets also satisfy the admissibility condition, functions in the time domain can be reconstructed from their Poisson wavelet transforms using the formula for inverse continuous-time wavelet transforms. If f(t) is a function in the time domain, its n-th Poisson wavelet transform (W_n f)(a, b) is given by the continuous wavelet transform of f with ψ_n as the analyzing wavelet. In the reverse direction, given the n-th Poisson wavelet transform (W_n f)(a, b) of a function f(t) in the time domain, the function f(t) can be reconstructed via the corresponding inverse continuous wavelet transform formula. Poisson wavelet transforms have been applied in multi-resolution analysis, system identification, and parameter estimation.
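Assuming the standard definitions p_n(t) = t^n e^(−t)/n! and ψ_n(t) = ((t − n)/n!) t^(n−1) e^(−t) for t ≥ 0, the relation between the wavelet and the Poisson probabilities is a one-line differentiation:

```latex
\psi_n(t) = -\frac{d}{dt}\,p_n(t)
          = -\frac{d}{dt}\left(\frac{t^{n}e^{-t}}{n!}\right)
          = -\frac{n\,t^{n-1}e^{-t} - t^{n}e^{-t}}{n!}
          = \frac{t-n}{n!}\,t^{n-1}e^{-t}, \qquad t \ge 0.
```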
They are particularly useful in studying problems in which the functions in the time domain consist of linear combinations of decaying exponentials with time delay. In the second sense above, the Poisson wavelet is defined by the function[3]

ψ(t) = (1/π) (1 − t²)/(1 + t²)².

This can be expressed as a derivative of the Poisson integral kernel: ψ(t) = (d/dt)(t P(t)), where P(t) = (1/π) · 1/(1 + t²). The function P(t) appears as an integral kernel in the solution of a certain initial value problem of the Laplace operator. This is the initial value problem: given any s(x) in L^p(ℝ), find a harmonic function φ(x, y) defined in the upper half-plane such that φ(·, y) remains bounded in L^p(ℝ) and φ(·, y) → s as y → 0⁺. The problem has the following solution: there is exactly one function φ(x, y) satisfying the two conditions, and it is given by the convolution

φ(x, y) = (P_y ⋆ s)(x), where P_y(t) = (1/y) P(t/y) = (1/π) · y/(t² + y²)

and where "⋆" denotes the convolution operation. The function P_y(t) is the integral kernel for the function φ(x, y); the function φ(x, y) is the harmonic continuation of s(x) into the upper half-plane. In the third sense, the Poisson wavelet is a family of complex-valued functions indexed by the set of positive integers and defined, up to a normalizing constant, by[4][5]

ψ_n(t) ∝ (1 − it)^(−(n+1)),

so that ψ_n(t) can be expressed as a constant multiple of the n-th derivative of (1 − it)^(−1). Writing the function (1 − it)^(−1) in terms of the Poisson integral kernel P(t) = 1/(1 + t²) (here taken without the 1/π normalization) as (1 − it)^(−1) = P(t)(1 + it), we see that ψ_n(t) can be interpreted as a function proportional to the derivatives of the Poisson integral kernel. The Fourier transform of ψ_n(t) vanishes for negative frequencies and is proportional to u(ω) ω^n e^(−ω), where u(ω) is the unit step function.
https://en.wikipedia.org/wiki/Poisson_wavelet
Queueing theory is the mathematical study of waiting lines, or queues.[1] A queueing model is constructed so that queue lengths and waiting times can be predicted.[1] Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company.[1] These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunications, traffic engineering, computing,[2] project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.[3][4] The spelling "queueing" over "queuing" is typically encountered in the academic research field; in fact, one of the flagship journals of the field is Queueing Systems. Queueing theory is one of the major areas of study in the discipline of management science. Through management science, businesses are able to solve a variety of problems using different scientific and mathematical approaches.
Queueing analysis is the probabilistic analysis of waiting lines, and thus the results, also referred to as the operating characteristics, are probabilistic rather than deterministic.[5] The operating characteristics that these queueing models compute include: the probability that n customers are in the queueing system; the average number of customers in the queueing system; the average number of customers in the waiting line; the average time spent by a customer in the total queueing system; the average time spent by a customer in the waiting line; and the probability that the server is busy or idle.[5] The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing the values to the characteristics of the alternative systems allows managers to see the pros and cons of each potential option. These systems help in the final decision-making process by showing ways to increase savings, reduce waiting time, improve efficiency, and so on. The main queueing models that can be used are the single-server waiting line system and the multiple-server waiting line system, which are discussed further below. These models can be further differentiated depending on whether service times are constant or undefined, whether the queue length is finite, whether the calling population is finite, and so on.[5] A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive at the queue, possibly wait some time, take some time being processed, and then depart from the queue. However, the queueing node is not quite a pure black box, since some information is needed about the inside of the queueing node. The queue has one or more servers, each of which can be paired with an arriving job.
When the job is completed and departs, that server will again be free to be paired with another arriving job. An analogy often used is that of the cashier at a supermarket: customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. A setting where a customer will leave immediately if the cashier is busy when the customer arrives is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n. The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting, if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1. The system transitions between values of k by "births" and "deaths", which occur at the arrival rates λ_i and the departure rates μ_i for each job i. For a queue, these rates are generally considered not to vary with the number of jobs in the queue, so a single average rate of arrivals/departures per unit time is assumed. Under this assumption, this process has an arrival rate of λ = avg(λ_1, λ_2, …, λ_k) and a departure rate of μ = avg(μ_1, μ_2, …, μ_k). The steady-state equations for the birth-and-death process, known as the balance equations, are as follows, where P_n denotes the steady-state probability of being in state n:

λ_0 P_0 = μ_1 P_1,

(λ_n + μ_n) P_n = λ_{n−1} P_{n−1} + μ_{n+1} P_{n+1} for n ≥ 1.
The first two equations imply

P_1 = (λ_0/μ_1) P_0

and

P_2 = (λ_1/μ_2) P_1 = (λ_1 λ_0 / (μ_2 μ_1)) P_0.

By mathematical induction,

P_n = P_0 ∏_{i=0}^{n−1} (λ_i/μ_{i+1}).

The condition ∑_{n=0}^{∞} P_n = P_0 + P_0 ∑_{n=1}^{∞} ∏_{i=0}^{n−1} (λ_i/μ_{i+1}) = 1 leads to

P_0 = (1 + ∑_{n=1}^{∞} ∏_{i=0}^{n−1} (λ_i/μ_{i+1}))^(−1),

which, together with the equation for P_n (n ≥ 1), fully describes the required steady-state probabilities. Single queueing nodes are usually described using Kendall's notation in the form A/S/c, where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node.[6][7] For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times. Consider a queue with one server, arrival rate λ, and departure rate μ. Further, let E_n represent the number of times the system enters state n, and L_n represent the number of times the system leaves state n. Then |E_n − L_n| ∈ {0, 1} for all n. That is, the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return to that state at some time in the future (E_n = L_n) or not (|E_n − L_n| = 1). When the system arrives at a steady state, the arrival rate should be equal to the departure rate. Thus the balance equations imply

λ P_{n−1} = μ P_n for all n ≥ 1, that is, P_n = (λ/μ)^n P_0.

The fact that P_0 + P_1 + ⋯ = 1 leads to the geometric distribution formula

P_n = (1 − ρ) ρ^n,

where ρ = λ/μ < 1.
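The geometric form of the M/M/1 steady-state distribution is easy to check numerically; a short sketch with arbitrary rates λ < μ:

```python
# Steady-state distribution of the M/M/1 queue: P_n = (1 - rho) * rho**n.
lam, mu = 2.0, 5.0    # arrival and service rates (arbitrary, with lam < mu)
rho = lam / mu        # utilisation; rho < 1 is required for stability

def p(n):
    return (1 - rho) * rho ** n

# The probabilities form a geometric series summing to 1:
total = sum(p(n) for n in range(200))
assert abs(total - 1.0) < 1e-12

# Mean number of jobs in the system, L = rho / (1 - rho):
L = sum(n * p(n) for n in range(200))
assert abs(L - rho / (1 - rho)) < 1e-9
```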
A common basic queueing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, the length of the queue L is defined as

L = (λ − σ)/μ.

Assuming an exponential distribution for the rates, the waiting time W can be defined as the proportion of arrivals that are served. This is equal to the exponential survival rate of those who do not drop out over the waiting period, giving

μ/λ = e^(−σW).

The second equation is commonly rewritten as

W = (1/σ) ln(λ/μ).

The two-stage one-box model is common in epidemiology.[8] In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory.[9][10][11] He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queueing model in 1920.[12] In Kendall's notation, M stands for "Markov" (memoryless, i.e. Poisson) arrivals, D for deterministic service times, and the final number for the number of servers. If the node has more jobs than servers, then jobs will queue and wait for service. The M/G/1 queue was solved by Felix Pollaczek in 1930,[13] a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula.[12][14] After the 1940s, queueing theory became an area of research interest to mathematicians.[14] In 1953, David George Kendall solved the GI/M/k queue[15] and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 queue using an integral equation.[16] John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula.[17] Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered.[18] Systems with coupled orbits are an important part of queueing theory in the application to wireless networks and signal processing.[19] Modern day application of queueing theory concerns, among other things, product development where (material) products have a spatiotemporal existence, in the sense that products have a certain volume and a certain duration.[20] Problems such as performance metrics for the M/G/k queue remain an open problem.[12][14] Various scheduling policies can be used at queueing nodes. Server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable; the interrupted customer remains in the service area until the server is fixed.[27] Arriving customers who are not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue. Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network. For networks of m nodes, the state of the system can be described by an m-dimensional vector (x_1, x_2, …, x_m), where x_i represents the number of customers at each node.
The simplest non-trivial networks of queues are called tandem queues.[28] The first significant results in this area were Jackson networks,[29][30] for which an efficient product-form stationary distribution exists and for which the mean value analysis[31] (which allows average metrics such as throughput and sojourn times to be computed) applies.[32] If the total number of customers in the network remains constant, the network is called a closed network, and it has been shown to also have a product-form stationary distribution by the Gordon–Newell theorem.[33] This result was extended to the BCMP network,[34] where a network with very general service times, regimes, and customer routing is shown to also exhibit a product-form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973.[35] Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes.[36] Another type of network is the G-network, first proposed by Erol Gelenbe in 1993:[37] these networks do not assume exponential time distributions like the classic Jackson network. In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single service node.[21] In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network.[38] Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. The impact of other queues on any given queue in the network is approximated by a differential equation.
The deterministic model converges to the same stationary distribution as the original model.[39] In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queue length process by a reflected Brownian motion,[40] an Ornstein–Uhlenbeck process, or a more general diffusion process.[41] The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant. Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit.[42] Queueing theory finds widespread application in computer science and information technology. In networking, for instance, queues are integral to routers and switches, where packets queue up for transmission. By applying queueing theory principles, designers can optimize these systems, ensuring responsive performance and efficient resource utilization. Beyond the technological realm, queueing theory is relevant to everyday experiences. Whether waiting in line at a supermarket or for public transportation, understanding the principles of queueing theory provides valuable insights into optimizing these systems for enhanced user satisfaction. At some point, everyone will be involved in an aspect of queueing, and what some may view as an inconvenience may in fact be the most effective method of organization. Queueing theory, a discipline rooted in applied mathematics and computer science, is a field dedicated to the study and analysis of queues, or waiting lines, and their implications across a diverse range of applications.
This theoretical framework has proven instrumental in understanding and optimizing the efficiency of systems characterized by the presence of queues. The study of queues is essential in contexts such as traffic systems, computer networks, telecommunications, and service operations. Queueing theory delves into various foundational concepts, with the arrival process and service process being central. The arrival process describes the manner in which entities join the queue over time, often modeled using stochastic processes like Poisson processes. The efficiency of queueing systems is gauged through key performance metrics. These include the average queue length, average wait time, and system throughput. These metrics provide insights into the system's functionality, guiding decisions aimed at enhancing performance and reducing wait times.[43][44][45]
https://en.wikipedia.org/wiki/Queueing_theory
Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite expectation. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times. A renewal process has asymptotic properties analogous to the strong law of large numbers and the central limit theorem. The renewal function m(t) (expected number of arrivals) and the reward function g(t) (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m′(t) with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes. Applications include calculating the best strategy for replacing worn-out machinery in a factory; comparing the long-term benefits of different insurance policies; and modelling the transmission of infectious disease, where "one of the most widely adopted means of inference of the reproduction number is via the renewal equation".[1] The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval. The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i before advancing to the next integer, i + 1.
In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean. Let (S_i)_{i≥1} be a sequence of positive independent identically distributed random variables with finite expected value

0 < E[S_i] < ∞.

We refer to the random variable S_i as the "i-th holding time". Define for each n > 0:

J_n = ∑_{i=1}^{n} S_i;

each J_n is referred to as the "n-th jump time", and the intervals [J_n, J_{n+1}] are called "renewal intervals". Then (X_t)_{t≥0} is given by the random variable

X_t = ∑_{n=1}^{∞} 𝕀{J_n ≤ t},

where 𝕀{J_n ≤ t} is the indicator function. (X_t)_{t≥0} represents the number of jumps that have occurred by time t, and is called a renewal process. If one considers events occurring at random times, one may choose to think of the holding times {S_i : i ≥ 1} as the random time elapsed between two consecutive events. For example, if the renewal process is modelling the numbers of breakdowns of different machines, then the holding time represents the time between one machine breaking down and another one doing so. The Poisson process is the unique renewal process with the Markov property,[2] as the exponential distribution is the unique continuous random variable with the property of memorylessness. Let W_1, W_2, … be a sequence of IID random variables (rewards) satisfying E|W_i| < ∞. Then the random variable

Y_t = ∑_{i=1}^{X_t} W_i

is called a renewal-reward process. Note that unlike the S_i, each W_i may take negative values as well as positive values.
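The definition can be simulated directly; a minimal sketch using uniform (hence non-exponential) holding times with mean 1, illustrating that X_t/t approaches 1/E[S_i] for large t:

```python
import random

random.seed(7)

# Holding times: uniform on (0.5, 1.5), so E[S_i] = 1 but not exponential.
def holding_time():
    return random.uniform(0.5, 1.5)

def count_renewals(t_end):
    """X_t: number of jump times J_n = S_1 + ... + S_n with J_n <= t_end."""
    t, n = 0.0, 0
    while True:
        t += holding_time()
        if t > t_end:
            return n
        n += 1

# Long-run behaviour: X_t / t -> 1 / E[S_i] = 1.
rate = count_renewals(10_000.0) / 10_000.0
assert abs(rate - 1.0) < 0.02
```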
The random variable Y_t depends on two sequences: the holding times S_1, S_2, … and the rewards W_1, W_2, … These two sequences need not be independent. In particular, W_i may be a function of S_i. In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" W_1, W_2, … (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions. An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as S_i. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" W_i are the successive (random) financial losses/gains resulting from successive eggs (i = 1, 2, 3, ...) and Y_t records the total financial "reward" at time t. We define the renewal function as the expected value of the number of jumps observed up to some time t:

m(t) = E[X_t].

The renewal function satisfies the elementary renewal theorem:

m(t)/t → 1/E[S_1] as t → ∞.

To prove the elementary renewal theorem, it is sufficient to show that {X_t/t ; t ≥ 0} is uniformly integrable. To do this, consider some truncated renewal process where the holding times are defined by S̄_n = a I{S_n > a}, where a is a point such that 0 < F(a) = p < 1, which exists for all non-deterministic renewal processes. This new renewal process X̄_t is an upper bound on X_t, and its renewals can only occur on the lattice {na ; n ∈ ℕ}.
Furthermore, the number of renewals at each lattice point is geometric with parameter p, which bounds E[(X̄_t/t)²] and hence gives the required uniform integrability of {X_t/t}. We define the reward function as the expected reward value at time t:

g(t) = E[Y_t].

The reward function satisfies

g(t)/t → E[W_1]/E[S_1] as t → ∞.

The renewal function satisfies the renewal equation

m(t) = F_S(t) + ∫₀ᵗ m(t − s) f_S(s) ds,

where F_S is the cumulative distribution function of S_1 and f_S is the corresponding probability density function. This follows from the definition of the renewal process by conditioning on the first holding time S_1 = s: either s > t and no jump has occurred, or s ≤ t and the process restarts from time s, as required. Let X be a renewal process with renewal function m(t) and interrenewal mean μ. Let g : [0,∞) → [0,∞) be a directly Riemann integrable function (it suffices, for example, that g be monotone with finite integral). The key renewal theorem states that, as t → ∞:[4]

∫₀ᵗ g(t − s) m′(s) ds → (1/μ) ∫₀^∞ g(x) dx.

Considering g(x) = I_{[0,h]}(x) for any h > 0 gives as a special case the renewal theorem:[5]

m(t + h) − m(t) → h/μ as t → ∞.

The result can be proved using integral equations or by a coupling argument.[6] Though a special case of the key renewal theorem, it can be used to deduce the full theorem, by considering step functions and then increasing sequences of step functions.[4] Renewal processes and renewal-reward processes have properties analogous to the strong law of large numbers, which can be derived from the same theorem. If (X_t)_{t≥0} is a renewal process and (Y_t)_{t≥0} is a renewal-reward process then, almost surely,

X_t/t → 1/E[S_1] and Y_t/t → E[W_1]/E[S_1] as t → ∞.

To see this, note that J_{X_t} ≤ t ≤ J_{X_t+1} for all t ≥ 0, and so

J_{X_t}/X_t ≤ t/X_t ≤ J_{X_t+1}/X_t

for all t ≥ 0. Now since 0 < E[S_i] < ∞ we have X_t → ∞ as t → ∞ almost surely (with probability 1). Hence

J_{X_t}/X_t → E[S_1]

almost surely (using the strong law of large numbers); similarly

J_{X_t+1}/X_t → E[S_1]

almost surely. Thus (since t/X_t is sandwiched between the two terms) t/X_t → E[S_1], that is, X_t/t → 1/E[S_1], almost surely.[4] Next consider (Y_t)_{t≥0}. We have

Y_t/t = (Y_t/X_t)(X_t/t) → E[W_1] · 1/E[S_1]

almost surely (using the first result and using the law of large numbers on Y_t).
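The almost-sure limits X_t/t → 1/E[S_1] and Y_t/t → E[W_1]/E[S_1] are easy to observe numerically. A Python sketch, with exponential holding times and the deliberately dependent rewards W_i = S_i² as illustrative assumptions:

```python
import random

# Renewal-reward simulation: holding times S ~ Exp(2) (mean 0.5) and rewards
# W = S**2, a choice that depends on S, as the definition explicitly allows.
rng = random.Random(0)
t_max = 200_000.0
clock, jumps, total_reward = 0.0, 0, 0.0
while True:
    s = rng.expovariate(2.0)
    if clock + s > t_max:
        break
    clock += s
    jumps += 1
    total_reward += s * s

rate = jumps / t_max                  # should approach 1 / E[S] = 2
reward_rate = total_reward / t_max    # should approach E[S**2] / E[S] = 0.5 / 0.5 = 1
print(rate, reward_rate)
```

For Exp(2) holding times, E[S] = 1/2 and E[S²] = 2/2² = 1/2, so the two long-run rates should settle near 2 and 1 respectively.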
Renewal processes additionally have a property analogous to the central limit theorem:[7]

(X_t − t/μ) / √(σ² t / μ³) → N(0, 1) in distribution as t → ∞,

where μ and σ² are the mean and variance of the holding times. A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size. Mathematically the inspection paradox states: for any t > 0, the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

P(S_{X_t+1} > x) ≥ P(S_1 > x) = 1 − F_S(x),

where F_S is the cumulative distribution function of the IID holding times S_i. A vivid example is the bus waiting time paradox: for a given random distribution of bus arrivals, the average rider at a bus stop observes more delays than the average operator of the buses. The resolution of the paradox is that our sampled distribution at time t is size-biased (see sampling bias), in that the likelihood an interval is chosen is proportional to its size. However, a renewal interval of average size is not size-biased. The inequality above holds since both (1 − F(x))/(1 − F(t − s)) and 1 are greater than or equal to 1 − F(x) for all values of s. Unless the renewal process is a Poisson process, the superposition (sum) of two independent renewal processes is not a renewal process.[8] However, such processes can be described within a larger class of processes called the Markov-renewal processes.[9] Nonetheless, the cumulative distribution function of the first inter-event time in the superposition process can be given in terms of R_k(t) and α_k > 0, the CDF of the inter-event times and the arrival rate of process k.[10][11] Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails, with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional, at a cost of €200. What is his optimal replacement policy?
If Eric decides at the start of a machine's life to replace it at time 0 < t < 2, but the machine happens to fail before that time, then the lifetime S of the machine is uniformly distributed on [0, t] and thus has expectation 0.5t. Since the machine fails before t with probability t/2, the overall expected lifetime of the machine is

E[lifetime] = (t/2)(t/2) + t(1 − t/2) = t − t²/4,

and the expected cost W per machine is

E[W] = 2600(t/2) + 200(1 − t/2) = 200 + 1200t.

So by the strong law of large numbers, his long-term average cost per unit time is

(200 + 1200t) / (t − t²/4);

then differentiating with respect to t, the turning points satisfy

1200(t − t²/4) − (200 + 1200t)(1 − t/2) = 0,

and thus 300t² + 100t − 200 = 0, that is, (3t − 2)(t + 1) = 0. We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum), since the cost per unit time tends to infinity as t tends to zero, meaning that the cost is decreasing as t increases, until the point 2/3 where it starts to increase.
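The optimisation above can be checked numerically by evaluating the derived cost-rate function on a grid (the grid resolution below is an arbitrary choice):

```python
# Long-run average cost per unit time from the renewal-reward argument above:
# c(t) = E[W] / E[lifetime] = (200 + 1200*t) / (t - t**2/4) for 0 < t < 2.
def cost_rate(t):
    return (200 + 1200 * t) / (t - t * t / 4)

grid = [i / 10000 for i in range(1, 20000)]   # 0.0001, 0.0002, ..., 1.9999
t_star = min(grid, key=cost_rate)
print(t_star, cost_rate(t_star))   # minimum near t = 2/3, where c(t) = 1800 per year
```

At t = 2/3 the cost rate is 1000/(5/9) = €1800 per year, matching the analytic minimum.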
https://en.wikipedia.org/wiki/Renewal_theory
In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E(f(X)) exists, then[1]

E[X f(X − 1)] = λ E[f(X)].

Robbins introduced this proposition while developing empirical Bayes methods.
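The identity can be verified numerically by truncating the Poisson sums; the test function f below is an arbitrary illustrative choice:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 3.7
f = lambda k: 1.0 / (k + 1)   # an arbitrary bounded test function
N = 100                       # truncation point; the neglected tail mass is negligible
lhs = sum(k * f(k - 1) * poisson_pmf(k, lam) for k in range(1, N))
rhs = lam * sum(f(k) * poisson_pmf(k, lam) for k in range(N))
print(lhs, rhs)               # the two sides agree
```

For this particular f both sides reduce analytically to 1 − e^{−λ}, which the truncated sums reproduce to floating-point accuracy.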
https://en.wikipedia.org/wiki/Robbins_lemma
The Skellam distribution is the discrete probability distribution of the difference N_1 − N_2 of two statistically independent random variables N_1 and N_2, each Poisson-distributed with respective expected values μ_1 and μ_2. It is useful in describing the statistics of the difference of two images with simple photon noise, as well as describing the point spread distribution in sports where all scored points are equal, such as baseball, hockey and soccer. The distribution is also applicable to a special case of the difference of dependent Poisson random variables, but only the obvious case where the two variables have a common additive random contribution which is cancelled by the differencing: see Karlis & Ntzoufras (2003) for details and an application. The probability mass function for the Skellam distribution for a difference K = N_1 − N_2 between two independent Poisson-distributed random variables with means μ_1 and μ_2 is given by:

p(k; μ_1, μ_2) = e^{−(μ_1+μ_2)} (μ_1/μ_2)^{k/2} I_k(2√(μ_1 μ_2)),

where I_k(z) is the modified Bessel function of the first kind. Since k is an integer we have that I_k(z) = I_{|k|}(z). The probability mass function of a Poisson-distributed random variable with mean μ is given by

p(k; μ) = μ^k e^{−μ} / k!

for k ≥ 0 (and zero otherwise). The Skellam probability mass function for the difference of two independent counts K = N_1 − N_2 is the convolution of two Poisson distributions: (Skellam, 1946)

p(k; μ_1, μ_2) = Σ_n p(k + n; μ_1) p(n; μ_2).

Since the Poisson distribution is zero for negative values of the count (p(N < 0; μ) = 0), the sum is only taken over those terms where n ≥ 0 and n + k ≥ 0. It can be shown that the above sum yields the closed form stated earlier,

p(k; μ_1, μ_2) = e^{−(μ_1+μ_2)} (μ_1/μ_2)^{k/2} I_k(2√(μ_1 μ_2)),

where I_k(z) is the modified Bessel function of the first kind.
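The convolution form of the probability mass function can be evaluated directly, avoiding the Bessel function altogether. A Python sketch (truncation limits are illustrative choices) that also checks normalization and the first two moments:

```python
import math

def poisson_pmf(n, mu):
    return math.exp(-mu) * mu ** n / math.factorial(n)

def skellam_pmf(k, mu1, mu2, terms=80):
    # Convolution form: P(K = k) = sum over n >= max(0, -k) of p(n + k; mu1) p(n; mu2)
    return sum(poisson_pmf(n + k, mu1) * poisson_pmf(n, mu2)
               for n in range(max(0, -k), terms))

mu1, mu2 = 2.5, 1.0
pmf = {k: skellam_pmf(k, mu1, mu2) for k in range(-30, 31)}
total = sum(pmf.values())
mean = sum(k * p for k, p in pmf.items())
var = sum(k * k * p for k, p in pmf.items()) - mean ** 2
print(total, mean, var)   # approx 1, mu1 - mu2 = 1.5, mu1 + mu2 = 3.5
```

The support range and the number of convolution terms only need to be large enough that the neglected Poisson tails are negligible.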
The special case for μ_1 = μ_2 (= μ) is given by Irwin (1937):

p(k; μ, μ) = e^{−2μ} I_{|k|}(2μ).

Using the limiting values of the modified Bessel function for small arguments, we can recover the Poisson distribution as a special case of the Skellam distribution for μ_2 = 0. As it is a discrete probability function, the Skellam probability mass function is normalized:

Σ_{k=−∞}^{∞} p(k; μ_1, μ_2) = 1.

We know that the probability generating function (pgf) for a Poisson distribution is:

G(t; μ) = e^{μ(t−1)}.

It follows that the pgf, G(t; μ_1, μ_2), for a Skellam probability mass function will be:

G(t; μ_1, μ_2) = G(t; μ_1) G(1/t; μ_2) = e^{−(μ_1+μ_2) + μ_1 t + μ_2/t}.

Notice that the form of the probability-generating function implies that the distribution of the sums or the differences of any number of independent Skellam-distributed variables is again Skellam-distributed. It is sometimes claimed that any linear combination of two Skellam-distributed variables is again Skellam-distributed, but this is clearly not true, since any multiplier other than ±1 would change the support of the distribution and alter the pattern of moments in a way that no Skellam distribution can satisfy. The moment-generating function is given by:

M(t; μ_1, μ_2) = G(e^t; μ_1, μ_2) = e^{−(μ_1+μ_2) + μ_1 e^t + μ_2 e^{−t}},

which yields the raw moments m_k and, from them, the central moments M_k. The mean, variance, skewness, and kurtosis excess are respectively:

E[K] = μ_1 − μ_2, var(K) = μ_1 + μ_2, γ_1 = (μ_1 − μ_2)/(μ_1 + μ_2)^{3/2}, γ_2 = 1/(μ_1 + μ_2).

The cumulant-generating function is given by:

K(t; μ_1, μ_2) = ln M(t; μ_1, μ_2) = −(μ_1+μ_2) + μ_1 e^t + μ_2 e^{−t},

which yields the cumulants: κ_{2k} = μ_1 + μ_2 and κ_{2k+1} = μ_1 − μ_2. For the special case when μ_1 = μ_2, an asymptotic expansion of the modified Bessel function of the first kind yields for large μ:

p(k; μ, μ) ≈ e^{−k²/(4μ)} / √(4πμ)

(Abramowitz & Stegun 1972, p. 377). Also, for this special case, when k is also large, and of order of the square root of 2μ, the distribution tends to a normal distribution:

p(k; μ_1, μ_2) ≈ N(μ_1 − μ_2, μ_1 + μ_2).

These special results can easily be extended to the more general case of different means. If X ∼ Skellam(μ_1, μ_2), with μ_1 < μ_2, then

Pr{X ≥ 0} ≤ e^{−(√μ_1 − √μ_2)²}.

Details can be found in Poisson distribution#Poisson races.
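The closure of the family under sums of independent Skellam variables, asserted above from the form of the probability-generating function, can be checked by explicit convolution. A Python sketch with arbitrarily chosen parameters:

```python
import math

def poisson_pmf(n, mu):
    return math.exp(-mu) * mu ** n / math.factorial(n)

def skellam_pmf(k, mu1, mu2, terms=60):
    return sum(poisson_pmf(n + k, mu1) * poisson_pmf(n, mu2)
               for n in range(max(0, -k), terms))

a1, a2, b1, b2 = 1.2, 0.4, 0.8, 1.1
K = 20
# PMF of the sum of independent Skellam(a1, a2) and Skellam(b1, b2), by convolution...
conv = {k: sum(skellam_pmf(j, a1, a2) * skellam_pmf(k - j, b1, b2)
               for j in range(-K, K + 1))
        for k in range(-10, 11)}
# ...compared against the claimed result Skellam(a1 + b1, a2 + b2).
direct = {k: skellam_pmf(k, a1 + b1, a2 + b2) for k in range(-10, 11)}
err = max(abs(conv[k] - direct[k]) for k in conv)
print(err)
```

The maximum pointwise discrepancy is at the level of the truncation error, consistent with the pgf argument G_a(t) G_b(t) = G_{a+b}(t).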
https://en.wikipedia.org/wiki/Skellam_distribution
In probability and statistics, the Tweedie distributions are a family of probability distributions which include the purely continuous normal, gamma and inverse Gaussian distributions, the purely discrete scaled Poisson distribution, and the class of compound Poisson–gamma distributions which have positive mass at zero, but are otherwise continuous.[1] Tweedie distributions are a special case of exponential dispersion models and are often used as distributions for generalized linear models.[2] The Tweedie distributions were named by Bent Jørgensen in[3] after Maurice Tweedie,[4] a statistician and medical physicist at the University of Liverpool, UK, who presented the first thorough study of these distributions in 1982 when the conference[1] was held. Around the same time, Bar-Lev and Enis published about the same topic.[5][6] The (reproductive) Tweedie distributions are defined as a subfamily of (reproductive) exponential dispersion models (ED), with a special mean–variance relationship. A random variable Y is Tweedie distributed Tw_p(μ, σ²) if Y ∼ ED(μ, σ²) with mean μ = E(Y), positive dispersion parameter σ² and

Var(Y) = σ² μ^p,

where p ∈ ℝ is called the Tweedie power parameter. The probability distribution P_{θ,σ²} on the measurable sets A is given by

P_{θ,σ²}(Y ∈ A) = ∫_A exp((θ·z − κ_p(θ))/σ²) ν_λ(dz),

for some σ-finite measure ν_λ.
This representation uses the canonical parameter θ of an exponential dispersion model and the cumulant function

κ_p(θ) = ((α−1)/α) (θ/(α−1))^α for p ≠ 1, 2; κ_2(θ) = −log(−θ); κ_1(θ) = e^θ,

where we used α = (p−2)/(p−1), or equivalently p = (α−2)/(α−1). The models just described are in the reproductive form. An exponential dispersion model always has a dual: the additive form. If Y is reproductive, then Z = λY with λ = 1/σ² is in the additive form ED*(θ, λ), for Tweedie Tw*_p(μ, λ). Additive models have the property that the distribution of the sum of independent random variables,

Z_+ = Z_1 + ⋯ + Z_n,

for which Z_i ∼ ED*(θ, λ_i) with fixed θ and various λ, is a member of the family of distributions with the same θ:

Z_+ ∼ ED*(θ, λ_1 + ⋯ + λ_n).

A second class of exponential dispersion models exists, designated by the random variable

Y = Z/λ ∼ ED(μ, σ²),

where σ² = 1/λ, known as reproductive exponential dispersion models. They have the property that for n independent random variables Y_i ∼ ED(μ, σ²/w_i), with weighting factors w_i and w = Σ_{i=1}^n w_i, a weighted average of the variables gives

w^{−1} Σ_{i=1}^n w_i Y_i ∼ ED(μ, σ²/w).

For reproductive models the weighted average of independent random variables with fixed μ and σ² and various values for w_i is a member of the family of distributions with the same μ and σ².
The Tweedie exponential dispersion models are both additive and reproductive; we thus have the duality transformation

Y ↦ Z = Y/σ².

A third property of the Tweedie models is that they are scale invariant: for a reproductive exponential dispersion model Tw_p(μ, σ²) and any positive constant c we have the property of closure under scale transformation,

c Tw_p(μ, σ²) = Tw_p(cμ, c^{2−p} σ²).

To define the variance function for exponential dispersion models we make use of the mean value mapping, the relationship between the canonical parameter θ and the mean μ. It is defined by the function

τ(θ) = κ′(θ) = μ,

where κ(θ) is the cumulant function. The variance function V(μ) is constructed from the mean value mapping,

V(μ) = τ′[τ^{−1}(μ)].

Here the minus exponent in τ^{−1}(μ) denotes an inverse function rather than a reciprocal. The mean and variance of an additive random variable are then E(Z) = λμ and var(Z) = λV(μ).
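The defining mean–variance relationship Var(Y) = σ²μ^p can be illustrated by simulating the compound Poisson–gamma case 1 < p < 2. The parameter conversion used below — Poisson rate λ = μ^{2−p}/(σ²(2−p)), gamma shape α = (2−p)/(p−1), gamma scale σ²(p−1)μ^{p−1} — is the standard one for this range of p; the simulation settings and function names are illustrative choices:

```python
import math
import random

def poisson(lam, rng):
    # Poisson(lam) sampled by counting unit-rate exponential arrivals (fine for small lam).
    n, t = 0, rng.expovariate(1.0)
    while t < lam:
        n += 1
        t += rng.expovariate(1.0)
    return n

def tweedie_cpg(mu, sigma2, p, rng):
    """One compound Poisson-gamma draw with E = mu and Var = sigma2 * mu**p, 1 < p < 2."""
    alpha = (2 - p) / (p - 1)                 # gamma shape
    lam = mu ** (2 - p) / (sigma2 * (2 - p))  # Poisson rate
    scale = sigma2 * (p - 1) * mu ** (p - 1)  # gamma scale
    return sum(rng.gammavariate(alpha, scale) for _ in range(poisson(lam, rng)))

rng = random.Random(7)
p, sigma2, trials = 1.5, 1.0, 20000
pts = []
for mu in (1.0, 2.0, 4.0, 8.0):
    xs = [tweedie_cpg(mu, sigma2, p, rng) for _ in range(trials)]
    m = sum(xs) / trials
    v = sum((x - m) ** 2 for x in xs) / trials
    pts.append((math.log(m), math.log(v)))
# Least-squares slope of log-variance against log-mean; should be close to p = 1.5.
n = len(pts)
sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(slope)
```

The fitted log-log slope recovers the power parameter p rather than the trivial slopes 1 or 2 that arise when only the Poisson rate or only the gamma scale is varied.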
Scale invariance implies that the variance function obeys the relationshipV(μ) =μp.[2] The unitdevianceof a reproductive Tweedie distribution is given byd(y,μ)={(y−μ)2,forp=02(ylog⁡(y/μ)+μ−y),forp=12(log⁡(μ/y)+y/μ−1),forp=22(max(y,0)2−p(1−p)(2−p)−yμ1−p1−p+μ2−p2−p),else{\displaystyle d(y,\mu )={\begin{cases}(y-\mu )^{2},&{\text{for }}p=0\\2(y\log(y/\mu )+\mu -y),&{\text{for }}p=1\\2(\log(\mu /y)+y/\mu -1),&{\text{for }}p=2\\2\left({\frac {\max(y,0)^{2-p}}{(1-p)(2-p)}}-{\frac {y\mu ^{1-p}}{1-p}}+{\frac {\mu ^{2-p}}{2-p}}\right),&{\text{else}}\end{cases}}} The properties of exponential dispersion models give us twodifferential equations.[2]The first relates the mean value mapping and the variance function to each other,∂τ−1(μ)∂μ=1V(μ).{\displaystyle {\frac {\partial \tau ^{-1}(\mu )}{\partial \mu }}={\frac {1}{V(\mu )}}.} The second shows how the mean value mapping is related to thecumulant function,∂κ(θ)∂θ=τ(θ).{\displaystyle {\frac {\partial \kappa (\theta )}{\partial \theta }}=\tau (\theta ).} These equations can be solved to obtain the cumulant function for different cases of the Tweedie models. A cumulant generating function (CGF) may then be obtained from the cumulant function. The additive CGF is generally specified by the equationK∗(s)=log⁡[E⁡(esZ)]=λ[κ(θ+s)−κ(θ)],{\displaystyle K^{*}(s)=\log[\operatorname {E} (e^{sZ})]=\lambda [\kappa (\theta +s)-\kappa (\theta )],}and the reproductive CGF byK(s)=log⁡[E⁡(esY)]=λ[κ(θ+s/λ)−κ(θ)],{\displaystyle K(s)=\log[\operatorname {E} (e^{sY})]=\lambda [\kappa (\theta +s/\lambda )-\kappa (\theta )],}wheresis the generating function variable. 
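The unit deviance above translates directly into code, and the p = 1 and p = 2 branches can be checked numerically to be the limits of the general expression (a sketch; the function name and test values are illustrative):

```python
import math

def tweedie_deviance(y, mu, p):
    """Unit deviance d(y, mu) of a reproductive Tweedie distribution, as given above."""
    if p == 0:
        return (y - mu) ** 2
    if p == 1:
        return 2 * (y * math.log(y / mu) + mu - y)
    if p == 2:
        return 2 * (math.log(mu / y) + y / mu - 1)
    return 2 * (max(y, 0) ** (2 - p) / ((1 - p) * (2 - p))
                - y * mu ** (1 - p) / (1 - p)
                + mu ** (2 - p) / (2 - p))

print(tweedie_deviance(3.0, 2.0, 0))   # (3 - 2)**2 = 1.0
# The general branch evaluated just off p = 1 and p = 2 approaches the special branches:
print(tweedie_deviance(3.0, 2.0, 1), tweedie_deviance(3.0, 2.0, 1 + 1e-7))
print(tweedie_deviance(3.0, 2.0, 2), tweedie_deviance(3.0, 2.0, 2 + 1e-7))
```

Note that at p = 0 the general expression also reduces exactly to (y − μ)² for y ≥ 0, so the piecewise definition is continuous in p.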
For the additive Tweedie models the CGFs take the form

K*_p(s; θ, λ) = λ κ_p(θ) [(1 + s/θ)^α − 1] for p ≠ 1, 2; −λ log(1 + s/θ) for p = 2; λ e^θ (e^s − 1) for p = 1,

and for the reproductive models,

K_p(s; θ, λ) = λ κ_p(θ) {[1 + s/(θλ)]^α − 1} for p ≠ 1, 2; −λ log[1 + s/(θλ)] for p = 2; λ e^θ (e^{s/λ} − 1) for p = 1.

The additive and reproductive Tweedie models are conventionally denoted by the symbols Tw*_p(θ, λ) and Tw_p(θ, σ²), respectively. The first and second derivatives of the CGFs, evaluated at s = 0, yield the mean and variance, respectively. One can thus confirm that for the additive models the variance relates to the mean by the power law

var(Z) ∝ E(Z)^p.

The Tweedie exponential dispersion models are fundamental in statistical theory consequent to their roles as foci of convergence for a wide range of statistical processes. Jørgensen et al proved a theorem that specifies the asymptotic behaviour of variance functions, known as the Tweedie convergence theorem.[7] This theorem, in technical terms, is stated thus:[2] The unit variance function is regular of order p at zero (or infinity) provided that V(μ) ~ c_0 μ^p for μ as it approaches zero (or infinity), for all real values of p and c_0 > 0.
Then for a unit variance function regular of order p at either zero or infinity and for p ∉ (0, 1), for any μ > 0 and σ² > 0, we have

c^{−1} ED(cμ, σ² c^{2−p}) → Tw_p(μ, c_0 σ²)

as c ↓ 0 or c → ∞, respectively, where the convergence is through values of c such that cμ is in the domain of θ and c^{p−2}/σ² is in the domain of λ. The model must be infinitely divisible as c^{2−p} approaches infinity.[2] In nontechnical terms this theorem implies that any exponential dispersion model that asymptotically manifests a variance-to-mean power law is required to have a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behaviour, and the Tweedie distributions become foci of convergence for a wide range of data types.[8] The Tweedie distributions include a number of familiar distributions as well as some unusual ones, each being specified by the domain of the index parameter: the normal distribution (p = 0), the Poisson distribution (p = 1), the compound Poisson–gamma distribution (1 < p < 2), the gamma distribution (p = 2), the positive stable distributions (2 < p < 3 and p > 3), the inverse Gaussian distribution (p = 3), and the extreme stable distributions (p = ∞). For 0 < p < 1 no Tweedie model exists. Note that all stable distributions here mean distributions actually generated by stable distributions. Taylor's law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power-law relationship.[9] For the population count Y with mean μ and variance var(Y), Taylor's law is written

var(Y) = a μ^p,

where a and p are both positive constants. Since L. R.
Taylor described this law in 1961 there have been many different explanations offered to explain it, ranging from animal behavior,[9]arandom walkmodel,[10]astochastic birth, death, immigration and emigration model,[11]to a consequence of equilibrium and non-equilibriumstatistical mechanics.[12]No consensus exists as to an explanation for this model. Since Taylor's law is mathematically identical to the variance-to-mean power law that characterizes the Tweedie models, it seemed reasonable to use these models and the Tweedie convergence theorem to explain the observed clustering of animals and plants associated with Taylor's law.[13][14]The majority of the observed values for the power-law exponentphave fallen in the interval (1,2) and so the Tweedie compound Poisson–gamma distribution would seem applicable. Comparison of theempirical distribution functionto the theoretical compound Poisson–gamma distribution has provided a means to verify consistency of this hypothesis.[13] Whereas conventional models for Taylor's law have tended to involvead hocanimal behavioral orpopulation dynamicassumptions, the Tweedie convergence theorem would imply that Taylor's law results from a general mathematical convergence effect much as how thecentral limit theoremgoverns the convergence behavior of certain types of random data. Indeed, any mathematical model, approximation or simulation that is designed to yield Taylor's law (on the basis of this theorem) is required to converge to the form of the Tweedie models.[8] Pink noise, or 1/fnoise, refers to a pattern of noise characterized by a power-law relationship between its intensitiesS(f) at different frequenciesf,S(f)∝1fγ,{\displaystyle S(f)\propto {\frac {1}{f^{\gamma }}},}where the dimensionless exponentγ∈ [0,1]. 
It is found within a diverse number of natural processes.[15] Many different explanations for 1/f noise exist; a widely held hypothesis is based on self-organized criticality, where dynamical systems close to a critical point are thought to manifest scale-invariant spatial and/or temporal behavior. In this subsection a mathematical connection between 1/f noise and the Tweedie variance-to-mean power law will be described. To begin, we first need to introduce self-similar processes. For the sequence of numbers Y = (Y_i : i = 0, 1, 2, …, N) with mean μ̂ = E(Y_i), deviations y_i = Y_i − μ̂, variance σ̂² = E(y_i²), and autocorrelation function

r(k) = E(y_i y_{i+k}) / E(y_i²)

with lag k, if the autocorrelation of this sequence has the long-range behavior

r(k) ~ k^{−d} L(k)

as k → ∞, where L(k) is a slowly varying function at large values of k, this sequence is called a self-similar process.[16] The method of expanding bins can be used to analyze self-similar processes.
Consider a set of equal-sized non-overlapping bins that divides the original sequence ofNelements into groups ofmequal-sized segments (N/mis integer) so that new reproductive sequences, based on the mean values, can be defined:Yi(m)=(Yim−m+1+⋯+Yim)/m.{\displaystyle Y_{i}^{(m)}=\left(Y_{im-m+1}+\cdots +Y_{im}\right)/m.} The variance determined from this sequence will scale as the bin size changes such thatvar⁡[Y(m)]=σ^2m−d{\displaystyle \operatorname {var} [Y^{(m)}]={\widehat {\sigma }}^{2}m^{-d}}if and only if the autocorrelation has the limiting form[17]limk→∞r(k)/k−d=(2−d)(1−d)/2.{\displaystyle \lim _{k\to \infty }r(k)/k^{-d}=(2-d)(1-d)/2.} One can also construct a set of corresponding additive sequencesZi(m)=mYi(m),{\displaystyle Z_{i}^{(m)}=mY_{i}^{(m)},}based on the expanding bins,Zi(m)=(Yim−m+1+⋯+Yim).{\displaystyle Z_{i}^{(m)}=(Y_{im-m+1}+\cdots +Y_{im}).} Provided the autocorrelation function exhibits the same behavior, the additive sequences will obey the relationshipvar⁡[Zi(m)]=m2var⁡[Y(m)]=(σ^2μ^2−d)E⁡[Zi(m)]2−d{\displaystyle \operatorname {var} [Z_{i}^{(m)}]=m^{2}\operatorname {var} [Y^{(m)}]=\left({\frac {{\widehat {\sigma }}^{2}}{{\widehat {\mu }}^{2-d}}}\right)\operatorname {E} [Z_{i}^{(m)}]^{2-d}} Sinceμ^{\displaystyle {\widehat {\mu }}}andσ^2{\displaystyle {\widehat {\sigma }}^{2}}are constants this relationship constitutes a variance-to-mean power law, withp= 2 -d.[8][18] Thebiconditionalrelationship above between the variance-to-mean power law and power law autocorrelation function, and theWiener–Khinchin theorem[19]imply that any sequence that exhibits a variance-to-mean power law by the method of expanding bins will also manifest 1/fnoise, and vice versa. 
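For an IID sequence there is no long-range correlation (d = 1), so the variance of the bin means should scale as 1/m. A Python sketch of the method of expanding bins (sequence length, seed and bin sizes are arbitrary choices):

```python
import random

rng = random.Random(123)
N = 1 << 16
data = [rng.gauss(0.0, 1.0) for _ in range(N)]   # IID, hence d = 1

def binned_variance(xs, m):
    """Variance of the means of non-overlapping bins of size m (expanding bins)."""
    means = [sum(xs[i:i + m]) / m for i in range(0, len(xs) - m + 1, m)]
    mu = sum(means) / len(means)
    return sum((x - mu) ** 2 for x in means) / len(means)

for m in (1, 4, 16, 64):
    print(m, binned_variance(data, m))   # scales roughly as 1/m for IID data
```

A self-similar sequence with long-range correlation 0 < d < 1 would instead show the slower decay m^{−d}, which is exactly the signature the variance-to-mean power law picks up.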
Moreover, the Tweedie convergence theorem, by virtue of its central limit-like effect of generating distributions that manifest variance-to-mean power functions, will also generate processes that manifest 1/f noise.[8] The Tweedie convergence theorem thus provides an alternative explanation for the origin of 1/f noise, based on its central limit-like effect. Much as the central limit theorem requires certain kinds of random processes to have as a focus of their convergence the Gaussian distribution and thus express white noise, the Tweedie convergence theorem requires certain non-Gaussian processes to have as a focus of convergence the Tweedie distributions that express 1/f noise.[8] From the properties of self-similar processes, the power-law exponent p = 2 − d is related to the Hurst exponent H and the fractal dimension D by[17]

D = 2 − H = 2 − p/2.

A one-dimensional data sequence of self-similar data may demonstrate a variance-to-mean power law with local variations in the value of p and hence in the value of D. When fractal structures manifest local variations in fractal dimension, they are said to be multifractals. Examples of data sequences that exhibit local variations in p like this include the eigenvalue deviations of the Gaussian orthogonal and unitary ensembles.[8] The Tweedie compound Poisson–gamma distribution has served to model multifractality based on local variations in the Tweedie exponent α. Consequently, in conjunction with the variation of α, the Tweedie convergence theorem can be viewed as having a role in the genesis of such multifractals. The variation of α has been found to obey the asymmetric Laplace distribution in certain cases.[20] This distribution has been shown to be a member of the family of geometric Tweedie models,[21] which manifest as limiting distributions in a convergence theorem for geometric dispersion models.
Regional organ blood flow has been traditionally assessed by the injection of radiolabelled polyethylene microspheres into the arterial circulation of animals, of a size that they become entrapped within the microcirculation of organs. The organ to be assessed is then divided into equal-sized cubes and the amount of radiolabel within each cube is evaluated by liquid scintillation counting and recorded. The amount of radioactivity within each cube is taken to reflect the blood flow through that sample at the time of injection. It is possible to evaluate adjacent cubes from an organ in order to additively determine the blood flow through larger regions. Through the work of J B Bassingthwaighte and others an empirical power law has been derived between the relative dispersion of blood flow of tissue samples (RD = standard deviation/mean) of mass m relative to reference-sized samples:[22]

RD(m) = RD(m_ref) (m/m_ref)^{1−D_s}.

This power-law exponent D_s has been called a fractal dimension. Bassingthwaighte's power law can be shown to directly relate to the variance-to-mean power law. Regional organ blood flow can thus be modelled by the Tweedie compound Poisson–gamma distribution.[23] In this model each tissue sample could be considered to contain a random (Poisson) distributed number of entrapment sites, each with gamma-distributed blood flow. Blood flow at this microcirculatory level has been observed to obey a gamma distribution,[24] thus providing support for this hypothesis. The "experimental cancer metastasis assay"[25] has some resemblance to the above method to measure regional blood flow. Groups of syngeneic and age-matched mice are given intravenous injections of equal-sized aliquots of suspensions of cloned cancer cells and then after a set period of time their lungs are removed and the number of cancer metastases enumerated within each pair of lungs.
If other groups of mice are injected with different cancer cell clones then the number of metastases per group will differ in accordance with the metastatic potentials of the clones. It has been long recognized that there can be considerable intraclonal variation in the numbers of metastases per mouse despite the best attempts to keep the experimental conditions within each clonal group uniform.[25] This variation is larger than would be expected on the basis of a Poisson distribution of numbers of metastases per mouse in each clone, and when the variance of the number of metastases per mouse was plotted against the corresponding mean a power law was found.[26] The variance-to-mean power law for metastases was found to also hold for spontaneous murine metastases[27] and for case series of human metastases.[28] Since hematogenous metastasis occurs in direct relationship to regional blood flow[29] and videomicroscopic studies indicate that the passage and entrapment of cancer cells within the circulation appears analogous to the microsphere experiments,[30] it seemed plausible to propose that the variation in numbers of hematogenous metastases could reflect heterogeneity in regional organ blood flow.[31] The blood flow model was based on the Tweedie compound Poisson–gamma distribution, a distribution governing a continuous random variable. For that reason, in the metastasis model it was assumed that blood flow was governed by that distribution and that the number of regional metastases occurred as a Poisson process for which the intensity was directly proportional to blood flow. This led to the description of the Poisson negative binomial (PNB) distribution as a discrete equivalent to the Tweedie compound Poisson–gamma distribution.
Theprobability generating functionfor the PNB distribution isG(s)=exp⁡[λα−1α(θα−1)α{(1−1θ+sθ)α−1}]{\displaystyle G(s)=\exp \left[\lambda {\frac {\alpha -1}{\alpha }}\left({\frac {\theta }{\alpha -1}}\right)^{\alpha }\left\{\left(1-{\frac {1}{\theta }}+{\frac {s}{\theta }}\right)^{\alpha }-1\right\}\right]} The relationship between the mean and variance of the PNB distribution is thenvar⁡(Y)=aE⁡(Y)b+E⁡(Y),{\displaystyle \operatorname {var} (Y)=a\operatorname {E} (Y)^{b}+\operatorname {E} (Y),}which, in the range of many experimental metastasis assays, would be indistinguishable from the variance-to-mean power law. For sparse data, however, this discrete variance-to-mean relationship would behave more like that of a Poisson distribution where the variance equaled the mean. The local density ofSingle Nucleotide Polymorphisms(SNPs) within thehuman genome, as well as that ofgenes, appears to cluster in accord with the variance-to-mean power law and the Tweedie compound Poisson–gamma distribution.[32][33]In the case of SNPs their observed density reflects the assessment techniques, the availability of genomic sequences for analysis, and thenucleotide heterozygosity.[34]The first two factors reflect ascertainment errors inherent to the collection methods, the latter factor reflects an intrinsic property of the genome. In thecoalescent modelof population genetics each genetic locus has its own unique history. Within the evolution of a population from some species some genetic loci could presumably be traced back to a relativelyrecent common ancestorwhereas other loci might have more ancientgenealogies. 
More ancient genomic segments would have had more time to accumulate SNPs and to experiencerecombination.R R Hudsonhas proposed a model where recombination could cause variation in the time tomost recent common ancestorfor different genomic segments.[35]A high recombination rate could cause a chromosome to contain a large number of small segments with less correlated genealogies. Assuming a constant background rate of mutation the number of SNPs per genomic segment would accumulate proportionately to the time to the most recent common ancestor. Currentpopulation genetic theorywould indicate that these times would begamma distributed, on average.[36]The Tweedie compound Poisson–gamma distribution would suggest a model whereby the SNP map would consist of multiple small genomic segments with the mean number of SNPs per segment being gamma distributed as per Hudson's model. The distribution of genes within the human genome also demonstrated a variance-to-mean power law, when the method of expanding bins was used to determine the corresponding variances and means.[33]Similarly the number of genes per enumerative bin was found to obey a Tweedie compound Poisson–gamma distribution. This probability distribution was deemed compatible with two different biological models: themicroarrangement modelwhere the number of genes per unit genomic length was determined by the sum of a random number of smaller genomic segments derived by random breakage and reconstruction of protochromosomes. These smaller segments would be assumed to carry on average a gamma distributed number of genes. In the alternativegene cluster model, genes would be distributed randomly within the protochromosomes. Over large evolutionary timescales there would occurtandem duplication,mutations, insertions, deletionsandrearrangementsthat could affect the genes through a stochasticbirth, death and immigration processto yield the Tweedie compound Poisson–gamma distribution.
Both these mechanisms would implicateneutral evolutionary processesthat would result in regional clustering of genes. TheGaussian unitary ensemble(GUE) consists of complexHermitian matricesthat are invariant underunitary transformationswhereas theGaussian orthogonal ensemble(GOE) consists of real symmetric matrices invariant underorthogonal transformations. The rankedeigenvaluesEnfrom these random matrices obeyWigner's semicircular distribution: For anN×Nmatrix the average density for eigenvalues of sizeEwill beρ¯(E)={2N−E2/π|E|<2N0|E|>2N{\displaystyle {\bar {\rho }}(E)={\begin{cases}{\sqrt {2N-E^{2}}}/\pi &\quad \left\vert E\right\vert <{\sqrt {2N}}\\0&\quad \left\vert E\right\vert >{\sqrt {2N}}\end{cases}}}asN→ ∞. Integration of the semicircular rule provides the number of eigenvalues on average less thanE,η¯(E)=12π[E2N−E2+2Narcsin⁡(E2N)+πN].{\displaystyle {\bar {\eta }}(E)={\frac {1}{2\pi }}\left[E{\sqrt {2N-E^{2}}}+2N\arcsin \left({\frac {E}{\sqrt {2N}}}\right)+\pi N\right].} The ranked eigenvalues can beunfolded, or renormalized, with the equationen=η¯(E)=∫−∞EndE′ρ¯(E′).{\displaystyle e_{n}={\bar {\eta }}(E)=\int _{-\infty }^{E_{n}}\,dE'{\bar {\rho }}(E').} This removes the trend of the sequence from the fluctuating portion. If we look at the absolute value of the difference between the actual and expected cumulative number of eigenvalues|D¯n|=|n−η¯(En)|{\displaystyle \left|{\bar {D}}_{n}\right|=\left|n-{\bar {\eta }}(E_{n})\right|}we obtain a sequence ofeigenvalue fluctuationswhich, using the method of expanding bins, reveals a variance-to-mean power law.[8]The eigenvalue fluctuations of both the GUE and the GOE manifest this power law with the power law exponents ranging between 1 and 2, and they similarly manifest 1/fnoise spectra.
These eigenvalue fluctuations also correspond to the Tweedie compound Poisson–gamma distribution and they exhibit multifractality.[8] ThesecondChebyshev functionψ(x) is given by,ψ(x)=∑p^k≤xlog⁡p^=∑n≤xΛ(n){\displaystyle \psi (x)=\sum _{{\widehat {p\,}}^{k}\leq x}\log {\widehat {p\,}}=\sum _{n\leq x}\Lambda (n)}where the summation extends over all prime powersp^k{\displaystyle {\widehat {p\,}}^{k}}not exceedingx,xruns over the positive real numbers, andΛ(n){\displaystyle \Lambda (n)}is thevon Mangoldt function. The functionψ(x) is related to theprime-counting functionπ(x), and as such provides information with regards to the distribution of prime numbers amongst the real numbers. It is asymptotic tox, a statement equivalent to theprime number theoremand it can also be shown to be related to the zeros of theRiemann zeta functionlocated on the critical stripρ, where the real part of the zeta zeroρis between 0 and 1. Thenψexpressed forxgreater than one can be written:ψ0(x)=x−∑ρxρρ−ln⁡2π−12ln⁡(1−x−2){\displaystyle \psi _{0}(x)=x-\sum _{\rho }{\frac {x^{\rho }}{\rho }}-\ln 2\pi -{\frac {1}{2}}\ln(1-x^{-2})}whereψ0(x)=limε→0ψ(x−ε)+ψ(x+ε)2.{\displaystyle \psi _{0}(x)=\lim _{\varepsilon \rightarrow 0}{\frac {\psi (x-\varepsilon )+\psi (x+\varepsilon )}{2}}.} TheRiemann hypothesisstates that thenontrivial zerosof theRiemann zeta functionall havereal part1⁄2. These zeta function zeros are related to thedistribution of prime numbers.Schoenfeld[37]has shown that if the Riemann hypothesis is true thenΔ(x)=|ψ(x)−x|<xlog2⁡(x)/(8π){\displaystyle \Delta (x)=\left\vert \psi (x)-x\right\vert <{\sqrt {x}}\log ^{2}(x)/(8\pi )}for allx>73.2{\displaystyle x>73.2}. If we analyze the Chebyshev deviations Δ(n) on the integersnusing the method of expanding bins and plot the variance versus the mean a variance to mean power law can be demonstrated.[citation needed]Moreover, these deviations correspond to the Tweedie compound Poisson-gamma distribution and they exhibit 1/fnoise. 
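The second Chebyshev function and its asymptotic behaviour ψ(x) ~ x can be checked with a direct computation of the von Mangoldt function (a naive trial-division sketch, fine for small x):

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a single prime p, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            # A prime power reduces to 1; anything left means n had two primes.
            return math.log(p) if n == 1 else 0.0
    return math.log(n)   # n itself is prime

def chebyshev_psi(x):
    """psi(x) = sum of Lambda(n) for n <= x."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))

print(chebyshev_psi(10))            # log(2^3 * 3^2 * 5 * 7) = log 2520
print(chebyshev_psi(1000) / 1000)   # approaches 1, per the prime number theorem
```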
Applications of Tweedie distributions include:
https://en.wikipedia.org/wiki/Tweedie_distribution
Instatistics, azero-inflated modelis astatistical modelbased on a zero-inflatedprobability distribution, i.e. a distribution that allows for frequent zero-valued observations. Zero-inflated models are commonly used in the analysis of count data, such as the number of visits a patient makes to the emergency room in one year, or the number of fish caught in one day in one lake.[1]Count data can take values of 0, 1, 2, … (non-negative integer values).[2]Other examples of count data are the number of hits recorded by a Geiger counter in one minute, patient days in the hospital, goals scored in a soccer game,[3]and the number of episodes of hypoglycemia per year for a patient with diabetes.[4] For statistical analysis, the distribution of the counts is often represented using aPoisson distributionor anegative binomial distribution. Hilbe[3]notes that "Poisson regression is traditionally conceived of as the basic count model upon which a variety of other count models are based." In a Poisson model, "… the random variabley{\displaystyle y}is the count response and parameterλ{\displaystyle \lambda }(lambda) is the mean. Often,λ{\displaystyle \lambda }is also called the rate or intensity parameter… In statistical literature,λ{\displaystyle \lambda }is also expressed asμ{\displaystyle \mu }(mu) when referring to Poisson and traditional negative binomial models." In some data, the number of zeros is greater than would be expected using aPoisson distributionor anegative binomial distribution. Data with such an excess of zero counts are described as Zero-inflated.[4] Example histograms of zero-inflated Poisson distributions with meanμ{\displaystyle \mu }of 5 or 10 and proportion of zero inflationπ{\displaystyle \pi }of 0.2 or 0.5 are shown below, based on the R program ZeroInflPoiDistPlots.R from Bilder and Laughlin.[1] As the examples above show, zero-inflated data can arise as amixtureof two distributions. The first distribution generates zeros. 
The second distribution, which may be aPoisson distribution, anegative binomial distributionor other count distribution, generates counts, some of which may be zeros.[7] In the statistical literature, different authors may use different names to distinguish zeros from the two distributions. Some authors describe zeros generated by the first (binary) distribution as "structural" and zeros generated by the second (count) distribution as "random".[7]Other authors use the terminology "immune" and "susceptible" for the binary and count zeros, respectively.[1] One well-known zero-inflated model isDiane Lambert's zero-inflated Poisson model, which concerns a random event containing excess zero-count data in unit time.[8]For example, the number ofinsurance claimswithin a population for a certain type of risk would be zero-inflated by those people who have not taken out insurance against the risk and thus are unable to claim. The zero-inflated Poisson (ZIP) modelmixestwo zero generating processes. The first process generates zeros. The second process is governed by aPoisson distributionthat generates counts, some of which may be zero. Themixture distributionis described as follows: where the outcome variableyi{\displaystyle y_{i}}has any non-negative integer value,λ{\displaystyle \lambda }is the expected Poisson count for thei{\displaystyle i}th individual;π{\displaystyle \pi }is the probability of extra zeros. The mean is(1−π)λ{\displaystyle (1-\pi )\lambda }and the variance isλ(1−π)(1+πλ){\displaystyle \lambda (1-\pi )(1+\pi \lambda )}. The method of moments estimators are given by[9] wherem{\displaystyle m}is the sample mean ands2{\displaystyle s^{2}}is the sample variance. The maximum likelihood estimator[10]can be found by solving the following equation wheren0n{\displaystyle {\frac {n_{0}}{n}}}is the observed proportion of zeros. 
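The moment relations above, mean (1−π)λ and variance λ(1−π)(1+πλ), give the method-of-moments estimators in closed form. A minimal sketch (the six-point data set is a contrived illustration chosen so the estimates come out exactly):

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson mass: extra zeros with probability pi,
    otherwise an ordinary Poisson(lam) count."""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    return pi * (y == 0) + (1 - pi) * poisson

def zip_moment_estimates(data):
    """Method-of-moments estimates solved from
    mean = (1-pi)*lam and variance = lam*(1-pi)*(1+pi*lam)."""
    n = len(data)
    m = sum(data) / n
    s2 = sum((x - m) ** 2 for x in data) / n   # population variance
    lam_hat = (s2 + m * m - m) / m
    pi_hat = (s2 - m) / (s2 + m * m - m)
    return lam_hat, pi_hat

data = [0, 0, 0, 0, 3, 3]            # m = 1, s2 = 2
lam_hat, pi_hat = zip_moment_estimates(data)
print(lam_hat, pi_hat)               # 2.0 and 0.5
```

Note that the implied mean (1−π̂)λ̂ = 0.5 · 2 recovers the sample mean of 1, as it must.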
A closed form solution of this equation is given by[11] withW0{\displaystyle W_{0}}being the main branch of Lambert's W-function[12]and Alternatively, the equation can be solved by iteration.[13] The maximum likelihood estimator forπ{\displaystyle \pi }is given by In 1994, Greene considered the zero-inflatednegative binomial(ZINB) model.[14]Daniel B. Hall adapted Lambert's methodology to an upper-bounded count situation, thereby obtaining a zero-inflated binomial (ZIB) model.[15] If the count dataY{\displaystyle Y}is such that the probability of zero is larger than the probability of nonzero, namely then the discrete dataY{\displaystyle Y}obey discrete pseudocompound Poisson distribution.[16] In fact, letG(z)=∑n=0∞P(Y=n)zn{\displaystyle G(z)=\sum \limits _{n=0}^{\infty }P(Y=n)z^{n}}be theprobability generating functionofyi{\displaystyle y_{i}}. Ifp0=Pr(Y=0)>0.5{\displaystyle p_{0}=\Pr(Y=0)>0.5}, then|G(z)|⩾p0−∑i=1∞pi=2p0−1>0{\displaystyle |G(z)|\geqslant p_{0}-\sum \limits _{i=1}^{\infty }p_{i}=2p_{0}-1>0}. Then from theWiener–Lévy theorem,[17]G(z){\displaystyle G(z)}has theprobability generating functionof the discrete pseudocompound Poisson distribution. We say that the discrete random variableY{\displaystyle Y}satisfyingprobability generating functioncharacterization has a discrete pseudocompound Poisson distributionwith parameters When all theαk{\displaystyle \alpha _{k}}are non-negative, it is the discretecompound Poisson distribution(non-Poisson case) withoverdispersionproperty.
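The iterative route mentioned above can be sketched as a fixed-point iteration. One standard rearrangement of the likelihood equations (an assumption of this sketch, not quoted from the text) combines the mean equation (1−π)λ = ȳ with matching the observed zero fraction, giving λ = s̄(1−e^(−λ)) where s̄ is the mean of the positive counts:

```python
import math

def zip_mle(data, tol=1e-12):
    """ZIP maximum likelihood by fixed-point iteration: lam solves
    lam = s * (1 - exp(-lam)), with s the mean of the nonzero counts;
    pi then follows from matching the observed proportion of zeros."""
    n = len(data)
    n0 = sum(1 for x in data if x == 0)
    s = sum(data) / (n - n0)          # mean of the positive observations
    lam = s                            # s is an upper bound, a good start
    for _ in range(200):
        lam, prev = s * (1 - math.exp(-lam)), lam
        if abs(lam - prev) < tol:
            break
    p0 = n0 / n
    pi = (p0 - math.exp(-lam)) / (1 - math.exp(-lam))
    return lam, pi

lam_hat, pi_hat = zip_mle([0, 2, 2, 2])
print(lam_hat, pi_hat)
```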
https://en.wikipedia.org/wiki/Zero-inflated_model
Inprobability theory, thezero-truncated Poisson distribution(ZTP distribution) is a certaindiscrete probability distributionwhose support is the set of positive integers. This distribution is also known as theconditional Poisson distribution[1]or thepositive Poisson distribution.[2]It is the conditionalprobability distributionof aPoisson-distributedrandom variable, given that the value of the random variable is not zero. Thus it is impossible for a ZTP random variable to be zero. Consider for example the random variable of the number of items in a shopper's basket at a supermarket checkout line. Presumably a shopper does not stand in line with nothing to buy (i.e., the minimum purchase is 1 item), so this phenomenon may follow a ZTP distribution.[3] Since the ZTP is atruncated distributionwith the truncation stipulated ask> 0, one can derive theprobability mass functiong(k;λ)from a standard Poisson distributionf(k;λ) as follows:[4] Themeanis and thevarianceis Themethod of momentsestimatorλ^{\displaystyle {\widehat {\lambda }}}for the parameterλ{\displaystyle \lambda }is obtained by solving wherex¯{\displaystyle {\bar {x}}}is thesample mean.[1] This equation has a solution in terms of theLambert W function. In practice, a solution may be found using numerical methods. Insurance claims: Imagine navigating the intricate landscape of auto insurance claims, where each claim signifies a unique event – an accident or damage occurrence. The ZTP distribution seamlessly aligns with this scenario, excluding the possibility of policyholders with zero claims. LetXdenote the random variable representing the number of insurance claims. Ifλis the average rate of claims, the ZTP probability mass function takes the form: P(X=k)=λke−λk!(1−e−λ){\displaystyle P(X=k)={\frac {\lambda ^{k}e^{-\lambda }}{k!\left(1-e^{-\lambda }\right)}}}for k= 1,2,3,... This formula encapsulates the probability of observingkclaims given that at least one claim has transpired. 
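The truncated mass function g(k; λ) = f(k; λ)/(1−e^(−λ)) and the mean λ/(1−e^(−λ)) can be verified numerically (a small sketch; the chosen λ is arbitrary):

```python
import math

def ztp_pmf(k, lam):
    """Zero-truncated Poisson mass: the Poisson pmf renormalized over k >= 1."""
    if k < 1:
        return 0.0
    return lam**k * math.exp(-lam) / (math.factorial(k) * (1 - math.exp(-lam)))

lam = 2.5
ks = range(1, 60)                      # the tail beyond 59 is negligible here
probs = [ztp_pmf(k, lam) for k in ks]
mean = sum(k * p for k, p in zip(ks, probs))
print(sum(probs))                      # ~1: the truncated mass sums to one
print(mean, lam / (1 - math.exp(-lam)))   # matches the closed-form mean
```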
The denominator ensures the exclusion of the improbable zero-claim scenario. By utilizing the zero-truncated Poisson distribution, a manufacturing company can likewise analyze and predict the frequency of defects in its products while focusing on instances where defects exist. This distribution helps in understanding and improving the quality control process, especially when it is crucial to account for at least one defect. Random variates from the zero-truncated Poisson distribution may be generated using algorithms derived from Poisson distribution sampling algorithms.[5] The cost of the procedure above is linear ink, which may be large for large values ofλ{\displaystyle \lambda }. Given access to an efficient sampler for non-truncated Poisson random variates, a non-iterative approach involves sampling from a truncatedexponential distributionrepresenting the time of the first event in aPoisson point process, conditional on such an event existing.[6]A simpleNumPyimplementation is:
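The NumPy implementation referred to above did not survive extraction; the following is a reconstruction along the lines described (a sketch of the cited approach, not the article's verbatim code): invert the truncated-exponential CDF for the first arrival time T on [0, 1], then add a Poisson(λ(1−T)) count of later arrivals.

```python
import numpy as np

def ztp_sample(lam, size, rng):
    """Zero-truncated Poisson draws without rejection: condition a unit-time
    Poisson point process of rate lam on having at least one event. The first
    arrival time T is exponential truncated to [0, 1], and the number of
    subsequent arrivals is Poisson(lam * (1 - T))."""
    u = rng.uniform(size=size)
    t = -np.log(1 - u * (1 - np.exp(-lam))) / lam   # truncated Exp(lam) on [0, 1]
    return 1 + rng.poisson(lam * (1 - t))

rng = np.random.default_rng(42)
lam = 0.5          # small rate: naive rejection would discard ~39% of draws
x = ztp_sample(lam, 100_000, rng)
print(x.min(), x.mean(), lam / (1 - np.exp(-lam)))
```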
https://en.wikipedia.org/wiki/Zero-truncated_Poisson_distribution
Inprobability theory, aLévy process, named after the French mathematicianPaul Lévy, is astochastic processwith independent, stationary increments: it represents the motion of a point whose successive displacements arerandom, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of arandom walk. The most well known examples of Lévy processes are theWiener process, often called theBrownian motionprocess, and thePoisson process. Further important examples include theGamma process, the Pascal process, and the Meixner process. Aside from Brownian motion with drift, all other proper (that is, not deterministic) Lévy processes havediscontinuouspaths. All Lévy processes areadditive processes.[1] A Lévy process is astochastic processX={Xt:t≥0}{\displaystyle X=\{X_{t}:t\geq 0\}}that satisfies the following properties: IfX{\displaystyle X}is a Lévy process then one may construct aversionofX{\displaystyle X}such thatt↦Xt{\displaystyle t\mapsto X_{t}}isalmost surelyright-continuous with left limits. A continuous-time stochastic process assigns arandom variableXtto each pointt≥ 0 in time. In effect it is a random function oft. Theincrementsof such a process are the differencesXs−Xtbetween its values at different timest<s. To call the increments of a processindependentmeans that incrementsXs−XtandXu−Xvareindependentrandom variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not justpairwise) independent. To call the increments stationary means that theprobability distributionof any incrementXt−Xsdepends only on the lengtht−sof the time interval; increments on equally long time intervals are identically distributed. 
IfX{\displaystyle X}is aWiener process, the probability distribution ofXt−Xsisnormalwithexpected value0 andvariancet−s. IfX{\displaystyle X}is aPoisson process, the probability distribution ofXt−Xsis aPoisson distributionwith expected value λ(t−s), where λ > 0 is the "intensity" or "rate" of the process. IfX{\displaystyle X}is aCauchy process, the probability distribution ofXt−Xsis aCauchy distributionwith densityf(x;t)=1π[γx2+γ2]{\displaystyle f(x;t)={1 \over \pi }\left[{\gamma \over x^{2}+\gamma ^{2}}\right]}whereγ=t−s{\displaystyle \gamma =t-s}. The distribution of a Lévy process has the property ofinfinite divisibility: given any integern, thelawof a Lévy process at time t can be represented as the law of the sum ofnindependent random variables, which are precisely the increments of the Lévy process over time intervals of lengtht/n,which are independent and identically distributed by assumptions 2 and 3. Conversely, for each infinitely divisible probability distributionF{\displaystyle F}, there is a Lévy processX{\displaystyle X}such that the law ofX1{\displaystyle X_{1}}is given byF{\displaystyle F}. In any Lévy process with finitemoments, thenth momentμn(t)=E(Xtn){\displaystyle \mu _{n}(t)=E(X_{t}^{n})}, is apolynomial functionoft;these functions satisfy a binomial identity: The distribution of a Lévy process is characterized by itscharacteristic function, which is given by theLévy–Khintchine formula(general for allinfinitely divisible distributions):[2] IfX=(Xt)t≥0{\displaystyle X=(X_{t})_{t\geq 0}}is a Lévy process, then its characteristic functionφX(θ){\displaystyle \varphi _{X}(\theta )}is given by wherea∈R{\displaystyle a\in \mathbb {R} },σ≥0{\displaystyle \sigma \geq 0}, andΠ{\displaystyle \Pi }is aσ-finite measure called theLévy measureofX{\displaystyle X}, satisfying the property In the above,1{\displaystyle \mathbf {1} }is theindicator function. 
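The defining properties — stationary, independent increments and the resulting infinite divisibility — can be illustrated by building X₁ from partitions of [0, 1] of different fineness (a sketch assuming NumPy; rates, seeds, and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, n_paths = 3.0, 100_000

def poisson_endpoint(n_steps):
    """X_1 of a rate-lam Poisson process assembled from n_steps i.i.d.
    stationary increments, each Poisson(lam / n_steps)."""
    return rng.poisson(lam / n_steps, (n_paths, n_steps)).sum(axis=1)

# Infinite divisibility: the law of X_1 does not depend on how finely
# the interval [0, 1] is subdivided.
a, b = poisson_endpoint(4), poisson_endpoint(64)
print(a.mean(), b.mean())     # both ~lam
print(a.var(), b.var())       # both ~lam (Poisson: variance = mean)

# Wiener analogue: increments over length dt are N(0, dt), so X_1 ~ N(0, 1).
w = (np.sqrt(1 / 64) * rng.standard_normal((n_paths, 64))).sum(axis=1)
print(w.var())                # ~1
```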
Becausecharacteristic functionsuniquely determine their underlying probability distributions, each Lévy process is uniquely determined by the "Lévy–Khintchine triplet"(a,σ2,Π){\displaystyle (a,\sigma ^{2},\Pi )}. The terms of this triplet suggest that a Lévy process can be seen as having three independent components: a linear drift, aBrownian motion, and a Lévy jump process, as described below. This immediately gives that the only (nondeterministic) continuous Lévy process is a Brownian motion with drift; similarly, every Lévy process is asemimartingale.[3] Because the characteristic functions of independent random variables multiply, the Lévy–Khintchine theorem suggests that every Lévy process is the sum of Brownian motion with drift and another independent random variable, a Lévy jump process. The Lévy–Itô decomposition describes the latter as a (stochastic) sum of independent Poisson random variables. Letν=Π|R∖(−1,1)Π(R∖(−1,1)){\displaystyle \nu ={\frac {\Pi |_{\mathbb {R} \setminus (-1,1)}}{\Pi (\mathbb {R} \setminus (-1,1))}}}— that is, the restriction ofΠ{\displaystyle \Pi }toR∖(−1,1){\displaystyle \mathbb {R} \setminus (-1,1)}, normalized to be a probability measure; similarly, letμ=Π|(−1,1)∖{0}{\displaystyle \mu =\Pi |_{(-1,1)\setminus \{0\}}}(but do not rescale). Then The former is the characteristic function of acompound Poisson processwith intensityΠ(R∖(−1,1)){\displaystyle \Pi (\mathbb {R} \setminus (-1,1))}and child distributionν{\displaystyle \nu }. The latter is that of acompensated generalized Poisson process(CGPP): a process with countably many jump discontinuities on every intervala.s., but such that those discontinuities are of magnitude less than1{\displaystyle 1}. 
If∫R|x|μ(dx)<∞{\displaystyle \int _{\mathbb {R} }{|x|\,\mu (dx)}<\infty }, then the CGPP is apure jump process.[4][5]Therefore in terms of processes one may decomposeX{\displaystyle X}in the following way whereY{\displaystyle Y}is the compound Poisson process with jumps larger than1{\displaystyle 1}in absolute value andZt{\displaystyle Z_{t}}is the aforementioned compensated generalized Poisson process which is also a zero-mean martingale. A Lévyrandom fieldis a multi-dimensional generalization of Lévy process.[6][7]Still more general are decomposable processes.[8]
https://en.wikipedia.org/wiki/L%C3%A9vy_process
Inprobability theoryandstatistics, theLaplace distributionis a continuousprobability distributionnamed afterPierre-Simon Laplace. It is also sometimes called thedouble exponential distribution, because it can be thought of as twoexponential distributions(with an additional location parameter) spliced together along the x-axis,[2]although the term is also sometimes used to refer to theGumbel distribution. The difference between twoindependent identically distributedexponential random variables is governed by a Laplace distribution, as is aBrownian motionevaluated at an exponentially distributed random time[citation needed]. Increments ofLaplace motionor avariance gamma processevaluated over the time scale also have a Laplace distribution. Arandom variablehas aLaplace⁡(μ,b){\displaystyle \operatorname {Laplace} (\mu ,b)}distribution if itsprobability density functionis whereμ{\displaystyle \mu }is alocation parameter, andb>0{\displaystyle b>0}, which is sometimes referred to as the "diversity", is ascale parameter. Ifμ=0{\displaystyle \mu =0}andb=1{\displaystyle b=1}, the positive half-line is exactly anexponential distributionscaled by 1/2.[3] The probability density function of the Laplace distribution is also reminiscent of thenormal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the meanμ{\displaystyle \mu }, the Laplace density is expressed in terms of theabsolute differencefrom the mean. Consequently, the Laplace distribution has fatter tails than the normal distribution. It is a special case of thegeneralized normal distributionand thehyperbolic distribution. Continuous symmetric distributions that have exponential tails, like the Laplace distribution, but which have probability density functions that are differentiable at the mode include thelogistic distribution,hyperbolic secant distribution, and theChampernowne distribution. 
The Laplace distribution is easy tointegrate(if one distinguishes two symmetric cases) due to the use of theabsolute valuefunction. Itscumulative distribution functionis as follows: The inverse cumulative distribution function is given by LetX,Y{\displaystyle X,Y}be independent laplace random variables:X∼Laplace(μX,bX){\displaystyle X\sim {\textrm {Laplace}}(\mu _{X},b_{X})}andY∼Laplace(μY,bY){\displaystyle Y\sim {\textrm {Laplace}}(\mu _{Y},b_{Y})}, and we want to computeP(X>Y){\displaystyle P(X>Y)}. The probability ofP(X>Y){\displaystyle P(X>Y)}can be reduced (using the properties below) toP(μ+bZ1>Z2){\displaystyle P(\mu +bZ_{1}>Z_{2})}, whereZ1,Z2∼Laplace(0,1){\displaystyle Z_{1},Z_{2}\sim {\textrm {Laplace}}(0,1)}. This probability is equal to P(μ+bZ1>Z2)={b2eμ/b−eμ2(b2−1),whenμ<01−b2e−μ/b−e−μ2(b2−1),whenμ>0{\displaystyle P(\mu +bZ_{1}>Z_{2})={\begin{cases}{\frac {b^{2}e^{\mu /b}-e^{\mu }}{2(b^{2}-1)}},&{\text{when }}\mu <0\\1-{\frac {b^{2}e^{-\mu /b}-e^{-\mu }}{2(b^{2}-1)}},&{\text{when }}\mu >0\\\end{cases}}} Whenb=1{\displaystyle b=1}, both expressions are replaced by their limit asb→1{\displaystyle b\to 1}: P(μ+Z1>Z2)={eμ(2−μ)4,whenμ<01−e−μ(2+μ)4,whenμ>0{\displaystyle P(\mu +Z_{1}>Z_{2})={\begin{cases}e^{\mu }{\frac {(2-\mu )}{4}},&{\text{when }}\mu <0\\1-e^{-\mu }{\frac {(2+\mu )}{4}},&{\text{when }}\mu >0\\\end{cases}}} To compute the case forμ>0{\displaystyle \mu >0}, note thatP(μ+Z1>Z2)=1−P(μ+Z1<Z2)=1−P(−μ−Z1>−Z2)=1−P(−μ+Z1>Z2){\displaystyle P(\mu +Z_{1}>Z_{2})=1-P(\mu +Z_{1}<Z_{2})=1-P(-\mu -Z_{1}>-Z_{2})=1-P(-\mu +Z_{1}>Z_{2})} sinceZ∼−Z{\displaystyle Z\sim -Z}whenZ∼Laplace(0,1){\displaystyle Z\sim {\textrm {Laplace}}(0,1)}. A Laplace random variable can be represented as the difference of twoindependent and identically distributed(iid) exponential random variables.[4]One way to show this is by using thecharacteristic functionapproach. 
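The closed form above for P(μ + bZ₁ > Z₂) can be sanity-checked by Monte Carlo (a sketch; the chosen μ and b are arbitrary, with b ≠ 1 to stay off the limiting case):

```python
import numpy as np

def p_greater(mu, b):
    """Closed form for P(mu + b*Z1 > Z2), Z1, Z2 i.i.d. Laplace(0, 1), b != 1."""
    if mu < 0:
        return (b**2 * np.exp(mu / b) - np.exp(mu)) / (2 * (b**2 - 1))
    return 1 - (b**2 * np.exp(-mu / b) - np.exp(-mu)) / (2 * (b**2 - 1))

rng = np.random.default_rng(3)
mu, b = -1.0, 2.0
z1 = rng.laplace(0, 1, 500_000)
z2 = rng.laplace(0, 1, 500_000)
mc = np.mean(mu + b * z1 > z2)
print(mc, p_greater(mu, b))   # the two agree to Monte Carlo accuracy
```

The symmetry used in the text, P(μ + Z₁ > Z₂) = 1 − P(−μ + Z₁ > Z₂), holds here by construction of the two branches.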
For any set of independent continuous random variables, for any linear combination of those variables, its characteristic function (which uniquely determines the distribution) can be acquired by multiplying the corresponding characteristic functions. Consider two i.i.d random variablesX,Y∼Exponential(λ){\displaystyle X,Y\sim {\textrm {Exponential}}(\lambda )}. The characteristic functions forX,−Y{\displaystyle X,-Y}are respectively. On multiplying these characteristic functions (equivalent to the characteristic function of the sum of the random variablesX+(−Y){\displaystyle X+(-Y)}), the result is This is the same as the characteristic function forZ∼Laplace(0,1/λ){\displaystyle Z\sim {\textrm {Laplace}}(0,1/\lambda )}, which is Sargan distributions are a system of distributions of which the Laplace distribution is a core member. Ap{\displaystyle p}th order Sargan distribution has density[5][6] for parametersα≥0,βj≥0{\displaystyle \alpha \geq 0,\beta _{j}\geq 0}. The Laplace distribution results forp=0{\displaystyle p=0}. Givenn{\displaystyle n}independent and identically distributed samplesx1,x2,...,xn{\displaystyle x_{1},x_{2},...,x_{n}}, themaximum likelihood(MLE) estimator ofμ{\displaystyle \mu }is the samplemedian,[7] The MLE estimator ofb{\displaystyle b}is themean absolute deviationfrom the median,[citation needed] revealing a link between the Laplace distribution andleast absolute deviations. A correction for small samples can be applied as follows: (see:exponential distribution#Parameter estimation). The Laplacian distribution has been used in speech recognition to model priors onDFTcoefficients[8]and in JPEG image compression to model AC coefficients[9]generated by aDCT. Given a random variableU{\displaystyle U}drawn from theuniform distributionin the interval(−1/2,1/2){\displaystyle \left(-1/2,1/2\right)}, the random variable has a Laplace distribution with parametersμ{\displaystyle \mu }andb{\displaystyle b}. 
This follows from the inverse cumulative distribution function given above. ALaplace(0,b){\displaystyle {\textrm {Laplace}}(0,b)}variatecan also be generated as the difference of twoi.i.d.Exponential(1/b){\displaystyle {\textrm {Exponential}}(1/b)}random variables. Equivalently,Laplace(0,1){\displaystyle {\textrm {Laplace}}(0,1)}can also be generated as thelogarithmof the ratio of twoi.i.d.uniform random variables. This distribution is often referred to as "Laplace's first law of errors". He published it in 1774, modeling the frequency of an error as an exponential function of its magnitude once its sign was disregarded. Laplace would later replace this model with his "second law of errors", based on the normal distribution, after the discovery of thecentral limit theorem.[15][16] Keynespublished a paper in 1911 based on his earlier thesis wherein he showed that the Laplace distribution minimised the absolute deviation from the median.[17]
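The three generation routes just described — the inverse CDF applied to a uniform on (−1/2, 1/2), the difference of two exponentials, and the log-ratio of two uniforms — can be compared in a few lines (a sketch; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b, n = 1.5, 2.0, 400_000

# (1) Inverse CDF applied to U ~ Uniform(-1/2, 1/2).
u = rng.uniform(-0.5, 0.5, n)
x1 = mu - b * np.sign(u) * np.log(1 - 2 * np.abs(u))

# (2) Difference of two i.i.d. Exponential(1/b) variates (scale b each).
x2 = mu + rng.exponential(b, n) - rng.exponential(b, n)

# (3) b times the log of the ratio of two i.i.d. uniforms gives Laplace(0, b).
x3 = mu + b * np.log(rng.uniform(size=n) / rng.uniform(size=n))

for x in (x1, x2, x3):
    print(x.mean(), x.var())   # each ~mu and ~2*b^2
```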
https://en.wikipedia.org/wiki/Laplace_distribution
Inprobabilitytheory, aCauchy processis a type ofstochastic process. There aresymmetricandasymmetricforms of the Cauchy process.[1]The unspecified term "Cauchy process" is often used to refer to the symmetric Cauchy process.[2] The Cauchy process has a number of properties: The symmetric Cauchy process can be described by aBrownian motionorWiener processsubject to aLévysubordinator.[7]The Lévy subordinator is a process associated with aLévy distributionhaving location parameter of0{\displaystyle 0}and a scale parameter oft2/2{\displaystyle t^{2}/2}.[7]The Lévy distribution is a special case of theinverse-gamma distribution. So, usingC{\displaystyle C}to represent the Cauchy process andL{\displaystyle L}to represent the Lévy subordinator, the symmetric Cauchy process can be described as: The Lévy distribution is the probability of the first hitting time for a Brownian motion, and thus the Cauchy process is essentially the result of twoindependentBrownian motion processes.[7] TheLévy–Khintchine representationfor the symmetric Cauchy process is a triplet with zero drift and zero diffusion, giving a Lévy–Khintchine triplet of(0,0,W){\displaystyle (0,0,W)}, whereW(dx)=dx/(πx2){\displaystyle W(dx)=dx/(\pi x^{2})}.[8] The marginalcharacteristic functionof the symmetric Cauchy process has the form:[1][8] The marginalprobability distributionof the symmetric Cauchy process is theCauchy distributionwhose density is[8][9] The asymmetric Cauchy process is defined in terms of a parameterβ{\displaystyle \beta }. 
Hereβ{\displaystyle \beta }is theskewnessparameter, and itsabsolute valuemust be less than or equal to 1.[1]In the case where|β|=1{\displaystyle |\beta |=1}the process is considered a completely asymmetric Cauchy process.[1] The Lévy–Khintchine triplet has the form(0,0,W){\displaystyle (0,0,W)}, whereW(dx)={Ax−2dxifx>0Bx−2dxifx<0{\displaystyle W(dx)={\begin{cases}Ax^{-2}\,dx&{\text{if }}x>0\\Bx^{-2}\,dx&{\text{if }}x<0\end{cases}}}, whereA≠B{\displaystyle A\neq B},A>0{\displaystyle A>0}andB>0{\displaystyle B>0}.[1] Given this,β{\displaystyle \beta }is a function ofA{\displaystyle A}andB{\displaystyle B}. The characteristic function of the asymmetric Cauchy distribution has the form:[1] The marginal probability distribution of the asymmetric Cauchy process is astable distributionwith index of stability (i.e., α parameter) equal to 1.
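The subordination picture for the symmetric case can be simulated: draw the Lévy-distributed random time as the first hitting time of level t by an auxiliary Brownian motion, then evaluate an independent Brownian motion at that time. The scale convention below (a sketch's assumption, which may differ from the parameterization quoted above) is chosen so that the marginal comes out Cauchy with scale t.

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 2.0, 400_000

# Levy subordinator at time t: distributed as the first time an auxiliary
# standard Brownian motion hits level t, i.e. (t / Z)^2 with Z ~ N(0, 1).
L = (t / rng.standard_normal(n)) ** 2

# Evaluate an independent Brownian motion at the random time L:
# W(L) = sqrt(L) * N with N ~ N(0, 1) independent of L.
C = np.sqrt(L) * rng.standard_normal(n)

# The marginal law is Cauchy with scale t: median 0, quartiles at +/- t.
q1, q2, q3 = np.quantile(C, [0.25, 0.5, 0.75])
print(q1, q2, q3)
```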
https://en.wikipedia.org/wiki/Cauchy_process
Inprobabilitytheory, astable processis a type ofstochastic process. It includes stochastic processes whose associatedprobability distributionsarestable distributions.[1] Examples of stable processes include theWiener process, orBrownian motion, whose associated probability distribution is thenormal distribution. They also include theCauchy process. For the symmetric Cauchy process, the associated probability distribution is theCauchy distribution.[1] The degenerate case, where there is no random element, i.e.,X(t)=mt{\displaystyle X(t)=mt}, wherem{\displaystyle m}is a constant, is also a stable process.[1]
https://en.wikipedia.org/wiki/Stable_process
Inprobability theory, theslash distributionis theprobability distributionof a standardnormalvariate divided by an independentstandard uniformvariate.[1]In other words, if therandom variableZhas a normal distribution with zero mean and unitvariance, the random variableUhas a uniform distribution on [0,1] andZandUarestatistically independent, then the random variableX=Z/Uhas a slash distribution. The slash distribution is an example of aratio distribution. The distribution was named by William H. Rogers andJohn Tukeyin a paper published in 1972.[2] Theprobability density function(pdf) is whereφ(x){\displaystyle \varphi (x)}is the probability density function of the standard normal distribution.[3]The quotient is undefined atx= 0, but thediscontinuity is removable: The most common use of the slash distribution is insimulationstudies. It is a useful distribution in this context because it hasheavier tailsthan a normal distribution, but it is not aspathologicalas theCauchy distribution.[3] This article incorporatespublic domain materialfrom theNational Institute of Standards and Technology
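The density and its removable singularity at x = 0 can be written out directly (a short sketch; the limit at zero is φ(0)/2):

```python
import math

def slash_pdf(x):
    """Density of Z/U, Z standard normal and U uniform on (0, 1):
    (phi(0) - phi(x)) / x^2, with the removable singularity at x = 0
    filled in by its limiting value phi(0)/2."""
    phi0 = 1 / math.sqrt(2 * math.pi)
    if abs(x) < 1e-8:
        return phi0 / 2
    phi_x = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return (phi0 - phi_x) / (x * x)

# The tails fall off like phi(0)/x^2 -- heavier than any normal tail,
# yet the density still integrates to one.
print(slash_pdf(0.0), slash_pdf(10.0))
```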
https://en.wikipedia.org/wiki/Slash_distribution
TheFama–MacBeth regressionis a method used to estimate parameters forasset pricing modelssuch as thecapital asset pricing model(CAPM). The method estimates thebetasandrisk premiafor anyrisk factorsthat are expected to determine asset prices. The method works with multiple assets across time (panel data). The parameters are estimated in two steps: first, each asset's returns are regressed against the proposed risk factors across time to estimate that asset's betas; second, in each time period the returns of all assets are regressed against the estimated betas in a cross-sectional regression, yielding a time series of estimated factor premia. The risk premiums of the factors are then estimated by the average factor returns, i.e.γ¯0=1T∑t=1Tγt,0,…,γ¯m=1T∑t=1Tγt,m.{\displaystyle {\bar {\gamma }}_{0}={\frac {1}{T}}\sum _{t=1}^{T}\gamma _{t,0},\ldots ,{\bar {\gamma }}_{m}={\frac {1}{T}}\sum _{t=1}^{T}\gamma _{t,m}.} Eugene F. Famaand James D. MacBeth (1973) demonstrated that the residuals of risk-return regressions and the observed "fair game" properties of the coefficients are consistent with an "efficient capital market" (quotes in the original).[2] Note that Fama–MacBeth regressions providestandard errorscorrected only for cross-sectional correlation. The standard errors from this method do not correct for time-series autocorrelation. This is usually not a problem for stock trading since stocks have weak time-series autocorrelation in daily and weekly holding periods, but autocorrelation is stronger over long horizons.[3] This means Fama–MacBeth regressions may be inappropriate to use in many corporate finance settings where project holding periods tend to be long. For alternative methods of correcting standard errors for time series and cross-sectional correlation in the error term look into double clustering by firm and year.[4]
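The two-step estimation can be sketched on simulated panel data (all parameter values below are illustrative assumptions): time-series regressions recover each asset's beta, period-by-period cross-sectional regressions produce a time series of γ estimates, and their time averages estimate the premia, with Fama–MacBeth standard errors taken from the time-series spread of those estimates.

```python
import numpy as np

rng = np.random.default_rng(11)
n_assets, n_periods = 25, 600
gamma0, gamma1 = 0.01, 0.05            # true zero-beta rate and factor premium

beta = rng.uniform(0.5, 1.5, n_assets)               # true betas
f = gamma1 + 0.1 * rng.standard_normal(n_periods)    # factor; mean = premium
eps = 0.02 * rng.standard_normal((n_periods, n_assets))
r = gamma0 + np.outer(f, beta) + eps                 # panel of asset returns

# Step 1: one time-series regression per asset to estimate its beta.
X1 = np.column_stack([np.ones(n_periods), f])
beta_hat = np.linalg.lstsq(X1, r, rcond=None)[0][1]  # row of slopes

# Step 2: a cross-sectional regression on the estimated betas in each period.
X2 = np.column_stack([np.ones(n_assets), beta_hat])
gammas = np.linalg.lstsq(X2, r.T, rcond=None)[0]     # shape (2, n_periods)

g0_hat, g1_hat = gammas.mean(axis=1)                 # average over time
se = gammas.std(axis=1, ddof=1) / np.sqrt(n_periods) # Fama-MacBeth SEs
print(g0_hat, g1_hat, se)
```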
https://en.wikipedia.org/wiki/Fama%E2%80%93MacBeth_regression
Non-linear least squares is the form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m ≥ n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, and (v) Box–Cox transformed regressors ($m(x,\theta_i)=\theta_1+\theta_2 x^{(\theta_3)}$).

Consider a set of m data points, $(x_1,y_1),(x_2,y_2),\dots,(x_m,y_m)$, and a curve (model function) $\hat{y}=f(x,\boldsymbol{\beta})$, which in addition to the variable x also depends on n parameters, $\boldsymbol{\beta}=(\beta_1,\beta_2,\dots,\beta_n)$, with m ≥ n. It is desired to find the vector $\boldsymbol{\beta}$ of parameters such that the curve best fits the given data in the least squares sense, that is, the sum of squares

$$S=\sum_{i=1}^{m} r_i^2$$

is minimized, where the residuals (in-sample prediction errors) $r_i$ are given by $r_i = y_i - f(x_i,\boldsymbol{\beta})$ for $i=1,2,\dots,m$.

The minimum value of S occurs when the gradient is zero.
Since the model contains n parameters, there are n gradient equations:

$$\frac{\partial S}{\partial \beta_j} = 2\sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0 \quad (j=1,\ldots,n).$$

In a nonlinear system, the derivatives $\frac{\partial r_i}{\partial \beta_j}$ are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively; that is, the values are obtained by successive approximation,

$$\beta_j \approx \beta_j^{k+1} = \beta_j^{k} + \Delta\beta_j.$$

Here, k is an iteration number and the vector of increments $\Delta\boldsymbol{\beta}$ is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about $\boldsymbol{\beta}^k$:

$$f(x_i,\boldsymbol{\beta}) \approx f(x_i,\boldsymbol{\beta}^k) + \sum_j \frac{\partial f(x_i,\boldsymbol{\beta}^k)}{\partial \beta_j}\left(\beta_j-\beta_j^k\right) = f(x_i,\boldsymbol{\beta}^k) + \sum_j J_{ij}\,\Delta\beta_j.$$

The Jacobian matrix, J, is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next.
Thus, in terms of the linearized model, $\frac{\partial r_i}{\partial \beta_j} = -J_{ij}$, and the residuals are given by

$$\Delta y_i = y_i - f(x_i,\boldsymbol{\beta}^k),$$

$$r_i = y_i - f(x_i,\boldsymbol{\beta}) = \left(y_i - f(x_i,\boldsymbol{\beta}^k)\right) + \left(f(x_i,\boldsymbol{\beta}^k) - f(x_i,\boldsymbol{\beta})\right) \approx \Delta y_i - \sum_{s=1}^{n} J_{is}\,\Delta\beta_s.$$

Substituting these expressions into the gradient equations, they become

$$-2\sum_{i=1}^{m} J_{ij}\left(\Delta y_i - \sum_{s=1}^{n} J_{is}\,\Delta\beta_s\right) = 0,$$

which, on rearrangement, become n simultaneous linear equations, the normal equations:

$$\sum_{i=1}^{m}\sum_{s=1}^{n} J_{ij}J_{is}\,\Delta\beta_s = \sum_{i=1}^{m} J_{ij}\,\Delta y_i \qquad (j=1,\dots,n).$$

The normal equations are written in matrix notation as

$$\left(\mathbf{J}^\mathsf{T}\mathbf{J}\right)\Delta\boldsymbol{\beta} = \mathbf{J}^\mathsf{T}\,\Delta\mathbf{y}.$$

These equations form the basis for the Gauss–Newton algorithm for a non-linear least squares problem. Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives; formulas linear in J may appear with a factor of −1 in other articles or the literature.
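A minimal Gauss–Newton loop built on these normal equations might look like this. The exponential model $f(x,\boldsymbol{\beta})=\beta_1 e^{\beta_2 x}$, the data and the starting values are illustrative choices, not part of the article:

```python
import numpy as np

def gauss_newton(x, y, beta, n_iter=20):
    """Minimize the sum of squares of r_i = y_i - f(x_i, beta) for
    f(x, beta) = beta[0] * exp(beta[1] * x) by repeatedly solving the
    normal equations (J^T J) delta = J^T dy of the linearized model."""
    beta = np.asarray(beta, dtype=float)
    for _ in range(n_iter):
        fx = beta[0] * np.exp(beta[1] * x)
        dy = y - fx                                   # residuals Delta y_i
        # Jacobian J_ij = d f(x_i, beta) / d beta_j for this model.
        J = np.column_stack([np.exp(beta[1] * x),
                             beta[0] * x * np.exp(beta[1] * x)])
        delta = np.linalg.solve(J.T @ J, J.T @ dy)    # shift vector
        beta = beta + delta
    return beta

x = np.linspace(0.0, 1.0, 8)
y = 2.0 * np.exp(1.5 * x)                 # exact data from beta = (2, 1.5)
beta_hat = gauss_newton(x, y, beta=[1.5, 1.3])
```

On exact data with a reasonable starting point, the iteration recovers the generating parameters; with a poor start it can diverge, which motivates the safeguards discussed later in the article.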
When the observations are not equally reliable, a weighted sum of squares may be minimized,

$$S = \sum_{i=1}^{m} W_{ii} r_i^2.$$

Each element of the diagonal weight matrix W should, ideally, be equal to the reciprocal of the error variance of the measurement.[1] The normal equations are then, more generally,

$$\left(\mathbf{J}^\mathsf{T}\mathbf{W}\mathbf{J}\right)\Delta\boldsymbol{\beta} = \mathbf{J}^\mathsf{T}\mathbf{W}\,\Delta\mathbf{y}.$$

In linear least squares the objective function, S, is a quadratic function of the parameters:

$$S = \sum_i W_{ii}\left(y_i - \sum_j X_{ij}\beta_j\right)^2.$$

When there is only one parameter the graph of S with respect to that parameter will be a parabola. With two or more parameters the contours of S with respect to any pair of parameters will be concentric ellipses (assuming that the normal equations matrix $\mathbf{X}^\mathsf{T}\mathbf{W}\mathbf{X}$ is positive definite). The minimum parameter values are to be found at the centre of the ellipses. The geometry of the general objective function can be described as an elliptical paraboloid. In NLLSQ the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model:

$$S \approx \sum_i W_{ii}\left(y_i - \sum_j J_{ij}\beta_j\right)^2.$$

The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about, as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters.
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation. Both the observed and calculated data are displayed on a screen, and the parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this will be a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can also be created using transformations or linearizations. Better still, evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates.[citation needed] Hybrid algorithms that use randomization and elitism, followed by Newton methods, have been shown to be useful and computationally efficient.[citation needed]

Any method among the ones described below can be applied to find a solution.

The common-sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However, this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is

$$\left|\frac{S^k - S^{k+1}}{S^k}\right| < 0.0001.$$

The value 0.0001 is somewhat arbitrary and may need to be changed; in particular, it may need to be increased when experimental errors are large. An alternative criterion is

$$\left|\frac{\Delta\beta_j}{\beta_j}\right| < 0.001, \qquad j = 1,\dots,n.$$

Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters.
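The two convergence tests quoted above are one-liners; a sketch in pure Python, with the thresholds as given in the text:

```python
def sum_sq_converged(S_prev, S_new, tol=1e-4):
    """Stop when the relative change |(S^k - S^(k+1)) / S^k| < tol."""
    return abs((S_prev - S_new) / S_prev) < tol

def params_converged(delta, beta, tol=1e-3):
    """Alternative test: every relative shift |dbeta_j / beta_j| < tol."""
    return all(abs(d / b) < tol for d, b in zip(delta, beta))
```

In a refinement loop these would be checked after each iteration, e.g. `if sum_sq_converged(S_hist[-2], S_hist[-1]): break`.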
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation

$$\frac{\partial f(x_i,\boldsymbol{\beta})}{\partial \beta_j} \approx \frac{\delta f(x_i,\boldsymbol{\beta})}{\delta \beta_j}$$

is obtained by calculating $f(x_i,\boldsymbol{\beta})$ for $\beta_j$ and $\beta_j + \delta\beta_j$. The size of the increment, $\delta\beta_j$, should be chosen so that the numerical derivative is not subject to approximation error by being too large, or round-off error by being too small. Some information is given in the corresponding section on the Weighted least squares page.

Multiple minima can occur in a variety of circumstances. Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.

When multiple minima exist there is an important consequence: the objective function will have a stationary point (e.g. a maximum or a saddle point) somewhere between two minima. The normal equations matrix is not positive definite at a stationary point in the objective function, because the gradient vanishes and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a stationary point will be ill-conditioned and should be avoided as a starting point.
For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the Lorentzian is zero.[2]

A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumptions in most iterative minimization algorithms. When a linear approximation is valid, the model can directly be used for inference with a generalized least squares, where the equations of the Linear Template Fit[3] apply.

Another example of a linear approximation is when the model is a simple exponential function,

$$f(x_i,\boldsymbol{\beta}) = \alpha e^{\beta x_i},$$

which can be transformed into a linear model by taking logarithms:

$$\log f(x_i,\boldsymbol{\beta}) = \log\alpha + \beta x_i.$$

Graphically this corresponds to working on a semi-log plot. The sum of squares becomes

$$S = \sum_i \left(\log y_i - \log\alpha - \beta x_i\right)^2.$$

This procedure should be avoided unless the errors are multiplicative and log-normally distributed, because it can give misleading results. This comes from the fact that whatever the experimental errors on y might be, the errors on log y are different. Therefore, when the transformed sum of squares is minimized, different results will be obtained both for the parameter values and their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates.
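For noise-free data the log transformation reduces the exponential fit to ordinary linear least squares; the following sketch illustrates this (data and variable names are my own):

```python
import numpy as np

# Exponential data y = alpha * exp(beta * x); taking logarithms gives
# log y = log alpha + beta * x, a model linear in (log alpha, beta).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
alpha, beta = 3.0, 0.7
y = alpha * np.exp(beta * x)            # noise-free, so the transform is exact

A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
alpha_hat, beta_hat = np.exp(coef[0]), coef[1]
```

With noisy data this transformed fit minimizes errors in log y rather than in y, which is exactly the bias the text warns about.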
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters $V_{\max}$ and $K_m$:

$$v = \frac{V_{\max}[S]}{K_m + [S]}.$$

The Lineweaver–Burk plot

$$\frac{1}{v} = \frac{1}{V_{\max}} + \frac{K_m}{V_{\max}[S]}$$

of $\frac{1}{v}$ against $\frac{1}{[S]}$ is linear in the parameters $\frac{1}{V_{\max}}$ and $\frac{K_m}{V_{\max}}$, but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable [S].

The normal equations

$$\left(\mathbf{J}^\mathsf{T}\mathbf{W}\mathbf{J}\right)\Delta\boldsymbol{\beta} = \left(\mathbf{J}^\mathsf{T}\mathbf{W}\right)\Delta\mathbf{y}$$

may be solved for $\Delta\boldsymbol{\beta}$ by Cholesky decomposition, as described in linear least squares. The parameters are updated iteratively,

$$\boldsymbol{\beta}^{k+1} = \boldsymbol{\beta}^{k} + \Delta\boldsymbol{\beta},$$

where k is an iteration number. While this method may be adequate for simple models, it will fail if divergence occurs. Therefore, protection against divergence is essential. If divergence occurs, a simple expedient is to reduce the length of the shift vector, $\Delta\boldsymbol{\beta}$, by a fraction f:

$$\boldsymbol{\beta}^{k+1} = \boldsymbol{\beta}^{k} + f\,\Delta\boldsymbol{\beta}.$$

For example, the length of the shift vector may be successively halved until the new value of the objective function is less than its value at the last iteration. The fraction f could be optimized by a line search.[4] As each trial value of f requires the objective function to be re-calculated, it is not worth optimizing its value too stringently.
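The successive-halving safeguard described above can be sketched as follows (the helper name and the halving limit are illustrative choices):

```python
def damped_update(beta, delta, sum_sq, max_halvings=10):
    """Try beta + f*delta with f = 1, 1/2, 1/4, ... until the sum of
    squares decreases; give up and keep beta if it never does."""
    S0 = sum_sq(beta)
    f = 1.0
    for _ in range(max_halvings):
        trial = beta + f * delta
        if sum_sq(trial) < S0:
            return trial
        f *= 0.5          # halve the length of the shift vector
    return beta

# Toy one-parameter objective with an overly long proposed shift.
sum_sq = lambda b: (b - 3.0) ** 2
beta_new = damped_update(5.0, -10.0, sum_sq)   # full step would overshoot
```

Here the full step and the half step both increase S, so the update is accepted at f = 1/4, landing at 2.5 with a reduced sum of squares.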
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters $\boldsymbol{\beta}^k$.

If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective, that is, the fraction f required to avoid divergence is very small, the direction must be changed. This can be achieved by using the Marquardt parameter.[5] In this method the normal equations are modified:

$$\left(\mathbf{J}^\mathsf{T}\mathbf{W}\mathbf{J} + \lambda\mathbf{I}\right)\Delta\boldsymbol{\beta} = \left(\mathbf{J}^\mathsf{T}\mathbf{W}\right)\Delta\mathbf{y},$$

where $\lambda$ is the Marquardt parameter and I is an identity matrix. Increasing the value of $\lambda$ has the effect of changing both the direction and the length of the shift vector. The shift vector is rotated towards the direction of steepest descent: when

$$\lambda\mathbf{I} \gg \mathbf{J}^\mathsf{T}\mathbf{W}\mathbf{J}, \qquad \Delta\boldsymbol{\beta} \approx \frac{1}{\lambda}\mathbf{J}^\mathsf{T}\mathbf{W}\,\Delta\mathbf{y},$$

and $\mathbf{J}^\mathsf{T}\mathbf{W}\,\Delta\mathbf{y}$ is the steepest descent vector. So, when $\lambda$ becomes very large, the shift vector becomes a small fraction of the steepest descent vector.

Various strategies have been proposed for the determination of the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently.
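Relative to a plain Gauss–Newton step, the Marquardt modification changes one line (the linear solve) and adds a schedule for λ. The sketch below uses an unweighted exponential model (W = I) and an increase/decrease-by-10 schedule; both are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def marquardt_fit(x, y, beta, lam=1e-3, n_iter=50):
    """Levenberg-Marquardt for f(x, beta) = beta[0] * exp(beta[1] * x):
    solve (J^T J + lambda I) delta = J^T dy, decreasing lambda after a
    successful step and increasing it after a failed one."""
    beta = np.asarray(beta, dtype=float)
    sum_sq = lambda b: float(np.sum((y - b[0] * np.exp(b[1] * x)) ** 2))
    for _ in range(n_iter):
        fx = beta[0] * np.exp(beta[1] * x)
        dy = y - fx
        J = np.column_stack([np.exp(beta[1] * x),
                             beta[0] * x * np.exp(beta[1] * x)])
        delta = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ dy)
        if sum_sq(beta + delta) < sum_sq(beta):
            beta, lam = beta + delta, lam / 10.0  # accept; trust the model more
        else:
            lam *= 10.0          # reject; rotate toward steepest descent
    return beta

x = np.linspace(0.0, 1.0, 8)
y = 2.0 * np.exp(1.5 * x)
beta_hat = marquardt_fit(x, y, beta=[1.0, 1.0])
```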
Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian.[6] A bound for this value is given by $1/\operatorname{tr}\left(\mathbf{J}^\mathsf{T}\mathbf{W}\mathbf{J}\right)^{-1}$, where tr is the trace function.[7]

The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written as

$$\mathbf{r} = \Delta\mathbf{y} - \mathbf{J}\,\Delta\boldsymbol{\beta}.$$

The Jacobian is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process:

$$\mathbf{J} = \mathbf{Q}\mathbf{R},$$

where Q is an orthogonal $m \times m$ matrix and R is an $m \times n$ matrix which is partitioned into an $n \times n$ block, $\mathbf{R}_n$, and an $(m-n) \times n$ zero block; $\mathbf{R}_n$ is upper triangular:

$$\mathbf{R} = \begin{bmatrix}\mathbf{R}_n \\ \mathbf{0}\end{bmatrix}.$$

The residual vector is left-multiplied by $\mathbf{Q}^\mathsf{T}$.
$$\mathbf{Q}^\mathsf{T}\mathbf{r} = \mathbf{Q}^\mathsf{T}\,\Delta\mathbf{y} - \mathbf{R}\,\Delta\boldsymbol{\beta} = \begin{bmatrix}\left(\mathbf{Q}^\mathsf{T}\,\Delta\mathbf{y} - \mathbf{R}\,\Delta\boldsymbol{\beta}\right)_n \\ \left(\mathbf{Q}^\mathsf{T}\,\Delta\mathbf{y}\right)_{m-n}\end{bmatrix}$$

This has no effect on the sum of squares, since $S = \mathbf{r}^\mathsf{T}\mathbf{Q}\mathbf{Q}^\mathsf{T}\mathbf{r} = \mathbf{r}^\mathsf{T}\mathbf{r}$, because Q is orthogonal. The minimum value of S is attained when the upper block is zero. Therefore, the shift vector is found by solving

$$\mathbf{R}_n\,\Delta\boldsymbol{\beta} = \left(\mathbf{Q}^\mathsf{T}\,\Delta\mathbf{y}\right)_n.$$

These equations are easily solved, as $\mathbf{R}_n$ is upper triangular.

A variant of the method of orthogonal decomposition involves singular value decomposition, in which R is diagonalized by further orthogonal transformations:

$$\mathbf{J} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\mathsf{T},$$

where $\mathbf{U}$ is orthogonal, $\boldsymbol{\Sigma}$ is a diagonal matrix of singular values, and $\mathbf{V}$ is the orthogonal matrix of the eigenvectors of $\mathbf{J}^\mathsf{T}\mathbf{J}$, or equivalently the right singular vectors of $\mathbf{J}$. In this case the shift vector is given by

$$\Delta\boldsymbol{\beta} = \mathbf{V}\boldsymbol{\Sigma}^{-1}\left(\mathbf{U}^\mathsf{T}\,\Delta\mathbf{y}\right)_n.$$

The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares.
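The orthogonal-decomposition route for a single shift-vector solve can be sketched as follows; when J has full column rank it gives the same shift as the normal equations (names illustrative):

```python
import numpy as np

def shift_via_qr(J, dy):
    """Solve min ||dy - J dbeta|| by QR decomposition: with J = Q R in
    reduced form, R dbeta = Q^T dy, solved against the triangular R."""
    Q, R = np.linalg.qr(J)               # reduced QR: Q is m x n, R is n x n
    return np.linalg.solve(R, Q.T @ dy)  # R is upper triangular

rng = np.random.default_rng(1)
J = rng.normal(size=(6, 2))
dy = rng.normal(size=6)
d_qr = shift_via_qr(J, dy)
d_ne = np.linalg.solve(J.T @ J, J.T @ dy)   # normal-equations route
```

The QR route avoids forming J^T J and is therefore better conditioned when J is nearly rank-deficient.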
The application of singular value decomposition is discussed in detail in Lawson and Hanson.[6]

There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems. Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods. More detailed descriptions of these, and other, methods are available in Numerical Recipes, together with computer code in various languages.
https://en.wikipedia.org/wiki/Non-linear_least_squares
Numerical methods for linear least squares entails the numerical analysis of linear least squares problems.

A general approach to the least squares problem

$$\min_{\boldsymbol{\beta}} \big\| \mathbf{y} - X\boldsymbol{\beta} \big\|^2$$

can be described as follows. Suppose that we can find an n-by-m matrix S such that XS is an orthogonal projection onto the image of X. Then a solution to our minimization problem is given by

$$\boldsymbol{\beta} = S\mathbf{y},$$

simply because $X\boldsymbol{\beta} = X(S\mathbf{y})$ is exactly the sought-for orthogonal projection of $\mathbf{y}$ onto the image of X (the image of X is just the subspace generated by the column vectors of X). A few popular ways to find such a matrix S are described below.

The equation $(\mathbf{X}^\mathsf{T}\mathbf{X})\boldsymbol{\beta} = \mathbf{X}^\mathsf{T}\mathbf{y}$ is known as the normal equation. The algebraic solution of the normal equations with a full-rank matrix $\mathbf{X}^\mathsf{T}\mathbf{X}$ can be written as

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^\mathsf{T}\mathbf{X}\right)^{-1}\mathbf{X}^\mathsf{T}\mathbf{y} = \mathbf{X}^{+}\mathbf{y},$$

where $\mathbf{X}^{+}$ is the Moore–Penrose pseudoinverse of X. Although this equation is correct and can work in many applications, it is not computationally efficient to invert the normal-equations matrix (the Gramian matrix). An exception occurs in numerical smoothing and differentiation, where an analytical expression is required.

If the matrix $\mathbf{X}^\mathsf{T}\mathbf{X}$ is well-conditioned and positive definite, implying that it has full rank, the normal equations can be solved directly by using the Cholesky decomposition $\mathbf{R}^\mathsf{T}\mathbf{R}$, where R is an upper triangular matrix, giving

$$\mathbf{R}^\mathsf{T}\mathbf{R}\hat{\boldsymbol{\beta}} = \mathbf{X}^\mathsf{T}\mathbf{y}.$$

The solution is obtained in two stages: a forward substitution step, solving for z,

$$\mathbf{R}^\mathsf{T}\mathbf{z} = \mathbf{X}^\mathsf{T}\mathbf{y},$$

followed by a backward substitution, solving for $\hat{\boldsymbol{\beta}}$:

$$\mathbf{R}\hat{\boldsymbol{\beta}} = \mathbf{z}.$$

Both substitutions are facilitated by the triangular nature of R.

Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable because they avoid forming the product $\mathbf{X}^\mathsf{T}\mathbf{X}$.
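The two-stage Cholesky solution can be sketched as below. Note that `np.linalg.cholesky` returns the lower factor L, so R is its transpose; in practice the two triangular systems would be handed to a dedicated triangular solver, and the generic solve here is just for brevity:

```python
import numpy as np

def lstsq_cholesky(X, y):
    """Solve the normal equations (X^T X) beta = X^T y via the Cholesky
    factorization X^T X = R^T R with R upper triangular: a forward
    substitution R^T z = X^T y followed by a back substitution R beta = z."""
    R = np.linalg.cholesky(X.T @ X).T     # numpy returns the lower factor L = R^T
    z = np.linalg.solve(R.T, X.T @ y)     # forward substitution stage
    return np.linalg.solve(R, z)          # backward substitution stage

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
beta_chol = lstsq_cholesky(X, y)
beta_ref = np.linalg.lstsq(X, y, rcond=None)[0]   # reference solution
```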
The residuals are written in matrix notation as

$$\mathbf{r} = \mathbf{y} - X\hat{\boldsymbol{\beta}}.$$

The matrix X is subjected to an orthogonal decomposition, e.g., the QR decomposition as follows:

$$X = Q\begin{bmatrix}R \\ 0\end{bmatrix},$$

where Q is an $m \times m$ orthogonal matrix ($Q^\mathsf{T}Q = I$) and R is an $n \times n$ upper triangular matrix with $r_{ii} > 0$.

The residual vector is left-multiplied by $Q^\mathsf{T}$:

$$Q^\mathsf{T}\mathbf{r} = Q^\mathsf{T}\mathbf{y} - \begin{bmatrix}R \\ 0\end{bmatrix}\hat{\boldsymbol{\beta}} = \begin{bmatrix}\mathbf{u} - R\hat{\boldsymbol{\beta}} \\ \mathbf{v}\end{bmatrix},$$

where $\mathbf{u}$ denotes the first n and $\mathbf{v}$ the remaining $m-n$ elements of $Q^\mathsf{T}\mathbf{y}$. Because Q is orthogonal, the sum of squares of the residuals, s, may be written as

$$s = \|\mathbf{r}\|^2 = \big\|\mathbf{u} - R\hat{\boldsymbol{\beta}}\big\|^2 + \|\mathbf{v}\|^2.$$

Since v doesn't depend on β, the minimum value of s is attained when the upper block is zero. Therefore, the parameters are found by solving

$$R\hat{\boldsymbol{\beta}} = \mathbf{u}.$$

These equations are easily solved, as R is upper triangular.

An alternative decomposition of X is the singular value decomposition (SVD)[1]

$$X = U\Sigma V^\mathsf{T},$$

where U is an m-by-m orthogonal matrix, V is an n-by-n orthogonal matrix and $\Sigma$ is an m-by-n matrix with all its elements outside of the main diagonal equal to 0. The pseudoinverse of $\Sigma$ is easily obtained by inverting its non-zero diagonal elements and transposing, so that $\mathbf{X}^{+} = V\Sigma^{+}U^\mathsf{T}$. Hence,

$$\mathbf{X}\mathbf{X}^{+} = U\Sigma V^\mathsf{T}\,V\Sigma^{+}U^\mathsf{T} = UPU^\mathsf{T},$$

where P is obtained from $\Sigma$ by replacing its non-zero diagonal elements with ones. Since $(\mathbf{X}\mathbf{X}^{+})^{*} = \mathbf{X}\mathbf{X}^{+}$ (the property of the pseudoinverse), the matrix $UPU^\mathsf{T}$ is an orthogonal projection onto the image (column-space) of X. In accordance with the general approach described in the introduction above (find XS which is an orthogonal projection), $S = \mathbf{X}^{+}$, and thus

$$\hat{\boldsymbol{\beta}} = V\Sigma^{+}U^\mathsf{T}\mathbf{y}$$

is a solution of the least squares problem. This method is the most computationally intensive, but is particularly useful if the normal equations matrix, $\mathbf{X}^\mathsf{T}\mathbf{X}$, is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution.
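The SVD solution can be sketched as below (illustrative; the reduced SVD is used so that Σ is square and its pseudoinverse is just an elementwise reciprocal):

```python
import numpy as np

def lstsq_svd(X, y):
    """Least squares via X = U Sigma V^T: invert the non-zero singular
    values to form the pseudoinverse, so beta = V Sigma^+ U^T y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((U.T @ y) / s)        # Sigma^+ applied elementwise

rng = np.random.default_rng(3)
X = rng.normal(size=(15, 4))
y = rng.normal(size=15)
beta_svd = lstsq_svd(X, y)
beta_ref = np.linalg.lstsq(X, y, rcond=None)[0]   # reference solution
```

For an ill-conditioned X, the division by `s` is where small singular values would amplify noise; zeroing their reciprocals instead yields the truncated-SVD variant.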
This can be cured with the truncated SVD approach, giving a more stable and exact answer by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.

The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error.

Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point.

Fitting of linear models by least squares often, but not always, arises in the context of statistical analysis. It can therefore be important that considerations of computation efficiency for such problems extend to all of the auxiliary quantities required for such analyses, and are not restricted to the formal solution of the linear least squares problem.

Matrix calculations, like any other, are affected by rounding errors. An early summary of these effects, regarding the choice of computation methods for matrix inversion, was provided by Wilkinson.[2]
https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares
System identification is a method of identifying or measuring the mathematical model of a system from measurements of the system inputs and outputs. The applications of system identification include any system where the inputs and outputs can be measured, and include industrial processes, control systems, economic data, biology and the life sciences, medicine, social systems and many more.

A nonlinear system is defined as any system that is not linear, that is, any system that does not satisfy the superposition principle. This negative definition tends to obscure that there are very many different types of nonlinear systems. Historically, system identification for nonlinear systems[1][2] has developed by focusing on specific classes of system and can be broadly categorized into five basic approaches, each defined by a model class.

There are four steps to be followed for system identification: data gathering, model postulation, parameter identification, and model validation. Data gathering is considered the first and essential part in identification terminology, used as the input for the model which is prepared later. It consists of selecting an appropriate data set, pre-processing and processing. It involves the implementation of the known algorithms together with the transcription of flight tapes, data storage and data management, calibration, processing, analysis, and presentation. Moreover, model validation is necessary to gain confidence in, or reject, a particular model. In particular, the parameter estimation and the model validation are integral parts of the system identification. Validation refers to the process of confirming the conceptual model and demonstrating an adequate correspondence between the computational results of the model and the actual data.[3]

The early work was dominated by methods based on the Volterra series, which in the discrete time case can be expressed as

$$y(k) = \sum_{\ell=1}^{\infty}\;\sum_{m_1=0}^{\infty}\cdots\sum_{m_\ell=0}^{\infty} h_\ell(m_1,\ldots,m_\ell)\prod_{i=1}^{\ell} u(k-m_i),$$

where u(k), y(k); k = 1, 2, 3, ...
are the measured input and output respectively, and $h_\ell(m_1,\ldots,m_\ell)$ is the ℓth-order Volterra kernel, or ℓth-order nonlinear impulse response. The Volterra series is an extension of the linear convolution integral. Most of the earlier identification algorithms assumed that just the first two, linear and quadratic, Volterra kernels are present and used special inputs such as Gaussian white noise and correlation methods to identify the two Volterra kernels. In most of these methods the input has to be Gaussian and white, which is a severe restriction for many real processes. These results were later extended to include the first three Volterra kernels, to allow different inputs, and other related developments including the Wiener series. A very important body of work was developed by Wiener, Lee, Bose and colleagues at MIT from the 1940s to the 1960s, including the famous Lee and Schetzen method.[4][5] While these methods are still actively studied today, there are several basic restrictions. These include the necessity of knowing the number of Volterra series terms a priori, the use of special inputs, and the large number of estimates that have to be identified. For example, for a system where the first-order Volterra kernel is described by, say, 30 samples, 30 × 30 = 900 points will be required for the second-order kernel, 30 × 30 × 30 = 27,000 for the third order, and so on; hence the amount of data required to provide good estimates becomes excessively large.[6] These numbers can be reduced by exploiting certain symmetries, but the requirements are still excessive irrespective of what algorithm is used for the identification.

Because of the problems of identifying Volterra models, other model forms were investigated as a basis for system identification for nonlinear systems.
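A Volterra series truncated at second order can be evaluated directly from its definition; the kernels and input below are arbitrary illustrative choices:

```python
import numpy as np

def volterra2(u, h1, h2):
    """Evaluate a Volterra series truncated at second order:
    y(k) = sum_{m1} h1[m1] u(k-m1)
         + sum_{m1} sum_{m2} h2[m1, m2] u(k-m1) u(k-m2),
    with u(j) taken as 0 for j < 0."""
    M = len(h1)
    y = np.zeros(len(u))
    for k in range(len(u)):
        for m1 in range(min(M, k + 1)):
            y[k] += h1[m1] * u[k - m1]
            for m2 in range(min(M, k + 1)):
                y[k] += h2[m1, m2] * u[k - m1] * u[k - m2]
    return y

u = np.array([1.0, 0.5, -0.3, 0.8])
h1 = np.array([1.0, 0.4])            # first-order (linear) kernel
h2 = np.zeros((2, 2))                # quadratic kernel switched off
y_lin = volterra2(u, h1, h2)
```

With the quadratic kernel set to zero, the first-order term reduces to an ordinary linear convolution, illustrating the remark that the series extends the convolution integral.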
Various forms of block-structured nonlinear models have been introduced or re-introduced.[6][7] The Hammerstein model consists of a static single-valued nonlinear element followed by a linear dynamic element.[8] The Wiener model is the reverse of this combination, so that the linear element occurs before the static nonlinear characteristic.[9] The Wiener–Hammerstein model consists of a static nonlinear element sandwiched between two dynamic linear elements, and several other model forms are available. The Hammerstein–Wiener model consists of a linear dynamic block sandwiched between two static nonlinear blocks.[10] The Urysohn model[11][12] is different from the other block models: it does not consist of a sequence of linear and nonlinear blocks, but describes both dynamic and static nonlinearities in the expression of the kernel of an operator.[13] All these models can be represented by a Volterra series, but in each case the Volterra kernels take on a special form. Identification consists of correlation-based and parameter estimation methods. The correlation methods exploit certain properties of these systems, which means that if specific inputs are used, often white Gaussian noise, the individual elements can be identified one at a time. This results in manageable data requirements, and the individual blocks can sometimes be related to components in the system under study. More recent results are based on parameter estimation and neural network based solutions. Many results have been introduced, and these systems continue to be studied in depth. One problem is that these methods are only applicable to a very special form of model in each case, and usually this model form has to be known prior to identification.

Artificial neural networks try loosely to imitate the network of neurons in the brain, where computation takes place through a large number of simple processing elements.
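Returning to the block-structured models described above, the defining Hammerstein structure (a static nonlinearity feeding a linear dynamic element) can be simulated directly; the particular nonlinearity u² and first-order filter here are illustrative choices of my own:

```python
import numpy as np

def hammerstein(u, a=0.5, b=1.0):
    """Hammerstein model: a static nonlinearity w = u^2 feeding a linear
    first-order dynamic element y(k) = a*y(k-1) + b*w(k)."""
    w = u ** 2                     # static single-valued nonlinear element
    y = np.zeros_like(u, dtype=float)
    for k in range(len(u)):
        y[k] = a * (y[k - 1] if k > 0 else 0.0) + b * w[k]
    return y

u = np.array([1.0, -1.0, 2.0, 0.0])
y = hammerstein(u)   # -> [1.0, 1.5, 4.75, 2.375]
```

Swapping the order of the two blocks (filter first, then the static nonlinearity) would give the corresponding Wiener model.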
A typical neural network consists of a number of simple processing units interconnected to form a complex network. Layers of such units are arranged so that data is entered at the input layer and passes through either one or several intermediate layers before reaching the output layer. In supervised learning the network is trained by operating on the difference between the actual output and the desired output of the network, the prediction error, to change the connection strengths between the nodes. By iterating, the weights are modified until the output error reaches an acceptable level. This process is called machine learning because the network adjusts the weights so that the output pattern is reproduced. Neural networks have been extensively studied, and there are many excellent textbooks devoted to this topic in general,[1][14] as well as more focused textbooks which emphasise control and systems applications.[1][15] There are two main problem types that can be studied using neural networks: static problems and dynamic problems. Static problems include pattern recognition, classification, and approximation. Dynamic problems involve lagged variables and are more appropriate for system identification and related applications. Depending on the architecture of the network, the training problem can be either nonlinear-in-the-parameters, which involves optimisation, or linear-in-the-parameters, which can be solved using classical approaches. The training algorithms can be categorised into supervised, unsupervised, or reinforcement learning. Neural networks have excellent approximation properties, but these are usually based on standard function approximation results using, for example, the Weierstrass theorem, which applies equally well to polynomials, rational functions, and other well-known models. Neural networks have been applied extensively to system identification problems which involve nonlinear and dynamic relationships.
However, classical neural networks are purely static approximating machines: there are no dynamics within the network. Hence, when fitting dynamic models, all the dynamics arise by allocating lagged inputs and outputs to the input layer of the network. The training procedure then produces the best static approximation that relates the lagged variables assigned to the input nodes to the output. There are more complex network architectures, including recurrent networks,[1] that produce dynamics by introducing increasing orders of lagged variables to the input nodes. But in these cases it is very easy to over-specify the lags, and this can lead to overfitting and poor generalisation properties. Neural networks have several advantages: they are conceptually simple, easy to train and to use, and have excellent approximation properties; the concept of local and parallel processing is important, and this provides integrity and fault-tolerant behaviour. The biggest criticism of the classical neural network models is that the models produced are completely opaque and usually cannot be written down or analysed. It is therefore very difficult to know what is causing what, to analyse the model, or to compute dynamic characteristics from the model. Some of these points will not be relevant to all applications, but they are for dynamic modelling. The nonlinear autoregressive moving average model with exogenous inputs (NARMAX model) can represent a wide class of nonlinear systems,[2] and is defined as

y(k) = F[y(k−1), …, y(k−n_y), u(k−d), …, u(k−d−n_u), e(k−1), …, e(k−n_e)] + e(k)

where y(k), u(k) and e(k) are the system output, input, and noise sequences respectively; n_y, n_u, and n_e are the maximum lags for the system output, input and noise; F[•] is some nonlinear function; and d is a time delay, typically set to d = 1. The model is essentially an expansion of past inputs, outputs and noise terms.
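The practice of allocating lagged inputs and outputs to the input nodes of a static approximator can be sketched as follows. Ordinary least squares stands in here for the network, and the data-generating system is an assumed toy example:

```python
import numpy as np

def lagged_matrix(y, u, ny=1, nu=1):
    """Build a regressor matrix whose rows are
    [y[k-1], ..., y[k-ny], u[k-1], ..., u[k-nu]] for k = max(ny, nu)...
    Feeding these lags to a *static* approximator is how non-recurrent
    networks (or any static model) capture dynamics."""
    start = max(ny, nu)
    rows = [[y[k - i] for i in range(1, ny + 1)] +
            [u[k - j] for j in range(1, nu + 1)]
            for k in range(start, len(y))]
    return np.array(rows), np.array(y[start:])

# Illustrative noiseless data from y[k] = 0.7*y[k-1] + 0.3*u[k-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.7 * y[k - 1] + 0.3 * u[k - 1]

X, target = lagged_matrix(y, u, ny=1, nu=1)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)  # static fit recovers [0.7, 0.3]
```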
Because the noise is modelled explicitly, unbiased estimates of the system model can be obtained in the presence of unobserved, highly correlated and nonlinear noise. The Volterra models, the block-structured models and many neural network architectures can all be considered as subsets of the NARMAX model. Since NARMAX was introduced, by proving what class of nonlinear systems can be represented by this model, many results and algorithms have been derived based around this description. Most of the early work was based on polynomial expansions of the NARMAX model. These are still the most popular methods today, but other more complex forms based on wavelets and other expansions have been introduced to represent severely nonlinear and highly complex nonlinear systems. A significant proportion of nonlinear systems can be represented by a NARMAX model, including systems with exotic behaviours such as chaos, bifurcations, and subharmonics. While NARMAX started as the name of a model, it has now developed into a philosophy of nonlinear system identification.[2] The NARMAX approach consists of several steps. Structure detection forms the most fundamental part of NARMAX. For example, a NARMAX model which consists of one lagged input and one lagged output term, three lagged noise terms, expanded as a cubic polynomial, would consist of eighty-two possible candidate terms. This number of candidate terms arises because the expansion by definition includes all possible combinations within the cubic expansion. Naively proceeding to estimate a model which includes all these terms and then pruning will cause numerical and computational problems and should always be avoided. However, often only a few terms are important in the model. Structure detection, which aims to select terms one at a time, is therefore critically important. These objectives can easily be achieved by using the Orthogonal Least Squares[2] algorithm and its derivatives to select the NARMAX model terms one at a time.
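A minimal sketch of term selection in the spirit of Orthogonal Least Squares is greedy forward selection: at each step, add the candidate regressor that most reduces the residual sum of squares. This is a simplified stand-in, not the full orthogonalised algorithm of the cited work:

```python
import numpy as np

def forward_select(candidates, target, n_terms):
    """Greedy forward selection: repeatedly add the candidate column
    that most reduces the residual sum of squares of a least-squares
    fit.  A simplified stand-in for Orthogonal Least Squares."""
    chosen, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_terms):
        best, best_rss = None, np.inf
        for j in remaining:
            cols = candidates[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, target, rcond=None)
            rss = float(np.sum((target - cols @ coef) ** 2))
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Six candidate terms, only two of which actually enter the model
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
y = 2.0 * X[:, 3] - 1.0 * X[:, 1]
selected = forward_select(X, y, n_terms=2)   # picks terms 3 and 1
```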
These ideas can also be adapted for pattern recognition and feature selection, and provide an alternative to principal component analysis, but with the advantage that the features are revealed as basis functions that are easily related back to the original problem. NARMAX methods are designed to do more than find the best approximating model. System identification can be divided into two aims. The first involves approximation, where the key aim is to develop a model that approximates the data set such that good predictions can be made. There are many applications where this approach is appropriate, for example in time-series prediction of the weather, stock prices, speech, target tracking, and pattern classification. In such applications the form of the model is not that important. The objective is to find an approximation scheme which produces the minimum prediction errors. A second objective of system identification, which includes the first objective as a subset, involves much more than just finding a model to achieve the best mean squared errors. This second aim is why the NARMAX philosophy was developed, and it is linked to the idea of finding the simplest model structure. The aim here is to develop models that reproduce the dynamic characteristics of the underlying system, to find the simplest possible model, and if possible to relate this to components and behaviours of the system under study. The core aim of this second approach to identification is therefore to identify and reveal the rule that represents the system. These objectives are relevant to model simulation and control systems design, but increasingly to applications in medicine, neuroscience, and the life sciences. Here the aim is to identify models, often nonlinear, that can be used to understand the basic mechanisms of how these systems operate and behave so that these can be manipulated and utilised. NARMAX methods have also been developed in the frequency and spatio-temporal domains.
In a general situation, it might be the case that some exogenous uncertain disturbance passes through the nonlinear dynamics and influences the outputs. A model class that is general enough to capture this situation is the class of stochastic nonlinear state-space models. A state-space model is usually obtained using first-principle laws,[16] such as mechanical, electrical, or thermodynamic physical laws, and the parameters to be identified usually have some physical meaning or significance. A discrete-time state-space model may be defined by the difference equations:

x(t+1) = f(x(t), u(t); θ) + w(t)
y(t) = g(x(t), u(t); θ) + v(t)

in which t is a positive integer referring to time. The functions f and g are general nonlinear functions. The first equation is known as the state equation and the second is known as the output equation. All the signals are modelled using stochastic processes. The process x(t) is known as the state process; w(t) and v(t) are usually assumed independent and mutually independent, such that w(t) ∼ p(w; θ) and v(t) ∼ p(v; θ). The parameter θ is usually a finite-dimensional (real) parameter to be estimated (using experimental data). Observe that the state process does not have to be a physical signal, and it is normally unobserved (not measured). The data set is given as a set of input–output pairs (y(t), u(t)) for t = 1, …, N for some finite positive integer N. Unfortunately, due to the nonlinear transformation of unobserved random variables, the likelihood function of the outputs is analytically intractable; it is given in terms of a multidimensional marginalization integral. Consequently, commonly used parameter estimation methods such as the maximum likelihood method or the prediction-error method based on the optimal one-step-ahead predictor[16] are analytically intractable.
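A minimal simulation of these difference equations, with illustrative choices of f and g and a Gaussian noise assumption (not part of the general definition), might look like:

```python
import numpy as np

def simulate_ssm(f, g, u, x0=0.0, q=0.1, r=0.1, seed=0):
    """Simulate the discrete-time stochastic state-space model
        x[t+1] = f(x[t], u[t]) + w[t],   y[t] = g(x[t], u[t]) + v[t],
    with w ~ N(0, q^2) and v ~ N(0, r^2).  The Gaussian noises and the
    particular f and g chosen below are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    N = len(u)
    x = np.empty(N + 1)
    x[0] = x0
    y = np.empty(N)
    for t in range(N):
        y[t] = g(x[t], u[t]) + r * rng.standard_normal()      # output equation
        x[t + 1] = f(x[t], u[t]) + q * rng.standard_normal()  # state equation
    return x[:-1], y

# Illustrative choices: saturating state dynamics, quadratic observation
f = lambda x, u: 0.8 * np.tanh(x) + u
g = lambda x, u: x ** 2
x, y = simulate_ssm(f, g, u=np.zeros(50))
```

Note that the state x is returned here only for inspection; in the identification setting it would be unobserved.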
Recently, algorithms based on sequential Monte Carlo methods have been used to approximate the conditional mean of the outputs or, in conjunction with the expectation–maximization algorithm, to approximate the maximum likelihood estimator.[17] These methods, albeit asymptotically optimal, are computationally demanding, and their use is limited to specific cases where the fundamental limitations of the employed particle filters can be avoided. An alternative solution is to apply the prediction-error method using a sub-optimal predictor.[18][19][20] The resulting estimator can be shown to be strongly consistent and asymptotically normal and can be evaluated using relatively simple algorithms.[20][21]
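A textbook bootstrap particle filter, sketched below for an assumed scalar model with an identity observation function, illustrates how sequential Monte Carlo approximates the conditional mean; it is not the specific algorithm of the cited references:

```python
import numpy as np

def bootstrap_pf(y, f, x0_particles, q, r, seed=0):
    """Minimal bootstrap particle filter for the assumed model
        x[t+1] = f(x[t]) + w[t],   y[t] = x[t] + v[t],
    with w ~ N(0, q^2), v ~ N(0, r^2).  Returns the weighted-mean
    (conditional-mean) estimate of x[t] at each step."""
    rng = np.random.default_rng(seed)
    particles = np.asarray(x0_particles, dtype=float)
    n = len(particles)
    means = []
    for yt in y:
        # weight each particle by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((yt - particles) / r) ** 2)
        w /= w.sum()
        means.append(float(np.dot(w, particles)))
        # resample according to the weights, then propagate through f
        idx = rng.choice(n, size=n, p=w)
        particles = f(particles[idx]) + q * rng.standard_normal(n)
    return means

# Illustrative run: a nearly constant state observed in noise
est = bootstrap_pf(y=[1.0, 1.1, 0.9, 1.0], f=lambda x: x,
                   x0_particles=np.linspace(-2.0, 3.0, 500), q=0.05, r=0.3)
```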
https://en.wikipedia.org/wiki/Nonlinear_system_identification
In calculus, a derivative test uses the derivatives of a function to locate the critical points of a function and determine whether each point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function. The usefulness of derivatives to find extrema is proved mathematically by Fermat's theorem of stationary points. The first-derivative test examines a function's monotonic properties (where the function is increasing or decreasing), focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function will achieve a local maximum at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it will achieve a local minimum at that point. If the function fails to "switch" and remains increasing or remains decreasing, then no local maximum or minimum is achieved there. One can examine a function's monotonicity without calculus. However, calculus is usually helpful because there are sufficient conditions that guarantee the monotonicity properties above, and these conditions apply to the vast majority of functions one would encounter. Stated precisely, suppose that f is a real-valued function defined on some open interval containing the point x, and suppose further that f is continuous at x. Note that in the first case, f is not required to be strictly increasing or strictly decreasing to the left or right of x, while in the last case, f is required to be strictly increasing or strictly decreasing. The reason is that in the definition of local maximum and minimum, the inequality is not required to be strict: e.g. every value of a constant function is considered both a local maximum and a local minimum. The first-derivative test depends on the "increasing–decreasing test", which is itself ultimately a consequence of the mean value theorem.
It is a direct consequence of the way the derivative is defined and its connection to local decrease and increase of a function, combined with the previous section. Suppose f is a real-valued function of a real variable defined on some interval containing the critical point a. Further suppose that f is continuous at a and differentiable on some open interval containing a, except possibly at a itself. Again, corresponding to the comments in the section on monotonicity properties, note that in the first two cases, the inequality is not required to be strict, while in the third case, strict inequality is required. The first-derivative test is helpful in solving optimization problems in physics, economics, and engineering. In conjunction with the extreme value theorem, it can be used to find the absolute maximum and minimum of a real-valued function defined on a closed and bounded interval. In conjunction with other information such as concavity, inflection points, and asymptotes, it can be used to sketch the graph of a function. After establishing the critical points of a function, the second-derivative test uses the value of the second derivative at those points to determine whether such points are a local maximum or a local minimum.[1] If the function f is twice-differentiable at a critical point x (i.e. a point where f′(x) = 0), then: In the last case, Taylor's theorem may sometimes be used to determine the behavior of f near x using higher derivatives. Suppose we have f″(x) > 0 (the proof for f″(x) < 0 is analogous). By assumption, f′(x) = 0. Then

f″(x) = lim_{h→0} [f′(x+h) − f′(x)] / h = lim_{h→0} f′(x+h) / h > 0.

Thus, for h sufficiently small we get

f′(x+h) / h > 0,

which means that f′(x+h) < 0 if h < 0 (intuitively, f is decreasing as it approaches x from the left), and that f′(x+h) > 0 if h > 0 (intuitively, f is increasing as we go right from x). Now, by the first-derivative test, f has a local minimum at x.
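The first-derivative test can be checked numerically by sampling the sign of the derivative on either side of a critical point; the step size h below is an arbitrary illustrative choice:

```python
def first_derivative_test(df, c, h=1e-6):
    """Classify a critical point c of f from the sign of its
    derivative df just to the left and right of c."""
    left, right = df(c - h), df(c + h)
    if left < 0 < right:
        return "local minimum"
    if left > 0 > right:
        return "local maximum"
    return "no local extremum detected"

df = lambda x: 2 * x                        # derivative of f(x) = x^2
result = first_derivative_test(df, 0.0)     # "local minimum"
```

For f(x) = x³ the derivative 3x² is positive on both sides of 0, so the same check reports no extremum there, matching the "fails to switch" case above.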
A related but distinct use of second derivatives is to determine whether a function is concave up or concave down at a point. It does not, however, provide information about inflection points. Specifically, a twice-differentiable function f is concave up if f″(x) > 0 and concave down if f″(x) < 0. Note that if f(x) = x⁴, then x = 0 has zero second derivative, yet is not an inflection point, so the second derivative alone does not give enough information to determine whether a given point is an inflection point. The higher-order derivative test or general derivative test is able to determine whether a function's critical points are maxima, minima, or points of inflection for a wider variety of functions than the second-order derivative test. As shown below, the second-derivative test is mathematically identical to the special case of n = 1 in the higher-order derivative test. Let f be a real-valued, sufficiently differentiable function on an interval I ⊂ ℝ, let c ∈ I, and let n ≥ 1 be a natural number. Also let all the derivatives of f at c be zero up to and including the n-th derivative, but with the (n + 1)-th derivative being non-zero:

f′(c) = ⋯ = f^(n)(c) = 0 and f^(n+1)(c) ≠ 0.

There are four possibilities, the first two cases where c is an extremum, the second two where c is a (local) saddle point: Since n must be either odd or even, this analytical test classifies any stationary point of f, so long as a nonzero derivative shows up eventually. Say we want to perform the general derivative test on the function f(x) = x⁶ + 5 at the point x = 0. To do this, we calculate the derivatives of the function and then evaluate them at the point of interest until the result is nonzero.
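For polynomials this procedure can be carried out exactly by differentiating the coefficient list until a non-vanishing derivative appears; the sketch below follows the classification just stated, with m = n + 1 denoting the order of the first nonzero derivative:

```python
def poly_deriv(coeffs):
    """Derivative of a polynomial given by ascending coefficients."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def higher_order_test(coeffs, c):
    """General derivative test at a point c of a polynomial.
    Find the first non-vanishing derivative f^(m)(c), m >= 1
    (m = n + 1 in the text's notation):
      m even, f^(m)(c) > 0  -> local minimum
      m even, f^(m)(c) < 0  -> local maximum
      m odd, m > 1          -> saddle point"""
    d, m = list(coeffs), 0
    while True:
        d = poly_deriv(d)
        m += 1
        if not d:                       # constant polynomial exhausted
            return "test inconclusive (all derivatives vanish)"
        val = poly_eval(d, c)
        if val != 0:
            break
    if m == 1:
        return "not a critical point"
    if m % 2 == 0:
        return "local minimum" if val > 0 else "local maximum"
    return "saddle point"

# f(x) = x^6 + 5 at x = 0: derivatives 1..5 vanish, the 6th is positive
result = higher_order_test([5, 0, 0, 0, 0, 0, 1], 0.0)   # "local minimum"
```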
As shown above, at the point x = 0 the function x⁶ + 5 has all of its derivatives at 0 equal to 0, except for the 6th derivative, which is positive. Thus n = 5, and by the test there is a local minimum at 0. For a function of more than one variable, the second-derivative test generalizes to a test based on the eigenvalues of the function's Hessian matrix at the critical point. In particular, assuming that all second-order partial derivatives of f are continuous on a neighbourhood of a critical point x, if the eigenvalues of the Hessian at x are all positive, then x is a local minimum. If the eigenvalues are all negative, then x is a local maximum, and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second-derivative test is inconclusive.
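The multivariable version reduces to inspecting the eigenvalues of the Hessian, which can be sketched directly:

```python
import numpy as np

def hessian_test(H, tol=1e-12):
    """Second-derivative test in several variables: classify a critical
    point from the eigenvalues of the symmetric Hessian matrix H.
    An eigenvalue within tol of zero is treated as singular."""
    eig = np.linalg.eigvalsh(H)
    if np.any(np.abs(eig) <= tol):
        return "inconclusive (singular Hessian)"
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    return "saddle point"

# f(x, y) = x^2 - y^2 at the origin has Hessian diag(2, -2)
result = hessian_test(np.diag([2.0, -2.0]))   # "saddle point"
```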
https://en.wikipedia.org/wiki/Derivative_test
In mathematics, the infimum (abbreviated inf; pl.: infima) of a subset S of a partially ordered set P is the greatest element in P that is less than or equal to each element of S, if such an element exists.[1] If the infimum of S exists, it is unique, and if b is a lower bound of S, then b is less than or equal to the infimum of S. Consequently, the term greatest lower bound (abbreviated as GLB) is also commonly used.[1] The supremum (abbreviated sup; pl.: suprema) of a subset S of a partially ordered set P is the least element in P that is greater than or equal to each element of S, if such an element exists.[1] If the supremum of S exists, it is unique, and if b is an upper bound of S, then the supremum of S is less than or equal to b. Consequently, the supremum is also referred to as the least upper bound (or LUB).[1] The infimum is, in a precise sense, dual to the concept of a supremum. Infima and suprema of real numbers are common special cases that are important in analysis, and especially in Lebesgue integration. However, the general definitions remain valid in the more abstract setting of order theory, where arbitrary partially ordered sets are considered. The concepts of infimum and supremum are close to minimum and maximum, but are more useful in analysis because they better characterize special sets which may have no minimum or maximum.
For instance, the set of positive real numbers ℝ⁺ (not including 0) does not have a minimum, because any given element of ℝ⁺ could simply be divided in half, resulting in a smaller number that is still in ℝ⁺. There is, however, exactly one infimum of the positive real numbers relative to the real numbers: 0, which is smaller than all the positive real numbers and greater than any other real number which could be used as a lower bound. An infimum of a set is always and only defined relative to a superset of the set in question. For example, there is no infimum of the positive real numbers inside the positive real numbers (as their own superset), nor any infimum of the positive real numbers inside the complex numbers with positive real part. A lower bound of a subset S of a partially ordered set (P, ≤) is an element y of P such that y ≤ s for all s ∈ S. A lower bound a of S is called an infimum (or greatest lower bound, or meet) of S if every lower bound of S is less than or equal to a. Similarly, an upper bound of a subset S of a partially ordered set (P, ≤) is an element z of P such that s ≤ z for all s ∈ S. An upper bound b of S is called a supremum (or least upper bound, or join) of S if every upper bound of S is greater than or equal to b. Infima and suprema do not necessarily exist. Existence of an infimum of a subset S of P can fail if S has no lower bound at all, or if the set of lower bounds does not contain a greatest element. (An example of this is the subset {x ∈ ℚ : x² < 2} of ℚ. It has upper bounds, such as 1.5, but no supremum in ℚ.)
Consequently, partially ordered sets for which certain infima are known to exist become especially interesting. For instance, a lattice is a partially ordered set in which all nonempty finite subsets have both a supremum and an infimum, and a complete lattice is a partially ordered set in which all subsets have both a supremum and an infimum. More information on the various classes of partially ordered sets that arise from such considerations is found in the article on completeness properties. If the supremum of a subset S exists, it is unique. If S contains a greatest element, then that element is the supremum; otherwise, the supremum does not belong to S (or does not exist). Likewise, if the infimum exists, it is unique. If S contains a least element, then that element is the infimum; otherwise, the infimum does not belong to S (or does not exist). The infimum of a subset S of a partially ordered set P, assuming it exists, does not necessarily belong to S. If it does, it is a minimum or least element of S. Similarly, if the supremum of S belongs to S, it is a maximum or greatest element of S. For example, consider the set of negative real numbers (excluding zero). This set has no greatest element, since for every element of the set there is another, larger, element. For instance, for any negative real number x, there is another negative real number x/2, which is greater. On the other hand, every real number greater than or equal to zero is certainly an upper bound on this set. Hence, 0 is the least upper bound of the negative reals, so the supremum is 0. This set has a supremum but no greatest element. However, the definition of maximal and minimal elements is more general.
In particular, a set can have many maximal and minimal elements, whereas infima and suprema are unique. Whereas maxima and minima must be members of the subset that is under consideration, the infimum and supremum of a subset need not be members of that subset themselves. Finally, a partially ordered set may have many minimal upper bounds without having a least upper bound. Minimal upper bounds are those upper bounds for which there is no strictly smaller element that also is an upper bound. This does not say that each minimal upper bound is smaller than all other upper bounds; it merely is not greater. The distinction between "minimal" and "least" is only possible when the given order is not a total one. In a totally ordered set, like the real numbers, the concepts are the same. As an example, let S be the set of all finite subsets of natural numbers, and consider the partially ordered set obtained by taking all sets from S together with the set of integers ℤ and the set of positive real numbers ℝ⁺, ordered by subset inclusion as above. Then clearly both ℤ and ℝ⁺ are greater than all finite sets of natural numbers. Yet neither is ℝ⁺ smaller than ℤ, nor is the converse true: both sets are minimal upper bounds, but neither is a supremum. The least-upper-bound property is an example of the aforementioned completeness properties, one which is typical for the set of real numbers. This property is sometimes called Dedekind completeness. If an ordered set S has the property that every nonempty subset of S having an upper bound also has a least upper bound, then S is said to have the least-upper-bound property. As noted above, the set ℝ of all real numbers has the least-upper-bound property.
Similarly, the set ℤ of integers has the least-upper-bound property; if S is a nonempty subset of ℤ and there is some number n such that every element s of S is less than or equal to n, then there is a least upper bound u for S: an integer that is an upper bound for S and is less than or equal to every other upper bound for S. A well-ordered set also has the least-upper-bound property, and the empty subset also has a least upper bound: the minimum of the whole set. An example of a set that lacks the least-upper-bound property is ℚ, the set of rational numbers. Let S be the set of all rational numbers q such that q² < 2. Then S has an upper bound (1000, for example, or 6) but no least upper bound in ℚ: if we suppose p ∈ ℚ is the least upper bound, a contradiction is immediately deduced, because between any two reals x and y (including √2 and p) there exists some rational r, which itself would have to be the least upper bound (if p > √2) or a member of S greater than p (if p < √2). Another example is the hyperreals; there is no least upper bound of the set of positive infinitesimals.
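The argument that S = {q ∈ ℚ : q² < 2} has no least upper bound in ℚ can also be made constructive: from any rational upper bound p one can exhibit a strictly smaller one. The particular map p ↦ (2p + 2)/(p + 2) used below is one classical choice, and exact rational arithmetic keeps the computation honest:

```python
from fractions import Fraction

def smaller_upper_bound(p):
    """Given a rational p with p*p > 2 (hence an upper bound of
    S = {q in Q : q^2 < 2}), return a strictly smaller rational
    upper bound.  One checks algebraically that
        p'^2 - 2 = 2(p^2 - 2)/(p + 2)^2 > 0   and
        p - p'   = (p^2 - 2)/(p + 2) > 0,
    so no rational upper bound of S can be least."""
    return (2 * p + 2) / (p + 2)

p = Fraction(3, 2)            # 3/2 is an upper bound: (3/2)^2 = 9/4 > 2
p2 = smaller_upper_bound(p)   # 10/7, still an upper bound but smaller
```

Iterating the map produces rational upper bounds converging to √2 from above, which is exactly why none of them can be least.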
There is a corresponding greatest-lower-bound property; an ordered set possesses the greatest-lower-bound property if and only if it also possesses the least-upper-bound property: the least upper bound of the set of lower bounds of a set is the greatest lower bound, and the greatest lower bound of the set of upper bounds of a set is the least upper bound of the set. If in a partially ordered set P every bounded subset has a supremum, this applies also, for any set X, in the function space containing all functions from X to P, where f ≤ g if and only if f(x) ≤ g(x) for all x ∈ X. For example, it applies for real functions, and, since these can be considered special cases of functions, for real n-tuples and sequences of real numbers. The least-upper-bound property thus indicates when suprema are guaranteed to exist. In analysis, infima and suprema of subsets S of the real numbers are particularly important. For instance, the negative real numbers do not have a greatest element, and their supremum is 0 (which is not a negative real number).[1] The completeness of the real numbers implies (and is equivalent to) the fact that any bounded nonempty subset S of the real numbers has an infimum and a supremum.
If S is not bounded below, one often formally writes inf S = −∞. If S is empty, one writes inf S = +∞. If A is any set of real numbers, then A ≠ ∅ if and only if sup A ≥ inf A, and otherwise −∞ = sup ∅ < inf ∅ = ∞.[2]

Set inclusion: If A ⊆ B are sets of real numbers, then inf A ≥ inf B (if A = ∅ this reads as inf B ≤ ∞) and sup A ≤ sup B.

Image under functions: If f : ℝ → ℝ is a nondecreasing function, then f(inf S) ≤ inf f(S) and sup f(S) ≤ f(sup S), where the image is defined as f(S) := {f(s) : s ∈ S}.

Identifying infima and suprema: If the infimum of A exists (that is, inf A is a real number) and if p is any real number, then p = inf A if and only if p is a lower bound and for every ε > 0 there is an a_ε ∈ A with a_ε < p + ε. Similarly, if sup A is a real number and if p is any real number, then p = sup A if and only if p is an upper bound and for every ε > 0 there is an a_ε ∈ A with a_ε > p − ε.

Relation to limits of sequences: If S ≠ ∅ is any non-empty set of real numbers, then there always exists a non-decreasing sequence s₁ ≤ s₂ ≤ ⋯ in S such that lim_{n→∞} s_n = sup S. Similarly, there will exist a (possibly different) non-increasing sequence s₁ ≥ s₂ ≥ ⋯ in S such that lim_{n→∞} s_n = inf S. In particular, the infimum and supremum of a set belong to its closure: if inf S ∈ ℝ then inf S ∈ S̄, and if sup S ∈ ℝ then sup S ∈ S̄.

Expressing the infimum and supremum as a limit of such a sequence allows theorems from various branches of mathematics to be applied. Consider for example the well-known fact from topology that if f is a continuous function and s₁, s₂, … is a sequence of points in its domain that converges to a point p, then f(s₁), f(s₂), … necessarily converges to f(p). It implies that if lim_{n→∞} s_n = sup S is a real number (where all s₁, s₂, … are in S) and if f is a continuous function whose domain contains S and sup S, then f(sup S) = f(lim_{n→∞} s_n) = lim_{n→∞} f(s_n), which (for instance) guarantees[note 1] that f(sup S) is an adherent point of the set f(S) := {f(s) : s ∈ S}. If, in addition to what has been assumed, the continuous function f is also an increasing or non-decreasing function, then it is even possible to conclude that sup f(S) = f(sup S). This may be applied, for instance, to conclude that
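On a finite set the ε-characterization of the supremum can be checked directly. Since a finite set's supremum is just its maximum, the sketch below (with an arbitrary choice of test ε values) is purely an illustration of the criterion:

```python
def is_supremum(p, A, epsilons=(1.0, 0.1, 0.01)):
    """Check the epsilon-characterization of sup on a finite set A:
    p is an upper bound of A, and for each tested eps > 0 some
    element of A exceeds p - eps.  On a finite set sup A = max A,
    so this only illustrates the criterion; the tested eps values
    are an arbitrary illustrative choice."""
    is_upper_bound = all(a <= p for a in A)
    approachable = all(any(a > p - eps for a in A) for eps in epsilons)
    return is_upper_bound and approachable

# A finite sample of {1 - 1/n : n >= 1}, whose supremum over all n is 1
A = [1 - 1 / n for n in range(1, 1000)]
```

Here `is_supremum(1.0, A)` holds because 1 is an upper bound and elements of A come arbitrarily close to it, while any p > 1 fails the approachability half of the criterion.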
whenever g is a real- (or complex-) valued function with domain Ω ≠ ∅ whose sup norm ‖g‖_∞ := sup_{x∈Ω} |g(x)| is finite, then for every non-negative real number q, ‖g‖_∞^q = (sup_{x∈Ω} |g(x)|)^q = sup_{x∈Ω} (|g(x)|^q), since the map f : [0, ∞) → ℝ defined by f(x) = x^q is a continuous non-decreasing function whose domain [0, ∞) always contains S := {|g(x)| : x ∈ Ω}, with sup S = ‖g‖_∞ by definition. Although this discussion focused on sup, similar conclusions can be reached for inf with appropriate changes (such as requiring that f be non-increasing rather than non-decreasing). Other norms defined in terms of sup or inf include the weak L^{p,w} space norms (for 1 ≤ p < ∞), the norm on the Lebesgue space L^∞(Ω, μ), and operator norms. Monotone sequences in S that converge to sup S (or to inf S) can also be used to help prove many of the formulas given below, since addition and multiplication of real numbers are continuous operations. The following formulas depend on a notation that conveniently generalizes arithmetic operations on sets. Throughout, A, B ⊆ ℝ are sets of real numbers.
Sum of sets TheMinkowski sumof two setsA{\displaystyle A}andB{\displaystyle B}of real numbers is the setA+B:={a+b:a∈A,b∈B}{\displaystyle A+B~:=~\{a+b:a\in A,b\in B\}}consisting of all possible arithmetic sums of pairs of numbers, one from each set. The infimum and supremum of the Minkowski sum satisfy, ifA≠∅≠B{\displaystyle A\neq \varnothing \neq B}inf(A+B)=(infA)+(infB){\displaystyle \inf(A+B)=(\inf A)+(\inf B)}andsup(A+B)=(supA)+(supB).{\displaystyle \sup(A+B)=(\sup A)+(\sup B).} Product of sets The multiplication of two setsA{\displaystyle A}andB{\displaystyle B}of real numbers is defined similarly to their Minkowski sum:A⋅B:={a⋅b:a∈A,b∈B}.{\displaystyle A\cdot B~:=~\{a\cdot b:a\in A,b\in B\}.} IfA{\displaystyle A}andB{\displaystyle B}are nonempty sets of positive real numbers theninf(A⋅B)=(infA)⋅(infB){\displaystyle \inf(A\cdot B)=(\inf A)\cdot (\inf B)}and similarly for supremasup(A⋅B)=(supA)⋅(supB).{\displaystyle \sup(A\cdot B)=(\sup A)\cdot (\sup B).}[3] Scalar product of a set The product of a real numberr{\displaystyle r}and a setB{\displaystyle B}of real numbers is the setrB:={r⋅b:b∈B}.{\displaystyle rB~:=~\{r\cdot b:b\in B\}.} Ifr>0{\displaystyle r>0}theninf(r⋅A)=r(infA)andsup(r⋅A)=r(supA),{\displaystyle \inf(r\cdot A)=r(\inf A)\quad {\text{ and }}\quad \sup(r\cdot A)=r(\sup A),}while ifr<0{\displaystyle r<0}theninf(r⋅A)=r(supA)andsup(r⋅A)=r(infA).{\displaystyle \inf(r\cdot A)=r(\sup A)\quad {\text{ and }}\quad \sup(r\cdot A)=r(\inf A).}In the caser=0{\displaystyle r=0}, one has, ifA≠∅{\displaystyle A\neq \varnothing }inf(0⋅A)=0andsup(0⋅A)=0{\displaystyle \inf(0\cdot A)=0\quad {\text{ and }}\quad \sup(0\cdot A)=0}Usingr=−1{\displaystyle r=-1}and the notation−A:=(−1)A={−a:a∈A},{\textstyle -A:=(-1)A=\{-a:a\in A\},}it follows that,inf(−A)=−supAandsup(−A)=−infA.{\displaystyle \inf(-A)=-\sup A\quad {\text{ and }}\quad \sup(-A)=-\inf A.} Multiplicative inverse of a set For any setS{\displaystyle S}that does not contain0,{\displaystyle 
0,}let1S:={1s:s∈S}.{\displaystyle {\frac {1}{S}}~:=\;\left\{{\tfrac {1}{s}}:s\in S\right\}.} IfS⊆(0,∞){\displaystyle S\subseteq (0,\infty )}is non-empty then1supS=inf1S{\displaystyle {\frac {1}{\sup _{}S}}~=~\inf _{}{\frac {1}{S}}}where this equation also holds whensupS=∞{\displaystyle \sup _{}S=\infty }if the definition1∞:=0{\displaystyle {\frac {1}{\infty }}:=0}is used.[note 2]This equality may alternatively be written as1sups∈Ss=infs∈S1s.{\displaystyle {\frac {1}{\displaystyle \sup _{s\in S}s}}=\inf _{s\in S}{\tfrac {1}{s}}.}Moreover,infS=0{\displaystyle \inf _{}S=0}if and only ifsup1S=∞,{\displaystyle \sup _{}{\tfrac {1}{S}}=\infty ,}where if[note 2]infS>0,{\displaystyle \inf _{}S>0,}then1infS=sup1S.{\displaystyle {\tfrac {1}{\inf _{}S}}=\sup _{}{\tfrac {1}{S}}.} If one denotes byPop{\displaystyle P^{\operatorname {op} }}the partially-ordered setP{\displaystyle P}with theopposite order relation; that is, for allxandy,{\displaystyle x{\text{ and }}y,}declare:x≤yinPopif and only ifx≥yinP,{\displaystyle x\leq y{\text{ in }}P^{\operatorname {op} }\quad {\text{ if and only if }}\quad x\geq y{\text{ in }}P,}then infimum of a subsetS{\displaystyle S}inP{\displaystyle P}equals the supremum ofS{\displaystyle S}inPop{\displaystyle P^{\operatorname {op} }}and vice versa. For subsets of the real numbers, another kind of duality holds:infS=−sup(−S),{\displaystyle \inf S=-\sup(-S),}where−S:={−s:s∈S}.{\displaystyle -S:=\{-s~:~s\in S\}.} In the last example, the supremum of a set ofrationalsisirrational, which means that the rationals areincomplete. 
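The inversion and negation dualities above can likewise be illustrated on a finite set of positive reals (a sketch only; the interesting cases sup S = ∞ and inf S = 0 do not arise for finite sets):

```python
S = {0.5, 2.0, 8.0}           # nonempty finite subset of (0, inf)
inv_S = {1 / s for s in S}
neg_S = {-s for s in S}

# 1 / sup S = inf (1/S)  and  1 / inf S = sup (1/S)
assert 1 / max(S) == min(inv_S)
assert 1 / min(S) == max(inv_S)

# inf S = -sup(-S)  and  sup S = -inf(-S)
assert min(S) == -max(neg_S)
assert max(S) == -min(neg_S)
```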
One basic property of the supremum issup{f(t)+g(t):t∈A}≤sup{f(t):t∈A}+sup{g(t):t∈A}{\displaystyle \sup\{f(t)+g(t):t\in A\}~\leq ~\sup\{f(t):t\in A\}+\sup\{g(t):t\in A\}}for anyfunctionalsf{\displaystyle f}andg.{\displaystyle g.} The supremum of a subsetS{\displaystyle S}of(N,∣){\displaystyle (\mathbb {N} ,\mid \,)}where∣{\displaystyle \,\mid \,}denotes "divides", is thelowest common multipleof the elements ofS.{\displaystyle S.} The supremum of a setS{\displaystyle S}containing subsets of some setX{\displaystyle X}is theunionof the subsets when considering the partially ordered set(P(X),⊆){\displaystyle (P(X),\subseteq )}, whereP{\displaystyle P}is thepower setofX{\displaystyle X}and⊆{\displaystyle \,\subseteq \,}issubset.
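The last two examples can be made concrete: in (ℕ, ∣) the join of a finite set is its least common multiple, and in (P(X), ⊆) the join of a family is its union. A short sketch (the `lcm` helper is written out for clarity; Python 3.9+ also ships `math.lcm`):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

# In (N, |) the supremum (least upper bound under divisibility) is the lcm:
S = [4, 6, 10]
sup_S = reduce(lcm, S)
assert sup_S == 60
assert all(sup_S % s == 0 for s in S)          # an upper bound under "divides"

# In (P(X), subset) the supremum of a family of subsets is their union:
family = [{1, 2}, {2, 3}, {5}]
join = set().union(*family)
assert all(s <= join for s in family)          # an upper bound under "subset"
```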
https://en.wikipedia.org/wiki/Infimum_and_supremum
Inmathematics, thelimit inferiorandlimit superiorof asequencecan be thought of aslimiting(that is, eventual and extreme) bounds on the sequence. They can be thought of in a similar fashion for afunction(seelimit of a function). For aset, they are theinfimum and supremumof the set'slimit points, respectively. In general, when there are multiple objects around which a sequence, function, or set accumulates, the inferior and superior limits extract the smallest and largest of them; the type of object and the measure of size is context-dependent, but the notion of extreme limits is invariant. Limit inferior is also calledinfimum limit,limit infimum,liminf,inferior limit,lower limit, orinner limit; limit superior is also known assupremum limit,limit supremum,limsup,superior limit,upper limit, orouter limit. The limit inferior of a sequence(xn){\displaystyle (x_{n})}is denoted bylim infn→∞xnorlim_n→∞⁡xn,{\displaystyle \liminf _{n\to \infty }x_{n}\quad {\text{or}}\quad \varliminf _{n\to \infty }x_{n},}and the limit superior of a sequence(xn){\displaystyle (x_{n})}is denoted bylim supn→∞xnorlim¯n→∞⁡xn.{\displaystyle \limsup _{n\to \infty }x_{n}\quad {\text{or}}\quad \varlimsup _{n\to \infty }x_{n}.} Thelimit inferiorof a sequence (xn) is defined bylim infn→∞xn:=limn→∞(infm≥nxm){\displaystyle \liminf _{n\to \infty }x_{n}:=\lim _{n\to \infty }\!{\Big (}\inf _{m\geq n}x_{m}{\Big )}}orlim infn→∞xn:=supn≥0infm≥nxm=sup{inf{xm:m≥n}:n≥0}.{\displaystyle \liminf _{n\to \infty }x_{n}:=\sup _{n\geq 0}\,\inf _{m\geq n}x_{m}=\sup \,\{\,\inf \,\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.} Similarly, thelimit superiorof (xn) is defined bylim supn→∞xn:=limn→∞(supm≥nxm){\displaystyle \limsup _{n\to \infty }x_{n}:=\lim _{n\to \infty }\!{\Big (}\sup _{m\geq n}x_{m}{\Big )}}orlim supn→∞xn:=infn≥0supm≥nxm=inf{sup{xm:m≥n}:n≥0}.{\displaystyle \limsup _{n\to \infty }x_{n}:=\inf _{n\geq 0}\,\sup _{m\geq n}x_{m}=\inf \,\{\,\sup \,\{\,x_{m}:m\geq n\,\}:n\geq 0\,\}.} Alternatively, the notationslim_n→∞⁡xn:=lim 
infn→∞xn{\displaystyle \varliminf _{n\to \infty }x_{n}:=\liminf _{n\to \infty }x_{n}}andlim¯n→∞⁡xn:=lim supn→∞xn{\displaystyle \varlimsup _{n\to \infty }x_{n}:=\limsup _{n\to \infty }x_{n}}are sometimes used. The limits superior and inferior can equivalently be defined using the concept of subsequential limits of the sequence(xn){\displaystyle (x_{n})}.[1]An elementξ{\displaystyle \xi }of theextended real numbersR¯{\displaystyle {\overline {\mathbb {R} }}}is asubsequential limitof(xn){\displaystyle (x_{n})}if there exists a strictly increasing sequence ofnatural numbers(nk){\displaystyle (n_{k})}such thatξ=limk→∞xnk{\displaystyle \xi =\lim _{k\to \infty }x_{n_{k}}}. IfE⊆R¯{\displaystyle E\subseteq {\overline {\mathbb {R} }}}is the set of all subsequential limits of(xn){\displaystyle (x_{n})}, thenlim supn→∞xn=supE{\displaystyle \limsup _{n\to \infty }x_{n}=\sup E}andlim infn→∞xn=infE.{\displaystyle \liminf _{n\to \infty }x_{n}=\inf E.} If the terms in the sequence arereal numbers, the limit superior and limit inferior always exist, as the real numbers together with ±∞ (i.e. theextended real number line) arecomplete. More generally, these definitions make sense in anypartially ordered set, provided thesupremaandinfimaexist, such as in acomplete lattice. Whenever the ordinary limit exists, the limit inferior and limit superior are both equal to it; therefore, each can be considered a generalization of the ordinary limit which is primarily interesting in cases where the limit doesnotexist. Whenever lim infxnand lim supxnboth exist, we havelim infn→∞xn≤lim supn→∞xn.{\displaystyle \liminf _{n\to \infty }x_{n}\leq \limsup _{n\to \infty }x_{n}.} The limits inferior and superior are related tobig-O notationin that they bound a sequence only "in the limit"; the sequence may exceed the bound. However, with big-O notation the sequence can only exceed the bound in a finite prefix of the sequence, whereas the limit superior of a sequence like e−nmay actually be less than all elements of the sequence.
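The tail-extrema definition above lends itself to direct computation. The sketch below (hypothetical helper `tail_extrema`) truncates the sequence x_n = (−1)^n (1 + 1/(n+1)), whose subsequential limits are −1 and +1; since the tail infima and suprema are monotone, a late tail of a finite truncation approximates the liminf and limsup:

```python
def tail_extrema(x, N):
    # inf and sup of the tail {x_m : m >= N}; for large N these approximate
    # liminf and limsup of the full infinite sequence.
    tail = x[N:]
    return min(tail), max(tail)

# x_n = (-1)^n * (1 + 1/(n+1)): subsequential limits are -1 and +1.
x = [(-1) ** n * (1 + 1 / (n + 1)) for n in range(10000)]

lo, hi = tail_extrema(x, 9000)
assert abs(lo - (-1)) < 1e-3   # liminf = -1
assert abs(hi - 1) < 1e-3      # limsup = +1
```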
The only promise made is that some tail of the sequence can be bounded above by the limit superior plus an arbitrarily small positive constant, and bounded below by the limit inferior minus an arbitrarily small positive constant. The limit superior and limit inferior of a sequence are a special case of those of a function (see below). Inmathematical analysis, limit superior and limit inferior are important tools for studying sequences ofreal numbers. Since the supremum and infimum of anunbounded setof real numbers may not exist (the reals are not a complete lattice), it is convenient to consider sequences in theaffinely extended real number system: we add the positive and negative infinities to the real line to give the completetotally ordered set[−∞,∞], which is a complete lattice. Consider a sequence(xn){\displaystyle (x_{n})}consisting of real numbers. Assume that the limit superior and limit inferior are real numbers (so, not infinite). The relationship of limit inferior and limit superior for sequences of real numbers is as follows:lim supn→∞(−xn)=−lim infn→∞xn{\displaystyle \limsup _{n\to \infty }\left(-x_{n}\right)=-\liminf _{n\to \infty }x_{n}} As mentioned earlier, it is convenient to extendR{\displaystyle \mathbb {R} }to[−∞,∞].{\displaystyle [-\infty ,\infty ].}Then,(xn){\displaystyle \left(x_{n}\right)}in[−∞,∞]{\displaystyle [-\infty ,\infty ]}convergesif and only iflim infn→∞xn=lim supn→∞xn{\displaystyle \liminf _{n\to \infty }x_{n}=\limsup _{n\to \infty }x_{n}}in which caselimn→∞xn{\displaystyle \lim _{n\to \infty }x_{n}}is equal to their common value. (Note that when working just inR,{\displaystyle \mathbb {R} ,}convergence to−∞{\displaystyle -\infty }or∞{\displaystyle \infty }would not be considered as convergence.) 
Since the limit inferior is at most the limit superior, the following conditions holdlim infn→∞xn=∞implieslimn→∞xn=∞,lim supn→∞xn=−∞implieslimn→∞xn=−∞.{\displaystyle {\begin{alignedat}{4}\liminf _{n\to \infty }x_{n}&=\infty &&\;\;{\text{ implies }}\;\;\lim _{n\to \infty }x_{n}=\infty ,\\[0.3ex]\limsup _{n\to \infty }x_{n}&=-\infty &&\;\;{\text{ implies }}\;\;\lim _{n\to \infty }x_{n}=-\infty .\end{alignedat}}} IfI=lim infn→∞xn{\displaystyle I=\liminf _{n\to \infty }x_{n}}andS=lim supn→∞xn{\displaystyle S=\limsup _{n\to \infty }x_{n}}, then the interval[I,S]{\displaystyle [I,S]}need not contain any of the numbersxn,{\displaystyle x_{n},}but every slight enlargement[I−ϵ,S+ϵ],{\displaystyle [I-\epsilon ,S+\epsilon ],}for arbitrarily smallϵ>0,{\displaystyle \epsilon >0,}will containxn{\displaystyle x_{n}}for all but finitely many indicesn.{\displaystyle n.}In fact, the interval[I,S]{\displaystyle [I,S]}is the smallest closed interval with this property. We can formalize this property like this: there existsubsequencesxkn{\displaystyle x_{k_{n}}}andxhn{\displaystyle x_{h_{n}}}ofxn{\displaystyle x_{n}}(wherekn{\displaystyle k_{n}}andhn{\displaystyle h_{n}}are increasing) for which we havexhn<lim infn→∞xn+ϵandxkn>lim supn→∞xn−ϵ{\displaystyle x_{h_{n}}<\liminf _{n\to \infty }x_{n}+\epsilon \quad {\text{ and }}\quad x_{k_{n}}>\limsup _{n\to \infty }x_{n}-\epsilon } On the other hand, there exists ann0∈N{\displaystyle n_{0}\in \mathbb {N} }so that for alln≥n0{\displaystyle n\geq n_{0}}lim infn→∞xn−ϵ<xn<lim supn→∞xn+ϵ{\displaystyle \liminf _{n\to \infty }x_{n}-\epsilon <x_{n}<\limsup _{n\to \infty }x_{n}+\epsilon } To recapitulate: Conversely, it can also be shown that: In general,infnxn≤lim infn→∞xn≤lim supn→∞xn≤supnxn.{\displaystyle \inf _{n}x_{n}\leq \liminf _{n\to \infty }x_{n}\leq \limsup _{n\to \infty }x_{n}\leq \sup _{n}x_{n}.}The liminf and limsup of a sequence are respectively the smallest and greatestcluster points.[3] Analogously, the limit inferior
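The chain inf ≤ liminf ≤ limsup ≤ sup can be observed numerically on a finite truncation (a sketch: the tail minimum and maximum at a large index stand in for the true liminf and limsup, here −1 and +1 for x_n = (−1)^n + 1/(n+1)):

```python
x = [(-1) ** n + 1 / (n + 1) for n in range(5000)]

inf_x, sup_x = min(x), max(x)
N = 4000
liminf_approx = min(x[N:])    # approximates liminf (= -1 here)
limsup_approx = max(x[N:])    # approximates limsup (= +1 here)

assert inf_x <= liminf_approx <= limsup_approx <= sup_x
```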
satisfiessuperadditivity:lim infn→∞(an+bn)≥lim infn→∞an+lim infn→∞bn.{\displaystyle \liminf _{n\to \infty }\,(a_{n}+b_{n})\geq \liminf _{n\to \infty }a_{n}+\ \liminf _{n\to \infty }b_{n}.}In the particular case that one of the sequences actually converges, sayan→a,{\displaystyle a_{n}\to a,}then the inequalities above become equalities (withlim supn→∞an{\displaystyle \limsup _{n\to \infty }a_{n}}orlim infn→∞an{\displaystyle \liminf _{n\to \infty }a_{n}}being replaced bya{\displaystyle a}). For sequences of non-negative numbers, the analogous product inequalitieslim supn→∞(anbn)≤(lim supn→∞an)(lim supn→∞bn)andlim infn→∞(anbn)≥(lim infn→∞an)(lim infn→∞bn){\displaystyle \limsup _{n\to \infty }(a_{n}b_{n})\leq {\Big (}\limsup _{n\to \infty }a_{n}{\Big )}{\Big (}\limsup _{n\to \infty }b_{n}{\Big )}\quad {\text{ and }}\quad \liminf _{n\to \infty }(a_{n}b_{n})\geq {\Big (}\liminf _{n\to \infty }a_{n}{\Big )}{\Big (}\liminf _{n\to \infty }b_{n}{\Big )}}hold whenever the right-hand side is not of the form0⋅∞.{\displaystyle 0\cdot \infty .} Iflimn→∞an=A{\displaystyle \lim _{n\to \infty }a_{n}=A}exists (including the caseA=+∞{\displaystyle A=+\infty }) andB=lim supn→∞bn,{\displaystyle B=\limsup _{n\to \infty }b_{n},}thenlim supn→∞(anbn)=AB{\displaystyle \limsup _{n\to \infty }\left(a_{n}b_{n}\right)=AB}provided thatAB{\displaystyle AB}is not of the form0⋅∞.{\displaystyle 0\cdot \infty .} Assume that a function is defined from asubsetof the real numbers to the real numbers. As in the case for sequences, the limit inferior and limit superior are always well-defined if we allow the values +∞ and −∞; in fact, if both agree then the limit exists and is equal to their common value (again possibly including the infinities). For example, givenf(x)=sin⁡(1/x){\displaystyle f(x)=\sin(1/x)}, we havelim supx→0f(x)=1{\displaystyle \limsup _{x\to 0}f(x)=1}andlim infx→0f(x)=−1{\displaystyle \liminf _{x\to 0}f(x)=-1}. The difference between the two is a rough measure of how "wildly" the function oscillates, and in light of this fact, it is called theoscillationoffat 0. This idea of oscillation is sufficient to, for example, characterizeRiemann-integrablefunctions ascontinuousexcept on a set ofmeasure zero.[5]Note that points of nonzero oscillation (i.e., points at whichfis "badly behaved") are discontinuities which, unless they make up a set of measure zero, are confined to a negligible set.
There is a notion of limsup and liminf for functions defined on ametric spacewhose relationship to limits of real-valued functions mirrors that of the relation between the limsup, liminf, and the limit of a real sequence. Take a metric spaceX{\displaystyle X}, a subspaceE{\displaystyle E}contained inX{\displaystyle X}, and a functionf:E→R{\displaystyle f:E\to \mathbb {R} }. Define, for anylimit pointa{\displaystyle a}ofE{\displaystyle E}, lim supx→af(x)=limε→0(sup{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \limsup _{x\to a}f(x)=\lim _{\varepsilon \to 0}\left(\sup \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)}and lim infx→af(x)=limε→0(inf{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \liminf _{x\to a}f(x)=\lim _{\varepsilon \to 0}\left(\inf \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)} whereB(a,ε){\displaystyle B(a,\varepsilon )}denotes themetric ballof radiusε{\displaystyle \varepsilon }abouta{\displaystyle a}. Note that asεshrinks, the supremum of the function over the ball isnon-increasing(strictly decreasing or remaining the same), so we have lim supx→af(x)=infε>0(sup{f(x):x∈E∩B(a,ε)∖{a}}){\displaystyle \limsup _{x\to a}f(x)=\inf _{\varepsilon >0}\left(\sup \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right)}and similarlylim infx→af(x)=supε>0(inf{f(x):x∈E∩B(a,ε)∖{a}}).{\displaystyle \liminf _{x\to a}f(x)=\sup _{\varepsilon >0}\left(\inf \,\{f(x):x\in E\cap B(a,\varepsilon )\setminus \{a\}\}\right).} This finally motivates the definitions for generaltopological spaces. TakeX,Eandaas before, but now letXbe a topological space. In this case, we replace metric balls withneighborhoods: (there is a way to write the formula using "lim" usingnetsand theneighborhood filter). This version is often useful in discussions ofsemi-continuitywhich crop up in analysis quite often. 
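The punctured-ball definition can be explored numerically for the sin(1/x) example mentioned earlier. The sketch below (hypothetical helper `punctured_ball_extrema`) approximates the sup and inf of f over (0, ε); it substitutes x = 1/t so that a uniform grid in t covers many full oscillations, and observes that shrinking ε leaves the extrema pinned near ±1:

```python
import math

def punctured_ball_extrema(f, eps, samples=20000):
    # Approximate sup and inf of f over (0, eps): parametrize x = 1/t with
    # t >= 1/eps, so a dense grid in t spans many oscillations of sin(1/x).
    t0 = 1 / eps
    ts = [t0 + 20 * math.pi * k / samples for k in range(1, samples + 1)]
    vals = [f(1 / t) for t in ts]
    return max(vals), min(vals)

f = lambda x: math.sin(1 / x)

# As eps shrinks, the sup stays near +1 and the inf near -1, reflecting
# limsup_{x->0} f = 1 and liminf_{x->0} f = -1: the oscillation at 0 is 2.
for eps in (1.0, 0.1, 0.001):
    hi, lo = punctured_ball_extrema(f, eps)
    assert hi > 0.999 and lo < -0.999
```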
An interesting note is that this version subsumes the sequential version by considering sequences as functions from the natural numbers as a topological subspace of the extended real line, into the space (the closure ofNin [−∞,∞], theextended real number line, isN∪ {∞}.) Thepower set℘(X) of asetXis acomplete latticethat is ordered byset inclusion, and so the supremum and infimum of any set of subsets (in terms of set inclusion) always exist. In particular, every subsetYofXis bounded above byXand below by theempty set∅ because ∅ ⊆Y⊆X. Hence, it is possible (and sometimes useful) to consider superior and inferior limits of sequences in ℘(X) (i.e., sequences of subsets ofX). There are two common ways to define the limit of sequences of sets. In both cases: The difference between the two definitions involves how thetopology(i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when thediscrete metricis used to induce the topology onX. A sequence of sets in ametrizable spaceX{\displaystyle X}approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if(Xn){\displaystyle (X_{n})}is a sequence of subsets ofX,{\displaystyle X,}then: The limitlimXn{\displaystyle \lim X_{n}}exists if and only iflim infXn{\displaystyle \liminf X_{n}}andlim supXn{\displaystyle \limsup X_{n}}agree, in which caselimXn=lim supXn=lim infXn.{\displaystyle \lim X_{n}=\limsup X_{n}=\liminf X_{n}.}[6]The outer and inner limits should not be confused with theset-theoretic limitssuperior and inferior, as the latter sets are not sensitive to the topological structure of the space. This is the definition used inmeasure theoryandprobability. Further discussion and examples from the set-theoretic point of view, as opposed to the topological point of view discussed below, are atset-theoretic limit. 
By this definition, a sequence of sets approaches a limiting set when the limiting set includes elements which are in all except finitely many sets of the sequenceanddoes not include elements which are in all except finitely many complements of sets of the sequence. That is, this case specializes the general definition when the topology on setXis induced from thediscrete metric. Specifically, for pointsx,y∈X, the discrete metric is defined by under which a sequence of points (xk) converges to pointx∈Xif and only ifxk=xfor all but finitely manyk. Therefore,if the limit set existsit contains the points and only the points which are in all except finitely many of the sets of the sequence. Since convergence in the discrete metric is the strictest form of convergence (i.e., requires the most), this definition of a limit set is the strictest possible. If (Xn) is a sequence of subsets ofX, then the following always exist: Observe thatx∈ lim supXnif and only ifx∉ lim infXnc. In this sense, the sequence has a limit so long as every point inXeither appears in all except finitely manyXnor appears in all except finitely manyXnc.[7] Using the standard parlance of set theory,set inclusionprovides apartial orderingon the collection of all subsets ofXthat allows set intersection to generate a greatest lower bound and set union to generate a least upper bound. Thus, the infimum ormeetof a collection of subsets is the greatest lower bound while the supremum orjoinis the least upper bound. In this context, the inner limit, lim infXn, is thelargest meeting of tailsof the sequence, and the outer limit, lim supXn, is thesmallest joining of tailsof the sequence. The following makes this precise. The following are several set convergence examples. They have been broken into sections with respect to the metric used to induce the topology on setX. The above definitions are inadequate for many technical applications. 
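Under the discrete metric the inner and outer limits reduce to the standard set-theoretic formulas lim sup Xₙ = ∩ₙ ∪_{m≥n} Xₘ (points in infinitely many Xₙ) and lim inf Xₙ = ∪ₙ ∩_{m≥n} Xₘ (points in all but finitely many Xₙ). The sketch below computes both on a finite truncation of an infinite sequence; as a heuristic it scans only the first half of the truncation, so that the artificially short final tails do not distort the result (this assumes the sequence's tail behavior is already visible in the truncation):

```python
def limsup_sets(seq):
    # Intersection over n of the union of the tail {X_m : m >= n}.
    half = max(1, len(seq) // 2)
    out = set().union(*seq)
    for n in range(half):
        out &= set().union(*seq[n:])
    return out

def liminf_sets(seq):
    # Union over n of the intersection of the tail {X_m : m >= n}.
    half = max(1, len(seq) // 2)
    def tail_inter(n):
        acc = set(seq[n])
        for s in seq[n + 1:]:
            acc &= s
        return acc
    return set().union(*(tail_inter(n) for n in range(half)))

# X_n alternates between {0, 1} and {0, 2} (a finite truncation of the
# infinite alternating sequence).
X = [{0, 1} if n % 2 == 0 else {0, 2} for n in range(10)]
assert liminf_sets(X) == {0}        # only 0 is in all but finitely many X_n
assert limsup_sets(X) == {0, 1, 2}  # 0, 1 and 2 each occur infinitely often
```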
In fact, the definitions above are specializations of the following definitions. The limit inferior of a setX⊆Yis theinfimumof all of thelimit pointsof the set. That is, Similarly, the limit superior ofXis thesupremumof all of the limit points of the set. That is, Note that the setXneeds to be defined as a subset of apartially ordered setYthat is also atopological spacein order for these definitions to make sense. Moreover, it has to be acomplete latticeso that the suprema and infima always exist. In that case every set has a limit superior and a limit inferior. Also note that the limit inferior and the limit superior of a set do not have to be elements of the set. Take atopological spaceXand afilter baseBin that space. The set of allcluster pointsfor that filter base is given by whereB¯0{\displaystyle {\overline {B}}_{0}}is theclosureofB0{\displaystyle B_{0}}. This is clearly aclosed setand is similar to the set of limit points of a set. Assume thatXis also apartially ordered set. The limit superior of the filter baseBis defined as when that supremum exists. WhenXhas atotal order, is acomplete latticeand has theorder topology, Similarly, the limit inferior of the filter baseBis defined as when that infimum exists; ifXis totally ordered, is a complete lattice, and has the order topology, then If the limit inferior and limit superior agree, then there must be exactly one cluster point and the limit of the filter base is equal to this unique cluster point. Note that filter bases are generalizations ofnets, which are generalizations ofsequences. Therefore, these definitions give the limit inferior andlimit superiorof any net (and thus any sequence) as well. For example, take topological spaceX{\displaystyle X}and the net(xα)α∈A{\displaystyle (x_{\alpha })_{\alpha \in A}}, where(A,≤){\displaystyle (A,{\leq })}is adirected setandxα∈X{\displaystyle x_{\alpha }\in X}for allα∈A{\displaystyle \alpha \in A}. 
The filter base ("of tails") generated by this net isB{\displaystyle B}defined by Therefore, the limit inferior and limit superior of the net are equal to the limit superior and limit inferior ofB{\displaystyle B}respectively. Similarly, for topological spaceX{\displaystyle X}, take the sequence(xn){\displaystyle (x_{n})}wherexn∈X{\displaystyle x_{n}\in X}for anyn∈N{\displaystyle n\in \mathbb {N} }. The filter base ("of tails") generated by this sequence isC{\displaystyle C}defined by Therefore, the limit inferior and limit superior of the sequence are equal to the limit superior and limit inferior ofC{\displaystyle C}respectively.
https://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior
Inmathematics, themaximum-minimums identityis a relation between the maximum element of asetSofnnumbers and the minima of the 2n−1{\displaystyle 2^{n}-1}non-emptysubsetsofS. LetS= {x1,x2, ...,xn}. Theidentitystates thatmax{x1,…,xn}=∑i=1nxi−∑i<jmin{xi,xj}+∑i<j<kmin{xi,xj,xk}−⋯+(−1)n+1min{x1,…,xn},{\displaystyle \max\{x_{1},\ldots ,x_{n}\}=\sum _{i=1}^{n}x_{i}-\sum _{i<j}\min\{x_{i},x_{j}\}+\sum _{i<j<k}\min\{x_{i},x_{j},x_{k}\}-\cdots +(-1)^{n+1}\min\{x_{1},\ldots ,x_{n}\},}or conversely, with the roles of maxima and minima interchanged,min{x1,…,xn}=∑i=1nxi−∑i<jmax{xi,xj}+⋯+(−1)n+1max{x1,…,xn}.{\displaystyle \min\{x_{1},\ldots ,x_{n}\}=\sum _{i=1}^{n}x_{i}-\sum _{i<j}\max\{x_{i},x_{j}\}+\cdots +(-1)^{n+1}\max\{x_{1},\ldots ,x_{n}\}.} For a probabilistic proof, see the reference. Ross, Sheldon M. (2020).A First Course in Probability(Tenth, global ed.). Harlow, United Kingdom: Pearson. pp. 331–333.ISBN978-1-292-26920-7.
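The identity is an inclusion–exclusion over all non-empty subsets, so it can be verified by brute force for a small set (a sketch; the helper name `max_via_minima` is illustrative, and the enumeration costs 2^n − 1 terms):

```python
from itertools import combinations

def max_via_minima(xs):
    # Maximum-minimums identity: max S equals the alternating sum over all
    # nonempty subsets T of S of (-1)^(|T|+1) * min T.
    n = len(xs)
    total = 0.0
    for k in range(1, n + 1):
        for T in combinations(xs, k):
            total += (-1) ** (k + 1) * min(T)
    return total

xs = [3.0, 1.5, 4.0, 1.0, 5.5]
assert abs(max_via_minima(xs) - max(xs)) < 1e-9
```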
https://en.wikipedia.org/wiki/Maximum-minimums_identity
Inclassical mechanics, aparticleis inmechanical equilibriumif thenet forceon that particle is zero.[1]: 39By extension, aphysical systemmade up of many parts is in mechanical equilibrium if thenet forceon each of its individual parts is zero.[1]: 45–46[2] In addition to defining mechanical equilibrium in terms of force, there are many alternative definitions for mechanical equilibrium which are all mathematically equivalent. More generally inconservative systems, equilibrium is established at a point inconfiguration spacewhere thegradientof thepotential energywith respect to thegeneralized coordinatesis zero. If a particle in equilibrium has zero velocity, that particle is instatic equilibrium.[3][4]Since all particles in equilibrium have constant velocity, it is always possible to find aninertial reference framein which the particle isstationarywith respect to the frame. An important property of systems at mechanical equilibrium is theirstability. In a function which describes the system's potential energy, the system's equilibria can be determined usingcalculus. A system is in mechanical equilibrium at thecritical pointsof the function describing the system'spotential energy. These points can be located using the fact that thederivativeof the function is zero at these points. To determine whether or not the system is stable or unstable, thesecond derivative testis applied. WithV{\displaystyle V}denoting the staticequation of motionof a system with a singledegree of freedomthe following calculations can be performed: When considering more than one dimension, it is possible to get different results in different directions, for example stability with respect to displacements in thex-direction but instability in they-direction, a case known as asaddle point. Generally an equilibrium is only referred to as stable if it is stable in all directions. 
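For a single degree of freedom, the procedure described above amounts to locating points where V′(q) = 0 and classifying them by the sign of V″(q). The sketch below (hypothetical helpers `d1`, `d2`, `classify`; derivatives are taken by central differences, a numerical approximation) applies this to a double-well potential:

```python
# Second-derivative test for a 1-DOF potential V(q): an equilibrium q* has
# V'(q*) = 0; V''(q*) > 0 means stable, V''(q*) < 0 unstable.

def d1(V, q, h=1e-5):
    return (V(q + h) - V(q - h)) / (2 * h)

def d2(V, q, h=1e-4):
    return (V(q + h) - 2 * V(q) + V(q - h)) / h ** 2

def classify(V, q, tol=1e-3):
    assert abs(d1(V, q)) < tol, "not an equilibrium"
    c = d2(V, q)
    return "stable" if c > tol else "unstable" if c < -tol else "degenerate"

V = lambda q: q ** 4 - 2 * q ** 2       # double-well potential
assert classify(V, 0.0) == "unstable"   # local maximum of V (hilltop)
assert classify(V, 1.0) == "stable"     # well minimum
assert classify(V, -1.0) == "stable"
```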
Sometimes theequilibriumequations – force and moment equilibrium conditions – are insufficient to determine the forces andreactions. Such a situation is described asstatically indeterminate. Statically indeterminate situations can often be solved by using information from outside the standard equilibrium equations. A stationary object (or set of objects) is in "static equilibrium," which is a special case of mechanical equilibrium. A paperweight on a desk is an example of static equilibrium. Other examples include arock balancesculpture, or a stack of blocks in the game ofJenga, so long as the sculpture or stack of blocks is not in the state ofcollapsing. Objects in motion can also be in equilibrium. A child sliding down aslideat constant speed would be in mechanical equilibrium, but not in static equilibrium (in the reference frame of the earth or slide). Another example of mechanical equilibrium is a person pressing a spring to a defined point. He or she can push it to an arbitrary point and hold it there, at which point the compressive load and the spring reaction are equal. In this state the system is in mechanical equilibrium. When the compressive force is removed the spring returns to its original state. The minimal number of static equilibria of homogeneous, convex bodies (when resting under gravity on a horizontal surface) is of special interest. In the planar case, the minimal number is 4, while in three dimensions one can build an object with just one stable and one unstable balance point.[5]Such an object is called agömböc.
https://en.wikipedia.org/wiki/Mechanical_equilibrium
Inmathematics, themex("minimumexcluded value") of asubsetof awell-orderedset is the smallest value from the whole set that does not belong to the subset. That is, it is theminimumvalue of thecomplement set. Beyond sets,subclassesof well-orderedclasseshave minimum excluded values. Minimum excluded values of subclasses of theordinal numbersare used incombinatorial game theoryto assignnim-valuestoimpartial games. According to theSprague–Grundy theorem, the nim-value of a game position is the minimum excluded value of the class of values of the positions that can be reached in a single move from the given position.[1] Minimum excluded values are also used ingraph theory, ingreedy coloringalgorithms. These algorithms typically choose an ordering of the vertices of a graph and choose a numbering of the available vertex colors. They then consider the vertices in order, for each vertex choosing its color to be the minimum excluded value of the set of colors already assigned to its neighbors.[2] The following examples all assume that the given set is a subset of the class ofordinal numbers:mex⁡(∅)=0mex⁡({1,2,3})=0mex⁡({0,2,4,6,…})=1mex⁡({0,1,4,7,12})=2mex⁡({0,1,2,3,…})=ωmex⁡({0,1,2,3,…,ω})=ω+1{\displaystyle {\begin{array}{lcl}\operatorname {mex} (\emptyset )&=&0\\[2pt]\operatorname {mex} (\{1,2,3\})&=&0\\[2pt]\operatorname {mex} (\{0,2,4,6,\ldots \})&=&1\\[2pt]\operatorname {mex} (\{0,1,4,7,12\})&=&2\\[2pt]\operatorname {mex} (\{0,1,2,3,\ldots \})&=&\omega \\[2pt]\operatorname {mex} (\{0,1,2,3,\ldots ,\omega \})&=&\omega +1\end{array}}} whereωis thelimit ordinalfor the natural numbers. In theSprague–Grundy theorythe minimum excluded ordinal is used to determine thenimberof anormal-playimpartial game. In such a game, either player has the same moves in each position and the last player to move wins. The nimber is equal to 0 for a game that is lost immediately by the first player, and is equal to the mex of the nimbers of all possible next positions for any other game. 
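For finite subsets of the natural numbers the mex is directly computable; the transfinite examples above (those whose value is ω or ω + 1) are of course outside what a program can represent. A minimal sketch:

```python
def mex(s):
    # Smallest non-negative integer not contained in the finite set s.
    n = 0
    while n in s:
        n += 1
    return n

assert mex(set()) == 0
assert mex({1, 2, 3}) == 0
assert mex({0, 2, 4, 6}) == 1
assert mex({0, 1, 4, 7, 12}) == 2
```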
For example, in a one-pile version ofNim, the game starts with a pile ofnstones, and the player to move may take any positive number of stones. Ifnis zero stones, the nimber is 0 because the mex of theempty setof legal moves is the nimber 0. Ifnis 1 stone, the player to move will leave 0 stones, andmex({0}) = 1, gives the nimber for this case. Ifnis 2 stones, the player to move can leave 0 or 1 stones, giving the nimber 2 as the mex of the nimbers{0, 1}.In general, the player to move with a pile ofnstones can leave anywhere from 0 ton− 1stones; the mex of the nimbers{0, 1, …,n− 1}is always the nimbern. The first player wins in Nimif and only ifthe nimber is not zero, so from this analysis we can conclude that the first player wins if and only if the starting number of stones in a one-pile game of Nim is not zero; the winning move is to take all the stones. If we change the game so that the player to move can take up to 3 stones only, then withn= 4stones, the successor states have nimbers{1, 2, 3},giving a mex of 0. Since the nimber for 4 stones is 0, the first player loses. The second player's strategy is to respond to whatever move the first player makes by taking the rest of the stones. Forn= 5stones, the nimbers of the successor states of 2, 3, and 4 stones are the nimbers 2, 3, and 0 (as we just calculated); the mex of the set of nimbers{0, 2, 3}is the nimber 1, so starting with 5 stones in this game is a win for the first player. Seenimbersfor more details on the meaning of nimber values.
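The analysis of the take-up-to-3 variant can be reproduced with a small Sprague–Grundy computation (a sketch; the helper name `grundy` and the `max_take` parameter are illustrative). It recovers the pattern that the nimber of a pile of n stones is n mod 4, so multiples of 4 are losses for the player to move:

```python
def grundy(n, max_take=3):
    # Nimber of a single pile of n stones when a move removes 1..max_take
    # stones; by Sprague-Grundy, g(m) = mex of {g(m - k) : 1 <= k <= max_take}.
    g = [0] * (n + 1)
    for m in range(1, n + 1):
        seen = {g[m - k] for k in range(1, min(max_take, m) + 1)}
        v = 0
        while v in seen:
            v += 1
        g[m] = v
    return g[n]

assert grundy(4) == 0   # 4 stones: loss for the player to move
assert grundy(5) == 1   # 5 stones: win for the player to move
assert [grundy(n) for n in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```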
https://en.wikipedia.org/wiki/Mex_(mathematics)
Inmathematics, asaddle pointorminimax point[1]is apointon thesurfaceof thegraph of a functionwhere theslopes(derivatives) inorthogonaldirections are all zero (acritical point), but which is not alocal extremumof the function.[2]An example of a saddle point is when there is a critical point with a relativeminimumalong one axial direction (between peaks) and a relativemaximumalong the crossing axis. However, a saddle point need not be in this form. For example, the functionf(x,y)=x2+y3{\displaystyle f(x,y)=x^{2}+y^{3}}has a critical point at(0,0){\displaystyle (0,0)}that is a saddle point since it is neither a relative maximum nor relative minimum, but it does not have a relative maximum or relative minimum in they{\displaystyle y}-direction. The name derives from the fact that the prototypical example in two dimensions is asurfacethatcurves upin one direction, andcurves downin a different direction, resembling a ridingsaddle. In terms ofcontour lines, a saddle point in two dimensions gives rise to a contour map with, in principle, a pair of lines intersecting at the point. Such intersections are rare in contour maps drawn with discrete contour lines, such as ordnance survey maps, as the height of the saddle point is unlikely to coincide with the integer multiples used in such maps. Instead, the saddle point appears as a blank space in the middle of four sets of contour lines that approach and veer away from it. For a basic saddle point, these sets occur in pairs, with an opposing high pair and an opposing low pair positioned in orthogonal directions. The critical contour lines generally do not have to intersect orthogonally. A simple criterion for checking if a givenstationary pointof a real-valued functionF(x,y) of two real variables is a saddle point is to compute the function'sHessian matrixat that point: if the Hessian isindefinite, then that point is a saddle point. 
For example, the Hessian matrix of the functionz=x2−y2{\displaystyle z=x^{2}-y^{2}}at the stationary point(x,y,z)=(0,0,0){\displaystyle (x,y,z)=(0,0,0)}is the matrix which is indefinite. Therefore, this point is a saddle point. This criterion gives only a sufficient condition. For example, the point(0,0,0){\displaystyle (0,0,0)}is a saddle point for the functionz=x4−y4,{\displaystyle z=x^{4}-y^{4},}but the Hessian matrix of this function at the origin is thenull matrix, which is not indefinite. In the most general terms, asaddle pointfor asmooth function(whosegraphis acurve,surfaceorhypersurface) is a stationary point such that the curve/surface/etc. in theneighborhoodof that point is not entirely on any side of thetangent spaceat that point. In a domain of one dimension, a saddle point is apointwhich is both astationary pointand apoint of inflection. Since it is a point of inflection, it is not alocal extremum. Asaddle surfaceis asmooth surfacecontaining one or more saddle points. Classical examples of two-dimensional saddle surfaces in theEuclidean spaceare second order surfaces, thehyperbolic paraboloidz=x2−y2{\displaystyle z=x^{2}-y^{2}}(which is often referred to as "thesaddle surface" or "the standard saddle surface") and thehyperboloid of one sheet. ThePringlespotato chip or crisp is an everyday example of a hyperbolic paraboloid shape. Saddle surfaces have negativeGaussian curvaturewhich distinguish them from convex/elliptical surfaces which have positive Gaussian curvature. A classical third-order saddle surface is themonkey saddle.[3] In a two-playerzero sumgame defined on a continuous space, theequilibriumpoint is a saddle point. For a second-order linear autonomous system, acritical pointis a saddle point if thecharacteristic equationhas one positive and one negative realeigenvalue.[4] In optimization subject to equality constraints, the first-order conditions describe a saddle point of theLagrangian. 
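The Hessian criterion and its failure mode can both be demonstrated numerically. In the sketch below (hypothetical helpers; derivatives by central differences), a symmetric 2×2 Hessian is indefinite exactly when its determinant is negative, and the x⁴ − y⁴ case shows why the test is only sufficient:

```python
def hessian_2x2(f, x, y, h=1e-4):
    # Central-difference Hessian of f at (x, y) (a numerical approximation).
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return fxx, fxy, fyy

def is_saddle_2x2(f, x, y, tol=1e-8):
    # A symmetric 2x2 matrix is indefinite iff its determinant is negative.
    fxx, fxy, fyy = hessian_2x2(f, x, y)
    return fxx * fyy - fxy ** 2 < -tol

assert is_saddle_2x2(lambda x, y: x ** 2 - y ** 2, 0.0, 0.0)       # saddle
assert not is_saddle_2x2(lambda x, y: x ** 2 + y ** 2, 0.0, 0.0)   # minimum
# Only a sufficient condition: for x^4 - y^4 the Hessian at the origin
# vanishes, so the test reports nothing even though (0,0) is a saddle.
assert not is_saddle_2x2(lambda x, y: x ** 4 - y ** 4, 0.0, 0.0)
```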
Indynamical systems, if the dynamic is given by adifferentiable mapf, then a point is hyperbolic if and only if the differential offn{\displaystyle f^{n}}(wherenis the period of the point) has no eigenvalue on the (complex)unit circlewhen computed at the point. Then asaddle pointis a hyperbolicperiodic pointwhosestableandunstable manifoldseach have adimensionthat is not zero. A saddle point of a matrix is an element which is both the largest element in its column and the smallest element in its row.
https://en.wikipedia.org/wiki/Saddle_point
Instatistics, thesample maximumandsample minimum,also called thelargest observationandsmallest observation,are the values of the greatest and least elements of asample.[1]They are basicsummary statistics, used indescriptive statisticssuch as thefive-number summaryandBowley's seven-figure summaryand the associatedbox plot. The minimum and the maximum value are the first and lastorder statistics(often denotedX(1)andX(n)respectively, for a sample size ofn). If the sample hasoutliers, they necessarily include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum need not be outliers, if they are not unusually far from other observations. The sample maximum and minimum are theleastrobust statistics: they are maximally sensitive to outliers. This can either be an advantage or a drawback: if extreme values are real (not measurement errors), and of real consequence, as in applications ofextreme value theorysuch as building dikes or financial loss, then outliers (as reflected in sample extrema) are important. On the other hand, if outliers have little or no impact on actual outcomes, then using non-robust statistics such as the sample extrema simply clouds the statistics, and robust alternatives should be used, such as otherquantiles: the 10th and 90thpercentiles(first and lastdecile) are more robust alternatives. In addition to being a component of every statistic that uses all elements of the sample, the sample extrema are important parts of therange, a measure of dispersion, andmid-range, a measure of location. They also realize themaximum absolute deviation: one of them is thefurthestpoint from any given point, particularly a measure of center such as the median or mean. For a sample set, the maximum function is non-smooth and thus non-differentiable. 
For optimization problems that occur in statistics, it often needs to be approximated by a smooth function that is close to the maximum of the set; a smooth maximum, for example, is a good approximation of the sample maximum. The sample maximum and minimum are basic summary statistics, showing the most extreme observations, and are used in the five-number summary, a version of the seven-number summary, and the associated box plot. The sample maximum and minimum provide a non-parametric prediction interval: in a sample from a population, or more generally an exchangeable sequence of random variables, each observation is equally likely to be the maximum or minimum. Thus if one has a sample {X_1, ..., X_n} and one picks another observation X_{n+1}, then it has probability 1/(n+1) of being the largest value seen so far, probability 1/(n+1) of being the smallest value seen so far, and thus the other (n−1)/(n+1) of the time X_{n+1} falls between the sample maximum and sample minimum of {X_1, ..., X_n}. Thus, denoting the sample maximum and minimum by M and m, this yields an (n−1)/(n+1) prediction interval of [m, M]. For example, if n = 19, then [m, M] gives an 18/20 = 90% prediction interval: 90% of the time, the 20th observation falls between the smallest and largest observations seen heretofore. Likewise, n = 39 gives a 95% prediction interval, and n = 199 gives a 99% prediction interval. Due to their sensitivity to outliers, the sample extrema cannot reliably be used as estimators unless the data is clean; robust alternatives include the first and last deciles. However, with clean data or in theoretical settings, they can sometimes prove very good estimators, particularly for platykurtic distributions, where for small data sets the mid-range is the most efficient estimator.
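The non-parametric prediction interval described above can be computed in a few lines. A minimal sketch (the function name is illustrative):

```python
def extrema_prediction_interval(sample):
    """Non-parametric prediction interval [m, M] from the sample extrema.

    For an exchangeable sequence, the next observation falls inside
    [min, max] with probability (n - 1)/(n + 1), where n = len(sample).
    Returns (m, M, coverage)."""
    n = len(sample)
    return min(sample), max(sample), (n - 1) / (n + 1)
```

With n = 19 observations the coverage is 18/20 = 90%, matching the example in the text.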
They are inefficient estimators of location for mesokurtic distributions, such as the normal distribution, and for leptokurtic distributions, however. For sampling without replacement from a uniform distribution with one or two unknown endpoints (so 1, 2, ..., N with N unknown, or M, M+1, ..., N with both M and N unknown), the sample maximum, or respectively the sample maximum and sample minimum, are sufficient and complete statistics for the unknown endpoints; thus an unbiased estimator derived from these will be the UMVU estimator. If only the top endpoint is unknown, the sample maximum is a biased estimator for the population maximum, but the unbiased estimator ((k+1)/k)m − 1 (where m is the sample maximum and k is the sample size) is the UMVU estimator; see the German tank problem for details. If both endpoints are unknown, then the sample range is a biased estimator for the population range, but correcting as for the maximum above yields the UMVU estimator; in that case the mid-range is an unbiased (and hence UMVU) estimator of the midpoint of the interval (here equivalently the population median, average, or mid-range). The reason the sample extrema are sufficient statistics is that the conditional distribution of the non-extreme samples is just the uniform distribution on the interval between the sample maximum and minimum: once the endpoints are fixed, the values of the interior points add no additional information. The sample extrema can be used for a simple normality test, specifically of kurtosis: one computes the t-statistic of the sample maximum and minimum (subtracts the sample mean and divides by the sample standard deviation), and if they are unusually large for the sample size (as per the three sigma rule and table therein, or more precisely a Student's t-distribution), then the kurtosis of the sample distribution deviates significantly from that of the normal distribution.
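The UMVU estimator ((k+1)/k)m − 1 for the one-unknown-endpoint case is straightforward to compute. A minimal sketch (the function name is illustrative):

```python
def german_tank_estimate(sample):
    """UMVU estimate of the unknown maximum N when sampling without
    replacement from 1, 2, ..., N: ((k + 1)/k) * m - 1, where m is the
    sample maximum and k is the sample size."""
    k, m = len(sample), max(sample)
    return (k + 1) / k * m - 1
```

For example, with observed serial numbers 19, 40, 42, 60 (so k = 4, m = 60), the estimate is (5/4)·60 − 1 = 74.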
For instance, a daily process should expect a 3σ event once per year (of calendar days; once every year and a half of business days), while a 4σ event happens on average every 40 years of calendar days or 60 years of business days (once in a lifetime), 5σ events happen every 5,000 years (once in recorded history), and 6σ events happen every 1.5 million years (essentially never). Thus if the sample extrema are 6 sigmas from the mean, one has a significant failure of normality. Further, this test is very easy to communicate without involved statistics. These tests of normality can be applied if one faces kurtosis risk, for instance. Sample extrema also play two main roles in extreme value theory. However, caution must be used in using sample extrema as guidelines: in heavy-tailed distributions or for non-stationary processes, extreme events can be significantly more extreme than any previously observed event. This is elaborated in black swan theory.
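The extrema-based kurtosis check described above amounts to measuring how many sample standard deviations the minimum and maximum lie from the sample mean. A minimal sketch (the function name is illustrative):

```python
import statistics

def extrema_sigmas(sample):
    """t-statistics of the sample extrema: the number of sample standard
    deviations the minimum and maximum lie from the sample mean."""
    mu = statistics.mean(sample)
    sd = statistics.stdev(sample)
    return (min(sample) - mu) / sd, (max(sample) - mu) / sd
```

If either returned value is far beyond what the three sigma rule (or a Student's t table) predicts for the sample size, the data's kurtosis deviates from normality.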
https://en.wikipedia.org/wiki/Sample_maximum_and_minimum
In algebra, a field k is perfect if every irreducible polynomial over k is separable; equivalently, if k either has characteristic 0, or has characteristic p and the Frobenius endomorphism x ↦ x^p is surjective (that is, every element of k is a pth power). Otherwise, k is called imperfect. In particular, all fields of characteristic zero and all finite fields are perfect. Perfect fields are significant because Galois theory over these fields becomes simpler, since the general Galois assumption of field extensions being separable is automatically satisfied over these fields. Another important property of perfect fields is that they admit Witt vectors. More generally, a ring of characteristic p (p a prime) is called perfect if the Frobenius endomorphism is an automorphism.[1] (When restricted to integral domains, this is equivalent to the condition that every element is a pth power.) Most fields that are encountered in practice are perfect. The imperfect case arises mainly in algebraic geometry in characteristic p > 0. Every imperfect field is necessarily transcendental over its prime subfield (the minimal subfield), because the latter is perfect. An example of an imperfect field is the field F_p(x) of rational functions in an indeterminate x. This can be seen from the fact that the Frobenius endomorphism sends x ↦ x^p and is therefore not surjective. Equivalently, one can show that the polynomial f(X) = X^p − x, which is an element of F_p(x)[X], is irreducible but inseparable. This field embeds into a perfect field, called its perfection. Imperfect fields cause technical difficulties because irreducible polynomials can become reducible in the algebraic closure of the base field. For example,[4] consider f(x, y) = x^p + a y^p ∈ k[x, y] for k an imperfect field of characteristic p and a not a pth power in k.
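For a finite field GF(p) the Frobenius condition can be verified by brute force: x ↦ x^p permutes the elements, so every element is a pth power (this is why finite fields are perfect). A minimal sketch (the function name is illustrative):

```python
def frobenius_is_automorphism(p):
    """Check that the Frobenius map x -> x^p permutes Z/pZ, i.e. every
    element of GF(p) is a p-th power, so GF(p) is perfect."""
    image = {pow(x, p, p) for x in range(p)}
    return image == set(range(p))
```

By Fermat's little theorem x^p ≡ x (mod p), so the map is in fact the identity on GF(p); the check confirms surjectivity for any prime p.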
Then in its algebraic closure k^alg, the equality x^p + a y^p = (x + by)^p holds, where b^p = a and such a b exists in the algebraic closure. Geometrically, this means that f does not define an affine plane curve in k[x, y]. Any finitely generated field extension K over a perfect field k is separably generated, i.e. admits a separating transcendence base, that is, a transcendence base Γ such that K is separably algebraic over k(Γ).[5] One of the equivalent conditions says that, in characteristic p, the field obtained by adjoining all p^r-th roots (r ≥ 1) is perfect; it is called the perfect closure of k and usually denoted k^(p^−∞). The perfect closure can be used in a test for separability. More precisely, a commutative k-algebra A is separable if and only if A ⊗_k k^(p^−∞) is reduced.[6] In terms of universal properties, the perfect closure of a ring A of characteristic p is a perfect ring A_p of characteristic p together with a ring homomorphism u : A → A_p such that for any other perfect ring B of characteristic p with a homomorphism v : A → B there is a unique homomorphism f : A_p → B such that v factors through u (i.e. v = fu). The perfect closure always exists; the proof involves "adjoining pth roots of elements of A", similar to the case of fields.[7] The perfection of a ring A of characteristic p is the dual notion (though this term is sometimes used for the perfect closure). In other words, the perfection R(A) of A is a perfect ring of characteristic p together with a map θ : R(A) → A such that for any perfect ring B of characteristic p equipped with a map φ : B → A, there is a unique map f : B → R(A) such that φ factors through θ (i.e. φ = θf). The perfection of A may be constructed as follows. Consider the projective system ⋯ → A → A → A in which the transition maps are the Frobenius endomorphism. The inverse limit of this system is R(A); it consists of sequences (x_0, x_1, ...) of elements of A such that x_{i+1}^p = x_i for all i.
The map θ : R(A) → A sends (x_i) to x_0.[8]
https://en.wikipedia.org/wiki/Perfect_field
In arithmetic geometry, a Frobenioid is a category with some extra structure that generalizes the theory of line bundles on models of finite extensions of global fields. Frobenioids were introduced by Shinichi Mochizuki (2008). The word "Frobenioid" is a portmanteau of Frobenius and monoid, as certain Frobenius morphisms between Frobenioids are analogues of the usual Frobenius morphism, and some of the simplest examples of Frobenioids are essentially monoids. If M is a commutative monoid, it is acted on naturally by the monoid N of positive integers under multiplication, with an element n of N multiplying an element of M by n. The Frobenioid of M is the semidirect product of M and N. The underlying category of this Frobenioid is the category of the monoid, with one object and a morphism for each element of the monoid. The standard Frobenioid is the special case of this construction where M is the additive monoid of non-negative integers. An elementary Frobenioid is a generalization of the Frobenioid of a commutative monoid, given by a sort of semidirect product of the monoid of positive integers with a family Φ of commutative monoids over a base category D. In applications the category D is sometimes the category of models of finite separable extensions of a global field, Φ corresponds to the line bundles on these models, and the action of a positive integer n in N is given by taking the nth power of a line bundle. A Frobenioid consists of a category C together with a functor to an elementary Frobenioid, satisfying some complicated conditions related to the behavior of line bundles and divisors on models of global fields. One of Mochizuki's fundamental theorems states that under various conditions a Frobenioid can be reconstructed from the category C. A poly-Frobenioid is an extension of a Frobenioid.
https://en.wikipedia.org/wiki/Frobenioid
In mathematics, a finite field or Galois field (so named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are the integers mod p when p is a prime number. The order of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number p and every positive integer k there are fields of order p^k. All finite fields of a given order are isomorphic. Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory. A finite field is a finite set that is a field; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as the field axioms.[1] The number of elements of a finite field is called its order or, sometimes, its size. A finite field of order q exists if and only if q is a prime power p^k (where p is a prime number and k is a positive integer). In a field of order p^k, adding p copies of any element always results in zero; that is, the characteristic of the field is p.[1] For q = p^k, all fields of order q are isomorphic (see § Existence and uniqueness below).[2] Moreover, a field cannot contain two different finite subfields with the same order.
One may therefore identify all finite fields with the same order, and they are unambiguously denoted F_q or GF(q), where the letters GF stand for "Galois field".[3] In a finite field of order q, the polynomial X^q − X has all q elements of the finite field as roots. The non-zero elements of a finite field form a multiplicative group. This group is cyclic, so all non-zero elements can be expressed as powers of a single element called a primitive element of the field. (In general there will be several primitive elements for a given field.)[1] The simplest examples of finite fields are the fields of prime order: for each prime number p, the prime field of order p may be constructed as the integers modulo p, Z/pZ.[1] The elements of the prime field of order p may be represented by integers in the range 0, ..., p − 1. The sum, the difference and the product are the remainder of the division by p of the result of the corresponding integer operation.[1] The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm (see Extended Euclidean algorithm § Modular integers). Let F be a finite field. For any element x in F and any integer n, denote by n·x the sum of n copies of x.
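Prime-field arithmetic is ordinary integer arithmetic followed by reduction mod p; only the multiplicative inverse needs the extended Euclidean algorithm mentioned above. A minimal sketch (the function name is illustrative):

```python
def mod_inverse(a, p):
    """Multiplicative inverse of a in GF(p), computed with the extended
    Euclidean algorithm: track s with old_r = old_s * a (mod p)."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo p")
    return old_s % p
```

For instance, in GF(7) the inverse of 3 is 5, since 3·5 = 15 ≡ 1 (mod 7).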
The least positive n such that n·1 = 0 is the characteristic p of the field.[1] This allows defining a multiplication (k, x) ↦ k·x of an element k of GF(p) by an element x of F by choosing an integer representative for k. This multiplication makes F into a GF(p)-vector space.[1] It follows that the number of elements of F is p^n for some integer n.[1] The identity (x + y)^p = x^p + y^p (sometimes called the freshman's dream) is true in a field of characteristic p. This follows from the binomial theorem, as each binomial coefficient of the expansion of (x + y)^p, except the first and the last, is a multiple of p. By Fermat's little theorem, if p is a prime number and x is in the field GF(p) then x^p = x. This implies the equality X^p − X = ∏_{a ∈ GF(p)} (X − a) for polynomials over GF(p). More generally, every element in GF(p^n) satisfies the polynomial equation x^(p^n) − x = 0. Any finite field extension of a finite field is separable and simple. That is, if E is a finite field and F is a subfield of E, then E is obtained from F by adjoining a single element whose minimal polynomial is separable.
To use a piece of jargon, finite fields are perfect.[1] A more general algebraic structure that satisfies all the other axioms of a field, but whose multiplication is not required to be commutative, is called a division ring (or sometimes skew field). By Wedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.[1] Let q = p^n be a prime power, and F be the splitting field of the polynomial P = X^q − X over the prime field GF(p). This means that F is a finite field of lowest order in which P has q distinct roots (the formal derivative of P is P′ = −1, implying that gcd(P, P′) = 1, which in general implies that the splitting field is a separable extension of the original). The above identity shows that the sum and the product of two roots of P are roots of P, as is the multiplicative inverse of a root of P. In other words, the roots of P form a field of order q, which is equal to F by the minimality of the splitting field. The uniqueness up to isomorphism of splitting fields thus implies that all fields of order q are isomorphic. Also, if a field F has a field of order q = p^k as a subfield, its elements are the q roots of X^q − X, and F cannot contain another subfield of order q. In summary, we have the following classification theorem, first proved in 1893 by E. H. Moore:[2] The order of a finite field is a prime power. For every prime power q there are fields of order q, and they are all isomorphic.
In these fields, every element satisfies x^q = x, and the polynomial X^q − X factors as X^q − X = ∏_{a ∈ F} (X − a). It follows that GF(p^n) contains a subfield isomorphic to GF(p^m) if and only if m is a divisor of n; in that case, this subfield is unique. In fact, the polynomial X^(p^m) − X divides X^(p^n) − X if and only if m is a divisor of n. Given a prime power q = p^n with p prime and n > 1, the field GF(q) may be explicitly constructed in the following way. One first chooses an irreducible polynomial P in GF(p)[X] of degree n (such an irreducible polynomial always exists). Then the quotient ring GF(q) = GF(p)[X]/(P) of the polynomial ring GF(p)[X] by the ideal generated by P is a field of order q. More explicitly, the elements of GF(q) are the polynomials over GF(p) whose degree is strictly less than n. The addition and the subtraction are those of polynomials over GF(p). The product of two elements is the remainder of the Euclidean division by P of their product in GF(p)[X]. The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm; see Extended Euclidean algorithm § Simple algebraic field extensions. However, with this representation, elements of GF(q) may be difficult to distinguish from the corresponding polynomials.
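The quotient-ring construction is concrete in characteristic 2, where a polynomial over GF(2) of degree less than n is just an n-bit mask. A minimal sketch of multiplication in GF(2^n), reducing by the chosen irreducible polynomial whenever the degree reaches n (the function name is illustrative):

```python
def gf2n_mul(a, b, poly, n):
    """Multiply two elements of GF(2^n) = GF(2)[X]/(poly).

    Elements are bit masks of polynomials over GF(2); `poly` is the bit
    mask of the degree-n irreducible polynomial. Schoolbook shift-and-add,
    with reduction by `poly` whenever the degree of a reaches n."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & (1 << n):      # degree reached n: reduce modulo poly
            a ^= poly
    return result
```

For GF(16) with poly = 0b10011 (X^4 + X + 1), multiplying α (mask 0b0010) by α^3 (mask 0b1000) reduces α^4 to α + 1, i.e. mask 0b0011.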
Therefore, it is common to give a name, commonly α, to the element of GF(q) that corresponds to the polynomial X. The elements of GF(q) then become polynomials in α, where P(α) = 0, and, when one encounters a polynomial in α of degree greater than or equal to n (for example after a multiplication), one knows that one has to use the relation P(α) = 0 to reduce its degree (this is what the Euclidean division is doing). Except in the construction of GF(4), there are several possible choices for P, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses for P a polynomial of the form X^n + aX + b, which makes the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic 2, irreducible polynomials of the form X^n + aX + b may not exist. In characteristic 2, if the polynomial X^n + X + 1 is reducible, it is recommended to choose X^n + X^k + 1 with the lowest possible k that makes the polynomial irreducible. If all these trinomials are reducible, one chooses "pentanomials" X^n + X^a + X^b + X^c + 1, as polynomials of degree greater than 1 with an even number of terms are never irreducible in characteristic 2, having 1 as a root.[4] A possible choice for such a polynomial is given by Conway polynomials, which ensure a certain compatibility between the representation of a field and the representations of its subfields. In the next sections, we will show how the general construction method outlined above works for small finite fields.
The smallest non-prime field is the field with four elements, commonly denoted GF(4) or F_4. It consists of the four elements 0, 1, α, 1 + α such that α^2 = 1 + α, 1·α = α·1 = α, x + x = 0, and x·0 = 0·x = 0 for every x ∈ GF(4), the other operation results being easily deduced from the distributive law. See below for the complete operation tables. This may be deduced as follows from the results of the preceding section. Over GF(2), there is only one irreducible polynomial of degree 2: X^2 + X + 1. Therefore, for GF(4) the construction of the preceding section must involve this polynomial, and GF(4) = GF(2)[X]/(X^2 + X + 1). Let α denote a root of this polynomial in GF(4). This implies that α^2 = 1 + α, and that α and 1 + α are the elements of GF(4) that are not in GF(2). The tables of the operations in GF(4) result from this. A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2. In the third table, for the division of x by y, the values of x must be read in the left column, and the values of y in the top row. (Because 0·z = 0 for every z in every ring, division by 0 has to remain undefined.)
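The GF(4) rules above can be checked mechanically by encoding the four elements as 2-bit masks: 0 → 0, 1 → 1, α → 2, 1 + α → 3. A minimal sketch (function names are illustrative):

```python
# GF(4) = GF(2)[X]/(X^2 + X + 1); elements are 2-bit polynomial masks.
def gf4_add(x, y):
    return x ^ y              # characteristic 2: addition is XOR

def gf4_mul(x, y):
    result = 0
    for _ in range(2):
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
        if x & 0b100:         # reduce by X^2 + X + 1 (mask 0b111)
            x ^= 0b111
    return result
```

In this encoding α·α = 3, i.e. α^2 = 1 + α, and α·(1 + α) = α + α^2 = 1, so α and 1 + α are mutual inverses.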
From the tables, it can be seen that the additive structure of GF(4) is isomorphic to the Klein four-group, while the non-zero multiplicative structure is isomorphic to the cyclic group Z_3. The map φ : x ↦ x^2 is the non-trivial field automorphism, called the Frobenius automorphism, which sends α to the second root 1 + α of the above-mentioned irreducible polynomial X^2 + X + 1. For applying the above general construction of finite fields in the case of GF(p^2), one has to find an irreducible polynomial of degree 2. For p = 2, this has been done in the preceding section. If p is an odd prime, there are always irreducible polynomials of the form X^2 − r, with r in GF(p). More precisely, the polynomial X^2 − r is irreducible over GF(p) if and only if r is a quadratic non-residue modulo p (this is almost the definition of a quadratic non-residue). There are (p − 1)/2 quadratic non-residues modulo p. For example, 2 is a quadratic non-residue for p = 3, 5, 11, 13, ..., and 3 is a quadratic non-residue for p = 5, 7, 17, .... If p ≡ 3 mod 4, that is p = 3, 7, 11, 19, ..., one may choose −1 ≡ p − 1 as a quadratic non-residue, which allows us to have the very simple irreducible polynomial X^2 + 1.
Having chosen a quadratic non-residue r, let α be a symbolic square root of r, that is, a symbol that has the property α^2 = r, in the same way that the complex number i is a symbolic square root of −1. Then, the elements of GF(p^2) are all the linear expressions a + bα, with a and b in GF(p). The operations on GF(p^2) are defined as follows (the operations between elements of GF(p) represented by Latin letters are the operations in GF(p)):

−(a + bα) = −a + (−b)α
(a + bα) + (c + dα) = (a + c) + (b + d)α
(a + bα)(c + dα) = (ac + rbd) + (ad + bc)α
(a + bα)^(−1) = a(a^2 − rb^2)^(−1) + (−b)(a^2 − rb^2)^(−1)α

The polynomial X^3 − X − 1 is irreducible over GF(2) and GF(3), that is, it is irreducible modulo 2 and 3 (to show this, it suffices to show that it has no root in GF(2) or in GF(3)).
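The GF(p^2) formulas above translate directly into code, with elements represented as pairs (a, b) standing for a + bα. A minimal sketch (function names are illustrative; the norm is inverted with Fermat's little theorem rather than the extended Euclidean algorithm):

```python
def gfp2_mul(x, y, p, r):
    """(a + b*alpha)(c + d*alpha) in GF(p^2), where alpha^2 = r and r is
    a quadratic non-residue mod p. Elements are pairs (a, b)."""
    a, b = x
    c, d = y
    return ((a * c + r * b * d) % p, (a * d + b * c) % p)

def gfp2_inv(x, p, r):
    """(a + b*alpha)^(-1) = a*N^(-1) + (-b)*N^(-1)*alpha, N = a^2 - r*b^2."""
    a, b = x
    n = pow(a * a - r * b * b, p - 2, p)   # N^(-1) by Fermat's little theorem
    return (a * n % p, (-b) * n % p)
```

For p = 3 one may take r = 2 (≡ −1, a quadratic non-residue mod 3); then (1 + α)^(−1) = 2 + α, since (1 + α)(2 + α) = 2 + 3α + α^2 = 2 + 0·α + 2 = 1 in GF(9).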
It follows that the elements of GF(8) and GF(27) may be represented by expressions a + bα + cα^2, where a, b, c are elements of GF(2) or GF(3) (respectively), and α is a symbol such that α^3 = α + 1. The addition, additive inverse and multiplication on GF(8) and GF(27) may thus be defined as follows; in the following formulas, the operations between elements of GF(2) or GF(3), represented by Latin letters, are the operations in GF(2) or GF(3), respectively:

−(a + bα + cα^2) = −a + (−b)α + (−c)α^2   (for GF(8), this operation is the identity)
(a + bα + cα^2) + (d + eα + fα^2) = (a + d) + (b + e)α + (c + f)α^2
(a + bα + cα^2)(d + eα + fα^2) = (ad + bf + ce) + (ae + bd + bf + ce + cf)α + (af + be + cd + cf)α^2

The polynomial X^4 + X + 1 is irreducible over GF(2), that is, it is irreducible modulo 2.
It follows that the elements of GF(16) may be represented by expressions a + bα + cα^2 + dα^3, where a, b, c, d are either 0 or 1 (elements of GF(2)), and α is a symbol such that α^4 = α + 1 (that is, α is defined as a root of the given irreducible polynomial). As the characteristic of GF(2) is 2, each element is its own additive inverse in GF(16). The addition and multiplication on GF(16) may be defined as follows; in the following formulas, the operations between elements of GF(2), represented by Latin letters, are the operations in GF(2):

(a + bα + cα^2 + dα^3) + (e + fα + gα^2 + hα^3) = (a + e) + (b + f)α + (c + g)α^2 + (d + h)α^3
(a + bα + cα^2 + dα^3)(e + fα + gα^2 + hα^3) = (ae + bh + cg + df) + (af + be + bh + cg + df + ch + dg)α + (ag + bf + ce + ch + dg + dh)α^2 + (ah + bg + cf + de + dh)α^3

The field GF(16) has eight primitive elements (the elements that have all nonzero elements of GF(16) as integer powers). These elements are the four roots of X^4 + X + 1 and their multiplicative inverses. In particular, α is a primitive element, and the primitive elements are α^m with m less than and coprime with 15 (that is, m = 1, 2, 4, 7, 8, 11, 13, 14).
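The claim that GF(16) has exactly eight primitive elements can be verified by computing the multiplicative order of every nonzero element in the bit-mask representation (coefficients a, b, c, d packed into 4 bits). A minimal sketch (function names are illustrative):

```python
def gf16_mul(a, b):
    """Multiply in GF(16) = GF(2)[X]/(X^4 + X + 1), bit-mask encoding."""
    result = 0
    for _ in range(4):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:           # reduce by X^4 + X + 1 (mask 0b10011)
            a ^= 0b10011
    return result

def order(x):
    """Multiplicative order of a nonzero element of GF(16)."""
    y, n = x, 1
    while y != 1:
        y = gf16_mul(y, x)
        n += 1
    return n

# Primitive elements generate the whole multiplicative group of order 15.
primitive = [x for x in range(1, 16) if order(x) == 15]
```

The count is φ(15) = 8, and α (mask 0b0010) is among them, in agreement with the text.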
The set of non-zero elements in GF(q) is an abelian group under multiplication, of order q − 1. By Lagrange's theorem, there exists a divisor k of q − 1 such that x^k = 1 for every non-zero x in GF(q). As the equation x^k = 1 has at most k solutions in any field, q − 1 is the lowest possible value for k. The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary, the multiplicative group of the non-zero elements of GF(q) is cyclic of order q − 1. A generator a of this group is called a primitive element of GF(q). Unless q = 2, 3, the primitive element is not unique. The number of primitive elements is φ(q − 1), where φ is Euler's totient function. The result above implies that x^q = x for every x in GF(q). The particular case where q is prime is Fermat's little theorem. If a is a primitive element in GF(q), then for any non-zero element x in F, there is a unique integer n with 0 ≤ n ≤ q − 2 such that x = a^n. This integer n is called the discrete logarithm of x to the base a. While a^n can be computed very quickly, for example using exponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in various cryptographic protocols; see Discrete logarithm for details.
When the nonzero elements of GF(q) are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo q − 1. However, addition amounts to computing the discrete logarithm of a^m + a^n. The identity a^m + a^n = a^n(a^(m−n) + 1) allows one to solve this problem by constructing the table of the discrete logarithms of a^n + 1, called Zech's logarithms, for n = 0, …, q − 2 (it is convenient to define the discrete logarithm of zero as being −∞). Zech's logarithms are useful for large computations, such as linear algebra over medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.

Every nonzero element of a finite field is a root of unity, as x^(q−1) = 1 for every nonzero element of GF(q). If n is a positive integer, an nth primitive root of unity is a solution of the equation x^n = 1 that is not a solution of the equation x^m = 1 for any positive integer m < n. If a is an nth primitive root of unity in a field F, then F contains all the n roots of unity, which are 1, a, a^2, …, a^(n−1).
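A minimal sketch of the Zech-logarithm technique for GF(13), assuming the primitive element a = 2; addition of two elements given by their discrete logarithms becomes a single table lookup.

```python
# Sketch: Zech's logarithms in GF(13) with primitive element 2.
# Addition uses a^m + a^n = a^n * (a^(m-n) + 1).

q, a = 13, 2

log = {}                      # discrete logarithm table: element -> exponent
x = 1
for n in range(q - 1):
    log[x] = n
    x = x * a % q

# Zech's logarithm Z(n) = log(a^n + 1); None stands for the -infinity case
# that arises when a^n + 1 == 0.
zech = {n: log.get((pow(a, n, q) + 1) % q) for n in range(q - 1)}

def add_via_logs(m, n):
    # returns log(a^m + a^n), or None when the sum is zero
    z = zech[(m - n) % (q - 1)]
    if z is None:
        return None
    return (n + z) % (q - 1)

# check against direct field arithmetic
for m in range(q - 1):
    for n in range(q - 1):
        s = (pow(a, m, q) + pow(a, n, q)) % q
        assert add_via_logs(m, n) == (None if s == 0 else log[s])
```

The table has the same size as the field, which is exactly the trade-off described above.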
The field GF(q) contains an nth primitive root of unity if and only if n is a divisor of q − 1; if n is a divisor of q − 1, then the number of primitive nth roots of unity in GF(q) is φ(n) (Euler's totient function). The number of nth roots of unity in GF(q) is gcd(n, q − 1).

In a field of characteristic p, every (np)th root of unity is also an nth root of unity. It follows that primitive (np)th roots of unity never exist in a field of characteristic p. On the other hand, if n is coprime to p, the roots of the nth cyclotomic polynomial are distinct in every field of characteristic p, as this polynomial is a divisor of X^n − 1, whose discriminant n^n is nonzero modulo p. It follows that the nth cyclotomic polynomial factors over GF(q) into distinct irreducible polynomials that all have the same degree, say d, and that GF(p^d) is the smallest field of characteristic p that contains the nth primitive roots of unity.

When computing Brauer characters, one uses the map α^k ↦ exp(2πik/(q − 1)) to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the nonzero elements of the base subfield GF(p) correspond to evenly spaced points around the unit circle.
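The gcd(n, q − 1) count of nth roots of unity is easy to verify by direct enumeration in a small prime field; a sketch for GF(13):

```python
# Sketch: in GF(13), the number of solutions of x^n = 1 equals gcd(n, q - 1).
from math import gcd

q = 13
for n in range(1, 30):
    roots = sum(1 for x in range(1, q) if pow(x, n, q) == 1)
    assert roots == gcd(n, q - 1)
```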
The field GF(64) has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements with minimal polynomial of degree 6 over GF(2)) are primitive elements; and the primitive elements are not all conjugate under the Galois group.

The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(2^2) = GF(4), GF(2^3) = GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2). The union of GF(4) and GF(8) thus has 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no proper subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 − X over GF(2).

The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements.

Factoring the cyclotomic polynomials over GF(2) shows that the best choice to construct GF(64) is to define it as GF(2)[X] / (X^6 + X + 1). In fact, this generator is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
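The count of 9 irreducible monic polynomials of degree 6 over GF(2) can be confirmed by trial division; the sketch below represents polynomials over GF(2) as bitmasks (bit i is the coefficient of x^i).

```python
# Sketch: count irreducible monic polynomials of degree 6 over GF(2)
# by trial division against all polynomials of degree 1..3.

def poly_mod(a, b):
    # remainder of a divided by b in GF(2)[x]
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(f):
    deg = f.bit_length() - 1
    # a reducible polynomial of degree d has a factor of degree <= d // 2
    for g in range(2, 1 << (deg // 2 + 1)):
        if poly_mod(f, g) == 0:
            return False
    return True

# all degree-6 polynomials are the bitmasks in [64, 128)
count = sum(1 for f in range(1 << 6, 1 << 7) if is_irreducible(f))
assert count == 9
```

In particular X^6 + X + 1 (bitmask 0b1000011) is among the nine.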
In this section, p is a prime number and q = p^n is a power of p. In GF(q), the identity (x + y)^p = x^p + y^p implies that the map φ : x ↦ x^p is a GF(p)-linear endomorphism and a field automorphism of GF(q), which fixes every element of the subfield GF(p). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius.

Denoting by φ^k the composition of φ with itself k times, we have φ^k : x ↦ x^(p^k). It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as otherwise the polynomial X^(p^k) − X would have more than p^k roots. There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which are Id = φ^0, φ, φ^2, …, φ^(n−1). In terms of Galois theory, this means that GF(p^n) is a Galois extension of GF(p) with a cyclic Galois group. The fact that the Frobenius map is surjective implies that every finite field is perfect.

If F is a finite field, a non-constant monic polynomial with coefficients in F is irreducible over F if it is not the product of two non-constant monic polynomials with coefficients in F. As every polynomial ring over a field is a unique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials. There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or the rational numbers.
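The identity (x + y)^p = x^p + y^p behind the Frobenius map, together with the fact that Frobenius fixes the prime subfield (Fermat's little theorem), can be checked exhaustively in a small prime field:

```python
# Sketch: the "freshman's dream" and Fermat's little theorem in GF(7).
p = 7
for x in range(p):
    for y in range(p):
        # (x + y)^p = x^p + y^p in characteristic p
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
    # the Frobenius map fixes every element of GF(p)
    assert pow(x, p, p) == x
```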
At least for this reason, every computer algebra system has functions for factoring polynomials over finite fields or, at least, over finite prime fields.

The polynomial X^q − X factors into linear factors over a field of order q. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of order q. This implies that, if q = p^n, then X^q − X is the product of all monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q − X, its degree divides n, as its splitting field is contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n); all roots of P then belong to GF(p^n) and are roots of X^q − X, so P divides X^q − X. As X^q − X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it. This property is used to compute the product of the irreducible factors of each degree of polynomials over GF(p); see Distinct degree factorization.

The number N(q, n) of monic irreducible polynomials of degree n over GF(q) is given by[5]

N(q, n) = (1/n) Σ_{d | n} μ(d) q^(n/d),

where μ is the Möbius function. This formula is an immediate consequence of the property of X^q − X above and the Möbius inversion formula. By this formula, the number of irreducible (not necessarily monic) polynomials of degree n over GF(q) is (q − 1)N(q, n).

The exact formula implies the inequality

N(q, n) ≥ (1/n) (q^n − Σ_{ℓ | n, ℓ prime} q^(n/ℓ));

this is sharp if and only if n is a power of some prime. For every q and every n, the right-hand side is positive, so there is at least one irreducible polynomial of degree n over GF(q).
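The Möbius-function formula for N(q, n) is a few lines of code; the degree-6 count over GF(2) matches the value 9 derived for GF(64).

```python
# Sketch: N(q, n) = (1/n) * sum over d | n of mu(d) * q^(n/d).

def mobius(d):
    # Moebius function by trial factorization
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:
                return 0          # squared prime factor
            result = -result
        k += 1
    return -result if d > 1 else result

def num_irreducible(q, n):
    total = sum(mobius(d) * q ** (n // d)
                for d in range(1, n + 1) if n % d == 0)
    return total // n             # the sum is always divisible by n

assert num_irreducible(2, 6) == 9   # the nine degree-6 polynomials over GF(2)
assert num_irreducible(2, 1) == 2   # x and x + 1
assert num_irreducible(2, 2) == 1   # x^2 + x + 1
```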
In cryptography, the difficulty of the discrete logarithm problem in finite fields or in elliptic curves is the basis of several widely used protocols, such as the Diffie–Hellman protocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field.[6] In coding theory, many codes are constructed as subspaces of vector spaces over finite fields. Finite fields are used by many error correction codes, such as Reed–Solomon error correction codes or BCH codes. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of the carry-less product.

Finite fields are widely used in number theory, as many problems over the integers may be solved by reducing them modulo one or several prime numbers. For example, the fastest known algorithms for polynomial factorization and linear algebra over the field of rational numbers proceed by reduction modulo one or several primes, followed by reconstruction of the solution using the Chinese remainder theorem, Hensel lifting or the LLL algorithm. Similarly, many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers; see, for example, the Hasse principle. Many recent developments of algebraic geometry were motivated by the need to enlarge the power of these modular methods. Wiles' proof of Fermat's Last Theorem is an example of a deep result involving many mathematical tools, including finite fields. The Weil conjectures concern the number of points on algebraic varieties over finite fields, and the theory has many applications including exponential and character sum estimates.
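The Diffie–Hellman idea can be sketched over the multiplicative group of a small prime field; the parameters below are purely illustrative toy values (real deployments use fields of thousands of bits, or elliptic curves).

```python
# Sketch: toy Diffie-Hellman key exchange in GF(p)^* with illustrative
# (insecure, tiny) parameters p = 2039, g = 7.
import secrets

p, g = 2039, 7

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent

A = pow(g, a, p)                      # Alice publishes g^a
B = pow(g, b, p)                      # Bob publishes g^b

# both sides derive the same shared secret g^(a*b); an eavesdropper who
# sees only A and B must solve a discrete logarithm to recover it
assert pow(B, a, p) == pow(A, b, p)
```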
Finite fields have widespread application in combinatorics, two well-known examples being the definition of Paley graphs and the related construction for Hadamard matrices. In arithmetic combinatorics, finite fields[7] and finite field models[8][9] are used extensively, for example in Szemerédi's theorem on arithmetic progressions.

A division ring is a generalization of a field; division rings are not assumed to be commutative. There are no non-commutative finite division rings: Wedderburn's little theorem states that all finite division rings are commutative and hence are finite fields. This result holds even if the associativity axiom is relaxed to alternativity; that is, all finite alternative division rings are finite fields, by the Artin–Zorn theorem.[10]

A finite field F is not algebraically closed: the polynomial

f(T) = 1 + ∏_{α ∈ F} (T − α)

has no roots in F, since f(α) = 1 for all α in F.

Given a prime number p, let F̄_p be an algebraic closure of F_p. It is not only unique up to an isomorphism, as are all algebraic closures, but, contrary to the general case, all its subfields are mapped onto themselves by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.
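The root-free polynomial witnessing that a finite field is not algebraically closed is easy to evaluate explicitly; a sketch over GF(5):

```python
# Sketch: f(T) = 1 + prod over a in F of (T - a), evaluated over GF(5);
# f is identically 1 on the field, so it has no root there.
p = 5

def f(t):
    prod = 1
    for a in range(p):
        prod = prod * (t - a) % p
    return (1 + prod) % p

# at each point of the field one factor of the product vanishes
assert all(f(t) == 1 for t in range(p))
```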
This property results mainly from the fact that the elements of F_(p^n) are exactly the roots of x^(p^n) − x, which defines an inclusion F_(p^n) ⊂ F_(p^(nm)) for m > 1. These inclusions allow writing, informally, F̄_p = ⋃_{n ≥ 1} F_(p^n). The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is F̄_p, which may thus be considered as a "directed union".

Given a primitive element g of F_(q^(mn)), then g^((q^(mn) − 1)/(q^n − 1)) is a primitive element of F_(q^n). For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element g_n of F_(q^n) in such a way that, whenever n = mh, one has g_m = g_n^h, where g_m is the primitive element already chosen for F_(q^m). Such a construction may be obtained by Conway polynomials.

Although finite fields are not algebraically closed, they are quasi-algebraically closed, which means that every homogeneous polynomial over a finite field has a non-trivial zero whose components are in the field, provided the number of its variables exceeds its degree. This was a conjecture of Artin and Dickson proved by Chevalley (see Chevalley–Warning theorem).
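An instance of the quasi-algebraic closedness (Chevalley–Warning) statement can be checked by brute force: over GF(7), the homogeneous polynomial x^2 + y^2 + z^2 has 3 variables and degree 2, so a non-trivial zero must exist.

```python
# Sketch: Chevalley-Warning in action over GF(7) for x^2 + y^2 + z^2.
p = 7
zeros = [(x, y, z)
         for x in range(p) for y in range(p) for z in range(p)
         if (x * x + y * y + z * z) % p == 0 and (x, y, z) != (0, 0, 0)]
assert zeros                      # a non-trivial zero exists, e.g. (1, 3, 2)
```

Indeed 1 + 9 + 4 = 14 ≡ 0 (mod 7).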
https://en.wikipedia.org/wiki/Finite_field#Frobenius_automorphism_and_Galois_theory
In algebraic geometry, a universal homeomorphism is a morphism of schemes f : X → Y such that, for each morphism Y′ → Y, the base change X ×_Y Y′ → Y′ is a homeomorphism of topological spaces. A morphism of schemes is a universal homeomorphism if and only if it is integral, radicial and surjective.[1] In particular, a morphism locally of finite type is a universal homeomorphism if and only if it is finite, radicial and surjective. For example, an absolute Frobenius morphism is a universal homeomorphism.
https://en.wikipedia.org/wiki/Universal_homeomorphism
Inmathematics, aWitt vectoris aninfinite sequenceof elements of acommutativering.Ernst Wittshowed how to put a ringstructureon the set of Witt vectors, in such a way that the ring of Witt vectorsW(Fp){\displaystyle W(\mathbb {F} _{p})}over thefinitefieldofprimeorderpisisomorphictoZp{\displaystyle \mathbb {Z} _{p}}, the ring ofp-adic integers. They have a highly non-intuitive structure[1]upon first glance because their additive and multiplicative structure depends on an infinite set of recursive formulas which do not behave like addition and multiplication formulas for standardp-adic integers. The main idea[1]behind Witt vectors is that instead of using the standardp-adic expansion a=a0+a1p+a2p2+⋯{\displaystyle a=a_{0}+a_{1}p+a_{2}p^{2}+\cdots } to represent an element inZp{\displaystyle \mathbb {Z} _{p}}, an expansion using theTeichmüller charactercan be considered instead; ω:Fp∗→Zp∗{\displaystyle \omega :\mathbb {F} _{p}^{*}\to \mathbb {Z} _{p}^{*}}, which is a group morphism sending each element in the solution set ofxp−1−1{\displaystyle x^{p-1}-1}inFp{\displaystyle \mathbb {F} _{p}}to an element in the solution set ofxp−1−1{\displaystyle x^{p-1}-1}inZp{\displaystyle \mathbb {Z} _{p}}. That is, the elements inZp{\displaystyle \mathbb {Z} _{p}}can be expanded out in terms ofroots of unityinstead of as profinite elements in∏Fp{\displaystyle \prod \mathbb {F} _{p}}. We also setω(0)=0{\displaystyle \omega (0)=0}, which defines an injective multiplicative mapω:Fp→Zp{\displaystyle \omega :\mathbb {F} _{p}\to \mathbb {Z} _{p}}sending elements ofFp{\displaystyle \mathbb {F} _{p}}to roots ofxp−x{\displaystyle x^{p}-x}inZp{\displaystyle \mathbb {Z} _{p}}. Ap-adic integer can then be expressed as an infinite sum a=ω(a0)+ω(a1)p+ω(a2)p2+⋯{\displaystyle a=\omega (a_{0})+\omega (a_{1})p+\omega (a_{2})p^{2}+\cdots }, which gives a Witt vector (a0,a1,a2,…)∈W(Fp)=(Fp)N{\displaystyle (a_{0},a_{1},a_{2},\ldots )\in W(\mathbb {F} _{p})=(\mathbb {F} _{p})^{\mathbb {N} }}. 
Then, the non-trivial additive and multiplicative structure in Witt vectors comes from using this map to giveW(Fp){\displaystyle W(\mathbb {F} _{p})}an additive and multiplicative structure such thatω{\displaystyle \omega }induces a commutative ringhomomorphism. In the 19th century,Ernst Eduard Kummerstudiedcyclicextensionsof fields as part of his work onFermat's Last Theorem. This led to the subject known asKummer theory. Letk{\displaystyle k}be a field containing a primitiven{\displaystyle n}-th root of unity. Kummer theory classifiesdegreen{\displaystyle n}cyclic field extensionsK{\displaystyle K}ofk{\displaystyle k}. Such fields are inbijectionwith ordern{\displaystyle n}cyclicgroupsΔ⊆k×/(k×)n{\displaystyle \Delta \subseteq k^{\times }/(k^{\times })^{n}}, whereΔ{\displaystyle \Delta }corresponds toK=k(Δn){\displaystyle K=k({\sqrt[{n}]{\Delta }}\,)}. But suppose thatk{\displaystyle k}hascharacteristicp{\displaystyle p}. The problem of studying degreep{\displaystyle p}extensions ofk{\displaystyle k}, or more generally degreepn{\displaystyle p^{n}}extensions, may appear superficially similar to Kummer theory. However, in this situation,k{\displaystyle k}cannot contain a primitivep{\displaystyle p}-th root of unity. Ifx{\displaystyle x}is ap{\displaystyle p}-th root of unity ink{\displaystyle k}, then it satisfiesxp=1{\displaystyle x^{p}=1}. But consider the expression(x−1)p=0{\displaystyle (x-1)^{p}=0}. By expanding usingbinomial coefficients, the operation of raising to thep{\displaystyle p}-th power, known here as theFrobenius homomorphism, introduces the factorp{\displaystyle p}to every coefficient except the first and the last, and so modulop{\displaystyle p}these equations are the same. Thereforex=1{\displaystyle x=1}. Consequently, Kummer theory is never applicable to extensions whose degree is divisible by the characteristic. 
The case where the characteristic divides the degree is today calledArtin–Schreier theorybecause the first progress was made by Artin and Schreier. Their initial motivation was theArtin–Schreier theorem, which characterizes thereal closed fieldsas those whoseabsolute Galois grouphas order two.[2]This inspired them to ask what other fields hadfiniteabsolute Galois groups. In the midst ofprovingthat no other such fields exist, they proved that degreep{\displaystyle p}extensions of a fieldk{\displaystyle k}of characteristicp{\displaystyle p}were the same assplitting fieldsofArtin–Schreier polynomials. These are by definition of the formxp−x−a.{\displaystyle x^{p}-x-a.}By repeating their construction, they described degreep2{\displaystyle p^{2}}extensions.Abraham Adrian Albertused this idea to describe degreepn{\displaystyle p^{n}}extensions. Each repetition entailed complicated algebraic conditions to ensure that the field extension wasnormal.[3] Schmid[4]generalized further to non-commutative cyclic algebras of degreepn{\displaystyle p^{n}}. In the process of doing so, certainpolynomialsrelated to the addition ofp{\displaystyle p}-adic integers appeared. Witt seized on these polynomials. By using them systematically, he was able to give simple and unified constructions of degreepn{\displaystyle p^{n}}field extensions and cyclic algebras. Specifically, he introduced a ring now calledWn(k){\displaystyle W_{n}(k)}, thering ofn{\displaystyle n}-truncatedp{\displaystyle p}-typical Witt vectors. This ring hask{\displaystyle k}as aquotient, and it comes with an operatorF{\displaystyle F}which is called the Frobenius operator since it reduces to the Frobenius operator onk{\displaystyle k}. Witt observed that the degreepn{\displaystyle p^{n}}analog of Artin–Schreier polynomials is wherea∈Wn(k){\displaystyle a\in W_{n}(k)}. 
To complete the analogy with Kummer theory, define℘{\displaystyle \wp }to be the operatorx↦F(x)−x.{\displaystyle x\mapsto F(x)-x.}Then the degreepn{\displaystyle p^{n}}extensions ofk{\displaystyle k}are in bijective correspondence with cyclicsubgroupsΔ⊆Wn(k)/℘(Wn(k)){\displaystyle \Delta \subseteq W_{n}(k)/\wp (W_{n}(k))}of orderpn{\displaystyle p^{n}}, whereΔ{\displaystyle \Delta }corresponds to the fieldk(℘−1(Δ)){\displaystyle k(\wp ^{-1}(\Delta ))}. Anyp{\displaystyle p}-adic integer (an element ofZp{\displaystyle \mathbb {Z} _{p}}, not to be confused withZ/pZ=Fp{\displaystyle \mathbb {Z} /p\mathbb {Z} =\mathbb {F} _{p}}) can be written as apower seriesa0+a1p1+a2p2+⋯{\displaystyle a_{0}+a_{1}p^{1}+a_{2}p^{2}+\cdots }, where theai{\displaystyle a_{i}}are usually taken from theintegerinterval[0,p−1]={0,1,2,…,p−1}{\displaystyle [0,p-1]=\{0,1,2,\ldots ,p-1\}}. It can be difficult to provide an algebraic expression for addition and multiplication using this representation, as one faces the problem of carrying between digits. However, taking representative coefficientsai∈[0,p−1]{\displaystyle a_{i}\in [0,p-1]}is only one of many choices, andHenselhimself (the creator ofp{\displaystyle p}-adic numbers) suggested the roots of unity in the field as representatives. These representatives are therefore the number0{\displaystyle 0}together with the(p−1)st{\displaystyle (p-1)^{\text{st}}}roots of unity; that is, the solutions ofxp−x=0{\displaystyle x^{p}-x=0}inZp{\displaystyle \mathbb {Z} _{p}}, so thatai=aip{\displaystyle a_{i}=a_{i}^{p}}. This choice extends naturally to ring extensions ofZp{\displaystyle \mathbb {Z} _{p}}in which the residue field is enlarged toFq{\displaystyle \mathbb {F} _{q}}withq=pf{\displaystyle q=p^{f}}, some power ofp{\displaystyle p}. Indeed, it is these fields (thefields of fractionsof the rings) that motivated Hensel's choice. Now the representatives are theq{\displaystyle q}solutions in the field toxq−x=0{\displaystyle x^{q}-x=0}. 
Call the fieldZp(η){\displaystyle \mathbb {Z} _{p}(\eta )}, withη{\displaystyle \eta }an appropriate primitive(q−1)th{\displaystyle (q-1)^{\text{th}}}root of unity (overZp{\displaystyle \mathbb {Z} _{p}}). The representatives are then0{\displaystyle 0}andηi{\displaystyle \eta ^{i}}for0≤i≤q−2{\displaystyle 0\leq i\leq q-2}. Since these representatives form a multiplicative set they can be thought of as characters. Some thirty years after Hensel's works,Teichmüllerstudied these characters, which now bear his name, and this led him to a characterisation of the structure of the whole field in terms of the residue field. TheseTeichmüller representativescan be identified with the elements of the finite fieldFq{\displaystyle \mathbb {F} _{q}}of orderq{\displaystyle q}by taking residues modulop{\displaystyle p}inZp(η){\displaystyle \mathbb {Z} _{p}(\eta )}, and elements ofFq×{\displaystyle \mathbb {F} _{q}^{\times }}are taken to their representatives by theTeichmüller characterω:Fq×→Zp(η)×{\displaystyle \omega :\mathbb {F} _{q}^{\times }\to \mathbb {Z} _{p}(\eta )^{\times }}. This operation identifies the set of integers inZp(η){\displaystyle \mathbb {Z} _{p}(\eta )}with infinite sequences of elements ofω(Fq×)∪{0}{\displaystyle \omega (\mathbb {F} _{q}^{\times })\cup \{0\}}. Taking those representatives, the expressions for addition and multiplication can be written in closed form. The following problem (stated for the simplest case:q=p{\displaystyle q=p}): given two infinite sequences of elements ofω(Fp×)∪{0}{\displaystyle \omega (\mathbb {F} _{p}^{\times })\cup \{0\}}, describe their sum and product asp-adic integers explicitly. This problem was solved by Witt using Witt vectors.[citation needed] The ring ofp{\displaystyle p}-adic integersZp{\displaystyle \mathbb {Z} _{p}}is derived from the finite fieldFp=Z/pZ{\displaystyle \mathbb {F} _{p}=\mathbb {Z} /p\mathbb {Z} }using a construction which naturally generalizes to the Witt vector construction. 
The ringZp{\displaystyle \mathbb {Z} _{p}}ofp-adic integers can be understood as theinverse limitof the ringsZ/piZ{\displaystyle \mathbb {Z} /p^{i}\mathbb {Z} }taken along the projections. Specifically, it consists of the sequences(n0,n1,…){\displaystyle (n_{0},n_{1},\ldots )}withni∈Z/pi+1Z{\displaystyle n_{i}\in \mathbb {Z} /p^{i+1}\mathbb {Z} }, such thatnj≡nimodpi+1{\displaystyle n_{j}\equiv n_{i}{\bmod {p}}^{i+1}}forj≥i{\displaystyle j\geq i}. That is, each successive element of the sequence is equal to the previous elements modulo a lower power ofp; this is the inverse limit of the projectionsZ/pi+1Z→Z/piZ{\displaystyle \mathbb {Z} /p^{i+1}\mathbb {Z} \to \mathbb {Z} /p^{i}\mathbb {Z} }. The elements ofZp{\displaystyle \mathbb {Z} _{p}}can be expanded as(formal) power seriesinp{\displaystyle p} where the coefficientsai{\displaystyle a_{i}}are taken from the integer interval[0,p−1]={0,1,…,p−1}{\displaystyle [0,p-1]=\{0,1,\ldots ,p-1\}}. This power series usually will not converge inR{\displaystyle \mathbb {R} }using the standard metric on thereals, but it will converge inZp{\displaystyle \mathbb {Z} _{p}}with thep-adic metric. Lettinga+b{\displaystyle a+b}be denoted byc{\displaystyle c}, the following definition can be considered for addition: and a similar definition for multiplication can be made. However, this is not a closed formula, since the new coefficients are not in the allowed set[0,p−1]{\displaystyle [0,p-1]}. There is a coefficient subset ofZp{\displaystyle \mathbb {Z} _{p}}which does yield closed formulas, theTeichmüller representatives: zero together with the(p−1)th{\displaystyle (p-1)^{\text{th}}}roots of unity. They can be explicitly calculated (in terms of the original coefficient representatives[0,p−1]{\displaystyle [0,p-1]}) as roots ofxp−1−1=0{\displaystyle x^{p-1}-1=0}throughHensel lifting, thep-adic version ofNewton's method. 
For example, in Z_5, to calculate the representative of 2, one starts by finding the unique solution of x^4 − 1 = 0 in Z/25Z with x ≡ 2 mod 5; one gets 7. Repeating this in Z/125Z, with the conditions x^4 − 1 = 0 and x ≡ 7 mod 25, gives 57, and so on; the resulting Teichmüller representative of 2, denoted ω(2), is the sequence

ω(2) = (2, 7, 57, …) ∈ W(F_5).

The existence of a lift in each step is guaranteed by the greatest common divisor (x^(p−1) − 1, (p − 1)x^(p−2)) = 1 in every Z/p^nZ. This algorithm shows that for every j ∈ [0, p − 1], there is exactly one Teichmüller representative with a_0 = j, which is denoted ω(j). This defines the Teichmüller character ω : F_p^* → Z_p^* as a (multiplicative) group homomorphism, which moreover satisfies m ∘ ω = id_(F_p) if m : Z_p → Z_p/pZ_p ≅ F_p denotes the canonical projection. Note, however, that ω is not additive, as the sum need not be a representative. Despite this, if ω(k) ≡ ω(i) + ω(j) mod p in Z_p, then i + j = k in F_p.

Because of this one-to-one correspondence given by ω, one can expand every p-adic integer as a power series in p with coefficients taken from the Teichmüller representatives.
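The successive lifts 2, 7, 57, … can be computed without an explicit Newton step, using the standard limit formula ω(a) ≡ a^(p^k) (mod p^(k+1)):

```python
# Sketch: Teichmueller representative of 2 in Z_5, digit by digit,
# via omega(a) = lim a^(p^k) mod p^(k+1).
p, a = 5, 2

def teichmuller(a, k):
    # omega(a) modulo p^(k+1)
    return pow(a, p ** k, p ** (k + 1))

lifts = [teichmuller(a, k) for k in range(3)]
assert lifts == [2, 7, 57]

# each lift solves x^(p-1) = 1 modulo the corresponding power of p
for k, x in enumerate(lifts):
    assert pow(x, p - 1, p ** (k + 1)) == 1
```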
An explicit algorithm can be given, as follows. Write the Teichmüller representative asω(t0)=t0+t1p1+t2p2+⋯{\displaystyle \omega (t_{0})=t_{0}+t_{1}p^{1}+t_{2}p^{2}+\cdots }. Then, if one has some arbitraryp-adic integer of the formx=x0+x1p1+x2p2+⋯{\displaystyle x=x_{0}+x_{1}p^{1}+x_{2}p^{2}+\cdots }, one takes the differencex−ω(x0)=x1′p1+x2′p2+⋯{\displaystyle x-\omega (x_{0})=x'_{1}p^{1}+x'_{2}p^{2}+\cdots }, leaving a value divisible byp{\displaystyle p}. Hence,x−ω(x0)=0modp{\displaystyle x-\omega (x_{0})=0{\bmod {p}}}. The process is then repeated, subtractingω(x1′)p{\displaystyle \omega (x'_{1})p}and proceed likewise. This yields a sequence of congruences so that andi′>i{\displaystyle i'>i}implies for This obtains a power series for each residue ofx{\displaystyle x}modulo powers ofp{\displaystyle p}, but with coefficients in the Teichmüller representatives rather than{0,…,p−1}{\displaystyle \{0,\ldots ,p-1\}}. since for alli{\displaystyle i}asi→∞{\displaystyle i\to \infty }, so the difference tends to 0 with respect to thep-adic metric. The resulting coefficients will typically differ from theai{\displaystyle a_{i}}modulopi{\displaystyle p^{i}}except the first one. The Teichmüller coefficients have the key additional property thatω(x¯i)p=ω(x¯i),{\displaystyle \omega ({\bar {x}}_{i})^{p}=\omega ({\bar {x}}_{i}),}which is missing for the numbers in[0,p−1]{\displaystyle [0,p-1]}. This can be used to describe addition, as follows. Consider the equationc=a+b{\textstyle c=a+b}inZp{\textstyle \mathbb {Z} _{p}}and let the coefficientsai,bi,ci∈Zp{\textstyle a_{i},b_{i},c_{i}\in \mathbb {Z} _{p}}now be as in the Teichmüller expansion. Since the Teichmüller character is not additive,c0=a0+b0{\displaystyle c_{0}=a_{0}+b_{0}}is not true inZp{\displaystyle \mathbb {Z} _{p}}, but it holds inFp{\displaystyle \mathbb {F} _{p}}, as the first congruence implies. 
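The expansion algorithm above (subtract ω(x_0), divide by p, repeat) can be sketched for ordinary integers truncated modulo a power of p; the Teichmüller lifts are computed mod 5^4 here, which suffices for four digits.

```python
# Sketch: Teichmueller-digit expansion of a 5-adic integer, truncated mod 5^4.
p, prec = 5, 4
mod = p ** prec

def omega(j):
    # Teichmueller representative of j, modulo p^prec
    return pow(j, p ** (prec - 1), mod)

def teich_digits(x):
    digits = []
    for _ in range(prec):
        d = x % p
        digits.append(d)
        x = (x - omega(d)) // p   # exact: omega(d) is congruent to d mod p
    return digits

x = 123
d = teich_digits(x)
# reconstruct: x = omega(d0) + omega(d1) p + omega(d2) p^2 + ... mod p^4
recon = sum(omega(di) * p ** i for i, di in enumerate(d)) % mod
assert recon == x % mod
```

Note that the digits differ from the usual base-5 digits of 123 after the first one, exactly as the text describes.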
In particular, and thus Since the binomial coefficient(pi){\displaystyle {\binom {p}{i}}}is divisible byp{\displaystyle p}, this gives This completely determinesc1{\displaystyle c_{1}}by the lift. Moreover, the congruence modulop{\displaystyle p}indicates that the calculation can actually be done inFp,{\displaystyle \mathbb {F} _{p},}satisfying the basic aim of defining a simple additive structure. Forc2{\displaystyle c_{2}}this step can be cumbersome. Write Just as forc0{\displaystyle c_{0}}, a singlep{\displaystyle p}th power is not enough: one must take However,(p2i){\displaystyle {\binom {p^{2}}{i}}}is not in general divisible byp2{\displaystyle p^{2}}, but it is divisible wheni=pd{\displaystyle i=pd}, in which caseaibp2−i=adbp−d{\displaystyle a^{i}b^{p^{2}-i}=a^{d}b^{p-d}}combined with similar monomials inc1p{\displaystyle c_{1}^{p}}will make a multiple ofp2{\displaystyle p^{2}}. At this step, one works with addition of the form This motivates the definition of Witt vectors. Fix a prime numberp. AWitt vector[5]over a commutative ringR{\displaystyle R}(relative to the primep{\displaystyle p}) is a sequence(X0,X1,X2,…){\displaystyle (X_{0},X_{1},X_{2},\ldots )}of elements ofR{\displaystyle R}. TheWitt polynomialsWi{\displaystyle W_{i}}can be defined by and in general TheWn{\displaystyle W_{n}}are called theghost componentsof the Witt vector(X0,X1,X2,…){\displaystyle (X_{0},X_{1},X_{2},\ldots )}, and are usually denoted byX(n){\displaystyle X^{(n)}}; taken together, theWn{\displaystyle W_{n}}define theghost mapto∏i=0∞R{\textstyle \prod _{i=0}^{\infty }R}. IfR{\textstyle R}isp-torsionfree, then the ghost map isinjectiveand the ghost components can be thought of as an alternative coordinate system for theR{\displaystyle R}-moduleof sequences (though note that the ghost map is notsurjectiveunlessR{\textstyle R}isp-divisible). Thering of(p-typical)Witt vectorsW(R){\displaystyle W(R)}is defined by componentwise addition and multiplication of the ghost components. 
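The ghost components defined above, W_n(X_0, …, X_n) = Σ_{i=0}^{n} p^i X_i^(p^(n−i)), are a one-liner to evaluate; a sketch for p = 5:

```python
# Sketch: the p-typical Witt (ghost) polynomials W_n for p = 5.
p = 5

def ghost(X, n):
    # W_n(X_0, ..., X_n) = sum over i of p^i * X_i^(p^(n-i))
    return sum(p ** i * X[i] ** (p ** (n - i)) for i in range(n + 1))

X = (2, 3, 1)
assert ghost(X, 0) == 2                  # W_0 = X_0
assert ghost(X, 1) == 2 ** 5 + 5 * 3     # W_1 = X_0^p + p * X_1
```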
That is, there is a unique way to make the set of Witt vectors over any commutative ring $R$ into a ring such that addition and multiplication are given by universal polynomials with integer coefficients, compatibly with the ghost components. The first few polynomials giving the sum and product of Witt vectors can be written down explicitly. For example, $(X_0,X_1,\ldots)+(Y_0,Y_1,\ldots)=\bigl(X_0+Y_0,\ X_1+Y_1+\tfrac{X_0^p+Y_0^p-(X_0+Y_0)^p}{p},\ \ldots\bigr)$ and $(X_0,X_1,\ldots)\cdot(Y_0,Y_1,\ldots)=\bigl(X_0Y_0,\ X_0^pY_1+X_1Y_0^p+pX_1Y_1,\ \ldots\bigr)$. These are to be understood as shortcuts for the actual formulas: if, for example, the ring $R$ has characteristic $p$, the division by $p$ in the first formula above, the one by $p^2$ that would appear in the next component, and so forth, do not make sense. However, if the $p$-th power of the sum is expanded, the terms $X_0^p+Y_0^p$ are cancelled with the previous ones and the remaining ones are simplified by $p$; no division by $p$ remains and the formula makes sense. The same consideration applies to the ensuing components. As would be expected, the identity element in the ring of Witt vectors $W(A)$ is the element $\underline{1}=(1,0,0,\ldots)$. Adding this element to itself gives a non-trivial sequence; for example, in $W(\mathbb{F}_5)$, $\underline{1}+\underline{1}=(2,4,\ldots)$, since $2=1+1$ and $4=-\tfrac{2^5-1-1}{5}\bmod 5$. This is not the expected behavior, since it does not equal $\underline{2}$. But under the reduction map $m:W(\mathbb{F}_5)\to\mathbb{F}_5$, one does get $m(\omega(1)+\omega(1))=m(\omega(2))$. Note that if there is an element $x\in A$ and an element $a\in W(A)$, then $\underline{x}\,a=(xa_0,x^pa_1,\ldots,x^{p^n}a_n,\ldots)$, showing that multiplication also behaves in a highly non-trivial manner.
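The first two addition polynomials can be checked numerically. The sketch below (function name ours) uses the standard universal formulas $S_0 = X_0+Y_0$ and $S_1 = X_1+Y_1+\bigl(X_0^p+Y_0^p-(X_0+Y_0)^p\bigr)/p$, computing with integer lifts so that the division by $p$ is exact before reducing mod $p$; it reproduces $\underline{1}+\underline{1}=(2,4,\ldots)$ in $W(\mathbb{F}_5)$:

```python
def witt_add2(x, y, p):
    """First two components of (x0, x1) + (y0, y1) in W(F_p)."""
    x0, x1 = x
    y0, y1 = y
    s0 = (x0 + y0) % p
    # S1 = x1 + y1 + (x0^p + y0^p - (x0 + y0)^p)/p, exact over Z
    s1 = (x1 + y1 + (x0 ** p + y0 ** p - (x0 + y0) ** p) // p) % p
    return (s0, s1)
```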
Wn+1(Fp[t])≅{∑aiti/pn∈(Z/pn+1)[t1/pn]:piai=0modpn+1}=(Z/pn)[t]+p(Z/pn)[t1/p]+p2(Z/pn)[t1/p2]+⋯+pn(Z/pn)[t1/pn]{\displaystyle {\begin{array}{rcl}W_{n+1}(\mathbb {F} _{p}[t])&\cong &\left\{\sum a_{i}t^{i/p^{n}}\in (\mathbb {Z} /p^{n+1})[t^{1/p^{n}}]\ :\ pia_{i}=0\ {\textrm {mod}}\ p^{n+1}\right\}\\&=&(\mathbb {Z} /p^{n})[t]+p(\mathbb {Z} /p^{n})[t^{1/p}]+p^{2}(\mathbb {Z} /p^{n})[t^{1/p^{2}}]+\dots +p^{n}(\mathbb {Z} /p^{n})[t^{1/p^{n}}]\end{array}}} The Witt vectors are the inverse limit along the canonical projections W(Fp[t])=lim(⋯→Wn+1(Fp[t])→Wn(Fp[t])→⋯→W1(Fp[t])=Fp[t]).{\displaystyle W(\mathbb {F} _{p}[t])=\lim(\dots \to W_{n+1}(\mathbb {F} _{p}[t])\to W_{n}(\mathbb {F} _{p}[t])\to \dots \to W_{1}(\mathbb {F} _{p}[t])=\mathbb {F} _{p}[t]).} Here the transition homomorphisms are induced by reduction(Z/pn+1)[t1/pn]→(Z/pn)[t1/pn]{\displaystyle (\mathbb {Z} /p^{n+1})[t^{1/p^{n}}]\to (\mathbb {Z} /p^{n})[t^{1/p^{n}}]}. The Witt polynomials for different primesp{\displaystyle p}are special cases of universal Witt polynomials, which can be used to form a universal Witt ring (not depending on a choice of primep{\displaystyle p}). Define the universal Witt polynomialsWn{\displaystyle W_{n}}forn≥1{\displaystyle n\geq 1}by and in general Again,(W1,W2,W3,…){\displaystyle (W_{1},W_{2},W_{3},\ldots )}is called the vector ofghost componentsof the Witt vector(X1,X2,X3,…){\displaystyle (X_{1},X_{2},X_{3},\ldots )}, and is usually denoted by(X(1),X(2),X(3),…){\displaystyle (X^{(1)},X^{(2)},X^{(3)},\ldots )}. These polynomials can be used to define thering of universal Witt vectorsorbig Witt ringof any commutative ringR{\displaystyle R}in much the same way as above (so the universal Witt polynomials are all homomorphisms to the ringR{\displaystyle R}). 
Witt also provided another approach usinggenerating functions.[7] LetX{\displaystyle X}be a Witt vector and define Forn≥1{\displaystyle n\geq 1}letIn{\displaystyle {\mathcal {I}}_{n}}denote the collection of subsets of{1,2,…,n}{\displaystyle \{1,2,\ldots ,n\}}whose elements add up ton{\displaystyle n}. Then One can get the ghost components by taking thelogarithmic derivative: Now one can seefZ(t)=fX(t)fY(t){\displaystyle f_{Z}(t)=f_{X}(t)f_{Y}(t)}ifZ=X+Y{\displaystyle Z=X+Y}. So that ifAn,Bn,Cn{\displaystyle A_{n},B_{n},C_{n}}are the respective coefficients in the power seriesfX(t),fY(t),fZ(t){\displaystyle f_{X}(t),f_{Y}(t),f_{Z}(t)}. Then SinceAn{\displaystyle A_{n}}is a polynomial inX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}and likewise forBn{\displaystyle B_{n}}, one can show byinductionthatZn{\displaystyle Z_{n}}is a polynomial inX1,…,Xn,Y1,…,Yn.{\displaystyle X_{1},\ldots ,X_{n},Y_{1},\ldots ,Y_{n}.} IfW=XY{\displaystyle W=XY}is set, then but Now 3-tuplesm,d,e{\displaystyle {m,d,e}}withm∈Z+,d|m,e|m{\displaystyle m\in \mathbb {Z} ^{+},d\,|\,m,e\,|\,m}are in bijection with 3-tuplesd,e,n{\displaystyle {d,e,n}}withd,e,n∈Z+{\displaystyle d,e,n\in \mathbb {Z} ^{+}}, vian=m/[d,e]{\displaystyle n=m/[d,e]}([d,e]{\displaystyle [d,e]}is theleast common multiple), the series becomes so that whereDn{\displaystyle D_{n}}are polynomials ofX1,…,Xn,Y1,…,Yn.{\displaystyle X_{1},\ldots ,X_{n},Y_{1},\ldots ,Y_{n}.}So by similar induction, thenWn{\displaystyle W_{n}}can be solved as polynomials ofX1,…,Xn,Y1,…,Yn{\displaystyle X_{1},\ldots ,X_{n},Y_{1},\ldots ,Y_{n}}. 
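Witt's generating-function description can be tested numerically: the sum $Z = X + Y$ of big Witt vectors is determined by additivity of the ghost components $w_n = \sum_{d\mid n} d\,X_d^{n/d}$, equivalently by $f_Z(t) = f_X(t)f_Y(t)$. A small sketch (helper names ours), solving for $Z_n$ recursively from the ghost components and then comparing truncated power series:

```python
def ghost(X):
    """Universal ghost components w_n = sum_{d|n} d * X_d^(n/d), n = 1..len(X)."""
    N = len(X)
    return [sum(d * X[d - 1] ** (n // d) for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)]

def witt_sum(X, Y):
    """Big-Witt-vector sum, solved recursively from ghost additivity."""
    wx, wy = ghost(X), ghost(Y)
    Z = []
    for n in range(1, len(X) + 1):
        s = wx[n - 1] + wy[n - 1] - sum(d * Z[d - 1] ** (n // d)
                                        for d in range(1, n) if n % d == 0)
        Z.append(s // n)  # exact: the sum polynomials have integer coefficients
    return Z

def f_series(X, N):
    """Coefficients of f_X(t) = prod_n (1 - X_n t^n), truncated mod t^(N+1)."""
    coef = [1] + [0] * N
    for n, x in enumerate(X, start=1):
        for i in range(N - n, -1, -1):  # multiply in place by (1 - x t^n)
            coef[i + n] -= x * coef[i]
    return coef

def mul_trunc(a, b, N):
    """Product of two coefficient lists, truncated mod t^(N+1)."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c
```

For $X = (1,2,3)$ and $Y = (4,5,6)$ the sum works out to $Z = (5, 3, -11)$, and the product identity $f_Z = f_X f_Y$ holds through degree 3.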
The map taking a commutative ringR{\displaystyle R}to the ring of Witt vectors overR{\displaystyle R}(for a fixed primep{\displaystyle p}) is afunctorfrom commutative rings to commutative rings, and is alsorepresentable, so it can be thought of as aring scheme, called theWitt scheme, overSpec⁡(Z).{\displaystyle \operatorname {Spec} (\mathbb {Z} ).}The Witt scheme can be canonically identified with the spectrum of thering of symmetric functions. Similarly, the rings of truncated Witt vectors, and the rings of universal Witt vectors correspond to ring schemes, called thetruncated Witt schemesand theuniversal Witt scheme. Moreover, the functor taking the commutative ringR{\displaystyle R}to the setRn{\displaystyle R^{n}}is represented by theaffine spaceAZn{\displaystyle \mathbb {A} _{\mathbb {Z} }^{n}}, and the ring structure onRn{\displaystyle R^{n}}makesAZn{\displaystyle \mathbb {A} _{\mathbb {Z} }^{n}}into a ring scheme denotedO_n{\displaystyle {\underline {\mathcal {O}}}^{n}}. From the construction of truncated Witt vectors, it follows that their associated ring schemeWn{\displaystyle \mathbb {W} _{n}}is the schemeAZn{\displaystyle \mathbb {A} _{\mathbb {Z} }^{n}}with the unique ring structure such that the morphismWn→O_n{\displaystyle \mathbb {W} _{n}\to {\underline {\mathcal {O}}}^{n}}given by the Witt polynomials is a morphism of ring schemes. Over analgebraically closed fieldof characteristic 0, anyunipotentabelian connectedalgebraic groupis isomorphic to a product of copies of the additive groupGa{\displaystyle G_{a}}. The analogue of this for fields of characteristicp{\displaystyle p}is false: the truncated Witt schemes arecounterexamples. (They are made into algebraic groups by using the additive structure instead of multiplication.) 
However, these are essentially the only counterexamples: over an algebraically closed field of characteristicp{\displaystyle p}, any unipotent abelian connected algebraic group isisogenousto a product of truncated Witt group schemes. André Joyalexplicated theuniversal propertyof the (p-typical) Witt vectors.[8]The basic intuition is that the formation of Witt vectors is the universal way to deform a characteristicpring to characteristic 0 together with a lift of its Frobenius endomorphism.[9]To make this precise, define aδ{\displaystyle \delta }-ring(R,δ){\textstyle (R,\delta )}to consist of a commutative ringR{\textstyle R}together with a map of setsδ:R→R{\textstyle \delta :R\to R}that is ap-derivation, so thatδ{\textstyle \delta }satisfies the relations The definition is such that given aδ{\displaystyle \delta }-ring(R,δ){\textstyle (R,\delta )}, if one defines the mapϕ:R→R{\textstyle \phi :R\to R}by the formulaϕ(x)=xp+pδ(x){\textstyle \phi (x)=x^{p}+p\delta (x)}, thenϕ{\textstyle \phi }is a ring homomorphism lifting Frobenius onR/p{\displaystyle R/p}. Conversely, ifR{\textstyle R}isp-torsionfree, then this formula uniquely defines the structure of aδ{\displaystyle \delta }-ring onR{\textstyle R}from that of a Frobenius lift. One may thus regard the notion ofδ{\displaystyle \delta }-ring as a suitable replacement for a Frobenius lift in the non-p-torsionfree case. The collection ofδ{\displaystyle \delta }-rings and ring homomorphisms thereof respecting theδ{\displaystyle \delta }-structure assembles to acategoryCRingδ{\textstyle \mathrm {CRing} _{\delta }}. One then has aforgetful functorU:CRingδ→CRing{\displaystyle U:\mathrm {CRing} _{\delta }\to \mathrm {CRing} }whoseright adjointidentifies with the functorW{\textstyle W}of Witt vectors. 
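The $p$-derivation relations (elided in the displayed formulas above, so the standard identities are stated explicitly in the assertions below) can be checked for the simplest example, $R = \mathbb{Z}$ with Frobenius lift $\phi = \mathrm{id}$, which forces $\delta(x) = (x - x^p)/p$; this is exact by Fermat's little theorem:

```python
def delta(x, p):
    """The p-derivation on Z attached to the Frobenius lift phi = id:
    delta(x) = (phi(x) - x^p)/p = (x - x^p)/p, exact by Fermat's little theorem."""
    q, r = divmod(x - x ** p, p)
    assert r == 0
    return q
```

The assertions below check the multiplicativity and additivity identities of a $\delta$-ring, and that $\phi(x) = x^p + p\,\delta(x)$ recovers the identity map.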
The functorU{\textstyle U}createslimits and colimitsand admits an explicitly describable left adjoint as a type offree functor; from this, it can be shown thatCRingδ{\textstyle \mathrm {CRing} _{\delta }}inheritslocal presentabilityfromCRing{\displaystyle \mathrm {CRing} }so that one can construct the functorW{\textstyle W}by appealing to theadjoint functor theorem. One further has thatW{\textstyle W}restricts to afully faithful functoron thefull subcategoryofperfect ringsof characteristicp. Its image then consists of thoseδ{\displaystyle \delta }-rings that are perfect (in the sense that the associated mapϕ{\textstyle \phi }is an isomorphism) and whose underlying ring isp-adically complete.[10]
https://en.wikipedia.org/wiki/Witt_vector
In number theory, the Lagarias arithmetic derivative or number derivative is a function defined for integers, based on prime factorization, by analogy with the product rule for the derivative of a function that is used in mathematical analysis. There are many versions of "arithmetic derivatives", including the one discussed in this article (the Lagarias arithmetic derivative), such as Ihara's arithmetic derivative and Buium's arithmetic derivatives. The arithmetic derivative was introduced by the Spanish mathematician José Mingot Shelly in 1911.[1][2] The arithmetic derivative also appeared in the 1950 Putnam Competition.[3] For natural numbers $n$, the arithmetic derivative $D(n)$[note 1] is defined as follows: $D(p)=1$ for any prime $p$, and $D(mn)=D(m)n+mD(n)$ for any natural numbers $m$ and $n$ (the Leibniz rule). Edward J. Barbeau extended the domain to all integers by showing that the choice $D(-n)=-D(n)$ uniquely extends the domain to the integers and is consistent with the product formula. Barbeau also further extended it to the rational numbers, showing that the familiar quotient rule gives a well-defined derivative on $\mathbb{Q}$: $D\!\left(\tfrac{m}{n}\right)=\tfrac{D(m)\,n-m\,D(n)}{n^{2}}$. Victor Ufnarovski and Bo Åhlander expanded it to the irrationals that can be written as the product of primes raised to arbitrary rational powers, allowing expressions like $D(\sqrt{3})$ to be computed.[6] The arithmetic derivative can also be extended to any unique factorization domain (UFD),[6] such as the Gaussian integers and the Eisenstein integers, and its associated field of fractions. If the UFD is a polynomial ring, then the arithmetic derivative is the same as the derivation over said polynomial ring. For example, the regular derivative is the arithmetic derivative for the rings of univariate real and complex polynomial and rational functions, which can be proven using the fundamental theorem of algebra. The arithmetic derivative has also been extended to the ring of integers modulo $n$.[7] The Leibniz rule implies that $D(0)=0$ (take $m=n=0$) and $D(1)=0$ (take $m=n=1$). The power rule is also valid for the arithmetic derivative.
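With the usual normalization $D(p)=1$ for every prime (used throughout the article), the derivative follows from the factorization of $n$ via the logarithmic-derivative formula $D(n) = n\sum_p \nu_p(n)/p$ stated below. A minimal sketch (function name ours), using exact rational arithmetic for the intermediate sum:

```python
from fractions import Fraction

def arithmetic_derivative(n):
    """Lagarias arithmetic derivative, via D(n) = n * sum_p nu_p(n)/p,
    extended to negative integers by D(-n) = -D(n)."""
    if n < 0:
        return -arithmetic_derivative(-n)
    if n < 2:
        return 0                      # D(0) = D(1) = 0
    ld, m, p = Fraction(0), n, 2
    while p * p <= m:                 # trial division
        while m % p == 0:
            ld += Fraction(1, p)
            m //= p
        p += 1
    if m > 1:
        ld += Fraction(1, m)          # leftover prime factor
    return int(n * ld)                # always an integer for integer n
```

The first values reproduce OEIS A003415, and the Leibniz rule can be verified directly.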
For any integerskandn≥ 0: This allows one to compute the derivative from the prime factorization of an integer,x=∏p∈Ppnp{\textstyle x=\prod \limits _{p\in \mathbb {P} }p^{n_{p}}}(in whichnp=νp(x){\textstyle n_{p}=\nu _{p}(x)}is thep-adic valuationofx) : This shows that if one knows the derivative for all prime numbers, then the derivative is fully known. In fact, the family of arithmetic partial derivative∂∂p{\textstyle {\frac {\partial }{\partial p}}}relative to the prime numberp{\textstyle p}, defined by∂∂p(q)=0{\textstyle {\frac {\partial }{\partial p}}(q)=0}for all primesq{\textstyle q}, except forq=p{\textstyle q=p}for which∂∂p(p)=1{\textstyle {\frac {\partial }{\partial p}}(p)=1}is a basis of the space of derivatives. Note that, for this derivative, we have∂x∂p=npxp{\displaystyle {\frac {\partial x}{\partial p}}=n_{p}{\frac {x}{p}}}. Usually, one takes the derivative such thatD(p)=1{\textstyle D(p)=1}for all primesp, so that With this derivative, we have for example: or And thesequenceof number derivatives forx= 0, 1, 2, …begins (sequenceA003415in theOEIS): Thelogarithmic derivativeld⁡(x)=D(x)x=∑p∈Pp∣xνp(x)p{\displaystyle \operatorname {ld} (x)={\frac {D(x)}{x}}=\sum _{\stackrel {p\,\mid \,x}{p\in \mathbb {P} }}{\frac {\nu _{p}(x)}{p}}}is atotally additive function:ld⁡(x⋅y)=ld⁡(x)+ld⁡(y).{\displaystyle \operatorname {ld} (x\cdot y)=\operatorname {ld} (x)+\operatorname {ld} (y).} Letp{\displaystyle p}be a prime. Thearithmetic partial derivativeofx{\displaystyle x}with respect top{\displaystyle p}is defined asDp(x)=νp(x)px.{\displaystyle D_{p}(x)={\frac {\nu _{p}(x)}{p}}x.}So, the arithmetic derivative ofx{\displaystyle x}is given asD(x)=∑p∈Pp∣xDp(x).{\displaystyle D(x)=\sum _{\stackrel {p\,\mid \,x}{p\in \mathbb {P} }}D_{p}(x).} LetS{\displaystyle S}be a nonempty set of primes. 
Thearithmetic subderivativeofx{\displaystyle x}with respect toS{\displaystyle S}is defined asDS(x)=∑p∈Sp∣xDp(x).{\displaystyle D_{S}(x)=\sum _{\stackrel {p\,\mid \,x}{p\in S}}D_{p}(x).}IfS{\displaystyle S}is the set of all primes, thenDS(x)=D(x),{\displaystyle D_{S}(x)=D(x),}the usual arithmetic derivative. IfS={p}{\displaystyle S=\{p\}}, thenDS(x)=Dp(x),{\displaystyle D_{S}(x)=D_{p}(x),}the arithmetic partial derivative. An arithmetic functionf{\displaystyle f}isLeibniz-additiveif there is atotally multiplicative functionhf{\displaystyle h_{f}}such thatf(mn)=f(m)hf(n)+f(n)hf(m){\displaystyle f(mn)=f(m)h_{f}(n)+f(n)h_{f}(m)}for all positive integersm{\displaystyle m}andn{\displaystyle n}. A motivation for this concept is the fact that Leibniz-additive functions are generalizations of the arithmetic derivativeD{\displaystyle D}; namely,D{\displaystyle D}is Leibniz-additive withhD(n)=n{\displaystyle h_{D}(n)=n}. The functionδ{\displaystyle \delta }given in Section 3.5 of the book by Sandor and Atanassov is, in fact, exactly the same as the usual arithmetic derivativeD{\displaystyle D}. E. J. Barbeau examined bounds on the arithmetic derivative[8]and found that and whereΩ(n), aprime omega function, is the number of prime factors inn. In both bounds above, equality always occurs whennis apower of 2. Dahl, Olsson and Loiko found the arithmetic derivative of natural numbers is bounded by[9] wherepis the least prime innand equality holds whennis a power ofp. Alexander Loiko,Jonas OlssonandNiklas Dahlfound that it is impossible to find similar bounds for the arithmetic derivative extended to rational numbers by proving that between any two rational numbers there are other rationals with arbitrary large or small derivatives (note that this means that the arithmetic derivative is not acontinuous functionfromQ{\displaystyle \mathbb {Q} }toQ{\displaystyle \mathbb {Q} }). 
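The partial and subderivatives can be sketched directly from the definitions above, $D_p(x)=\nu_p(x)\,x/p$ and $D_S(x)=\sum_{p\in S,\,p\mid x} D_p(x)$ (function names ours):

```python
def nu(p, x):
    """p-adic valuation nu_p(x) of a nonzero integer x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

def D_S(x, S):
    """Arithmetic subderivative: sum of D_p(x) = nu_p(x) * x / p over p in S dividing x."""
    return sum(nu(p, x) * x // p for p in S if x % p == 0)
```

For example, $360 = 2^3\cdot 3^2\cdot 5$ gives $D_2(360)=540$, $D_3(360)=240$, $D_5(360)=72$, and the full derivative $D(360)=852$.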
We have and for anyδ> 0, where Victor UfnarovskiandBo Åhlanderhave detailed the function's connection to famous number-theoreticconjectureslike thetwin prime conjecture, the prime triples conjecture, andGoldbach's conjecture. For example, Goldbach's conjecture would imply, for eachk> 1the existence of annso thatD(n) = 2k. The twin prime conjecture would imply that there are infinitely manykfor whichD2(k) = 1.[6]
https://en.wikipedia.org/wiki/Arithmetic_derivative
Inmathematics, aderivationis a function on analgebrathat generalizes certain features of thederivativeoperator. Specifically, given an algebraAover aringor afieldK, aK-derivation is aK-linear mapD:A→Athat satisfiesLeibniz's law: More generally, ifMis anA-bimodule, aK-linear mapD:A→Mthat satisfies the Leibniz law is also called a derivation. The collection of allK-derivations ofAto itself is denoted by DerK(A). The collection ofK-derivations ofAinto anA-moduleMis denoted byDerK(A,M). Derivations occur in many different contexts in diverse areas of mathematics. Thepartial derivativewith respect to a variable is anR-derivation on the algebra ofreal-valueddifferentiable functions onRn. TheLie derivativewith respect to avector fieldis anR-derivation on the algebra of differentiable functions on adifferentiable manifold; more generally it is a derivation on thetensor algebraof a manifold. It follows that theadjoint representation of a Lie algebrais a derivation on that algebra. ThePincherle derivativeis an example of a derivation inabstract algebra. If the algebraAis noncommutative, then thecommutatorwith respect to an element of the algebraAdefines a linearendomorphismofAto itself, which is a derivation overK. That is, where[⋅,N]{\displaystyle [\cdot ,N]}is the commutator with respect toN{\displaystyle N}. An algebraAequipped with a distinguished derivationdforms adifferential algebra, and is itself a significant object of study in areas such asdifferential Galois theory. IfAis aK-algebra, forKa ring, andD:A→Ais aK-derivation, then Given agraded algebraAand a homogeneous linear mapDof grade |D| onA,Dis ahomogeneous derivationif for every homogeneous elementaand every elementbofAfor a commutator factorε= ±1. Agraded derivationis sum of homogeneous derivations with the sameε. Ifε= 1, this definition reduces to the usual case. Ifε= −1, however, then for odd |D|, andDis called ananti-derivation. 
Examples of anti-derivations include theexterior derivativeand theinterior productacting ondifferential forms. Graded derivations ofsuperalgebras(i.e.Z2-graded algebras) are often calledsuperderivations. Hasse–Schmidt derivationsareK-algebra homomorphisms Composing further with the map that sends aformal power series∑antn{\displaystyle \sum a_{n}t^{n}}to the coefficienta1{\displaystyle a_{1}}gives a derivation.
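The commutator example above, $D(A) = [A, N] = AN - NA$, can be spot-checked on $2\times 2$ integer matrices; the Leibniz law $D(AB) = D(A)B + AD(B)$ holds identically. A minimal sketch (helper names ours):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def D(A, N):
    """Inner derivation A -> [A, N] = AN - NA."""
    AN, NA = matmul(A, N), matmul(N, A)
    return [[AN[i][j] - NA[i][j] for j in range(2)] for i in range(2)]
```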
https://en.wikipedia.org/wiki/Derivation_(abstract_algebra)
Innumber theory, afull reptend prime,full repetend prime,proper prime[1]: 166orlong primeinbasebis anoddprime numberpsuch that theFermat quotient (wherepdoes notdivideb) gives acyclic number. Therefore, the basebexpansion of1/p{\displaystyle 1/p}repeats the digits of the corresponding cyclic number infinitely, as does that ofa/p{\displaystyle a/p}with rotation of the digits for anyabetween 1 andp− 1. The cyclic number corresponding to primepwill possessp− 1 digitsif and only ifpis a full reptend prime. That is, themultiplicative orderordpb=p− 1, which is equivalent tobbeing aprimitive rootmodulop. The term "long prime" was used byJohn ConwayandRichard Guyin theirBook of Numbers. Base 10may be assumed if no base is specified, in which case the expansion of the number is called arepeating decimal. In base 10, if a full reptend prime ends in the digit 1, then each digit 0, 1, ..., 9 appears in the reptend the same number of times as each other digit.[1]: 166(For such primes in base 10, seeOEIS:A073761.) In fact, in baseb, if a full reptend prime ends in the digit 1, then each digit 0, 1, ...,b− 1 appears in the repetend the same number of times as each other digit, but no such prime exists whenb= 12, since every full reptend prime inbase 12ends in the digit 5 or 7 in the same base. Generally, no such prime exists whenbiscongruentto 0 or 1 modulo 4. The values ofpfor which this formula produces cyclic numbers in decimal are: This sequence is the set of primespsuch that 10 is aprimitive rootmodulop.Artin's conjecture on primitive rootsis that this sequence contains 37.395...% of the primes. Inbase 2, the full reptend primes are: (less than 1000) For these primes, 2 is aprimitive rootmodulop, so 2nmodulopcan be anynatural numberbetween 1 andp− 1. These sequences of periodp− 1 have an autocorrelation function that has a negative peak of −1 for shift of(p−1)/2{\displaystyle (p-1)/2}. 
The randomness of these sequences has been examined bydiehard tests.[2] Binary full reptend prime sequences (also called maximum-length decimal sequences) have foundcryptographicanderror-correction codingapplications.[3]In these applications, repeating decimals to base 2 are generally used which gives rise to binary sequences. The maximum length binary sequence for1/p{\displaystyle 1/p}(when 2 is a primitive root ofp) is given by Kak.[4]
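As stated above, a prime $p$ is full reptend in base $b$ exactly when $b$ is a primitive root modulo $p$, i.e. when the multiplicative order of $b$ mod $p$ equals $p-1$. A brute-force sketch (function names ours):

```python
def mult_order(b, p):
    """Multiplicative order of b modulo p (assumes gcd(b, p) = 1)."""
    k, x = 1, b % p
    while x != 1:
        x = x * b % p
        k += 1
    return k

def is_full_reptend(p, b=10):
    """True if the odd prime p is a full reptend prime in base b,
    i.e. b is a primitive root modulo p."""
    return p > 2 and b % p != 0 and mult_order(b, p) == p - 1
```

Filtering small primes recovers the familiar base-10 list (7, 17, 19, ...) and the base-2 list (3, 5, 11, 13, ...).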
https://en.wikipedia.org/wiki/Full_reptend_prime
Inmathematics,Midy's theorem, named afterFrenchmathematicianE. Midy,[1]is a statement about thedecimal expansionoffractionsa/pwherepis aprimeanda/phas arepeating decimalexpansion with anevenperiod (sequenceA028416in theOEIS). If the period of the decimal representation ofa/pis 2n, so that ap=0.a1a2a3…anan+1…a2n¯{\displaystyle {\frac {a}{p}}=0.{\overline {a_{1}a_{2}a_{3}\dots a_{n}a_{n+1}\dots a_{2n}}}} then the digits in the second half of the repeating decimal period are the9s complementof the corresponding digits in its first half. In other words, ai+ai+n=9{\displaystyle a_{i}+a_{i+n}=9}a1…an+an+1…a2n=10n−1.{\displaystyle a_{1}\dots a_{n}+a_{n+1}\dots a_{2n}=10^{n}-1.} For example, 113=0.076923¯and076+923=999.{\displaystyle {\frac {1}{13}}=0.{\overline {076923}}{\text{ and }}076+923=999.}117=0.0588235294117647¯and05882352+94117647=99999999.{\displaystyle {\frac {1}{17}}=0.{\overline {0588235294117647}}{\text{ and }}05882352+94117647=99999999.} Ifkis any divisor ofh(wherehis the number of digits of the period of the decimal expansion ofa/p(wherepis again a prime)), then Midy's theorem can be generalised as follows. Theextended Midy's theorem[2]states that if the repeating portion of the decimal expansion ofa/pis divided intok-digit numbers, then their sum is a multiple of 10k− 1. For example,119=0.052631578947368421¯{\displaystyle {\frac {1}{19}}=0.{\overline {052631578947368421}}}has a period of 18. Dividing the repeating portion into 6-digit numbers and summing them gives052631+578947+368421=999999.{\displaystyle 052631+578947+368421=999999.}Similarly, dividing the repeating portion into 3-digit numbers and summing them gives052+631+578+947+368+421=2997=3×999.{\displaystyle 052+631+578+947+368+421=2997=3\times 999.} Midy's theorem and its extension do not depend on special properties of the decimal expansion, but work equally well in anybaseb, provided we replace 10k− 1 withbk− 1 and carry out addition in baseb. 
For example, inoctal 119=0.032745¯80328+7458=7778038+278+458=778.{\displaystyle {\begin{aligned}&{\frac {1}{19}}=0.{\overline {032745}}_{8}\\[8pt]&032_{8}+745_{8}=777_{8}\\[8pt]&03_{8}+27_{8}+45_{8}=77_{8}.\end{aligned}}} Indozenal(using inverted two and three for ten and eleven, respectively) Short proofs of Midy's theorem can be given using results fromgroup theory. However, it is also possible to prove Midy's theorem usingelementary algebraandmodular arithmetic: Letpbe a prime anda/pbe a fraction between 0 and 1. Suppose the expansion ofa/pin basebhas a period ofℓ, so ap=[0.a1a2…aℓ¯]b⇒apbℓ=[a1a2…aℓ.a1a2…aℓ¯]b⇒apbℓ=N+[0.a1a2…aℓ¯]b=N+ap⇒ap=Nbℓ−1{\displaystyle {\begin{aligned}&{\frac {a}{p}}=[0.{\overline {a_{1}a_{2}\dots a_{\ell }}}]_{b}\\[6pt]&\Rightarrow {\frac {a}{p}}b^{\ell }=[a_{1}a_{2}\dots a_{\ell }.{\overline {a_{1}a_{2}\dots a_{\ell }}}]_{b}\\[6pt]&\Rightarrow {\frac {a}{p}}b^{\ell }=N+[0.{\overline {a_{1}a_{2}\dots a_{\ell }}}]_{b}=N+{\frac {a}{p}}\\[6pt]&\Rightarrow {\frac {a}{p}}={\frac {N}{b^{\ell }-1}}\end{aligned}}} whereNis the integer whose expansion in basebis the stringa1a2...aℓ. Note thatbℓ− 1 is a multiple ofpbecause (bℓ− 1)a/pis an integer. Alsobn−1 isnota multiple ofpfor any value ofnless thanℓ, because otherwise the repeating period ofa/pin basebwould be less thanℓ. Now suppose thatℓ=hk. Thenbℓ− 1 is a multiple ofbk− 1. (To see this, substitutexforbk; thenbℓ=xhandx− 1 is a factor ofxh− 1. ) Saybℓ− 1 =m(bk− 1), so ap=Nm(bk−1).{\displaystyle {\frac {a}{p}}={\frac {N}{m(b^{k}-1)}}.} Butbℓ− 1 is a multiple ofp;bk− 1 isnota multiple ofp(becausekis less thanℓ); andpis a prime; sommust be a multiple ofpand amp=Nbk−1{\displaystyle {\frac {am}{p}}={\frac {N}{b^{k}-1}}} is an integer. 
In other words, N≡0(modbk−1).{\displaystyle N\equiv 0{\pmod {b^{k}-1}}.} Now split the stringa1a2...aℓintohequal parts of lengthk, and let these represent the integersN0...Nh− 1in baseb, so that Nh−1=[a1…ak]bNh−2=[ak+1…a2k]b⋮N0=[al−k+1…al]b{\displaystyle {\begin{aligned}N_{h-1}&=[a_{1}\dots a_{k}]_{b}\\N_{h-2}&=[a_{k+1}\dots a_{2k}]_{b}\\&{}\ \ \vdots \\N_{0}&=[a_{l-k+1}\dots a_{l}]_{b}\end{aligned}}} To prove Midy's extended theorem in basebwe must show that the sum of thehintegersNiis a multiple ofbk− 1. Sincebkis congruent to 1 modulobk− 1, any power ofbkwill also be congruent to 1 modulobk− 1. So N=∑i=0h−1Nibik=∑i=0h−1Ni(bk)i{\displaystyle N=\sum _{i=0}^{h-1}N_{i}b^{ik}=\sum _{i=0}^{h-1}N_{i}(b^{k})^{i}}⇒N≡∑i=0h−1Ni(modbk−1){\displaystyle \Rightarrow N\equiv \sum _{i=0}^{h-1}N_{i}{\pmod {b^{k}-1}}}⇒∑i=0h−1Ni≡0(modbk−1){\displaystyle \Rightarrow \sum _{i=0}^{h-1}N_{i}\equiv 0{\pmod {b^{k}-1}}} which proves Midy's extended theorem in baseb. To prove the original Midy's theorem, take the special case whereh= 2. Note thatN0andN1are both represented by strings ofkdigits in basebso both satisfy 0≤Ni≤bk−1.{\displaystyle 0\leq N_{i}\leq b^{k}-1.} N0andN1cannot both equal 0 (otherwisea/p= 0) and cannot both equalbk− 1 (otherwisea/p= 1), so 0<N0+N1<2(bk−1){\displaystyle 0<N_{0}+N_{1}<2(b^{k}-1)} and sinceN0+N1is a multiple ofbk− 1, it follows that N0+N1=bk−1.{\displaystyle N_{0}+N_{1}=b^{k}-1.} From the above,amp{\displaystyle {\frac {am}{p}}}is an integer Thusm≡0(modp){\displaystyle m\equiv 0{\pmod {p}}} And thus fork=ℓ2{\displaystyle k={\frac {\ell }{2}}} bℓ/2+1≡0(modp){\displaystyle b^{\ell /2}+1\equiv 0{\pmod {p}}} Fork=ℓ3{\displaystyle k={\frac {\ell }{3}}}and is an integer b2ℓ/3+bℓ/3+1≡0(modp){\displaystyle b^{2\ell /3}+b^{\ell /3}+1\equiv 0{\pmod {p}}} and so on.
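The theorem and its extension can be verified numerically by long division, in any base. A sketch (function names ours):

```python
def repetend(a, p, b=10):
    """Digits (base b) of the repeating period of a/p, by long division
    (p prime, not dividing b, 0 < a < p)."""
    digits, r0 = [], a % p
    r = r0
    while True:
        r *= b
        digits.append(r // p)
        r %= p
        if r == r0:
            return digits

def chunks_sum(digits, k, b=10):
    """Sum of the k-digit blocks of the repetend, read as base-b integers."""
    total = 0
    for i in range(0, len(digits), k):
        block = 0
        for d in digits[i:i + k]:
            block = block * b + d
        total += block
    return total
```

For 1/13 the halves sum to 999 exactly (Midy), for 1/19 the 3-digit blocks sum to a multiple of 999 (extended Midy), and the octal example for 1/19 gives 777₈.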
https://en.wikipedia.org/wiki/Midy%27s_theorem
In mathematics, an n-parasitic number (in base 10) is a positive natural number which, when multiplied by n, results in movement of the last digit of its decimal representation to its front. Here n is itself a single-digit positive natural number. In other words, the decimal representation undergoes a right circular shift by one place. For example, 4 × 102564 = 410256, so 102564 is 4-parasitic. Most mathematicians do not allow leading zeros to be used, and that is a commonly followed convention. So even though 4 × 25641 = 102564, the number 25641 is not 4-parasitic. An n-parasitic number can be derived by starting with a digit k (which should be equal to n or greater) in the rightmost (units) place, and working up one digit at a time. For example, for n = 4 and k = 7 this process yields 179487, a 4-parasitic number with units digit 7. Others are 179487179487, 179487179487179487, etc. Notice that the repeating decimal expansion of 7/39 is 0.179487179487…; multiplying by 4 gives 28/39 = 0.717948717948…, the same digits shifted one place. In general, an n-parasitic number can be found as follows. Pick a one-digit integer k such that k ≥ n, and take the period of the repeating decimal k/(10n − 1). This will be (k/(10n − 1))(10^m − 1), where m is the length of the period, i.e. the multiplicative order of 10 modulo 10n − 1. For another example, if n = 2, then 10n − 1 = 19 and the repeating decimal for 1/19 is 0.052631578947368421…. That for 2/19 is double it: 0.105263157894736842…. The length m of this period is 18, the same as the order of 10 modulo 19, so 2 × (10^18 − 1)/19 = 105263157894736842. Indeed, 105263157894736842 × 2 = 210526315789473684, which is the result of moving the last digit of 105263157894736842 to the front. The step-by-step derivation algorithm described above is a sound core technique, but it will not find all n-parasitic numbers: it gets stuck in an infinite loop when the derived number equals the derivation source. An example of this occurs when n = 5 and k = 5. The 42-digit n-parasitic number to be derived is 102040816326530612244897959183673469387755. Check the steps in Table One below. The algorithm begins building from right to left until it reaches step 15; then the infinite loop occurs.
Lines 16 and 17 are pictured to show that nothing changes. There is a fix for this problem, and when applied, the algorithm will not only find alln-parasitic numbers in base ten, it will find them in base 8 and base 16 as well. Look at line 15 in Table Two. The fix, when this condition is identified and then-parasitic number has not been found, is simply to not shift the product from the multiplication, but use it as is, and appendn(in this case 5) to the end. After 42 steps, the proper parasitic number will be found. There is one more condition to be aware of when working with this algorithm, leading zeros must not be lost. When the shift number is created it may contain a leading zero which is positionally important and must be carried into and through the next step. Calculators and computer math methods will remove leading zeros. Look at Table Three below displaying the derivation steps forn= 4 andk= 4. The Shift number created in step 4, 02564, has a leading zero which is fed into step 5 creating a leading zero product. The resulting Shift is fed into Step 6 which displays a product proving the 4-parasitic number ending in 4 is 102564. The smallestn-parasitic numbers are also known asDyson numbers, after a puzzle concerning these numbers posed byFreeman Dyson.[1][2][3]They are: (leading zeros are not allowed) (sequenceA092697in theOEIS) In general, if we relax the rules to allow a leading zero, then there are 9n-parasitic numbers for eachn. Otherwise only ifk≥nthen the numbers do not start with zero and hence fit the actual definition. Othern-parasitic integers can be built by concatenation. For example, since 179487 is a 4-parasitic number, so are 179487179487, 179487179487179487 etc. 
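The closed form from the article, $k(10^m-1)/(10n-1)$ with $m$ the multiplicative order of 10 modulo $10n-1$, avoids the infinite-loop pitfall of the digit-by-digit derivation entirely. A sketch (function names ours; note that when $\gcd(k, 10n-1) > 1$ the result is a repetition of a shorter parasitic number):

```python
def mult_order(b, m):
    """Multiplicative order of b modulo m (assumes gcd(b, m) = 1)."""
    k, x = 1, b % m
    while x != 1:
        x = x * b % m
        k += 1
    return k

def parasitic(n, k):
    """An n-parasitic number ending in digit k (k >= n), as the period of
    the repeating decimal k/(10n - 1): k * (10^m - 1) / (10n - 1)."""
    q = 10 * n - 1
    m = mult_order(10, q)
    return k * (10 ** m - 1) // q   # exact: q divides 10^m - 1
```

This recovers 179487 for n = 4, k = 7, the 42-digit number for n = 5, k = 5, and the n = 2 example; the rotation property can be checked on the digit string.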
Induodecimalsystem, the smallestn-parasitic numbers are: (using inverted two and three for ten and eleven, respectively) (leading zeros are not allowed) In strict definition, least numbermbeginning with 1 such that the quotientm/nis obtained merely by shifting the leftmost digit 1 ofmto the right end are They are the period ofn/(10n− 1), also the period of thedecadic integer-n/(10n− 1). Number of digits of them are
https://en.wikipedia.org/wiki/Parasitic_number
A trailing zero is any 0 digit that comes after the last nonzero digit in a number string in positional notation. For digits before the decimal point, the trailing zeros between the decimal point and the last nonzero digit are necessary for conveying the magnitude of a number and cannot be omitted (e.g. 100), while leading zeros – zeros occurring before the decimal point and before the first nonzero digit – can be omitted without changing the meaning (e.g. 001). Any zeros appearing to the right of the last non-zero digit after the decimal point do not affect its value (e.g. 0.100). Thus, decimal notation often does not use trailing zeros that come after the decimal point. However, trailing zeros that come after the decimal point may be used to indicate the number of significant figures, for example in a measurement, and in that context, "simplifying" a number by removing trailing zeros would be incorrect. The number of trailing zeros in a non-zero base-b integer n equals the exponent of the highest power of b that divides n. For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 10^3, but not by 10^4. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word. In pharmacy, trailing zeros are omitted from dose values to prevent misreading. The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer n, is simply the multiplicity of the prime factor 5 in n!. This can be determined with this special case of de Polignac's formula:[1] $f(n)=\sum_{i=1}^{k}\left\lfloor \frac{n}{5^{i}}\right\rfloor$, where k must be chosen such that $5^{k+1}>n$, and $\lfloor a\rfloor$ denotes the floor function applied to a. For n = 0, 1, 2, … this is 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, …. For example, $5^3 > 32$, and therefore 32! = 263130836933693530167218012160000000 ends in $\lfloor 32/5\rfloor + \lfloor 32/25\rfloor = 6 + 1 = 7$ zeros.
If n < 5, the inequality is satisfied by k = 0; in that case the sum is empty, giving the answer 0. The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero. Defining q_i = ⌊n/5^i⌋, the following recurrence relation holds: q_0 = n and q_(i+1) = ⌊q_i/5⌋. This can be used to simplify the computation of the terms of the summation, which can be stopped as soon as q_i reaches zero. The condition 5^(k+1) > n is equivalent to q_(k+1) = 0.
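The recurrence lends itself to a short loop. A minimal Python sketch (the name `trailing_zeros_factorial` is mine) that counts the trailing zeros of n! by repeatedly dividing by 5:

```python
import math

def trailing_zeros_factorial(n):
    """Trailing zeros of n!: sum the recurrence q_0 = n, q_{i+1} = q_i // 5
    until q_i reaches zero."""
    count, q = 0, n
    while q:
        q //= 5
        count += q
    return count

# 5^3 > 32, so the sum stops after three terms: 6 + 1 + 0 = 7
assert trailing_zeros_factorial(32) == 7
assert str(math.factorial(32)).endswith('0' * 7)
assert not str(math.factorial(32)).endswith('0' * 8)
```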
https://en.wikipedia.org/wiki/Trailing_zero
The reciprocals of prime numbers have been of interest to mathematicians for various reasons. They do not have a finite sum, as Leonhard Euler proved in 1737. Like rational numbers, the reciprocals of primes have repeating decimal representations. In his later years, George Salmon (1819–1904) concerned himself with the repeating periods of these decimal representations of reciprocals of primes.[1] Contemporaneously, William Shanks (1812–1882) calculated numerous reciprocals of primes and their repeating periods, and published two papers "On Periods in the Reciprocals of Primes" in 1873[2] and 1874.[3] In 1874 he also published a table of primes, and the periods of their reciprocals, up to 20,000 (with help from and "communicated by the Rev. George Salmon"), and pointed out the errors in previous tables by three other authors.[4] Rules for calculating the periods of repeating decimals from rational fractions were given by James Whitbread Lee Glaisher in 1878.[5] For a prime p, the period of its reciprocal divides p − 1.[6] The sequence of recurrence periods of the reciprocal primes (sequence A002371 in the OEIS) appears in the 1973 Handbook of Integer Sequences. A full reptend prime, full repetend prime, proper prime,[7]: 166 or long prime in base b is an odd prime number p such that the Fermat quotient (b^(p−1) − 1)/p (where p does not divide b) gives a cyclic number with p − 1 digits. Therefore, the base-b expansion of 1/p repeats the digits of the corresponding cyclic number infinitely. A prime p (where p ≠ 2, 5 when working in base 10) is called unique if there is no other prime q such that the period length of the decimal expansion of its reciprocal, 1/p, is equal to the period length of the reciprocal of q, 1/q.[8] For example, 3 is the only prime with period 1, 11 is the only prime with period 2, 37 is the only prime with period 3, and 101 is the only prime with period 4, so they are unique primes.
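The period of 1/p in base b is the multiplicative order of b modulo p, which can be computed directly. A small Python sketch (the helper name `period` is my own) confirming the periods quoted above:

```python
def period(p, base=10):
    """Period of the base-b expansion of 1/p: the multiplicative order
    of the base modulo p (p must not divide the base)."""
    k, r = 1, base % p
    while r != 1:
        r = r * base % p
        k += 1
    return k

# 3, 11, 37, 101 are the unique primes with periods 1 through 4
assert [period(p) for p in (3, 11, 37, 101)] == [1, 2, 3, 4]
# 7 is a full reptend prime: its period is p - 1 = 6, the cyclic number 142857
assert period(7) == 6
```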
The next larger unique prime is 9091, with period 10, even though the next larger period is 9 (its prime, 333667, being larger than 9091). Unique primes were described by Samuel Yates in 1980.[9] A prime number p is unique if and only if there exists an n such that Φ_n(10)/gcd(Φ_n(10), n) is a power of p, where Φ_n(b) denotes the nth cyclotomic polynomial evaluated at b. The value of n is then the period of the decimal expansion of 1/p.[10] At present, more than fifty decimal unique primes or probable primes are known. However, there are only twenty-three unique primes below 10^100.
https://en.wikipedia.org/wiki/Unique_prime
In mathematics, 0.999... (also written as 0.9 or 0.(9), with the repetition of the 9 marked by an overline or dot) is a repeating decimal that is an alternative way of writing the number 1. Following the standard rules for representing numbers in decimal notation, its value is the smallest number greater than or equal to every number in the sequence 0.9, 0.99, 0.999, .... It can be proved that this number is 1; that is, 0.999... = 1. Despite common misconceptions, 0.999... is not "almost exactly 1" or "very, very nearly but not quite 1"; rather, "0.999..." and "1" represent exactly the same number. An elementary proof is given below that involves only elementary arithmetic and the fact that there is no positive real number less than all 1/10^n, where n is a natural number, a property that results immediately from the Archimedean property of the real numbers. There are many other ways of showing this equality, from intuitive arguments to mathematically rigorous proofs. The intuitive arguments are generally based on properties of finite decimals that are extended without proof to infinite decimals. The proofs are generally based on basic properties of real numbers and methods of calculus, such as series and limits. A question studied in mathematics education is why some people reject this equality. In other number systems, 0.999... can have the same meaning, a different definition, or be undefined. Every nonzero terminating decimal has two equal representations (for example, 8.32000... and 8.31999...). Having values with multiple representations is a feature of all positional numeral systems that represent the real numbers. It is possible to prove the equation 0.999... = 1 using just the mathematical tools of comparison and addition of (finite) decimal numbers, without any reference to more advanced topics such as series and limits. The proof given below is a direct formalization of the intuitive fact that, if one draws 0.9, 0.99, 0.999, etc. on the number line, there is no room left for placing a number between them and 1.
The meaning of the notation 0.999... is the least point on the number line lying to the right of all of the numbers 0.9, 0.99, 0.999, etc. Because there is ultimately no room between 1 and these numbers, the point 1 must be this least point, and so 0.999... = 1. If one places 0.9, 0.99, 0.999, etc. on the number line, one sees immediately that all these points are to the left of 1, and that they get closer and closer to 1. For any number x that is less than 1, the sequence 0.9, 0.99, 0.999, and so on will eventually reach a number larger than x. So, it does not make sense to identify 0.999... with any number smaller than 1. Meanwhile, every number larger than 1 will be larger than any decimal of the form 0.999...9 for any finite number of nines. Therefore, 0.999... cannot be identified with any number larger than 1, either. Because 0.999... cannot be bigger than 1 or smaller than 1, it must equal 1 if it is to be any real number at all.[1][2] Denote by 0.(9)_n the number 0.999...9, with n nines after the decimal point. Thus 0.(9)_1 = 0.9, 0.(9)_2 = 0.99, 0.(9)_3 = 0.999, and so on. One has 1 − 0.(9)_1 = 0.1 = 1/10, 1 − 0.(9)_2 = 0.01 = 1/10^2, and so on; that is, 1 − 0.(9)_n = 1/10^n for every natural number n. Let x be a number not greater than 1 and greater than 0.9, 0.99, 0.999, etc.; that is, 0.(9)_n < x ≤ 1 for every n. By subtracting these inequalities from 1, one gets 0 ≤ 1 − x < 1/10^n. The end of the proof requires that there is no positive number that is less than 1/10^n for all n.
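The identity 1 − 0.(9)_n = 1/10^n can be verified with exact rational arithmetic. A brief Python sketch using the standard library's `fractions` module (the helper name `nines` is my own):

```python
from fractions import Fraction

def nines(n):
    """0.(9)_n, i.e. 0.99...9 with n nines, as an exact rational."""
    return Fraction(10 ** n - 1, 10 ** n)

# 1 - 0.(9)_n = 1/10^n exactly, for every n
for n in range(1, 12):
    assert 1 - nines(n) == Fraction(1, 10 ** n)

# any fixed x < 1 is eventually overtaken by the sequence 0.9, 0.99, ...
x = Fraction(999999, 1000000)
assert nines(7) > x
```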
This is one version of the Archimedean property, which is true for real numbers.[3][4] This property implies that if 1 − x < 1/10^n for all n, then 1 − x can only be equal to 0. So x = 1, and 1 is the smallest number that is greater than all 0.9, 0.99, 0.999, etc. That is, 1 = 0.999.... This proof relies on the Archimedean property of rational and real numbers. Real numbers may be enlarged into number systems, such as hyperreal numbers, with infinitely small numbers (infinitesimals) and infinitely large numbers (infinite numbers).[5][6] When using such systems, the notation 0.999... is generally not used, as there is no smallest number among the numbers larger than all 0.(9)_n.[a] Part of what this argument shows is that there is a least upper bound of the sequence 0.9, 0.99, 0.999, etc.: the smallest number that is greater than all of the terms of the sequence. One of the axioms of the real number system is the completeness axiom, which states that every bounded sequence has a least upper bound.[7][8] This least upper bound is one way to define infinite decimal expansions: the real number represented by an infinite decimal is the least upper bound of its finite truncations.[9] The argument here does not need to assume completeness to be valid, because it shows that this particular sequence of rational numbers has a least upper bound and that this least upper bound is equal to one.[10] Simple algebraic illustrations of equality are a subject of pedagogical discussion and critique. Byers (2007) discusses the argument that, in elementary school, one is taught that 1/3 = 0.333..., so, ignoring all essential subtleties, "multiplying" this identity by 3 gives 1 = 0.999....
He further says that this argument is unconvincing, because of an unresolved ambiguity over the meaning of the equals sign; a student might think, "It surely does not mean that the number 1 is identical to that which is meant by the notation 0.999...." Most undergraduate mathematics majors encountered by Byers feel that while 0.999... is "very close" to 1 on the strength of this argument, with some even saying that it is "infinitely close", they are not ready to say that it is equal to 1.[11] Richman (1999) discusses how "this argument gets its force from the fact that most people have been indoctrinated to accept the first equation without thinking", but also suggests that the argument may lead skeptics to question this assumption.[12] Byers also presents the following argument. Students who did not accept the first argument sometimes accept the second argument, but, in Byers's opinion, still have not resolved the ambiguity, and therefore do not understand the representation of infinite decimals. Peressini & Peressini (2007), presenting the same argument, also state that it does not explain the equality, indicating that such an explanation would likely involve concepts of infinity and completeness.[13] Baldwin & Norton (2012), citing Katz & Katz (2010a), also conclude that the treatment of the identity based on such arguments as these, without the formal concept of a limit, is premature.[14] Cheng (2023) concurs, arguing that knowing one can multiply 0.999... by 10 by shifting the decimal point presumes an answer to the deeper question of how one gives a meaning to the expression 0.999...
at all.[15] The same argument is also given by Richman (1999), who notes that skeptics may question whether x is cancellable – that is, whether it makes sense to subtract x from both sides.[12] Eisenmann (2008) similarly argues that both the multiplication and subtraction which remove the infinite decimal require further justification.[16] Real analysis is the study of the logical underpinnings of calculus, including the behavior of sequences and series of real numbers.[17] The proofs in this section establish 0.999... = 1 using techniques familiar from real analysis. A common development of decimal expansions is to define them as sums of infinite series. In general: b_0.b_1b_2b_3b_4... = b_0 + b_1(1/10) + b_2(1/10)^2 + b_3(1/10)^3 + b_4(1/10)^4 + ⋯. For 0.999... one can apply the convergence theorem concerning geometric series, stating that if |r| < 1, then:[18] ar + ar^2 + ar^3 + ⋯ = ar/(1 − r). Since 0.999... is such a sum with a = 9 and common ratio r = 1/10, the theorem makes short work of the question: 0.999... = 9(1/10) + 9(1/10)^2 + 9(1/10)^3 + ⋯ = 9(1/10)/(1 − 1/10) = 1. This proof appears as early as 1770 in Leonhard Euler's Elements of Algebra.[19] The sum of a geometric series is itself a result even older than Euler.
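The geometric-series computation can be mirrored with exact rationals: the closed form gives 1, and each partial sum falls short of 1 by exactly 1/10^n. A short Python sketch:

```python
from fractions import Fraction

a, r = Fraction(9), Fraction(1, 10)

# closed form of the geometric series: ar + ar^2 + ar^3 + ... = ar/(1 - r)
assert a * r / (1 - r) == 1

# the partial sums 9/10 + 9/100 + ... + 9/10^n equal 1 - 1/10^n
s = Fraction(0)
for k in range(1, 20):
    s += a * r ** k
    assert 1 - s == Fraction(1, 10 ** k)
```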
A typical 18th-century derivation used a term-by-term manipulation similar to the algebraic proof given above, and as late as 1811, Bonnycastle's textbook An Introduction to Algebra uses such an argument for geometric series to justify the same maneuver on 0.999....[20] A 19th-century reaction against such liberal summation methods resulted in the definition that still dominates today: the sum of a series is defined to be the limit of the sequence of its partial sums. A corresponding proof of the theorem explicitly computes that sequence; it can be found in several proof-based introductions to calculus or analysis.[21] A sequence (x_0, x_1, x_2, ...) has the value x as its limit if the distance |x − x_n| becomes arbitrarily small as n increases. The statement that 0.999... = 1 can itself be interpreted and proven as a limit:[b] 0.999... := lim_(n→∞) 0.99...9 (n nines) := lim_(n→∞) Σ_(k=1)^n 9/10^k = lim_(n→∞) (1 − 1/10^n) = 1 − lim_(n→∞) 1/10^n = 1 − 0 = 1. The first two equalities can be interpreted as symbol shorthand definitions. The remaining equalities can be proven. The last step, that 1/10^n approaches 0 as n approaches infinity (∞), is often justified by the Archimedean property of the real numbers. This limit-based attitude towards 0.999... is often put in more evocative but less precise terms.
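The Archimedean step in the limit above, that 1/10^n eventually drops below any positive rational δ, can be made concrete. A small Python sketch (the name `witness_N` is mine):

```python
from fractions import Fraction

def witness_N(delta):
    """Smallest N such that 1/10^n <= delta for every n > N
    (the Archimedean property made explicit for this sequence)."""
    N = 0
    while Fraction(1, 10 ** (N + 1)) > delta:
        N += 1
    return N

# the tail of (1, 1/10, 1/100, ...) gets below any positive rational delta
for delta in (Fraction(1, 3), Fraction(1, 1000), Fraction(1, 7 ** 10)):
    N = witness_N(delta)
    assert Fraction(1, 10 ** (N + 1)) <= delta      # tail is below delta
    assert N == 0 or Fraction(1, 10 ** N) > delta   # and N is minimal
```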
For example, the 1846 textbook The University Arithmetic explains, ".999 +, continued to infinity = 1, because every annexation of a 9 brings the value closer to 1"; the 1895 Arithmetic for Schools says, "when a large number of 9s is taken, the difference between 1 and .99999... becomes inconceivably small".[22] Such heuristics are often incorrectly interpreted by students as implying that 0.999... itself is less than 1.[23] The series definition above defines the real number named by a decimal expansion. A complementary approach is tailored to the opposite process: for a given real number, define the decimal expansion(s) to name it. If a real number x is known to lie in the closed interval [0, 10] (that is, it is greater than or equal to 0 and less than or equal to 10), one can imagine dividing that interval into ten pieces that overlap only at their endpoints: [0, 1], [1, 2], [2, 3], and so on up to [9, 10]. The number x must belong to one of these; if it belongs to [2, 3], then one records the digit "2" and subdivides that interval into [2, 2.1], [2.1, 2.2], ..., [2.8, 2.9], [2.9, 3]. Continuing this process yields an infinite sequence of nested intervals, labeled by an infinite sequence of digits b_1, b_2, b_3, ..., and one writes x = b_0.b_1b_2b_3.... In this formalism, the identities 1 = 0.999... and 1 = 1.000... reflect, respectively, the fact that 1 lies in both [0, 1] and [1, 2], so one can choose either subinterval when finding its digits. To ensure that this notation does not abuse the "=" sign, one needs a way to reconstruct a unique real number for each decimal.
This can be done with limits, but other constructions continue with the ordering theme.[24] One straightforward choice is the nested intervals theorem, which guarantees that given a sequence of nested, closed intervals whose lengths become arbitrarily small, the intervals contain exactly one real number in their intersection. So b_0.b_1b_2b_3... is defined to be the unique number contained within all the intervals [b_0, b_0 + 1], [b_0.b_1, b_0.b_1 + 0.1], and so on. 0.999... is then the unique real number that lies in all of the intervals [0, 1], [0.9, 1], [0.99, 1], and [0.99...9, 1] for every finite string of 9s. Since 1 is an element of each of these intervals, 0.999... = 1.[25] The nested intervals theorem is usually founded upon a more fundamental characteristic of the real numbers: the existence of least upper bounds or suprema. To directly exploit these objects, one may define b_0.b_1b_2b_3... to be the least upper bound of the set of approximants b_0, b_0.b_1, b_0.b_1b_2, ....[26] One can then show that this definition (or the nested intervals definition) is consistent with the subdivision procedure, implying 0.999... = 1 again. Tom Apostol concludes, "the fact that a real number might have two different decimal representations is merely a reflection of the fact that two different sets of real numbers can have the same supremum."[27] Some approaches explicitly define real numbers to be certain structures built upon the rational numbers, using axiomatic set theory. The natural numbers {0, 1, 2, 3, ...} begin with 0 and continue upwards so that every number has a successor. One can extend the natural numbers with their negatives to give all the integers, and further extend to ratios, giving the rational numbers.
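The subdivision procedure can be simulated with exact arithmetic; a tie-breaking flag decides which subinterval to take when a number sits on a boundary, and choosing the lower one yields the trailing-9s expansion. A Python sketch (the function name and the `trailing_nines` flag are my own devising):

```python
from fractions import Fraction

def expansion(x, n, trailing_nines=False):
    """First n decimal digits of x in [0, 1] via nested-interval subdivision.
    When x lands exactly on a subdivision endpoint, trailing_nines=True picks
    the subinterval to its left, producing the trailing-9s representation.
    (For x == 1 only the trailing_nines choice yields digits at all.)"""
    out = []
    for _ in range(n):
        x *= 10
        d = int(x)                     # index of the subinterval containing x
        if trailing_nines and d == x and d > 0:
            d -= 1                     # endpoint: take the lower interval
        out.append(d)
        x -= d
    return out

assert expansion(Fraction(1), 6, trailing_nines=True) == [9, 9, 9, 9, 9, 9]
assert expansion(Fraction(1, 4), 6) == [2, 5, 0, 0, 0, 0]
assert expansion(Fraction(1, 4), 6, trailing_nines=True) == [2, 4, 9, 9, 9, 9]
```

The last two lines show the two representations 0.25 = 0.25000... = 0.24999... arising from the two admissible choices at a boundary.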
These number systems are accompanied by the arithmetic of addition, subtraction, multiplication, and division.[28][29] More subtly, they include ordering, so that one number can be compared to another and found to be less than, greater than, or equal to another number.[30] The step from rationals to reals is a major extension. There are at least two popular ways to achieve this step, both published in 1872: Dedekind cuts and Cauchy sequences. Proofs that 0.999... = 1 which directly use these constructions are not found in textbooks on real analysis, where the modern trend for the last few decades has been to use an axiomatic analysis. Even when a construction is offered, it is usually applied toward proving the axioms of the real numbers, which then support the above proofs. However, several authors express the idea that starting with a construction is more logically appropriate, and the resulting proofs are more self-contained.[c] In the Dedekind cut approach, each real number x is defined as the infinite set of all rational numbers less than x.[d] In particular, the real number 1 is the set of all rational numbers that are less than 1.[e] Every positive decimal expansion easily determines a Dedekind cut: the set of rational numbers that are less than some stage of the expansion. So the real number 0.999... is the set of rational numbers r such that r < 0, or r < 0.9, or r < 0.99, or r is less than some other number of the form[31] 1 − 1/10^n = 0.(9)_n = 0.99...9 (with n nines). Every element of 0.999... is less than 1, so it is an element of the real number 1. Conversely, all elements of 1 are rational numbers that can be written as a/b < 1, with b > 0 and b > a.
This implies 1 − a/b = (b − a)/b ≥ 1/b > 1/10^b, and thus a/b < 1 − 1/10^b. Since 1 − 1/10^b = 0.(9)_b < 0.999..., by the definition above, every element of 1 is also an element of 0.999..., and, combined with the proof above that every element of 0.999... is also an element of 1, the sets 0.999... and 1 contain the same rational numbers, and are therefore the same set; that is, 0.999... = 1. The definition of real numbers as Dedekind cuts was first published by Richard Dedekind in 1872.[32] The above approach to assigning a real number to each decimal expansion is due to an expository paper titled "Is 0.999 ... = 1?" by Fred Richman in Mathematics Magazine.[12] Richman notes that taking Dedekind cuts in any dense subset of the rational numbers yields the same results; in particular, he uses decimal fractions, for which the proof is more immediate. He also notes that typically the definitions allow {x | x < 1} to be a cut but not {x | x ≤ 1} (or vice versa).[33] A further modification of the procedure leads to a different structure where the two are not equal. Although it is consistent, many of the common rules of decimal arithmetic no longer hold; for example, the fraction 1/3 has no representation; see § Alternative number systems below. Another approach is to define a real number as the limit of a Cauchy sequence of rational numbers. This construction of the real numbers uses the ordering of rationals less directly. First, the distance between x and y is defined as the absolute value |x − y|, where the absolute value |z| is defined as the maximum of z and −z, thus never negative.
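The two Dedekind cuts can be modeled as membership predicates and compared on sample rationals; the bound a/b < 1 − 1/10^b derived in the passage above makes the 0.999... cut decidable by checking a single truncation. A Python sketch (the predicate names are mine):

```python
from fractions import Fraction

def in_cut_one(r):
    """Dedekind cut for 1: all rationals r < 1."""
    return r < 1

def in_cut_nines(r):
    """Dedekind cut for 0.999...: rationals below some truncation 0.(9)_n.
    For 0 <= r = a/b < 1 the bound a/b < 1 - 1/10^b means that checking
    n = b (the denominator) suffices."""
    if r < 0:
        return True
    return r < 1 - Fraction(1, 10 ** r.denominator)

# the two cuts contain exactly the same rationals, so they are the same real
samples = [Fraction(a, b) for b in range(1, 25) for a in range(-5, 40)]
assert all(in_cut_one(r) == in_cut_nines(r) for r in samples)
```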
Then the reals are defined to be the sequences of rationals that have the Cauchy sequence property using this distance. That is, in the sequence x_0, x_1, x_2, ..., a mapping from natural numbers to rationals, for any positive rational δ there is an N such that |x_m − x_n| ≤ δ for all m, n > N; the distance between terms becomes smaller than any positive rational.[34] If (x_n) and (y_n) are two Cauchy sequences, then they are defined to be equal as real numbers if the sequence (x_n − y_n) has the limit 0. Truncations of the decimal number b_0.b_1b_2b_3... generate a sequence of rationals, which is Cauchy; this is taken to define the real value of the number.[35] Thus in this formalism the task is to show that the sequence of rational numbers (1 − 0, 1 − 9/10, 1 − 99/100, ...) = (1, 1/10, 1/100, ...) has the limit 0. Considering the nth term of the sequence, for n ∈ ℕ, it must therefore be shown that lim_(n→∞) 1/10^n = 0. This can be proved by the definition of a limit. So again, 0.999... = 1.[36] The definition of real numbers as Cauchy sequences was first published separately by Eduard Heine and Georg Cantor, also in 1872.[32] The above approach to decimal expansions, including the proof that 0.999...
= 1, closely follows Griffiths & Hilton's 1970 work A comprehensive textbook of classical mathematics: A contemporary interpretation.[37] Commonly in secondary schools' mathematics education, the real numbers are constructed by defining a number using an integer followed by a radix point and an infinite sequence written out as a string to represent the fractional part of any given real number. In this construction, the set of any combination of an integer and digits after the decimal point (or radix point in non-base-10 systems) is the set of real numbers. This construction can be rigorously shown to satisfy all of the real axioms after defining an equivalence relation over the set that identifies 1 with 0.999..., and likewise identifies any other nonzero decimal with only finitely many nonzero terms in the decimal string with its trailing-9s version. In other words, the equality 0.999... = 1 holding true is a necessary condition for strings of digits to behave as real numbers should.[38][39] One of the notions that can resolve the issue is the requirement that real numbers be densely ordered. Dense ordering implies that if there is no new element strictly between two elements of the set, the two elements must be considered equal. Therefore, if 0.99999... were to be different from 1, there would have to be another real number in between them, but there is none: a single digit cannot be changed in either of the two to obtain such a number.[40] The result that 0.999... = 1 generalizes readily in two ways. First, every nonzero number with a finite decimal notation (equivalently, endless trailing 0s) has a counterpart with trailing 9s. For example, 0.24999... equals 0.25, exactly as in the special case considered. These numbers are exactly the decimal fractions, and they are dense.[41][9] Second, a comparable theorem applies in each radix (base). For example, in base 2 (the binary numeral system) 0.111... equals 1, and in base 3 (the ternary numeral system) 0.222... equals 1.
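The base-b analogues can be checked the same way: n copies of the digit b − 1 after the radix point sum to exactly 1 − 1/b^n, so the infinite string represents 1. A short Python sketch (the helper name is mine):

```python
from fractions import Fraction

def repeated_digit_value(b, n):
    """Value of 0.ddd...d in base b with n copies of the digit d = b - 1."""
    return sum(Fraction(b - 1, b ** k) for k in range(1, n + 1))

# in every base, n trailing digits b-1 fall short of 1 by exactly 1/b^n,
# so 0.111..._2 = 0.222..._3 = 0.999..._10 = 1 in the limit
for base in (2, 3, 10, 16):
    for n in (1, 5, 20):
        assert repeated_digit_value(base, n) == 1 - Fraction(1, base ** n)
```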
In general, any terminating base-b expression has a counterpart with repeated trailing digits equal to b − 1. Textbooks of real analysis are likely to skip the example of 0.999... and present one or both of these generalizations from the start.[42] Alternative representations of 1 also occur in non-integer bases. For example, in the golden ratio base, the two standard representations are 1.000... and 0.101010..., and there are infinitely many more representations that include adjacent 1s. Generally, for almost all q between 1 and 2, there are uncountably many base-q expansions of 1. In contrast, there are still uncountably many q, including all natural numbers greater than 1, for which there is only one base-q expansion of 1 other than the trivial 1.000.... This result was first obtained by Paul Erdős, Miklos Horváth, and István Joó around 1990. In 1998, Vilmos Komornik and Paola Loreti determined the smallest such base, the Komornik–Loreti constant q = 1.787231650.... In this base, 1 = 0.11010011001011010010110011010011...; the digits are given by the Thue–Morse sequence, which does not repeat.[43] A more far-reaching generalization addresses the most general positional numeral systems. They too have multiple representations, and in some sense, the difficulties are even worse. For example:[44] Petkovšek (1990) has proven that for any positional system that names all the real numbers, the set of reals with multiple representations is always dense. He calls the proof "an instructive exercise in elementary point-set topology"; it involves viewing sets of positional values as Stone spaces and noticing that their real representations are given by continuous functions.[45] One application of 0.999... as a representation of 1 occurs in elementary number theory. In 1802, H.
Goodwyn published an observation on the appearance of 9s in the repeating-decimal representations of fractions whose denominators are certain prime numbers.[46] Examples include: E. Midy proved a general result about such fractions, now called Midy's theorem, in 1836. The publication was obscure, and it is unclear whether his proof directly involved 0.999..., but at least one modern proof, by William G. Leavitt, does. It can be proved that if a decimal of the form 0.b_1b_2b_3... is a positive integer, then it must be 0.999..., which is then the source of the 9s in the theorem.[47] Investigations in this direction can motivate such concepts as greatest common divisors, modular arithmetic, Fermat primes, order of group elements, and quadratic reciprocity.[48] Returning to real analysis, the base-3 analogue 0.222... = 1 plays a key role in the characterization of one of the simplest fractals, the middle-thirds Cantor set: a point in the unit interval lies in the Cantor set if and only if it can be represented in ternary using only the digits 0 and 2. The nth digit of the representation reflects the position of the point in the nth stage of the construction. For example, the point 2/3 is given the usual representation of 0.2 or 0.2000..., since it lies to the right of the first deletion and to the left of every deletion thereafter. The point 1/3 is represented not as 0.1 but as 0.0222..., since it lies to the left of the first deletion and to the right of every deletion thereafter.[49] Repeating nines also turn up in yet another of Georg Cantor's works. They must be taken into account to construct a valid proof, applying his 1891 diagonal argument to decimal expansions, of the uncountability of the unit interval. Such a proof needs to be able to declare certain pairs of real numbers to be different based on their decimal expansions, so one needs to avoid pairs like 0.2 and 0.1999...
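Midy's theorem mentioned above is easy to observe computationally: for a prime whose reciprocal has an even period, the two halves of the repeating block sum to a string of 9s. A Python sketch (the helper name is mine):

```python
def period_string(p):
    """Repeating block of the decimal expansion of 1/p for a prime p
    not dividing 10, computed by long division."""
    digits, r = [], 1
    while True:
        r *= 10
        digits.append(r // p)
        r %= p
        if r == 1:
            return ''.join(map(str, digits))

# Midy's theorem: for an even period, the two halves sum to all 9s,
# e.g. 1/7 = 0.(142857) with 142 + 857 = 999
for p in (7, 11, 13, 17, 19, 23):
    s = period_string(p)
    half = len(s) // 2
    assert len(s) % 2 == 0
    assert int(s[:half]) + int(s[half:]) == 10 ** half - 1
```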
A simple method represents all numbers with nonterminating expansions; the opposite method rules out repeating nines.[f] A variant that may be closer to Cantor's original argument uses base 2, and by turning base-3 expansions into base-2 expansions, one can prove the uncountability of the Cantor set as well.[50] Students of mathematics often reject the equality of 0.999... and 1, for reasons ranging from their disparate appearance to deep misgivings over the limit concept and disagreements over the nature of infinitesimals. There are many common contributing factors to the confusion: These ideas are mistaken in the context of the standard real numbers, although some may be valid in other number systems, either invented for their general mathematical utility or as instructive counterexamples to better understand 0.999...; see § In alternative number systems below. Many of these explanations were found by David Tall, who has studied characteristics of teaching and cognition that lead to some of the misunderstandings he has encountered with his college students. Interviewing his students to determine why the vast majority initially rejected the equality, he found that "students continued to conceive of 0.999... as a sequence of numbers getting closer and closer to 1 and not a fixed value, because 'you haven't specified how many places there are' or 'it is the nearest possible decimal below 1'".[23] The elementary argument of multiplying 0.333... = 1/3 by 3 can convince reluctant students that 0.999... = 1. Still, when confronted with the conflict between their belief in the first equation and their disbelief in the second, some students either begin to disbelieve the first equation or simply become frustrated.[53] Nor are more sophisticated methods foolproof: students who are fully capable of applying rigorous definitions may still fall back on intuitive images when they are surprised by a result in advanced mathematics, including 0.999....
For example, one real analysis student was able to prove that 0.333... = 1/3 using a supremum definition but then insisted that 0.999... < 1 based on her earlier understanding of long division.[54] Others still can prove that 1/3 = 0.333... but, upon being confronted by the fractional proof, insist that "logic" supersedes the mathematical calculations. Mazur (2005) tells the tale of an otherwise brilliant calculus student of his who "challenged almost everything I said in class but never questioned his calculator", and who had come to believe that nine digits are all one needs to do mathematics, including calculating the square root of 23. The student remained uncomfortable with a limiting argument that 9.99... = 10, calling it a "wildly imagined infinite growing process".[55] As part of the APOS theory of mathematical learning, Dubinsky et al. (2005) propose that students who conceive of 0.999... as a finite, indeterminate string with an infinitely small distance from 1 have "not yet constructed a complete process conception of the infinite decimal". Other students who have a complete process conception of 0.999... may not yet be able to "encapsulate" that process into an "object conception", like the object conception they have of 1, and so they view the process 0.999... and the object 1 as incompatible. They also link this mental ability of encapsulation to viewing 1/3 as a number in its own right and to dealing with the set of natural numbers as a whole.[56] With the rise of the Internet, debates about 0.999... have become commonplace on newsgroups and message boards, including many that nominally have little to do with mathematics. In the newsgroup sci.math in the 1990s, arguing over 0.999...
became a "popular sport", and was one of the questions answered in its FAQ.[57][58] The FAQ briefly covers 1/3, multiplication by 10, and limits, and alludes to Cauchy sequences as well. A 2003 edition of the general-interest newspaper column The Straight Dope discusses 0.999... via 1/3 and limits, saying of misconceptions: The lower primate in us still resists, saying: .999~ doesn't really represent a number, then, but a process. To find a number we have to halt the process, at which point the .999~ = 1 thing falls apart. Nonsense.[59] A Slate article reports that the concept of 0.999... is "hotly disputed on websites ranging from World of Warcraft message boards to Ayn Rand forums".[60] 0.999... also features in mathematical jokes, such as:[61] Q: How many mathematicians does it take to screw in a lightbulb? A: 0.999999.... The fact that 0.999... is equal to 1 has been compared to Zeno's paradox of the runner.[62] The runner paradox can be mathematically modeled and then, like 0.999..., resolved using a geometric series. However, it is not clear whether this mathematical treatment addresses the underlying metaphysical issues Zeno was exploring.[63] Although the real numbers form an extremely useful number system, the decision to interpret the notation "0.999..." as naming a real number is ultimately a convention, and Timothy Gowers argues in Mathematics: A Very Short Introduction that the resulting identity 0.999... = 1 is a convention as well: However, it is by no means an arbitrary convention, because not adopting it forces one either to invent strange new objects or to abandon some of the familiar rules of arithmetic.[64] Some proofs that 0.999... = 1 rely on the Archimedean property of the real numbers: that there are no nonzero infinitesimals.
Specifically, the difference 1 − 0.999... must be smaller than any positive rational number, so it must be an infinitesimal; but since the reals do not contain nonzero infinitesimals, the difference is zero, and therefore the two values are the same. However, there are mathematically coherent ordered algebraic structures, including various alternatives to the real numbers, which are non-Archimedean. Non-standard analysis provides a number system with a full array of infinitesimals (and their inverses).[h] A. H. Lightstone developed a decimal expansion for hyperreal numbers in (0, 1)*. Lightstone shows how to associate each number with a sequence of digits, 0.d₁d₂d₃...;...d_{∞−1} d_∞ d_{∞+1}..., indexed by the hypernatural numbers. While he does not directly discuss 0.999..., he shows that the real number 1/3 is represented by 0.333...;...333..., which is a consequence of the transfer principle. As a consequence, the number 0.999...;...999... = 1. With this type of decimal representation, not every expansion represents a number. In particular, "0.333...;...000..." and "0.999...;...000..." do not correspond to any number.[65] The standard definition of the number 0.999... is the limit of the sequence 0.9, 0.99, 0.999, .... A different definition involves an ultralimit, i.e., the equivalence class [(0.9, 0.99, 0.999, ...)] of this sequence in the ultrapower construction, which is a number that falls short of 1 by an infinitesimal amount.[66] More generally, the hyperreal number u_H = 0.999...;...999000..., with last digit 9 at infinite hypernatural rank H, satisfies the strict inequality u_H < 1. Accordingly, an alternative interpretation for "zero followed by infinitely many 9s" could be[67] 0.999...9 (with H nines) = 1 − 1/10^H. All such interpretations of "0.999..."
are infinitely close to 1. Ian Stewart characterizes this interpretation as an "entirely reasonable" way to rigorously justify the intuition that "there's a little bit missing" from 1 in 0.999....[i] Along with Katz & Katz (2010b), Ely (2010) also questions the assumption that students' ideas about 0.999... < 1 are erroneous intuitions about the real numbers, interpreting them rather as nonstandard intuitions that could be valuable in the learning of calculus.[68] Combinatorial game theory provides a generalized concept of number that encompasses the real numbers and much more besides.[69] For example, in 1974, Elwyn Berlekamp described a correspondence between strings of red and blue segments in Hackenbush and binary expansions of real numbers, motivated by the idea of data compression. For example, the value of the Hackenbush string LRRLRLRL... is 0.010101...₂ = 1/3. However, the value of LRLLL... (corresponding to 0.111...₂) is infinitesimally less than 1. The difference between the two is the surreal number 1/ω, where ω is the first infinite ordinal; the relevant game is LRRRR... or 0.000...₂.[j] This is true of the binary expansions of many rational numbers, where the values of the numbers are equal but the corresponding binary tree paths are different. For example, 0.10111...₂ = 0.11000...₂, which are both equal to 3/4, but the first representation corresponds to the binary tree path LRLRLLL..., while the second corresponds to the different path LRLLRRR.... Another manner in which the proofs might be undermined is if 1 − 0.999... simply does not exist because subtraction is not always possible. Mathematical structures with an addition operation but not a subtraction operation include commutative semigroups, commutative monoids, and semirings. Richman (1999) considers two such systems, designed so that 0.999...
< 1.[12] First, Richman (1999) defines a nonnegative decimal number to be a literal decimal expansion. He defines the lexicographical order and an addition operation, noting that 0.999... < 1 simply because 0 < 1 in the ones place, but for any nonterminating x, one has 0.999... + x = 1 + x. So one peculiarity of the decimal numbers is that addition cannot always be canceled; another is that no decimal number corresponds to 1/3. After defining multiplication, the decimal numbers form a positive, totally ordered, commutative semiring.[70] In the process of defining multiplication, Richman also defines another system he calls "cut D", which is the set of Dedekind cuts of decimal fractions. Ordinarily, this definition leads to the real numbers, but for a decimal fraction d he allows both the cut (−∞, d) and the "principal cut" (−∞, d]. The result is that the real numbers are "living uneasily together with" the decimal fractions. Again 0.999... < 1. There are no positive infinitesimals in cut D, but there is "a sort of negative infinitesimal", 0⁻, which has no decimal expansion. He concludes that 0.999... = 1 + 0⁻, while the equation "0.999... + x = 1" has no solution.[k] When asked about 0.999..., novices often believe there should be a "final 9", believing 1 − 0.999... to be a positive number which they write as "0.000...1". Whether or not that makes sense, the intuitive goal is clear: adding a 1 to the final 9 in 0.999... would carry all the 9s into 0s and leave a 1 in the ones place. Among other reasons, this idea fails because there is no "final 9" in 0.999....[71] However, there is a system that contains an infinite string of 9s including a last 9. The p-adic numbers are an alternative number system of interest in number theory.
Like the real numbers, the p-adic numbers can be built from the rational numbers via Cauchy sequences; the construction uses a different metric in which 0 is closer to p, and much closer to pⁿ, than it is to 1.[72] The p-adic numbers form a field for prime p and a ring for other p, including 10. So arithmetic can be performed in the p-adics, and there are no infinitesimals. In the 10-adic numbers, the analogues of decimal expansions run to the left. The 10-adic expansion ...999 does have a last 9, and it does not have a first 9. One can add 1 to the ones place, and carrying through leaves behind only 0s: 1 + ...999 = ...000 = 0, and so ...999 = −1.[73] Another derivation uses a geometric series. The infinite series implied by "...999" does not converge in the real numbers, but it converges in the 10-adics, and so one can re-use the familiar formula:[74] ...999 = 9 + 9(10) + 9(10)² + 9(10)³ + ⋯ = 9/(1 − 10) = −1. Compare with the series in the section above. A third derivation was invented by a seventh-grader who was doubtful over her teacher's limiting argument that 0.999... = 1 but was inspired to take the multiply-by-10 proof above in the opposite direction: if x = ...999, then 10x = ...990, so 10x = x − 9, hence x = −1 again.[73] As a final extension, since 0.999... = 1 (in the reals) and ...999 = −1 (in the 10-adics), then by "blind faith and unabashed juggling of symbols"[75] one may add the two equations and arrive at ...999.999... = 0.
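These 10-adic identities can be checked on finite truncations: a string of n nines is 10ⁿ − 1, which is congruent to −1 modulo 10ⁿ. The following Python sketch (the helper name is illustrative, not from the text) verifies the carrying argument, the congruence, and the seventh-grader's multiply-by-10 derivation for several truncation lengths:

```python
# Finite check of the 10-adic identity ...999 = -1: truncated to its last
# n digits, the expansion is 10**n - 1, congruent to -1 modulo 10**n.

def nines(n):
    """The 10-adic expansion ...999 truncated to its last n digits."""
    return 10**n - 1  # equals 9 + 9*10 + 9*10**2 + ... + 9*10**(n-1)

for n in [1, 5, 20]:
    x = nines(n)
    # Adding 1 carries through every digit, leaving only 0s (mod 10**n):
    assert (x + 1) % 10**n == 0
    # Equivalently, x is congruent to -1 modulo 10**n:
    assert x % 10**n == (-1) % 10**n
    # The multiply-by-10 derivation: 10x ends in ...990, i.e. x - 9:
    assert (10 * x) % 10**n == (x - 9) % 10**n
```

Each truncation only confirms a congruence, of course; the exact statement ...999 = −1 lives in the 10-adic limit.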
This equation does not make sense either as a 10-adic expansion or an ordinary decimal expansion, but it turns out to be meaningful and true in the doubly infinite decimal expansion of the 10-adic solenoid, with eventually repeating left ends to represent the real numbers and eventually repeating right ends to represent the 10-adic numbers.[76]
https://en.wikipedia.org/wiki/0.999...
In mathematics, the pigeonhole principle states that if n items are put into m containers, with n > m, then at least one container must contain more than one item.[1] For example, of three gloves, at least two must be right-handed or at least two must be left-handed, because there are three objects but only two categories of handedness to put them into. This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is more than one unit greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads. Although the pigeonhole principle appears as early as 1624 in a book attributed to Jean Leurechon,[2] it is commonly called Dirichlet's box principle or Dirichlet's drawer principle after an 1834 treatment of the principle by Peter Gustav Lejeune Dirichlet under the name Schubfachprinzip ("drawer principle" or "shelf principle").[3] The principle has several generalizations and can be stated in various ways. In a more quantified version: for natural numbers k and m, if n = km + 1 objects are distributed among m sets, the pigeonhole principle asserts that at least one of the sets will contain at least k + 1 objects.[4] For arbitrary n and m, this generalizes to k + 1 = ⌊(n − 1)/m⌋ + 1 = ⌈n/m⌉, where ⌊⋯⌋ and ⌈⋯⌉ denote the floor and ceiling functions, respectively. Though the principle's most straightforward application is to finite sets (such as pigeons and boxes), it is also used with infinite sets that cannot be put into one-to-one correspondence. To do so requires the formal statement of the pigeonhole principle: "there does not exist an injective function whose codomain is smaller than its domain".
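The quantified version can be checked mechanically for small cases. A Python sketch (function names are illustrative) confirms that the two closed forms ⌊(n − 1)/m⌋ + 1 and ⌈n/m⌉ agree, and verifies the bound by brute force over every placement of 5 objects into 3 boxes:

```python
# Brute-force check of the quantified pigeonhole bound: distributing n
# objects among m boxes, some box receives at least ceil(n/m) objects.
from itertools import product
from math import ceil, floor

def max_load(assignment, m):
    """Largest number of objects landing in any of the m boxes."""
    return max(assignment.count(box) for box in range(m))

n, m = 5, 3
bound = ceil(n / m)                      # here: 2
assert bound == floor((n - 1) / m) + 1   # the two closed forms agree

# Every one of the 3**5 assignments has a box with at least 2 objects:
assert all(max_load(a, m) >= bound for a in product(range(m), repeat=n))
```

The exhaustive loop is only feasible for tiny n and m, but the principle itself guarantees the bound for all sizes.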
Advanced mathematical proofs like Siegel's lemma build upon this more general concept. Dirichlet published his works in both French and German, using either the German Schubfach or the French tiroir. The strict original meaning of these terms corresponds to the English drawer, that is, an open-topped box that can be slid in and out of the cabinet that contains it. (Dirichlet wrote about distributing pearls among drawers.) These terms morphed to pigeonhole in the sense of a small open space in a desk, cabinet, or wall for keeping letters or papers, metaphorically rooted in structures that house pigeons. Because furniture with pigeonholes is commonly used for storing or sorting things into many categories (such as letters in a post office or room keys in a hotel), the translation pigeonhole may be a better rendering of Dirichlet's original "drawer". That understanding of the term pigeonhole, referring to some furniture features, is fading, especially among those who do not speak English natively but as a lingua franca in the scientific world, in favor of the more pictorial interpretation, literally involving pigeons and holes. The suggestive (though not misleading) interpretation of "pigeonhole" as "dovecote" has lately found its way back to a German back-translation of the "pigeonhole principle" as the "Taubenschlagprinzip".[5] Besides the original terms "Schubfachprinzip" in German[6] and "Principe des tiroirs" in French,[7] other literal translations are still in use in Arabic ("مبدأ برج الحمام"), Bulgarian ("принцип на чекмеджетата"), Chinese ("抽屉原理"), Danish ("Skuffeprincippet"), Dutch ("ladenprincipe"), Hungarian ("skatulyaelv"), Italian ("principio dei cassetti"), Japanese ("引き出し論法"), Persian ("اصل لانه کبوتری"), Polish ("zasada szufladkowa"), Portuguese ("Princípio das Gavetas"), Swedish ("Lådprincipen"), Turkish ("çekmece ilkesi"), and Vietnamese ("nguyên lý hộp"). Suppose a drawer contains a mixture of black socks and blue socks, each of which can be worn on either foot.
You pull a number of socks from the drawer without looking. What is the minimum number of pulled socks required to guarantee a pair of the same color? By the pigeonhole principle (m = 2, using one pigeonhole per color), the answer is three (n = 3 items). Either you have three of one color, or you have two of one color and one of the other. If n people can shake hands with one another (where n > 1), the pigeonhole principle shows that there is always a pair of people who will shake hands with the same number of people. In this application of the principle, the "hole" to which a person is assigned is the number of hands that person shakes. Since each person shakes hands with some number of people from 0 to n − 1, there are n possible holes. On the other hand, either the "0" hole, the "n − 1" hole, or both must be empty, for it is impossible (if n > 1) for some person to shake hands with everybody else while some person shakes hands with nobody. This leaves n people to be placed into at most n − 1 non-empty holes, so the principle applies. This hand-shaking example is equivalent to the statement that in any graph with more than one vertex, there is at least one pair of vertices that share the same degree.[8] This can be seen by associating each person with a vertex and each edge with a handshake. One can demonstrate there must be at least two people in London with the same number of hairs on their heads as follows.[9][10] Since a typical human head has an average of around 150,000 hairs, it is reasonable to assume (as an upper bound) that no one has more than 1,000,000 hairs on their head (m = 1 million holes). There are more than 1,000,000 people in London (n is bigger than 1 million items).
Assigning a pigeonhole to each number of hairs on a person's head, and assigning people to pigeonholes according to the number of hairs on their heads, there must be at least two people assigned to the same pigeonhole by the 1,000,001st assignment (because they have the same number of hairs on their heads; or, n > m). Assuming London has 9.002 million people,[11] it follows that at least ten Londoners have the same number of hairs, as having nine Londoners in each of the 1 million pigeonholes accounts for only 9 million people. For the average case (m = 150,000) with the constraint of fewest overlaps, there will be at most one person assigned to every pigeonhole, with the 150,001st person assigned to the same pigeonhole as someone else. In the absence of this constraint, there may be empty pigeonholes, because the "collision" can happen before the 150,001st person. The principle just proves the existence of an overlap; it says nothing about the number of overlaps (which falls under the subject of probability distribution). There is a passing, satirical allusion in English to this version of the principle in A History of the Athenian Society, prefixed to A Supplement to the Athenian Oracle: Being a Collection of the Remaining Questions and Answers in the Old Athenian Mercuries (printed for Andrew Bell, London, 1710).[12] It seems that the question whether there were any two persons in the World that have an equal number of hairs on their head? had been raised in The Athenian Mercury before 1704.[13][14] Perhaps the first written reference to the pigeonhole principle appears in a short sentence from the French Jesuit Jean Leurechon's 1622 work Selectæ Propositiones:[2] "It is necessary that two men have the same number of hairs, écus, or other things, as each other."[15] The full principle was spelled out two years later, with additional examples, in another book that has often been attributed to Leurechon, but might be by one of his students.[2] The birthday problem asks,
for a set of n randomly chosen people, what is the probability that some pair of them will have the same birthday? The problem itself is mainly concerned with counterintuitive probabilities, but we can also tell by the pigeonhole principle that among 367 people, there is at least one pair of people who share the same birthday with 100% probability, as there are only 366 possible birthdays to choose from. Imagine seven people who want to play in a tournament of teams (n = 7 items), with a limitation of only four teams (m = 4 holes) to choose from. The pigeonhole principle tells us that they cannot all play for different teams; there must be at least one team featuring at least two of the seven players. Any subset of size six from the set S = {1, 2, 3, ..., 9} must contain two elements whose sum is 10. The pigeonholes will be labeled by the two-element subsets {1, 9}, {2, 8}, {3, 7}, {4, 6} and the singleton {5}, five pigeonholes in all. When the six "pigeons" (elements of the size-six subset) are placed into these pigeonholes, each pigeon going into the pigeonhole whose label contains it, at least one of the pigeonholes labeled with a two-element subset will have two pigeons in it.[16] Hashing in computer science is the process of mapping an arbitrarily large set of data n to m fixed-size values. This has applications in caching, whereby large data sets can be stored by a reference to their representative values (their "hash codes") in a "hash table" for fast recall.
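The earlier subset example is small enough to check exhaustively: there are only 84 six-element subsets of {1, ..., 9}, and every one of them contains a pair summing to 10, exactly as the five-pigeonhole argument predicts. A Python sketch (the helper name is illustrative):

```python
# Exhaustive check of the subset example: every 6-element subset of
# {1,...,9} contains two elements summing to 10, because the five holes
# {1,9},{2,8},{3,7},{4,6},{5} cannot absorb six elements without some
# two-element hole receiving both of its members.
from itertools import combinations

def has_pair_summing_to_10(subset):
    return any(a + b == 10 for a, b in combinations(subset, 2))

assert all(has_pair_summing_to_10(s) for s in combinations(range(1, 10), 6))
# With only five elements the conclusion can fail (one element per hole):
assert not has_pair_summing_to_10({1, 2, 3, 4, 5})
```

The five-element counterexample shows the bound of six is sharp.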
Typically, the number of unique objects in a data set n is larger than the number of available unique hash codes m, and the pigeonhole principle holds in this case that hashing those objects is no guarantee of uniqueness: if you hashed all n objects in the data set, some objects must necessarily share the same hash code. The principle can be used to prove that any lossless compression algorithm, provided it makes some inputs smaller (as "compression" suggests), will also make some other inputs larger. Otherwise, the set of all input sequences up to a given length L could be mapped to the (much) smaller set of all sequences of length less than L without collisions (because the compression is lossless), a possibility that the pigeonhole principle excludes. A notable problem in mathematical analysis is, for a fixed irrational number a, to show that the set {[na] : n ∈ ℤ} of fractional parts is dense in [0, 1]. One finds that it is not easy to explicitly find integers n, m such that |na − m| < e, where e > 0 is a small positive number and a is some arbitrary irrational number. But if one takes M such that 1/M < e, by the pigeonhole principle there must be n₁, n₂ ∈ {1, 2, ..., M + 1} such that n₁a and n₂a are in the same integer subdivision of size 1/M (there are only M such subdivisions between consecutive integers). In particular, one can find n₁, n₂ such that n₁a ∈ (p + k/M, p + (k + 1)/M) and n₂a ∈ (q + k/M, q + (k + 1)/M), for some integers p, q and some k in {0, 1, ..., M − 1}. One can then easily verify that (n₂ − n₁)a ∈ (q − p − 1/M, q − p + 1/M). This implies that [na] < 1/M < e, where n = n₂ − n₁ or n = n₁ − n₂. This shows that 0 is a limit point of {[na]}. One can then use this fact to prove the case for p in (0, 1]: find n such that [na] < 1/M < e; then if p ∈ (0, 1/M], the proof is complete.
Otherwise p ∈ (j/M, (j + 1)/M] for some j ≥ 1, and since successive multiples of [na] increase in steps smaller than 1/M, some multiple of [na] falls within 1/M < e of p, and one obtains the general case. Variants occur in a number of proofs. In the proof of the pumping lemma for regular languages, a version that mixes finite and infinite sets is used: if infinitely many objects are placed into finitely many boxes, then two objects share a box.[18] In Fisk's solution to the art gallery problem a sort of converse is used: if n objects are placed into k boxes, then there is a box containing at most n/k objects.[19] The following are alternative formulations of the pigeonhole principle. Let q₁, q₂, ..., qₙ be positive integers. If q₁ + q₂ + ⋯ + qₙ − n + 1 objects are distributed into n boxes, then either the first box contains at least q₁ objects, or the second box contains at least q₂ objects, ..., or the nth box contains at least qₙ objects.[21] The simple form is obtained from this by taking q₁ = q₂ = ... = qₙ = 2, which gives n + 1 objects. Taking q₁ = q₂ = ... = qₙ = r gives the more quantified version of the principle, namely: let n and r be positive integers. If n(r − 1) + 1 objects are distributed into n boxes, then at least one of the boxes contains r or more of the objects.[22] This can also be stated as: if k discrete objects are to be allocated to n containers, then at least one container must hold at least ⌈k/n⌉ objects, where ⌈x⌉ is the ceiling function, denoting the smallest integer greater than or equal to x. Similarly, at least one container must hold no more than ⌊k/n⌋ objects, where ⌊x⌋ is the floor function, denoting the largest integer less than or equal to x. A probabilistic generalization of the pigeonhole principle states that if n pigeons are randomly put into m pigeonholes with uniform probability 1/m, then at least one pigeonhole will hold more than one pigeon with probability 1 − (m)ₙ/mⁿ, where (m)ₙ is the falling factorial m(m − 1)(m − 2)...(m − n + 1).
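The collision probability 1 − (m)ₙ/mⁿ is easy to evaluate directly; a Python sketch (the function name is illustrative) using the standard library's falling-factorial helper `math.perm`:

```python
# Probability that some pigeonhole holds more than one pigeon when n
# pigeons are placed uniformly at random into m holes: 1 - (m)_n / m**n,
# where (m)_n = m(m-1)...(m-n+1) is the falling factorial (math.perm).
from math import perm

def collision_probability(n, m):
    """P(at least one hole holds more than one of n pigeons in m holes)."""
    return 1 - perm(m, n) / m**n

assert round(collision_probability(2, 4), 4) == 0.25
assert round(collision_probability(5, 10), 4) == 0.6976
assert round(collision_probability(10, 20), 4) == 0.9345
```

For n > m, `math.perm(m, n)` is 0 and the formula returns exactly 1, recovering the ordinary pigeonhole principle.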
For n = 0 and for n = 1 (and m > 0), that probability is zero; in other words, if there is just one pigeon, there cannot be a conflict. For n > m (more pigeons than pigeonholes) it is one, in which case it coincides with the ordinary pigeonhole principle. But even if the number of pigeons does not exceed the number of pigeonholes (n ≤ m), due to the random nature of the assignment of pigeons to pigeonholes there is often a substantial chance that clashes will occur. For example, if 2 pigeons are randomly assigned to 4 pigeonholes, there is a 25% chance that at least one pigeonhole will hold more than one pigeon; for 5 pigeons and 10 holes, that probability is 69.76%; and for 10 pigeons and 20 holes it is about 93.45%. If the number of holes stays fixed, there is always a greater probability of a pair when you add more pigeons. This problem is treated at much greater length in the birthday paradox. A further probabilistic generalization is that when a real-valued random variable X has a finite mean E(X), then the probability is nonzero that X is greater than or equal to E(X), and similarly the probability is nonzero that X is less than or equal to E(X). To see that this implies the standard pigeonhole principle, take any fixed arrangement of n pigeons into m holes and let X be the number of pigeons in a hole chosen uniformly at random. The mean of X is n/m, so if there are more pigeons than holes the mean is greater than one. Therefore, X is sometimes at least 2. The pigeonhole principle can be extended to infinite sets by phrasing it in terms of cardinal numbers: if the cardinality of set A is greater than the cardinality of set B, then there is no injection from A to B. However, in this form the principle is tautological, since the meaning of the statement that the cardinality of set A is greater than the cardinality of set B is exactly that there is no injective map from A to B. However, adding at least one element to a finite set is sufficient to ensure that the cardinality increases.
Another way to phrase the pigeonhole principle for finite sets is similar to the principle that finite sets are Dedekind finite: let A and B be finite sets. If there is a surjection from A to B that is not injective, then no surjection from A to B is injective. In fact no function of any kind from A to B is injective. This is not true for infinite sets: consider the function on the natural numbers that sends 1 and 2 to 1, 3 and 4 to 2, 5 and 6 to 3, and so on. There is a similar principle for infinite sets: if uncountably many pigeons are stuffed into countably many pigeonholes, there will exist at least one pigeonhole having uncountably many pigeons stuffed into it. This principle is not a generalization of the pigeonhole principle for finite sets, however: it is in general false for finite sets. In technical terms it says that if A and B are finite sets such that any surjective function from A to B is not injective, then there exists an element b of B such that there exists a bijection between the preimage of b and A. This is a quite different statement, and is absurd for large finite cardinalities. Yakir Aharonov et al. presented arguments that quantum mechanics may violate the pigeonhole principle, and proposed interferometric experiments to test the pigeonhole principle in quantum mechanics.[23] Later research has called this conclusion into question.[24][25] In a January 2015 arXiv preprint, researchers Alastair Rae and Ted Forgan at the University of Birmingham performed a theoretical wave function analysis, employing the standard pigeonhole principle, on the flight of electrons at various energies through an interferometer. If the electrons had no interaction strength at all, they would each produce a single, perfectly circular peak.
At high interaction strength, each electron produces four distinct peaks, for a total of 12 peaks on the detector; these peaks are the result of the four possible interactions each electron could experience (alone, together with the first other particle only, together with the second other particle only, or all three together). If the interaction strength was fairly low, as would be the case in many real experiments, the deviation from a zero-interaction pattern would be nearly indiscernible, much smaller than the lattice spacing of atoms in solids, such as the detectors used for observing these patterns. This would make it very difficult or impossible to distinguish a weak-but-nonzero interaction strength from no interaction whatsoever, and thus give an illusion of three electrons that did not interact despite all three passing through two paths.
https://en.wikipedia.org/wiki/Pigeonhole_principle
In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions and Markov chains. The Carleman matrix of an infinitely differentiable function f(x) is defined as M[f]_{mn} = (1/n!) [dⁿ/dxⁿ (f(x))ᵐ]_{x=0}, so as to satisfy the (Taylor series) equation (f(x))ᵐ = Σₙ M[f]_{mn} xⁿ. For instance, the computation of f(x) simply amounts to the dot product of row 1 of M[f] with the column vector [1, x, x², x³, ...]ᵀ. The entries of M[f] in the next row give the 2nd power of f(x), and, in order to have the zeroth power of f(x) in M[f], we adopt the row 0 containing zeros everywhere except the first position, so that the top row represents f(x)⁰ = 1. Thus, the dot product of M[f] with the column vector [1, x, x², ...]ᵀ yields the column vector [1, f(x), f(x)², ...]ᵀ. A generalization of the Carleman matrix of a function can be defined around any point x₀, for instance as M[f]_{x₀} = M[g], where g(x) = f(x + x₀) − x₀; this allows the matrix power of M[f]_{x₀} to be related to that of M[g]. If we set ψₙ(x) = xⁿ, we have the Carleman matrix. Because h(x) = Σₙ cₙ(h)·ψₙ(x) = Σₙ cₙ(h)·xⁿ, we know that the nth coefficient cₙ(h) must be the nth coefficient of the Taylor series of h.
Therefore cₙ(h) = (1/n!) h⁽ⁿ⁾(0), and so G[f]_{mn} = cₙ(ψₘ ∘ f) = cₙ(f(x)ᵐ) = (1/n!) [dⁿ/dxⁿ (f(x))ᵐ]_{x=0}, which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.) If {eₙ(x)}ₙ is an orthonormal basis for a Hilbert space with a defined inner product ⟨f, g⟩, we can set ψₙ = eₙ, and cₙ(h) will be ⟨h, eₙ⟩. If eₙ(x) = e^{inx}, we have the analogue for Fourier series. Let ĉₙ and Ĝ represent the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, we have Ĝ[f]_{mn} = ĉₙ(eₘ ∘ f) = ⟨eₘ ∘ f, eₙ⟩, the Carleman matrix in the Fourier basis. Carleman matrices satisfy the fundamental relationship M[f ∘ g] = M[f] M[g], which makes the Carleman matrix M a (direct) representation of f(x). Here the term f ∘ g denotes the composition of functions, f(g(x)).
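The composition law can be checked numerically on truncations. For an affine map f(x) = px + q, the binomial theorem gives f(x)ᵐ = Σₙ C(m, n) pⁿ q^{m−n} xⁿ, so the Carleman matrix is lower triangular and its N×N truncation composes exactly. A Python sketch (function names and the sample affine maps are illustrative, not from the text):

```python
# Finite check of M[f o g] = M[f] M[g] on truncated Carleman matrices.
# For affine f(x) = p*x + q the matrix entries follow from the binomial
# theorem and are lower triangular, so the truncation composes exactly.
from math import comb

N = 6  # truncation size: rows and columns 0..N-1

def carleman_affine(p, q):
    """Truncated Carleman matrix of f(x) = p*x + q: M[m][n] = [x^n] f(x)^m."""
    return [[comb(m, n) * p**n * q**(m - n) for n in range(N)] for m in range(N)]

def matmul(A, B):
    return [[sum(A[m][k] * B[k][n] for k in range(N)) for n in range(N)]
            for m in range(N)]

# f(x) = 2x + 3, g(x) = 5x + 7, so (f o g)(x) = 2(5x + 7) + 3 = 10x + 17.
M_f, M_g, M_fg = carleman_affine(2, 3), carleman_affine(5, 7), carleman_affine(10, 17)
assert matmul(M_f, M_g) == M_fg
```

For general analytic f the matrices are infinite and truncation is only approximate; the triangular affine case is chosen precisely so the finite check is exact.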
Other properties include the following. The Carleman matrix of the successor function x ↦ x + 1 is given by the binomial coefficients. The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials, and also to the (unsigned) Stirling numbers of the first kind scaled by factorials. The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials. Simple closed forms likewise exist for the Carleman matrices of constants, of the identity function, of constant additions, of exponential functions, of constant multiples, of linear functions, and of general power series f(x) = Σ_{k≥1} fₖ xᵏ or f(x) = Σ_{k≥0} fₖ xᵏ. The Bell matrix or the Jabotinsky matrix of a function f(x) is defined as the transpose of the Carleman matrix, B[f] = M[f]ᵀ.[1][2][3] These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] Being the transpose of the Carleman matrix, it satisfies B[f ∘ g] = B[g] B[f], which makes the Bell matrix B an anti-representation of f(x).
https://en.wikipedia.org/wiki/Bell_matrix
Incombinatorialmathematics, theexponential formula(called thepolymer expansioninphysics) states that theexponential generating functionfor structures onfinite setsis theexponentialof the exponential generating function for connected structures. The exponential formula is apower seriesversion of a special case ofFaà di Bruno's formula. Here is a purelyalgebraicstatement, as a first introduction to the combinatorial use of the formula. For anyformal power seriesof the formf(x)=a1x+a22x2+a36x3+⋯+ann!xn+⋯{\displaystyle f(x)=a_{1}x+{a_{2} \over 2}x^{2}+{a_{3} \over 6}x^{3}+\cdots +{a_{n} \over n!}x^{n}+\cdots \,}we haveexp⁡f(x)=ef(x)=∑n=0∞bnn!xn,{\displaystyle \exp f(x)=e^{f(x)}=\sum _{n=0}^{\infty }{b_{n} \over n!}x^{n},\,}wherebn=∑π={S1,…,Sk}a|S1|⋯a|Sk|,{\displaystyle b_{n}=\sum _{\pi =\left\{\,S_{1},\,\dots ,\,S_{k}\,\right\}}a_{\left|S_{1}\right|}\cdots a_{\left|S_{k}\right|},}and the indexπ{\displaystyle \pi }runs through allpartitions{S1,…,Sk}{\displaystyle \{S_{1},\ldots ,S_{k}\}}of the set{1,…,n}{\displaystyle \{1,\ldots ,n\}}. (Whenk=0,{\displaystyle k=0,}the product isemptyand by definition equals1{\displaystyle 1}.) One can write the formula in the following form:bn=Bn(a1,a2,…,an),{\displaystyle b_{n}=B_{n}(a_{1},a_{2},\dots ,a_{n}),}and thusexp⁡(∑n=1∞ann!xn)=∑n=0∞Bn(a1,…,an)n!xn,{\displaystyle \exp \left(\sum _{n=1}^{\infty }{a_{n} \over n!}x^{n}\right)=\sum _{n=0}^{\infty }{B_{n}(a_{1},\dots ,a_{n}) \over n!}x^{n},}whereBn(a1,…,an){\displaystyle B_{n}(a_{1},\ldots ,a_{n})}is then{\displaystyle n}th completeBell polynomial. 
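The algebraic statement above can be tested directly: compute b_n once as a sum over set partitions and once as a series coefficient of exp f(x). A sketch assuming Python with sympy (the symbols a1…a5 are illustrative, not from the article):

```python
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

x = sp.symbols('x')
N = 5
a = sp.symbols('a1:6')                     # symbolic coefficients a1, ..., a5

# analytic side: coefficients of exp(f(x)) for f(x) = sum a_n x^n / n!
f = sum(a[n - 1] * x**n / sp.factorial(n) for n in range(1, N + 1))
egf = sp.series(sp.exp(f), x, 0, N + 1).removeO()

# combinatorial side: b_n as a sum of a_{|S1|}...a_{|Sk|} over set partitions of {1,...,n}
for n in range(1, N + 1):
    bn = sum(sp.Mul(*(a[len(block) - 1] for block in p))
             for p in multiset_partitions(list(range(n))))
    assert sp.expand(egf.coeff(x, n) * sp.factorial(n) - bn) == 0
```

For n = 2, for instance, the two partitions {{1},{2}} and {{1,2}} give b_2 = a_1² + a_2, matching 2![x²] exp f(x).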
Alternatively, the exponential formula can also be written using thecycle indexof thesymmetric group, as follows:exp⁡(∑n=1∞anxnn)=∑n=0∞Zn(a1,…,an)xn,{\displaystyle \exp \left(\sum _{n=1}^{\infty }a_{n}{x^{n} \over n}\right)=\sum _{n=0}^{\infty }Z_{n}(a_{1},\dots ,a_{n})x^{n},}whereZn{\displaystyle Z_{n}}stands for the cycle index polynomial for the symmetric groupSn{\displaystyle S_{n}}, defined as:Zn(x1,⋯,xn)=1n!∑σ∈Snx1σ1⋯xnσn{\displaystyle Z_{n}(x_{1},\cdots ,x_{n})={\frac {1}{n!}}\sum _{\sigma \in S_{n}}x_{1}^{\sigma _{1}}\cdots x_{n}^{\sigma _{n}}}andσj{\displaystyle \sigma _{j}}denotes the number of cycles ofσ{\displaystyle \sigma }of sizej∈{1,⋯,n}{\displaystyle j\in \{1,\cdots ,n\}}. This is a consequence of the general relation betweenZn{\displaystyle Z_{n}}andBell polynomials:Zn(x1,…,xn)=1n!Bn(0!x1,1!x2,…,(n−1)!xn).{\displaystyle Z_{n}(x_{1},\dots ,x_{n})={1 \over n!}B_{n}(0!\,x_{1},1!\,x_{2},\dots ,(n-1)!\,x_{n}).} In combinatorial applications, the numbersan{\displaystyle a_{n}}count the number of some sort of "connected" structure on ann{\displaystyle n}-point set, and the numbersbn{\displaystyle b_{n}}count the number of (possibly disconnected) structures. The numbersbn/n!{\displaystyle b_{n}/n!}count the number ofisomorphism classesof structures onn{\displaystyle n}points, with each structure being weighted by the reciprocal of itsautomorphism group, and the numbersan/n!{\displaystyle a_{n}/n!}count isomorphism classes of connected structures in the same way.
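The cycle index Z_n defined above can be computed by brute force over S_n and compared against its known small cases. A sketch in Python with sympy (an assumption of this example; permutations are zero-indexed tuples):

```python
import sympy as sp
from itertools import permutations
from math import factorial

def cycle_counts(perm):
    # counts[j] = number of cycles of length j in the permutation (tuple, i -> perm[i])
    n = len(perm)
    seen, counts = [False] * n, [0] * (n + 1)
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            counts[length] += 1
    return counts

def cycle_index(n, xs):
    # Z_n(x_1,...,x_n) = (1/n!) * sum over sigma in S_n of x_1^{sigma_1} ... x_n^{sigma_n}
    total = sp.Integer(0)
    for p in permutations(range(n)):
        c = cycle_counts(p)
        total += sp.Mul(*(xs[j]**c[j] for j in range(1, n + 1)))
    return sp.expand(total / factorial(n))

xs = sp.symbols('x0:5')    # x0 is unused padding; the variables are x1..x4
Z3 = cycle_index(3, xs)
Z4 = cycle_index(4, xs)
assert Z3 == sp.expand((xs[1]**3 + 3*xs[1]*xs[2] + 2*xs[3]) / 6)
assert Z4 == sp.expand((xs[1]**4 + 6*xs[1]**2*xs[2] + 3*xs[2]**2 + 8*xs[1]*xs[3] + 6*xs[4]) / 24)
```

The Z_3 check reflects the class sizes of S_3: one identity, three transpositions, two 3-cycles.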
https://en.wikipedia.org/wiki/Exponential_formula
Inmathematics, and in particular ingroup theory, acyclic permutationis apermutationconsisting of a single cycle.[1][2]In some cases, cyclic permutations are referred to ascycles;[3]if a cyclic permutation haskelements, it may be called ak-cycle. Some authors widen this definition to include permutations with fixed points in addition to at most one non-trivial cycle.[3][4]Incycle notation, cyclic permutations are denoted by the list of their elements enclosed with parentheses, in the order to which they are permuted. For example, the permutation (1 3 2 4) that sends 1 to 3, 3 to 2, 2 to 4 and 4 to 1 is a 4-cycle, and the permutation (1 3 2)(4) that sends 1 to 3, 3 to 2, 2 to 1 and 4 to 4 is considered a 3-cycle by some authors. On the other hand, the permutation (1 3)(2 4) that sends 1 to 3, 3 to 1, 2 to 4 and 4 to 2 is not a cyclic permutation because it separately permutes the pairs {1, 3} and {2, 4}. For the wider definition of a cyclic permutation, allowing fixed points, these fixed points each constitute trivialorbitsof the permutation, and there is a single non-trivial orbit containing all the remaining points. This can be used as a definition: a cyclic permutation (allowing fixed points) is a permutation that has a single non-trivial orbit. Every permutation on finitely many elements can be decomposed into cyclic permutations whose non-trivial orbits are disjoint.[5] The individual cyclic parts of a permutation are also calledcycles, thus the second example is composed of a 3-cycle and a 1-cycle (orfixed point) and the third is composed of two 2-cycles. There is not widespread consensus about the precise definition of a cyclic permutation. 
Some authors define a permutation σ of a set X to be cyclic if "successive application would take each object of the permuted set successively through the positions of all the other objects",[1] or, equivalently, if its representation in cycle notation consists of a single cycle.[2] Others provide a more permissive definition which allows fixed points.[3][4] A nonempty subset S of X is a cycle of {\displaystyle \sigma } if the restriction of {\displaystyle \sigma } to S is a cyclic permutation of S. If X is finite, its cycles are disjoint, and their union is X. That is, they form a partition, called the cycle decomposition of {\displaystyle \sigma .} So, according to the more permissive definition, a permutation of X is cyclic if and only if X is its unique cycle. For example, the permutation, written in cycle notation and two-line notation (in two ways), has one 6-cycle and two 1-cycles; its cycle diagram is shown at right. Some authors consider this permutation cyclic while others do not. With the enlarged definition, there are cyclic permutations that do not consist of a single cycle. More formally, for the enlarged definition, a permutation {\displaystyle \sigma } of a set X, viewed as a bijective function {\displaystyle \sigma :X\to X}, is called a cycle if the action on X of the subgroup generated by {\displaystyle \sigma } has at most one orbit with more than a single element.[6] This notion is most commonly used when X is a finite set; then the largest orbit, S, is also finite. Let {\displaystyle s_{0}} be any element of S, and put {\displaystyle s_{i}=\sigma ^{i}(s_{0})} for any {\displaystyle i\in \mathbf {Z} }. If S is finite, there is a minimal number {\displaystyle k\geq 1} for which {\displaystyle s_{k}=s_{0}}. Then {\displaystyle S=\{s_{0},s_{1},\ldots ,s_{k-1}\}}, and {\displaystyle \sigma } is the permutation defined by {\displaystyle \sigma (s_{i})=s_{i+1}} (indices taken modulo k), and {\displaystyle \sigma (x)=x} for any element of {\displaystyle X\setminus S}.
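The permissive definition — at most one orbit with more than a single element — is easy to test mechanically. A sketch in Python (an assumption of this example; permutations are zero-indexed tuples, so the article's one-indexed examples are shifted down by one):

```python
def is_cycle(perm):
    """Permissive definition: a permutation (zero-indexed tuple, i -> perm[i])
    is a cycle if at most one of its orbits has more than a single element."""
    n = len(perm)
    seen, nontrivial = [False] * n, 0
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            if length > 1:
                nontrivial += 1
    return nontrivial <= 1

# (1 3 2)(4) from the text, shifted to zero-indexing: 0→2, 2→1, 1→0, 3 fixed
assert is_cycle((2, 0, 1, 3))
# (1 3)(2 4): two separate 2-cycles, so not cyclic under either definition
assert not is_cycle((2, 3, 0, 1))
```

Under the stricter definition one would instead require exactly one orbit covering all of X, i.e. a single cycle of full length.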
The elements not fixed byσ{\displaystyle \sigma }can be pictured as A cyclic permutation can be written using the compactcycle notationσ=(s0s1…sk−1){\displaystyle \sigma =(s_{0}~s_{1}~\dots ~s_{k-1})}(there are no commas between elements in this notation, to avoid confusion with ak-tuple). Thelengthof a cycle is the number of elements of its largest orbit. A cycle of lengthkis also called ak-cycle. The orbit of a 1-cycle is called afixed pointof the permutation, but as a permutation every 1-cycle is theidentity permutation.[7]When cycle notation is used, the 1-cycles are often omitted when no confusion will result.[8] One of the basic results onsymmetric groupsis that any permutation can be expressed as the product ofdisjointcycles (more precisely: cycles with disjoint orbits); such cycles commute with each other, and the expression of the permutation is unique up to the order of the cycles.[a]Themultisetof lengths of the cycles in this expression (thecycle type) is therefore uniquely determined by the permutation, and both the signature and theconjugacy classof the permutation in the symmetric group are determined by it.[9] The number ofk-cycles in the symmetric groupSnis given, for1≤k≤n{\displaystyle 1\leq k\leq n}, by the following equivalent formulas:(nk)(k−1)!=n(n−1)⋯(n−k+1)k=n!(n−k)!k.{\displaystyle {\binom {n}{k}}(k-1)!={\frac {n(n-1)\cdots (n-k+1)}{k}}={\frac {n!}{(n-k)!k}}.} Ak-cycle hassignature(−1)k− 1. Theinverseof a cycleσ=(s0s1…sk−1){\displaystyle \sigma =(s_{0}~s_{1}~\dots ~s_{k-1})}is given by reversing the order of the entries:σ−1=(sk−1…s1s0){\displaystyle \sigma ^{-1}=(s_{k-1}~\dots ~s_{1}~s_{0})}. In particular, since(ab)=(ba){\displaystyle (a~b)=(b~a)}, every two-cycle is its own inverse. Since disjoint cycles commute, the inverse of a product of disjoint cycles is the result of reversing each of the cycles separately. A cycle with only two elements is called atransposition. 
For example, the permutation {\displaystyle \pi ={\begin{pmatrix}1&2&3&4\\1&4&3&2\end{pmatrix}}} swaps 2 and 4. Since it is a 2-cycle, it can be written as {\displaystyle \pi =(2~4)}. Any permutation can be expressed as the composition (product) of transpositions—formally, they are generators for the group.[10] In fact, when the set being permuted is {1, 2, ..., n} for some integer n, then any permutation can be expressed as a product of adjacent transpositions {\displaystyle (1~2),(2~3),(3~4),} and so on. This follows because an arbitrary transposition can be expressed as the product of adjacent transpositions. Concretely, one can express the transposition {\displaystyle (k~~l)} where {\displaystyle k<l} by moving k to l one step at a time, then moving l back to where k was, which interchanges these two and makes no other changes: The decomposition of a permutation into a product of transpositions is obtained for example by writing the permutation as a product of disjoint cycles, and then splitting iteratively each of the cycles of length 3 and longer into a product of a transposition and a cycle of length one less: This means the initial request is to move {\displaystyle a} to {\displaystyle b,} {\displaystyle b} to {\displaystyle c,} {\displaystyle y} to {\displaystyle z,} and finally {\displaystyle z} to {\displaystyle a.} Instead one may roll the elements keeping {\displaystyle a} where it is by executing the right factor first (as usual in operator notation, and following the convention in the article Permutation). This has moved {\displaystyle z} to the position of {\displaystyle b,} so after the first permutation, the elements {\displaystyle a} and {\displaystyle z} are not yet at their final positions.
The transposition(ab),{\displaystyle (a~b),}executed thereafter, then addressesz{\displaystyle z}by the index ofb{\displaystyle b}to swap what initially werea{\displaystyle a}andz.{\displaystyle z.} In fact, thesymmetric groupis aCoxeter group, meaning that it is generated by elements of order 2 (the adjacent transpositions), and all relations are of a certain form. One of the main results on symmetric groups states that either all of the decompositions of a given permutation into transpositions have an even number of transpositions, or they all have an odd number of transpositions.[11]This permits theparity of a permutationto be awell-definedconcept. This article incorporates material from cycle onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
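The splitting of cycles into transpositions described above can be sketched in code: each cycle (s0 s1 … sk−1) is written as (s0 sk−1)(s0 sk−2)⋯(s0 s1), and composing those factors (rightmost first) recovers the permutation. A Python sketch, assuming zero-indexed tuples (not part of the article):

```python
def as_transpositions(perm):
    """Decompose a permutation of {0,...,n-1} (tuple: i -> perm[i]) into
    transpositions by splitting each cycle (s0 s1 ... sk-1) into
    (s0 sk-1)(s0 sk-2)...(s0 s1)."""
    n, seen, ts = len(perm), set(), []
    for i in range(n):
        if i not in seen:
            cycle, j = [], i
            while j not in seen:
                seen.add(j)
                cycle.append(j)
                j = perm[j]
            for s in reversed(cycle[1:]):
                ts.append((cycle[0], s))
    return ts

def product_of_transpositions(n, ts):
    """Compose the transpositions as functions, rightmost factor acting first."""
    perm = list(range(n))
    for a, b in reversed(ts):
        perm = [b if v == a else a if v == b else v for v in perm]
    return tuple(perm)

# (0 1 2)(3 4) as a zero-indexed tuple
p = (1, 2, 0, 4, 3)
ts = as_transpositions(p)
assert product_of_transpositions(5, ts) == p
assert len(ts) == 3     # odd: a 3-cycle (two transpositions) times a 2-cycle (one)
```

Other decompositions of the same permutation exist, but by the parity theorem they all use an odd number of transpositions here.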
https://en.wikipedia.org/wiki/Cyclic_permutation
Inmathematics, apermutationof asetcan mean one of two different things: An example of the first meaning is the six permutations (orderings) of the set {1, 2, 3}: written astuples, they are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1).Anagramsof a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations offinite setsis an important topic incombinatoricsandgroup theory. Permutations are used in almost every branch of mathematics and in many other fields of science. Incomputer science, they are used for analyzingsorting algorithms; inquantum physics, for describing states of particles; and inbiology, for describingRNAsequences. The number of permutations ofndistinct objects isnfactorial, usually written asn!, which means the product of all positive integers less than or equal ton. According to the second meaning, a permutation of asetSis defined as abijectionfromSto itself.[2][3]That is, it is afunctionfromStoSfor which every element occurs exactly once as animagevalue. Such a functionσ:S→S{\displaystyle \sigma :S\to S}is equivalent to the rearrangement of the elements ofSin which each elementiis replaced by the correspondingσ(i){\displaystyle \sigma (i)}. For example, the permutation (3, 1, 2) corresponds to the functionσ{\displaystyle \sigma }defined asσ(1)=3,σ(2)=1,σ(3)=2.{\displaystyle \sigma (1)=3,\quad \sigma (2)=1,\quad \sigma (3)=2.}The collection of all permutations of a set form agroupcalled thesymmetric groupof the set. Thegroup operationis thecomposition of functions(performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard setS={1,2,…,n}{\displaystyle S=\{1,2,\ldots ,n\}}. 
In elementary combinatorics, thek-permutations, orpartial permutations, are the ordered arrangements ofkdistinct elements selected from a set. Whenkis equal to the size of the set, these are the permutations in the previous sense. Permutation-like objects calledhexagramswere used in China in theI Ching(Pinyin: Yi Jing) as early as 1000 BC. In Greece,Plutarchwrote thatXenocratesof Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations.[4] Al-Khalil(717–786), anArab mathematicianandcryptographer, wrote theBook of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possibleArabicwords with and without vowels.[5] The rule to determine the number of permutations ofnobjects was known in Indian culture around 1150 AD. TheLilavatiby the Indian mathematicianBhāskara IIcontains a passage that translates as follows: The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures.[6] In 1677,Fabian Stedmandescribed factorials when explaining the number of permutations of bells inchange ringing. Starting from two bells: "first,twomust be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1.[7]He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain".[8]He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. 
He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations.[9] At this point he gives up and remarks: Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body;[10] Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20.[11] A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it. The study of permutations as substitutions on n elements led to the notion of group as algebraic structure, through the works of Cauchy (1815 memoir). Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely, that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher at the turn of 1932–1933.[12][13] In mathematics texts it is customary to denote permutations using lowercase Greek letters.
Commonly, eitherα,β,γ{\displaystyle \alpha ,\beta ,\gamma }orσ,τ,ρ,π{\displaystyle \sigma ,\tau ,\rho ,\pi }are used.[14] A permutation can be defined as abijection(an invertible mapping, a one-to-one and onto function) from a setSto itself: σ:S⟶∼S.{\displaystyle \sigma :S\ {\stackrel {\sim }{\longrightarrow }}\ S.} Theidentity permutationis defined byσ(x)=x{\displaystyle \sigma (x)=x}for all elementsx∈S{\displaystyle x\in S}, and can be denoted by the number1{\displaystyle 1},[a]byid=idS{\displaystyle {\text{id}}={\text{id}}_{S}}, or by a single 1-cycle (x).[15][16]The set of all permutations of a set withnelements forms thesymmetric groupSn{\displaystyle S_{n}}, where thegroup operationiscomposition of functions. Thus for two permutationsσ{\displaystyle \sigma }andτ{\displaystyle \tau }in the groupSn{\displaystyle S_{n}}, their productπ=στ{\displaystyle \pi =\sigma \tau }is defined by: π(i)=σ(τ(i)).{\displaystyle \pi (i)=\sigma (\tau (i)).} Composition is usually written without a dot or other sign. In general, composition of two permutations is notcommutative:τσ≠στ.{\displaystyle \tau \sigma \neq \sigma \tau .} As a bijection from a set to itself, a permutation is a function thatperformsa rearrangement of a set, termed anactive permutationorsubstitution. An older viewpoint sees a permutation as an ordered arrangement or list of all the elements ofS, called apassive permutation.[17]According to this definition, all permutations in§ One-line notationare passive. This meaning is subtly distinct from how passive (i.e.alias) is used inActive and passive transformationand elsewhere,[18][19]which would consider all permutations open to passive interpretation (regardless of whether they are in one-line notation, two-line notation, etc.). A permutationσ{\displaystyle \sigma }can be decomposed into one or more disjointcycleswhich are theorbitsof the cyclic group⟨σ⟩={1,σ,σ2,…}{\displaystyle \langle \sigma \rangle =\{1,\sigma ,\sigma ^{2},\ldots \}}actingon the setS. 
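The composition rule π(i) = σ(τ(i)) and its non-commutativity can be illustrated with a few lines of code. A Python sketch (an assumption of this example; permutations are zero-indexed tuples, so the one-line permutations 213 and 231 become (1, 0, 2) and (1, 2, 0)):

```python
def compose(sigma, tau):
    """(σ·τ)(i) = σ(τ(i)): the rightmost permutation is applied first.
    Permutations are zero-indexed tuples with sigma[i] = image of i."""
    return tuple(sigma[tau[i]] for i in range(len(tau)))

sigma = (1, 0, 2)   # one-line 213: swaps the first two elements
tau   = (1, 2, 0)   # one-line 231: the 3-cycle 1→2→3→1

assert compose(sigma, tau) == (0, 2, 1)   # one-line 132
assert compose(tau, sigma) == (2, 1, 0)   # one-line 321
assert compose(sigma, tau) != compose(tau, sigma)   # composition is not commutative
```

This is the same pair σ = 213, τ = 231 with στ = 132 that reappears later in the permutation-matrix example.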
A cycle is found by repeatedly applying the permutation to an element:x,σ(x),σ(σ(x)),…,σk−1(x){\displaystyle x,\sigma (x),\sigma (\sigma (x)),\ldots ,\sigma ^{k-1}(x)}, where we assumeσk(x)=x{\displaystyle \sigma ^{k}(x)=x}. A cycle consisting ofkelements is called ak-cycle. (See§ Cycle notationbelow.) Afixed pointof a permutationσ{\displaystyle \sigma }is an elementxwhich is taken to itself, that isσ(x)=x{\displaystyle \sigma (x)=x}, forming a 1-cycle(x){\displaystyle (\,x\,)}. A permutation with no fixed points is called aderangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called atransposition. Several notations are widely used to represent permutations conveniently.Cycle notationis a popular choice, as it is compact and shows the permutation's structure clearly. This article will use cycle notation unless otherwise specified. Cauchy'stwo-line notation[20][21]lists the elements ofSin the first row, and the image of each element below it in the second row. For example, the permutation ofS= {1, 2, 3, 4, 5, 6} given by the function σ(1)=2,σ(2)=6,σ(3)=5,σ(4)=4,σ(5)=3,σ(6)=1{\displaystyle \sigma (1)=2,\ \ \sigma (2)=6,\ \ \sigma (3)=5,\ \ \sigma (4)=4,\ \ \sigma (5)=3,\ \ \sigma (6)=1} can be written as The elements ofSmay appear in any order in the first row, so this permutation could also be written: If there is a "natural" order for the elements ofS,[b]sayx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}, then one uses this for the first row of the two-line notation: Under this assumption, one may omit the first row and write the permutation inone-line notationas that is, as an ordered arrangement of the elements ofS.[22][23]Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. 
The one-line notation is also called thewordrepresentation.[24] The example above would then be: σ=(123456265431)=265431.{\displaystyle \sigma ={\begin{pmatrix}1&2&3&4&5&6\\2&6&5&4&3&1\end{pmatrix}}=265431.} (It is typical to use commas to separate these entries only if some have two or more digits.) This compact form is common in elementarycombinatoricsandcomputer science. It is especially useful in applications where the permutations are to be compared aslarger or smallerusinglexicographic order. Cycle notation describes the effect of repeatedly applying the permutation on the elements of the setS, with an orbit being called acycle. The permutation is written as a list of cycles; since distinct cycles involvedisjointsets of elements, this is referred to as "decomposition into disjoint cycles". To write down the permutationσ{\displaystyle \sigma }in cycle notation, one proceeds as follows: Also, it is common to omit 1-cycles, since these can be inferred: for any elementxinSnot appearing in any cycle, one implicitly assumesσ(x)=x{\displaystyle \sigma (x)=x}.[25] Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (acyclic permutationhaving only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. 
For example, the one-line permutation {\displaystyle \sigma =265431} can be written in cycle notation as: {\displaystyle \sigma =(126)(35)(4)=(126)(35).} This may be seen as the composition {\displaystyle \sigma =\kappa _{1}\kappa _{2}} of cyclic permutations: {\displaystyle \kappa _{1}=(126)=(126)(3)(4)(5),\quad \kappa _{2}=(35)=(35)(1)(2)(6).} While permutations in general do not commute, disjoint cycles do; for example: {\displaystyle \sigma =(126)(35)=(35)(126).} Also, each cycle can be rewritten from a different starting point; for example, {\displaystyle \sigma =(126)(35)=(261)(53).} Thus one may write the disjoint cycles of a given permutation in many different ways. A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example, {\displaystyle \sigma ^{-1}=\left((126)(35)\right)^{-1}=(621)(53).} In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation: For example, {\displaystyle (513)(6)(827)(94)} is a permutation of {\displaystyle S=\{1,2,\ldots ,9\}} in canonical cycle notation.[26] Richard Stanley calls this the "standard representation" of a permutation,[27] and Martin Aigner uses "standard form".[24] Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements.[28] There are two ways to denote the composition of two permutations. In the most common notation, {\displaystyle \sigma \cdot \tau } is the function that maps any element x to {\displaystyle \sigma (\tau (x))}.
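The conversion from one-line notation to disjoint cycles, and the inverse-by-reversal rule, can be sketched directly. A Python sketch (an assumption of this example; here cycles are started from their smallest element, one of several equally valid conventions):

```python
def cycles(one_line):
    """Disjoint-cycle decomposition of a permutation of {1,...,n} given in
    one-line notation (a tuple of images). 1-cycles are omitted, and each
    cycle is written starting from its smallest element."""
    n = len(one_line)
    sigma = {i + 1: one_line[i] for i in range(n)}
    seen, result = set(), []
    for start in range(1, n + 1):
        if start not in seen and sigma[start] != start:
            cycle, j = [], start
            while j not in seen:
                seen.add(j)
                cycle.append(j)
                j = sigma[j]
            result.append(tuple(cycle))
    return result

sigma = (2, 6, 5, 4, 3, 1)                    # the one-line permutation 265431
assert cycles(sigma) == [(1, 2, 6), (3, 5)]   # (126)(35), with the 1-cycle (4) omitted

# the inverse reverses each cycle: (126)(35) inverted is (621)(53)
inverse = tuple(sigma.index(v) + 1 for v in range(1, 7))
assert cycles(inverse) == [(1, 6, 2), (3, 5)]  # (162) is (621) restarted at its smallest element
```

Since a cycle may be rotated to any starting point, (162) and (621) denote the same cyclic permutation.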
The rightmost permutation is applied to the argument first,[29]because the argument is written to the right of the function. Adifferentrule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first.[30][31][32]In this notation, the permutation is often written as an exponent, soσacting onxis writtenxσ; then the product is defined byxσ⋅τ=(xσ)τ{\displaystyle x^{\sigma \cdot \tau }=(x^{\sigma })^{\tau }}. This article uses the first definition, where the rightmost permutation is applied first. Thefunction compositionoperation satisfies the axioms of agroup. It isassociative, meaning(ρσ)τ=ρ(στ){\displaystyle (\rho \sigma )\tau =\rho (\sigma \tau )}, and products of more than two permutations are usually written without parentheses. The composition operation also has anidentity element(the identity permutationid{\displaystyle {\text{id}}}), and each permutationσ{\displaystyle \sigma }has an inverseσ−1{\displaystyle \sigma ^{-1}}(itsinverse function) withσ−1σ=σσ−1=id{\displaystyle \sigma ^{-1}\sigma =\sigma \sigma ^{-1}={\text{id}}}. The concept of a permutation as an ordered arrangement admits several generalizations that have been calledpermutations, especially in older literature. 
In older literature and elementary textbooks, a k-permutation of n (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a k-element subset of an n-set.[c][33][34] The number of such k-permutations (k-arrangements) of {\displaystyle n} is denoted variously by such symbols as {\displaystyle P_{k}^{n}}, {\displaystyle _{n}P_{k}}, {\displaystyle ^{n}\!P_{k}}, {\displaystyle P_{n,k}}, {\displaystyle P(n,k)}, or {\displaystyle A_{n}^{k}},[35] computed by the formula:[36] {\displaystyle P(n,k)=n\cdot (n-1)\cdot (n-2)\cdots (n-k+1),} which is 0 when k > n, and otherwise is equal to {\displaystyle {\frac {n!}{(n-k)!}}.} The product is well defined without the assumption that {\displaystyle n} is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol {\displaystyle (n)_{k}} or as the {\displaystyle k}-th falling factorial power {\displaystyle n^{\underline {k}}}: {\displaystyle P(n,k)={_{n}}P_{k}=(n)_{k}=n^{\underline {k}}.} This usage of the term permutation is closely associated with the term combination to mean a subset. A k-combination of a set S is a k-element subset of S: the elements of a combination are not ordered. Ordering the k-combinations of S in all possible ways produces the k-permutations of S. The number of k-combinations of an n-set, C(n, k), is therefore related to the number of k-permutations of n by: {\displaystyle C(n,k)={\frac {P(n,k)}{k!}}.} These numbers are also known as binomial coefficients, usually denoted {\displaystyle {\tbinom {n}{k}}}: {\displaystyle C(n,k)={_{n}}C_{k}={\binom {n}{k}}.} Ordered arrangements of k elements of a set S, where repetition is allowed, are called k-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words or strings over the alphabet S.
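The relations P(n, k) = n!/(n−k)! and C(n, k) = P(n, k)/k! are built into Python's standard library (since Python 3.8), which makes a quick numeric sanity check easy; the values 5040 and 210 below are for n = 10, k = 4:

```python
from math import comb, factorial, perm

n, k = 10, 4
# P(n, k): ordered arrangements of k elements chosen from n
assert perm(n, k) == factorial(n) // factorial(n - k) == 5040
# C(n, k) = P(n, k) / k!: unordered k-element subsets
assert comb(n, k) == perm(n, k) // factorial(k) == 210
# with repetition allowed there are n**k ordered k-tuples instead
assert n ** k == 10000
```

The three counts illustrate the three regimes discussed in this passage: ordered without repetition, unordered, and ordered with repetition.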
If the setShasnelements, the number ofk-tuples overSisnk.{\displaystyle n^{k}.} IfMis a finitemultiset, then amultiset permutationis an ordered arrangement of elements ofMin which each element appears a number of times equal exactly to its multiplicity inM. Ananagramof a word having some repeated letters is an example of a multiset permutation.[d]If the multiplicities of the elements ofM(taken in some order) arem1{\displaystyle m_{1}},m2{\displaystyle m_{2}}, ...,ml{\displaystyle m_{l}}and their sum (that is, the size ofM) isn, then the number of multiset permutations ofMis given by themultinomial coefficient,[37] For example, the number of distinct anagrams of the word MISSISSIPPI is:[38] Ak-permutationof a multisetMis a sequence ofkelements ofMin which each element appearsa number of times less than or equal toits multiplicity inM(an element'srepetition number). Permutations, when considered as arrangements, are sometimes referred to aslinearly orderedarrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called acircular permutation.[39][e]These can be formally defined asequivalence classesof ordinary permutations of these objects, for theequivalence relationgenerated by moving the final element of the linear arrangement to its front. Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same. The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other. There are (n– 1)! circular permutations of a set withnelements. The number of permutations ofndistinct objects isn!. 
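The MISSISSIPPI count mentioned above follows from the multinomial coefficient 11!/(1!·4!·4!·2!) = 34650. A short Python sketch (an assumption of this example, not part of the article):

```python
from math import factorial
from collections import Counter

def multiset_permutation_count(word):
    """Multinomial coefficient n! / (m1! m2! ... ml!) for the letter
    multiplicities of word."""
    counts = Counter(word)
    total = factorial(sum(counts.values()))
    for m in counts.values():
        total //= factorial(m)
    return total

# MISSISSIPPI: 11 letters with multiplicities M:1, I:4, S:4, P:2
assert multiset_permutation_count("MISSISSIPPI") == 34650
# and, from the circular-permutation paragraph: (n-1)! arrangements of n objects in a circle
assert factorial(5 - 1) == 24
```

When all letters are distinct the multiplicities are all 1 and the count reduces to n!, the ordinary permutation count.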
The number of n-permutations with k disjoint cycles is the signless Stirling number of the first kind, denoted {\displaystyle c(n,k)} or {\displaystyle [{\begin{smallmatrix}n\\k\end{smallmatrix}}]}.[40] The cycles (including the fixed points) of a permutation {\displaystyle \sigma } of a set with n elements partition that set; so the lengths of these cycles form an integer partition of n, which is called the cycle type (or sometimes cycle structure or cycle shape) of {\displaystyle \sigma }. There is a "1" in the cycle type for every fixed point of {\displaystyle \sigma }, a "2" for every transposition, and so on. The cycle type of {\displaystyle \beta =(1\,2\,5\,)(\,3\,4\,)(6\,8\,)(\,7\,)} is {\displaystyle (3,2,2,1).} This may also be written in a more compact form as {\displaystyle [1^{1}2^{2}3^{1}]}. More precisely, the general form is {\displaystyle [1^{\alpha _{1}}2^{\alpha _{2}}\dotsm n^{\alpha _{n}}]}, where {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} are the numbers of cycles of respective length. The number of permutations of a given cycle type is[41] {\displaystyle {\frac {n!}{1^{\alpha _{1}}\alpha _{1}!\,2^{\alpha _{2}}\alpha _{2}!\dotsm n^{\alpha _{n}}\alpha _{n}!}}.} The number of cycle types of a set with n elements equals the value of the partition function {\displaystyle p(n)}. Pólya's cycle index polynomial is a generating function which counts permutations by their cycle type. In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle type is preserved in the special case of conjugating a permutation {\displaystyle \sigma } by another permutation {\displaystyle \pi }, which means forming the product {\displaystyle \pi \sigma \pi ^{-1}}.
Here, {\displaystyle \pi \sigma \pi ^{-1}} is the conjugate of {\displaystyle \sigma } by {\displaystyle \pi } and its cycle notation can be obtained by taking the cycle notation for {\displaystyle \sigma } and applying {\displaystyle \pi } to all the entries in it.[42] It follows that two permutations are conjugate exactly when they have the same cycle type. The order of a permutation {\displaystyle \sigma } is the smallest positive integer m so that {\displaystyle \sigma ^{m}=\mathrm {id} }. It is the least common multiple of the lengths of its cycles. For example, the order of {\displaystyle \sigma =(152)(34)} is {\displaystyle {\text{lcm}}(3,2)=6}. Every permutation of a finite set can be expressed as the product of transpositions.[43] Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number. This result can be extended so as to assign a sign, written {\displaystyle \operatorname {sgn} \sigma }, to each permutation. {\displaystyle \operatorname {sgn} \sigma =+1} if {\displaystyle \sigma } is even and {\displaystyle \operatorname {sgn} \sigma =-1} if {\displaystyle \sigma } is odd. Then for two permutations {\displaystyle \sigma } and {\displaystyle \pi }, {\displaystyle \operatorname {sgn} (\sigma \pi )=\operatorname {sgn} \sigma \cdot \operatorname {sgn} \pi .} It follows that {\displaystyle \operatorname {sgn} \left(\sigma \sigma ^{-1}\right)=+1.} The sign of a permutation is equal to the determinant of its permutation matrix (below). A permutation matrix is an n×n matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation of {1, 2, ..., n}.
One natural approach is to defineLσ{\displaystyle L_{\sigma }}to be thelinear transformationofRn{\displaystyle \mathbb {R} ^{n}}which permutes thestandard basis{e1,…,en}{\displaystyle \{\mathbf {e} _{1},\ldots ,\mathbf {e} _{n}\}}byLσ(ej)=eσ(j){\displaystyle L_{\sigma }(\mathbf {e} _{j})=\mathbf {e} _{\sigma (j)}}, and defineMσ{\displaystyle M_{\sigma }}to be its matrix. That is,Mσ{\displaystyle M_{\sigma }}has itsjthcolumn equal to the n × 1 column vectoreσ(j){\displaystyle \mathbf {e} _{\sigma (j)}}: its (i,j) entry is 1 ifi=σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations: MσMτ=Mστ{\displaystyle M_{\sigma }M_{\tau }=M_{\sigma \tau }}. For example, the one-line permutationsσ=213,τ=231{\displaystyle \sigma =213,\ \tau =231}have productστ=132{\displaystyle \sigma \tau =132}, and the corresponding matrices are:MσMτ=(010100001)(001100010)=(100001010)=Mστ.{\displaystyle M_{\sigma }M_{\tau }={\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}}=M_{\sigma \tau }.} It is also common in the literature to find the inverse convention, where a permutationσis associated to the matrixPσ=(Mσ)−1=(Mσ)T{\displaystyle P_{\sigma }=(M_{\sigma })^{-1}=(M_{\sigma })^{T}}whose (i,j) entry is 1 ifj=σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is,PσPτ=Pτσ{\displaystyle P_{\sigma }P_{\tau }=P_{\tau \sigma }}. In this correspondence, permutation matrices act on the right side of the standard1×n{\displaystyle 1\times n}row vectors(ei)T{\displaystyle ({\bf {e}}_{i})^{T}}:(ei)TPσ=(eσ(i))T{\displaystyle ({\bf {e}}_{i})^{T}P_{\sigma }=({\bf {e}}_{\sigma (i)})^{T}}. TheCayley tableon the right shows these matrices for permutations of 3 elements. 
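The column convention and the compatibility rule MσMτ = Mστ can be checked with a few lines of Python (helper names are ours; indices 0-based, so the one-line permutations 213 and 231 become [1, 0, 2] and [1, 2, 0]):

```python
def perm_matrix(perm):
    """M_sigma whose j-th column is e_{sigma(j)}: entry (i, j) is 1
    iff i == perm[j], for a 0-based one-line permutation."""
    n = len(perm)
    return [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(sigma, tau):
    """(sigma tau)(j) = sigma(tau(j))."""
    return [sigma[t] for t in tau]

sigma = [1, 0, 2]  # one-line 213, 0-based
tau   = [1, 2, 0]  # one-line 231, 0-based

# M_sigma M_tau equals M_{sigma tau}, matching the worked example:
assert matmul(perm_matrix(sigma), perm_matrix(tau)) == perm_matrix(compose(sigma, tau))
print(compose(sigma, tau))  # [0, 2, 1], i.e. one-line 132
```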
In some applications, the elements of the set being permuted will be compared with each other. This requires that the setShas atotal orderso that any two elements can be compared. The set {1, 2, ...,n} with the usual ≤ relation is the most frequently used set in these applications. A number of properties of a permutation are directly related to the total ordering ofS,considering the permutation written in one-line notation as a sequenceσ=σ(1)σ(2)⋯σ(n){\displaystyle \sigma =\sigma (1)\sigma (2)\cdots \sigma (n)}. Anascentof a permutationσofnis any positioni<nwhere the following value is bigger than the current one. That is,iis an ascent ifσ(i)<σ(i+1){\displaystyle \sigma (i)<\sigma (i{+}1)}. For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6. Similarly, adescentis a positioni<nwithσ(i)>σ(i+1){\displaystyle \sigma (i)>\sigma (i{+}1)}, so everyiwith1≤i<n{\displaystyle 1\leq i<n}is either an ascent or a descent. Anascending runof a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast anincreasing subsequenceof a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation hask− 1 descents, then it must be the union ofkascending runs.[44] The number of permutations ofnwithkascents is (by definition) theEulerian number⟨nk⟩{\displaystyle \textstyle \left\langle {n \atop k}\right\rangle }; this is also the number of permutations ofnwithkdescents. 
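Ascents, descents, and ascending runs are easy to compute directly from the one-line notation, and tallying ascents over all permutations recovers the Eulerian numbers. A sketch (helper names are ours; positions reported 1-based as in the text):

```python
from collections import Counter
from itertools import permutations

def ascents(perm):
    """1-based positions i < n with perm(i) < perm(i+1)."""
    return [i + 1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1]]

def descents(perm):
    """1-based positions i < n with perm(i) > perm(i+1)."""
    return [i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]]

def ascending_runs(perm):
    """Maximal increasing contiguous subsequences."""
    runs, current = [], [perm[0]]
    for x in perm[1:]:
        if x > current[-1]:
            current.append(x)
        else:
            runs.append(current)
            current = [x]
    runs.append(current)
    return runs

print(ascents([3, 4, 5, 2, 1, 6, 7]))          # [1, 2, 5, 6], as in the text
print(ascending_runs([2, 4, 5, 3, 1, 6, 7]))   # [[2, 4, 5], [3], [1, 6, 7]]

# Counting permutations of 4 by number of ascents gives the Eulerian
# numbers 1, 11, 11, 1:
eulerian = Counter(len(ascents(p)) for p in permutations(range(1, 5)))
print([eulerian[k] for k in range(4)])          # [1, 11, 11, 1]
```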
Some authors however define the Eulerian number⟨nk⟩{\displaystyle \textstyle \left\langle {n \atop k}\right\rangle }as the number of permutations withkascending runs, which corresponds tok− 1descents.[45] An exceedance of a permutationσ1σ2...σnis an indexjsuch thatσj>j. If the inequality is not strict (that is,σj≥j), thenjis called aweak exceedance. The number ofn-permutations withkexceedances coincides with the number ofn-permutations withkdescents.[46] Arecordorleft-to-right maximumof a permutationσis an elementisuch thatσ(j) <σ(i) for allj < i. Foata'sfundamental bijectiontransforms a permutationσwith a given canonical cycle form into the permutationf(σ)=σ^{\displaystyle f(\sigma )={\hat {\sigma }}}whose one-line notation has the same sequence of elements with parentheses removed.[27][47]For example:σ=(513)(6)(827)(94)=(123456789375916824),{\displaystyle \sigma =(513)(6)(827)(94)={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\3&7&5&9&1&6&8&2&4\end{pmatrix}},} σ^=513682794=(123456789513682794).{\displaystyle {\hat {\sigma }}=513682794={\begin{pmatrix}1&2&3&4&5&6&7&8&9\\5&1&3&6&8&2&7&9&4\end{pmatrix}}.} Here the first element in each canonical cycle ofσbecomes a record (left-to-right maximum) ofσ^{\displaystyle {\hat {\sigma }}}. Givenσ^{\displaystyle {\hat {\sigma }}}, one may find its records and insert parentheses to construct the inverse transformationσ=f−1(σ^){\displaystyle \sigma =f^{-1}({\hat {\sigma }})}. Underlining the records in the above example:σ^=5_136_8_279_4{\displaystyle {\hat {\sigma }}={\underline {5}}\,1\,3\,{\underline {6}}\,{\underline {8}}\,2\,7\,{\underline {9}}\,4}, which allows the reconstruction of the cycles ofσ. The following table showsσ^{\displaystyle {\hat {\sigma }}}andσfor the six permutations ofS= {1, 2, 3}, with the bold text on each side showing the notation used in the bijection: one-line notation forσ^{\displaystyle {\hat {\sigma }}}and canonical cycle notation forσ. 
σ^=f(σ)σ=f−1(σ^)123=(1)(2)(3)123=(1)(2)(3)132=(1)(32)132=(1)(32)213=(21)(3)213=(21)(3)231=(312)321=(2)(31)312=(321)231=(312)321=(2)(31)312=(321){\displaystyle {\begin{array}{l|l}{\hat {\sigma }}=f(\sigma )&\sigma =f^{-1}({\hat {\sigma }})\\\hline \mathbf {123} =(\,1\,)(\,2\,)(\,3\,)&123=\mathbf {(\,1\,)(\,2\,)(\,3\,)} \\\mathbf {132} =(\,1\,)(\,3\,2\,)&132=\mathbf {(\,1\,)(\,3\,2\,)} \\\mathbf {213} =(\,2\,1\,)(\,3\,)&213=\mathbf {(\,2\,1\,)(\,3\,)} \\\mathbf {231} =(\,3\,1\,2\,)&321=\mathbf {(\,2\,)(\,3\,1\,)} \\\mathbf {312} =(\,3\,2\,1\,)&231=\mathbf {(\,3\,1\,2\,)} \\\mathbf {321} =(\,2\,)(\,3\,1\,)&312=\mathbf {(\,3\,2\,1\,)} \end{array}}}As a first corollary, the number ofn-permutations with exactlykrecords is equal to the number ofn-permutations with exactlykcycles: this last number is the signlessStirling number of the first kind,c(n,k){\displaystyle c(n,k)}. Furthermore, Foata's mapping takes ann-permutation withkweak exceedances to ann-permutation withk− 1ascents.[47]For example, (2)(31) = 321 hask =2 weak exceedances (at index 1 and 2), whereasf(321) = 231hask− 1 = 1ascent (at index 1; that is, from 2 to 3). Aninversionof a permutationσis a pair(i,j)of positions where the entries of a permutation are in the opposite order:i<j{\displaystyle i<j}andσ(i)>σ(j){\displaystyle \sigma (i)>\sigma (j)}.[49]Thus a descent is an inversion at two adjacent positions. For example,σ= 23154has (i,j) = (1, 3), (2, 3), and (4, 5), where (σ(i),σ(j)) = (2, 1), (3, 1), and (5, 4). Sometimes an inversion is defined as the pair of values (σ(i),σ(j)); this makes no difference for thenumberof inversions, and the reverse pair (σ(j),σ(i)) is an inversion in the above sense for the inverse permutationσ−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same forσand forσ−1. 
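Foata's bijection as described above (write each cycle with its largest element first, order the cycles by increasing leaders, erase the parentheses) can be sketched as follows; the function name `foata` is ours, and permutations are 1-based one-line lists:

```python
def foata(perm):
    """Foata's fundamental bijection: decompose the 1-based one-line
    permutation into cycles, rotate each cycle so its largest element
    leads, sort cycles by their leaders, and concatenate."""
    n, seen, cycles = len(perm), set(), []
    for start in range(1, n + 1):
        if start not in seen:
            cycle, j = [], start
            while j not in seen:
                seen.add(j)
                cycle.append(j)
                j = perm[j - 1]
            m = cycle.index(max(cycle))       # rotate: largest element first
            cycles.append(cycle[m:] + cycle[:m])
    cycles.sort(key=lambda c: c[0])           # increasing cycle leaders
    return [x for c in cycles for x in c]

# sigma = (513)(6)(827)(94) from the text, in one-line notation:
sigma = [3, 7, 5, 9, 1, 6, 8, 2, 4]
print(foata(sigma))  # [5, 1, 3, 6, 8, 2, 7, 9, 4], i.e. sigma-hat
```

The cycle leaders 5, 6, 8, 9 become exactly the records (left-to-right maxima) of the output, which is what makes the map invertible.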
To bring a permutation withkinversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by)adjacent transpositions, is always possible and requires a sequence ofksuch operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition ofiandi+ 1whereiis a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent.Bubble sortandinsertion sortcan be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutationσcan be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transformsσinto the identity. In fact, by enumerating all sequences of adjacent transpositions that would transformσinto the identity, one obtains (after reversal) acompletelist of all expressions of minimal length writingσas a product of adjacent transpositions. The number of permutations ofnwithkinversions is expressed by aMahonian number.[50]This is the coefficient ofqk{\displaystyle q^{k}}in the expansion of the product [n]q!=∏m=1n∑i=0m−1qi=1(1+q)(1+q+q2)⋯(1+q+q2+⋯+qn−1),{\displaystyle [n]_{q}!=\prod _{m=1}^{n}\sum _{i=0}^{m-1}q^{i}=1\left(1+q\right)\left(1+q+q^{2}\right)\cdots \left(1+q+q^{2}+\cdots +q^{n-1}\right),} The notation[n]q!{\displaystyle [n]_{q}!}denotes theq-factorial. This expansion commonly appears in the study ofnecklaces. Letσ∈Sn,i,j∈{1,2,…,n}{\displaystyle \sigma \in S_{n},i,j\in \{1,2,\dots ,n\}}such thati<j{\displaystyle i<j}andσ(i)>σ(j){\displaystyle \sigma (i)>\sigma (j)}. 
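The Mahonian numbers can be computed by expanding the q-factorial product above, and checked against a brute-force inversion count. A sketch (helper names are ours):

```python
def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])

def mahonian(n):
    """Coefficients of [n]_q! = prod_{m=1}^{n} (1 + q + ... + q^{m-1});
    entry k is the number of permutations of n with k inversions."""
    coeffs = [1]
    for m in range(2, n + 1):
        new = [0] * (len(coeffs) + m - 1)
        for k, c in enumerate(coeffs):
            for i in range(m):       # multiply by 1 + q + ... + q^{m-1}
                new[k + i] += c
        coeffs = new
    return coeffs

print(inversions([2, 3, 1, 5, 4]))  # 3, the example 23154 from the text
print(mahonian(4))                  # [1, 3, 5, 6, 5, 3, 1]
```

The coefficients for n = 4 sum to 4! = 24, as they must, since every permutation has some number of inversions between 0 and n(n − 1)/2.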
In this case, say the weight of the inversion(i,j){\displaystyle (i,j)}isσ(i)−σ(j){\displaystyle \sigma (i)-\sigma (j)}. Kobayashi (2011) proved the enumeration formula∑i<j,σ(i)>σ(j)(σ(i)−σ(j))=|{τ∈Sn∣τ≤σ,τis bigrassmannian}|{\displaystyle \sum _{i<j,\sigma (i)>\sigma (j)}(\sigma (i)-\sigma (j))=|\{\tau \in S_{n}\mid \tau \leq \sigma ,\tau {\text{ is bigrassmannian}}\}|} where≤{\displaystyle \leq }denotesBruhat orderin thesymmetric groups. This graded partial order often appears in the context ofCoxeter groups. One way to represent permutations ofnthings is by an integerNwith 0 ≤N<n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive whennis small enough thatNcan be held in a machine word; for 32-bit words this meansn≤ 12, and for 64-bit words this meansn≤ 20. The conversion can be done via the intermediate form of a sequence of numbersdn,dn−1, ...,d2,d1, wherediis a non-negative integer less thani(one may omitd1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply expressNin thefactorial number system, which is just a particularmixed radixrepresentation, where, for numbers less thann!, the bases (place values or multiplication factors) for successive digits are(n− 1)!,(n− 2)!, ..., 2!, 1!. The second step interprets this sequence as aLehmer codeor (almost equivalently) as an inversion table. In theLehmer codefor a permutationσ, the numberdnrepresents the choice made for the first termσ1, the numberdn−1represents the choice made for the second termσ2among the remainingn− 1elements of the set, and so forth. More precisely, eachdn+1−igives the number ofremainingelements strictly less than the termσi. 
Since those remaining elements are bound to turn up as some later termσj, the digitdn+1−icounts theinversions(i,j) involvingias smaller index (the number of valuesjfor whichi<jandσi>σj). Theinversion tableforσis quite similar, but heredn+1−kcounts the number of inversions (i,j) wherek=σjoccurs as the smaller of the two values appearing in inverted order.[51] Both encodings can be visualized by annbynRothe diagram[52](named afterHeinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa. To effectively convert a Lehmer codedn,dn−1, ...,d2,d1into a permutation of an ordered setS, one can start with a list of the elements ofSin increasing order, and foriincreasing from 1 tonsetσito the element in the list that is preceded bydn+1−iother ones, and remove that element from the list. To convert an inversion tabledn,dn−1, ...,d2,d1into the corresponding permutation, one can traverse the numbers fromd1todnwhile inserting the elements ofSfrom largest to smallest into an initially empty sequence; at the step using the numberdfrom the inversion table, the element fromSinserted into the sequence at the point where it is preceded bydelements already present. Alternatively one could process the numbers from the inversion table and the elements ofSboth in the opposite order, starting with a row ofnempty slots, and at each step place the element fromSinto the empty slot that is preceded bydother empty slots. 
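The two conversion steps (integer to factorial-base digits, then Lehmer code to permutation) can be sketched as follows; the function names are ours:

```python
from math import factorial

def index_to_lehmer(N, n):
    """Digits d_n, ..., d_1 of N in the factorial number system,
    with 0 <= d_i < i, valid for 0 <= N < n!."""
    digits = []
    for i in range(n, 0, -1):
        digits.append(N // factorial(i - 1))
        N %= factorial(i - 1)
    return digits

def lehmer_to_permutation(code, items):
    """Decode a Lehmer code against a sorted pool: each digit is the
    number of remaining elements smaller than the chosen one."""
    pool, perm = sorted(items), []
    for d in code:
        perm.append(pool.pop(d))
    return perm

# Index 4 among the 6 permutations of {1, 2, 3}:
code = index_to_lehmer(4, 3)                   # [2, 0, 0]
print(lehmer_to_permutation(code, [1, 2, 3]))  # [3, 1, 2]
```

Running the indices 0 through n! − 1 through both functions yields all permutations in lexicographic order, as stated below.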
Converting successive natural numbers to the factorial number system produces those sequences inlexicographic order(as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by theplaceof their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives thesignatureof the permutation. Moreover, the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation (in the example, 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer codedn,dn−1, ...,d2,d1has an ascentn−iif and only ifdi≥di+1. In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence. An obvious way to generate permutations ofnis to generate values for theLehmer code(possibly using thefactorial number systemrepresentation of integers up ton!), and convert those into the corresponding permutations. 
However, the latter step, while straightforward, is hard to implement efficiently, because it requiresnoperations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as anarrayor alinked list, both require (for different reasons) aboutn2/4 operations to perform the conversion. Withnlikely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation inO(nlogn)time. For generatingrandom permutationsof a given sequence ofnvalues, it makes no difference whether one applies a randomly selected permutation ofnto the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations ofnthat result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for largendue to the growth of the numbern!, there is no reason to assume thatnwill be small for random generation. The basic idea to generate a random permutation is to generate at random one of then! sequences of integersd1,d2,...,dnsatisfying0 ≤di<i(sinced1is always zero it may be omitted) and to convert it to a permutation through abijectivecorrespondence. 
For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 byRonald FisherandFrank Yates.[53]While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after usingdito select an element amongiremaining elements of the sequence (for decreasing values ofi), rather than removing the element and compacting the sequence by shifting down further elements one place, oneswapsthe element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediateinduction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation ofa[0],a[1], ...,a[n− 1]can be described as follows inpseudocode: This can be combined with the initialization of the arraya[i] =ias follows Ifdi+1=i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct valuei. 
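The pseudocode referred to above is not reproduced in this excerpt; a minimal Python sketch of both variants (the swap-based shuffle and the version combined with the initialization a[i] = i) might look like this, with function names of our choosing:

```python
import random

def fisher_yates(a, rng=random):
    """In-place random shuffle.  For i counting down, an index d with
    0 <= d <= i is drawn and a[d] is swapped with a[i]; the final
    element stays among the candidates, so the identity swap can occur."""
    for i in range(len(a) - 1, 0, -1):
        d = rng.randrange(i + 1)
        a[i], a[d] = a[d], a[i]
    return a

def fisher_yates_inside_out(n, rng=random):
    """Random permutation of 0..n-1 built while initializing the array.
    When d == i the first assignment copies a stale value, but the
    second immediately overwrites slot d with the correct value i."""
    a = [0] * n
    for i in range(n):
        d = rng.randrange(i + 1)
        a[i] = a[d]
        a[d] = i
    return a

print(fisher_yates(list(range(5)), random.Random(7)))  # some permutation of 0..4
```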
However, the Fisher–Yates shuffle is not the fastest algorithm for generating a permutation, because it is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel.[54] There are many ways to systematically generate all permutations of a given sequence.[55]One classic, simple, and flexible algorithm is based upon finding the next permutation inlexicographic ordering, if it exists. It can handle repeated values, in which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using thefactorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly)increasingorder (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back toNarayana Panditain 14th century India, and has been rediscovered frequently.[56] The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place. For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index iszero-based, the steps are as follows: find the largest indexksuch thata[k] <a[k+ 1] (herek= 2); find the largest indexl>ksuch thata[k] <a[l] (herel= 3); swapa[k] witha[l]; finally, reverse the subsequence froma[k+ 1] to the end. Following this algorithm, the next lexicographic permutation will be [1, 2, 4, 3], and the last (24th) permutation will be [4, 3, 2, 1], at which point no indexkwitha[k] <a[k+ 1] exists, indicating that this is the last permutation. This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort.[57] An alternative to the above algorithm, theSteinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. 
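The four steps of the lexicographic successor algorithm (find k, find l, swap, reverse the tail) can be sketched as:

```python
def next_permutation(a):
    """Advance the list a in place to its lexicographic successor.
    Returns False, leaving a unchanged, when a is weakly decreasing,
    i.e. already the last permutation."""
    # 1. Find the largest index k with a[k] < a[k + 1].
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:
        k -= 1
    if k < 0:
        return False
    # 2. Find the largest index l > k with a[k] < a[l].
    l = len(a) - 1
    while a[l] <= a[k]:
        l -= 1
    # 3. Swap a[k] and a[l]; 4. reverse the suffix after position k.
    a[k], a[l] = a[l], a[k]
    a[k + 1:] = reversed(a[k + 1:])
    return True

a = [1, 2, 3, 4]
next_permutation(a)
print(a)  # [1, 2, 4, 3]
```

The weak comparisons (>= and <=) are what make the routine handle repeated values correctly, producing each distinct multiset permutation exactly once.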
This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.[56] An alternative to Steinhaus–Johnson–Trotter isHeap's algorithm,[58]said byRobert Sedgewickin 1977 to be the fastest algorithm of generating permutations in applications.[55] The following figure shows the output of all three aforementioned algorithms for generating all permutations of lengthn=4{\displaystyle n=4}, and of six additional algorithms described in the literature. Explicit sequence of swaps (transpositions, 2-cycles(pq){\displaystyle (pq)}), is described here, each swap applied (on the left) to the previous chain providing a new permutation, such that all the permutations can be retrieved, each only once.[64]This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrievingSk−1{\displaystyle S_{k-1}}, continue retrievingSk∖Sk−1{\displaystyle S_{k}\backslash S_{k-1}}by cosetsSk−1τi{\displaystyle S_{k-1}\tau _{i}}ofSk−1{\displaystyle S_{k-1}}inSk{\displaystyle S_{k}}, by appropriately choosing the coset representativesτi{\displaystyle \tau _{i}}to be described below. Since eachSm{\displaystyle S_{m}}is sequentially generated, there is alast elementλm∈Sm{\displaystyle \lambda _{m}\in S_{m}}. So, after generatingSk−1{\displaystyle S_{k-1}}by swaps, the next permutation inSk∖Sk−1{\displaystyle S_{k}\backslash S_{k-1}}has to beτ1=(p1k)λk−1{\displaystyle \tau _{1}=(p_{1}k)\lambda _{k-1}}for some1≤p1<k{\displaystyle 1\leq p_{1}<k}. 
Then all swaps that generatedSk−1{\displaystyle S_{k-1}}are repeated, generating the whole cosetSk−1τ1{\displaystyle S_{k-1}\tau _{1}}, reaching the last permutation in that cosetλk−1τ1{\displaystyle \lambda _{k-1}\tau _{1}}; the next swap has to move the permutation to representative of another cosetτ2=(p2k)λk−1τ1{\displaystyle \tau _{2}=(p_{2}k)\lambda _{k-1}\tau _{1}}. Continuing the same way, one gets coset representativesτj=(pjk)λk−1⋯λk−1(pik)λk−1⋯λk−1(p1k)λk−1{\displaystyle \tau _{j}=(p_{j}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}}for the cosets ofSk−1{\displaystyle S_{k-1}}inSk{\displaystyle S_{k}}; the ordered set(p1,…,pk−1){\displaystyle (p_{1},\ldots ,p_{k-1})}(0≤pi<k{\displaystyle 0\leq p_{i}<k}) is called the set of coset beginnings. Two of these representatives are in the same coset if and only ifτj(τi)−1=(pjk)λk−1(pj−1k)λk−1⋯λk−1(pi+1k)=ϰij∈Sk−1{\displaystyle \tau _{j}(\tau _{i})^{-1}=(p_{j}k)\lambda _{k-1}(p_{j-1}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{i+1}k)=\varkappa _{ij}\in S_{k-1}}, that is,ϰij(k)=k{\displaystyle \varkappa _{ij}(k)=k}. Concluding, permutationsτi∈Sk−Sk−1{\displaystyle \tau _{i}\in S_{k}-S_{k-1}}are all representatives of distinct cosets if and only if for anyk>j>i≥1{\displaystyle k>j>i\geq 1},(λk−1)j−ipi≠pj{\displaystyle (\lambda _{k-1})^{j-i}p_{i}\neq p_{j}}(no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for thepi{\displaystyle p_{i}}values to be distinct. In the process, one gets thatλk=λk−1(pk−1k)λk−1(pk−2k)λk−1⋯λk−1(p1k)λk−1{\displaystyle \lambda _{k}=\lambda _{k-1}(p_{k-1}k)\lambda _{k-1}(p_{k-2}k)\lambda _{k-1}\cdots \lambda _{k-1}(p_{1}k)\lambda _{k-1}}and this provides the recursion procedure. 
EXAMPLES: obviously, forλ2{\displaystyle \lambda _{2}}one hasλ2=(12){\displaystyle \lambda _{2}=(12)}; to buildλ3{\displaystyle \lambda _{3}}there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choicep1=p2=1{\displaystyle p_{1}=p_{2}=1}leads toλ3=λ2(13)λ2(13)λ2=(13){\displaystyle \lambda _{3}=\lambda _{2}(13)\lambda _{2}(13)\lambda _{2}=(13)}. To continue generatingS4{\displaystyle S_{4}}one needs appropriate coset beginnings (satisfying the no repeat condition): there is a convenient choice:p1=1,p2=2,p3=3{\displaystyle p_{1}=1,p_{2}=2,p_{3}=3}, leading toλ4=(13)(1234)(13)=(1432){\displaystyle \lambda _{4}=(13)(1234)(13)=(1432)}. Then, to buildλ5{\displaystyle \lambda _{5}}a convenient choice for the coset beginnings (satisfying the no repeat condition) isp1=p2=p3=p4=1{\displaystyle p_{1}=p_{2}=p_{3}=p_{4}=1}, leading toλ5=(15){\displaystyle \lambda _{5}=(15)}. From examples above one can inductively go to higherk{\displaystyle k}in a similar way, choosing coset beginnings ofSk{\displaystyle S_{k}}inSk+1{\displaystyle S_{k+1}}, as follows: fork{\displaystyle k}even choosing all coset beginnings equal to 1 and fork{\displaystyle k}odd choosing coset beginnings equal to(1,2,…,k){\displaystyle (1,2,\dots ,k)}. With such choices the "last" permutation isλk=(1k){\displaystyle \lambda _{k}=(1k)}fork{\displaystyle k}odd andλk=(1k−)(12⋯k)(1k−){\displaystyle \lambda _{k}=(1k_{-})(12\cdots k)(1k_{-})}fork{\displaystyle k}even (k−=k−1{\displaystyle k_{-}=k-1}). Using these explicit formulae one can easily compute the permutation of certain index in the counting/generation steps with minimum computation. For this, writing the index in factorial base is useful. 
For example, the permutation for index699=5(5!)+4(4!)+1(2!)+1(1!){\displaystyle 699=5(5!)+4(4!)+1(2!)+1(1!)}is:σ=λ2(13)λ2(15)λ4(15)λ4(15)λ4(15)λ4(56)λ5(46)λ5(36)λ5(26)λ5(16)λ5={\displaystyle \sigma =\lambda _{2}(13)\lambda _{2}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(15)\lambda _{4}(56)\lambda _{5}(46)\lambda _{5}(36)\lambda _{5}(26)\lambda _{5}(16)\lambda _{5}=}λ2(13)λ2((15)λ4)4(λ5)−1λ6=(23)(14325)−1(15)(15)(123456)(15)={\displaystyle \lambda _{2}(13)\lambda _{2}((15)\lambda _{4})^{4}(\lambda _{5})^{-1}\lambda _{6}=(23)(14325)^{-1}(15)(15)(123456)(15)=}(23)(15234)(123456)(15){\displaystyle (23)(15234)(123456)(15)}, yielding, finally,σ=(1653)(24){\displaystyle \sigma =(1653)(24)}. Because multiplication by a swap (transposition) is computationally cheap and each newly generated permutation requires only one such multiplication, this generation procedure is quite efficient. Moreover, since there is a simple formula for the last permutation in eachSk{\displaystyle S_{k}}, one can jump directly to the permutation with a given index in fewer steps than expected, proceeding in blocks of subgroups rather than swap by swap. Permutations are used in theinterleavercomponent of theerror detection and correctionalgorithms, such asturbo codes; for example, the3GPP Long Term Evolutionmobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212[65]). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on thepermutation polynomials. Permutations also serve as a base for optimal hashing in Unique Permutation Hashing.[66]
https://en.wikipedia.org/wiki/Cycle_notation
In the mathematical theory ofspecial functions, thePochhammerk-symboland thek-gamma function, introduced by Rafael Díaz and Eddy Pariguan,[1]are generalizations of thePochhammer symbolandgamma function. They differ from the Pochhammer symbol and gamma function in that they can be related to a generalarithmetic progressionin the same manner as those are related to the sequence of consecutiveintegers. The Pochhammerk-symbol (x)n,kis defined as(x)n,k=x(x+k)(x+2k)⋯(x+(n−1)k){\displaystyle (x)_{n,k}=x(x+k)(x+2k)\cdots (x+(n-1)k)}, and thek-gamma function Γk, withk> 0, is defined asΓk(x)=∫0∞tx−1e−tk/kdt{\displaystyle \Gamma _{k}(x)=\int _{0}^{\infty }t^{x-1}e^{-t^{k}/k}\,dt}. Whenk= 1 the standard Pochhammer symbol and gamma function are obtained. Díaz and Pariguan use these definitions to demonstrate a number of properties of thehypergeometric function. Although Díaz and Pariguan restrict these symbols tok> 0, the Pochhammerk-symbol as they define it is well-defined for all realk, and for negativekgives thefalling factorial, while fork= 0 it reduces to thepowerxn. The Díaz and Pariguan paper does not address the many analogies between the Pochhammerk-symbol and the power function, such as the fact that thebinomial theoremcan be extended to Pochhammerk-symbols. It is true, however, that many equations involving the power functionxncontinue to hold whenxnis replaced by (x)n,k. 
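The three special cases mentioned above (rising factorial for k = 1, falling factorial for k = −1, power for k = 0) are easy to confirm with a direct implementation of the product; the function name is ours:

```python
def pochhammer_k(x, n, k):
    """(x)_{n,k} = x (x + k) (x + 2k) ... (x + (n-1)k)."""
    result = 1
    for i in range(n):
        result *= x + i * k
    return result

print(pochhammer_k(3, 4, 1))   # 3*4*5*6 = 360, the rising factorial
print(pochhammer_k(5, 3, -1))  # 5*4*3 = 60, the falling factorial
print(pochhammer_k(2, 5, 0))   # 2**5 = 32, the power x^n
```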
Jacobi-typeJ-fractionsfor theordinarygenerating function of the Pochhammer k-symbol, denoted in slightly different notation bypn(α,R):=R(R+α)⋯(R+(n−1)α){\displaystyle p_{n}(\alpha ,R):=R(R+\alpha )\cdots (R+(n-1)\alpha )}for fixedα>0{\displaystyle \alpha >0}and some indeterminate parameterR{\displaystyle R}, are considered in[2]in the form of the next infinitecontinued fractionexpansion given by The rationalhth{\displaystyle h^{th}}convergent function,Convh(α,R;z){\displaystyle {\text{Conv}}_{h}(\alpha ,R;z)}, to the full generating function for these products expanded by the last equation is given by where the component convergent function sequences,FPh(α,R;z){\displaystyle {\text{FP}}_{h}(\alpha ,R;z)}andFQh(α,R;z){\displaystyle {\text{FQ}}_{h}(\alpha ,R;z)}, are given as closed-form sums in terms of the ordinaryPochhammer symboland theLaguerre polynomialsby The rationality of thehth{\displaystyle h^{th}}convergent functions for allh≥2{\displaystyle h\geq 2}, combined with known enumerative properties of the J-fraction expansions, imply the following finite difference equations both exactly generating(x)n,α{\displaystyle (x)_{n,\alpha }}for alln≥1{\displaystyle n\geq 1}, and generating the symbol modulohαt{\displaystyle h\alpha ^{t}}for some fixed integer0≤t≤h{\displaystyle 0\leq t\leq h}: The rationality ofConvh(α,R;z){\displaystyle {\text{Conv}}_{h}(\alpha ,R;z)}also implies the next exact expansions of these products given by where the formula is expanded in terms of the special zeros of theLaguerre polynomials, or equivalently, of theconfluent hypergeometric function, defined as the finite (ordered) set and whereConvh(α,R;z):=∑j=1hch,j(α,x)/(1−ℓh,j(α,x)){\displaystyle {\text{Conv}}_{h}(\alpha ,R;z):=\sum _{j=1}^{h}c_{h,j}(\alpha ,x)/(1-\ell _{h,j}(\alpha ,x))}denotes thepartial fraction decompositionof the rationalhth{\displaystyle h^{th}}convergent function. 
Additionally, since the denominator convergent functions,FQh(α,R;z){\displaystyle {\text{FQ}}_{h}(\alpha ,R;z)}, are expanded exactly through theLaguerre polynomialsas above, we can exactly generate the Pochhammer k-symbol as the series coefficients for any prescribed integern0≥0{\displaystyle n_{0}\geq 0}. Special cases of the Pochhammer k-symbol,(x)n,k{\displaystyle (x)_{n,k}}, correspond to the following special cases of thefalling and rising factorials, including thePochhammer symbol, and the generalized cases of the multiple factorial functions (multifactorialfunctions), or theα{\displaystyle \alpha }-factorial functions studied in the last two references by Schmidt: The expansions of thesek-symbol-relatedproducts considered termwise with respect to the coefficients of the powers ofxk{\displaystyle x^{k}}(1≤k≤n{\displaystyle 1\leq k\leq n}) for each finiten≥1{\displaystyle n\geq 1}are defined in the article on generalizedStirling numbers of the first kindand generalizedStirling (convolution) polynomialsin.[3]
https://en.wikipedia.org/wiki/Pochhammer_k-symbol
Incombinatorics,Vandermonde's identity(orVandermonde's convolution) is the following identity forbinomial coefficients:(m+nr)=∑k=0r(mk)(nr−k){\displaystyle {\binom {m+n}{r}}=\sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}}for any nonnegativeintegersr,m,n. The identity is named afterAlexandre-Théophile Vandermonde(1772), although it was already known in 1303 by theChinese mathematicianZhu Shijie.[1] There is aq-analogto this theorem called theq-Vandermonde identity. Vandermonde's identity can be generalized in numerous ways, including to the identity In general, the product of twopolynomialswith degreesmandn, respectively, is given by(∑i=0maixi)(∑j=0nbjxj)=∑r=0m+n(∑k=0rakbr−k)xr,{\displaystyle \left(\sum _{i=0}^{m}a_{i}x^{i}\right)\left(\sum _{j=0}^{n}b_{j}x^{j}\right)=\sum _{r=0}^{m+n}\left(\sum _{k=0}^{r}a_{k}b_{r-k}\right)x^{r},}where we use the convention thatai= 0 for all integersi>mandbj= 0 for all integersj>n. By thebinomial theorem,(1+x)m+n=∑r=0m+n(m+nr)xr.{\displaystyle (1+x)^{m+n}=\sum _{r=0}^{m+n}{\binom {m+n}{r}}x^{r}.} Using the binomial theorem also for the exponentsmandn, and then the above formula for the product of polynomials, we obtain(1+x)m+n=(1+x)m(1+x)n=∑r=0m+n(∑k=0r(mk)(nr−k))xr,{\displaystyle (1+x)^{m+n}=(1+x)^{m}(1+x)^{n}=\sum _{r=0}^{m+n}\left(\sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}\right)x^{r},}where the above convention for the coefficients of the polynomials agrees with the definition of the binomial coefficients, because both give zero for alli>mandj>n, respectively. By comparing coefficients ofxr, Vandermonde's identity follows for all integersrwith 0 ≤r≤m+n. For larger integersr, both sides of Vandermonde's identity are zero due to the definition of binomial coefficients. Vandermonde's identity also admits a combinatorialdouble counting proof, as follows. Suppose a committee consists ofmmen andnwomen. In how many ways can a subcommittee ofrmembers be formed? The answer is(m+nr).{\displaystyle {\binom {m+n}{r}}.} The answer is also the sum over all possible values ofk, of the number of subcommittees consisting ofkmen andr−kwomen:∑k=0r(mk)(nr−k).{\displaystyle \sum _{k=0}^{r}{\binom {m}{k}}{\binom {n}{r-k}}.} Take a rectangular grid ofr× (m+n−r) squares. There are(m+nr){\displaystyle {\binom {m+n}{r}}}paths that start on the bottom left vertex and, moving only upwards or rightwards, end at the top right vertex (this is becauserright moves andm+n−rup moves must be made (or vice versa) in any order, and the total path length ism+n). Call the bottom left vertex (0, 0). There are(mk){\displaystyle {\binom {m}{k}}}paths starting at (0, 0) that end at (k,m−k), askright moves andm−kupward moves must be made (and the path length ism). 
Similarly, there are(nr−k){\displaystyle {\binom {n}{r-k}}}paths starting at (k,m−k) that end at (r,m+n−r), as a total ofr−kright moves and (m+n−r) − (m−k) upward moves must be made and the path length must ber−k+ (m+n−r) − (m−k) =n. Thus there are paths that start at (0, 0), end at (r,m+n−r), and go through (k,m−k). This is asubsetof all paths that start at (0, 0) and end at (r,m+n−r), so sum fromk= 0 tok=r(as the point (k,m−k) is confined to be within the square) to obtain the total number of paths that start at (0, 0) and end at (r,m+n−r). One can generalize Vandermonde's identity as follows: This identity can be obtained through the algebraic derivation above when more than two polynomials are used, or through a simpledouble countingargument. On the one hand, one choosesk1{\displaystyle \textstyle k_{1}}elements out of a first set ofn1{\displaystyle \textstyle n_{1}}elements; thenk2{\displaystyle \textstyle k_{2}}out of another set, and so on, throughp{\displaystyle \textstyle p}such sets, until a total ofm{\displaystyle \textstyle m}elements have been chosen from thep{\displaystyle \textstyle p}sets. One therefore choosesm{\displaystyle \textstyle m}elements out ofn1+⋯+np{\displaystyle \textstyle n_{1}+\dots +n_{p}}in the left-hand side, which is also exactly what is done in the right-hand side. The identity generalizes to non-integer arguments. In this case, it is known as theChu–Vandermonde identity(seeAskey 1975, pp. 59–60) and takes the form for generalcomplex-valuedsandtand any non-negative integern. It can be proved along the lines of the algebraic proof above bymultiplyingthebinomial seriesfor(1+x)s{\displaystyle (1+x)^{s}}and(1+x)t{\displaystyle (1+x)^{t}}and comparing terms with the binomial series for(1+x)s+t{\displaystyle (1+x)^{s+t}}. 
This identity may be rewritten in terms of the fallingPochhammer symbolsas in which form it is clearly recognizable as anumbralvariant of thebinomial theorem(for more on umbral variants of the binomial theorem, seebinomial type). The Chu–Vandermonde identity can also be seen to be a special case ofGauss's hypergeometric theorem, which states that where2F1{\displaystyle \;_{2}F_{1}}is thehypergeometric functionandΓ(n+1)=n!{\displaystyle \Gamma (n+1)=n!}is thegamma function. One regains the Chu–Vandermonde identity by takinga= −nand applying the identity liberally. TheRothe–Hagen identityis a further generalization of this identity. When both sides have been divided by the expression on the left, so that the sum is 1, then the terms of the sum may be interpreted as probabilities. The resultingprobability distributionis thehypergeometric distribution. That is the probability distribution of the number of red marbles inrdrawswithout replacementfrom an urn containingnred andmblue marbles.
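Both the integer form of Vandermonde's identity and the Chu–Vandermonde generalization are easy to spot-check numerically. The following Python sketch is illustrative (not part of the article) and uses only the standard library: `math.comb` for the integer case, and a hand-rolled generalized binomial coefficient for non-integer upper arguments.

```python
import math

def gbinom(s, k):
    """Generalized binomial coefficient C(s, k) = s(s-1)...(s-k+1) / k!
    for real s and integer k >= 0."""
    prod = 1.0
    for i in range(k):
        prod *= s - i
    return prod / math.factorial(k)

# Integer Vandermonde: sum_k C(m,k) C(n,r-k) = C(m+n,r)
# (math.comb returns 0 when k exceeds the upper argument, so the
#  full range 0..r is safe)
for m, n, r in [(5, 7, 4), (3, 3, 6), (10, 2, 5)]:
    lhs = sum(math.comb(m, k) * math.comb(n, r - k) for k in range(r + 1))
    assert lhs == math.comb(m + n, r)

# Chu-Vandermonde: the same identity with non-integer upper arguments
s, t, n = 2.5, -1.3, 6
lhs = sum(gbinom(s, k) * gbinom(t, n - k) for k in range(n + 1))
assert math.isclose(lhs, gbinom(s + t, n), rel_tol=1e-9)
```

Since both sides are polynomials in s and t, agreement at arbitrarily chosen real values is exactly what the identity predicts.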
https://en.wikipedia.org/wiki/Vandermonde_identity
The termumbral calculushas two related but distinct meanings. Inmathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelatedpolynomial equationsand certain shadowy techniques used to prove them. These techniques were introduced in 1861 byJohn Blissardand are sometimes calledBlissard's symbolic method.[1]They are often attributed toÉdouard Lucas(orJames Joseph Sylvester), who used the technique extensively.[2]The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus". In the 1930s and 1940s,Eric Temple Bellattempted to set the umbral calculus on a rigorous footing, however his attempt in making this kind of argument logically rigorous was unsuccessful. ThecombinatorialistJohn Riordanin his bookCombinatorial Identitiespublished in the 1960s, used techniques of this sort extensively. In the 1970s,Steven Roman,Gian-Carlo Rota, and others developed the umbral calculus by means oflinear functionalson spaces of polynomials. Currently,umbral calculusrefers to the study ofSheffer sequences, including polynomial sequences ofbinomial typeandAppell sequences, but may encompass systematic correspondence techniques of thecalculus of finite differences. The method is a notational procedure used for deriving identities involving indexed sequences of numbers bypretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty. An example involves theBernoulli polynomials. 
Consider, for example, the ordinarybinomial expansion(which contains abinomial coefficient): and the remarkably similar-looking relation on theBernoulli polynomials: Compare also the ordinary derivative to a very similar-looking relation on the Bernoulli polynomials: These similarities allow one to constructumbralproofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscriptn−kis an exponent: and then differentiating, one gets the desired result: In the above, the variablebis an "umbra" (Latinforshadow). See alsoFaulhaber's formula. Indifferential calculus, theTaylor seriesof a function is an infinite sum of terms that are expressed in terms of the function'sderivativesat a single point. That is, arealorcomplex-valued functionf(x) that isanalyticata{\displaystyle a}can be written as: f(x)=∑n=0∞f(n)(a)n!(x−a)n{\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} Similar relationships were also observed in the theory offinite differences. The umbral version of the Taylor series is given by a similar expression involving thek-thforward differencesΔk[f]{\displaystyle \Delta ^{k}[f]}of apolynomialfunctionf, where is thePochhammer symbolused here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as theNewton seriesorNewton's forward difference expansion. The analogy to Taylor's expansion is utilized in thecalculus of finite differences. Another combinatorialist,Gian-Carlo Rota, pointed out that the mystery vanishes if one considers thelinear functionalLon polynomials inzdefined by Then, using the definition of the Bernoulli polynomials and the definition and linearity ofL, one can write This enables one to replace occurrences ofBn(x){\displaystyle B_{n}(x)}byL((z+x)n){\displaystyle L((z+x)^{n})}, that is, move thenfrom a subscript to a superscript (the key operation of umbral calculus). 
For instance, we can now prove that: Rota later stated that much confusion resulted from the failure to distinguish between threeequivalence relationsthat occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish therecursionformula satisfied by theBell numbers, which enumeratepartitionsof finite sets. In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of theumbral algebra, defined as thealgebraof linear functionals on thevector spaceof polynomials in a variablex, with a productL1L2of linear functionals defined by Whenpolynomial sequencesreplace sequences of numbers as images ofynunder the linear mappingL, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is theumbral calculusby some more modern definitions of the term.[3]A small sample of that theory can be found in the article onpolynomial sequences of binomial type. Another is the article titledSheffer sequence. Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of thecumulants.[4]
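These umbral manipulations can be checked by direct computation. The sketch below is illustrative (not from the article) and uses exact rational arithmetic from the standard library: it builds the Bernoulli numbers from the usual recurrence, forms B_n(x) by the "pretend the index is an exponent" expansion, verifies the forward-difference identity B_n(x+1) − B_n(x) = n·x^(n−1), and checks the Bell-number recursion that Rota established by umbral methods.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(nmax):
    """B_0 .. B_nmax (convention B_1 = -1/2), from the recurrence
    sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-Fraction(1, m + 1) *
                 sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def bernoulli_poly(n, x, B):
    # Umbral expansion: B_n(x) = sum_k C(n,k) B_{n-k} x^k,
    # i.e. (B + x)^n with B^j read as the Bernoulli number B_j.
    return sum(comb(n, k) * B[n - k] * x**k for k in range(n + 1))

B = bernoulli_numbers(8)
x = Fraction(3, 7)
assert bernoulli_poly(2, x, B) == x**2 - x + Fraction(1, 6)
# Forward-difference analogue of d/dx x^n = n x^(n-1):
n = 5
assert bernoulli_poly(n, x + 1, B) - bernoulli_poly(n, x, B) == n * x**(n - 1)

def bell_numbers(nmax):
    """Bell numbers via the recursion B_{n+1} = sum_k C(n,k) B_k."""
    bell = [1]
    for m in range(nmax):
        bell.append(sum(comb(m, k) * bell[k] for k in range(m + 1)))
    return bell

assert bell_numbers(7) == [1, 1, 2, 5, 15, 52, 203, 877]
```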
https://en.wikipedia.org/wiki/Umbral_calculus
In combinatorial mathematics, a Langford pairing, also called a Langford sequence, is a permutation of the sequence of 2n numbers 1, 1, 2, 2, ..., n, n in which the two 1s are one unit apart, the two 2s are two units apart, and more generally the two copies of each number k are k units apart. Langford pairings are named after C. Dudley Langford, who posed the problem of constructing them in 1958. Langford's problem is the task of finding Langford pairings for a given value of n.[1] The closely related concept of a Skolem sequence[2] is defined in the same way, but instead permutes the sequence 0, 0, 1, 1, ..., n − 1, n − 1. A Langford pairing for n = 3 is given by the sequence 2, 3, 1, 2, 1, 3. Langford pairings exist only when n is congruent to 0 or 3 modulo 4; for instance, there is no Langford pairing when n = 1, 2, or 5. The numbers of different Langford pairings for n = 1, 2, …, counting any sequence as being the same as its reversal, begin 0, 0, 1, 1, 0, 0, 26, 150, … As Knuth (2008) describes, the problem of listing all Langford pairings for a given n can be solved as an instance of the exact cover problem, but for large n the number of solutions can be calculated more efficiently by algebraic methods. Skolem (1957) used Skolem sequences to construct Steiner triple systems. In the 1960s, E. J. Groth used Langford pairings to construct circuits for integer multiplication.[3]
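A straightforward backtracking search reproduces the small counts (an illustrative sketch, far less efficient than the exact-cover or algebraic methods mentioned above); placing the largest number first keeps the search tree small.

```python
def langford_pairings(n):
    """Enumerate Langford pairings of 1..n by backtracking,
    counting a sequence and its reversal as distinct."""
    seq = [0] * (2 * n)
    out = []

    def place(k):
        if k == 0:
            out.append(tuple(seq))
            return
        # the two copies of k sit at indices i and i + k + 1 (k units apart)
        for i in range(2 * n - k - 1):
            if seq[i] == 0 and seq[i + k + 1] == 0:
                seq[i] = seq[i + k + 1] = k
                place(k - 1)
                seq[i] = seq[i + k + 1] = 0

    place(n)
    return out

# n = 3 has exactly one pairing up to reversal: 2,3,1,2,1,3 and its mirror
assert len(langford_pairings(3)) == 2
assert len(langford_pairings(5)) == 0      # n congruent to 1 mod 4: none exist
assert len(langford_pairings(7)) == 26 * 2 # 26 pairings up to reversal
```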
https://en.wikipedia.org/wiki/Langford_pairing
Inmathematics,Stirling's approximation(orStirling's formula) is anasymptoticapproximation forfactorials. It is a good approximation, leading to accurate results even for small values ofn{\displaystyle n}. It is named afterJames Stirling, though a related but less precise result was first stated byAbraham de Moivre.[1][2][3] One way of stating the approximation involves thelogarithmof the factorial:ln⁡(n!)=nln⁡n−n+O(ln⁡n),{\displaystyle \ln(n!)=n\ln n-n+O(\ln n),}where thebig O notationmeans that, for all sufficiently large values ofn{\displaystyle n}, the difference betweenln⁡(n!){\displaystyle \ln(n!)}andnln⁡n−n{\displaystyle n\ln n-n}will be at most proportional to the logarithm ofn{\displaystyle n}. In computer science applications such as theworst-case lower bound for comparison sorting, it is convenient to instead use thebinary logarithm, giving the equivalent formlog2⁡(n!)=nlog2⁡n−nlog2⁡e+O(log2⁡n).{\displaystyle \log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).}The error term in either base can be expressed more precisely as12log2⁡(2πn)+O(1n){\displaystyle {\tfrac {1}{2}}\log _{2}(2\pi n)+O({\tfrac {1}{n}})}, corresponding to an approximate formula for the factorial itself,n!∼2πn(ne)n.{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}Here the sign∼{\displaystyle \sim }means that the two quantities are asymptotic, that is, their ratio tends to 1 asn{\displaystyle n}tends to infinity. Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sumln⁡(n!)=∑j=1nln⁡j{\displaystyle \ln(n!)=\sum _{j=1}^{n}\ln j}with anintegral:∑j=1nln⁡j≈∫1nln⁡xdx=nln⁡n−n+1.{\displaystyle \sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.} The full formula, together with precise estimates of its error, can be derived as follows. 
Instead of approximatingn!{\displaystyle n!}, one considers itsnatural logarithm, as this is aslowly varying function:ln⁡(n!)=ln⁡1+ln⁡2+⋯+ln⁡n.{\displaystyle \ln(n!)=\ln 1+\ln 2+\cdots +\ln n.} The right-hand side of this equation minus12(ln⁡1+ln⁡n)=12ln⁡n{\displaystyle {\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n}is the approximation by thetrapezoid ruleof the integralln⁡(n!)−12ln⁡n≈∫1nln⁡xdx=nln⁡n−n+1,{\displaystyle \ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,} and the error in this approximation is given by theEuler–Maclaurin formula:ln⁡(n!)−12ln⁡n=12ln⁡1+ln⁡2+ln⁡3+⋯+ln⁡(n−1)+12ln⁡n=nln⁡n−n+1+∑k=2m(−1)kBkk(k−1)(1nk−1−1)+Rm,n,{\displaystyle {\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}} whereBk{\displaystyle B_{k}}is aBernoulli number, andRm,nis the remainder term in the Euler–Maclaurin formula. Take limits to find thatlimn→∞(ln⁡(n!)−nln⁡n+n−12ln⁡n)=1−∑k=2m(−1)kBkk(k−1)+limn→∞Rm,n.{\displaystyle \lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.} Denote this limit asy{\displaystyle y}. 
Because the remainderRm,nin the Euler–Maclaurin formula satisfiesRm,n=limn→∞Rm,n+O(1nm),{\displaystyle R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),} wherebig-O notationis used, combining the equations above yields the approximation formula in its logarithmic form:ln⁡(n!)=nln⁡(ne)+12ln⁡n+y+∑k=2m(−1)kBkk(k−1)nk−1+O(1nm).{\displaystyle \ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).} Taking the exponential of both sides and choosing any positive integerm{\displaystyle m}, one obtains a formula involving an unknown quantityey{\displaystyle e^{y}}. Form= 1, the formula isn!=eyn(ne)n(1+O(1n)).{\displaystyle n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).} The quantityey{\displaystyle e^{y}}can be found by taking the limit on both sides asn{\displaystyle n}tends to infinity and usingWallis' product, which shows thatey=2π{\displaystyle e^{y}={\sqrt {2\pi }}}. Therefore, one obtains Stirling's formula:n!=2πn(ne)n(1+O(1n)).{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).} An alternative formula forn!{\displaystyle n!}using thegamma functionisn!=∫0∞xne−xdx.{\displaystyle n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x.}(as can be seen by repeated integration by parts). 
Rewriting and changing variablesx=ny, one obtainsn!=∫0∞enln⁡x−xdx=enln⁡nn∫0∞en(ln⁡y−y)dy.{\displaystyle n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.}ApplyingLaplace's methodone has∫0∞en(ln⁡y−y)dy∼2πne−n,{\displaystyle \int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},}which recovers Stirling's formula:n!∼enln⁡nn2πne−n=2πn(ne)n.{\displaystyle n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.} In fact, further corrections can also be obtained using Laplace's method. From previous result, we know thatΓ(x)∼xxe−x{\displaystyle \Gamma (x)\sim x^{x}e^{-x}}, so we "peel off" this dominant term, then perform two changes of variables, to obtain:x−xexΓ(x)=∫Rex(1+t−et)dt{\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt}To verify this:∫Rex(1+t−et)dt=t↦ln⁡tex∫0∞tx−1e−xtdt=t↦t/xx−xex∫0∞e−ttx−1dt=x−xexΓ(x){\displaystyle \int _{\mathbb {R} }e^{x(1+t-e^{t})}dt{\overset {t\mapsto \ln t}{=}}e^{x}\int _{0}^{\infty }t^{x-1}e^{-xt}dt{\overset {t\mapsto t/x}{=}}x^{-x}e^{x}\int _{0}^{\infty }e^{-t}t^{x-1}dt=x^{-x}e^{x}\Gamma (x)}. Now the functiont↦1+t−et{\displaystyle t\mapsto 1+t-e^{t}}is unimodal, with maximum value zero. Locally around zero, it looks like−t2/2{\displaystyle -t^{2}/2}, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by1+t−et=−τ2/2{\displaystyle 1+t-e^{t}=-\tau ^{2}/2}. This equation cannot be solved in closed form, but it can be solved by serial expansion, which gives ust=τ−τ2/6+τ3/36+a4τ4+O(τ5){\displaystyle t=\tau -\tau ^{2}/6+\tau ^{3}/36+a_{4}\tau ^{4}+O(\tau ^{5})}. 
Now plug back to the equation to obtainx−xexΓ(x)=∫Re−xτ2/2(1−τ/3+τ2/12+4a4τ3+O(τ4))dτ=2π(x−1/2+x−3/2/12)+O(x−5/2){\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2})}notice how we don't need to actually finda4{\displaystyle a_{4}}, since it is cancelled out by the integral. Higher orders can be achieved by computing more terms int=τ+⋯{\displaystyle t=\tau +\cdots }, which can be obtained programmatically.[note 1] Thus we get Stirling's formula to two orders:n!=2πn(ne)n(1+112n+O(1n2)).{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).} A complex-analysis version of this method[4]is to consider1n!{\displaystyle {\frac {1}{n!}}}as aTaylor coefficientof the exponential functionez=∑n=0∞znn!{\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}}, computed byCauchy's integral formulaas1n!=12πi∮|z|=rezzn+1dz.{\displaystyle {\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.} This line integral can then be approximated using thesaddle-point methodwith an appropriate choice of contour radiusr=rn{\displaystyle r=r_{n}}. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term. 
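A quick numerical check (illustrative Python, standard library only) shows the effect of the 1/(12n) correction: the relative error of the bare formula is about 1/(12n), while the corrected formula is accurate to O(1/n²).

```python
import math

def stirling(n, corrected=False):
    """Leading Stirling approximation, optionally with the 1/(12n) term."""
    s = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    return s * (1 + 1 / (12 * n)) if corrected else s

n = 20
exact = math.factorial(n)
err0 = abs(stirling(n) - exact) / exact
err1 = abs(stirling(n, corrected=True) - exact) / exact
# bare formula: relative error roughly 1/(12n); corrected: O(1/n^2)
assert err1 < err0 < 1 / (12 * n) + 1e-4
assert err1 < 1 / n**2
```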
An alternative version uses the fact that thePoisson distributionconverges to anormal distributionby theCentral Limit Theorem.[5] Since the Poisson distribution with parameterλ{\displaystyle \lambda }converges to a normal distribution with meanλ{\displaystyle \lambda }and varianceλ{\displaystyle \lambda }, theirdensity functionswill be approximately the same: exp⁡(−μ)μxx!≈12πμexp⁡(−12(x−μμ)){\displaystyle {\frac {\exp(-\mu )\mu ^{x}}{x!}}\approx {\frac {1}{\sqrt {2\pi \mu }}}\exp(-{\frac {1}{2}}({\frac {x-\mu }{\sqrt {\mu }}}))} Evaluating this expression at the mean, at which the approximation is particularly accurate, simplifies this expression to: exp⁡(−μ)μμμ!≈12πμ{\displaystyle {\frac {\exp(-\mu )\mu ^{\mu }}{\mu !}}\approx {\frac {1}{\sqrt {2\pi \mu }}}} Taking logs then results in: −μ+μln⁡μ−ln⁡μ!≈−12ln⁡2πμ{\displaystyle -\mu +\mu \ln \mu -\ln \mu !\approx -{\frac {1}{2}}\ln 2\pi \mu } which can easily be rearranged to give: ln⁡μ!≈μln⁡μ−μ+12ln⁡2πμ{\displaystyle \ln \mu !\approx \mu \ln \mu -\mu +{\frac {1}{2}}\ln 2\pi \mu } Evaluating atμ=n{\displaystyle \mu =n}gives the usual, more precise form of Stirling's approximation. Stirling's formula is in fact the first approximation to the following series (now called theStirling series):[6]n!∼2πn(ne)n(1+112n+1288n2−13951840n3−5712488320n4+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).} An explicit formula for the coefficients in this series was given by G. Nemes.[7]Further terms are listed in theOn-Line Encyclopedia of Integer SequencesasA001163andA001164. The first graph in this section shows therelative errorvs.n{\displaystyle n}, for 1 through all 5 terms listed above. (Bender and Orszag[8]p. 
218) gives the asymptotic formula for the coefficients:A2j+1∼(−1)j2(2j)!/(2π)2(j+1){\displaystyle A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}}which shows that it grows superexponentially, and that by theratio testtheradius of convergenceis zero. Asn→ ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of anasymptotic expansion. It is not aconvergent series; for anyparticularvalue ofn{\displaystyle n}there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, letS(n,t)be the Stirling series tot{\displaystyle t}terms evaluated atn{\displaystyle n}. The graphs show|ln⁡(S(n,t)n!)|,{\displaystyle \left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,}which, when small, is essentially the relative error. Writing Stirling's series in the formln⁡(n!)∼nln⁡n−n+12ln⁡(2πn)+112n−1360n3+11260n5−11680n7+⋯,{\displaystyle \ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,}it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term.[citation needed] Other bounds, due to Robbins,[9]valid for all positive integersn{\displaystyle n}are2πn(ne)ne112n+1<n!<2πn(ne)ne112n.{\displaystyle {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.}This upper bound corresponds to stopping the above series forln⁡(n!){\displaystyle \ln(n!)}after the1n{\displaystyle {\frac {1}{n}}}term. The lower bound is weaker than that obtained by stopping the series after the1n3{\displaystyle {\frac {1}{n^{3}}}}term. 
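Robbins' bounds are strict for every positive integer and are easy to verify directly (an illustrative sketch, not part of the article):

```python
import math

# Robbins: sqrt(2 pi n) (n/e)^n e^(1/(12n+1))  <  n!  <  sqrt(2 pi n) (n/e)^n e^(1/(12n))
for n in range(1, 50):
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    lower = base * math.exp(1 / (12 * n + 1))
    upper = base * math.exp(1 / (12 * n))
    assert lower < math.factorial(n) < upper
```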
A looser version of this bound is thatn!ennn+12∈(2π,e]{\displaystyle {\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]}for alln≥1{\displaystyle n\geq 1}. For all positive integers,n!=Γ(n+1),{\displaystyle n!=\Gamma (n+1),}whereΓdenotes thegamma function. However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. IfRe(z) > 0, thenln⁡Γ(z)=zln⁡z−z+12ln⁡2πz+∫0∞2arctan⁡(tz)e2πt−1dt.{\displaystyle \ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.} Repeated integration by parts givesln⁡Γ(z)∼zln⁡z−z+12ln⁡2πz+∑n=1N−1B2n2n(2n−1)z2n−1=zln⁡z−z+12ln⁡2πz+112z−1360z3+11260z5+…,{\displaystyle {\begin{aligned}\ln \Gamma (z)\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}}\\=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+\dots ,\end{aligned}}} whereBn{\displaystyle B_{n}}is then{\displaystyle n}thBernoulli number(note that the limit of the sum asN→∞{\displaystyle N\to \infty }is not convergent, so this formula is just anasymptotic expansion). The formula is valid forz{\displaystyle z}large enough in absolute value, when|arg(z)| < π −ε, whereεis positive, with an error term ofO(z−2N+ 1). The corresponding approximation may now be written:Γ(z)=2πz(ze)z(1+O(1z)).{\displaystyle \Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right).} where the expansion is identical to that of Stirling's series above forn!{\displaystyle n!}, except thatn{\displaystyle n}is replaced withz− 1.[10] A further application of this asymptotic expansion is for complex argumentzwith constantRe(z). 
See for example the Stirling formula applied inIm(z) =tof theRiemann–Siegel theta functionon the straight line⁠1/4⁠+it. Thomas Bayesshowed, in a letter toJohn Cantonpublished by theRoyal Societyin 1763, that Stirling's formula did not give aconvergent series.[11]Obtaining a convergent version of Stirling's formula entails evaluatingBinet's formula:∫0∞2arctan⁡(tx)e2πt−1dt=ln⁡Γ(x)−xln⁡x+x−12ln⁡2πx.{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.} One way to do this is by means of a convergent series of invertedrising factorials. Ifzn¯=z(z+1)⋯(z+n−1),{\displaystyle z^{\bar {n}}=z(z+1)\cdots (z+n-1),}then∫0∞2arctan⁡(tx)e2πt−1dt=∑n=1∞cn(x+1)n¯,{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},}wherecn=1n∫01xn¯(x−12)dx=12n∑k=1nk|s(n,k)|(k+1)(k+2),{\displaystyle c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},}wheres(n,k)denotes theStirling numbers of the first kind. From this one obtains a version of Stirling's seriesln⁡Γ(x)=xln⁡x−x+12ln⁡2πx+112(x+1)+112(x+1)(x+2)++59360(x+1)(x+2)(x+3)+2960(x+1)(x+2)(x+3)(x+4)+⋯,{\displaystyle {\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}+\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}}which converges whenRe(x) > 0. 
Stirling's formula may also be given in convergent form as[12]Γ(x)=2πxx−12e−x+μ(x){\displaystyle \Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}}whereμ(x)=∑n=0∞((x+n+12)ln⁡(1+1x+n)−1).{\displaystyle \mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).} The approximationΓ(z)≈2πz(zezsinh⁡1z+1810z6)z{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}}and its equivalent form2ln⁡Γ(z)≈ln⁡(2π)−ln⁡z+z(2ln⁡z+ln⁡(zsinh⁡1z+1810z6)−2){\displaystyle 2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)}can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultantpower seriesand theTaylor seriesexpansion of thehyperbolic sinefunction. This approximation is good to more than 8 decimal digits forzwith a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.[13] Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:[14]Γ(z)≈2πz(1e(z+112z−110z))z,{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},}or equivalently,ln⁡Γ(z)≈12(ln⁡(2π)−ln⁡z)+z(ln⁡(z+112z−110z)−1).{\displaystyle \ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).} An alternative approximation for the gamma function stated bySrinivasa RamanujaninRamanujan's lost notebook[15]isΓ(1+x)≈π(xe)x(8x3+4x2+x+130)16{\displaystyle \Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}}forx≥ 0. 
The equivalent approximation forlnn!has an asymptotic error of⁠1/1400n3⁠and is given byln⁡n!≈nln⁡n−n+16ln⁡(8n3+4n2+n+130)+12ln⁡π.{\displaystyle \ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .} The approximation may be made precise by giving paired upper and lower bounds; one such inequality is[16][17][18][19]π(xe)x(8x3+4x2+x+1100)1/6<Γ(1+x)<π(xe)x(8x3+4x2+x+130)1/6.{\displaystyle {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.} The formula was first discovered byAbraham de Moivre[2]in the formn!∼[constant]⋅nn+12e−n.{\displaystyle n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.} De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely2π{\displaystyle {\sqrt {2\pi }}}.[3]
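Both the Nemes approximation and the paired Ramanujan-style bounds can be compared against the gamma function directly; the sketch below is illustrative, with tolerances chosen loosely around the accuracies claimed above.

```python
import math

def nemes_gamma(z):
    # Nemes (2007): Gamma(z) ~ sqrt(2 pi / z) * ((z + 1/(12z - 1/(10z))) / e)^z
    return math.sqrt(2 * math.pi / z) * \
        ((z + 1 / (12 * z - 1 / (10 * z))) / math.e) ** z

def ramanujan_bounds(x):
    # Paired bounds on Gamma(1 + x) with sixth-root constants 1/100 and 1/30
    base = math.sqrt(math.pi) * (x / math.e) ** x
    poly = 8 * x**3 + 4 * x**2 + x
    return base * (poly + 1 / 100) ** (1 / 6), base * (poly + 1 / 30) ** (1 / 6)

# Nemes: better than 8 digits is claimed for Re(z) > 8
for z in [8.0, 12.5, 20.0]:
    assert abs(nemes_gamma(z) - math.gamma(z)) / math.gamma(z) < 1e-6

# Ramanujan-style sandwich for x >= 0
for x in [1.0, 2.5, 10.0, 50.0]:
    lo, hi = ramanujan_bounds(x)
    assert lo < math.gamma(1 + x) < hi
```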
https://en.wikipedia.org/wiki/Stirling%27s_approximation
Mobile technology is the technology used for cellular communication. It has evolved rapidly over the past few years: since the start of this millennium, a standard mobile device has gone from being no more than a simple two-way pager to being a mobile phone, GPS navigation device, embedded web browser, instant messaging client, and handheld gaming console. Many experts believe that the future of computer technology rests in mobile computing with wireless networking. Mobile computing by way of tablet computers is becoming more popular, and tablets are available on the 3G and 4G networks.[1]

Nikola Tesla laid the theoretical foundation for wireless communication in 1890, and Guglielmo Marconi, known as the father of radio, first transmitted wireless signals over a distance of two miles in 1894. Mobile technology has brought great change to human society; the use of mobile technology in government departments can be traced back to World War I. In recent years, the integration of mobile communication technology and information technology has made mobile technology a focus of industry attention. As mobile communication and mobile computing have converged, mobile technology has gradually matured. The mobile interaction it enables provides online connection and communication for ubiquitous computing, makes "any time, anywhere" contact and information exchange possible, creates new opportunities and challenges for mobile work, and promotes further changes in social and organizational forms. The integration of information technology and communication technology is bringing great changes to social life, and mobile technology and the Internet have become the main driving forces for the development of information and communication technologies.
Through high-coverage mobile communication networks, high-speed wireless networks, and various types of mobile information terminals, mobile technologies have opened up a vast space for mobile interaction, which has become a popular way of living and working. Given the appeal of mobile interaction and the rapid development of new technologies, mobile information terminals and wireless networks may well match the scale and impact of computers and fixed networks in the future. The development of mobile government and mobile commerce provides new opportunities to further improve city management, to raise the level and efficiency of public services, and to build a more responsive, efficient, transparent, and accountable government. It also helps to bridge the digital divide and to provide citizens with universal, agile services. The integration and development of information and communication technology have spurred the formation of an information society and a knowledge society, and have fostered a user-oriented, user-centered mode of mass, joint, and open innovation; this "innovation 2.0" mode is gradually attracting the attention of the scientific community and society at large.[2]

The generations of cellular technology are commonly summarized as follows:

0G: Early cellular mobile phone technology that emerged in the 1970s. Although briefcase-style mobile phones had appeared, they still generally needed to be installed in a car or truck.

1G: The first generation of wireless telephone technology: the analog cellular portable radiotelephone standards introduced in the 1980s.

2G: Second-generation wireless telephony based on digital technology. 2G networks were designed for voice communications, although some standards also support SMS messages as a form of data transmission.
2.5G: A set of transition technologies between 2G and 3G that, in addition to voice, support digital communication services such as e-mail and simple Web browsing.

2.75G: A technology that does not meet full 3G requirements but played the role of 3G in the market.

3G: The third generation of wireless communication technology, supporting broadband voice, data, and multimedia communication over wireless networks.

3.5G and 3.75G: Technologies that extend beyond comprehensive 3G wireless and mobile capabilities.

4G: High-speed mobile wireless communications technology designed to enable new data services and interactive TV services in mobile networks.

5G: Aims to improve upon 4G, offering lower response times (lower latency) and higher data transfer speeds.

In the early 1980s, 1G was introduced as voice-only communication via "brick phones".[3] In 1991, 2G introduced Short Message Service (SMS) and, later, Multimedia Messaging Service (MMS) capabilities, allowing picture messages to be sent and received between phones.[3] In 1998, 3G was introduced to provide faster data-transmission speeds to support video calling and internet access. 4G was released in 2008 to support more demanding services such as gaming services, HD mobile TV, video conferencing, and 3D TV.[3] 5G technology was initially released in 2019, but is still only available in certain areas. 4G is the current mainstream cellular service offered to cell phone users, with performance roughly 10 times faster than 3G service.[4] One of the most important features of 4G mobile networks is the dominance of high-speed packet transmissions, or burst traffic, in the channels.
If the same codes used in 2G–3G networks are applied to 4G mobile or wireless networks, the detection of very short bursts becomes a serious problem because of their very poor partial-correlation properties. 5G's performance goals are high data rates, reduced latency, energy savings, reduced costs, increased system capacity, and large-scale device connectivity. 5G is still a fairly new type of networking and is still being rolled out across nations. Moving forward, 5G is expected to set the standard for cellular service around the globe. Corporations such as AT&T, Verizon, and T-Mobile are among the best-known cellular companies rolling out 5G services across the US. 5G began to be deployed at the beginning of 2020 and has been growing ever since. According to the GSM Association, by 2025 approximately 1.7 billion subscribers will have a subscription with 5G service.[5] 5G wireless signals are transmitted through large numbers of small cell stations located in places like light poles or building roofs.[6] In the past, 4G networking had to rely on large cell towers to transmit signals over large distances. With the introduction of 5G networking, small cell stations are essential, because the mmWave spectrum, the specific type of band used in high-band 5G services, travels only short distances. If the distances between cell stations were longer, signals could suffer interference from inclement weather or from objects such as houses, buildings, and trees. In 5G networking, there are three main kinds of 5G: low-band, mid-band, and high-band.[7] Low-band frequencies operate below 2 GHz, mid-band frequencies operate between 2 and 10 GHz, and high-band frequencies operate between 20 and 100 GHz. Verizon has reported very high speeds on its high-band 5G service, which it brands "Ultra Wideband", exceeding 3 Gbit/s.
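The three 5G band tiers described above can be sketched as a simple classification. This is an illustrative sketch using only the approximate boundaries quoted in the text (below 2 GHz, 2–10 GHz, 20–100 GHz); the function name is made up for the example:

```python
def classify_5g_band(freq_ghz: float) -> str:
    """Classify a carrier frequency into the 5G band tiers described above.

    Boundaries follow the approximate figures quoted in the text:
    low-band below 2 GHz, mid-band 2-10 GHz, high-band (mmWave) 20-100 GHz.
    """
    if freq_ghz < 2:
        return "low-band"
    if freq_ghz <= 10:
        return "mid-band"
    if 20 <= freq_ghz <= 100:
        return "high-band (mmWave)"
    return "outside the tiers described"

# Examples: 600 MHz is low-band, 3.5 GHz is mid-band, 28 GHz is mmWave.
print(classify_5g_band(0.6))   # low-band
print(classify_5g_band(3.5))   # mid-band
print(classify_5g_band(28.0))  # high-band (mmWave)
```

Note that the quoted tiers leave a gap between 10 and 20 GHz, which the sketch reports explicitly rather than forcing into a tier.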
The main advantage of 5G networks is that the data transmission rate is much higher than in previous cellular networks, up to 10 Gbit/s, which is faster than typical wired Internet access and 100 times faster than 4G LTE. Another advantage is lower network latency (faster response time): less than 1 millisecond, compared with 30–70 milliseconds for 4G. Since 5G is a relatively new type of service, only recently released or upcoming phones support it. These include the iPhone 12/13; select Samsung devices such as the S21 series, Note series, Flip/Fold series, and A series; the Google Pixel 4a/5; and a few more devices from other manufacturers. One of the first 5G smartphones, the Samsung Galaxy S20, was released by Samsung in March 2020. Following the release of Samsung's S20 series, Apple integrated 5G compatibility into the iPhone 12, released in fall 2020. These 5G phones harnessed 5G capability and gave consumers access to speeds rapid enough for high-demand streaming and gaming.[8] Another type of cellular device being adopted is the 5G hotspot. For people with devices that are only Wi-Fi-capable, 5G hotspots provide strong performance when home Wi-Fi is unavailable. Private 5G networks are also growing rapidly among businesses. 5G can help businesses keep up with the growing networking demands of newer technologies such as AI, machine learning, and AR, as well as regular operations. As stated by Verizon, a private 5G network allows large enterprise and public-sector customers to bring a custom-tailored 5G experience to indoor or outdoor facilities where high-speed, high-capacity, low-latency connectivity is crucial.[9] Access to such high-performing networks opens the door to many opportunities for different companies.
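The rate and latency figures above can be turned into a rough back-of-the-envelope transfer-time estimate. This is an idealized sketch (one round-trip of latency plus serialization at the link rate, ignoring protocol overhead and congestion control); the 2 GB file size and the 0.1 Gbit/s 4G rate are arbitrary example values consistent with the "100 times faster" claim:

```python
def transfer_time_s(size_gigabytes: float, rate_gbit_s: float, latency_ms: float) -> float:
    """Idealized fetch time: one round-trip latency plus time to serialize
    the payload at the link rate. Ignores protocol overhead entirely."""
    bits = size_gigabytes * 8e9          # gigabytes -> bits
    return latency_ms / 1e3 + bits / (rate_gbit_s * 1e9)

# A 2 GB file over peak 5G (10 Gbit/s, 1 ms latency)
# versus a 4G LTE link (0.1 Gbit/s, 50 ms latency).
print(f"5G: {transfer_time_s(2, 10, 1):.3f} s")    # about 1.6 s
print(f"4G: {transfer_time_s(2, 0.1, 50):.1f} s")  # about 160 s
```

The hundredfold difference in link rate dominates; at these sizes the latency term is negligible, which is why latency matters mostly for small, interactive exchanges rather than bulk transfers.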
Being able to connect vast numbers of devices to a reliable and powerful network will be crucial for companies and their technologies moving forward. The operating system (OS) is the program that manages all applications in a computer and is often considered the most important software. In order for a computer to run, applications make requests to the OS through an application programming interface (API),[10] and users interact with the OS through a command line or graphical user interface, often with a keyboard and mouse or by touch. A computer without an operating system serves no purpose, as it will not be able to operate and run tasks effectively.[11] Since the OS manages the computer's hardware and software resources, without it the computer cannot coordinate communication between applications and the hardware connected to it. When someone purchases a computer, the operating system is usually preloaded. The most common operating systems are Microsoft Windows, Apple macOS, Linux, Android, and Apple's iOS. A majority of modern operating systems use a graphical user interface (GUI). A GUI allows the user to perform specific tasks, such as using a mouse to click on icons, buttons, and menus, and allows graphics and text to be displayed clearly. In 1985 Microsoft created the Windows operating system, the most popular operating system worldwide. As of October 2021, the most recent version of Windows is Windows 10; earlier versions include Windows 7 and 8. In most computers, Windows comes preloaded. According to Medium, "Windows achieved its popularity by targeting everyday average users, who are not mainly concerned by the optimal robustness and security of their machines, but are more focused on the usability, familiarity, and availability of productivity tools." Another popular operating system is Apple's Mac OS X.
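The point above that applications request OS services through an API rather than touching hardware directly can be sketched with Python's standard library, whose `os` module wraps such operating-system calls:

```python
import os
import tempfile

# Applications do not drive hardware directly; they ask the operating
# system for services through an API. Each call below is serviced by the OS.

pid = os.getpid()    # ask the OS for this process's identifier
cwd = os.getcwd()    # ask the OS for the current working directory

# Ask the OS to create, write, close, and delete a file on our behalf.
fd, path = tempfile.mkstemp()          # OS allocates a file and a descriptor
os.write(fd, b"hello from user space") # OS copies the bytes to the file
os.close(fd)                           # OS releases the descriptor
os.remove(path)                        # OS deletes the file

print(f"process {pid} running in {cwd}")
```

Every one of these operations ultimately becomes a system call; the same program runs unchanged on Windows, macOS, or Linux because the API hides the hardware and filesystem differences.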
macOS and Microsoft Windows compete head-to-head, as both are commonly used. Apple also offers a mobile operating system called iOS. This OS is used exclusively on iPhones, among the most popular phones on the market. These devices are regularly updated as new features are introduced. According to The Verge, "Many users appreciate the unique user interface with touch gestures and the ease of use that iOS offers." Mobile and computer operating systems differ substantially, since they are developed for different users. Unlike mobile operating systems, computer operating systems are far more complex because they manage more data. The two also have different user interfaces, and since computer operating systems have been around longer than phone systems, they are more widely established. Another significant difference is that mobile phones do not offer a desktop feature like most computers. What sets the interfaces of mobile devices apart from those of computers is that they are simpler to use. Many types of mobile operating systems (OS) are available for smartphones, including Android, BlackBerry OS, webOS, iOS, Symbian, Windows Mobile Professional (touch screen), Windows Mobile Standard (non-touch screen), and Bada. The most popular are the Apple iPhone's iOS and, the newest, Android. Android, a mobile OS developed by Google, is the first completely open-source mobile OS, meaning that it is free for any cell phone mobile network to adopt. Since 2008, customizable OSs have allowed users to download apps such as games, GPS, utilities, and other tools. Users can also create their own apps and publish them, e.g. to Apple's App Store. The Palm Pre, using webOS, has functionality over the Internet and can support Internet-based programming languages such as Cascading Style Sheets (CSS), HTML, and JavaScript. The Research In Motion (RIM) BlackBerry is a smartphone with a multimedia player and third-party software installation.
The Windows Mobile Professional smartphones (Pocket PC or Windows Mobile PDA) are like personal digital assistants (PDAs) and have touchscreen abilities. The Windows Mobile Standard does not have a touch screen but uses a trackball, touchpad, or rockers. File sharing is likely to take a hit: the typical web surfer wants to look at a new web page every minute or so, and at 100 kbit/s a page loads quickly. Because of changes to the security of wireless networks, users will be unable to make huge file transfers, as service providers want to reduce channel use. AT&T claimed that it would ban any of its users caught using peer-to-peer (P2P) file sharing applications on its 3G network. It then became apparent that this would also prevent those users from using their iTunes programs; they would be forced to find a Wi-Fi hotspot to be able to download files. The limits of wireless networking will not be cured by 4G, as there are too many fundamental differences between wireless networking and other means of Internet access. If wireless vendors do not recognize these differences and bandwidth limits, future wireless customers will find themselves disappointed and the market may suffer setbacks. The mobile Internet[12] emerged from the development of the PC Internet in the form of handheld, portable devices. The combination of mobile communication and the Internet has given users easier access to going online through mobile technologies such as smartphones, tablets, and laptops, the most popular of these devices. It is a general term for activities in which the technology, platforms, business models, and applications of the Internet are combined with mobile communications technology.
The current medical industry has started to incorporate new emerging technologies such as online medical treatment, online appointments, telemedicine cooperation, and online payment into its practices.[13] A growing number of hospitals and clinics have started implementing electronic health record (EHR) systems to help manage their patients' big data instead of traditional paper file records. Electronic health records are patients' records and information stored digitally, accessible online by authorized personnel only.[14] From the patient's perspective, the continuous evolution of technology allows relevant medical services and treatments to become more effective and personal. With advancements in 3D medical technology, efficient, customizable healthcare, such as tailored medicine and surgeries, is becoming increasingly achievable.[15] Experts are determined to find the optimal applications of technology in the medical field to make customizable healthcare affordable, cost-efficient, and practical. They have begun to study and apply 3D technology to surgical procedures: surgeons and surgeons-in-training have started using 3D-printed physical simulations, built from patients' data, to navigate cranial surgeries.[16] Mobile e-commerce can provide users with the services, applications, information, and entertainment they need anytime, anywhere. Purchasing and using goods and services have become more convenient with the introduction of the mobile terminal, and websites have started to adopt various forms of mobile payments. Mobile payment platforms not only support various bank cards for online payment, but also support terminal operations via mobile phones and telephones, meeting the needs of online consumers in pursuit of personalization and diversification.
Due to the COVID-19 pandemic, the usage of m-commerce has skyrocketed at popular retailers such as Amazon, 7-Eleven, and other large chains. Shopping online has made many more stores accessible and convenient for customers, as long as the applications are designed to be straightforward and simple. Poor UI/UX design is a major factor in deterring customers from completing their purchases or navigating online stores.[17] Customers highly value their time and therefore seek practices that reduce time spent in stores, which also applies to online applications and websites. Many in-person stores have also started to use contactless and digital payments to reduce face-to-face interactions, taking advantage of the convenience digital technology provides.[18] Amazon Go is a newly implemented, highly technical store that allows customers to shop while skipping the checkout process. With the use of enhanced technology, Amazon Go calculates the total cost of the items that are selected and put into the customer's "virtual" basket. As long as customers have some form of payment linked to their Amazon accounts, they can leave the store without going through a checkout line, because payment is made automatically upon exit.[19] As more and more customers rely on virtual online transactions, the need for security and Internet access becomes ever more important. Augmented reality, also known as "mixed reality", uses computer technology to apply virtual information to the real world: the real environment and virtual objects are superimposed on the same screen or space in real time. Augmented reality provides information that, in general, differs from what humans can perceive directly.
It not only displays real-world information, but also displays virtual information at the same time, and the two kinds of information complement each other. According to data gathered by PSFK Research, customers highly value their time: 72% of customers want faster, more efficient checkout with the help of technology, while 61% want technology that helps them find their items faster.[20] Retailers and businesses have implemented augmented reality to help them manage inventory efficiently and to support the more flexible schedules brought on by remote work. Visualizing items such as clothes, make-up, and shoes gives users a better, curated shopping experience, which can streamline checkout and reduce the time spent in the store.[21] Increasing mobile technology use has changed how the modern family interacts with one another through technology. With the rise of mobile devices, families are increasingly "on the move" and spend less time in physical contact with one another. However, this trend does not mean that families are no longer interacting with each other; rather, their interaction has evolved into a more digitized variant. A study has shown that the modern family actually learns better with the use of mobile media, and children are more willing to cooperate with their parents via a digital medium than through a more direct approach. For example, family members can share information from articles or online videos via mobile devices and thus stay connected with one another during a busy day. Family members can also use video-chat platforms to stay in touch even when they are physically apart. This can be taken one step further with applications that offer features such as photo sharing between families and life updates through statuses and pictures. Examples of these applications include Google Photos, Facebook, Instagram, and Twitter.
Aside from these, there are also finance-management and e-book applications that provide collaboration features for family members. This matters because even when a family is in the same household, lifestyle-related tasks are easier to manage when they are at the tips of one's fingers. As the world has become more digitalized, mobile technology has played its part in keeping up with the times. This is also evident in the many mobile applications created to increase communication between those who live in the same household and even those who may be far away. It is no surprise that reliance on mobile technology has increased, but being able to navigate this fast-paced change positively is what is necessary in this day and age. The future indicates that the world will only increase its dependency on technology, and as mobile companies offer upgraded devices, the appeal of staying mobile will only grow. Forbes speaks to this, collecting predictions from nine tech experts who share what the future looks like for smartphones. Forbes states, "The members of Forbes Technology Council have their finger on the pulse of upcoming technology advances, including those in the smartphone market."[22] Forbes outlines that there will be more diverse interfaces that feel more natural and easy to use. Increased interaction with voice assistants will also make users more comfortable with assistants such as Alexa, Cortana, and other such artificial intelligence. It is clear that mobile technology is the future of our world, and it will only become more integrated into family members' day-to-day communication. This trend is not without controversy, however. Many parents of elementary-school-age children express concern and sometimes disapproval of heavy mobile technology use.
Parents may feel that excessive usage of such technologies distracts children from "unplugged" bonding experiences, and many express safety concerns about children using mobile media. While parents may have many concerns, they are not necessarily anti-technology. In fact, many parents express approval of mobile technology usage if their children can learn something from the session, for example through art or music tutorials on YouTube. Rikuya Hosokawa and Toshiki Katsura speak to this in their article "Association between mobile technology use and child adjustment in early elementary school age", in which they argue that the positive or negative effects of mobile technology depend entirely on its context and use. The authors cite studies showing that even where increased screen time brings positive development of cognitive and academic skills, there can be substantial negative effects on a child's social and psychological development, ranging from reduced face-to-face interaction to disrupted sleep and behavior.[23] In family life, this technological invention has had positive and negative effects in equal measure. While some view these devices as having eased communication among people and families, some researchers have argued otherwise. On the positive side, these devices have strengthened family units. For example, families compensate for daily stress through text messages, phone calls, and e-mails. Internet-enabled phones have also supported connection through social sites, where family members can discuss their issues even if they are far apart (Alamenciak, 2012). In America, for instance, parents have adjusted to modern technology, thus increasing their connection with their children who may be working in different states. Cell phones bring families together, as they increase the quality of communication among family members living far apart.
Families use cell phones to get in touch with their children through e-mail and the web (George, 2008). They contact their children to see how they are doing and to entertain them in the process. Moreover, cell phone communication brings families closer, strengthening the relationships between family members. Family heads can promote values and set good examples for their children, encouraging openness and communication in case problems arise in the family, as well as security, since family members get the opportunity to know each other well. Cell phones have also enhanced accountability, both at work and at home: people keep in touch with their co-workers and employees as well as their family members (Good Connection, Bad Example: Cell Phones and The Family, 2007). The next generation of smartphones will be context-aware, taking advantage of the growing availability of embedded physical sensors and data-exchange abilities. One of the main features is that phones will start keeping track of users' personal data and adapt to anticipate the information users will need. All-new applications will come out with the new phones, one of which is an "X-ray" app that reveals information about any location at which the phone is pointed. Companies are developing software to take advantage of more accurate location-sensing data. This has been described as making the phone a virtual mouse able to click the real world.[citation needed] An example would be pointing the phone's camera at a building while the live feed is open; the phone would then overlay text on the image of the building and save its location for future use.
The future of smartphones is ever-growing, as smartphone technology is fairly new, existing only for the last two decades, with the first one released to the market in 1994 by IBM.[24] Currently, smartphones are ubiquitous tools that many rely on for leisure, business, entertainment, productivity, and much more. There are currently 237 brands of smartphones, with thousands of models combined, and these numbers are growing.[25] Companies release smartphones for each use case and for different price segments. Over the past decade smartphone prices have risen, fueling a boom in the low-end and mid-range price segments; we can expect price ceilings to keep rising in the coming years. Smartphones are also becoming powerful computational tools in the medical industry, used both in and outside of clinics.[26] Omnitouch is a device via which apps can be viewed and used on a hand, arm, wall, desk, or any other everyday surface. The device uses a sensor touch interface, which enables the user to access all functions with the touch of a finger. It was developed at Carnegie Mellon University and uses a projector and camera worn on the user's shoulder, with no controls other than the user's fingers. Throughout the last decade, smartphone SoCs (systems on a chip) have rapidly gained speed to catch up with desktop-class CPUs and GPUs. Modern smartphones can perform tasks similar to those of computers, with speed and efficiency; that efficiency is what drives the mobile-first society in which smartphones are ubiquitous.[27] The computational speed of smartphone chips, measured in FLOPS, has been estimated to approach the processing power of a rat's neural column.
With this rapid development, smartphone SoCs will soon be powerful enough to replace computer chips for the most part in the consumer market, as they are cheaper and very efficient while being as powerful, if not more so.[28] On the go, connectivity is more important than ever, as smartphones take on more and more of the tasks that once required sitting in front of a computer. 6G connectivity will enable a futuristic realm that is not yet possible, including holographic communication, virtual reality, and autonomous driving. With ten times the speed of 5G, 6G could blend virtual reality into the real world to give an immersive experience, and it has applications in almost every industry. Internet-connected devices are ubiquitous, and hyper-connectivity like 6G will provide latency-free communication for robust automation.[29] Smartphone companies try to blend form and function for optimal customer value. Some companies have come out with radical designs that change the norms of phone design, such as the Samsung Galaxy Fold, a foldable phone with a bendable screen.[30] It was effectively a prototype when it debuted, but it has since gone through three iterations, and other companies have adopted the design. The new design brought a hike in retail price, but as competition increases, prices should follow the market. Flexible-screen technology opens up new design possibilities. Screen size has also played a big role in the smartphone industry, allowing companies to pack more technology into the body and catering to the high demand for big-screen smartphones.[31] The first popular touchscreen smartphone, the original iPhone that Apple introduced in 2007, had a screen size of approximately 3.5 inches. That has almost doubled, to 6.7 inches, in Apple's current lineup, while other companies have even crossed 7 inches. Borderless phones lack bezels, allowing the screen to be larger.
Fitting a larger screen into a phone of limited size can improve one-handed operability, aesthetics, and the sense of advanced technology. However, the technical problems faced by borderless designs, such as light leakage on the screen, accidental touches on the edges, and more fragile exposed screens, have all been obstacles to the popularization of this technology. A transparent phone is a mobile phone that uses switchable glass to achieve a visual-penetration effect, making its body appear transparent. Transparent mobile phones use special switchable-glass technology: once the electrically controlled glass is activated by a current through a transparent wire, its molecules rearrange to form text, icons, and other images. A further idea is that a cell phone could be made directly at the chip level and implanted in the body, serving as a brain-assisting tool to help improve work efficiency and sensory experience. Mobile technology, driven by the convergence of mobile communication technology and mobile computing technology, mainly includes four types of technologies.
https://en.wikipedia.org/wiki/Mobile_technology
The Global System for Mobile Communications (GSM) is a family of standards describing the protocols for second-generation (2G) digital cellular networks,[2] as used by mobile devices such as mobile phones and mobile broadband modems. GSM is also a trade mark owned by the GSM Association.[3] "GSM" may also refer to the voice codec initially used in GSM.[4] 2G networks developed as a replacement for first-generation (1G) analog cellular networks. The original GSM standard, developed by the European Telecommunications Standards Institute (ETSI), described a digital, circuit-switched network optimized for full-duplex voice telephony, employing time-division multiple access (TDMA) between stations. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via its upgraded standards, GPRS and then EDGE. GSM exists in various versions based on the frequency bands used. GSM was first implemented in Finland in December 1991.[5] It became the global standard for mobile cellular communications, with over 2 billion GSM subscribers globally in 2006, far above its competing standard, CDMA.[6] Its market share reached over 90% by the mid-2010s, operating in over 219 countries and territories.[2] The specification and maintenance of GSM passed to the 3GPP body in 2000,[7] which at the time was developing the third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and fifth-generation 5G standards, which do not form part of the GSM standard. Beginning in the late 2010s, various carriers worldwide started to shut down their GSM networks; nevertheless, as a result of the network's widespread use, the acronym "GSM" is still used as a generic term for the many mobile phone technologies that evolved from it, or for mobile phones themselves.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the Groupe Spécial Mobile (GSM) committee and later provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard.[8] The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.[9][10][11][12] In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers from the four big EU countries[clarification needed] cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May, and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date. In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM with rare unity and speed, guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy).[13] In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI).[10][11][12] The IEEE/RSE awarded Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell medal for their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication".[14] GSM (2G) has evolved into 3G, 4G and 5G. In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986.
In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. It was long believed that the former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja.[15] In 2021, former Nokia engineer Pekka Lonka revealed to Helsingin Sanomat that he had made a test call just a couple of hours earlier: "The world's first GSM call was actually made by me. I called Marjo Jousinen, in Salo," Lonka said.[16] The following year saw the sending of the first Short Message Service (SMS, or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement. Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, called DCS 1800, became operational in the UK by 1993. Also that year, Telstra became the first network operator to deploy a GSM network outside Europe, and the first practical hand-held GSM mobile phone became available. In 1995, fax, data, and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States, and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association was formed. Pre-paid GSM SIM cards were launched in 1996, and worldwide GSM subscribers passed 100 million in 1998.[11] In 2000 the first commercial General Packet Radio Service (GPRS) services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million.
In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. Enhanced Data rates for GSM Evolution (EDGE) services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.[11] By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.[11] The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.[17] GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS) code-division multiple access (CDMA) technology, nor the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP.[18] GSM, for the first time, set a common standard for wireless networks in Europe. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other.
The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.[19]

Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network.[20] The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States.[21] Optus in Australia completed the shutdown of its 2G GSM network on 1 August 2017; the part of the Optus GSM network covering Western Australia and the Northern Territory had been shut down earlier that year, in April 2017.[22] Singapore shut down 2G services entirely in April 2017.[23]

The network is structured into several discrete sections. GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network, and the coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband-internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.

Cell horizontal radius varies, depending on antenna height, antenna gain, and propagation conditions, from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi).
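The 35 km figure follows from the size of GSM's timing-advance field rather than from radio propagation alone. A minimal sketch of that arithmetic, assuming the standard GSM parameters (a 6-bit timing advance of 0-63 steps, each step one GSM bit period of 48/13 microseconds, covering the round-trip delay), which are not spelled out in the text above:

```python
# Sketch: deriving the ~35 km practical GSM cell limit from timing advance.
# Assumed standard GSM figures: TA field 0..63, one bit period = 48/13 us,
# and the delay budget covers the round trip, so radius per step = c*Tb/2.
C = 299_792_458          # speed of light, m/s
BIT_PERIOD = 48e-6 / 13  # GSM bit period, ~3.69 microseconds

step_m = C * BIT_PERIOD / 2          # ~553.5 m of cell radius per TA step
max_radius_km = 63 * step_m / 1000   # ~34.9 km, i.e. the quoted 35 km limit
```

Extended-cell configurations (discussed below) work around this limit, trading capacity for range rather than changing the underlying frame timing.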
There are also several implementations of the concept of an extended cell,[24] where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance.

GSM supports indoor coverage, achievable by using an indoor picocell base station or an indoor repeater with distributed indoor antennas fed through power splitters, to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.

GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems. For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands.

Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot.
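The frame structure above can be sketched numerically. This assumes the commonly cited GSM carrier figures (a gross rate of 270.833 kbit/s shared by the 8 timeslots of one 4.615 ms TDMA frame), stated as facts about the standard rather than taken from the surrounding text:

```python
# Sketch of GSM TDMA frame arithmetic under the standard carrier figures.
GROSS_RATE_KBITS = 270.833   # gross rate of one radio carrier, kbit/s
SLOTS_PER_FRAME = 8
FRAME_MS = 4.615             # TDMA frame duration, ms

per_slot_kbits = GROSS_RATE_KBITS / SLOTS_PER_FRAME  # ~33.85 kbit/s gross
burst_ms = FRAME_MS / SLOTS_PER_FRAME                # ~0.577 ms per burst

# A half-rate channel uses its timeslot only in alternate frames,
# so it gets half the gross rate of a full-rate channel.
half_rate_kbits = per_slot_kbits / 2                 # ~16.9 kbit/s gross
```

The gap between the ~33.85 kbit/s gross slot rate and the 13 kbit/s full-rate speech codec is what the heavy GSM channel coding consumes.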
The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms.[25] TDMA noise is interference that can be heard on speakers near a GSM phone using TDMA, audible as a buzzing sound.[26] The transmission power in the handset is limited to a maximum of 2 watts in GSM 850/900 and 1 watt in GSM 1800/1900.

GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used: Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997[27] with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.

One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card[3] containing a user's subscription information and phone book. This allows users to retain their information after switching handsets. Alternatively, users can change networks or network identities without switching handsets, simply by changing the SIM. Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone.
A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator. In some countries and regions (e.g. Brazil and Germany) all phones are sold unlocked due to the abundance of dual-SIM handsets and operators.[28]

GSM was intended to be a secure wireless system. It includes user authentication using a pre-shared key and challenge-response, and over-the-air encryption. However, GSM is vulnerable to different types of attack, each aimed at a different part of the network.[29]

Research findings indicate that GSM is susceptible to hacking by script kiddies, a term referring to inexperienced individuals utilizing readily available hardware and software. The vulnerability arises from the accessibility of tools such as a DVB-T TV tuner, posing a threat to both mobile and network users. Despite the term "script kiddies" implying a lack of sophisticated skills, the consequences of their attacks on GSM can be severe, impacting the functionality of cellular networks. Given that GSM continues to be the main cellular technology in numerous countries, its susceptibility to malicious attacks needs to be addressed.[30]

The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), which uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.

GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy.
A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real time with a ciphertext-only attack, and in January 2007 The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack.[31] The system supports multiple algorithms, so operators may replace that cipher with a stronger one.

Since 2000, different efforts have been made to crack the A5 encryption algorithms. Both A5/1 and A5/2 have been broken, and their cryptanalysis has been revealed in the literature. For example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known-plaintext attacks.[32] He said that it is possible to build "a full GSM interceptor ... from open-source components" but that they had not done so because of legal concerns.[33] Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.[34]

GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.[35] The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks.
If used with a USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.

The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that although GEA-1 uses a 64-bit key, the algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have arisen if it was not intentional. This may have been done in order to satisfy European controls on the export of cryptographic programs.[36][37][38]

The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained.[39] Several open-source software projects exist that provide certain GSM features,[40] such as the base transceiver station developed by OpenBTS and the Osmocom stack providing various parts.[41]

Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are being added to the standard all the time, which means they have patent protection for a number of years.[citation needed]

The original GSM implementations from 1991 may now be entirely free of patent encumbrances; however, patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment", can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version.
As of 2011, there have been no lawsuits against users of OpenBTS over GSM use.[citation needed]
https://en.wikipedia.org/wiki/GSM
In communications, Circuit Switched Data (CSD), also named GSM data, is the original form of data transmission developed for time-division multiple access (TDMA)-based mobile phone systems like the Global System for Mobile Communications (GSM). In later years, High Speed Circuit Switched Data (HSCSD) was developed, providing increased data rates over conventional CSD. After 2010 many telecommunication carriers dropped support for CSD and HSCSD, which had been superseded by GPRS and EDGE (E-GPRS).

CSD uses a single radio time slot to deliver 9.6 kbit/s data transmission to the GSM network switching subsystem, where it can be connected through the equivalent of a normal modem to the Public Switched Telephone Network (PSTN), allowing direct calls to any dial-up service.

For backwards compatibility, the IS-95 standard also supports CDMA Circuit Switched Data. However, unlike TDMA, there are no time slots, and all CDMA radios can be active all the time to deliver up to 14.4 kbit/s data transmission speeds. With the evolution of CDMA to CDMA2000 and 1xRTT, the use of IS-95 CDMA Circuit Switched Data declined in favour of the faster data transmission speeds available with the newer technologies.

Prior to CSD, data transmission over mobile phone systems was done by using a modem, either built into the phone or attached to it. Such systems were limited by the quality of the audio signal to 2.4 kbit/s or less. With the introduction of digital transmission in TDMA-based systems like GSM, CSD provided almost direct access to the underlying digital signal, allowing for higher speeds. At the same time, the speech-oriented audio compression used in GSM actually meant that data rates using a traditional modem connected to the phone would have been even lower than with older analog systems.

A CSD call functions in a very similar way to a normal voice call in a GSM network. A single dedicated radio time slot is allocated between the phone and the base station.
A dedicated 16 kbit/s "sub-time slot" is allocated from the base station to the transcoder, and finally another 64 kbit/s time slot is allocated from the transcoder to the Mobile Switching Centre (MSC). At the MSC, it is possible to use a modem to convert to an analog signal, though this will typically actually be encoded as a digital pulse-code modulation (PCM) signal when sent from the MSC. It is also possible to use the digital signal directly as an Integrated Services Digital Network (ISDN) data signal and feed it into the equivalent of a remote access server.

High Speed Circuit Switched Data (HSCSD) is an enhancement to CSD designed to provide higher data rates by means of more efficient channel coding and/or multiple (up to 4) time slots. It requires the time slots being used to be fully reserved to a single user. A transfer rate of up to 57.6 kbit/s (i.e., 4 × 14.4 kbit/s) can be reached, or even 115.2 kbit/s if a network allows combining 8 slots instead of just 4. It is possible that either at the beginning of a call, or at some point during it, the user's full request cannot be satisfied, since the network is often configured to allow normal voice calls to take precedence over additional time slots for HSCSD users.

An innovation in HSCSD is to allow different error-correction methods to be used for data transfer. The original error correction used in GSM was designed to work at the limits of coverage and in the worst case that GSM will handle. This means that a large part of the GSM transmission capacity is taken up with error-correction codes. HSCSD provides different levels of possible error correction, which can be used according to the quality of the radio link. In the best conditions, 14.4 kbit/s can be put through a single time slot that under CSD would only carry 9.6 kbit/s, i.e. a 50% improvement in throughput.
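The HSCSD rate arithmetic above reduces to per-slot rate times reserved slots. A minimal sketch using only the figures quoted in the text (9.6 kbit/s per slot with CSD coding, 14.4 kbit/s with the lighter HSCSD coding):

```python
# Sketch: HSCSD peak rates as per-slot rate x number of reserved slots.
def hscsd_rate_kbits(slots: int, per_slot_kbits: float = 14.4) -> float:
    """Peak HSCSD rate for a given number of fully reserved time slots."""
    return slots * per_slot_kbits

four_slot = hscsd_rate_kbits(4)        # 57.6 kbit/s (the usual maximum)
eight_slot = hscsd_rate_kbits(8)       # 115.2 kbit/s (8-slot networks)
csd_single = hscsd_rate_kbits(1, 9.6)  # plain CSD: 9.6 kbit/s

# The "50% improvement" in the text is the per-slot coding gain:
gain = (14.4 - 9.6) / 9.6              # 0.5
```

Note that these are peak figures: the network can reclaim the extra slots for voice calls at any time, as described above.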
The user is typically charged for HSCSD at a rate higher than a normal phone call (e.g., by the number of time slots allocated) for the total period of time that the user has a connection active. This makes HSCSD relatively expensive in many GSM networks, and is one of the reasons that packet-switched General Packet Radio Service (GPRS), which typically has lower pricing (based on the amount of data transferred rather than the duration of the connection), has become more common than HSCSD.

Apart from the fact that the full allocated bandwidth of the connection is available to the HSCSD user, HSCSD also has an advantage in GSM systems in terms of lower average radio-interface latency than GPRS. This is because the user of an HSCSD connection does not have to wait for permission from the network to send a packet.

HSCSD is also an option in Enhanced Data rates for GSM Evolution (EDGE) and Universal Mobile Telecommunications System (UMTS) systems, where packet data transmission rates are much higher. In the UMTS system, the advantages of HSCSD over packet data are even lower, since the UMTS radio interface has been specifically designed to support high-bandwidth, low-latency packet connections. This means that the primary reason to use HSCSD in this environment would be access to legacy dial-up systems.

HSCSD was specified in 1997.[1] The Nokia 6210 was the first mobile phone from Nokia that supported HSCSD. GSM data transmission has advanced since the introduction of CSD, but in some places CSD services have continued to operate on 2G networks for a long time. In the Netherlands, operator KPN switched the service off in 2021.[3]
https://en.wikipedia.org/wiki/Circuit_Switched_Data
General Packet Radio Service (GPRS), also called 2.5G, is a mobile data standard on the 2G cellular communication network's Global System for Mobile Communications (GSM).[1] Networks and mobile devices with GPRS started to roll out around the year 2001;[2] it offered, for the first time on GSM networks, seamless data transmission using packet data for an "always-on" connection (eliminating the need to "dial up"),[3] thus providing improved Internet access for web, email, WAP services, Multimedia Messaging Service (MMS) and others.[4]

Up until the rollout of GPRS, only circuit-switched data was used in cellular networks, meaning that one or more radio channels were occupied for the entire duration of a data connection. On GPRS networks, by contrast, data is broken into small packets and transmitted through available channels.[5] This increased efficiency gives it theoretical data rates of 56–114 kbit/s,[6] significantly faster than the preceding Circuit Switched Data (CSD) technology. GPRS was succeeded by EDGE ("2.75G"), which provided improved performance and speeds on the 2G GSM system.

The GPRS core network allows 2G, 3G and WCDMA mobile networks to transmit IP packets to external networks such as the Internet. The GPRS system is an integrated part of the GSM network switching subsystem.[7][8][9]

GPRS is a best-effort service, implying variable throughput and latency that depend on the number of other users sharing the service concurrently, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection. It uses unused time-division multiple access (TDMA) channels in the GSM system for efficiency. Unlike older circuit-switched data, GPRS was sold according to the total volume of data transferred instead of time spent online,[10] which is now standard. GPRS extends the GSM circuit-switched data capabilities and makes new services possible. If SMS over GPRS is used, an SMS transmission speed of about 30 SMS messages per minute may be achieved.
This is much faster than ordinary SMS over GSM, whose transmission speed is about 6 to 10 SMS messages per minute.

As the GPRS standard is an extension of GSM capabilities, the service operates on the 2G and 3G cellular communication GSM frequencies.[8][11] GPRS devices can typically use one or more of the frequencies within one of the frequency bands the radio supports (850, 900, 1800, 1900 MHz). Depending on the device, location and intended use, regulations may be imposed either restricting or explicitly specifying authorised frequency bands.[11][12][13] GSM-850 and GSM-1900 are used in the United States, Canada, and many other countries in the Americas. GSM-900 and GSM-1800 are used in Europe, the Middle East, Africa and most of Asia. In Latin America these bands are used in Costa Rica (GSM-1800), Brazil (GSM-850, 900 and 1800), Guatemala (GSM-850, GSM-900 and 1900) and El Salvador (GSM-850, GSM-900 and 1900). There is a more comprehensive record of international cellular service frequency assignments.

GPRS supports several protocols. When TCP/IP is used, each phone can have one or more IP addresses allocated. GPRS will store and forward the IP packets to the phone even during handover. TCP restores any packets lost (e.g. due to a radio-noise-induced pause).

Devices supporting GPRS are grouped into three classes. Because a Class A device must service GPRS and GSM networks together, it effectively needs two radios. To avoid this hardware requirement, a GPRS mobile device may implement the dual transfer mode (DTM) feature. A DTM-capable mobile can handle both GSM packets and GPRS packets, with network coordination to ensure both types are not transmitted at the same time. Such devices are considered pseudo-Class A, sometimes referred to as "simple class A". Some networks have supported DTM since 2007.[citation needed]

USB 3G/GPRS modems have a terminal-like interface over USB with V.42bis and RFC 1144 data formats. Some models include an external antenna connector.
Modem cards for laptop PCs, or external USB modems, are available, similar in shape and size to a computer mouse or a pendrive.

A GPRS connection is established by reference to its access point name (APN). The APN defines the services, such as wireless application protocol (WAP) access, short message service (SMS), multimedia messaging service (MMS), and Internet communication services such as email and World Wide Web access. In order to set up a GPRS connection for a wireless modem, a user must specify an APN, optionally a user name and password, and very rarely an IP address, all provided by the network operator.

GSM or GPRS modules are similar to modems, with one difference: a modem is an external piece of equipment, whereas a GSM or GPRS module can be integrated within an electrical or electronic device; it is an embedded piece of hardware. A GSM mobile, on the other hand, is a complete embedded system in itself, with embedded processors dedicated to providing a functional interface between the user and the mobile network.

The upload and download speeds that can be achieved in GPRS depend on a number of factors. The multiple-access methods used in GSM with GPRS are based on frequency-division duplex (FDD) and TDMA. During a session, a user is assigned to one pair of up-link and down-link frequency channels. This is combined with time-domain statistical multiplexing, which makes it possible for several users to share the same frequency channel. The packets have constant length, corresponding to a GSM time slot. The down-link uses first-come first-served packet scheduling, while the up-link uses a scheme very similar to reservation ALOHA (R-ALOHA). This means that slotted ALOHA (S-ALOHA) is used for reservation inquiries during a contention phase, and then the actual data is transferred using dynamic TDMA with first-come first-served scheduling.
The channel encoding process in GPRS consists of two steps: first, a cyclic code is used to add parity bits, also referred to as the Block Check Sequence, followed by coding with a possibly punctured convolutional code.[14] The coding schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code.[14] In coding schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits.[14] In coding schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the desired code rate.[14] In coding scheme CS-4, no convolutional coding is applied.[14] The following table summarises the options.

The least robust, but fastest, coding scheme (CS-4) is available near a base transceiver station (BTS), while the most robust coding scheme (CS-1) is used when the mobile station (MS) is further away from a BTS. Using CS-4 it is possible to achieve a user speed of 20.0 kbit/s per time slot; however, with this scheme the cell coverage is 25% of normal. CS-1 achieves a user speed of only 8.0 kbit/s per time slot, but has 98% of normal coverage. Newer network equipment can adapt the transfer speed automatically depending on the mobile location.

In addition to GPRS, there are two other GSM technologies which deliver data services: circuit-switched data (CSD) and high-speed circuit-switched data (HSCSD). In contrast to the shared nature of GPRS, these establish a dedicated circuit (usually billed per minute). Some applications such as video calling may prefer HSCSD, especially when there is a continuous flow of data between the endpoints. The following table summarises some possible configurations of GPRS and circuit-switched data services.

The multislot class determines the speed of data transfer available in the uplink and downlink directions.
It is a value between 1 and 45 which the network uses to allocate radio channels in the uplink and downlink directions. Multislot classes with values greater than 31 are referred to as high multislot classes. A multislot allocation is represented as, for example, 5+2: the first number is the number of downlink timeslots and the second is the number of uplink timeslots allocated for use by the mobile station. A commonly used value is class 10 for many GPRS/EGPRS mobiles, which uses a maximum of 4 timeslots in the downlink direction and 2 timeslots in the uplink direction, with a maximum of 5 timeslots in use simultaneously across uplink and downlink. The network will automatically configure for either 3+2 or 4+1 operation depending on the nature of the data transfer.

Some high-end mobiles, usually also supporting UMTS, support GPRS/EDGE multislot class 32. According to 3GPP TS 45.002 (Release 12), Table B.1,[17] mobile stations of this class support 5 timeslots in downlink and 3 timeslots in uplink, with a maximum of 6 simultaneously used timeslots. If data traffic is concentrated in the downlink direction, the network will configure the connection for 5+1 operation. When more data is transferred in the uplink, the network can at any time change the constellation to 4+2 or 3+3. Under the best reception conditions, i.e. when the best EDGE modulation and coding scheme can be used, 5 timeslots can carry a bandwidth of 5 × 59.2 kbit/s = 296 kbit/s.
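The throughput arithmetic above is simply per-timeslot rate times the multislot allocation. A minimal sketch using figures quoted in the text (8.0 and 20.0 kbit/s per slot for GPRS CS-1 and CS-4, 59.2 kbit/s per slot for the best EDGE modulation and coding scheme, MCS-9):

```python
# Per-timeslot user rates quoted in the text, in kbit/s.
PER_SLOT_KBITS = {"CS-1": 8.0, "CS-4": 20.0, "MCS-9": 59.2}

def peak_kbits(scheme: str, timeslots: int) -> float:
    """Peak throughput = per-slot rate x allocated timeslots (hypothetical helper)."""
    return PER_SLOT_KBITS[scheme] * timeslots

# Multislot class 32 under the best EDGE scheme, 5+3 allocation:
downlink = peak_kbits("MCS-9", 5)      # 296.0 kbit/s
uplink = peak_kbits("MCS-9", 3)        # 177.6 kbit/s
# Class 10 GPRS mobile far from the BTS (CS-1), 4 downlink slots:
slow_downlink = peak_kbits("CS-1", 4)  # 32.0 kbit/s
```

The spread between those last two figures is why the same "GPRS" service could feel anywhere from modem-like to near-3G depending on handset class, coding scheme, and distance from the base station.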
In the uplink direction, 3 timeslots can carry a bandwidth of 3 × 59.2 kbit/s = 177.6 kbit/s.[18] Each multislot class identifies the number of timeslots available for transfer in each direction; the different multislot class specifications are detailed in Annex B of the 3GPP Technical Specification 45.002 (Multiplexing and multiple access on the radio path).

The maximum speed of a GPRS connection offered in 2003 was similar to a modem connection in an analog wire telephone network, about 32–40 kbit/s, depending on the phone used. Latency is very high; round-trip time (RTT) is typically about 600–700 ms and often reaches 1 s. GPRS is typically prioritized lower than speech, and thus the quality of connection varies greatly. Devices with latency/RTT improvements (via, for example, the extended UL TBF mode feature) are generally available, and network upgrades of these features are available with certain operators. With these enhancements the effective round-trip time can be reduced, resulting in a significant increase in application-level throughput speeds.

GSM was designed for voice, not data. It did not provide direct access to the Internet and it had a limited capacity of 9,600 bits per second.[19] The limitations of Circuit Switched Data (CSD) also included higher costs. GPRS opened in 2000[20] as a packet-switched data service embedded in the circuit-switched cellular radio network GSM. GPRS extends the reach of the fixed Internet by connecting mobile terminals worldwide.

GPRS was established by the European Telecommunications Standards Institute (ETSI) in response to the earlier CDPD and i-mode packet-switched cellular technologies, and is integrated into GSM Release 97 and newer releases. It is now maintained by the 3rd Generation Partnership Project (3GPP).[21][22] The CELLPAC[23] protocol developed in 1991–1993 was the trigger point for starting, in 1993, the specification of standard GPRS by ETSI SMG.
In particular, the CELLPAC Voice & Data functions introduced in a 1993 ETSI Workshop contribution[24] anticipated what were later known to be the roots of GPRS. This workshop contribution is referenced in 22 GPRS-related US patents.[25] Successor systems to GSM/GPRS like W-CDMA (UMTS) and LTE rely on key GPRS functions for mobile Internet access as introduced by CELLPAC. According to a study on the history of GPRS development,[26] Bernhard Walke and his student Peter Decker are the inventors of GPRS, the first system providing worldwide mobile Internet access.
https://en.wikipedia.org/wiki/GPRS
Enhanced Data rates for GSM Evolution (EDGE), also known as 2.75G and under various other names, is a 2G digital mobile phone technology for packet-switched data transmission. It is a subset of General Packet Radio Service (GPRS) on the GSM network and improves upon it, offering speeds close to 3G technology, hence the name 2.75G. EDGE is standardized by the 3GPP as part of the GSM family and as an upgrade to GPRS.

EDGE was deployed on GSM networks beginning in 2003, initially by Cingular (now AT&T) in the United States.[1] It could be readily deployed on existing GSM and GPRS cellular equipment, making it an easier upgrade for cellular companies compared to the UMTS 3G technology, which required significant changes.[2] Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection, originally with a maximum speed of 384 kbit/s.[3] Later, Evolved EDGE was developed as an enhanced standard providing further reduced latency and more than double the performance, with a peak bit-rate of up to 1 Mbit/s.

Enhanced Data rates for GSM Evolution is the common full name of the EDGE standard. Other names include Enhanced GPRS (EGPRS), IMT Single Carrier (IMT-SC), and Enhanced Data rates for Global Evolution. Although described as "2.75G" by the 3GPP body, EDGE is part of the International Telecommunication Union (ITU)'s 3G definition[4] and is recognized as part of the International Mobile Telecommunications-2000 (IMT-2000) standard for 3G.

EDGE/EGPRS is implemented as a bolt-on enhancement for 2.5G GSM/GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE is a superset of GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade. EDGE requires no hardware or software changes to be made in GSM core networks.
EDGE-compatible transceiver units must be installed and the base station subsystem needs to be upgraded to support EDGE. If the operator already has this in place, which is often the case today, the network can be upgraded to EDGE by activating an optional software feature. Today EDGE is supported by all major chip vendors for both GSM and WCDMA/HSPA.

In addition to Gaussian minimum-shift keying (GMSK), EDGE uses a higher-order modulation, 8 phase-shift keying (8PSK), for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase. This effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It introduces a new technology not found in GPRS, incremental redundancy, which, instead of retransmitting disturbed packets, sends additional redundancy information to be combined in the receiver, increasing the probability of correct decoding.

EDGE can carry a bandwidth of up to 236 kbit/s (with end-to-end latency of less than 150 ms) for 4 timeslots (the theoretical maximum is 473.6 kbit/s for 8 timeslots) in packet mode. This means it can handle four times as much traffic as standard GPRS. EDGE meets the International Telecommunication Union's requirement for a 3G network, and has been accepted by the ITU as part of the IMT-2000 family of 3G standards.[4] It also enhances the circuit data mode called HSCSD, increasing the data rate of this service.
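The "triples the gross data rate" claim above follows directly from the symbol mapping: GMSK carries 1 bit per symbol, while 8PSK's 8 constellation points carry log2(8) = 3 bits per symbol at the same symbol rate. A minimal sketch (the 59.2 kbit/s per-slot figure for the best EDGE scheme is an assumption consistent with the 473.6 kbit/s 8-slot maximum quoted above):

```python
import math

# 8PSK: 8 constellation points -> log2(8) = 3 bits per symbol,
# versus 1 bit per symbol for GMSK, hence the threefold gross-rate gain.
bits_per_symbol_8psk = int(math.log2(8))   # 3
bits_per_symbol_gmsk = 1
rate_multiplier = bits_per_symbol_8psk / bits_per_symbol_gmsk  # 3.0

# Peak EDGE throughput scales linearly with allocated timeslots.
PER_SLOT_KBITS = 59.2                      # best EDGE scheme (assumed figure)
eight_slot_max = 8 * PER_SLOT_KBITS        # 473.6 kbit/s theoretical maximum
four_slot = 4 * PER_SLOT_KBITS             # ~237 kbit/s, the 4-slot case above
```

The gain is in gross bits per symbol only; 8PSK symbols are harder to distinguish in noise, which is why EDGE falls back to GMSK schemes on poor channels via rate adaptation.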
The channel encoding process in GPRS as well as EGPRS/EDGE consists of two steps: first, a cyclic code is used to add parity bits, also referred to as the Block Check Sequence, followed by coding with a possibly punctured convolutional code.[5] In GPRS, the Coding Schemes CS-1 to CS-4 specify the number of parity bits generated by the cyclic code and the puncturing rate of the convolutional code.[5] In GPRS Coding Schemes CS-1 through CS-3, the convolutional code is of rate 1/2, i.e. each input bit is converted into two coded bits.[5] In Coding Schemes CS-2 and CS-3, the output of the convolutional code is punctured to achieve the desired code rate.[5] In GPRS Coding Scheme CS-4, no convolutional coding is applied.[5] In EGPRS/EDGE, the modulation and coding schemes MCS-1 to MCS-9 take the place of the coding schemes of GPRS and additionally specify which modulation scheme is used, GMSK or 8PSK.[5] MCS-1 through MCS-4 use GMSK and have performance similar (but not equal) to GPRS, while MCS-5 through MCS-9 use 8PSK.[5] In all EGPRS modulation and coding schemes, a convolutional code of rate 1/3 is used, and puncturing is applied to achieve the desired code rate.[5] In contrast to GPRS, the Radio Link Control (RLC) and medium access control (MAC) headers and the payload data are coded separately in EGPRS.[5] The headers are coded more robustly than the data.[5]

The first EDGE network was deployed by Cingular (now AT&T) in the United States[1] on June 30, 2003, initially covering Indianapolis.[8][9] T-Mobile US deployed its EDGE network in September 2005.[10][11] In Canada, Rogers Wireless deployed its EDGE network in 2004.[12] In Malaysia, DiGi launched EDGE beginning in May 2004, initially only in the Klang Valley.[13] In Europe, TeliaSonera in Finland rolled out EDGE in April 2004.[14] Orange began trialling EDGE in France in April 2005 before a consumer rollout later that year.[15] Bouygues Telecom completed its national deployment of EDGE in the country in 2005, strategically focusing on EDGE as cheaper to deploy than 3G networks.[16] Telfort was the first network in the Netherlands to roll out EDGE, having done so by May 2005.[17] Orange launched the UK's first EDGE network in February 2006.[18] The Global Mobile Suppliers Association reported in 2008 that EDGE networks had been launched in 147 countries around the world.[19]

Evolved EDGE, also called EDGE Evolution and 2.875G, is a bolt-on extension to the GSM mobile telephony standard, which improves on EDGE in a number of ways. Latencies are reduced by halving the Transmission Time Interval (from 20 ms to 10 ms). Bit rates are increased up to 1 Mbit/s peak bandwidth and latencies reduced down to 80 ms using dual carrier, a higher symbol rate and higher-order modulation (32QAM and 16QAM instead of 8PSK), and turbo codes to improve error correction. This results in real-world downlink speeds of up to 600 kbit/s.[20] Further, signal quality is improved using dual antennas, improving average bit-rates and spectrum efficiency. The main motivation for increasing existing EDGE throughput is that many operators would rather upgrade their existing infrastructure than invest in new network infrastructure. Mobile operators have invested billions in GSM networks, many of which are already capable of supporting EDGE data speeds up to 236.8 kbit/s. With a software upgrade and a new device compliant with Evolved EDGE (such as an Evolved EDGE smartphone), these data rates can be boosted to speeds approaching 1 Mbit/s (i.e. 98.6 kbit/s per timeslot for 32QAM). Many service providers may not invest in a completely new technology like 3G networks.[21] Considerable research and development took place throughout the world for this new technology.
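The per-timeslot 32QAM figure above can be scaled up to see where "approaching 1 Mbit/s" comes from; this is a back-of-the-envelope check, ignoring protocol overheads:

```python
# Evolved EDGE rate sketch from the figures in the text: 98.6 kbit/s per
# timeslot with 32QAM, across all 8 timeslots of a single carrier.
PER_SLOT_32QAM = 98.6

single_carrier = PER_SLOT_32QAM * 8   # 788.8 kbit/s, "approaching 1 Mbit/s"
dual_carrier = single_carrier * 2     # dual carrier roughly doubles this
print(single_carrier, dual_carrier)
```

So a single carrier with all eight timeslots at 32QAM approaches the 1 Mbit/s mark, and the dual-carrier feature is what pushes the peak past it.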
A successful trial by Nokia Siemens and "one of China's leading operators" was achieved in a live environment.[21] However, Evolved EDGE was introduced much later than its predecessor, EDGE, coinciding with the widespread adoption of 3G technologies such as HSPA and just before the emergence of 4G networks. This timing significantly limited its relevance and practical application, as operators prioritized investment in more advanced wireless technologies like UMTS and LTE. Moreover, these newer technologies also targeted network coverage layers on low frequencies, further diminishing the potential advantages of Evolved EDGE. Coupled with the upcoming phase-out and shutdown of 2G mobile networks, it became very unlikely that Evolved EDGE would ever see deployment on live networks. As of 2016, no commercial networks supported the Evolved EDGE standard (3GPP Rel-7). Evolved EDGE brings three major features designed to reduce latency over the air interface. In EDGE, a single RLC data block (ranging from 23 to 148 bytes of data) is transmitted over four frames using a single time slot; on average, this requires 20 ms for one-way transmission. Under the Reduced Transmission Time Interval (RTTI) scheme, one data block is transmitted over two frames in two timeslots, reducing the latency of the air interface to 10 ms. In addition, Reduced Latency also implies support of Piggy-backed ACK/NACK (PAN), in which a bitmap of blocks not received is included in normal data blocks. Using the PAN field, the receiver may report missing data blocks immediately, rather than waiting to send a dedicated PAN message. A final enhancement is RLC non-persistent mode. With EDGE, the RLC interface could operate in either acknowledged mode or unacknowledged mode. In unacknowledged mode there is no retransmission of missing data blocks, so a single corrupt block would cause an entire upper-layer IP packet to be lost. With non-persistent mode, an RLC data block may be retransmitted if it is less than a certain age.
Once this time expires, it is considered lost, and subsequent data blocks may then be forwarded to upper layers. Both uplink and downlink throughput are improved by using 16- or 32-QAM (quadrature amplitude modulation), along with turbo codes and higher symbol rates. A lesser-known version of the EDGE standard is Enhanced Circuit Switched Data (ECSD), which is circuit-switched.[22] A variant, so-called Compact-EDGE, was developed for use in a portion of the Digital AMPS network spectrum.[23] The Global mobile Suppliers Association (GSA) stated that, as of May 2013, there were 604 GSM/EDGE networks in 213 countries, from a total of 606 mobile network operator commitments in 213 countries.[24]
https://en.wikipedia.org/wiki/EDGE_(telecommunication)
https://en.wikipedia.org/wiki/Evolved_EDGE
Digital AMPS (D-AMPS), most often referred to as TDMA, is a second-generation (2G) cellular phone system that was once prevalent throughout the Americas, particularly in the United States and Canada, where the first commercial network was deployed in 1993.[1] Former large D-AMPS networks included those of AT&T and Rogers Wireless. The name TDMA is based on the abbreviation for time-division multiple access, a common multiple access technique used in most 2G standards, including GSM. D-AMPS competed against GSM and systems based on code-division multiple access (CDMA). It is now considered end-of-life, as existing networks have been shut down and replaced by GSM/GPRS or CDMA2000 technologies. The last carrier to operate a D-AMPS network was U.S. Cellular, which terminated it on February 10, 2009.[2] The technical names for D-AMPS are IS-54 and its successor IS-136.[3][4] IS-54 was the first mobile communication system with provision for security, and the first to employ time-division multiple access (TDMA) technology.[5] IS-136 added a number of features to the original IS-54 specification, including text messaging (SMS), circuit-switched data (CSD), and an improved compression protocol. SMS and CSD were both available as part of the GSM protocol, and IS-136 implemented them in a nearly identical fashion. D-AMPS was a further development of the North American 1G mobile system Advanced Mobile Phone System (AMPS); it used existing AMPS channels and allowed for a smooth transition between digital and analog systems in the same area. Capacity was increased over the preceding analog design by dividing each 30 kHz channel pair into three time slots (hence time division) and digitally compressing the voice data, yielding three times the call capacity in a single cell. A digital system also made calls more secure in the beginning, as analog scanners could not access digital signals.
Calls were encrypted using CMEA, which was later found to be weak.[6] The evolution of mobile communication began in three different geographic regions: North America, Europe and Japan. The standards used in these regions were quite independent of each other. The earliest mobile or wireless technologies implemented were wholly analog, and are collectively known as first-generation (1G) technologies. In Japan, the 1G standards were Nippon Telegraph and Telephone (NTT) and its high-capacity version (Hicap). The early systems used throughout Europe were not compatible with each other; the later idea of a common 'European Union' technological standard was absent at this time. The various 1G standards in use in Europe included C-Netz (in Germany and Austria), Comviq (in Sweden), Nordic Mobile Telephones/450 (NMT450) and NMT900 (both in Nordic countries), NMT-F (the French version of NMT900), TMA-450 (the Spanish version of NMT450), Radiocom 2000 (RC2000) (in France), TACS (Total Access Communication System) (in the United Kingdom, Italy and Ireland), and TMA-900 (the Spanish version of TACS). The North American standards were Advanced Mobile Phone System (AMPS) and Narrowband AMPS (N-AMPS). Despite the Nordic countries' cooperation, European engineering efforts were divided among the various standards, and the Japanese standards did not get much attention. Developed by Bell Labs in the 1970s and first used commercially in the United States in 1983, AMPS operates in the 800 MHz band in the United States and is the most widely distributed analog cellular standard. (The 1900 MHz PCS band, established in 1994, is for digital operation only.) The success of AMPS kick-started the mobile age in North America. The market showed increasing demand because AMPS offered higher capacity and mobility than the then-existing mobile communication standards could handle.
For example, the Bell Labs system in the 1970s could carry only 12 calls at a time throughout all of New York City. AMPS used frequency-division multiple access (FDMA), which enabled each cell site to transmit on different frequencies, allowing many cell sites to be built near each other. AMPS also had many disadvantages. Primarily, it could not support the ever-increasing demand for mobile communication usage: each cell site did not have much capacity for carrying higher numbers of calls. AMPS also had a poor security system, which allowed people to steal a phone's serial code to use for making illegal calls. All of this triggered the search for a more capable system. The quest resulted in IS-54, the first American 2G standard. In March 1990, the North American cellular network incorporated the IS-54B standard, the first North American dual-mode digital cellular standard. This standard won over Motorola's Narrowband AMPS (N-AMPS), an analog scheme which increased capacity by cutting down voice channels from 30 kHz to 10 kHz. IS-54, on the other hand, increased capacity by digital means using TDMA protocols. This method separates calls by time, placing parts of individual conversations on the same frequency, one after the next; TDMA tripled call capacity. Using IS-54, a cellular carrier could convert any of its system's analog voice channels to digital. A dual-mode phone uses digital channels where available and defaults to regular AMPS where they are not. IS-54 was backward compatible with analog cellular and indeed co-existed on the same radio channels as AMPS. No analog customers were left behind; they simply could not access IS-54's new features. IS-54 also supported authentication, a help in preventing fraud. IS-54 employs the same 30 kHz channel spacing and frequency bands (824-849 and 869-894 MHz) as AMPS.
Capacity was increased over the preceding analog design by dividing each 30 kHz channel pair into three time slots and digitally compressing the voice data, yielding three times the call capacity in a single cell. A digital system also made calls more secure because analog scanners could not access digital signals. The IS-54 standard specifies 84 control channels, 42 of which are shared with AMPS. To maintain compatibility with the existing AMPS cellular telephone system, the primary forward and reverse control channels in IS-54 cellular systems use the same signaling techniques and modulation scheme (binary FSK) as AMPS. An AMPS/IS-54 infrastructure can support use of either analog AMPS phones or D-AMPS phones. The access method used for IS-54 is time-division multiple access (TDMA), the first U.S. digital standard to be developed; it was adopted by the TIA in 1992. TDMA subdivides each of the 30 kHz AMPS channels into three full-rate TDMA channels, each of which is capable of supporting a single voice call. Later, each of these full-rate channels was further subdivided into two half-rate channels, each of which, with the necessary coding and compression, could also support a voice call. Thus, TDMA could provide three to six times the capacity of AMPS traffic channels. TDMA was initially defined by the IS-54 standard and is now specified in the IS-13x series of specifications of the EIA/TIA. The channel transmission bit rate for digitally modulating the carrier is 48.6 kbit/s. Each frame has six time slots of 6.67 ms duration. Each time slot carries 324 bits of information, of which 260 bits are for the 13 kbit/s full-rate traffic data. The other 64 bits are overhead; 28 of these are for synchronization, and they contain a specific bit sequence known by all receivers to establish frame alignment. Also, as with GSM, the known sequence acts as a training pattern to initialize an adaptive equalizer.
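The bit counts above fit together arithmetically; a small sanity check, assuming a voice circuit uses two of the six slots in each ~40 ms frame (as described below for slots 1 and 4):

```python
# IS-54 frame arithmetic, using the figures quoted in the text:
# 324 bits per 6.67 ms slot, 260 traffic bits, 6 slots per ~40 ms frame.
slot_bits = 324
slot_ms = 6.67
traffic_bits = 260

gross_rate = slot_bits / slot_ms          # ≈ 48.6 kbit/s channel bit rate
# A full-rate voice circuit occupies 2 of the 6 slots per frame:
traffic_rate = traffic_bits * 2 / 40.0    # 13 kbit/s full-rate traffic
overhead_bits = slot_bits - traffic_bits  # 64 overhead bits per slot
print(round(gross_rate, 1), traffic_rate, overhead_bits)
```

So the 48.6 kbit/s channel rate, the 13 kbit/s full-rate traffic stream, and the 64-bit per-slot overhead are all consistent with the 324-bit, 6.67 ms slot.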
The IS-54 system has different synchronization sequences for each of the six time slots making up the frame, thereby allowing each receiver to synchronize to its own preassigned time slots. An additional 12 bits in every time slot are for the SACCH (i.e. system control information). The digital verification color code (DVCC) is the equivalent of the supervisory audio tone used in the AMPS system. There are 256 different 8-bit color codes, which are protected by a (12, 8, 3) Hamming code. Each base station has its own preassigned color code, so any incoming interfering signals from distant cells can be ignored. The modulation scheme for IS-54 is π/4 differential quaternary phase-shift keying (DQPSK), otherwise known as differential π/4 4-PSK or π/4-DQPSK. This technique allows a bit rate of 48.6 kbit/s with 30 kHz channel spacing, giving a bandwidth efficiency of 1.62 bit/s/Hz, a value 20% better than GSM's. The major disadvantage of this type of linear modulation method is power inefficiency, which translates into a heavier hand-held portable and, even more inconvenient, a shorter time between battery recharges. A conversation's data bits make up the DATA field. Six slots make up a complete IS-54 frame. DATA in slots 1 and 4, 2 and 5, and 3 and 6 make up a voice circuit. DVCC stands for digital verification color code, arcane terminology for a unique 8-bit code value assigned to each cell. G means guard time, the period between each time slot. RSVD stands for reserved. SYNC represents synchronization, a critical TDMA data field. Each slot in every frame must be synchronized against all others and a master clock for everything to work. Time slots for the mobile-to-base direction are constructed differently from the base-to-mobile direction; they essentially carry the same information but are arranged differently.
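The efficiency figures quoted above can be reproduced from first principles. The 48.6 kbit/s and 30 kHz figures come from the text; the GSM comparison values (270.833 kbit/s gross rate in a 200 kHz channel) are the commonly quoted ones, included here as an assumption to check the "20% better" claim:

```python
# π/4-DQPSK spectral-efficiency sketch for IS-54/IS-136.
BITS_PER_SYMBOL = 2               # DQPSK carries 2 bits per phase transition
symbol_rate_kbaud = 24.3          # IS-54/IS-136 channel symbol rate

bit_rate = symbol_rate_kbaud * BITS_PER_SYMBOL   # 48.6 kbit/s
is54_eff = bit_rate / 30.0        # 1.62 bit/s/Hz in a 30 kHz channel
gsm_eff = 270.833 / 200.0         # ≈ 1.35 bit/s/Hz (commonly quoted values)
print(round(is54_eff, 2), round(is54_eff / gsm_eff - 1, 2))
```

The ratio works out to roughly 1.2, matching the "20% better than GSM" statement in the text.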
Notice that the mobile-to-base direction has a 6-bit ramp time to enable its transmitter to get up to full power, and a 6-bit guard band during which nothing is transmitted. The 12 extra bits in the base-to-mobile direction are reserved for future use. Once a call comes in, the mobile switches to a different pair of frequencies: a voice radio channel which the system carrier has made analog or digital. This pair carries the call. If an IS-54 signal is detected, the call is assigned a digital traffic channel if one is available. The fast associated control channel (FACCH) performs handoffs during the call, with no need for the mobile to go back to the control channel. In case of high noise, the FACCH embedded within the digital traffic channel overrides the voice payload, degrading speech quality to convey control information; the purpose is to maintain connectivity. The slow associated control channel (SACCH) does not perform handoffs but conveys information such as signal strength to the base station. The IS-54 speech coder uses a technique called vector-sum excited linear prediction (VSELP) coding, a special type of speech coder within a large class known as code-excited linear prediction (CELP) coders. The speech coding rate of 7.95 kbit/s achieves a reconstructed speech quality similar to that of the analog AMPS system using frequency modulation. The 7.95 kbit/s signal is then passed through a channel coder that brings the bit rate up to 13 kbit/s. The newer half-rate coding standard reduces the overall bit rate for each call to 6.5 kbit/s and should provide quality comparable to the 13 kbit/s rate; this half-rate gives a channel capacity six times that of analog AMPS. As a system example, consider a dual-mode cellular phone as specified by the IS-54 standard. A dual-mode phone is capable of operating in an analog-only cell or a dual-mode cell.
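The speech and channel-coding rates above can be turned into a per-frame bit budget. The 20 ms speech frame length is an assumption (it is the standard VSELP frame duration, though the text does not state it):

```python
# VSELP speech/channel-coding budget sketch, per the figures above.
SPEECH_KBPS = 7.95   # VSELP source rate
CODED_KBPS = 13.0    # rate after channel coding (full-rate traffic channel)
FRAME_MS = 20.0      # assumed speech frame duration

speech_bits = round(SPEECH_KBPS * FRAME_MS)   # 159 bits of speech per frame
coded_bits = round(CODED_KBPS * FRAME_MS)     # 260 bits after channel coding
protection = coded_bits - speech_bits         # 101 bits of error protection
print(speech_bits, coded_bits, protection)
```

Note that the 260 coded bits per 20 ms frame line up with the 260 traffic bits carried per time slot described earlier.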
Both the transmitter and the receiver support both analog FM and digital TDMA schemes. Digital transmission is preferred, so when a cellular system has digital capability, the mobile unit is assigned a digital channel first; if no digital channels are available, the cellular system will assign an analog channel. The transmitter converts the audio signal to a radio frequency (RF) signal, and the receiver converts an RF signal to an audio signal. The antenna focuses and converts RF energy for reception and transmission into free space. The control panel serves as an input/output mechanism for the end user; it supports a keypad, a display, a microphone, and a speaker. The coordinator synchronizes the transmit and receive functions of the mobile unit. By 1993, American cellular was again running out of capacity, despite a wide movement to IS-54. The American cellular business continued booming: subscribers grew from one and a half million customers in 1988 to more than thirteen million in 1993. Room existed for other technologies to cater to the growing market. The technologies that followed IS-54 stuck to the digital backbone laid down by it. A pragmatic effort was launched to improve IS-54 that eventually added an extra channel to the IS-54 hybrid design. Unlike IS-54, IS-136 utilizes time-division multiplexing for both voice and control channel transmissions. The digital control channel allows residential and in-building coverage, dramatically increased battery standby time, several messaging applications, over-the-air activation and expanded data applications. IS-136 systems needed to support millions of AMPS phones, most of which were designed and manufactured before IS-54 and IS-136 were considered. IS-136 added a number of features to the original IS-54 specification, including text messaging, circuit-switched data (CSD), and an improved compression protocol.
IS-136 TDMA traffic channels use π/4-DQPSK modulation at a 24.3-kilobaud channel rate, giving an effective 48.6 kbit/s data rate across the six time slots comprising one frame in the 30 kHz channel. AT&T Mobility, the largest US carrier to support D-AMPS (which it referred to as "TDMA"), turned down its existing network in order to release the spectrum to its GSM and UMTS platforms in 19 wireless markets, starting on May 30, 2007, with other areas following in June and July. The TDMA network in these markets operated on the 1900 MHz frequency and did not coexist with an AMPS network. Service in the remaining 850 MHz TDMA markets was discontinued along with AMPS service on February 18, 2008, except in areas where service was provided by Dobson Communications. The Dobson TDMA and AMPS network was shut down March 1, 2008. Rogers Wireless in Canada removed all 1900 MHz IS-136 in 2003, and did the same with its 800 MHz spectrum as the equipment failed. On May 31, 2007, Rogers Wireless decommissioned its D-AMPS (along with AMPS) networks and moved the remaining customers onto its GSM network. Alltel, which primarily used CDMA2000 technology but acquired a TDMA network from Western Wireless, shut down its TDMA and AMPS networks in September 2008. US Cellular, which by then also primarily used CDMA2000 technology, shut down its TDMA network on February 10, 2009, the last in the United States.
https://en.wikipedia.org/wiki/Digital_AMPS
Cellular Digital Packet Data (CDPD) is an obsolete wide-area mobile data service which used unused bandwidth normally occupied by Advanced Mobile Phone System (AMPS) mobile phones between 800 and 900 MHz to transfer data. Speeds up to 19.2 kbit/s were possible, though real-world speeds seldom reached higher than 9.6 kbit/s. The service was discontinued in conjunction with the retirement of the parent AMPS service; it has been functionally replaced by faster services such as 1xRTT, Evolution-Data Optimized, and UMTS/High Speed Packet Access (HSPA). Developed in the early 1990s, CDPD loomed large on the horizon as a future technology. However, it had difficulty competing against existing slower but less expensive Mobitex and DataTAC systems, and never quite gained widespread acceptance before newer, faster standards such as General Packet Radio Service (GPRS) became dominant. CDPD had very limited consumer products. AT&T Wireless first sold the technology in the United States under the PocketNet brand, one of the first products of wireless web service. Digital Ocean, Inc., an original equipment manufacturer licensee of the Apple Newton, sold the Seahorse product, which in 1996 integrated the Newton handheld computer, an AMPS/CDPD handset/modem and a web browser, winning the CTIA's hardware product of the year award as a smartphone, arguably the world's first. A company named OmniSky provided service for Palm V devices; OmniSky filed for bankruptcy in 2001 and was then picked up by EarthLink Wireless. Myron Feasel, the technician who developed the tech support for all of the wireless technology, was brought from company to company, ending up at Palm. Sierra Wireless sold PCMCIA devices and Airlink sold a serial modem; both were used by police and fire departments for dispatch. AT&T Wireless later sold CDPD under the Wireless Internet brand (not to be confused with Wireless Internet Express, their brand for GPRS/EDGE data).
PocketNet was generally considered a failure, facing competition from 2G services such as Sprint's Wireless Web. AT&T Wireless sold four PocketNet Phone models to the public: the Samsung Duette and the Mitsubishi MobileAccess-120 were AMPS/CDPD PocketNet phones introduced in October 1997; and two IS-136/CDPD Digital PocketNet phones, the Mitsubishi T-250 and the Ericsson R289LX.

Despite its limited success as a consumer offering, CDPD was adopted in a number of enterprise and government networks. It was particularly popular as a first-generation wireless data solution for telemetry devices (machine-to-machine communications) and for public safety mobile data terminals. In 2004, major carriers in the United States announced plans to shut down CDPD service. In July 2005, the AT&T Wireless and Cingular Wireless CDPD networks were shut down.

Primary elements of a CDPD network are:

1. End systems: physical and logical end systems that exchange information.
2. Intermediate systems: CDPD infrastructure elements that store, forward, and route the information.

There are two kinds of end system:

1. Mobile end system: a subscriber unit that accesses the CDPD network over a wireless interface.
2. Fixed end system: a common host/server connected to the CDPD backbone, providing access to specific applications and data.

There are two kinds of intermediate system:

1. Generic intermediate system: a simple router with no knowledge of mobility issues.
2. Mobile data intermediate system: a specialized intermediate system that routes data based on its knowledge of the current location of mobile end systems. It is a set of hardware and software functions that provide switching, accounting, registration, authentication, encryption, and so on.

The design of CDPD was based on several objectives that are often repeated in designing overlay networks or new networks. Much emphasis was laid on open architectures and on reusing as much of the existing RF infrastructure as possible.
The design goals of CDPD included location independence and independence from service provider, so that coverage could be maximized; application transparency and multiprotocol support; and interoperability between products from multiple vendors.
https://en.wikipedia.org/wiki/Cellular_Digital_Packet_Data
The Personal Handy-phone System (PHS), also known as the Personal Communication Telephone (PCT) in Thailand, and the Personal Access System (PAS), commercially branded as Xiaolingtong (Chinese: 小灵通) in China, was a mobile network system operating in the 1880–1930 MHz frequency band. In Japan, it was introduced as a low-cost wireless service with smaller coverage areas than standard cellular networks. Its affordability made it popular in China, Taiwan, and other parts of Asia, as both the handsets and network infrastructure were relatively inexpensive to maintain.[1]

Developed in the 1990s, PHS used a microcell architecture with low-power base stations covering 100 to 500 metres (330 to 1,640 ft). Unlike conventional cellular networks that relied on large cell sites for extensive coverage, PHS's design was better suited for dense urban environments and reduced infrastructure costs. PHS was overtaken in the marketplace by GSM (2G) and UMTS (3G), with the last retail network decommissioned in 2021 and the last commercial network terminated in 2023.[2]

PHS is essentially a cordless telephone like DECT, with the capability to hand over from one cell to another. PHS cells are small, with base station transmission power of at most 500 mW and range typically measured in tens or at most hundreds of metres (some can reach up to about 2 kilometres in line-of-sight), contrary to the multi-kilometre ranges of CDMA and GSM. This makes PHS suitable for dense urban areas, but impractical for rural areas, and the small cell size also makes it difficult if not impossible to make calls from rapidly moving vehicles.

PHS uses TDMA/TDD as its radio channel access method, and 32 kbit/s ADPCM as its voice codec. Modern PHS phones can also support many value-added services such as high-speed wireless data/Internet connection (64 kbit/s and higher), WWW access, e-mailing, and text messaging.
PHS technology is also a popular option for providing a wireless local loop, where it is used to bridge the "last mile" gap between the POTS network and the subscriber's home. It was developed under the concept of providing a wireless front-end to an ISDN network. Thus a PHS base station is compatible with ISDN and is often connected directly to ISDN telephone exchange equipment, e.g. a digital switch.

Thanks to its low-cost base stations, micro-cellular system, and "Dynamic Cell Assignment" system, PHS offers higher frequency-use efficiency at lower cost (on a throughput-per-area basis) compared with typical 3G cellular telephone systems. It enables flat-rate wireless service such as AIR-EDGE throughout Japan.

The speed of an AIR-EDGE data connection is achieved by combining lines, each of which is nominally 32 kbit/s. The first version of AIR-EDGE, introduced in 2001, provided 32 kbit/s service. In 2002, 128 kbit/s service (AIR-EDGE 4×) started, and in 2005, 256 kbit/s (8×) service started. In 2006, the speed of each line was upgraded by a factor of 1.6 with the introduction of "W-OAM" technology; the speed of AIR-EDGE 8× is up to 402 kbit/s with the latest W-OAM-capable equipment. In April 2007, "W-OAM typeG" was introduced, allowing data speeds of 512 kbit/s for AIR-EDGE 8× users. Furthermore, the "W-OAM typeG" 8× service was planned to be upgraded to a maximum throughput of 800 kbit/s once the upgrading of access points (mainly switching lines from ISDN to fibre optic) was completed. It was thus expected to exceed the speeds of popular W-CDMA 3G services such as NTT DoCoMo's FOMA in Japan.

Developed by NTT Laboratory in Japan in 1989 and far simpler to implement and deploy than competing systems like PDC or GSM, commercial services were started by three PHS operators (NTT-Personal, DDI-Pocket, and ASTEL) in Japan in 1995, forming the PIAF (PHS Internet Access Forum).
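The AIR-EDGE speed tiers above follow a simple multiplicative model: bonded 32 kbit/s lines, with W-OAM raising the per-line rate by a factor of 1.6. A minimal sketch of that nominal arithmetic (the real-world W-OAM 8× maximum of 402 kbit/s is slightly below the nominal figure because of protocol overhead):

```python
# Nominal AIR-EDGE rates, assuming the model described above:
# each bonded PHS line carries 32 kbit/s, and the W-OAM upgrade
# multiplies the per-line rate by 1.6.

def air_edge_rate_kbps(lines: int, w_oam: bool = False) -> float:
    """Nominal aggregate rate when bonding `lines` PHS lines."""
    per_line = 32.0 * (1.6 if w_oam else 1.0)
    return lines * per_line

print(air_edge_rate_kbps(1))        # 32.0  -> original 2001 service
print(air_edge_rate_kbps(4))        # 128.0 -> AIR-EDGE 4x (2002)
print(air_edge_rate_kbps(8))        # 256.0 -> AIR-EDGE 8x (2005)
print(air_edge_rate_kbps(8, True))  # 409.6 nominal (~402 kbit/s in practice)
```

The function name and model are illustrative, not part of the AIR-EDGE specification.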
However, the service was pejoratively dubbed the "poor man's cellular" due to its limited range and roaming abilities. NTT DoCoMo, which absorbed NTT Personal, and ASTEL terminated their PHS services in January 2008.

In Thailand, TelecomAsia (now True Corporation) integrated the PHS system with an Intelligent Network and marketed the service as Personal Communication Telephone (PCT).[3] The integrated system was the world's first that allowed fixed-line telephone subscribers of the public switched telephone network to use PHS as a value-added service with the same telephone number and a shared voice mailbox.[4][5] The PCT service was commercially launched in November 1999 and peaked at 670,000 subscribers in 2001. However, the number of subscribers had declined to 470,000 by 2005, before the service broke even in 2006 after six years of heavy investment of up to 15 billion THB. With the popularity of other cellular phone services, the company shifted the focus of PCT to a niche market segment of youths aged 10–18.[6]

Wireless local loop (WLL) systems based on PHS technology are in use in some of the above-mentioned countries. WILLCOM, formerly DDI-Pocket, introduced flat-rate wireless networking and flat-rate calling in Japan, which reversed the local fate of PHS to an extent. In China, there was an explosive expansion of subscribers until around 2005. In Chile, Telefónica del Sur launched a PHS-based telephony service in some cities of the southern part of the country in March 2006. In Brazil, Suporte Tecnologia has a PHS-based telephony service in Betim, state of Minas Gerais[needs update], and Transit Telecom announced a rollout of a PHS network in 2007[needs update].

China Telecom operated a PAS system in China, although technically it was not allowed to provide mobile services, because of some particularities of Chinese governance. China Netcom, the other fixed-line operator in China, also provided the Xiaolingtong service.
The system was a runaway hit, with over 90 million subscribers signed up as of 2007; the largest equipment vendors were UTStarcom and ZTE. However, low-priced mobile phones rapidly replaced PHS. The Ministry of Industry and Information Technology of the People's Republic of China issued notices on 13 February 2009 stating that both registration of new users and expansion of the network were to be discontinued, with the service to be ended by the end of 2011.[7]

A PHS global roaming service was available between Japan (WILLCOM), Taiwan, and Thailand. All commercial PHS deployments around the world are now defunct.[8]
https://en.wikipedia.org/wiki/Personal_Handy-phone_System
Personal Digital Cellular (PDC) was a 2G mobile telecommunications standard used exclusively in Japan.[citation needed]

After a peak of nearly 80 million subscribers, PDC had 46 million subscribers in December 2005, and was slowly phased out in favor of 3G technologies like W-CDMA and CDMA2000. At the end of March 2012, the count had dwindled to about 200,000 subscribers. NTT Docomo shut down its network, mova, at midnight on April 1, 2012.[1]

Like D-AMPS and GSM, PDC uses TDMA. The standard was defined by the RCR (which later became ARIB) in April 1991, and NTT DoCoMo launched its Digital mova service in March 1993. PDC uses a 25 kHz carrier with pi/4-DQPSK modulation and 3-timeslot 11.2 kbit/s (full-rate) or 6-timeslot 5.6 kbit/s (half-rate) voice codecs. PDC is implemented in the 800 MHz (downlink 810–888 MHz, uplink 893–958 MHz) and 1.5 GHz (downlink 1477–1501 MHz, uplink 1429–1453 MHz) bands. The air interface is defined in RCR STD-27 and the core network MAP by JJ-70.10. NEC, Motorola, and Ericsson were the major network equipment manufacturers.[citation needed]

The services included voice (full and half-rate), supplementary services (call waiting, voice mail, three-way calling, call forwarding, and so on), data service (up to 9.6 kbit/s CSD), and packet-switched wireless data (up to 28.8 kbit/s PDC-P). Voice codecs are PDC-EFR and PDC-HR. Compared to GSM, PDC's weak broadcast strength allows small, portable phones with light batteries, at the expense of substandard voice quality and problems maintaining the connection, particularly in enclosed spaces like elevators.

PDC Enhanced Full Rate is a speech coding standard that was developed by ARIB in Japan and used in PDC mobile networks there. The carriers use one of these codecs as PDC-EFR: CS-ACELP 8 kbit/s (a.k.a. NTT DoCoMo Hypertalk) and ACELP 6.7 kbit/s (a.k.a. J-PHONE Crystal Voice).[2][3] The PDC-EFR CS-ACELP uses G.729. The PDC-EFR ACELP is compatible with the AMR mode AMR_6.70.
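The carrier and timeslot figures above imply a simple capacity trade-off: a 25 kHz carrier carries the same total voice payload whether it is split into full-rate or half-rate slots, so half-rate doubles the call count at lower per-call quality. A minimal sketch of that arithmetic:

```python
# PDC capacity arithmetic from the figures above: a 25 kHz carrier is
# time-divided into 3 full-rate slots (11.2 kbit/s voice each) or
# 6 half-rate slots (5.6 kbit/s each). The aggregate voice payload per
# carrier is identical; half-rate trades quality for doubled capacity.

FULL_RATE = (3, 11.2)   # (voice channels per carrier, kbit/s per channel)
HALF_RATE = (6, 5.6)

for name, (slots, rate) in [("full-rate", FULL_RATE), ("half-rate", HALF_RATE)]:
    total = slots * rate
    print(f"{name}: {slots} calls x {rate} kbit/s = {total:.1f} kbit/s per carrier")
# both lines report 33.6 kbit/s per 25 kHz carrier
```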
PDC Half Rate is a speech coding standard that was developed by ARIB in Japan and used in PDC mobile networks in Japan. It operates at a bit rate of 3.45 kbit/s and is based on Pitch Synchronous Innovation CELP (PSI-CELP).[4]
https://en.wikipedia.org/wiki/Personal_Digital_Cellular
3G refers to the third generation of cellular network technology. These networks were rolled out beginning in the early 2000s and represented a significant advancement over the second generation (2G), particularly in terms of data transfer speeds and mobile internet capabilities. The major 3G standards are UMTS (developed by 3GPP, succeeding GSM) and CDMA2000 (developed by Qualcomm, succeeding cdmaOne);[1][2] both of these are based on the IMT-2000 specifications established by the International Telecommunication Union (ITU).

While 2G networks such as GPRS and EDGE supported limited data services, 3G introduced significantly higher-speed mobile internet and enhanced multimedia capabilities, in addition to improved voice quality.[3] It provided moderate internet speeds suitable for general web browsing and multimedia content including video calling and mobile TV,[3] supporting services that provide an information transfer rate of at least 144 kbit/s.[4][5]

Later 3G releases, often referred to as 3.5G (HSPA) and 3.75G (HSPA+), as well as EV-DO, introduced important improvements, enabling 3G networks to offer mobile broadband access with speeds ranging from several Mbit/s up to 42 Mbit/s.[6] These updates improved the reliability and speed of internet browsing, video streaming, and online gaming, enhancing the overall user experience for smartphones and mobile modems in comparison to earlier 3G technologies. 3G was later succeeded by 4G technology, which provided even higher data transfer rates and introduced advancements in network performance.

A new generation of cellular standards has emerged roughly every decade since the introduction of 1G systems in 1979. Each generation is defined by the introduction of new frequency bands, higher data rates, and transmission technologies that are not backward-compatible, due to the need for significant changes in network architecture and infrastructure.
Several telecommunications companies marketed wireless mobile Internet services as 3G, indicating that the advertised service was provided over a 3G wireless network. However, 3G services have largely been supplanted in marketing by 4G and 5G services in most areas of the world. Services advertised as 3G are required to meet IMT-2000 technical standards, including standards for reliability and speed (data transfer rates). To meet the IMT-2000 standards, third-generation mobile networks must maintain a minimum consistent Internet speed of 144 kbit/s.[5] However, many services advertised as 3G provide higher speed than the minimum technical requirements for a 3G service.[7] Subsequent 3G releases, denoted 3.5G and 3.75G, provided mobile broadband access of several Mbit/s for smartphones and mobile modems in laptop computers.[8]

The 3G systems and radio interfaces are based on spread-spectrum radio transmission technology. While the GSM EDGE standard ("2.9G"), DECT cordless phones, and Mobile WiMAX formally also fulfill the IMT-2000 requirements and are approved as 3G standards by the ITU, these are typically not branded as 3G and are based on completely different technologies. DECT cordless phones and Mobile WiMAX in particular are not usually considered 3G, due to their rarity and unsuitability for usage with mobile phones.[9]

The 3G (UMTS and CDMA2000) research and development projects started in 1992. In 1999, the ITU approved five radio interfaces for IMT-2000 as part of the ITU-R M.1457 Recommendation; WiMAX was added in 2007.[10]

There are evolutionary standards (EDGE and CDMA) that are backward-compatible extensions to pre-existing 2G networks, as well as revolutionary standards that require all-new network hardware and frequency allocations.
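The IMT-2000 rate floor described above can be sketched as a simple check: a service marketed as 3G must sustain at least 144 kbit/s. The function name and example speeds below are illustrative, not ITU-defined terminology:

```python
# A sketch of the 144 kbit/s IMT-2000 minimum described above.
# Many services advertised as 3G exceed this floor by a wide margin.

IMT2000_MIN_KBPS = 144

def meets_3g_floor(kbps: float) -> bool:
    """True if an advertised sustained rate meets the IMT-2000 minimum."""
    return kbps >= IMT2000_MIN_KBPS

print(meets_3g_floor(384))   # True  -> UMTS R99 downlink
print(meets_3g_floor(114))   # False -> a typical 2G GPRS peak, below the floor
```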
These cell phones use UMTS in combination with 2G GSM standards and bandwidths, but do not support EDGE. The latter group is the UMTS family, which consists of standards developed for IMT-2000, as well as the independently developed standards DECT and WiMAX, which were included because they fit the IMT-2000 definition. While EDGE fulfills the 3G specifications, most GSM/UMTS phones report EDGE ("2.75G") and UMTS ("3G") functionality.[11]

3G technology was the result of research and development work carried out by the International Telecommunication Union (ITU) in the early 1980s. 3G specifications and standards were developed over fifteen years. The technical specifications were made available to the public under the name IMT-2000. The communication spectrum between 400 MHz and 3 GHz was allocated for 3G. Both governments and communication companies approved the 3G standard.

The first pre-commercial 3G network was launched by NTT DoCoMo in Japan in 1998,[12] branded as FOMA. It was first available in May 2001 as a pre-release (test) of W-CDMA technology. The first commercial launch of 3G was also by NTT DoCoMo in Japan, on 1 October 2001, although it was initially somewhat limited in scope;[13][14] broader availability of the system was delayed by apparent concerns over its reliability.[15][16][17][18][19]

The first European pre-commercial network was a UMTS network on the Isle of Man by Manx Telecom, the operator then owned by British Telecom, and the first commercial network (also UMTS-based W-CDMA) in Europe was opened for business by Telenor in December 2001, with no commercial handsets and thus no paying customers. The first network to go commercially live was by SK Telecom in South Korea on the CDMA-based 1xEV-DO technology in January 2002. By May 2002, the second South Korean 3G network had been launched by KT on EV-DO, and thus the South Koreans were the first to see competition among 3G operators.
The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1x EV-DO technology, but the network provider later shut down operations. The second 3G network operator in the US was Verizon Wireless, in July 2002, also on CDMA2000 1x EV-DO. AT&T Mobility also operated a true 3G UMTS network, having completed its upgrade of the 3G network to HSUPA.

The first commercial United Kingdom 3G network was started by Hutchison Telecom, which was originally behind Orange S.A.[20] In 2003, it announced the first commercial third-generation, or 3G, mobile phone network in the UK.

The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia, by m.Net Corporation in February 2002, using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network there was launched by Hutchison Telecommunications, branded as Three or "3", in June 2003.[21]

In India, on 11 December 2008, the first 3G mobile and internet services were launched by a state-owned company, Mahanagar Telecom Nigam Limited (MTNL), in the metropolitan cities of Delhi and Mumbai. After MTNL, another state-owned company, Bharat Sanchar Nigam Limited (BSNL), began deploying 3G networks country-wide. Emtel launched the first 3G network in Africa.[22]

Japan was one of the first countries to adopt 3G, the reason being the process of 3G spectrum allocation, which in Japan was awarded without much upfront cost. The frequency spectrum was allocated in the US and Europe by auction, thereby requiring a huge initial investment from any company wishing to provide 3G services. European companies collectively paid over 100 billion dollars in their spectrum auctions.[23]

Nepal Telecom adopted 3G service for the first time in southern Asia. However, 3G was relatively slow to be adopted in Nepal.
In some instances, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies, especially to achieve high data transmission rates. Other countries' delays were due to the expenses of upgrading transmission hardware, especially for UMTS, whose deployment required the replacement of most broadcast towers. Due to these issues and difficulties with deployment, many carriers delayed or could not acquire these updated capabilities.

In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada, and the US, telecommunication companies use W-CDMA technology with the support of around 100 terminal designs to operate 3G mobile networks.

The roll-out of 3G networks was delayed by the enormous costs of additional spectrum licensing fees in some countries. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses, sealed-bid auctions, and initial excitement over 3G's potential. This led to a telecoms crash that ran concurrently with similar crashes in the fibre-optic and dot-com fields.

The 3G standard is perhaps well known because of the massive expansion of the mobile communications market post-2G and advances in the consumer mobile phone. An especially notable development during this time was the smartphone (for example, the iPhone and the Android family), combining the abilities of a PDA with a mobile phone and leading to widespread demand for mobile internet connectivity. 3G also introduced the term "mobile broadband", because its speed and capability made it a viable alternative for internet browsing; USB modems connecting to 3G, and later 4G, networks became increasingly common.
By June 2007, the 200 millionth 3G subscriber had been connected, of which 10 million were in Nepal and 8.2 million in India. This 200 million is only 6.7% of the 3 billion mobile phone subscriptions worldwide. (When counting CDMA2000 1x RTT customers, whose maximum bitrate is 72% of the 200 kbit/s that defines 3G, the total size of the nearly-3G subscriber base was 475 million as of June 2007, which was 15.8% of all subscribers worldwide.) In the countries where 3G was launched first, Japan and South Korea, 3G penetration is over 70%.[24] In Europe the leading country[when?] for 3G penetration is Italy, with a third of its subscribers migrated to 3G. Other leading countries[when?] for 3G use include Nepal, the UK, Austria, Australia, and Singapore, at the 32% migration level.

According to ITU estimates,[25] as of Q4 2012 there were 2,096 million active mobile-broadband[vague] subscribers worldwide out of a total of 6,835 million subscribers, just over 30%. About half the mobile-broadband subscriptions are for subscribers in developed nations, 934 million out of 1,600 million total, well over 50%. Note, however, that there is a distinction between a phone with mobile-broadband connectivity and a smartphone with a large display and so on: according[26] to the ITU and informatandm.com, the US has 321 million mobile subscriptions, including 256 million that are 3G or 4G, which is both 80% of the subscriber base and 80% of the US population; yet according[25] to ComScore, just a year earlier in Q4 2011 only about 42% of people surveyed in the US reported that they owned a smartphone. In Japan, 3G penetration was similar at about 81%, but smartphone ownership was lower, at about 17%.[25] In China, there were 486.5 million 3G subscribers in June 2014,[27] in a population of 1,385,566,537 (2013 UN estimate).

Since the increasing adoption of 4G networks across the globe, 3G use has been in decline.
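The subscriber-share arithmetic above can be checked directly; this is just a verification of the quoted percentages, using the round figures from the text:

```python
# Verifying the June 2007 subscriber percentages quoted above:
# 200 million 3G subscriptions, and 475 million "nearly-3G"
# subscriptions (including CDMA2000 1x RTT), out of roughly
# 3 billion mobile subscriptions worldwide.

three_g = 200e6
nearly_3g = 475e6
total = 3e9

print(f"3G share:        {three_g / total:.1%}")   # 6.7%
print(f"nearly-3G share: {nearly_3g / total:.1%}")  # 15.8%
```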
Several operators around the world have already shut down or are in the process of shutting down their 3G networks (see table below). In several places, 3G is being shut down while its older predecessor 2G is being kept in operation; Vodafone UK is doing this, citing 2G's usefulness as a low-power fallback.[28] EE in the UK planned to switch off its 3G network in early 2024.[29] In the US, Verizon shut down its 3G services on 31 December 2022,[30] T-Mobile shut down Sprint's networks on 31 March 2022 and shut down its main 3G network on 1 July 2022,[31] and AT&T did so on 22 February 2022.[32]

Currently, 3G availability and support are declining around the world, and technology that depends on 3G is becoming inoperable in many places. For example, the European Union plans to ensure that member countries maintain 2G networks as a fallback[citation needed], so 3G devices that are backwards compatible with 2G frequencies can continue to be used. However, in countries that plan to decommission 2G networks, or have already done so, such as the United States and Singapore, devices supporting only 3G and backwards compatible with 2G are becoming inoperable.[33] As of February 2022, less than 1% of cell phone customers in the United States used 3G; AT&T offered free replacement devices to some customers in the run-up to its shutdown.[34]

It has been estimated that there are almost 8,000 patents declared essential (FRAND) related to the 483 technical specifications which form the 3GPP and 3GPP2 standards.[35][36] Twelve companies accounted in 2004 for 90% of the patents (Qualcomm, Ericsson, Nokia, Motorola, Philips, NTT DoCoMo, Siemens, Mitsubishi, Fujitsu, Hitachi, InterDigital, and Matsushita). Even then, some patents essential to 3G might not have been declared by their patent holders.
It is believed that Nortel and Lucent have undisclosed patents essential to these standards.[36] Furthermore, the existing 3G Patent Platform Partnership patent pool has little impact on FRAND protection, because it excludes the four largest patent owners for 3G.[37][38]

The ITU has not provided a clear[39][vague] definition of the data rate that users can expect from 3G equipment or providers. Thus users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 348 kbit/s in a moving vehicle,"[40] the ITU does not actually clearly specify minimum required rates, nor required average rates, nor what modes[clarification needed] of the interfaces qualify as 3G, so various[vague] data rates are sold as '3G' in the market.

In practice, 3G downlink data speeds defined by telecom service providers vary depending on the underlying technology deployed: up to 384 kbit/s for UMTS (W-CDMA), up to 7.2 Mbit/s for HSPA, and a theoretical maximum of 21.1 Mbit/s for HSPA+ and 42.2 Mbit/s for DC-HSPA+ (technically 3.5G, but usually grouped under the trade name 3G).[citation needed]

3G networks offer greater security than their 2G predecessors. By allowing the UE (user equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator.[41] 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher; however, a number of serious weaknesses in the KASUMI cipher have been identified. In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.
The bandwidth and location capabilities introduced by 3G networks enabled a wide range of applications that were previously impractical or unavailable on 2G networks. Among the most significant advancements was the ability to perform data-intensive tasks, such as browsing the internet seamlessly while on the move, as well as engaging in other activities that benefited from faster data speeds and enhanced reliability. Beyond personal communication, 3G networks supported applications in various fields, including medical devices, fire alarms, and ankle monitors. This versatility marked a significant milestone in cellular communications, as 3G became the first network to enable such a broad range of use cases.[42] By expanding its functionality beyond traditional mobile phone usage, 3G set the stage for the integration of cellular networks into a wide array of technologies and services, paving the way for further advancements with subsequent generations of mobile networks.

Both 3GPP and 3GPP2 have worked on extensions to 3G standards that are based on an all-IP network infrastructure and use advanced wireless technologies such as MIMO. These specifications already display features characteristic of IMT-Advanced (4G), the successor of 3G. However, falling short of the bandwidth requirements for 4G (which are 1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G. 3GPP planned to meet the 4G goals with LTE Advanced, whereas Qualcomm halted UMB development in favour of the LTE family.[43]

On 14 December 2009, TeliaSonera announced in an official press release that "We are very proud to be the first operator in the world to offer our customers 4G services."[44] With the launch of their LTE network, they initially offered pre-4G (or beyond-3G) services in Stockholm, Sweden and Oslo, Norway.
https://en.wikipedia.org/wiki/3G
The Universal Mobile Telecommunications System (UMTS) is a 3G mobile cellular system for networks based on the GSM standard.[1] UMTS uses wideband code-division multiple access (W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth to mobile network operators compared to previous 2G systems like GPRS and CSD.[2] UMTS on its own provides a peak theoretical data rate of 2 Mbit/s.[3]

Developed and maintained by the 3GPP (3rd Generation Partnership Project), UMTS is a component of the International Telecommunication Union IMT-2000 standard set and compares with the CDMA2000 standard set for networks based on the competing cdmaOne technology. The technology described in UMTS is sometimes also referred to as Freedom of Mobile Multimedia Access (FOMA)[4] or 3GSM.

UMTS specifies a complete network system, which includes the radio access network (UMTS Terrestrial Radio Access Network, or UTRAN), the core network (Mobile Application Part, or MAP), and the authentication of users via SIM (subscriber identity module) cards. Unlike EDGE (IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. UMTS has since been enhanced as High Speed Packet Access (HSPA).[5]

UMTS supports theoretical maximum data transfer rates of 42 Mbit/s when Evolved HSPA (HSPA+) is implemented in the network.[6] Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s for High-Speed Downlink Packet Access (HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-Switched Data (HSCSD), and 14.4 kbit/s for cdmaOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G.
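The rate figures above can be summarised as a small lookup against the 9.6 kbit/s GSM circuit-switched baseline. The dictionary below is just a restatement of the numbers from the text, not an exhaustive 3GPP release table:

```python
# Peak downlink rates quoted above for successive UMTS releases,
# compared against a single 9.6 kbit/s GSM CSD channel.

UMTS_PEAK_DOWNLINK_KBPS = {
    "R99 (original UMTS)": 384,
    "HSDPA (3.5G)": 7_200,
    "HSPA+ (Evolved HSPA)": 42_000,
}

GSM_CSD_KBPS = 9.6  # single GSM error-corrected circuit-switched channel

for release, kbps in UMTS_PEAK_DOWNLINK_KBPS.items():
    print(f"{release}: {kbps} kbit/s ({kbps / GSM_CSD_KBPS:.0f}x GSM CSD)")
```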
Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with High-Speed Uplink Packet Access (HSUPA). The 3GPP LTE standard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbit/s, using a next-generation air interface technology based upon orthogonal frequency-division multiplexing.

The first national consumer UMTS networks launched in 2002, with a heavy emphasis on telco-provided mobile applications such as mobile TV and video calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web, either directly on a handset or connected to a computer via Wi-Fi, Bluetooth, or USB.[citation needed]

UMTS combines three different terrestrial air interfaces, GSM's Mobile Application Part (MAP) core, and the GSM family of speech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA).[7] All air interface options are part of ITU's IMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called the "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. Note that the terms W-CDMA, TD-CDMA, and TD-SCDMA are misleading: while they suggest covering just a channel access method (namely a variant of CDMA), they are actually the common names for the whole air interface standards.[8]

W-CDMA (WCDMA; Wideband Code-Division Multiple Access), along with UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread, is an air interface standard found in 3G mobile telecommunications networks.
It supports conventional cellular voice, text, and MMS services, but can also carry data at high speeds, allowing mobile operators to deliver higher-bandwidth applications including streaming and broadband Internet access.[9]

W-CDMA uses the DS-CDMA channel access method with a pair of 5 MHz-wide channels. In contrast, the competing CDMA2000 system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States).

The specific frequency bands originally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) direction and 2110–2200 MHz for the base-to-mobile (downlink) direction. In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already occupied.[10] While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US by AT&T Mobility, in New Zealand by Telecom New Zealand on the XT Mobile Network, and in Australia by Telstra on the Next G network. Some carriers, such as T-Mobile, use band numbers to identify the UMTS frequencies: for example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz).
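The channel-width comparison above implies a simple spectrum ratio: under FDD, a W-CDMA carrier occupies a 5 MHz channel in each direction, while a CDMA2000 carrier occupies 1.25 MHz in each direction, so one W-CDMA carrier pair uses as much spectrum as four CDMA2000 carrier pairs. A minimal check of that arithmetic:

```python
# Spectrum arithmetic from the paragraph above: paired (FDD) channels
# of 5 MHz for W-CDMA versus 1.25 MHz for CDMA2000.

WCDMA_CHANNEL_MHZ = 5.0
CDMA2000_CHANNEL_MHZ = 1.25

wcdma_pair_mhz = 2 * WCDMA_CHANNEL_MHZ        # 10.0 MHz per FDD carrier pair
cdma2000_pair_mhz = 2 * CDMA2000_CHANNEL_MHZ  # 2.5 MHz per FDD carrier pair

print(wcdma_pair_mhz / cdma2000_pair_mhz)  # 4.0
```

This spectrum appetite is the basis of the deployment-delay criticism mentioned in the text.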
UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) – frequency-division duplexing (FDD), a 3GPP-standardized version of UMTS networks that makes use of frequency-division duplexing for duplexing over a UMTS Terrestrial Radio Access (UTRA) air interface.[11] W-CDMA is the basis of Japan's NTT DoCoMo's FOMA service and the most commonly used member of the Universal Mobile Telecommunications System (UMTS) family, and is sometimes used as a synonym for UMTS.[12] It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously used time-division multiple access (TDMA) and time-division duplex (TDD) schemes. While not an evolutionary upgrade on the air side, it uses the same core network as the 2G GSM networks deployed worldwide, allowing dual-mode mobile operation along with GSM/EDGE, a feature it shares with other members of the UMTS family. In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G network FOMA. Later, NTT DoCoMo submitted the specification to the International Telecommunication Union (ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short-range DECT system. Later, W-CDMA was selected as an air interface for UMTS. As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS.[13] However, this was resolved by NTT DoCoMo updating their network.
Code-Division Multiple Access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated by Qualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones; its early IS-95 air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x, which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries as of 2006. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has since become the dominant technology, with 457 commercial networks in 178 countries as of April 2012.[14] Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and a smooth upgrade path to LTE. Despite incompatibility with existing air interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard. W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use a direct-sequence CDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000, and it differs from CDMA2000 in many aspects. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density[citation needed]; it also promises to achieve a benefit of reduced cost for video phone handsets.
W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, and cross-licensing of patents between Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA that remain covered by Qualcomm patents.[15] W-CDMA has been developed into a complete set of specifications: a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, how datagrams are structured, and how system interfaces are specified, allowing free competition on technology elements. The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo in Japan in 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. W-CDMA has also been adapted for use in satellite communications on the U.S. Mobile User Objective System, using geosynchronous satellites in place of cell towers. J-Phone Japan (once Vodafone and now SoftBank Mobile) soon followed by launching their own W-CDMA-based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004. Beginning in 2003, Hutchison Whampoa gradually launched their upstart UMTS networks. Most countries have, since the ITU approved the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest", asking the various companies to present what they intend to commit to if awarded the licences. This strategy has been criticised for aiming to drain the cash of operators to the brink of bankruptcy in order to honour their bids or proposals. Most licences carry a time constraint for the rollout of the service, under which a certain coverage must be achieved by a given date or the licence will be revoked.
Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005, followed by New Zealand in August 2005 and Australia in October 2005. AT&T Mobility utilized a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022. Rogers in Canada launched HSDPA in the Toronto Golden Horseshoe district in March 2007 on W-CDMA at 850/1900 MHz, with plans to launch the service commercially in the top 25 cities by October 2007. TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s, available only in main cities and priced at approximately €2/MB.[citation needed] SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities, while SK Telecom announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KT Freetel would thus cut funding for its CDMA2000 network development to the minimum. In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor also operates parallel WLAN roaming networks on GSM, with which the UMTS service competes; for this reason, Telenor dropped support of their WLAN service in Austria (2006). Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started offering W-CDMA services in 2005. In Sweden, Telia introduced W-CDMA in March 2004.
UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) – time-division duplexing (TDD), is a 3GPP-standardized version of UMTS networks that uses UTRA-TDD.[11] UTRA-TDD is a UTRA that uses time-division duplexing for duplexing.[11] While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used.[citation needed] UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used.[citation needed] It is more formally known as IMT-2000 CDMA-TDD or IMT-2000 Time-Division (IMT-TD).[16][17] The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods, code-division multiple access (CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD because time slots can be allocated to either uplink or downlink traffic. TD-CDMA, an acronym for Time-Division–Code-Division Multiple Access, is a channel access method based on using spread-spectrum multiple access (CDMA) across multiple time slots (TDMA). TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access – Time Division Duplex High Chip Rate.[16] UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second).[16] The time slots (TS) are allocated in fixed percentages for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers.
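The frame arithmetic quoted above (10 ms frames of fifteen time slots, giving 1500 slots per second) can be checked in a few lines. This is only a sketch of the numbers given in the text, not an implementation of the air interface.

```python
# UTRA-TDD HCR frame numerology as quoted in the text.
FRAME_MS = 10          # frame length in milliseconds
SLOTS_PER_FRAME = 15   # time slots per frame

frames_per_second = 1000 // FRAME_MS                    # 100 frames per second
slots_per_second = frames_per_second * SLOTS_PER_FRAME  # -> 1500
print(slots_per_second)  # -> 1500, the figure given above
```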
Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tight frequency bands.[18] TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA.[19] In the United States, the technology has been used for public safety and government use in New York City and a few other areas.[needs update][20] In Japan, IPMobile planned to provide TD-CDMA service in 2006, but the service was delayed, changed to TD-SCDMA, and the company went bankrupt before the service officially started. Time-Division Synchronous Code-Division Multiple Access (TD-SCDMA) or UTRA TDD 1.28 Mcps low chip rate (UTRA-TDD LCR)[17][8] is an air interface[17] found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA. TD-SCDMA uses the TDMA channel access method combined with an adaptive synchronous CDMA component[17] on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese-developed standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but was added in Release 4 of the specification. Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000. The term "TD-SCDMA" is misleading: while it suggests covering only a channel access method, it is actually the common name for the whole air interface specification.[8] TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks.
TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT), Datang Telecom and Siemens in an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders. TD-SCDMA proponents also claim it is better suited for densely populated areas.[17] Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low-mobility scenarios within micro or pico cells.[17] TD-SCDMA is based on spread-spectrum technology, which makes it unlikely that it will be able to completely escape the payment of license fees to Western patent holders. The launch of a national TD-SCDMA network was initially projected by 2005,[21] but only reached large-scale commercial trials with 60,000 users across eight cities in 2008.[22] On January 7, 2009, China granted a TD-SCDMA 3G licence to China Mobile.[23] On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August 2009. TD-SCDMA is not commonly used outside of China.[24] TD-SCDMA uses TDD, in contrast to the FDD scheme used by W-CDMA. By dynamically adjusting the number of timeslots used for downlink and uplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same in both directions, so the base station can deduce the downlink channel information from uplink channel estimates, which is helpful for the application of beamforming techniques. TD-SCDMA also uses TDMA in addition to the CDMA used in W-CDMA.
This reduces the number of users in each timeslot, which reduces the implementation complexity of multiuser detection and beamforming schemes, but the non-continuous transmission also reduces coverage (because of the higher peak power needed) and mobility (because of the lower power control frequency), and complicates radio resource management algorithms. The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces the interference between users of the same timeslot using different codes by improving the orthogonality between the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization. On January 20, 2006, the Ministry of Information Industry of the People's Republic of China formally announced that TD-SCDMA was the country's standard for 3G mobile telecommunication. On February 15, 2006, a timeline for deployment of the network in China was announced, stating that pre-commercial trials would take place after completion of a number of test networks in select cities. These trials ran from March to October 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers, China Telecom and China Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008. The standard has been adopted by 3GPP since Rel-4, known as the "UTRA TDD 1.28 Mcps Option".[17] On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008.
Networks using other 3G standards (W-CDMA and CDMA2000 EV-DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch. In January 2009, the Ministry of Industry and Information Technology (MIIT) in China took the unusual step of assigning licences for three different third-generation mobile phone standards to three carriers, in a long-awaited step that was expected to prompt $41 billion in spending on new equipment. The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system had the financial and technical backing to succeed. Licences for the two existing 3G standards, W-CDMA and CDMA2000 1xEV-DO, were assigned to China Unicom and China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services, and the start of service was expected to spur new revenue growth. The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets to use on the network.[25] Deployment of base stations has also been slow, resulting in a lack of improvement of service for users.[26] The network connection itself has consistently been slower than that of the other two carriers, leading to a sharp decline in market share. By 2011 China Mobile had already moved its focus onto TD-LTE.[27][28] Gradual closures of TD-SCDMA stations started in 2016.[29][30] The following is a list of mobile telecommunications networks using third-generation TD-SCDMA / UMTS-TDD (LCR) technology.
In Europe, CEPT allocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use.[33] Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD,[34] due to lack of demand and lack of development of a UMTS-TDD air interface technology suitable for deployment in this band. Ordinary UMTS uses UTRA-FDD as an air interface and is known as UMTS-FDD. UMTS-FDD uses W-CDMA for multiple access and frequency-division duplex for duplexing, meaning that the uplink and downlink transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for 1G, 2G, or 3G mobile telephone service in the countries of operation. UMTS-TDD uses time-division duplexing, allowing the uplink and downlink to share the same spectrum. This allows the operator to more flexibly divide the usage of available spectrum according to traffic patterns. For ordinary phone service, the uplink and downlink would be expected to carry approximately equal amounts of data (because every phone call needs a voice transmission in each direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user sends short commands to the server, but the server sends back whole files that are generally much larger than those commands. UMTS-TDD tends to be allocated frequencies intended for mobile/wireless Internet services rather than being used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed on cellular, PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum. Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies: between 1900 and 1920 MHz, and between 2010 and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) has been used for UMTS-TDD deployments.
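The flexibility TDD offers can be illustrated with a toy slot-allocation function. This is a hypothetical sketch, not part of any UMTS specification: it simply divides a fifteen-slot TDD frame between downlink and uplink in proportion to an assumed traffic mix, as the paragraph above describes.

```python
# Hypothetical sketch: split a 15-slot TDD frame between downlink and
# uplink according to an assumed downlink traffic share. Not from any spec.
def split_slots(downlink_share, total_slots=15):
    """downlink_share: fraction of traffic that is downlink (0..1).

    At least one slot is always reserved for each direction, since both
    links must carry some signalling."""
    dl = max(1, min(total_slots - 1, round(downlink_share * total_slots)))
    return dl, total_slots - dl

print(split_slots(0.5))   # symmetric, voice-like traffic -> (8, 7)
print(split_slots(0.8))   # downlink-heavy, web-browsing-like -> (12, 3)
```

An FDD system, by contrast, has its split fixed by the paired spectrum allocation, which is why the text describes TDD as better suited to asymmetric Internet traffic.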
Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment. In the Czech Republic UMTS-TDD is also used in a frequency range around 872 MHz.[35] UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, the Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA. Deployments in the US thus far have been limited. It has been selected for a public safety support network used by emergency responders in New York,[36] but outside of some experimental systems, notably one from Nextel, thus far the WiMAX standard appears to have gained greater traction as a general mobile Internet access system. A variety of Internet access systems exist that provide broadband-speed access to the net, including WiMAX and HIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and of including UMTS modes optimized for circuit switching should, for example, the operator want to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulator OFCOM is discussing the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure. Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower.
Like most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter. UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands. UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network to GERAN (GSM/EDGE RAN), and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of this, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN. UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000. The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of the RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), RLC (Radio Link Control) and MAC (Media Access Control) protocols. The RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. The RLC protocol operates in three modes: Transparent Mode (TM), Unacknowledged Mode (UM), and Acknowledged Mode (AM). The functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP operation. In TM mode, data is sent to lower layers without adding any header to the SDUs of higher layers. MAC handles the scheduling of data on the air interface depending on higher-layer (RRC) configured parameters. The set of properties related to data transmission is called a Radio Bearer (RB).
This set of properties decides the maximum allowed data in a TTI (Transmission Time Interval). An RB includes RLC information and RB mapping. RB mapping decides the mapping between RB, logical channel and transport channel. Signaling messages are sent on Signaling Radio Bearers (SRBs), and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages go on SRBs. Security includes two procedures: integrity and ciphering. Integrity validates the source of messages and also makes sure that no one (a third/unknown party) on the radio interface has modified the messages. Ciphering ensures that no third party can listen in on data on the air interface. Both integrity and ciphering are applied to SRBs, whereas only ciphering is applied to data RBs. With the Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high. The CN can be connected to various backbone networks, such as the Internet or an Integrated Services Digital Network (ISDN) telephone network. UMTS (and GERAN) include the three lowest layers of the OSI model. The network layer (OSI layer 3) includes the Radio Resource Management protocol (RRM), which manages the bearer channels between the mobile terminals and the fixed network, including handovers. A UARFCN (abbreviation for UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in the UMTS frequency bands. Typically, the channel number is derived from the frequency in MHz through the formula: channel number = frequency × 5. However, this can only represent channels that are centred on a multiple of 200 kHz, which do not align with licensing in North America.
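The UARFCN relationship just described can be expressed directly in code. This sketch covers only the general rule (channel number = frequency in MHz × 5, valid on the 200 kHz raster); the special North American channel values mentioned in the text are outside its scope, and the example carrier frequency is an assumption chosen for illustration.

```python
# Sketch of the general UARFCN rule: channel number = frequency (MHz) * 5,
# valid only for channels centred on a multiple of 200 kHz. The special
# North American channel numbers are not handled here.
def uarfcn_from_freq(freq_mhz):
    ch = freq_mhz * 5
    if abs(ch - round(ch)) > 1e-9:
        raise ValueError("frequency is not on the 200 kHz raster")
    return round(ch)

def freq_from_uarfcn(uarfcn):
    """Inverse mapping: centre frequency in MHz for a given channel number."""
    return uarfcn / 5

print(uarfcn_from_freq(2112.8))  # -> 10564 (an assumed 2100 MHz downlink carrier)
print(freq_from_uarfcn(10564))   # -> 2112.8
```

A frequency such as 2112.85 MHz is not a multiple of 200 kHz, so the raster check rejects it; those off-raster cases are exactly what the special North American channel numbers were added for.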
3GPP added several special values for the common North American channels. Over 130 licenses had been awarded to operators worldwide as of December 2004, specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. In Germany, bidders paid a total of €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and the Sonera/Telefónica consortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notably KPN). Over the last few years some operators have written off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with the surrounding 2G GSM base stations for rural area coverage, a trend that is expected to expand over Europe in the next 1–3 years.[needs update] The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America: the 1900 MHz range is used for 2G (PCS) services, and the 2100 MHz range is used for satellite communications. Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink.[needs update] AT&T Wireless launched UMTS services in the United States by the end of 2004, strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since launched UMTS in select US cities.
Cingular renamed itself AT&T Mobility and rolled out[37] a UMTS network at 850 MHz in some cities to enhance its existing UMTS network at 1900 MHz, and now offers subscribers a number of dual-band UMTS 850/1900 phones. T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4G LTE services.[38] In Canada, UMTS coverage is provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell–Telus networks; Bell and Telus share the network. More recently, the new providers Wind Mobile, Mobilicity and Videotron have begun operations in the 1700 MHz band. In 2008, the Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded as Next G, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through co-ownership of the owning and administrating company 3GIS. This company is also co-owned by Hutchison 3G Australia, and this is the primary network used by their customers. Optus is currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and on the 900 MHz band in regional areas. Vodafone is also building a 3G network using the 900 MHz band. In India, BSNL has offered 3G services since October 2009, beginning with the larger cities and then expanding to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate base station and subscriber. Carriers in South America are now also rolling out 850 MHz networks. UMTS phones (and data cards) are highly portable: they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place).
In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call, the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges. Most UMTS licensees consider ubiquitous, transparent global roaming an important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands: Europe initially used 2100 MHz, while most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz (uplink) / 2100 MHz (downlink), and these bands have also been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere, and vice versa. There are now 11 different frequency combinations used around the world, including frequencies formerly used solely for 2G services. UMTS phones can use a Universal Subscriber Identity Module (USIM, based on GSM's SIM card) and also work (including UMTS services) with GSM SIM cards. This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow for calls to a customer to be redirected to them while roaming and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contacts. Handsets can store their data in their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information).
A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM, meaning it is the (U)SIM (not the phone) that determines the phone number and the billing for calls made from the phone. Japan was the first country to adopt 3G technologies, and since it had not used GSM previously there was no need to build GSM compatibility into handsets, so Japanese 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network. Using a pre-release specification,[39] it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM-card-based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS. All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they could fall back to the older GSM standard). Canada and the USA share common frequencies, as do most European countries. The article UMTS frequency bands gives an overview of UMTS network frequencies around the world. Using a cellular router, PCMCIA or USB card, customers are able to access 3G broadband services, regardless of their choice of computer (such as a tablet PC or a PDA). Some software installs itself from the modem, so that in some cases absolutely no knowledge of technology is required to get online in moments. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobile WLAN access point.
There are very few 3G phones or modems available supporting all 3G frequencies (UMTS 850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones with penta-band 3G coverage, including the N8 and E7. Many other phones offer more than one band, which still enables extensive roaming. For example, Apple's iPhone 4 contains a quad-band chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed. The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the 3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This, together with CDMA2000's narrower bandwidth requirements, makes it easier to deploy in existing spectrum. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both. For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction; a standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets, however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum. Another competitor to UMTS is EDGE (IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrum. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE than to install almost all brand-new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS is, EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out or as a complement for rural areas.
This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation including vertical handovers. China's TD-SCDMA standard is often seen as a competitor, too. TD-SCDMA has been added to UMTS Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). Unlike TD-CDMA (UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR), which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macro cells. However, the lack of vendor support is preventing it from being a real competitor. While DECT is technically capable of competing with UMTS and other cellular networks in densely populated, urban areas, it has only been deployed for domestic cordless phones and private in-house networks. All of these competitors have been accepted by the ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD. On the Internet access side, competing systems include WiMAX and Flash-OFDM. From a GSM/GPRS network, the following network elements can be reused: From a GSM/GPRS communication radio network, the following elements cannot be reused: They can remain in the network and be used in dual network operation where 2G and 3G networks co-exist while network migration proceeds and new 3G terminals become available for use in the network. The UMTS network introduces new network elements that function as specified by 3GPP: The functionality of the MSC changes when going to UMTS. In a GSM system the MSC handles all the circuit-switched operations, like connecting the A- and B-subscriber through the network. In UMTS the media gateway (MGW) takes care of data transfer in circuit-switched networks; the MSC controls MGW operations.
Some countries, including the United States, have allocated spectrum differently from the ITU recommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available.[citation needed] In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment and requiring the design and manufacture of different equipment for use in these markets. As is the case with GSM900 today[when?], standard UMTS 2100 MHz equipment will not work in those markets. However, UMTS appears not to suffer as much from handset band-compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes. Penta-band UMTS (850, 900, 1700, 2100, and 1900 MHz bands), quad-band GSM (850, 900, 1800, and 1900 MHz bands) and tri-band UMTS (850, 1900, and 2100 MHz bands) handsets are becoming more commonplace.[40] In its early days[when?], UMTS had problems in many countries: overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor.[citation needed] The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM. Customers found their connections being dropped as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue.[citation needed] Compared to GSM, UMTS networks initially required a higher base station density. For fully fledged UMTS incorporating video on demand features, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used; with the growing use of lower-frequency bands (such as 850 and 900 MHz), this is no longer so.
This has led to increasing rollout of the lower-band networks by operators since 2006.[citation needed] Even with current technologies and low-band UMTS, telephony and data over UMTS require more power than on comparable GSM networks. Apple Inc. cited[41] UMTS power consumption as the reason that the first-generation iPhone only supported EDGE. Its release of the iPhone 3G quoted talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers indicate different battery lifetimes for UMTS mode compared to GSM mode as well. As battery and network technology improve, this issue is diminishing. As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information.[42] In August 2014, the Washington Post reported on widespread marketing of surveillance systems using Signalling System No. 7 (SS7) protocols to locate callers anywhere in the world.[42] In December 2014, news broke that SS7's own functions can be repurposed for surveillance, because of its relaxed security, in order to listen to calls in real time, to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers.[43] Deutsche Telekom and Vodafone declared the same day that they had fixed gaps in their networks, but that the problem is global and can only be fixed with a telecommunication-system-wide solution.[44] The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
https://en.wikipedia.org/wiki/TD-CDMA
High Speed Packet Access (HSPA)[1] is an amalgamation of two mobile protocols—High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA)—that extends and improves the performance of existing 3G mobile telecommunication networks using the WCDMA protocols. A further-improved 3GPP standard called Evolved High Speed Packet Access (also known as HSPA+) was released late in 2008, with subsequent worldwide adoption beginning in 2010. The newer standard allows bit rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink; however, these speeds are rarely achieved in practice.[2] The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. They also reduced latency and provided up to five times more system capacity in the downlink and up to twice as much system capacity in the uplink compared with the original WCDMA protocol. High Speed Downlink Packet Access (HSDPA) is an enhanced 3G (third-generation) mobile communications protocol in the High-Speed Packet Access (HSPA) family. HSDPA is also known as 3.5G and 3G+. It allows networks based on the Universal Mobile Telecommunications System (UMTS) to have higher data speeds and capacity. HSDPA also decreases latency, and therefore the round-trip time for applications. HSDPA was introduced in 3GPP Release 5. It was accompanied by an improvement to the uplink that provided a new bearer of 384 kbit/s (the previous maximum bearer was 128 kbit/s). Evolved High Speed Packet Access (HSPA+), introduced in 3GPP Release 7, further increased data rates by adding 64QAM modulation, MIMO, and Dual-Carrier HSDPA operation. Under 3GPP Release 11, even higher speeds of up to 337.5 Mbit/s were possible.[3] The first phase of HSDPA was specified in 3GPP Release 5. This phase introduced new basic functions and aimed to achieve peak data rates of 14.0 Mbit/s with significantly reduced latency.
The improvement in speed and latency reduced the cost per bit and enhanced support for high-performance packet data applications. HSDPA is based on shared channel transmission, and its key features are shared channel and multi-code transmission, higher-order modulation, a short Transmission Time Interval (TTI), fast link adaptation and scheduling, and fast hybrid automatic repeat request (HARQ). Additional new features include the High Speed Downlink Shared Channel (HS-DSCH), quadrature phase-shift keying, 16-quadrature amplitude modulation, and the High Speed Medium Access protocol (MAC-hs) in base stations. The upgrade to HSDPA is often just a software update for WCDMA networks. In HSDPA, voice calls are usually prioritized over data transfer. The following table is derived from table 5.1a of release 11 of 3GPP TS 25.306[4] and shows the maximum data rates of different device classes and by what combination of features they are achieved. The per-cell, per-stream data rate is limited by the "maximum number of bits of an HS-DSCH transport block received within an HS-DSCH TTI" and the "minimum inter-TTI interval". The TTI is 2 milliseconds. So, for example, Cat 10 can decode 27,952 bits / 2 ms = 13.976 Mbit/s (and not 14.4 Mbit/s as often claimed incorrectly). Categories 1–4 and 11 have inter-TTI intervals of 2 or 3, which reduces the maximum data rate by that factor. Dual-Cell and MIMO 2x2 each multiply the maximum data rate by 2, because multiple independent transport blocks are transmitted over different carriers or spatial streams, respectively. The data rates given in the table are rounded to one decimal point. Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSDPA UE Categories. As of 28 August 2009[update], 250 HSDPA networks had commercially launched mobile broadband services in 109 countries.
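The per-category arithmetic described above can be sketched in a few lines of Python. The Cat 10 transport-block size is taken from the text; the inter-TTI and dual-carrier calls are hypothetical illustrations of the stated divisors and ×2 multipliers, not spec table entries:

```python
# Peak-rate arithmetic for HSDPA UE categories, per the rules above:
# rate = transport block bits / (TTI * inter-TTI interval), multiplied by
# 2 for Dual-Cell and by 2 again for 2x2 MIMO, since each carrier and
# spatial stream carries an independent transport block.

TTIS_PER_SECOND = 500  # the HS-DSCH TTI is 2 ms

def hsdpa_peak_mbps(tb_bits, inter_tti=1, carriers=1, streams=1):
    blocks_per_second = TTIS_PER_SECOND / inter_tti
    return tb_bits * blocks_per_second * carriers * streams / 1e6

# Cat 10: 27,952 bits every 2 ms -> 13.976 Mbit/s, not the oft-claimed 14.4
print(round(hsdpa_peak_mbps(27952), 3))               # 13.976
# The same transport block with an inter-TTI interval of 2 halves the rate
print(round(hsdpa_peak_mbps(27952, inter_tti=2), 3))  # 6.988
# Hypothetical dual-cell device: two independent carriers double the rate
print(round(hsdpa_peak_mbps(27952, carriers=2), 3))   # 27.952
```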
169 HSDPA networks supported 3.6 Mbit/s peak downlink data throughput, and a growing number delivered 21 Mbit/s peak downlink data throughput.[citation needed] CDMA2000-EVDO networks had the early lead on performance. In particular, Japanese providers were highly successful benchmarks for this network standard. However, this later changed in favor of HSDPA, as an increasing number of providers worldwide began adopting it. In 2007, an increasing number of telcos worldwide began selling HSDPA USB modems to provide mobile broadband connections. In addition, the popularity of HSDPA landline replacement boxes grew—these provided HSDPA for data via Ethernet and Wi-Fi, as well as ports for connecting traditional landline telephones. Some were marketed with connection speeds of "up to 7.2 Mbit/s"[5] under ideal conditions. However, these services could be slower, such as when in fringe coverage indoors. High-Speed Uplink Packet Access (HSUPA) is a 3G mobile telephony protocol in the HSPA family. It is specified and standardized in 3GPP Release 6 to improve the uplink data rate to 5.76 Mbit/s, extend capacity, and reduce latency. Together with additional improvements, this allows for new features such as Voice over Internet Protocol (VoIP), uploading pictures, and sending large e-mail messages. HSUPA was the second major step in the UMTS evolution process. It has since been superseded by newer technologies with higher transfer rates, such as LTE (150 Mbit/s for downlink and 50 Mbit/s for uplink) and LTE Advanced (maximum downlink rates of over 1 Gbit/s). HSUPA adds a new transport channel to WCDMA, called the Enhanced Dedicated Channel (E-DCH). It also features several improvements similar to those of HSDPA, including multi-code transmission, a shorter transmission time interval enabling faster link adaptation, fast scheduling, and fast hybrid automatic repeat request (HARQ) with incremental redundancy, making retransmissions more effective.
Similar to HSDPA, HSUPA uses a "packet scheduler", but it operates on a "request-grant" principle: the user equipment (UE) requests permission to send data, and the scheduler decides when and how many UEs will be allowed to do so. A request for transmission contains data about the state of the transmission buffer and the queue at the UE, and its available power margin. However, unlike HSDPA, uplink transmissions are not orthogonal to each other. In addition to this "scheduled" mode of transmission, the standards allow a self-initiated transmission mode from the UEs, denoted "non-scheduled". The non-scheduled mode can, for example, be used for VoIP services, for which even the reduced TTI and the Node B-based scheduler are unable to provide the necessary short delay time and constant bandwidth. Each MAC-d flow (i.e., QoS flow) is configured to use either the scheduled or the non-scheduled mode. The UE adjusts the data rate for scheduled and non-scheduled flows independently. The maximum data rate of each non-scheduled flow is configured at call setup and typically not changed frequently. The power used by the scheduled flows is controlled dynamically by the Node B through absolute grant (consisting of an actual value) and relative grant (consisting of a single up/down bit) messages. At the physical layer, HSUPA introduces the following new channels: The following table shows uplink speeds for the different categories of HSUPA: Further UE categories were defined from 3GPP Release 7 onwards as Evolved HSPA (HSPA+) and are listed in Evolved HSUPA UE Categories. Evolved HSPA (also known as HSPA Evolution or HSPA+) is a wireless broadband standard defined in 3GPP Release 7 of the WCDMA specification. It provides extensions to the existing HSPA definitions and is therefore backward compatible all the way to the original Release 99 WCDMA network releases.
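The HSUPA "request-grant" principle described earlier can be illustrated with a toy Python scheduler. The class names, load limit, and grant steps are all illustrative assumptions; the real Node B scheduler works on uplink noise rise and the E-DCH grant channels, not on these simplified numbers:

```python
# Toy sketch of HSUPA request-grant scheduling: UEs report their buffer
# state and power headroom; the scheduler steps grants up or down (the
# "relative grant" up/down bit) while keeping total granted power under
# a cell load limit (a stand-in for the uplink noise-rise budget).

class UE:
    def __init__(self, name, buffer_bits, headroom_db):
        self.name = name
        self.buffer_bits = buffer_bits   # transmission buffer state
        self.headroom_db = headroom_db   # available power margin
        self.grant = 1.0                 # current power grant (arbitrary units)

def schedule(ues, cell_load_limit=4.0):
    """Grant more power to UEs with data and headroom, step down the rest."""
    total = sum(u.grant for u in ues)
    for u in sorted(ues, key=lambda u: u.buffer_bits, reverse=True):
        if u.buffer_bits == 0 or u.headroom_db <= 0:
            u.grant = max(u.grant / 2, 0.1)  # relative grant: "down"
        elif total < cell_load_limit:
            total -= u.grant
            u.grant *= 2                     # relative grant: "up"
            total += u.grant
    return {u.name: u.grant for u in ues}

# "a" has queued data and headroom, so it is granted up; idle "b" is stepped down.
print(schedule([UE("a", 8000, 3.0), UE("b", 0, 5.0)]))
```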
Evolved HSPA provides data rates between 42.2 and 56 Mbit/s in the downlink and 22 Mbit/s in the uplink (per 5 MHz carrier) with multiple input, multiple output (2x2 MIMO) technologies and higher order modulation (64 QAM). With Dual Cell technology, these can be doubled. Since 2011, HSPA+ has been widely deployed among WCDMA operators, with nearly 200 commitments.[6]
https://en.wikipedia.org/wiki/High_Speed_Packet_Access
Evolved High Speed Packet Access, HSPA+, HSPA (Plus) or HSPAP, is a technical standard for wireless broadband telecommunication, and extends the original HSPA. The 3GPP standards organisation specified HSPA+ in Release 7. HSPA+ can achieve data rates of up to 42.2 Mbit/s.[1] HSPA+ upgrades existing 3G networks to achieve speeds closer to 4G without a new radio interface. HSPA+ should not be confused with LTE, which uses an air interface based on orthogonal frequency-division modulation and multiple access.[2] HSPA+ introduces antenna array technologies such as beamforming and multiple-input multiple-output communications (MIMO). Beamforming focuses antenna power in a beam toward the user's direction. MIMO uses multiple antennas on the sending and receiving side. Further releases of the standard have introduced dual-carrier operation, allowing communication over two 5 MHz frequency bands simultaneously. Advanced HSPA+ is a further evolution of HSPA and provides download speeds up to 168 Mbit/s and upload speeds up to 22 Mbit/s. This is achieved with higher-order modulation (64QAM) or by combining cells with Dual-Cell HSDPA. An Evolved HSDPA network can reach up to 28 Mbit/s (Release 7, MIMO with 16QAM) and 42 Mbit/s (Release 8, 64QAM + MIMO) with a single 5 MHz carrier. Combining cells can improve throughput, diversity, and joint scheduling.[3] Quality of service can be particularly improved for users with poor reception. Alternatively, data rates can be doubled by doubling the bandwidth to 10 MHz (i.e., 2×5 MHz) using DC-HSDPA. Dual-Carrier HSDPA (also known as Dual-Cell HSDPA) is part of the 3GPP Release 8 specification, allowing communication with a mobile user over multiple frequency bands simultaneously. UMTS licenses are often issued as 5, 10, or 20 MHz paired spectrum allocations.
The multicarrier feature achieves better resource utilization and spectrum efficiency through joint resource allocation and load balancing across the downlink carriers.[4] New HSDPA User Equipment categories 21–24 have been introduced that support DC-HSDPA. DC-HSDPA can support up to 42.2 Mbit/s, but unlike HSPA, it does not need to rely on MIMO transmission. The support of MIMO in combination with DC-HSDPA allows operators deploying Release 7 MIMO to benefit from the DC-HSDPA functionality defined in Release 8. While in Release 8 DC-HSDPA can only operate on adjacent carriers, Release 9 also allows the paired cells to operate on two different frequency bands. Later releases allow the use of up to four carriers simultaneously. From Release 9 onwards it is possible to use DC-HSDPA in combination with MIMO on both carriers. The support of MIMO in combination with DC-HSDPA allows theoretical speeds of up to 84.4 Mbit/s.[5][6] Dual-Carrier HSUPA, also known as Dual-Cell HSUPA, is a wireless broadband standard based on HSPA that is defined in 3GPP UMTS Release 9.
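The DC-HSDPA peak rates quoted above follow from simple multiplication: each extra carrier or 2x2 MIMO stream carries an independent transport block. A rough sanity check, using 21.1 Mbit/s as an approximate single-carrier, single-stream 64QAM baseline (an assumption for illustration, not a spec value):

```python
# Rough arithmetic behind the DC-HSDPA figures: peaks multiply with the
# number of carriers and 2x2 MIMO spatial streams, since each carries an
# independent transport block.

BASELINE_MBPS = 21.1  # approx. single-carrier, single-stream 64QAM peak

def hspa_plus_peak(carriers=1, mimo_streams=1):
    return round(BASELINE_MBPS * carriers * mimo_streams, 1)

print(hspa_plus_peak(carriers=2))                  # 42.2: DC-HSDPA alone
print(hspa_plus_peak(carriers=2, mimo_streams=2))  # 84.4: DC-HSDPA + MIMO (Rel. 9+)
```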
Dual-Cell (DC-)HSUPA is the natural evolution of HSPA by means of carrier aggregation in the uplink.[8] UMTS licenses are often issued as 10 or 15 MHz paired spectrum allocations. The basic idea of the multicarrier feature is to achieve better resource utilization and spectrum efficiency by means of joint resource allocation and load balancing across the uplink carriers. Enhancements similar to those introduced with Dual-Cell HSDPA in the downlink for 3GPP Release 8 were standardized for the uplink in 3GPP Release 9, called Dual-Cell HSUPA. The standardisation of Release 9 was completed in December 2009.[9][10][11] The following table shows uplink speeds for the different categories of Evolved HSUPA. The aggregation of more than two carriers has been studied, and 3GPP Release 11 was scheduled to include 4-carrier HSPA. The standard was scheduled to be finalised in Q3 2012, with the first chipsets supporting MC-HSPA expected in late 2013. Release 11 specifies 8-carrier HSPA, allowed in non-contiguous bands, with 4×4 MIMO offering peak transfer rates up to 672 Mbit/s. The 168 Mbit/s and 22 Mbit/s figures represent theoretical peak speeds; the actual speed for a user will be lower. In general, HSPA+ offers higher bitrates only in very good radio conditions (very close to the cell tower) or if the terminal and network both support either MIMO or Dual-Cell HSDPA, which effectively use two parallel transmit channels with different technical implementations. The higher 168 Mbit/s speeds are achieved by using multiple carriers with Dual-Cell HSDPA and 4-way MIMO together simultaneously.[12][13] A flattened all-IP architecture is an option for the network within HSPA+. In this architecture, the base stations connect to the network via IP (often with Ethernet providing the transmission), bypassing legacy elements for the user's data connections. This makes the network faster and cheaper to deploy and operate.
The legacy architecture is still permitted with Evolved HSPA and is likely to exist for several years after adoption of the other aspects of HSPA+ (higher-order modulation, multiple streams, etc.). This 'flat architecture' connects the 'user plane' directly from the base station to the GGSN external gateway, using any available link technology supporting TCP/IP. The definition can be found in 3GPP TR 25.999. The user's data flow bypasses the Radio Network Controller (RNC) and the SGSN of the previous 3GPP UMTS architecture versions, thus simplifying the architecture and reducing costs and delays. This is nearly identical to the 3GPP Long Term Evolution (LTE) flat architecture as defined in the 3GPP standard Rel-8. The changes allow cost-effective modern link-layer technologies such as xDSL or Ethernet, and these technologies are no longer tied to the more expensive and rigid requirements of the older SONET/SDH and E1/T1 infrastructure. There are no changes to the 'control plane'. Nokia Siemens Networks' Internet HSPA (I-HSPA) was the first commercial solution implementing the Evolved HSPA flattened all-IP architecture.[14]
https://en.wikipedia.org/wiki/HSPA%2B
Orthogonal frequency-division multiple access (OFDMA) is a multi-user version of the popular orthogonal frequency-division multiplexing (OFDM) digital modulation scheme. Multiple access is achieved in OFDMA by assigning subsets of subcarriers to individual users. This allows simultaneous low-data-rate transmission from several users. OFDMA is often compared to the combination of OFDM with statistical time-division multiplexing. The advantages and disadvantages summarized below are further discussed in the Characteristics and principles of operation section. See also the list of OFDM key features. Based on feedback information about the channel conditions, adaptive user-to-subcarrier assignment can be achieved.[2] If the assignment is done sufficiently fast, this further improves the OFDM robustness to fast fading and narrow-band co-channel interference, and makes it possible to achieve even better system spectral efficiency. Different numbers of sub-carriers can be assigned to different users in order to support differentiated quality of service (QoS), i.e. to control the data rate and error probability individually for each user. OFDMA can be seen as an alternative to combining OFDM with time-division multiple access (TDMA) or time-domain statistical multiplexing. Low-data-rate users can send continuously with low transmission power instead of using a "pulsed" high-power carrier. Constant delay, and shorter delay, can be achieved. OFDMA can also be described as a combination of frequency-domain and time-domain multiple access, where the resources are partitioned in the time–frequency space, and slots are assigned along the OFDM symbol index as well as the OFDM sub-carrier index.
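The core OFDMA idea above (each user gets its own subset of subcarriers, sized to its QoS needs) can be sketched minimally in Python. The subcarrier count, user names, and proportional-to-demand policy are illustrative assumptions; real systems assign subcarriers adaptively from channel feedback:

```python
# Minimal sketch of OFDMA multiple access: split the available subcarrier
# indices among users, giving each a share proportional to its demand.

NUM_SUBCARRIERS = 64  # illustrative carrier count

def assign_subcarriers(demands):
    """Return {user: list of subcarrier indices}, proportional to demand."""
    total = sum(demands.values())
    allocation, next_idx = {}, 0
    for user, demand in demands.items():
        count = round(NUM_SUBCARRIERS * demand / total)
        allocation[user] = list(range(next_idx, next_idx + count))
        next_idx += count
    return allocation

# A low-rate voice user and a high-rate video user share one OFDM channel.
alloc = assign_subcarriers({"voip_user": 1, "video_user": 3})
print(len(alloc["voip_user"]), len(alloc["video_user"]))  # 16 48
```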
OFDMA is considered highly suitable for broadband wireless networks, due to advantages including scalability, use of multiple antennas (MIMO-friendliness), and the ability to take advantage of channel frequency selectivity.[1] In spectrum-sensing cognitive radio, OFDMA is a possible approach to filling free radio frequency bands adaptively. Timo A. Weiss and Friedrich K. Jondral of the University of Karlsruhe proposed a spectrum pooling system in which free bands sensed by nodes were immediately filled by OFDMA subbands.[relevant?][citation needed] OFDMA is used in: OFDMA is also a candidate access method for the IEEE 802.22 Wireless Regional Area Networks (WRAN), a cognitive radio technology which uses white spaces in television spectrum, and the proposed access method for the DECT-5G specification, which aims to fulfill IMT-2020 requirements for high-throughput mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) applications.
https://en.wikipedia.org/wiki/OFDMA
Evolution-Data Optimized (EV-DO, EVDO, etc.) is a telecommunications standard for the wireless transmission of data through radio signals, typically for broadband Internet access. EV-DO is an evolution of the CDMA2000 (IS-2000) standard which supports high data rates and can be deployed alongside a wireless carrier's voice services. It uses advanced multiplexing techniques including code-division multiple access (CDMA) as well as time-division multiplexing (TDM) to maximize throughput. It is a part of the CDMA2000 family of standards and has been adopted by many mobile phone service providers around the world, particularly those previously employing CDMA networks. It is also used on the Globalstar satellite phone network.[1] An EV-DO channel has a bandwidth of 1.25 MHz, the same bandwidth size that IS-95A (IS-95) and IS-2000 (1xRTT) use,[2] though the channel structure is very different. The back-end network is entirely packet-based, and is not constrained by restrictions typically present on a circuit-switched network. The EV-DO feature of CDMA2000 networks provides access to mobile devices with forward link air interface speeds of up to 2.4 Mbit/s with Rel. 0 and up to 3.1 Mbit/s with Rev. A. The reverse link rate for Rel. 0 can operate up to 153 kbit/s, while Rev. A can operate at up to 1.8 Mbit/s. It was designed to be operated end-to-end as an IP-based network, and can support any application which can operate on such a network within its bit-rate constraints. There have been several revisions of the standard, starting with Release 0 (Rel. 0). This was later expanded upon with Revision A (Rev. A) to support quality of service (to improve latency) and higher rates on the forward and reverse link. In late 2006, Revision B (Rev. B) was published, whose features include the ability to bundle multiple carriers to achieve even higher rates and lower latencies (see TIA-856 Rev. B below). The upgrade from EV-DO Rev. A to Rev.
B involves a software update of the cell site modem, and additional equipment for new EV-DO carriers. Existing cdma2000 operators may have to retune some of their existing 1xRTT channels to other frequencies, as Rev. B requires all DO carriers to be within 5 MHz. The initial design of EV-DO was developed by Qualcomm in 1999 to meet IMT-2000 requirements for a greater-than-2 Mbit/s downlink for stationary communications, as opposed to mobile communication (i.e., moving cellular phone service). Initially, the standard was called High Data Rate (HDR), but it was renamed to 1xEV-DO after it was ratified by the International Telecommunication Union (ITU) under the designation TIA-856. Originally, 1xEV-DO stood for "1x Evolution-Data Only", referring to its being a direct evolution of the 1x (1xRTT) air interface standard, with its channels carrying only data traffic. The title of the 1xEV-DO standard document is "cdma2000 High Rate Packet Data Air Interface Specification", as cdma2000 (lowercase) is another name for the 1x standard, numerically designated as TIA-2000. Later, due to possible negative connotations of the word "only", the "DO" part of the name was changed to stand for "Data Optimized", so EV-DO now stands for "Evolution-Data Optimized". The 1x prefix has been dropped by many of the major carriers, and the technology is marketed simply as EV-DO.[3] This provides a more market-friendly emphasis of the technology being data-optimized. The primary characteristic that differentiates an EV-DO channel from a 1xRTT channel is that it is time multiplexed on the forward link (from the tower to the mobile). This means that a single mobile has full use of the forward traffic channel within a particular geographic area (a sector) during a given slot of time. Using this technique, EV-DO is able to modulate each user's time slot independently.
This allows the service of users in favorable RF conditions with very complex modulation techniques, while also serving users in poor RF conditions with simpler (and more redundant) signals.[4] The forward channel is divided into slots, each being 1.667 ms long. In addition to user traffic, overhead channels are interlaced into the stream; these include the 'pilot', which helps the mobile find and identify the channel, the Media Access Channel (MAC), which tells the mobile devices when their data is scheduled, and the 'control channel', which contains other information the network needs the mobile devices to know. The modulation to be used to communicate with a given mobile unit is determined by the mobile device itself; it listens to the traffic on the channel and, depending on the received signal strength along with the perceived multi-path and fading conditions, makes a best guess as to what data rate it can sustain while maintaining a reasonable frame error rate of 1–2%. It then communicates this information back to the serving sector in the form of an integer between 1 and 12 on the "Digital Rate Control" (DRC) channel. Alternatively, the mobile can select a "null" rate (DRC 0), indicating that the mobile either cannot decode data at any rate, or that it is attempting to hand off to another serving sector.[4] The DRC values are as follows:[5] Another important aspect of the EV-DO forward link channel is the scheduler. The scheduler most commonly used is called "proportional fair". It is designed to maximize sector throughput while also guaranteeing each user a certain minimum level of service. The idea is to schedule mobiles reporting higher DRC indices more often, with the hope that those reporting worse conditions will improve in time. The system also incorporates Incremental Redundancy Hybrid ARQ. Each sub-packet of a multi-slot transmission is a turbo-coded replica of the original data bits.
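A toy Python version of the proportional fair idea described above: each slot goes to the mobile with the highest ratio of its currently requested rate (reported via DRC) to its own average served throughput, so good-channel mobiles are favoured without starving the rest. The rates, smoothing constant, and averaging rule are illustrative; the real EV-DO scheduler is considerably more involved:

```python
# Toy proportional fair scheduler for the EV-DO forward link.
# drc_rates[i][t] = rate (Mbit/s) mobile i reports via its DRC in slot t.

def proportional_fair(slots, drc_rates, alpha=0.1):
    n = len(drc_rates)
    avg = [1e-6] * n  # average served throughput per mobile (tiny, not zero)
    schedule = []
    for t in range(slots):
        # Pick the mobile maximising instantaneous rate / average throughput.
        winner = max(range(n), key=lambda i: drc_rates[i][t] / avg[i])
        schedule.append(winner)
        for i in range(n):  # exponentially weighted moving-average update
            served = drc_rates[i][t] if i == winner else 0.0
            avg[i] = (1 - alpha) * avg[i] + alpha * served
    return schedule

# Mobile 0 always reports a better channel, yet mobile 1 still gets slots,
# because its shrinking average throughput raises its priority ratio.
print(proportional_fair(4, [[2.4, 2.4, 2.4, 2.4], [0.6, 0.6, 0.6, 0.6]]))
```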
This allows mobiles to acknowledge a packet before all of its sub-sections have been transmitted. For example, if a mobile transmits a DRC index of 3 and is scheduled to receive data, it will expect to get data during four time slots. If after decoding the first slot the mobile is able to determine the entire data packet, it can send an early acknowledgement back at that time; the remaining three sub-packets will be cancelled. If however the packet is not acknowledged, the network will proceed with the transmission of the remaining parts until all have been transmitted or the packet is acknowledged.[4] The reverse link (from the mobile back to the Base Transceiver Station) on EV-DO Rel. 0 operates very similarly to that of CDMA2000 1xRTT. The channel includes a reverse link pilot (which helps with decoding the signal) along with the user data channels. Some additional channels that do not exist in 1x include the DRC channel (described above) and the ACK channel (used for HARQ). Only the reverse link has any sort of power control, because the forward link is always transmitted at full power for use by all the mobiles.[5] The reverse link has both open-loop and closed-loop power control. In the open loop, the reverse link transmission power is set based upon the received power on the forward link. In the closed loop, the reverse link power is adjusted up or down 800 times a second, as indicated by the serving sector (similar to 1x).[6] All of the reverse link channels are combined using code division and transmitted back to the base station using BPSK,[7] where they are decoded. The maximum speed available for user data is 153.2 kbit/s, but in real-life conditions this is rarely achieved. Typical speeds achieved are between 20 and 50 kbit/s. Revision A of EV-DO makes several additions to the protocol while keeping it completely backwards compatible with Release 0.
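The early-acknowledgement behaviour of EV-DO's incremental-redundancy HARQ can be sketched as follows. The `decode_after` parameter is a hypothetical stand-in for how many redundant sub-packets this particular mobile happens to need before it can decode the whole packet:

```python
# Sketch of HARQ early acknowledgement: a multi-slot packet is sent as
# redundant turbo-coded sub-packets; the mobile ACKs as soon as it can
# decode, and the remaining sub-packets are cancelled.

def transmit_packet(scheduled_slots, decode_after):
    """Return the number of sub-packets actually sent before the ACK."""
    for sent in range(1, scheduled_slots + 1):
        if sent >= decode_after:  # enough redundancy accumulated: early ACK
            return sent
    return scheduled_slots        # never ACKed within the allocation

# DRC index 3 schedules four slots; in good conditions the mobile decodes
# after the first sub-packet, so the other three are cancelled.
print(transmit_packet(scheduled_slots=4, decode_after=1))  # 1
print(transmit_packet(scheduled_slots=4, decode_after=9))  # 4 (no early ACK)
```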
These changes included the introduction of several new forward link data rates that increase the maximum burst rate from 2.45 Mbit/s to 3.1 Mbit/s. Also included were protocols that would decrease connection establishment time (called enhanced access channel MAC), the ability for more than one mobile to share the same timeslot (multi-user packets) and the introduction of QoS flags. All of these were put in place to allow for low-latency, low-bit-rate communications such as VoIP.[8] The additional forward rates for EV-DO Rev. A are:[9] In addition to the changes on the forward link, the reverse link was enhanced to support higher-complexity modulation (and thus higher bit rates). An optional secondary pilot was added, which is activated by the mobile when it tries to achieve enhanced data rates. To combat reverse link congestion and noise rise, the protocol calls for each mobile to be given an interference allowance, which is replenished by the network when the reverse link conditions allow it.[9] The reverse link has a maximum rate of 1.8 Mbit/s, but under normal conditions users experience a rate of approximately 500–1000 kbit/s, with more latency than DOCSIS and DSL. EV-DO Rev. B is a multi-carrier evolution of the Rev. A specification. It maintains the capabilities of EV-DO Rev. A, and provides the following enhancements: Qualcomm realized early on that EV-DO was a stop-gap solution, foresaw an upcoming format war with LTE, and determined that a new standard would be needed. Qualcomm originally called this technology EV-DV (Evolution Data and Voice).[10] As EV-DO became more pervasive, EV-DV evolved into EV-DO Rev. C. The EV-DO Rev. C standard was specified by 3GPP2 to improve the CDMA2000 mobile phone standard for next-generation applications and requirements.
It was proposed by Qualcomm as the natural evolution path for CDMA2000, and the specifications were published by 3GPP2 (C.S0084-*) and TIA (TIA-1121) in 2007 and 2008 respectively.[11][12] The brand name UMB (Ultra Mobile Broadband) was introduced in 2006 as a synonym for this standard.[13] UMB was intended to be a fourth-generation technology, which would make it compete with LTE and WiMAX. These technologies use a high-bandwidth, low-latency, underlying TCP/IP network with high-level services such as voice built on top. Widespread deployment of 4G networks promises to make applications that were previously not feasible not only possible but ubiquitous. Examples of such applications include mobile high-definition video streaming and mobile gaming. Like LTE, the UMB system was to be based upon Internet networking technologies running over a next-generation radio system, with peak rates of up to 280 Mbit/s. Its designers intended for the system to be more efficient and capable of providing more services than the technologies it was intended to replace. To provide compatibility with the systems it was intended to replace, UMB was to support handoffs with other technologies, including existing CDMA2000 1X and 1xEV-DO systems. UMB's use of OFDMA would have eliminated many of the disadvantages of the CDMA technology used by its predecessor, including the "breathing" phenomenon, the difficulty of adding capacity via microcells, the fixed bandwidth sizes that limit the total bandwidth available to handsets, and the near-complete control by one company of the required intellectual property. While capacity of existing Rel. 
B networks can be increased 1.5-fold by using the EVRC-B voice codec and QLIC handset interference cancellation, 1x Advanced and EV-DO Advanced offer up to a 4x network capacity increase using BTS interference cancellation (reverse link interference cancellation), multi-carrier links, and smart network management technologies.[14][15] In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead. This followed the announcement that most CDMA carriers chose to adopt either the WiMAX or LTE standard as their 4G technology. In fact, no carrier had announced plans to adopt UMB.[16] However, during the ongoing development process of the 4G technology, 3GPP added some functionalities to LTE, allowing it to become a sole upgrade path for all wireless networks.
https://en.wikipedia.org/wiki/EVDO
SVDO, or Simultaneous Voice and EV-DO data, is a technology that allows supported CDMA2000 EV-DO cellular phones to maintain an active 3G data session while the phone is on a call. Previously, the ability to use data while on a call was found only on mobile phones using GSM cellular networks. In 2011, Verizon released their first SVDO-supported phone, the HTC Thunderbolt. The following year, Sprint released their first SVDO-supported phone, the HTC Evo 4G LTE. Although both phones support LTE, which already allows for simultaneous voice and data, when the devices are only in 3G data coverage, they can use SVDO to be in a 3G data session while on a phone call.
https://en.wikipedia.org/wiki/SVDO
International Mobile Telecommunications-Advanced (IMT-Advanced Standard) are the requirements issued by the ITU Radiocommunication Sector (ITU-R) of the International Telecommunication Union (ITU) in 2008 for what is marketed as 4G (or in Turkey as 4.5G[1][2][3]) mobile phone and Internet access service. An IMT-Advanced system is expected to provide a comprehensive and secure all-IP based mobile broadband solution to laptop computer wireless modems, smartphones, and other mobile devices. Facilities such as ultra-broadband Internet access, voice over IP, gaming services, and streamed multimedia may be provided to users. IMT-Advanced is intended to accommodate the quality of service (QoS) and rate requirements set by further development of existing applications like mobile broadband access, Multimedia Messaging Service (MMS), video chat, and mobile TV, but also new services like high-definition television (HDTV). 4G may allow roaming with wireless local area networks and may interact with digital video broadcasting systems. It was meant to go beyond the International Mobile Telecommunications-2000 requirements, which specify mobile phone systems marketed as 3G. Specific requirements of the IMT-Advanced report included: The first set of 3GPP requirements on LTE Advanced was approved in June 2008.[10] A summary of the technologies that have been studied as the basis for LTE Advanced is included in a technical report.[11] While the ITU adopts requirements and recommendations for technologies that would be used for future communications, it does not actually perform the development work itself, and countries do not consider its documents binding standards. Other trade groups and standards bodies, such as the Institute of Electrical and Electronics Engineers, the WiMAX Forum, and 3GPP, also have a role. 
Physical layer transmission techniques expected to be used include:[12] Long Term Evolution (LTE) has a theoretical net bitrate maximum capacity of 100 Mbit/s in the downlink and 50 Mbit/s in the uplink if a 20 MHz channel is used. The capacity is higher if a MIMO (multiple-input and multiple-output) antenna array is used. The physical radio interface was at an early stage named "High-Speed Orthogonal Packet Access" and is now named E-UTRA. The CDMA spread-spectrum radio technology that was used in 3G systems and cdmaOne has been abandoned. It was replaced by orthogonal frequency-division multiple access and other frequency-division multiple access schemes. This is combined with MIMO antenna arrays, dynamic channel allocation, and channel-dependent scheduling. The first publicly available LTE services were branded "4G" and opened in Sweden's capital city Stockholm (an Ericsson system) and Norway's capital city Oslo (a Huawei system) on 14 December 2009. The user terminals were manufactured by Samsung.[13] All three major U.S. wireless carriers offer LTE services. In South Korea, SK Telecom and LG U+ have enabled access to LTE service since July 2011 for data devices, slated to go nationwide by 2012.[14] The Mobile WiMAX (IEEE 802.16e-2005) mobile wireless broadband access (MWBA) standard (marketed as WiBro in South Korea) is sometimes branded 4G, and offers peak data rates of 128 Mbit/s downlink and 56 Mbit/s uplink over 20 MHz wide channels.[citation needed] The first commercial mobile WiMAX service was opened by KT in Seoul, South Korea in June 2006.[15] In September 2008, Sprint Nextel marketed Mobile WiMAX as a "4G" network even though it did not fulfill the IMT-Advanced requirements.[16] In Russia, Belarus, and Nicaragua, WiMAX broadband Internet access is offered by the Russian company Scartel and is also branded 4G, Yota. 
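The peak rates quoted above imply a net spectral efficiency that is easy to compute: peak bit rate divided by channel bandwidth. A minimal sketch, using only the figures from the text:

```python
def spectral_efficiency(peak_mbps, bandwidth_mhz):
    """Net spectral efficiency (bit/s/Hz) implied by a quoted peak rate:
    Mbit/s over MHz cancels to bit/s/Hz."""
    return peak_mbps / bandwidth_mhz

# LTE in a 20 MHz channel (without MIMO gains):
print(spectral_efficiency(100, 20))  # downlink: 5.0 bit/s/Hz
print(spectral_efficiency(50, 20))   # uplink:   2.5 bit/s/Hz

# Mobile WiMAX peak quoted over a 20 MHz channel:
print(spectral_efficiency(128, 20))  # downlink: 6.4 bit/s/Hz
```

These are theoretical ceilings; as the text notes, MIMO multiplies the achievable rate by (up to) the number of spatial streams, and real-world rates fall well below the peaks.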
Ultra Mobile Broadband (UMB) was the brand name for a discontinued 4G project within the 3GPP2 standardization group to improve the CDMA2000 mobile phone standard for next-generation applications and requirements. In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead.[17] The objective was to achieve data speeds over 275 Mbit/s downstream and over 75 Mbit/s upstream. At an early stage, the Flash-OFDM system was expected to be further developed into a 4G standard. The iBurst technology, using High Capacity Spatial Division Multiple Access (HC-SDMA), was at an early stage considered as a 4G predecessor. It was incorporated by the Mobile Broadband Wireless Access (MBWA) working group into the IEEE 802.20 standard in 2008.[18] In October 2010, ITU-R Working Party 5D approved two industry-developed technologies.[19] On December 6, 2010, ITU noted that while current versions of LTE, WiMAX and other evolved 3G technologies do not fulfill IMT-Advanced requirements for 4G, some may use the term "4G" in an "undefined" fashion to represent forerunners to IMT-Advanced that show "a substantial level of improvement in performance and capabilities with respect to the initial third generation systems now deployed."[20] LTE Advanced (Long Term Evolution Advanced) was formally submitted by the 3GPP organization to ITU-T in the fall of 2009, and was released in 2011. The target of 3GPP LTE Advanced was to reach and surpass the ITU requirements.[21] LTE Advanced is an improvement on the existing LTE network. Release 10 of LTE is expected to achieve the LTE Advanced speeds. Release 8 in 2009 supported up to 300 Mbit/s download speeds, which were still short of the IMT-Advanced standards.[22] The WirelessMAN-Advanced evolution of IEEE 802.16e was published in May 2011 as standard IEEE 802.16m-2011. The relevant industry promoting the technology gave it the marketing name of WiMAX Release 2. 
It had an objective to fulfill the IMT-Advanced criteria.[23][24] The IMT-Advanced group formally approved this technology as meeting its criteria in October 2010.[25] In the second half of 2012, the 802.16m-2011 standard was rolled up into the 802.16-2012 standard, excluding the WirelessMAN-Advanced radio interface part of the 802.16m-2011 standard, which was moved to IEEE Std 802.16.1-2012. The following table shows a comparison of IMT-Advanced candidate systems as well as other competing technologies. Antenna and RF front end enhancements and minor protocol timer tweaks have helped deploy long-range P2P networks that compromise on radial coverage, throughput and/or spectral efficiency (310 km and 382 km). Notes: All speeds are theoretical maximums and will vary by a number of factors, including the use of external antennas, distance from the tower and the ground speed (e.g. communications on a train may be poorer than when standing still). Usually the bandwidth is shared between several terminals. The performance of each technology is determined by a number of constraints, including the spectral efficiency of the technology, the cell sizes used, and the amount of spectrum available. For more comparison tables, see bit rate progress trends, comparison of mobile phone standards, spectral efficiency comparison table and OFDM system comparison table.
https://en.wikipedia.org/wiki/IMT_Advanced
LTE Advanced, also named or recognized as LTE+, LTE-A or 4G+, is a 4G mobile cellular communication standard developed by 3GPP as a major enhancement of the Long Term Evolution (LTE) standard. Three technologies from the LTE-Advanced tool-kit – carrier aggregation, 4x4 MIMO and 256-QAM modulation in the downlink – if used together and with sufficient aggregated bandwidth, can deliver maximum peak downlink speeds approaching, or even exceeding, 1 Gbit/s. This is significantly more than the peak 300 Mbit/s rate offered by the preceding LTE standard.[1] Later developments have resulted in LTE Advanced Pro (or 4.9G), which increases bandwidth even further.[2] The first ever LTE Advanced network was deployed in 2013 by SK Telecom in South Korea.[3] In August 2019, the Global mobile Suppliers Association (GSA) reported that there were 304 commercially launched LTE-Advanced networks in 134 countries. Overall, 335 operators are investing in LTE-Advanced (in the form of tests, trials, deployments or commercial service provision) in 141 countries.[4] LTE Advanced is also named (indicated as) LTE+, LTE-A,[5] or (on Samsung Galaxy and Xiaomi smartphones) as 4G+. Such networks have also often been described as 'Gigabit LTE' networks, mirroring a term that is also used in the fixed broadband industry.[6] The mobile communication industry and standards organizations have therefore started work on 4G access technologies, such as LTE Advanced.[when?] At a workshop in April 2008 in China, 3GPP agreed the plans for work on Long Term Evolution (LTE).[7] A first set of specifications was approved in June 2008.[8] Besides the peak data rate of 1 Gb/s as defined by the ITU-R, it also targets faster switching between power states and improved performance at the cell edge. 
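The way carrier aggregation, 4x4 MIMO and 256-QAM combine toward a gigabit peak can be shown with back-of-envelope arithmetic. The baseline figure below (roughly 75 Mbit/s per spatial layer per 20 MHz carrier at 64-QAM) is a common approximation, not taken from the article, so treat this as an illustrative sketch rather than a rate from the specification:

```python
def peak_rate_mbps(carriers, layers, bits_per_symbol,
                   base_rate_per_layer=75.0, base_bits=6):
    """Idealized peak downlink rate: scale a single-layer 20 MHz,
    64-QAM (6 bits/symbol) baseline by modulation order, MIMO layers
    and aggregated carriers."""
    return carriers * layers * base_rate_per_layer * bits_per_symbol / base_bits

# Plain LTE: one 20 MHz carrier, 2x2 MIMO, 64-QAM
print(peak_rate_mbps(carriers=1, layers=2, bits_per_symbol=6))   # 150.0 Mbit/s class
# 'Gigabit LTE': three aggregated carriers, 4x4 MIMO, 256-QAM (8 bits/symbol)
print(peak_rate_mbps(carriers=3, layers=4, bits_per_symbol=8))   # 1200.0 Mbit/s
```

Each factor multiplies independently, which is why all three features must be used together, with sufficient aggregated bandwidth, to approach 1 Gbit/s.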
Detailed proposals are being studied within the working groups.[when?] The LTE+ format was first proposed by NTT DoCoMo of Japan and has been adopted as the international standard.[9] It was formally submitted as a candidate 4G to ITU-T in late 2009 as meeting the requirements of the IMT-Advanced standard, and was standardized by the 3rd Generation Partnership Project (3GPP) in March 2011 as 3GPP Release 10.[10] The work by 3GPP to define a 4G candidate radio interface technology started in Release 9 with the study phase for LTE-Advanced. Being described as 3.9G (beyond 3G but pre-4G), the first release of LTE did not meet the requirements for 4G (also called IMT-Advanced, as defined by the International Telecommunication Union), such as peak data rates up to 1 Gb/s. The ITU has invited the submission of candidate Radio Interface Technologies (RITs) following their requirements in a circular letter, 3GPP Technical Report (TR) 36.913, "Requirements for Further Advancements for E-UTRA (LTE-Advanced)."[11] These are based on ITU's requirements for 4G and on operators' own requirements for advanced LTE. Major technical considerations include the following: Likewise, 'WiMAX 2', 802.16m, has been approved by the ITU as part of the IMT-Advanced family. WiMAX 2 is designed to be backward compatible with WiMAX 1 devices. Most vendors now support conversion of 'pre-4G', pre-advanced versions, and some support software upgrades of base station equipment from 3G. The target of 3GPP LTE Advanced is to reach and surpass the ITU requirements. LTE Advanced should be compatible with first-release LTE equipment, and should share frequency bands with first-release LTE. In the feasibility study for LTE Advanced, 3GPP determined that LTE Advanced would meet the ITU-R requirements for 4G. 
The results of the study are published in 3GPP Technical Report (TR) 36.912.[12] One of the important LTE Advanced benefits is the ability to take advantage of advanced topology networks: optimized heterogeneous networks with a mix of macrocells and low-power nodes such as picocells, femtocells and new relay nodes. The next significant performance leap in wireless networks will come from making the most of topology, bringing the network closer to the user by adding many of these low-power nodes – LTE Advanced further improves capacity and coverage, and ensures user fairness. LTE Advanced also introduces multicarrier operation to be able to use ultra-wide bandwidth, up to 100 MHz of spectrum, supporting very high data rates. In the research phase, many proposals were studied as candidates for LTE Advanced (LTE-A) technologies. The proposals could roughly be categorized into:[13] Within the range of system development, LTE-Advanced and WiMAX 2 can use up to 8x8 MIMO and 128-QAM in the downlink direction. Example performance: with 100 MHz aggregated bandwidth, LTE-Advanced provides almost 3.3 Gbit/s peak download rates per sector of the base station under ideal conditions. Advanced network architectures combined with distributed and collaborative smart antenna technologies provide a multi-year road map of commercial enhancements. The 3GPP standards Release 12 added support for 256-QAM. A summary of a study carried out in 3GPP can be found in TR 36.912.[14] Original standardization work for LTE-Advanced was done as part of 3GPP Release 10, which was frozen in April 2011. Trials were based on pre-release equipment. Major vendors support software upgrades to later versions and ongoing improvements. In order to improve the quality of service for users in hotspots and on cell edges, heterogeneous networks (HetNets) are formed of a mixture of macro, pico and femto base stations serving corresponding-size areas. 
Frozen in December 2012, 3GPP Release 11[15] concentrates on better support of HetNets. Coordinated Multi-Point operation (CoMP) is a key feature of Release 11 in order to support such network structures. Whereas users located at a cell edge in homogeneous networks suffer from decreasing signal strength compounded by neighbor-cell interference, CoMP is designed to enable a neighboring cell to also transmit the same signal as the serving cell, enhancing quality of service on the perimeter of the serving cell. In-device Co-existence (IDC) is another topic addressed in Release 11. IDC features are designed to ameliorate disturbances within the user equipment caused between LTE/LTE-A and the various other radio subsystems such as Wi-Fi, Bluetooth, and the GPS receiver. Further enhancements for MIMO, such as the 4x4 configuration for the uplink, were standardized. The higher number of cells in a HetNet results in user equipment changing the serving cell more frequently when in motion. The ongoing work on LTE-Advanced[16] in Release 12, amongst other areas, concentrates on addressing issues that arise when users move through a HetNet, such as frequent handovers between cells. It also included use of 256-QAM. This list covers technology demonstrations and field trials up to the year 2014, paving the way for a wider commercial deployment of the VoLTE technology worldwide. From 2014 onwards, various further operators trialled and demonstrated the technology for future deployment on their respective networks. These are not covered here; instead, coverage of commercial deployments can be found in the section below. 
LTE Advanced Pro (LTE-A Pro, also known as 4.5G, 4.5G Pro, 4.9G, Pre-5G, 5G Project)[45][46][47][48] is a name for 3GPP Releases 13 and 14.[49][50] It is an evolution of the LTE Advanced (LTE-A) cellular standard supporting data rates in excess of 3 Gbit/s using 32-carrier aggregation.[2] It also introduces the concept of License Assisted Access, which allows sharing of licensed and unlicensed spectrum. Additionally, it incorporates several new technologies associated with 5G, such as 256-QAM, Massive MIMO, LTE-Unlicensed and LTE IoT,[51][52] which facilitated early migration of existing networks to enhancements promised with the full 5G standard.[53] Telstra in Australia deployed the very first LTE Advanced Pro network in January 2017.[54] LTE for UMTS – OFDMA and SC-FDMA Based Radio Access, ISBN 978-0-470-99401-6, Chapter 2.6: LTE Advanced for IMT-Advanced, pp. 19–21. Resources (white papers, technical papers, application notes)
https://en.wikipedia.org/wiki/LTE_Advanced
https://en.wikipedia.org/wiki/LTE_Advanced_Pro
Worldwide Interoperability for Microwave Access (WiMAX) is a family of wireless broadband communication standards based on the IEEE 802.16 set of standards, which provide physical layer (PHY) and media access control (MAC) options. The WiMAX Forum was formed in June 2001 to promote conformity and interoperability, including the definition of system profiles for commercial vendors.[1] The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL".[2] WiMAX was initially designed to provide 30 to 40 megabit-per-second data rates,[3] with the 2011 update providing up to 1 Gbit/s[3] for fixed stations. IEEE 802.16m, or WirelessMAN-Advanced, was a candidate for 4G, in competition with the LTE Advanced standard. WiMAX release 2.1, popularly branded as WiMAX 2+, is a backwards-compatible transition from previous WiMAX generations. It is compatible and interoperable with TD-LTE. Newer versions, still backward compatible, include WiMAX release 2.2 (2014) and WiMAX release 3 (2021, which adds interoperation with 5G NR). WiMAX refers to interoperable implementations of the IEEE 802.16 family of wireless-network standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 Wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification allows vendors to sell fixed or mobile products as WiMAX certified, thus ensuring a level of interoperability with other certified products, as long as they fit the same profile. The original IEEE 802.16 standard (now called "Fixed WiMAX") was published in 2001. WiMAX adopted some of its technology from WiBro, a service marketed in Korea.[4] Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis for future revisions such as 802.16m-2011. 
WiMAX was sometimes referred to as "Wi-Fi on steroids"[5] and can be used for a number of applications including broadband connections, cellular backhaul, hotspots, etc. It is similar to long-range Wi-Fi, but it can enable usage at much greater distances.[6] The scalable physical layer architecture, which allows the data rate to scale easily with available channel bandwidth, and the range of WiMAX make it suitable for the following potential applications: WiMAX can provide at-home or mobile Internet access across whole cities or countries. In many cases, this has resulted in competition in markets which typically only had access through an existing incumbent DSL (or similar) operator. Additionally, given the relatively low costs associated with the deployment of a WiMAX network (in comparison with 3G, HSDPA, xDSL, HFC or FTTx), it is now economically viable to provide last-mile broadband Internet access in remote locations. Mobile WiMAX was a replacement candidate for cellular phone technologies such as GSM and CDMA, or can be used as an overlay to increase capacity. Fixed WiMAX is also considered a wireless backhaul technology for 2G, 3G, and 4G networks in both developed and developing nations.[7][8] In North America, backhaul for urban operations is typically provided via one or more copper wireline connections, whereas remote cellular operations are sometimes backhauled via satellite. In other regions, urban and rural backhaul is usually provided by microwave links. (The exception to this is where the network is operated by an incumbent with ready access to the copper network.) WiMAX has more substantial backhaul bandwidth requirements than legacy cellular applications. Consequently, the use of wireless microwave backhaul is on the rise in North America, and existing microwave backhaul links in all regions are being upgraded.[9] Capacities of between 34 Mbit/s and 1 Gbit/s[10] are routinely being deployed with latencies in the order of 1 ms. 
In many cases, operators are aggregating sites using wireless technology and then presenting traffic on to fiber networks where convenient. WiMAX in this application competes with microwave radio, E-line and simple extension of the fiber network itself. WiMAX directly supports the technologies that make triple-play service offerings possible (such as quality of service and multicast). These are inherent to the WiMAX standard rather than being added on as carrier Ethernet is to Ethernet. On May 7, 2008, in the United States, Sprint Nextel, Google, Intel, Comcast, Bright House, and Time Warner announced a pooling of an average of 120 MHz of spectrum and merged with Clearwire to market the service. The new company hoped to benefit from combined services offerings and network resources as a springboard past its competitors. The cable companies were expected to provide media services to other partners while gaining access to the wireless network as a mobile virtual network operator to provide triple-play services. Some wireless industry analysts, such as Ken Dulaney and Todd Kort at Gartner, were skeptical how the deal would work out: although fixed-mobile convergence had been a recognized factor in the industry, prior attempts to form partnerships among wireless and cable companies had generally failed to lead to significant benefits for the participants. Other analysts at IDC favored the deal, pointing out that as wireless progresses to higher bandwidth, it inevitably competes more directly with cable, DSL and fiber, inspiring competitors into collaboration. Also, as wireless broadband networks grow denser and usage habits shift, the need for increased backhaul and media services accelerates; therefore, the opportunity to leverage high-bandwidth assets was expected to increase. The Aeronautical Mobile Airport Communication System (AeroMACS) is a wireless broadband network for the airport surface intended to link the control tower, aircraft, and fixed assets. 
In 2007, AeroMACS obtained a worldwide frequency allocation in the 5 GHz aviation band. As of 2018, there were 25 AeroMACS deployments in 8 countries, with at least another 25 deployments planned.[11] The IEEE 802.16REVd and IEEE 802.16e standards support both time-division duplexing and frequency-division duplexing, as well as half-duplex FDD, which allows for a low-cost implementation. Devices that provide connectivity to a WiMAX network are known as subscriber stations (SS). Portable units include handsets (similar to cellular smartphones); PC peripherals (PC Cards or USB dongles); and embedded devices in laptops, which are now available for Wi-Fi services. In addition, there is much emphasis by operators on consumer electronics devices such as gaming consoles, MP3 players and similar devices. WiMAX is more similar to Wi-Fi than to other 3G cellular technologies. The WiMAX Forum website provides a list of certified devices. However, this is not a complete list of devices available, as certified modules are embedded into laptops, MIDs (Mobile Internet Devices), and other private-labeled devices. WiMAX gateway devices are available as both indoor and outdoor versions from manufacturers including Vecima Networks, Alvarion, Airspan, ZyXEL, Huawei, and Motorola. The list of WiMAX networks and the WiMAX Forum[12] provide more links to specific vendors, products and installations. Many of the WiMAX gateways that are offered by manufacturers such as these are stand-alone self-install indoor units. Such devices typically sit near the customer's window with the best signal, and provide: Indoor gateways are convenient, but radio losses mean that the subscriber may need to be significantly closer to the WiMAX base station than with professionally installed external units. Outdoor units are roughly the size of a laptop PC, and their installation is comparable to the installation of a residential satellite dish. 
A higher-gain directional outdoor unit will generally result in greatly increased range and throughput, but with the obvious loss of practical mobility of the unit. USB can provide connectivity to a WiMAX network through a dongle. Generally, these devices are connected to a notebook or netbook computer. Dongles typically have omnidirectional antennas which are of lower gain compared to other devices. As such, these devices are best used in areas of good coverage. HTC announced the first WiMAX-enabled mobile phone, the Max 4G, on November 12, 2008.[13] The device was only available to certain markets in Russia on the Yota network until 2010.[14] HTC and Sprint Nextel released the second WiMAX-enabled mobile phone, the HTC Evo 4G, on March 23, 2010, at the CTIA conference in Las Vegas. The device, made available on June 4, 2010,[15] is capable of both EV-DO (3G) and WiMAX (pre-4G) as well as simultaneous data and voice sessions. Sprint Nextel announced at CES 2012 that it would no longer offer devices using WiMAX technology due to financial circumstances; instead, along with its network partner Clearwire, Sprint Nextel rolled out a 4G network based on LTE. WiMAX is based upon IEEE 802.16e-2005,[16] approved in December 2005. It is a supplement to IEEE Std 802.16-2004,[17] and so the actual standard is 802.16-2004 as amended by 802.16e-2005; thus, these specifications need to be considered together. IEEE 802.16e-2005 improves upon IEEE 802.16-2004 by: SOFDMA (used in 802.16e-2005) and OFDM256 (802.16d) are not compatible, so equipment will have to be replaced if an operator is to move to the later standard (e.g., Fixed WiMAX to Mobile WiMAX). The original version of the standard on which WiMAX is based (IEEE 802.16) specified a physical layer operating in the 10 to 66 GHz range. 802.16a, updated in 2004 to 802.16-2004, added specifications for the 2 to 11 GHz range. 
802.16-2004 was updated by 802.16e-2005 in 2005 and uses scalable orthogonal frequency-division multiple access[18] (SOFDMA), as opposed to the fixed orthogonal frequency-division multiplexing (OFDM) version with 256 sub-carriers (of which 200 are used) in 802.16d. More advanced versions, including 802.16e, also bring multiple-antenna support through MIMO (see WiMAX MIMO). This brings potential benefits in terms of coverage, self-installation, power consumption, frequency re-use and bandwidth efficiency. Among the pre-4G techniques WiMAX, LTE and HSPA+, WiMAX is the most energy-efficient.[19] The WiMAX MAC uses a scheduling algorithm for which the subscriber station needs to compete only once for initial entry into the network. After network entry is allowed, the subscriber station is allocated an access slot by the base station. The time slot can enlarge and contract, but remains assigned to the subscriber station, which means that other subscribers cannot use it. In addition to being stable under overload and over-subscription, the scheduling algorithm can also be more bandwidth-efficient. The scheduling algorithm also allows the base station to control QoS parameters by balancing the time-slot assignments among the application needs of the subscriber station. As a standard intended to satisfy the needs of next-generation data networks (4G), WiMAX is distinguished by its dynamic burst algorithm modulation, adaptive to the physical environment the RF signal travels through. Modulation is chosen to be more spectrally efficient (more bits per OFDM/SOFDMA symbol). That is, when bursts have a high signal strength and a high carrier-to-noise-plus-interference ratio (CINR), they can be more easily decoded using digital signal processing (DSP). 
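The adaptive-modulation behaviour described above, covering both the high-CINR and the degraded regime, can be sketched as a simple mapping from measured CINR to a burst profile. This is an illustrative sketch only: the thresholds, profile names and `select_burst_profile` function below are hypothetical, not values taken from the 802.16 specification.

```python
def select_burst_profile(cinr_db: float) -> tuple[str, int]:
    """Map a measured CINR (in dB) to a (modulation, bits-per-symbol) burst profile.

    Higher CINR permits denser constellations (more bits per OFDM/SOFDMA
    symbol); low CINR falls back to the most robust, lowest-rate profile.
    Thresholds are illustrative, not from the standard.
    """
    # (minimum CINR in dB, modulation name, bits carried per symbol)
    profiles = [
        (21.0, "64-QAM", 6),
        (16.0, "16-QAM", 4),
        (9.0,  "QPSK",   2),
    ]
    for threshold, name, bits in profiles:
        if cinr_db >= threshold:
            return name, bits
    # Cell-edge fallback: most robust profile, fewest bits per symbol.
    return "QPSK (most robust)", 2

print(select_burst_profile(24.0))  # strong signal: dense constellation
print(select_burst_profile(5.0))   # weak signal: robust fallback profile
```

A real base station makes this decision per burst and per subscriber, but the principle is the same: spectral efficiency is traded against robustness as link quality changes.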
In contrast, when operating in less favorable environments for RF communication, the system automatically steps down to a more robust mode (burst profile), which means fewer bits per OFDM/SOFDMA symbol; the advantage is that power per bit is higher, so simpler, more accurate signal processing can be performed. Burst profiles are assigned dynamically, in inverse relation to signal attenuation, meaning that throughput between clients and the base station is determined largely by distance. Maximum distance is achieved by the use of the most robust burst setting; that is, the profile with the largest MAC-frame allocation trade-off, requiring more symbols (a larger portion of the MAC frame) to be allocated to transmitting a given amount of data than if the client were closer to the base station. The client's MAC frame and their individual burst profiles are defined, as well as the specific time allocation. However, even though this is done automatically, practical deployment should avoid high-interference and multipath environments, because too much interference causes the network to function poorly and can also misrepresent its capability. The system is complex to deploy, as it is necessary to track not only the signal strength and CINR (as in systems like GSM) but also how the available frequencies will be dynamically assigned (resulting in dynamic changes to the available bandwidth). This could lead to cluttered frequencies with slow response times or lost frames. As a result, the system has to be initially designed in consensus with the base station product team to accurately project frequency use, interference, and general product functionality. The Asia-Pacific region has surpassed the North American region in terms of 4G broadband wireless subscribers. 
There were around 1.7 million pre-WiMAX and WiMAX customers in Asia – 29% of the overall market – compared to 1.4 million in the US and Canada.[20] The WiMAX Forum has proposed an architecture that defines how a WiMAX network can be connected with an IP-based core network, which is typically chosen by operators that serve as Internet service providers (ISPs); nevertheless, the WiMAX base station also provides seamless integration capabilities with other types of architectures, such as packet-switched mobile networks. The WiMAX Forum proposal defines a number of components, plus some of the interconnections (or reference points) between them, labeled R1 to R5 and R8: The functional architecture can be designed into various hardware configurations rather than fixed configurations. For example, the architecture is flexible enough to allow remote/mobile stations of varying scale and functionality, and base stations of varying size – e.g. femto, pico, and mini BS as well as macros. WiMAX 2.1 and above can be integrated with an LTE TDD network and perform handovers from/to LTE TDD.[22] WiMAX 3 expands the integration to 5G NR.[23] There is no uniform global licensed spectrum for WiMAX; however, the WiMAX Forum published three licensed spectrum profiles: 2.3 GHz, 2.5 GHz and 3.5 GHz, in an effort to drive standardisation and decrease cost. In the US, the biggest segment available was around 2.5 GHz,[24] and is already assigned, primarily to Sprint Nextel and Clearwire. Elsewhere in the world, the most likely bands used will be the Forum-approved ones, with 2.3 GHz probably being most important in Asia. Some countries in Asia, like India and Indonesia, will use a mix of 2.5 GHz, 3.3 GHz and other frequencies. Pakistan's Wateen Telecom uses 3.5 GHz. Analog TV bands (700 MHz) may become available, but await the completion of the digital television transition, and other uses have been suggested for that spectrum. 
In the USA, the FCC auction for this spectrum began in January 2008 and, as a result, the biggest share of the spectrum went to Verizon Wireless and the next biggest to AT&T.[25] Both of these companies stated their intention of supporting LTE, a technology which competes directly with WiMAX. EU commissioner Viviane Reding has suggested re-allocation of 500–800 MHz spectrum for wireless communication, including WiMAX.[26] WiMAX profiles define channel size, TDD/FDD and other necessary attributes in order to have interoperating products. The current fixed profiles are defined for both TDD and FDD. At this point, all of the mobile profiles are TDD only. The fixed profiles have channel sizes of 3.5 MHz, 5 MHz, 7 MHz and 10 MHz. The mobile profiles are 5 MHz, 8.75 MHz and 10 MHz. (Note: the 802.16 standard allows a far wider variety of channels, but only the above subsets are supported as WiMAX profiles.) Since October 2007, the Radiocommunication Sector of the International Telecommunication Union (ITU-R) has decided to include WiMAX technology in the IMT-2000 set of standards.[27] This enables spectrum owners (specifically in the 2.5–2.69 GHz band at this stage) to use WiMAX equipment in any country that recognizes IMT-2000. WiMAX cannot deliver 70 Mbit/s over 50 km (31 mi). Like all wireless technologies, WiMAX can operate at higher bitrates or over longer distances, but not both. Operating at the maximum range of 50 km (31 mi) increases the bit error rate and thus results in a much lower bitrate. Conversely, reducing the range (to under 1 km) allows a device to operate at higher bitrates. 
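The range/bitrate trade-off follows directly from path loss: signal power falls off with distance, and a weaker received signal forces a more robust, lower-rate burst profile. As a back-of-the-envelope illustration (free-space path loss only, ignoring terrain, fading and antennas; the 2500 MHz frequency and distances are chosen for illustration):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Path loss grows by 20 dB per tenfold increase in distance.
for d_km in (1, 10, 50):
    print(f"{d_km:>2} km @ 2500 MHz: {fspl_db(d_km, 2500):.1f} dB path loss")
```

The signal at 50 km arrives roughly 34 dB weaker than at 1 km, so the link must drop to the most robust modulation, which carries far fewer bits per symbol and yields a correspondingly lower bitrate.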
A citywide deployment of WiMAX in Perth, Australia demonstrated that customers at the cell edge with indoor customer-premises equipment (CPE) typically obtain speeds of around 1–4 Mbit/s, with users closer to the cell site obtaining speeds of up to 30 Mbit/s.[citation needed] Like all wireless systems, available bandwidth is shared between users in a given radio sector, so performance could deteriorate in the case of many active users in a single sector. However, with adequate capacity planning and the use of WiMAX's QoS, a minimum guaranteed throughput for each subscriber can be put in place. In practice, most users will have a range of 4–8 Mbit/s services, and additional radio cards will be added to the base station to increase the number of users that may be served as required. A number of specialized companies produced baseband ICs and integrated RFICs for WiMAX subscriber stations in the 2.3, 2.5 and 3.5 GHz bands (refer to 'Spectrum allocation' above). These companies include, but are not limited to, Beceem, Sequans, and PicoChip. Comparisons and confusion between WiMAX and Wi-Fi are frequent, because both are related to wireless connectivity and Internet access.[28] Although Wi-Fi and WiMAX are designed for different situations, they are complementary. WiMAX network operators typically provide a WiMAX subscriber unit that connects to the metropolitan WiMAX network and provides Wi-Fi connectivity within the home or business for computers and smartphones. This enables the user to place the WiMAX subscriber unit in the best reception area, such as a window, and have data access throughout their property. The TTCN-3 test specification language is used to specify conformance tests for WiMAX implementations. 
The WiMAX test suite is being developed by a Specialist Task Force at ETSI (STF 252).[29] The WiMAX Forum is a non-profit organization formed to promote the adoption of WiMAX-compatible products and services.[30] A major role for the organization is to certify the interoperability of WiMAX products.[31] Those that pass conformance and interoperability testing achieve the "WiMAX Forum Certified" designation, and can display this mark on their products and marketing materials. Some vendors claim that their equipment is "WiMAX-ready", "WiMAX-compliant", or "pre-WiMAX" if it is not officially WiMAX Forum Certified. Another role of the WiMAX Forum is to promote the spread of knowledge about WiMAX. In order to do so, it has a certified training program that is currently offered in English and French. It also offers a series of member events and endorses some industry events. WiSOA was the first global organization composed exclusively of owners of WiMAX spectrum with plans to deploy WiMAX technology in those bands. WiSOA focused on the regulation, commercialisation, and deployment of WiMAX spectrum in the 2.3–2.5 GHz and the 3.4–3.5 GHz ranges. WiSOA merged with the Wireless Broadband Alliance in April 2008.[32] In 2011, the Telecommunications Industry Association released three technical standards (TIA-1164, TIA-1143, and TIA-1140) that cover the air interface and core networking aspects of WiMAX High-Rate Packet Data (HRPD) systems using a Mobile Station/Access Terminal (MS/AT) with a single transmitter.[33] Within the marketplace, WiMAX's main competition came from existing, widely deployed wireless systems such as the Universal Mobile Telecommunications System (UMTS), CDMA2000, existing Wi-Fi, mesh networking and eventually 4G (LTE). In the future, competition will be from the evolution of the major cellular standards to 4G,[needs update] high-bandwidth, low-latency, all-IP networks with voice services built on top. 
The worldwide move to 4G for GSM/UMTS and AMPS/TIA (including CDMA2000) is the 3GPP Long Term Evolution (LTE) effort. The LTE standard was finalized in December 2008, with the first commercial deployment of LTE carried out by TeliaSonera in Oslo and Stockholm in December 2009. Henceforth, LTE saw rapidly increasing adoption by mobile carriers around the world. Although WiMAX was much earlier to market than LTE, LTE was an upgrade and extension of previous 3G (GSM and CDMA) standards, whereas WiMAX was a relatively new and different technology without a large user base. Ultimately, LTE won the war to become the 4G standard because mobile operators such as Verizon, AT&T, Vodafone, NTT, and Deutsche Telekom chose to extend their investments in know-how, equipment and spectrum from 3G to LTE rather than adopt a new technology standard. It would never have been cost-effective for WiMAX network operators to compete against fixed-line broadband networks based on 4G technologies. By 2009, most mobile operators had begun to realize that mobile connectivity (not fixed 802.16e) was the future, and that LTE was going to become the new worldwide mobile connectivity standard, so they chose to wait for LTE to develop rather than switch from 3G to WiMAX. WiMAX was a superior technology in terms of speed (roughly 25 Mbit/s) for a few years (2005–2009), and it pioneered some new technologies such as MIMO. But the mobile version of WiMAX (802.16m), intended to compete with GSM and CDMA technologies, was too little, too late in getting established, and by the time the LTE standard was finalized in December 2008, the fate of WiMAX as a mobile solution was sealed; it was clear that LTE, not WiMAX, would become the world's new 4G standard. The largest wireless broadband partner using WiMAX, Clearwire, announced in 2008 that it would begin overlaying its existing WiMAX network with LTE technology, which was necessary for Clearwire to obtain the investments it needed to stay in business. 
In some areas of the world, the wide availability of UMTS and a general desire for standardization meant spectrum was not allocated for WiMAX: in July 2005, the EU-wide frequency allocation for WiMAX was blocked.[citation needed] Early WirelessMAN standards, the European standard HiperMAN and the Korean standard WiBro, were harmonized as part of WiMAX and are no longer seen as competition but as complementary.[citation needed] All networks now being deployed in South Korea, the home of the WiBro standard, are now WiMAX.[citation needed] The IEEE 802.16m-2011 standard[34] was the core technology for WiMAX 2. The IEEE 802.16m standard was submitted to the ITU for IMT-Advanced standardization.[35] IEEE 802.16m is one of the major candidates for IMT-Advanced technologies by the ITU. Among many enhancements, IEEE 802.16m systems can provide four times faster[clarification needed] data speeds than WiMAX Release 1. WiMAX Release 2 provided backward compatibility with Release 1. WiMAX operators could migrate from Release 1 to Release 2 by upgrading channel cards or software. The WiMAX 2 Collaboration Initiative was formed to help this transition.[36] It was anticipated that, using 4×2 MIMO in the urban microcell scenario with only a single 20 MHz TDD channel available system-wide, the 802.16m system could support both 120 Mbit/s downlink and 60 Mbit/s uplink per site simultaneously. It was expected that WiMAX Release 2 would be available commercially in the 2011–2012 timeframe.[37] WiMAX Release 2.1, released in the early 2010s, broke compatibility with earlier WiMAX networks.[citation needed] A significant number of operators had migrated to the new standard, which is compatible with TD-LTE, by the end of the 2010s. A field test conducted in 2007 by SUIRG (Satellite Users Interference Reduction Group) with support from the U.S. 
Navy, the Global VSAT Forum, and several member organizations yielded results showing interference at 12 km when using the same channels for both the WiMAX systems and satellites in C-band.[38] As of October 2010, the WiMAX Forum claimed over 592 WiMAX (fixed and mobile) networks deployed in over 148 countries, covering over 621 million people.[39] By February 2011, the WiMAX Forum cited coverage of over 823 million people, and estimated coverage of over 1 billion people by the end of the year. Note that coverage means the offer of availability of WiMAX service to populations within various geographies, not the number of WiMAX subscribers.[40] South Korea launched a WiMAX network in the second quarter of 2006. Spain delivered full coverage in two cities, Seville and Málaga, in 2008, reaching 20,000 portable units. By the end of 2008 there were 350,000 WiMAX subscribers in Korea.[41] Worldwide, by early 2010 WiMAX seemed to be ramping quickly relative to other available technologies, though access in North America lagged.[42] Yota, the largest WiMAX network operator in the world in 4Q 2009,[43][44] announced in May 2010 that it would move new network deployments to LTE and, subsequently, change its existing networks as well.[citation needed] A study published in September 2010 by Blycroft Publishing estimated 800 management contracts from 364 WiMAX operations worldwide offering active services (launched or still trading, as opposed to just licensed and still to launch).[45] The WiMAX Forum announced on August 16, 2011, that there were more than 20 million WiMAX subscribers worldwide, the high-water mark for this technology (http://wimaxforum.org/Page/News/PR/20110816_WiMAX_Subscriptions_Surpass_20_Million_Globally).
Worldwide Interoperability for Microwave Access (WiMAX) is a family of wireless broadband communication standards based on the IEEE 802.16 set of standards, which provide physical layer (PHY) and media access control (MAC) options. The WiMAX Forum was formed in June 2001 to promote conformity and interoperability, including the definition of system profiles for commercial vendors.[1] The forum describes WiMAX as "a standards-based technology enabling the delivery of last mile wireless broadband access as an alternative to cable and DSL".[2] WiMAX was initially designed to provide 30 to 40 megabit-per-second data rates,[3] with the 2011 update providing up to 1 Gbit/s[3] for fixed stations. IEEE 802.16m, or WirelessMAN-Advanced, was a candidate for 4G, in competition with the LTE Advanced standard. WiMAX release 2.1, popularly branded as WiMAX 2+, is a backwards-compatible transition from previous WiMAX generations. It is compatible and interoperable with TD-LTE. Newer versions, still backward compatible, include WiMAX release 2.2 (2014) and WiMAX release 3 (2021, which adds interoperation with 5G NR). WiMAX refers to interoperable implementations of the IEEE 802.16 family of wireless-network standards ratified by the WiMAX Forum. (Similarly, Wi-Fi refers to interoperable implementations of the IEEE 802.11 wireless LAN standards certified by the Wi-Fi Alliance.) WiMAX Forum certification allows vendors to sell fixed or mobile products as WiMAX certified, thus ensuring a level of interoperability with other certified products, as long as they fit the same profile. The original IEEE 802.16 standard (now called "Fixed WiMAX") was published in 2001. WiMAX adopted some of its technology from WiBro, a service marketed in Korea.[4] Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis for future revisions such as 802.16m-2011. 
WiMAX was sometimes referred to as "Wi-Fi on steroids"[5] and can be used for a number of applications, including broadband connections, cellular backhaul, hotspots, etc. It is similar to long-range Wi-Fi, but it can enable usage at much greater distances.[6] The scalable physical-layer architecture, which allows the data rate to scale easily with available channel bandwidth, and the range of WiMAX make it suitable for the following potential applications: WiMAX can provide at-home or mobile Internet access across whole cities or countries. In many cases, this has resulted in competition in markets which typically had access only through an existing incumbent DSL (or similar) operator. Additionally, given the relatively low costs associated with the deployment of a WiMAX network (in comparison with 3G, HSDPA, xDSL, HFC or FTTx), it is now economically viable to provide last-mile broadband Internet access in remote locations. Mobile WiMAX was a replacement candidate for cellular phone technologies such as GSM and CDMA, or can be used as an overlay to increase capacity. Fixed WiMAX is also considered as a wireless backhaul technology for 2G, 3G, and 4G networks in both developed and developing nations.[7][8] In North America, backhaul for urban operations is typically provided via one or more copper wireline connections, whereas remote cellular operations are sometimes backhauled via satellite. In other regions, urban and rural backhaul is usually provided by microwave links. (The exception to this is where the network is operated by an incumbent with ready access to the copper network.) WiMAX has more substantial backhaul bandwidth requirements than legacy cellular applications. Consequently, the use of wireless microwave backhaul is on the rise in North America, and existing microwave backhaul links in all regions are being upgraded.[9] Capacities of between 34 Mbit/s and 1 Gbit/s[10] are routinely being deployed with latencies on the order of 1 ms. 
In many cases, operators are aggregating sites using wireless technology and then presenting traffic on to fiber networks where convenient. WiMAX in this application competes withmicrowave radio,E-lineand simple extension of the fiber network itself. WiMAX directly supports the technologies that maketriple-playservice offerings possible (such asquality of serviceandmulticast). These are inherent to the WiMAX standard rather than being added on ascarrier Ethernetis toEthernet. On May 7, 2008, in the United States,Sprint Nextel,Google,Intel,Comcast,Bright House, andTime Warnerannounced a pooling of an average of 120 MHz of spectrum and merged withClearwireto market the service. The new company hoped to benefit from combined services offerings and network resources as a springboard past its competitors. The cable companies were expected to provide media services to other partners while gaining access to the wireless network as aMobile virtual network operatorto provide triple-play services. Some wireless industry analysts, such as Ken Dulaney and Todd Kort at Gartner, were skeptical how the deal would work out: Although fixed-mobile convergence had been a recognized factor in the industry, prior attempts to form partnerships among wireless and cable companies had generally failed to lead to significant benefits for the participants. Other analysts at IDC favored the deal, pointing out that as wireless progresses to higher bandwidth, it inevitably competes more directly with cable, DSL and fiber, inspiring competitors into collaboration. Also, as wireless broadband networks grow denser and usage habits shift, the need for increased backhaul and media services accelerate, therefore the opportunity to leverage high bandwidth assets was expected to increase. The Aeronautical Mobile Airport Communication System (AeroMACS) is a wireless broadband network for the airport surface intended to link the control tower, aircraft, and fixed assets. 
In 2007, AeroMACS obtained a worldwide frequency allocation in the 5 GHz aviation band. As of 2018, there were 25 AeroMACS deployments in 8 countries, with at least another 25 deployments planned.[11] IEEE 802.16REVd and IEEE 802.16e standards support bothtime-division duplexingandfrequency-division duplexingas well as a half duplex FDD, that allows for a low cost implementation. Devices that provide connectivity to a WiMAX network are known assubscriber stations(SS). Portable units include handsets (similar to cellularsmartphones); PC peripherals (PC Cards or USB dongles); and embedded devices in laptops, which are now available for Wi-Fi services. In addition, there is much emphasis by operators on consumer electronics devices such as Gaming consoles, MP3 players and similar devices. WiMAX is more similar to Wi-Fi than to other3Gcellular technologies. The WiMAX Forum website provides a list of certified devices. However, this is not a complete list of devices available as certified modules are embedded into laptops, MIDs (Mobile Internet devices), and other private labeled devices. WiMAX gateway devices are available as both indoor and outdoor versions from manufacturers includingVecima Networks,Alvarion,Airspan,ZyXEL,Huawei, andMotorola. Thelist of WiMAX networksand WiMAX Forum[12]provide more links to specific vendors, products and installations. Many of the WiMAX gateways that are offered by manufactures such as these are stand-alone self-install indoor units. Such devices typically sit near the customer's window with the best signal, and provide: Indoor gateways are convenient, but radio losses mean that the subscriber may need to be significantly closer to the WiMAX base station than with professionally installed external units. Outdoor units are roughly the size of a laptop PC, and their installation is comparable to the installation of a residentialsatellite dish. 
A higher-gaindirectional outdoor unit will generally result in greatly increased range and throughput but with the obvious loss of practical mobility of the unit. USBcan provide connectivity to a WiMAX network through adongle. Generally, these devices are connected to a notebook or net book computer. Dongles typically have omnidirectional antennas which are of lower gain compared to other devices. As such, these devices are best used in areas of good coverage. HTC announced the first WiMAX enabledmobile phone, theMax 4G, on November 12, 2008.[13]The device was only available to certain markets in Russia on theYotanetwork until 2010.[14] HTC andSprint Nextelreleased the second WiMAX enabled mobile phone, theHTC Evo 4G, March 23, 2010 at the CTIA conference in Las Vegas. The device, made available on June 4, 2010,[15]is capable of both EV-DO(3G) and WiMAX(pre-4G) as well as simultaneous data & voice sessions. Sprint Nextel announced at CES 2012 that it will no longer be offering devices using the WiMAX technology due to financial circumstances, instead, along with its network partnerClearwire, Sprint Nextel rolled out a 4G network having decided to shift and utilizeLTE4G technology instead. WiMAX is based uponIEEE802.16e-2005,[16]approved in December 2005. It is a supplement to the IEEE Std 802.16-2004,[17]and so the actual standard is 802.16-2004 as amended by 802.16e-2005. Thus, these specifications need to be considered together. IEEE 802.16e-2005 improves upon IEEE 802.16-2004 by: SOFDMA (used in 802.16e-2005) and OFDM256 (802.16d) are not compatible thus equipment will have to be replaced if an operator is to move to the later standard (e.g., Fixed WiMAX to Mobile WiMAX). The original version of the standard on which WiMAX is based (IEEE 802.16) specified a physical layer operating in the 10 to 66 GHz range. 802.16a, updated in 2004 to 802.16-2004, added specifications for the 2 to 11 GHz range. 
802.16-2004 was updated by 802.16e-2005 in 2005 and uses scalableorthogonal frequency-division multiple access[18](SOFDMA), as opposed to the fixedorthogonal frequency-division multiplexing(OFDM) version with 256 sub-carriers (of which 200 are used) in 802.16d. More advanced versions, including 802.16e, also bring multiple antenna support throughMIMO. (SeeWiMAX MIMO) This brings potential benefits in terms of coverage, self installation, power consumption, frequency re-use and bandwidth efficiency. WiMax is the most energy-efficient pre-4G technique amongLTEandHSPA+.[19] The WiMAX MAC uses ascheduling algorithmfor which the subscriber station needs to compete only once for initial entry into the network. After network entry is allowed, the subscriber station is allocated an access slot by the base station. The time slot can enlarge and contract, but remains assigned to the subscriber station, which means that other subscribers cannot use it. In addition to being stable under overload and over-subscription, the scheduling algorithm can also be morebandwidthefficient. The scheduling algorithm also allows the base station to control QoS parameters by balancing the time-slot assignments among the application needs of the subscriber station. As a standard intended to satisfy needs of next-generation data networks (4G), WiMAX is distinguished by its dynamic burst algorithm modulation adaptive to the physical environment the RF signal travels through. Modulation is chosen to be more spectrally efficient (more bits perOFDM/SOFDMAsymbol). That is, when the bursts have a highsignal strengthand a highcarrier to noiseplus interference ratio (CINR), they can be more easily decoded usingdigital signal processing(DSP). 
In contrast, operating in less favorable environments for RF communication, the system automatically steps down to a more robust mode (burst profile) which means fewer bits per OFDM/SOFDMA symbol; with the advantage that power per bit is higher and therefore simpler accurate signal processing can be performed. Burst profiles are used inverse (algorithmically dynamic) to low signal attenuation; meaning throughput between clients and the base station is determined largely by distance. Maximum distance is achieved by the use of the most robust burst setting; that is, the profile with the largest MAC frame allocation trade-off requiring more symbols (a larger portion of the MAC frame) to be allocated in transmitting a given amount of data than if the client were closer to the base station. The client's MAC frame and their individual burst profiles are defined as well as the specific time allocation. However, even if this is done automatically then the practical deployment should avoid high interference and multipath environments. The reason for which is obviously that too much interference causes the network to function poorly and can also misrepresent the capability of the network. The system is complex to deploy as it is necessary to track not only the signal strength and CINR (as in systems likeGSM) but also how the available frequencies will be dynamically assigned (resulting in dynamic changes to the available bandwidth.) This could lead to cluttered frequencies with slow response times or lost frames. As a result, the system has to be initially designed in consensus with the base station product team to accurately project frequency use, interference, and general product functionality. The Asia-Pacific region has surpassed the North American region in terms of 4G broadband wireless subscribers. 
There were around 1.7 million pre-WiMAX and WiMAX customers in Asia – 29% of the overall market – compared to 1.4 million in the US and Canada.[20] The WiMAX Forum has proposed an architecture that defines how a WiMAX network can be connected with an IP based core network, which is typically chosen by operators that serve as Internet Service Providers (ISP); Nevertheless, the WiMAX BS provide seamless integration capabilities with other types of architectures as with packet switched Mobile Networks. The WiMAX forum proposal defines a number of components, plus some of the interconnections (or reference points) between these, labeled R1 to R5 and R8: The functional architecture can be designed into various hardware configurations rather than fixed configurations. For example, the architecture is flexible enough to allow remote/mobile stations of varying scale and functionality and Base Stations of varying size – e.g. femto, pico, and mini BS as well as macros. WiMAX 2.1 and above can be integrated with a LTE TDD network and perform handovers from/to LTE TDD.[22]WiMAX 3 expands the integration to5G NR.[23] There is no uniform global licensed spectrum for WiMAX, however the WiMAX Forum published three licensed spectrum profiles: 2.3 GHz, 2.5 GHz and 3.5 GHz, in an effort to drive standardisation and decrease cost. In the US, the biggest segment available was around 2.5 GHz,[24]and is already assigned, primarily toSprint NextelandClearwire. Elsewhere in the world, the most-likely bands used will be the Forum approved ones, with 2.3 GHz probably being most important in Asia. Some countries in Asia likeIndiaandIndonesiawill use a mix of 2.5 GHz, 3.3 GHz and other frequencies.Pakistan'sWateen Telecomuses 3.5 GHz. Analog TV bands (700 MHz) may become available, but await the completedigital television transition, and other uses have been suggested for that spectrum. 
In the USA, the FCC auction for this spectrum began in January 2008 and, as a result, the biggest share of the spectrum went to Verizon Wireless and the next biggest to AT&T.[25] Both of these companies stated their intention of supporting LTE, a technology which competes directly with WiMAX. EU commissioner Viviane Reding has suggested re-allocation of 500–800 MHz spectrum for wireless communication, including WiMAX.[26] WiMAX profiles define channel size, TDD/FDD and other necessary attributes in order to have interoperating products. The current fixed profiles are defined for both TDD and FDD. At this point, all of the mobile profiles are TDD only. The fixed profiles have channel sizes of 3.5 MHz, 5 MHz, 7 MHz and 10 MHz. The mobile profiles are 5 MHz, 8.75 MHz and 10 MHz. (Note: the 802.16 standard allows a far wider variety of channels, but only the above subsets are supported as WiMAX profiles.) Since October 2007, the Radiocommunication Sector of the International Telecommunication Union (ITU-R) has decided to include WiMAX technology in the IMT-2000 set of standards.[27] This enables spectrum owners (specifically in the 2.5–2.69 GHz band at this stage) to use WiMAX equipment in any country that recognizes IMT-2000. WiMAX cannot deliver 70 Mbit/s over 50 km (31 mi). Like all wireless technologies, WiMAX can operate at higher bitrates or over longer distances, but not both. Operating at the maximum range of 50 km (31 mi) increases the bit error rate and thus results in a much lower bitrate. Conversely, reducing the range (to under 1 km) allows a device to operate at higher bitrates.
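This rate-versus-range trade-off can be illustrated with a toy link-budget sketch. The transmit power, noise floor, burst-profile names and SNR thresholds below are illustrative assumptions, not actual 802.16 parameters; only the free-space path loss formula is standard:

```python
import math

def fspl_db(distance_km, freq_mhz=2500):
    """Free-space path loss in dB (standard formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def best_profile(distance_km, tx_power_dbm=43, noise_floor_dbm=-100):
    """Pick the highest-order burst profile whose SNR threshold is met.

    The thresholds and bits-per-symbol figures are illustrative, not the
    values from the 802.16 specification.
    """
    snr_db = tx_power_dbm - fspl_db(distance_km) - noise_floor_dbm
    profiles = [  # (name, required SNR in dB, effective bits per symbol)
        ("64-QAM 3/4", 21, 4.5),
        ("16-QAM 1/2", 14, 2.0),
        ("QPSK 1/2", 8, 1.0),
        ("BPSK 1/2", 4, 0.5),
    ]
    for name, threshold, bits in profiles:
        if snr_db >= threshold:
            return name, bits, snr_db
    return "out of range", 0.0, snr_db

for d in (1, 5, 20, 50):
    name, bits, snr = best_profile(d)
    print(f"{d:>3} km: SNR ≈ {snr:5.1f} dB → {name} ({bits} bits/symbol)")
```

As distance grows, the received SNR falls, a more robust burst profile is selected, and the bits carried per symbol (and hence throughput) drop — exactly the distance/throughput relationship described above.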
A citywide deployment of WiMAX in Perth, Australia demonstrated that customers at the cell edge with an indoor customer-premises equipment (CPE) unit typically obtain speeds of around 1–4 Mbit/s, with users closer to the cell site obtaining speeds of up to 30 Mbit/s.[citation needed] Like all wireless systems, available bandwidth is shared between users in a given radio sector, so performance could deteriorate in the case of many active users in a single sector. However, with adequate capacity planning and the use of WiMAX's QoS, a minimum guaranteed throughput for each subscriber can be put in place. In practice, most users will have a range of 4–8 Mbit/s services, and additional radio cards will be added to the base station to increase the number of users that may be served as required. A number of specialized companies produced baseband ICs and integrated RFICs for WiMAX subscriber stations in the 2.3, 2.5 and 3.5 GHz bands (refer to 'Spectrum allocation' above). These companies include, but are not limited to, Beceem, Sequans, and PicoChip. Comparisons and confusion between WiMAX and Wi-Fi are frequent, because both are related to wireless connectivity and Internet access.[28] Although Wi-Fi and WiMAX are designed for different situations, they are complementary. WiMAX network operators typically provide a WiMAX subscriber unit that connects to the metropolitan WiMAX network and provides Wi-Fi connectivity within the home or business for computers and smartphones. This enables the user to place the WiMAX subscriber unit in the best reception area, such as a window, and have data access throughout their property. The TTCN-3 test specification language is used for the purposes of specifying conformance tests for WiMAX implementations.
The WiMAX test suite is being developed by a Specialist Task Force at ETSI (STF 252).[29] The WiMAX Forum is a non-profit organization formed to promote the adoption of WiMAX-compatible products and services.[30] A major role for the organization is to certify the interoperability of WiMAX products.[31] Those that pass conformance and interoperability testing achieve the "WiMAX Forum Certified" designation, and can display this mark on their products and marketing materials. Some vendors claim that their equipment is "WiMAX-ready", "WiMAX-compliant", or "pre-WiMAX", if they are not officially WiMAX Forum Certified. Another role of the WiMAX Forum is to promote the spread of knowledge about WiMAX. In order to do so, it has a certified training program that is currently offered in English and French. It also offers a series of member events and endorses some industry events. WiSOA was the first global organization composed exclusively of owners of WiMAX spectrum with plans to deploy WiMAX technology in those bands. WiSOA focused on the regulation, commercialisation, and deployment of WiMAX spectrum in the 2.3–2.5 GHz and the 3.4–3.5 GHz ranges. WiSOA merged with the Wireless Broadband Alliance in April 2008.[32] In 2011, the Telecommunications Industry Association released three technical standards (TIA-1164, TIA-1143, and TIA-1140) that cover the air interface and core networking aspects of WiMAX High-Rate Packet Data (HRPD) systems using a Mobile Station/Access Terminal (MS/AT) with a single transmitter.[33] Within the marketplace, WiMAX's main competition came from existing, widely deployed wireless systems such as Universal Mobile Telecommunications System (UMTS), CDMA2000, existing Wi-Fi, mesh networking and eventually 4G (LTE). In the future, competition will be from the evolution of the major cellular standards to 4G,[needs update] high-bandwidth, low-latency, all-IP networks with voice services built on top.
The worldwide move to 4G for GSM/UMTS and AMPS/TIA (including CDMA2000) is the 3GPP Long Term Evolution (LTE) effort. The LTE standard was finalized in December 2008, with the first commercial deployment of LTE carried out by TeliaSonera in Oslo and Stockholm in December 2009. Thereafter, LTE saw rapidly increasing adoption by mobile carriers around the world. Although WiMAX was much earlier to market than LTE, LTE was an upgrade and extension of previous 3G (GSM and CDMA) standards, whereas WiMAX was a relatively new and different technology without a large user base. Ultimately, LTE won the war to become the 4G standard because mobile operators such as Verizon, AT&T, Vodafone, NTT, and Deutsche Telekom chose to extend their investments in know-how, equipment and spectrum from 3G to LTE, rather than adopt a new technology standard. It would never have been cost-effective for WiMAX network operators to compete against fixed-line broadband networks based on 4G technologies. By 2009, most mobile operators had begun to realize that mobile connectivity (not fixed 802.16e) was the future, and that LTE was going to become the new worldwide mobile connectivity standard, so they chose to wait for LTE to develop rather than switch from 3G to WiMAX. WiMAX was a superior technology in terms of speed (roughly 25 Mbit/s) for a few years (2005–2009), and it pioneered some new technologies such as MIMO. But the mobile version of WiMAX (802.16m), intended to compete with GSM and CDMA technologies, was too little, too late in getting established, and by the time the LTE standard was finalized in December 2008, WiMAX as a mobile solution was doomed; it was clear that LTE (not WiMAX) would become the world's new 4G standard. The largest wireless broadband partner using WiMAX, Clearwire, announced in 2008 that it would begin overlaying its existing WiMAX network with LTE technology, which was necessary for Clearwire to obtain the investments it needed to stay in business.
In some areas of the world, the wide availability of UMTS and a general desire for standardization meant spectrum was not allocated for WiMAX: in July 2005, the EU-wide frequency allocation for WiMAX was blocked.[citation needed] Among early WirelessMAN standards, the European standard HiperMAN and the Korean standard WiBro were harmonized as part of WiMAX and are no longer seen as competition but as complementary.[citation needed] All networks now being deployed in South Korea, the home of the WiBro standard, are now WiMAX.[citation needed] The IEEE 802.16m-2011 standard[34] was the core technology for WiMAX 2. The IEEE 802.16m standard was submitted to the ITU for IMT-Advanced standardization.[35] IEEE 802.16m is one of the major candidates for IMT-Advanced technologies by ITU. Among many enhancements, IEEE 802.16m systems can provide four times faster[clarification needed] data speeds than WiMAX Release 1. WiMAX Release 2 provided backward compatibility with Release 1. WiMAX operators could migrate from Release 1 to Release 2 by upgrading channel cards or software. The WiMAX 2 Collaboration Initiative was formed to help this transition.[36] It was anticipated that, using 4X2 MIMO in the urban microcell scenario with only a single 20 MHz TDD channel available system-wide, the 802.16m system could support both 120 Mbit/s downlink and 60 Mbit/s uplink per site simultaneously. It was expected that WiMAX Release 2 would be available commercially in the 2011–2012 timeframe.[37] WiMAX Release 2.1 was released in the early 2010s and broke compatibility with earlier WiMAX networks.[citation needed] A significant number of operators had migrated to the new standard, which is compatible with TD-LTE, by the end of the 2010s. A field test conducted in 2007 by SUIRG (Satellite Users Interference Reduction Group) with support from the U.S.
Navy, the Global VSAT Forum, and several member organizations yielded results showing interference at 12 km when using the same channels for both the WiMAX systems and satellites in C-band.[38] As of October 2010, the WiMAX Forum claimed over 592 WiMAX (fixed and mobile) networks deployed in over 148 countries, covering over 621 million people.[39] By February 2011, the WiMAX Forum cited coverage of over 823 million people, and estimated coverage of over 1 billion people by the end of the year. Note that coverage means the availability of WiMAX service to populations within various geographies, not the number of WiMAX subscribers.[40] South Korea launched a WiMAX network in the second quarter of 2006. Spain delivered full coverage in two cities, Seville and Málaga, in 2008, reaching 20,000 portable units. By the end of 2008 there were 350,000 WiMAX subscribers in Korea.[41] Worldwide, by early 2010 WiMAX seemed to be ramping up quickly relative to other available technologies, though access in North America lagged.[42] Yota, the largest WiMAX network operator in the world in 4Q 2009,[43][44] announced in May 2010 that it would move new network deployments to LTE and, subsequently, change its existing networks as well.[citation needed] A study published in September 2010 by Blycroft Publishing estimated 800 management contracts from 364 WiMAX operations worldwide offering active services (launched or still trading, as opposed to just licensed and still to launch).[45] The WiMAX Forum announced on August 16, 2011 that there were more than 20 million WiMAX subscribers worldwide, the high-water mark for this technology. http://wimaxforum.org/Page/News/PR/20110816_WiMAX_Subscriptions_Surpass_20_Million_Globally
https://en.wikipedia.org/wiki/WiMAX-Advanced
Evolution-Data Optimized (EV-DO, EVDO, etc.) is a telecommunications standard for the wireless transmission of data through radio signals, typically for broadband Internet access. EV-DO is an evolution of the CDMA2000 (IS-2000) standard which supports high data rates and can be deployed alongside a wireless carrier's voice services. It uses advanced multiplexing techniques including code-division multiple access (CDMA) as well as time-division multiplexing (TDM) to maximize throughput. It is a part of the CDMA2000 family of standards and has been adopted by many mobile phone service providers around the world, particularly those previously employing CDMA networks. It is also used on the Globalstar satellite phone network.[1] An EV-DO channel has a bandwidth of 1.25 MHz, the same bandwidth size that IS-95A (IS-95) and IS-2000 (1xRTT) use,[2] though the channel structure is very different. The back-end network is entirely packet-based, and is not constrained by restrictions typically present on a circuit-switched network. The EV-DO feature of CDMA2000 networks provides access to mobile devices with forward link air interface speeds of up to 2.4 Mbit/s with Rel. 0 and up to 3.1 Mbit/s with Rev. A. The reverse link rate for Rel. 0 can operate up to 153 kbit/s, while Rev. A can operate at up to 1.8 Mbit/s. It was designed to be operated end-to-end as an IP-based network, and can support any application which can operate on such a network within its bit-rate constraints. There have been several revisions of the standard, starting with Release 0 (Rel. 0). This was later expanded upon with Revision A (Rev. A) to support quality of service (to improve latency) and higher rates on the forward and reverse link. In late 2006, Revision B (Rev. B) was published, whose features include the ability to bundle multiple carriers to achieve even higher rates and lower latencies (see TIA-856 Rev. B below). The upgrade from EV-DO Rev. A to Rev.
B involves a software update of the cell site modem, and additional equipment for new EV-DO carriers. Existing cdma2000 operators may have to retune some of their existing 1xRTT channels to other frequencies, as Rev. B requires all DO carriers to be within 5 MHz. The initial design of EV-DO was developed by Qualcomm in 1999 to meet IMT-2000 requirements for a greater-than-2 Mbit/s downlink for stationary communications, as opposed to mobile communication (i.e., moving cellular phone service). Initially, the standard was called High Data Rate (HDR), but was renamed 1xEV-DO after it was ratified by the International Telecommunication Union (ITU), under the designation TIA-856. Originally, 1xEV-DO stood for "1x Evolution-Data Only", referring to its being a direct evolution of the 1x (1xRTT) air interface standard, with its channels carrying only data traffic. The title of the 1xEV-DO standard document is "cdma2000 High Rate Packet Data Air Interface Specification", as cdma2000 (lowercase) is another name for the 1x standard, numerically designated as TIA-2000. Later, due to possible negative connotations of the word "only", the "DO" part of the standard's name was changed to stand for "Data Optimized", so the full name EV-DO now stands for "Evolution-Data Optimized". The 1x prefix has been dropped by many of the major carriers, and it is marketed simply as EV-DO.[3] This provides a more market-friendly emphasis of the technology being data-optimized. The primary characteristic that differentiates an EV-DO channel from a 1xRTT channel is that it is time-multiplexed on the forward link (from the tower to the mobile). This means that a single mobile has full use of the forward traffic channel within a particular geographic area (a sector) during a given slot of time. Using this technique, EV-DO is able to modulate each user's time slot independently.
This allows users in favorable RF conditions to be served with very complex modulation techniques while also serving users in poor RF conditions with simpler (and more redundant) signals.[4] The forward channel is divided into slots, each being 1.667 ms long. In addition to user traffic, overhead channels are interlaced into the stream; these include the 'pilot', which helps the mobile find and identify the channel, the Media Access Channel (MAC), which tells the mobile devices when their data is scheduled, and the 'control channel', which contains other information the network needs the mobile devices to know. The modulation to be used to communicate with a given mobile unit is determined by the mobile device itself; it listens to the traffic on the channel and, depending on the received signal strength along with the perceived multipath and fading conditions, makes a best guess as to what data rate it can sustain while maintaining a reasonable frame error rate of 1–2%. It then communicates this information back to the serving sector in the form of an integer between 1 and 12 on the "Digital Rate Control" (DRC) channel. Alternatively, the mobile can select a "null" rate (DRC 0), indicating that the mobile either cannot decode data at any rate, or that it is attempting to hand off to another serving sector.[4] The DRC values are as follows:[5] Another important aspect of the EV-DO forward link channel is the scheduler. The scheduler most commonly used is called "proportional fair". It is designed to maximize sector throughput while also guaranteeing each user a certain minimum level of service. The idea is to schedule mobiles reporting higher DRC indices more often, with the hope that those reporting worse conditions will improve in time. The system also incorporates Incremental Redundancy Hybrid ARQ. Each sub-packet of a multi-slot transmission is a turbo-coded replica of the original data bits.
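The early-termination behaviour of this incremental-redundancy scheme can be sketched as a toy simulation. The per-slot decode probabilities below are made-up illustrative numbers, not EV-DO link statistics:

```python
import random

def transmit_packet(decode_probs, rng=None):
    """Toy model of incremental-redundancy HARQ (illustrative only).

    Each entry of `decode_probs` is the probability that the mobile can
    decode the whole packet after receiving that many sub-packets, where
    each sub-packet is a coded replica of the same data bits.  On a
    successful decode the mobile sends an early ACK and the remaining
    sub-packets are cancelled.  Returns the number of slots actually used.
    """
    rng = rng or random.Random()
    n_slots = len(decode_probs)
    for slot in range(1, n_slots + 1):
        if rng.random() < decode_probs[slot - 1]:
            return slot            # early ACK: later sub-packets cancelled
    return n_slots                 # all sub-packets had to be sent

# Decode probability rises as redundancy accumulates (made-up numbers).
probs = [0.35, 0.60, 0.85, 0.99]
rng = random.Random(42)
used = [transmit_packet(probs, rng) for _ in range(10_000)]
print(f"average slots per 4-slot packet: {sum(used) / len(used):.2f}")
```

Because every additional sub-packet carries more redundancy for the same data bits, the chance of a successful decode grows with each slot received, and on average far fewer than the maximum number of slots are consumed per packet.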
This allows mobiles to acknowledge a packet before all of its sub-sections have been transmitted. For example, if a mobile transmits a DRC index of 3 and is scheduled to receive data, it will expect to get data during four time slots. If after decoding the first slot the mobile is able to determine the entire data packet, it can send an early acknowledgement back at that time; the remaining three sub-packets will be cancelled. If, however, the packet is not acknowledged, the network will proceed with the transmission of the remaining parts until all have been transmitted or the packet is acknowledged.[4] The reverse link (from the mobile back to the Base Transceiver Station) on EV-DO Rel. 0 operates very similarly to that of CDMA2000 1xRTT. The channel includes a reverse link pilot (which helps with decoding the signal) along with the user data channels. Some additional channels that do not exist in 1x include the DRC channel (described above) and the ACK channel (used for HARQ). Only the reverse link has any sort of power control, because the forward link is always transmitted at full power for use by all the mobiles.[5] The reverse link has both open-loop and closed-loop power control. In the open loop, the reverse link transmission power is set based upon the received power on the forward link. In the closed loop, the reverse link power is adjusted up or down 800 times a second, as indicated by the serving sector (similar to 1x).[6] All of the reverse link channels are combined using code division and transmitted back to the base station using BPSK,[7] where they are decoded. The maximum speed available for user data is 153.2 kbit/s, but in real-life conditions this is rarely achieved. Typical speeds achieved are between 20–50 kbit/s. Revision A of EV-DO makes several additions to the protocol while keeping it completely backwards compatible with Release 0.
These changes included the introduction of several new forward link data rates that increase the maximum burst rate from 2.45 Mbit/s to 3.1 Mbit/s. Also included were protocols that would decrease connection establishment time (called enhanced access channel MAC), the ability for more than one mobile to share the same timeslot (multi-user packets) and the introduction of QoS flags. All of these were put in place to allow for low-latency, low-bit-rate communications such as VoIP.[8] EV-DO Rev. A also defines additional forward rates.[9] In addition to the changes on the forward link, the reverse link was enhanced to support higher-complexity modulation (and thus higher bit rates). An optional secondary pilot was added, which is activated by the mobile when it tries to achieve enhanced data rates. To combat reverse link congestion and noise rise, the protocol calls for each mobile to be given an interference allowance, which is replenished by the network when the reverse link conditions allow it.[9] The reverse link has a maximum rate of 1.8 Mbit/s, but under normal conditions users experience a rate of approximately 500–1000 kbit/s, with more latency than DOCSIS and DSL. EV-DO Rev. B is a multi-carrier evolution of the Rev. A specification. It maintains the capabilities of EV-DO Rev. A and provides a number of enhancements. Qualcomm early on realized that EV-DO was a stop-gap solution, and foresaw an upcoming format war with LTE, determining that a new standard would be needed. Qualcomm originally called this technology EV-DV (Evolution Data and Voice).[10] As EV-DO became more pervasive, EV-DV evolved into EV-DO Rev. C. The EV-DO Rev. C standard was specified by 3GPP2 to improve the CDMA2000 mobile phone standard for next-generation applications and requirements.
It was proposed by Qualcomm as the natural evolution path for CDMA2000, and the specifications were published by 3GPP2 (C.S0084-*) and TIA (TIA-1121) in 2007 and 2008 respectively.[11][12] The brand name UMB (Ultra Mobile Broadband) was introduced in 2006 as a synonym for this standard.[13] UMB was intended to be a fourth-generation technology, which would make it compete with LTE and WiMAX. These technologies use a high-bandwidth, low-latency, underlying TCP/IP network with high-level services such as voice built on top. Widespread deployment of 4G networks promised to make applications that were previously not feasible not only possible but ubiquitous. Examples of such applications include mobile high-definition video streaming and mobile gaming. Like LTE, the UMB system was to be based upon Internet networking technologies running over a next-generation radio system, with peak rates of up to 280 Mbit/s. Its designers intended for the system to be more efficient and capable of providing more services than the technologies it was intended to replace. To provide compatibility with the systems it was intended to replace, UMB was to support handoffs with other technologies, including existing CDMA2000 1X and 1xEV-DO systems. UMB's use of OFDMA would have eliminated many of the disadvantages of the CDMA technology used by its predecessor, including the "breathing" phenomenon, the difficulty of adding capacity via microcells, the fixed bandwidth sizes that limit the total bandwidth available to handsets, and the near-complete control by one company of the required intellectual property. While capacity of existing Rel.
B networks can be increased 1.5-fold by using the EVRC-B voice codec and QLIC handset interference cancellation, 1x Advanced and EV-DO Advanced offer up to a 4x network capacity increase using BTS interference cancellation (reverse link interference cancellation), multi-carrier links, and smart network management technologies.[14][15] In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead. This followed the announcement that most CDMA carriers had chosen to adopt either the WiMAX or LTE standard as their 4G technology. In fact, no carrier had announced plans to adopt UMB.[16] However, during the ongoing development process of the 4G technology, 3GPP added some functionalities to LTE, allowing it to become the sole upgrade path for all wireless networks.
https://en.wikipedia.org/wiki/Ultra_Mobile_Broadband
IEEE 802.20 or Mobile Broadband Wireless Access (MBWA) was a specification by the standards association of the Institute of Electrical and Electronics Engineers (IEEE) for mobile broadband networks. The main standard was published in 2008.[1] MBWA is no longer being actively developed. This wireless broadband technology is also known and promoted as iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access). It was originally developed by ArrayComm, and optimizes the use of its bandwidth with the help of smart antennas. Kyocera is the manufacturer of iBurst devices. iBurst is a mobile broadband wireless access system that was first developed by ArrayComm, and announced with partner Sony in April 2000.[2] It was adopted as the High Capacity – Spatial Division Multiple Access (HC-SDMA) radio interface standard (ATIS-0700004-2005) by the Alliance for Telecommunications Industry Solutions (ATIS). The standard was prepared by the Wireless Wideband Internet Access subcommittee of ATIS' Wireless Technology and Systems Committee and accepted as an American National Standard in 2005.[3] HC-SDMA was announced as being considered by ISO TC204 WG16 for the continuous communications standards architecture, known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more. An official liaison was established between WTSC and ISO TC204 WG16 for this in 2005.[3] The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances.
The protocol is designed to be implemented with smart antenna array techniques (called MIMO, for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.[4] In January 2006, the IEEE 802.20 Mobile Broadband Wireless Access Working Group adopted a technology proposal that included the use of the HC-SDMA standard for the 625 kHz Multi-Carrier time-division duplex (TDD) mode of the standard. One Canadian vendor operates at 1.8 GHz. The HC-SDMA interface operates on a similar premise as cellular phones, with hand-offs between HC-SDMA cells repeatedly providing the user with seamless wireless Internet access even when moving at the speed of a car or train. The protocol supports Layer 3 (L3) mechanisms for creating and controlling logical connections (sessions) between client device and base station, including registration, stream start, power control, handover, link adaptation, and stream closure, as well as L3 mechanisms for client device authentication and secure transmission on the data links. Currently deployed iBurst systems allow connectivity up to 2 Mbit/s for each subscriber equipment. Future firmware upgrades were reportedly expected to increase these speeds up to 5 Mbit/s, consistent with the HC-SDMA protocol.[citation needed] The 802.20 working group was proposed in response to products using technology originally developed by ArrayComm, marketed under the iBurst brand name. The Alliance for Telecommunications Industry Solutions adopted iBurst as ATIS-0700004-2005.[5][3] The Mobile Broadband Wireless Access (MBWA) Working Group was approved by the IEEE Standards Board on December 11, 2002, to prepare a formal specification for a packet-based air interface designed for Internet Protocol-based services.
At its height, the group had 175 participants.[6] On June 8, 2006, the IEEE-SA Standards Board directed that all activities of the 802.20 Working Group be temporarily suspended until October 1, 2006.[7] The decision came from complaints of a lack of transparency, and that the group's chair, Jerry Upton, was favoring Qualcomm.[8] The unprecedented step came after other working groups had also been subject to related allegations of large companies undermining the standards process.[9] Intel and Motorola had filed appeals, claiming they were not given time to prepare proposals. These claims were cited in a 2007 lawsuit filed by Broadcom against Qualcomm.[10] On September 15, 2006, the IEEE-SA Standards Board approved a plan to enable the working group to move towards completion and approval by reorganizing.[11] The chair at the November 2006 meeting was Arnold Greenspan.[12] On July 17, 2007, the IEEE 802 Executive Committee, along with its 802.20 Oversight Committee, approved a change to voting in the 802.20 working group. Instead of a vote per attending individual, each entity would have a single vote.[13][14] On June 12, 2008, the IEEE approved the base standard to be published.[1] Additional supporting standards included IEEE 802.20.2-2010, a protocol conformance statement; 802.20.3-2010, minimum performance characteristics; an amendment, 802.20a-2010, for a Management Information Base and some corrections; and amendment 802.20b-2010 to support bridging.[15] The 802.20 standard was put into hibernation in March 2011 due to lack of activity.[citation needed] In 2004 another wireless standards group had been formed as IEEE 802.22, for wireless regional networks using unused television station frequencies.[16] Trials such as those in the Netherlands by T-Mobile International in 2004 were announced as "Pre-standard 802.20". These were based on an orthogonal frequency-division multiplexing technology known as FLASH-OFDM developed by Flarion[17] (since 2006 owned by Qualcomm).
However, other service providers soon adopted 802.16e (the mobile version of WiMAX).[18] In September 2008, the Association of Radio Industries and Businesses in Japan adopted the 802.20-2008 standard as ARIB STD-T97. Kyocera markets products supporting the standard under the iBurst name. As of March 2011[update], Kyocera claimed 15 operators offered service in 12 countries.[5] Various options are already commercially available. iBurst was commercially available in twelve countries in 2011, including Azerbaijan, Lebanon, and the United States.[5][19][20] iBurst (Pty) Ltd started operation in South Africa in 2005.[21] iBurst Africa International provided the service in Ghana in 2007, and then later in Mozambique, the Democratic Republic of the Congo and Kenya.[22] MoBif Wireless Broadband Sdn Bhd started service in Malaysia in 2007, changing its name to iZZinet.[23] The provider ceased operations in March 2011. In Australia, Veritel and Personal Broadband Australia (a subsidiary of Commander Australia Limited) offered iBurst services; however, both have since been shut down following the rise of 3.5G and 4G mobile data services. BigAir acquired Veritel's iBurst customers in 2006,[24] and shut down the service in 2009.[25] Personal Broadband Australia's iBurst service was shut down in December 2008. iBurst South Africa officially shut down on August 31, 2017.[26] Users were given a choice to keep their @iburst.co.za or @wbs.co.za email addresses. iBurst still keeps support staff available; however, this is also expected to be shut down by the end of 2017 (no information about continued support for the email addresses from iBurst has been given).
https://en.wikipedia.org/wiki/IEEE_802.20
5G NR (5G New Radio)[1] is a radio access technology (RAT) developed by the 3rd Generation Partnership Project (3GPP) for the 5G (fifth generation) mobile network.[1] It was designed to be the global standard for the air interface of 5G networks.[2] It is based on orthogonal frequency-division multiplexing (OFDM), as is the 4G (fourth generation) Long-Term Evolution (LTE) standard. The 3GPP specification 38 series[3] provides the technical details behind 5G NR, the successor of LTE. The study of 5G NR within 3GPP started in 2015, and the first specification was made available by the end of 2017. While the 3GPP standardization process was ongoing, the industry had already begun efforts to implement infrastructure compliant with the draft standard, with the first large-scale commercial launch of 5G NR having occurred at the end of 2018. Since 2019, many operators have deployed 5G NR networks and handset manufacturers have developed 5G NR-enabled handsets.[4] 5G NR uses frequency bands in two broad frequency ranges: Frequency Range 1 (FR1), covering sub-6 GHz bands, and Frequency Range 2 (FR2), covering millimeter-wave bands from 24 GHz upwards. gNodeB or gNB (Next Generation Node B) means a 5G base station. It transmits radio data to and receives radio data from user equipment. Its coverage area is called a cell. The gNodeB may be a tower. A "Non-Standalone" (NSA) gNodeB is built on an existing LTE (4G) base station (eNodeB or eNB). Ooredoo was the first carrier to launch a commercial 5G NR network, in May 2018 in Qatar. Other carriers around the world have been following suit. In 2018, 3GPP published Release 15, which includes what is described as "Phase 1" standardization for 5G NR. Release 16, described as "5G phase 2", had a freeze date of March 2020 and a completion date of June 2020.[6] Release 17 was originally scheduled for delivery in September 2021,[7] but, because of the COVID-19 pandemic, it was rescheduled for June 2022.[8] Release 18 work has started in 3GPP. Rel-18 is referred to as "NR Advanced", signifying another milestone in wireless communication systems.
NR Advanced will include features such as eXtended Reality (XR), AI/ML studies, and mobility enhancements. Mobility is at the core of 3GPP technology and has so far been handled at Layer 3 (RRC); in Rel-18, the work on mobility introduces lower-layer-triggered mobility. Initial 5G NR launches depended on existing LTE infrastructure in non-standalone (NSA) mode, before maturation of the standalone (SA) mode with the 5G core network. Additionally, to make better use of existing assets, carriers may opt to dynamically share the spectrum between LTE and 5G NR: the spectrum is multiplexed over time between both generations of mobile networks, depending on user demand, while the LTE network is still used for control functions. Dynamic spectrum sharing (DSS) may be deployed on existing LTE equipment as long as it is compatible with 5G NR; only the 5G NR terminal needs to be compatible with DSS.[9] The non-standalone (NSA) mode of 5G NR refers to an option of 5G NR deployment that depends on the control plane of an existing LTE network for control functions, while 5G NR is exclusively focused on the user plane.[10][11] This is reported to speed up 5G adoption; however, some operators and vendors have criticized prioritizing the introduction of 5G NR NSA on the grounds that it could hinder the implementation of the standalone mode of the network.[12][13] It uses the same core network as a 4G network, but with upgraded radio equipment.[14][15] The standalone (SA) mode of 5G NR refers to using 5G cells for both signalling and information transfer.[10] It includes the new 5G Packet Core architecture instead of relying on the 4G Evolved Packet Core,[16][17] to allow the deployment of 5G without the LTE network.[18] It is expected to have lower cost, better efficiency, and to assist development of new use cases.[12][19] However, initial deployments might see slower speeds than the existing network due to the allocation of
spectrum.[20] SA uses a new core network dedicated to 5G.[21]

5G NR supports seven subcarrier spacings: 15, 30, 60, 120, 240, 480, and 960 kHz. The length of the cyclic prefix is inversely proportional to the subcarrier spacing: it is 4.7 μs with 15 kHz spacing, and 4.7 / 16 ≈ 0.29 μs with 240 kHz spacing. Additionally, higher subcarrier spacings allow for reduced latency and increased support for high-frequency bands, essential for the ultra-reliable low-latency communications (URLLC) and enhanced mobile broadband (eMBB) applications in 5G.

In 5G NR Release 17, 3GPP introduced NR-Light for reduced-capability (RedCap) devices. NR-Light, also known as RedCap, is designed to support a wide range of new and emerging use cases that require lower complexity and reduced power consumption compared to traditional 5G NR devices. NR-Light targets devices in the mid-tier performance category, striking a balance between the high-performance capabilities of standard 5G NR devices and the ultra-low complexity of LTE-M and NB-IoT devices, making it well suited to mid-tier applications. NR-Light enhances the 5G ecosystem by providing a scalable solution that caters to the needs of devices with varying performance requirements, expanding the potential applications and fostering the growth of IoT and other connected technologies.
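The inverse scaling between subcarrier spacing and cyclic-prefix length quoted earlier in this section can be sketched numerically. The helper function below is purely illustrative, taking the article's figure of 4.7 μs at 15 kHz as its baseline:

```python
# Normal cyclic-prefix length scales inversely with subcarrier spacing:
# doubling the spacing halves the OFDM symbol and its cyclic prefix.

def cyclic_prefix_us(subcarrier_spacing_khz: float, base_cp_us: float = 4.7) -> float:
    """Approximate normal CP length, taking 4.7 us at 15 kHz as the baseline."""
    scaling = subcarrier_spacing_khz / 15.0
    return base_cp_us / scaling

for scs in (15, 30, 60, 120, 240):
    print(f"{scs:>3} kHz -> CP ~ {cyclic_prefix_us(scs):.2f} us")
```

At 240 kHz (16 times the baseline spacing) this reproduces the 4.7 / 16 ≈ 0.29 μs value mentioned above.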
https://en.wikipedia.org/wiki/5G_NR
In telecommunications, 5G is the "fifth generation" of cellular network technology, the successor to the fourth generation (4G), and has been deployed by mobile operators worldwide since 2019. Compared to 4G, 5G networks offer not only higher download speeds, with a peak speed of 10 gigabits per second (Gbit/s),[a] but also substantially lower latency, enabling near-instantaneous communication through cellular base stations and antennae.[1] There is one global unified 5G standard: 5G New Radio (5G NR),[2] which has been developed by the 3rd Generation Partnership Project (3GPP) based on specifications defined by the International Telecommunication Union (ITU) under the IMT-2020 requirements.[3]

The increased bandwidth of 5G over 4G allows networks to connect more devices simultaneously and improves the quality of cellular data services in crowded areas.[4] These features make 5G particularly suited for applications requiring real-time data exchange, such as extended reality (XR), autonomous vehicles, remote surgery, and industrial automation. Additionally, the increased bandwidth is expected to drive the adoption of 5G as a general Internet service provider (ISP) technology, particularly through fixed wireless access (FWA), competing with existing technologies such as cable Internet, while also facilitating new applications in machine-to-machine communication and the Internet of Things (IoT), the latter of which may include diverse applications such as smart cities, connected infrastructure, industrial IoT, and automated manufacturing processes. Unlike 4G, which was primarily designed for mobile broadband, 5G can handle millions of IoT devices with stringent performance requirements, such as real-time sensor data processing and edge computing. 5G networks also extend beyond terrestrial infrastructure, incorporating non-terrestrial networks (NTN) such as satellites and high-altitude platforms to provide global coverage, including remote and underserved areas.
5G deployment faces challenges such as significant infrastructure investment, spectrum allocation, security risks, and concerns about energy efficiency and environmental impact associated with the use of higher frequency bands. However, it is expected to drive advancements in sectors like healthcare, transportation, and entertainment.

5G networks are cellular networks,[5] in which the service area is divided into small geographical areas called cells. All 5G wireless devices in a cell communicate by radio waves with a cellular base station via fixed antennas, over frequencies assigned by the base station. The base stations, termed nodes, are connected to switching centers in the telephone network and routers for Internet access by high-bandwidth optical fiber or wireless backhaul connections. As in other cellular networks, a mobile device moving from one cell to another is automatically handed off seamlessly.

The industry consortium setting standards for 5G, the 3rd Generation Partnership Project (3GPP), defines "5G" as any system using 5G NR (5G New Radio) software, a definition that came into general use by late 2018. 5G continues to use OFDM encoding.

Several network operators use millimeter waves (mmWave), called FR2 in 5G terminology, for additional capacity and higher throughputs. Millimeter waves have a shorter range than the lower-frequency microwaves, so the cells are of a smaller size. Millimeter waves also have more trouble passing through building walls and humans. Millimeter-wave antennas are smaller than the large antennas used in previous cellular networks. The increased data rate is achieved partly by using additional higher-frequency radio waves alongside the low- and medium-band frequencies used in previous cellular networks. To provide a wide range of services, 5G networks can operate in three frequency bands: low-band, mid-band, or high-band (millimeter wave).
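The three coverage tiers can be captured in a small lookup table. The frequency ranges and typical speeds below are the representative figures this article quotes for each tier, and the classifier function is purely illustrative:

```python
# Representative 5G band tiers (frequencies in GHz, typical download
# speeds in Mbit/s as quoted in this article; boundaries are approximate,
# and the high-band upper speed is open-ended "Gbit/s range").
BAND_TIERS = {
    "low":  {"freq_ghz": (0.6, 0.9),   "speed_mbps": (5, 250)},
    "mid":  {"freq_ghz": (1.7, 4.7),   "speed_mbps": (100, 900)},
    "high": {"freq_ghz": (24.0, 47.0), "speed_mbps": (1000, None)},
}

def tier_for_frequency(freq_ghz):
    """Classify a carrier frequency into a tier, or None if it falls between tiers."""
    for name, info in BAND_TIERS.items():
        lo, hi = info["freq_ghz"]
        if lo <= freq_ghz <= hi:
            return name
    return None

print(tier_for_frequency(3.5))   # band n78 -> "mid"
print(tier_for_frequency(28.0))  # mmWave   -> "high"
```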
Low-band 5G uses a similar frequency range to 4G smartphones, 600–900 MHz, and can potentially offer higher download speeds than 4G: 5–250 megabits per second (Mbit/s).[6][7] Low-band cell towers have a range and coverage area similar to 4G towers.

Mid-band 5G uses microwaves of 1.7–4.7 GHz, allowing speeds of 100–900 Mbit/s, with each cell tower providing service up to several kilometers in radius. This level of service is the most widely deployed, and was rolled out in many metropolitan areas in 2020. Some regions are not implementing the low band, making mid-band the minimum service level.

High-band 5G uses frequencies of 24–47 GHz, near the bottom of the millimeter-wave band, although higher frequencies may be used in the future. It often achieves download speeds in the gigabit-per-second (Gbit/s) range, comparable to coaxial cable Internet service. However, millimeter waves (mmWave or mmW) have a more limited range, requiring many small cells.[8] They can be impeded or blocked by materials in walls or windows or by pedestrians.[9][10] Due to their higher cost, plans are to deploy these cells only in dense urban environments and areas where crowds of people congregate, such as sports stadiums and convention centers. The above speeds are those achieved in actual tests in 2020, and speeds are expected to increase during rollout.[6] The spectrum ranging from 24.25 to 29.5 GHz has been the most licensed and deployed 5G mmWave spectrum range in the world.[11]

Rollout of 5G technology has led to debate over its security and relationship with Chinese vendors. It has also been the subject of health concerns and misinformation, including discredited conspiracy theories linking it to the COVID-19 pandemic.

The ITU-R has defined three main application areas for the enhanced capabilities of 5G.
They are Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and Massive Machine-Type Communications (mMTC).[12] Only eMBB was deployed in 2020; URLLC and mMTC were several years away in most locations.[13]

Enhanced Mobile Broadband (eMBB) uses 5G as a progression from 4G LTE mobile broadband services, with faster connections, higher throughput, and more capacity. This will benefit areas of higher traffic such as stadiums, cities, and concert venues.[14] Ultra-Reliable Low-Latency Communications (URLLC) refers to using the network for mission-critical applications that require uninterrupted and robust data exchange. Short-packet data transmission is used to meet both the reliability and the latency requirements of the wireless communication networks.

Massive Machine-Type Communications (mMTC) would be used to connect to a large number of devices. 5G technology will connect some of the 50 billion connected IoT devices.[15] Most will use less expensive Wi-Fi instead. Drones, transmitting via 4G or 5G, will aid in disaster recovery efforts, providing real-time data for emergency responders.[15] Most cars will have a 4G or 5G cellular connection for many services. Autonomous cars do not require 5G, as they have to be able to operate where they do not have a network connection.[16] However, most autonomous vehicles also feature tele-operations for mission accomplishment, and these greatly benefit from 5G technology.[17][18]

The 5G Automotive Association has been promoting the C-V2X communication technology, which will first be deployed in 4G. It provides for communication between vehicles and infrastructure.[19]

A real-time digital twin of a real object, such as a turbine engine, aircraft, wind turbine, offshore platform, or pipeline, can be maintained over 5G: the networks' latency and throughput make it possible to capture near-real-time IoT data in support of the digital twin.
Mission-critical push-to-talk (MCPTT) and mission-critical video and data are expected to be furthered in 5G.[20]

Fixed wireless connections will offer an alternative to fixed-line broadband (ADSL, VDSL, fiber-optic, and DOCSIS connections) in some locations. Utilizing 5G technology, fixed wireless access (FWA) can deliver high-speed Internet to homes and businesses without the need for extensive physical infrastructure. This approach is particularly beneficial in rural or underserved areas where traditional broadband deployment is too expensive or logistically challenging. 5G FWA can outperform older fixed-line technologies such as ADSL and VDSL in terms of speed and latency, making it suitable for bandwidth-intensive applications like streaming, gaming, and remote work.[21][22][23]

Sony has tested the possibility of using local 5G networks to replace the SDI cables currently used in broadcast camcorders.[24] The 5G Broadcast tests started around 2020 (Orkney, Bavaria, Austria, Central Bohemia) based on FeMBMS (Further evolved Multimedia Broadcast Multicast Service).[25] The aim is to serve an unlimited number of mobile or fixed devices with video (TV) and audio (radio) streams without these consuming any data allowance or even being authenticated on a network.

5G networks, like 4G networks, do not natively support voice calls traditionally carried over circuit-switched technology. Instead, voice communication is transmitted over the IP network, similar to IPTV services. To address this, Voice over NR (VoNR) is implemented, allowing voice calls to be carried over the 5G network using the same packet-switched infrastructure as other IP-based services, such as video streaming and messaging.
Similarly to how Voice over LTE (VoLTE) enables voice calls on 4G networks, VoNR (Vo5G) serves as the 5G equivalent for voice communication, but it requires a 5G standalone (SA) network to function.[26]

5G is capable of delivering significantly faster data rates than 4G (approximately 10 times faster),[27][28] with peak data rates of up to 20 gigabits per second (Gbps).[29] Furthermore, average 5G download speeds have been recorded at 186.3 Mbit/s in the U.S. by T-Mobile,[30] while South Korea, as of May 2022[update], leads globally with average speeds of 432 megabits per second (Mbps).[31][32] 5G networks are also designed to provide significantly more capacity than 4G networks, with a projected 100-fold increase in network capacity and efficiency.[33]

The most widely used form of 5G, sub-6 GHz 5G (mid-band), is capable of delivering data rates ranging from 10 to 1,000 megabits per second (Mbps), with a much greater reach than mmWave bands. C-Band (n77/n78) was deployed by various U.S. operators in 2022 in the sub-6 bands, although its deployment by Verizon and AT&T was delayed until early January 2022 due to safety concerns raised by the Federal Aviation Administration. As of 2023, the record 5G speed is 5.9 Gbit/s, although this was measured on a network before its commercial launch.[34] Low-band frequencies (such as n5) offer a greater coverage area for a given cell, but their data rates are lower than those of the mid and high bands, in the range of 5–250 megabits per second (Mbps).[7]

In 5G, the ideal "air latency" is of the order of 8 to 12 milliseconds, i.e., excluding delays due to HARQ retransmissions, handovers, etc. Retransmission latency and backhaul latency to the server must be added to the "air latency" for correct comparisons.
Verizon reported the latency on its early 5G deployment to be 30 ms.[35] Edge servers close to the towers can reduce round-trip time (RTT) latency to 14 milliseconds and the minimum jitter to 1.84 milliseconds.[36]

Latency is much higher during handovers, ranging from 50 to 500 milliseconds depending on the type of handover[citation needed]. Reducing handover interruption time is an ongoing area of research and development; options include modifying the handover margin (offset) and the time-to-trigger (TTT).

5G uses an adaptive modulation and coding scheme (MCS) to keep the block error rate (BLER) extremely low. Whenever the error rate crosses a (very low) threshold, the transmitter switches to a lower MCS, which is less error-prone. In this way speed is sacrificed to ensure an almost-zero error rate.

The range of 5G depends on many factors: transmit power, frequency, and interference. For example, mmWave (e.g., band n258) will have a lower range than mid-band (e.g., band n78), which in turn will have a lower range than low-band (e.g., band n5).

Given the marketing hype about what 5G can offer, simulators and drive tests are used by cellular service providers for precise measurement of 5G performance.
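The link-adaptation behavior described above, stepping down to a more robust MCS whenever the block error rate crosses a threshold, can be sketched as a toy control loop. The threshold value and MCS index range below are invented for illustration and are not 3GPP values:

```python
# Toy link adaptation: step down to a lower (more robust) MCS index
# whenever the measured block error rate exceeds a low threshold.

BLER_THRESHOLD = 0.10  # illustrative value, not a 3GPP target

def adapt_mcs(current_mcs: int, measured_bler: float) -> int:
    """Return the next MCS index: lower (more robust) if BLER is too high."""
    if measured_bler > BLER_THRESHOLD and current_mcs > 0:
        return current_mcs - 1  # sacrifice speed for reliability
    return current_mcs

mcs = 15
for bler in (0.02, 0.25, 0.30, 0.05):
    mcs = adapt_mcs(mcs, bler)
# The two high-BLER measurements each cost one MCS step.
print(mcs)  # -> 13
```

A real scheduler also steps the MCS back up when the channel improves; this sketch shows only the protective downward step the text describes.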
Initially, the term was associated with the International Telecommunication Union's IMT-2020 standard, which required a theoretical peak download speed of 20 gigabits per second and upload speed of 10 gigabits per second, along with other requirements.[29] Then, the industry standards group 3GPP chose the 5G NR (New Radio) standard together with LTE as their proposal for submission to the IMT-2020 standard.[37][38]

5G NR can include lower frequencies (FR1), below 6 GHz, and higher frequencies (FR2), above 24 GHz.[39] However, the speed and latency in early FR1 deployments, using 5G NR software on 4G hardware (non-standalone), are only slightly better than in new 4G systems, estimated at 15 to 50% better.[40][41] The standard documents are organized by the 3rd Generation Partnership Project (3GPP),[42][43] with the system architecture defined in TS 23.501.[44] The packet protocol for mobility management (establishing connection and moving between base stations) and session management (connecting to networks and network slices) is described in TS 24.501.[45] Specifications of key data structures are found in TS 23.003.[46] DECT NR+ is a related, non-cellular 5G standard based on the DECT-2020 specifications and a mesh network.[47][48]

IEEE covers several areas of 5G with a core focus on the wireline sections between the Remote Radio Head (RRH) and the Baseband Unit (BBU). The 1914.1 standards focus on network architecture, dividing the connection between the RRU and BBU into two key sections: Radio Unit (RU) to Distributed Unit (DU), the NGFI-I (Next Generation Fronthaul Interface), and DU to Central Unit (CU), the NGFI-II interface, allowing a more diverse and cost-effective network.
NGFI-I and NGFI-II have defined performance values which should be complied with to ensure that the different traffic types defined by the ITU can be carried.[page needed] The IEEE 1914.3 standard is creating a new Ethernet frame format capable of carrying IQ data in a much more efficient way, depending on the functional split utilized. This is based on the 3GPP definition of functional splits.[page needed]

5G NR (5G New Radio) is the de facto air interface developed for 5G networks.[49] It is the global standard for 3GPP 5G networks.[50] The study of 5G NR within 3GPP started in 2015, and the first specification was made available by the end of 2017. While the 3GPP standardization process was ongoing, the industry had already begun efforts to implement infrastructure compliant with the draft standard, with the first large-scale commercial launch of 5G NR occurring at the end of 2018. Since 2019, many operators have deployed 5G NR networks and handset manufacturers have developed 5G NR-enabled handsets.[51]

5Gi is an alternative 5G variant developed in India. It was developed in a joint collaboration between IIT Madras, IIT Hyderabad, TSDSI, and the Centre of Excellence in Wireless Technology (CEWiT)[citation needed]. 5Gi is designed to improve 5G coverage in rural and remote areas over varying geographical terrains. 5Gi uses Low Mobility Large Cell (LMLC) to extend 5G connectivity and the range of a base station.[52] In April 2022, 5Gi was merged with the global 5G NR standard in the 3GPP Release 17 specifications.[53]

In the Internet of things (IoT), 3GPP is going to submit the evolution of NB-IoT and eMTC (LTE-M) as 5G technologies for the LPWA (Low-Power Wide-Area) use case.[56]

Standards are being developed by 3GPP to provide access to end devices via non-terrestrial networks (NTN), i.e.
satellite or airborne telecommunication equipment, to allow for better coverage outside of populated or otherwise hard-to-reach locations.[57][58] The enhanced communication quality relies on the unique properties of the air-to-ground channel. Several manufacturers have announced and released hardware that integrates 5G with satellite networks.

5G-Advanced (also known as 5.5G or 5G-A) is an evolutionary upgrade to 5G technology, defined under the 3GPP Release 18 standard. It serves as a transitional phase between 5G and future 6G networks, focusing on performance optimization, enhanced spectral efficiency, energy efficiency, and expanded functionality. This technology supports advanced applications such as extended reality (XR), massive machine-type communication (mMTC), and ultra-low latency for critical services, such as autonomous vehicles.[66][67][68] 5G-Advanced would offer a theoretical 10 Gbps downlink, 1 Gbps uplink, 100 billion device connections, and lower latency.[69]

Additionally, 5G-Advanced integrates artificial intelligence (AI) and machine learning (ML) to optimize network operations, enabling smarter resource allocation and predictive maintenance. It also enhances network slicing, allowing highly customized virtual networks for specific use cases such as industrial automation, smart cities, and critical communication systems. 5G-Advanced aims to minimize service interruption times during handovers to nearly zero, ensuring robust connectivity for devices in motion, such as high-speed trains and autonomous vehicles. To further support emerging IoT applications, 5G-Advanced expands the capabilities of RedCap (Reduced Capability) devices, enabling their efficient use in scenarios that require low complexity and power consumption.[70][71] Furthermore, 5G-Advanced introduces advanced time synchronization methods independent of GNSS, providing more precise timing for critical applications.
For the first time in the development of mobile network standards defined by 3GPP, it offers fully independent geolocation capabilities, allowing position determination without relying on satellite systems such as GPS. The standard includes extended support for non-terrestrial networks (NTN), enabling communication via satellites and unmanned aerial vehicles, which facilitates connectivity in remote or hard-to-reach areas.[72]

In December 2023, Finnish operator DNA demonstrated 10 Gbps speeds on its network using 5G-Advanced technology.[73][74] The Release 18 specifications were finalized by mid-2024.[75][76] On February 27, 2025, Elisa announced its deployment of the first 5G-Advanced network in Finland.[77] In March 2025, China Mobile started deployment of a 5G-Advanced network in Hangzhou.[78]

Beyond mobile operator networks, 5G is also expected to be used for private networks with applications in industrial IoT, enterprise networking, and critical communications, in what is being described as NR-U (5G NR in Unlicensed Spectrum)[79] and Non-Public Networks (NPNs) operating in licensed spectrum.
By the mid-to-late 2020s, standalone private 5G networks are expected to become the predominant wireless communications medium to support the ongoing Industry 4.0 revolution for the digitization and automation of manufacturing and process industries.[80] 5G was also expected to increase phone sales.[81]

Initial 5G NR launches depended on pairing with existing LTE (4G) infrastructure in non-standalone (NSA) mode (5G NR radio with 4G core), before maturation of the standalone (SA) mode with the 5G core network.[82]

As of April 2019, the Global Mobile Suppliers Association had identified 224 operators in 88 countries that had demonstrated, were testing or trialing, or had been licensed to conduct field trials of 5G technologies, were deploying 5G networks, or had announced service launches.[83] The equivalent numbers in November 2018 were 192 operators in 81 countries.[84] The first country to adopt 5G on a large scale was South Korea, in April 2019. Swedish telecoms giant Ericsson predicted that 5G Internet would cover up to 65% of the world's population by the end of 2025.[85] Ericsson also planned to invest 1 billion reais ($238.30 million) in Brazil to add a new assembly line dedicated to fifth-generation technology (5G) for its Latin American operations.[86]

When South Korea launched its 5G network, all carriers used Samsung, Ericsson, and Nokia base stations and equipment, except for LG U Plus, which also used Huawei equipment.[87][88] Samsung was the largest supplier of 5G base stations in South Korea at launch, having shipped 53,000 of the 86,000 base stations installed across the country at the time.[89]

The first fairly substantial deployments were in April 2019.
In South Korea, SK Telecom claimed 38,000 base stations, KT Corporation 30,000, and LG U Plus 18,000; 85% of these were in six major cities.[90] They used 3.5 GHz (sub-6) spectrum in non-standalone (NSA) mode, and tested download speeds ranged from 193 to 430 Mbit/s.[91] 260,000 users signed up in the first month, and 4.7 million by the end of 2019.[92] T-Mobile US was the first company in the world to launch a commercially available 5G NR standalone network.[93]

Nine companies sell 5G radio hardware and 5G systems for carriers: Altiostar, Cisco Systems, Datang Telecom/Fiberhome, Ericsson, Huawei, Nokia, Qualcomm, Samsung, and ZTE.[94][95][96][97][98][99][100] As of 2023, Huawei is the leading 5G equipment manufacturer, has the greatest market share of 5G equipment, and has built approximately 70% of worldwide 5G base stations.[101]: 182

Large quantities of new radio spectrum (5G NR frequency bands) have been allocated to 5G.[102] For example, in July 2016, the U.S. Federal Communications Commission (FCC) freed up vast amounts of bandwidth in underused high-band spectrum for 5G.
The Spectrum Frontiers Proposal (SFP) doubled the amount of millimeter-wave unlicensed spectrum to 14 GHz and created four times the amount of flexible, mobile-use spectrum the FCC had licensed to date.[103] In March 2018, European Union lawmakers agreed to open up the 3.6 and 26 GHz bands by 2020.[104]

As of March 2019[update], there were reportedly 52 countries, territories, special administrative regions, disputed territories and dependencies that were formally considering introducing certain spectrum bands for terrestrial 5G services, holding consultations regarding suitable spectrum allocations for 5G, had reserved spectrum for 5G, had announced plans to auction frequencies, or had already allocated spectrum for 5G use.[105]

In March 2019, the Global Mobile Suppliers Association released the industry's first database tracking worldwide 5G device launches.[106] In it, the GSA identified 23 vendors who had confirmed the availability of forthcoming 5G devices, with 33 different devices including regional variants. There were seven announced 5G device form factors: telephones (×12 devices), hotspots (×4), indoor and outdoor customer-premises equipment (×8), modules (×5), snap-on dongles and adapters (×2), and USB terminals (×1).[107] By October 2019, the number of announced 5G devices had risen to 129, across 15 form factors, from 56 vendors.[108]

In the 5G IoT chipset arena, as of April 2019 there were four commercial 5G modem chipsets (Intel, MediaTek, Qualcomm, Samsung) and one commercial processor/platform, with more launches expected in the near future.[109]

On March 4, 2019, the first-ever all-5G smartphone, the Samsung Galaxy S10 5G, was released.
According to Business Insider, the 5G feature was showcased as more expensive in comparison with the 4G Samsung Galaxy S10e.[110] On March 19, 2020, HMD Global, the current maker of Nokia-branded phones, announced the Nokia 8.3 5G, which it claimed had a wider range of 5G compatibility than any other phone released to that time. The mid-range model is claimed to support all 5G bands from 600 MHz to 3.8 GHz.[111] Google Pixel smartphones support 5G starting with the 4a 5G and Pixel 5,[112] while Apple smartphones support 5G starting with the iPhone 12.[113][114]

The air interface defined by 3GPP for 5G is known as 5G New Radio (5G NR), and the specification is subdivided into two frequency bands, FR1 (below 6 GHz) and FR2 (24–54 GHz).

FR1, otherwise known as sub-6, has a maximum defined channel bandwidth of 100 MHz, due to the scarcity of continuous spectrum in this crowded frequency range. The band most widely used for 5G in this range is 3.3–4.2 GHz. The Korean carriers use the n78 band at 3.5 GHz. Some parties use the term "mid-band" to refer to the higher part of this frequency range, which was not used in previous generations of mobile communication.

The minimum channel bandwidth defined for FR2 is 50 MHz and the maximum is 400 MHz, with two-channel aggregation supported in 3GPP Release 15. Signals in this frequency range, with wavelengths between 4 and 12 mm, are called millimeter waves. The higher the carrier frequency, the greater the ability to support high data-transfer speeds, because a given channel bandwidth takes up a smaller fraction of the carrier frequency, so high-bandwidth channels are easier to realize at higher carrier frequencies.

5G in the 24 GHz range or above uses higher frequencies than 4G, and as a result some 5G signals are not capable of traveling large distances (over a few hundred meters), unlike 4G or lower-frequency 5G signals (sub-6 GHz).
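The point that a fixed channel bandwidth occupies a smaller fraction of a higher carrier frequency is simple arithmetic. The sketch below uses the maximum FR1 and FR2 channel widths quoted above, paired with assumed example carriers of 3.5 GHz (band n78) and 28 GHz:

```python
# Fractional bandwidth = channel bandwidth / carrier frequency.
# A 100 MHz channel is ~2.9% of a 3.5 GHz carrier, while a 400 MHz
# channel is only ~1.4% of a 28 GHz carrier, which is why wide
# channels are easier to realize at higher carrier frequencies.

def fractional_bandwidth(channel_mhz: float, carrier_ghz: float) -> float:
    return channel_mhz / (carrier_ghz * 1000.0)

print(f"FR1: {fractional_bandwidth(100, 3.5):.3%}")   # max FR1 channel at 3.5 GHz
print(f"FR2: {fractional_bandwidth(400, 28.0):.3%}")  # max FR2 channel at 28 GHz
```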
This requires placing 5G base stations every few hundred meters in order to use the higher frequency bands. Also, these higher-frequency 5G signals cannot penetrate solid objects easily, such as cars, trees, walls, and even humans, because of the nature of these higher-frequency electromagnetic waves. 5G cells can be deliberately designed to be as inconspicuous as possible, which finds applications in places like restaurants and shopping malls.[115]

MIMO (multiple-input and multiple-output) systems use multiple antennas at the transmitter and receiver ends of a wireless communication system. Multiple antennas use the spatial dimension for multiplexing in addition to the time and frequency dimensions, without changing the bandwidth requirements of the system. Spatial multiplexing gains allow for an increase in the number of transmission layers, thereby boosting system capacity. Massive MIMO antennas increase sector throughput and capacity density using large numbers of antennas. This includes single-user MIMO and multi-user MIMO (MU-MIMO). The antenna array can schedule users separately to satisfy their needs and beamform towards the intended users, minimizing interference.[116]

Edge computing is delivered by computing servers closer to the end user. It reduces latency and data-traffic congestion,[117][118] and can improve service availability.[119]

Small cells are low-powered cellular radio access nodes that operate in licensed and unlicensed spectrum with a range of 10 meters to a few kilometers. Small cells are critical to 5G networks, as 5G's radio waves cannot travel long distances because of 5G's higher frequencies.[120][121][122][123]

There are two kinds of beamforming (BF): digital and analog. Digital beamforming involves sending the data across multiple streams (layers), while analog beamforming shapes the radio waves to point in a specific direction.
The analog BF technique combines the power from elements of the antenna array in such a way that signals at particular angles experience constructive interference, while signals at other angles experience destructive interference. This improves signal quality in the specific direction, as well as data transfer speeds. 5G uses both digital and analog beamforming to improve system capacity.[124][125]

One expected benefit of the transition to 5G is the convergence of multiple networking functions to achieve cost, power, and complexity reductions. LTE has targeted convergence with the Wi-Fi band/technology via various efforts, such as License Assisted Access (LAA; a 5G signal in unlicensed frequency bands that are also used by Wi-Fi) and LTE-WLAN Aggregation (LWA; convergence with the Wi-Fi radio), but the differing capabilities of cellular and Wi-Fi have limited the scope of convergence. However, significant improvements in cellular performance specifications in 5G, combined with migration from a Distributed Radio Access Network (D-RAN) to a Cloud- or Centralized RAN (C-RAN) and the rollout of cellular small cells, can potentially narrow the gap between Wi-Fi and cellular networks in dense and indoor deployments. Radio convergence could result in sharing ranging from the aggregation of cellular and Wi-Fi channels to the use of a single silicon device for multiple radio access technologies.[126]

NOMA (non-orthogonal multiple access) is a proposed multiple-access technique for future cellular systems based on the allocation of power.[127]

Initially, cellular mobile communications technologies were designed in the context of providing voice services and Internet access. Today a new era of innovative tools and technologies is inclined towards developing a new pool of applications.
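The constructive and destructive interference behind the analog beamforming technique described earlier in this section can be illustrated with the array factor of a uniform linear array. The element count (8) and half-wavelength spacing below are arbitrary assumptions chosen for the sketch:

```python
import cmath
import math

def array_gain(n_elements: int, spacing_wavelengths: float,
               steer_deg: float, look_deg: float) -> float:
    """Magnitude of the array factor of a uniform linear array whose element
    phases are set to steer the beam toward steer_deg, evaluated at look_deg."""
    total = 0j
    for k in range(n_elements):
        # Residual per-element phase between the look and steer directions.
        phase = 2 * math.pi * spacing_wavelengths * k * (
            math.sin(math.radians(look_deg)) - math.sin(math.radians(steer_deg)))
        total += cmath.exp(1j * phase)
    return abs(total)

# 8 half-wavelength-spaced elements steered to 30 degrees:
print(array_gain(8, 0.5, 30, 30))  # coherent sum in the steered direction: 8.0
print(array_gain(8, 0.5, 30, 0))   # broadside lies in a null of this pattern (~0)
```

In the steered direction every element contributes in phase, so the amplitudes add coherently; in other directions the phasors partially or fully cancel, which is exactly the constructive/destructive interference the text describes.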
This pool of applications consists of different domains such as the Internet of Things (IoT), the web of connected autonomous vehicles, remotely controlled robots, and heterogeneous sensors connected to serve versatile applications.[128] In this context, network slicing has emerged as a key technology to efficiently embrace this new market model.[129]

The 5G Service-Based Architecture (SBA) replaces the reference-based architecture of the Evolved Packet Core that is used in 4G. The SBA breaks up the core functionality of the network into interconnected network functions (NFs), which are typically implemented as Cloud-Native Network Functions. These NFs register with the Network Repository Function (NRF), which maintains their state, and communicate with each other using the Service Communication Proxy (SCP). The interfaces between the elements all utilize RESTful APIs.[130] By breaking functionality down this way, mobile operators are able to utilize different infrastructure vendors for different functions, and gain the flexibility to scale each function independently as needed.[130]

In addition, the standard describes network entities for roaming and inter-network connectivity, including the Security Edge Protection Proxy (SEPP), the Non-3GPP InterWorking Function (N3IWF), the Trusted Non-3GPP Gateway Function (TNGF), the Wireline Access Gateway Function (W-AGF), and the Trusted WLAN Interworking Function (TWIF). These can be deployed by operators as needed, depending on their deployment.

The channel coding techniques for 5G NR have changed from turbo codes in 4G to polar codes for the control channels and LDPC (low-density parity-check) codes for the data channels.[132][133]

In December 2018, 3GPP began working on unlicensed spectrum specifications known as 5G NR-U, targeting 3GPP Release 16.[134] Qualcomm has made a similar proposal for LTE in unlicensed spectrum.
5G wireless power is a technology based on 5G standards that transfers power wirelessly.[135][136] It adheres to technical standards set by the 3rd Generation Partnership Project, the International Telecommunication Union, and the Institute of Electrical and Electronics Engineers. It utilizes extremely high frequency radio waves with wavelengths from one to ten millimeters, also known as mmWaves.[137][138] Up to 6 μW of power has been captured from 5G signals at a distance of 180 m by researchers at Georgia Tech.[135] Internet of things devices could benefit from 5G wireless power technology, given that their low power requirements are within the range of what has been achieved using 5G power capture.[139]

A report published by the European Commission and the European Union Agency for Cybersecurity details the security issues surrounding 5G. The report warns against using a single supplier for a carrier's 5G infrastructure, especially one based outside the European Union; Nokia and Ericsson are the only European manufacturers of 5G equipment.[140] On October 18, 2018, a team of researchers from ETH Zurich, the University of Lorraine, and the University of Dundee released a paper entitled "A Formal Analysis of 5G Authentication".[141][142] It warned that 5G technology could open ground for a new era of security threats. The paper described the technology as "immature and insufficiently tested," and one that "enables the movement and access of vastly higher quantities of data, and thus broadens attack surfaces". Simultaneously, network security companies such as Fortinet,[143] Arbor Networks,[144] A10 Networks,[145] and Voxility[146] advised on personalized and mixed security deployments against the massive DDoS attacks foreseen after 5G deployment.
IoT Analytics estimated an increase in the number of IoT devices, enabled by 5G technology, from 7 billion in 2018 to 21.5 billion by 2025.[147] This would raise the attack surface of these devices to a substantial scale, and the capacity for DDoS attacks, cryptojacking, and other cyberattacks could grow proportionally.[142] In addition, a design vulnerability has been identified in the EPS solution for 5G networks; it affects the operation of a device during cellular network switching.[148]

Due to fears that Chinese equipment vendors could enable espionage on their users, several countries (including the United States, Australia, and the United Kingdom as of early 2019)[149] have taken action to restrict or eliminate the use of Chinese equipment in their respective 5G networks. A 2012 report by the U.S. House Permanent Select Committee on Intelligence concluded that using equipment made by Huawei and ZTE, another Chinese telecommunications company, could "undermine core U.S. national security interests".[150] In 2018, six U.S. intelligence chiefs, including the directors of the CIA and FBI, cautioned Americans against using Huawei products, warning that the company could conduct "undetected espionage".[151] Further, a 2017 FBI investigation determined that Chinese-made Huawei equipment could disrupt U.S. nuclear arsenal communications.[152] Chinese vendors and the Chinese government have denied claims of espionage, but experts have pointed out that, under China's National Security Law, Huawei would have no choice but to hand over network data to the Chinese government if Beijing asked for it.[153]

In August 2020, the U.S. State Department launched "The Clean Network" as a U.S. government-led, bipartisan effort to address what it described as "the long-term threat to data privacy, security, human rights and principled collaboration posed to the free world from authoritarian malign actors".
Promoters of the initiative have stated that it has resulted in an "alliance of democracies and companies", "based on democratic values". On October 7, 2020, the UK Parliament's Defence Committee released a report claiming that there was clear evidence of collusion between Huawei, the Chinese state, and the Chinese Communist Party. The committee said that the government should consider removing all Huawei equipment from its 5G networks earlier than planned.[154] In December 2020, the United States announced that more than 60 nations, representing more than two thirds of the world's gross domestic product, and 200 telecom companies had publicly committed to the principles of The Clean Network. This alliance of democracies included 27 of the 30 NATO members, 26 of the 27 EU members, 31 of the 37 OECD nations, and 11 of the 12 Three Seas nations, as well as Japan, Israel, Australia, Singapore, Taiwan, Canada, Vietnam, and India.

The spectrum used by various 5G proposals, especially the n258 band centered at 26 GHz, will be near that of passive remote sensing by weather and Earth observation satellites, particularly for water vapor monitoring at 23.8 GHz.[155] Interference is expected to occur due to this proximity, and its effect could be significant without effective controls. An increase in interference has already occurred with some other nearby band usages.[156][157] Interference with satellite operations impairs numerical weather prediction performance, with substantially deleterious economic and public safety impacts in areas such as commercial aviation.[158][159] The concerns prompted U.S.
Secretary of Commerce Wilbur Ross and NASA Administrator Jim Bridenstine, in February 2019, to urge the FCC to delay some spectrum auction proposals; the request was rejected.[160] The chairs of the House Appropriations Committee and the House Science Committee wrote separate letters to FCC chairman Ajit Pai asking for further review and consultation with NOAA, NASA, and DoD, and warning of harmful impacts to national security.[161] Acting NOAA director Neil Jacobs testified before the House Committee in May 2019 that 5G out-of-band emissions could produce a 30% reduction in weather forecast accuracy, and that the resulting degradation in ECMWF model performance would have resulted in failure to predict the track, and thus the impact, of Superstorm Sandy in 2012. The United States Navy in March 2019 wrote a memorandum warning of deterioration, and made technical suggestions to control band bleed-over limits, for testing and fielding, and for coordination of the wireless industry and regulators with weather forecasting organizations.[162]

At the 2019 quadrennial World Radiocommunication Conference (WRC), atmospheric scientists advocated for a strong buffer of −55 dBW, European regulators agreed on a recommendation of −42 dBW, and US regulators (the FCC) recommended a restriction of −20 dBW, which would permit signals 150 times stronger than the European proposal.
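Because dBW is a logarithmic unit, the regulatory limits above can be compared directly: a difference of D dB corresponds to a power ratio of 10^(D/10). A short sketch of the arithmetic behind the "150 times stronger" figure:

```python
# Convert a difference between two emission limits in dBW to a linear
# power ratio: 10 ** ((a - b) / 10).
def power_ratio(db_a, db_b):
    """How many times more power the limit db_a permits than db_b."""
    return 10 ** ((db_a - db_b) / 10)

# FCC proposal (-20 dBW) vs. European recommendation (-42 dBW):
print(round(power_ratio(-20, -42)))  # → 158, i.e. roughly 150 times stronger
# Interim ITU limit (-33 dBW) vs. its post-2027 limit (-39 dBW):
print(round(power_ratio(-33, -39)))  # → 4
```

The second comparison shows why the delayed 2027 tightening matters: the interim limit allows about four times more emitted power than the eventual standard.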
The ITU decided on an intermediate −33 dBW until September 1, 2027, and a standard of −39 dBW after that.[163] This is closer to the European recommendation, but even the delayed, stricter standard is much weaker than that requested by atmospheric scientists, triggering warnings from the World Meteorological Organization (WMO) that the ITU standard, being 10 times less stringent than its own recommendation, brings the "potential to significantly degrade the accuracy of data collected".[164] A representative of the American Meteorological Society (AMS) also warned of interference,[165] and the European Centre for Medium-Range Weather Forecasts (ECMWF) sternly warned that society risks "history repeat[ing] itself" by ignoring atmospheric scientists' warnings (referencing global warming, the monitoring of which could be imperiled).[166] In December 2019, a bipartisan request was sent from the US House Science Committee to the Government Accountability Office (GAO) to investigate why there is such a discrepancy between the recommendations of US civilian and military science agencies and those of the regulator, the FCC.[167]

The United States FAA has warned that radar altimeters on aircraft, which operate between 4.2 and 4.4 GHz, might be affected by 5G operations between 3.7 and 3.98 GHz. This is particularly an issue with older altimeters using RF filters[168] that lack protection from neighboring bands.[169] This is not as much of an issue in Europe, where 5G uses lower frequencies, between 3.4 and 3.8 GHz.[170] Nonetheless, the DGAC in France has expressed similar worries and recommended that 5G phones be turned off or put in airplane mode during flights.[171]

On December 31, 2021, U.S. Transportation Secretary Pete Buttigieg and Steve Dickson, administrator of the Federal Aviation Administration, asked the chief executives of AT&T and Verizon to delay 5G implementation over aviation concerns.
The government officials asked for a two-week delay starting on January 5, 2022, while investigations were conducted into the effects on radar altimeters. They also asked the cellular providers to hold off on their new 5G service near 50 priority airports, to minimize disruption to air traffic that would be caused by some planes being disallowed from landing in poor visibility.[172] After coming to an agreement with government officials the day before,[173] Verizon and AT&T activated their 5G networks on January 19, 2022, except for certain towers near 50 airports.[174] AT&T scaled back its deployment even further than its agreement with the FAA required.[175] The FAA rushed to test and certify radar altimeters for interference so that planes could be allowed to perform instrument landings (e.g., at night and in low visibility) at affected airports. By January 16, it had certified equipment on 45% of the U.S. fleet, and 78% by January 20.[176] Airlines complained about the avoidable impact on their operations, and commentators said the affair called into question the competence of the FAA.[177] Several international airlines substituted different planes so they could avoid problems landing at scheduled airports, and about 2% of flights (320) had been cancelled by the evening of January 19.[178]

A number of 5G networks deployed in the 3.3–3.6 GHz radio frequency band are expected to cause interference with C-band satellite stations, which operate by receiving satellite signals at 3.4–4.2 GHz.[179] This interference can be mitigated with low-noise block downconverters and waveguide filters.[179]

In regions like the US and EU, the 6 GHz band is to be opened up for unlicensed applications, which would permit the deployment of 5G NR Unlicensed (the 5G counterpart of LTE in unlicensed spectrum), as well as Wi-Fi 6E.
However, interference could occur with the co-existence of different standards in the frequency band.[180]

There have been concerns surrounding the promotion of 5G, questioning whether the technology is overhyped. These include questions about whether 5G will truly change the customer experience,[181] the ability of 5G's mmWave signal to provide significant coverage,[182][183] overstatement of what 5G can achieve or misattribution of continuous technological improvement to "5G",[184] the lack of new use cases for carriers to profit from,[185] a misplaced focus on direct benefits to individual consumers instead of to Internet of Things devices or solving the last mile problem,[186] and the possibility that, in some respects, other technologies might be more appropriate.[187] Such concerns have also led consumers to distrust information provided by cellular providers on the topic.[188]

There is a long history of fear and anxiety surrounding wireless signals that predates 5G technology. The fears about 5G are similar to those that persisted throughout the 1990s and 2000s. According to the US Centers for Disease Control and Prevention (CDC), "exposure to intense, direct amounts of non-ionizing radiation may result in damage to tissue due to heat. This is not common and mainly of concern in the workplace for those who work on large sources of non-ionizing radiation devices and instruments."[189] Some advocates of fringe health claims assert that the regulatory standards are too low and influenced by lobbying groups.[190]

There have been rumors that 5G mobile phone use can cause cancer, but this is a myth.[191] Many popular books of dubious merit have been published on the subject,[additional citation(s) needed] including one by Joseph Mercola alleging that wireless technologies cause numerous conditions, from ADHD to heart disease and brain cancer.
Mercola has drawn sharp criticism for his anti-vaccinationism during the COVID-19 pandemic, and was warned by the Food and Drug Administration to stop selling fake COVID-19 cures through his online alternative medicine business.[190][192]

According to The New York Times, one origin of the 5G health controversy was an erroneous, unpublished study that physicist Bill P. Curry did for the Broward County School Board in 2000, which indicated that the absorption of external microwaves by brain tissue increased with frequency.[193] According to experts,[vague] this was wrong: the millimeter waves used in 5G are safer than lower-frequency microwaves because they cannot penetrate the skin and reach internal organs. Curry had confused in vitro and in vivo research. Nevertheless, Curry's study was widely distributed on the Internet. Writing in The New York Times in 2019, William Broad reported that RT America had begun airing programming linking 5G to harmful health effects which "lack scientific support", such as "brain cancer, infertility, autism, heart tumors, and Alzheimer's disease". Broad asserted that such claims had increased: RT America had run seven programs on this theme by mid-April 2019, but only one in the whole of 2018. The network's coverage had spread to hundreds of blogs and websites.[194]

In April 2019, the city of Brussels in Belgium blocked a 5G trial because of radiation rules.[195] In Geneva, Switzerland, a planned upgrade to 5G was stopped for the same reason.[196] The Swiss Telecommunications Association (ASUT) has said that studies have been unable to show that 5G frequencies have any health impact.[197] According to CNET,[198] "Members of Parliament in the Netherlands are also calling on the government to take a closer look at 5G. Several leaders in the United States Congress have written to the Federal Communications Commission expressing concern about potential health risks.
In Mill Valley, California, the city council blocked the deployment of new 5G wireless cells."[198][199][200][201][202] Similar concerns were raised in Vermont[203] and New Hampshire.[198] The US FDA is quoted as saying that it "continues to believe that the current safety limits for cellphone radiofrequency energy exposure remain acceptable for protecting the public health".[204] After campaigning by activist groups, a series of small localities in the UK, including Totnes, Brighton and Hove, Glastonbury, and Frome, passed resolutions against the implementation of further 5G infrastructure, though these resolutions have no impact on rollout plans.[205][206][207]

Low-level EMF does have some effects on other organisms.[208] Vian et al. (2006) found an effect of microwaves on gene expression in plants.[208] A meta-analysis of 95 in vitro and in vivo studies showed that, on average, 80% of the in vivo studies found effects of such radiation, as did 58% of the in vitro studies, but the results were inconclusive as to whether any of these effects pose a health risk.[209]

As the introduction of 5G technology coincided with the COVID-19 pandemic, several conspiracy theories circulating online posited a link between COVID-19 and 5G.[210] This has led to dozens of arson attacks on telecom masts in the Netherlands (Amsterdam, Rotterdam, etc.), Ireland (Cork,[211] etc.), Cyprus, the United Kingdom (Dagenham, Huddersfield, Birmingham, Belfast, and Liverpool),[212][213] Belgium (Pelt), Italy (Maddaloni), Croatia (Bibinje),[214] and Sweden.[215] There were at least 61 suspected arson attacks against telephone masts in the United Kingdom alone[216] and over twenty in the Netherlands. In the early months of the pandemic, anti-lockdown demonstrators at protests over responses to the COVID-19 pandemic in Australia were seen with anti-5G signs, an early sign of what became a wider campaign by conspiracy theorists to link the pandemic with 5G technology.
There are two versions of the 5G–COVID-19 conspiracy theory:[190]

In various parts of the world, carriers have launched numerous differently branded technologies, such as "5G Evolution", which advertise improving existing networks with the use of "5G technology".[217] However, these pre-5G networks are improvements on the specifications of existing LTE networks, not exclusive to 5G. While the technology promises to deliver higher speeds, and is described by AT&T as a "foundation for our evolution to 5G while the 5G standards are being finalized", it cannot be considered true 5G. When AT&T announced 5G Evolution, 4x4 MIMO, the technology AT&T uses to deliver the higher speeds, had already been put in place by T-Mobile without being branded with the 5G moniker. It is claimed that such branding is a marketing move that will cause confusion among consumers, as it is not made clear that such improvements are not true 5G.[218]

With the rollout of 5G, 4G has become more available and affordable, with the world's most developed countries having over 90% LTE coverage.[219] Because of this, 4G is still not obsolete even today.[220] 4G plans are sold alongside 5G plans on US carriers,[221] with 4G being cheaper than 5G.[222]

In April 2008, NASA partnered with Geoff Brown and Machine-to-Machine Intelligence (M2Mi) Corp to develop a fifth generation communications technology approach, though one largely concerned with nanosats.[223] That same year, the South Korean IT R&D program "5G mobile communication systems based on beam-division multiple access and relays with group cooperation" was formed.[224]

In August 2012, New York University founded NYU Wireless, a multi-disciplinary academic research centre that has conducted pioneering work in 5G wireless communications.[225] On October 8, 2012, the UK's University of Surrey secured £35M for a new 5G research centre, jointly funded by the British government's UK Research Partnership Investment Fund (UKRPIF) and a
consortium of key international mobile operators and infrastructure providers, including Huawei, Samsung, Telefónica Europe, Fujitsu Laboratories Europe, Rohde & Schwarz, and Aircom International. It will offer testing facilities to mobile operators keen to develop a mobile standard that uses less energy and less radio spectrum while delivering speeds higher than current 4G, with aspirations for the new technology to be ready within a decade.[226][227][228][229]

On November 1, 2012, the EU project "Mobile and wireless communications Enablers for the Twenty-twenty Information Society" (METIS) started its activity toward the definition of 5G. METIS achieved an early global consensus on these systems. In this sense, METIS played an important role in building consensus among other external major stakeholders prior to global standardization activities. This was done by initiating and addressing work in relevant global fora (e.g., ITU-R), as well as in national and regional regulatory bodies.[230] That same month, the iJOIN EU project was launched, focusing on "small cell" technology, which is of key importance for taking advantage of limited and strategic resources such as the radio wave spectrum. According to Günther Oettinger, the European Commissioner for Digital Economy and Society (2014–2019), "an innovative utilization of spectrum" is one of the key factors at the heart of 5G success. Oettinger further described it as "the essential resource for the wireless connectivity of which 5G will be the main driver".[231] iJOIN was selected by the European Commission as one of the pioneering 5G research projects to showcase early results on this technology at the Mobile World Congress 2015 (Barcelona, Spain).

In February 2013, ITU-R Working Party 5D (WP 5D) started two study items: (1) a study on the IMT vision for 2020 and beyond, and (2) a study on future technology trends for terrestrial IMT systems.
Both aimed at developing a better understanding of the future technical aspects of mobile communications, toward the definition of the next generation of mobile systems.[232] On May 12, 2013, Samsung Electronics stated that it had developed a "5G" system. The core technology has a maximum speed of tens of Gbit/s (gigabits per second). In testing, the "5G" network sent data at 1.056 Gbit/s over a distance of up to 2 kilometers using 8×8 MIMO.[233][234] In July 2013, India and Israel agreed to work jointly on the development of fifth generation (5G) telecom technologies.[235] On October 1, 2013, NTT (Nippon Telegraph and Telephone), the same company that would launch the world's first 5G network in Japan, won the Minister of Internal Affairs and Communications Award at CEATEC for its 5G R&D efforts.[236] On November 6, 2013, Huawei announced plans to invest a minimum of $600 million into R&D for next-generation 5G networks capable of speeds 100 times higher than modern LTE networks.[237]

On April 3, 2019, South Korea became the first country to adopt 5G.[238] Just hours later, Verizon launched its 5G services in the United States and disputed South Korea's claim of being the world's first country with a 5G network, because, allegedly, South Korea's 5G service was launched initially for just six South Korean celebrities so that South Korea could claim the title of having the world's first 5G network.[239] In fact, the three main South Korean telecommunication companies (SK Telecom, KT, and LG Uplus) added more than 40,000 users to their 5G networks on launch day.[240] In June 2019, the Philippines became the first country in Southeast Asia to roll out a 5G broadband network after Globe Telecom commercially launched 5G data plans to customers.[241] AT&T brought 5G service to consumers and businesses in December 2019, ahead of plans to offer 5G throughout the United States in the first half of 2020.[242][243][244] In 2020, AIS and TrueMove H launched 5G services in Thailand, making it the
first country in Southeast Asia to have commercial 5G.[245][246] A functional mockup of a Russian 5G base station, developed by domestic specialists as part of Rostec's digital division Rostec.digital, was presented in Nizhny Novgorod at the annual conference "Digital Industry of Industrial Russia".[247][248] 5G speeds have declined in many countries since 2022, which has driven the development of 5.5G to increase connection speeds.[249]
https://en.wikipedia.org/wiki/5G-Advanced
The 3rd Generation Partnership Project (3GPP) is an umbrella term for a number of standards organizations which develop protocols for mobile telecommunications. Its best known work is the development and maintenance of:[1]

3GPP is a consortium with seven national or regional telecommunication standards organizations as primary members ("organizational partners") and a variety of other organizations as associate members ("market representation partners"). The 3GPP organizes its work into three different streams: Radio Access Networks, Services and Systems Aspects, and Core Network and Terminals.[2]

The project was established in December 1998 with the goal of developing a specification for a 3G mobile phone system based on the 2G GSM system, within the scope of the International Telecommunication Union's International Mobile Telecommunications-2000, hence the name 3GPP.[3] It should not be confused with the 3rd Generation Partnership Project 2 (3GPP2), which developed a competing 3G system, CDMA2000.[4] The 3GPP administrative support team (known as the "Mobile Competence Centre") is located at the European Telecommunications Standards Institute headquarters in the Sophia Antipolis technology park in France.[5]

The seven 3GPP Organizational Partners are from Asia, Europe, and North America. Their aim is to determine the general policy and strategy of 3GPP and perform the following tasks: Together with the Market Representation Partners (MRPs), they perform the following tasks: The Organizational Partners are:[6] The 3GPP Organizational Partners can invite a Market Representation Partner to take part in 3GPP, which: As of January 2025[update], the Market Representation Partners are:[6]

3GPP standards are structured as releases. Discussion of 3GPP thus frequently refers to the functionality in one release or another.
TSG SA groups focused on further enhancements to the 5G system and enablers for new features and services, including enhanced support of: non-public networks, industrial Internet of Things, low-complexity NR devices, edge computing in 5GC, access traffic steering, switching and splitting support, network automation for 5G, network slicing, advanced V2X services, multiple-USIM support, proximity-based services in 5GS, 5G multicast-broadcast services, Unmanned Aerial Systems (UAS), satellite access in 5G, 5GC location services, Multimedia Priority Service, and more.[19]

Each release incorporates hundreds of individual Technical Specification and Technical Report documents, each of which may have been through many revisions. Current 3GPP standards incorporate the latest revision of the GSM standards. The documents are made available without charge on 3GPP's web site. The Technical Specifications cover not only the radio part ("air interface") and core network, but also billing information and speech coding, down to source code level. Cryptographic aspects (such as authentication and confidentiality) are also specified.

The 3GPP specification work is done in Technical Specification Groups (TSGs) and Working Groups (WGs).[23] There are three Technical Specification Groups, each of which consists of multiple WGs. The closure of GERAN was announced in January 2016.[24] The specification work on the legacy GSM/EDGE system was transferred to a RAN WG, RAN6. RAN6 was closed in July 2020 (https://www.3gpp.org/news-events/2128-r6_geran). The 3GPP structure also includes a Project Coordination Group, which is the highest decision-making body. Its missions include the management of the overall timeframe and work progress.

3GPP standardization work is contribution-driven. Companies ("individual members") participate through their membership in a 3GPP Organizational Partner.
As of December 2020, 3GPP is composed of 719 individual members.[25] Specification work is done at WG and at TSG level.[26] 3GPP follows a three-stage methodology as defined in ITU-T Recommendation I.130.[27] Test specifications are sometimes defined as stage 4, as they follow stage 3. Specifications are grouped into releases. A release consists of an internally consistent set of features and specifications. Timeframes are defined for each release by specifying freezing dates. Once a release is frozen, only essential corrections are allowed (i.e., additions and modifications of functions are forbidden). Freezing dates are defined for each stage. The 3GPP specifications are transposed into deliverables by the Organizational Partners.
https://en.wikipedia.org/wiki/3GPP
Cellular frequencies are the sets of frequency ranges within the ultra high frequency band that have been assigned for cellular-compatible mobile devices, such as mobile phones, to connect to cellular networks.[1] Most mobile networks worldwide use portions of the radio frequency spectrum, allocated to the mobile service, for the transmission and reception of their signals. The particular bands may also be shared with other radiocommunication services, e.g., the broadcasting service and fixed service operation. Radio frequencies used for cellular networks differ across ITU Regions (Americas, Europe, Africa, and Asia).

The first commercial standard for mobile connection in the United States was AMPS, which used the 800 MHz frequency band. In the Nordic countries of Europe, the first widespread automatic mobile network was based on the NMT-450 standard, which used the 450 MHz band. As mobile phones became more popular and affordable, mobile providers encountered a problem: they could not provide service to the increasing number of customers. They had to develop their existing networks and eventually introduce new standards, often based on other frequencies. Some European countries (and Japan) adopted TACS, operating at 900 MHz. The GSM standard, which appeared in Europe to replace NMT-450 and other standards, initially used the 900 MHz band too. As demand grew, carriers acquired licenses in the 1,800 MHz band. (Generally speaking, lower frequencies allow carriers to provide coverage over a larger area, while higher frequencies allow carriers to provide service to more customers in a smaller area.)

In the U.S., the analog AMPS standard that used the cellular band (800 MHz) was replaced by a number of digital systems. Initially, systems based upon the AMPS mobile phone model were popular, including IS-95 (often known as "CDMA", after the air interface technology it uses) and IS-136 (often known as D-AMPS, Digital AMPS, or "TDMA", after the air interface technology it uses).
Eventually, IS-136 on these frequencies was replaced by most operators with GSM. GSM had already been running for some time on US PCS (1,900 MHz) frequencies. And some NMT-450 analog networks have been replaced with digital networks using the same frequency. In Russia and some other countries, local carriers received licenses for the 450 MHz frequency to provide CDMA mobile coverage.

Many GSM phones support three bands (900/1,800/1,900 MHz or 850/1,800/1,900 MHz) or four bands (850/900/1,800/1,900 MHz), and are usually referred to as tri-band and quad-band phones, or world phones; with such a phone one can travel internationally and use the same handset. This portability is not as extensive with IS-95 phones, however, as IS-95 networks do not exist in most of Europe.

Mobile networks based on different standards may use the same frequency range; for example, AMPS, D-AMPS, N-AMPS, and IS-95 all use the 800 MHz frequency band. Moreover, one can find both AMPS and IS-95 networks in use on the same frequency in the same area without interfering with each other. This is achieved by the use of different channels to carry data. The actual frequency used by a particular phone can vary from place to place, depending on the settings of the carrier's base station.
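The coverage-versus-capacity trade-off noted above (lower frequencies reach farther) can be made concrete with the free-space path loss formula, FSPL(dB) = 32.44 + 20·log10(f in MHz) + 20·log10(d in km). This is a simplified sketch: real-world coverage also depends on terrain, building penetration, and antenna design.

```python
import math

# Free-space path loss in dB for a frequency in MHz and a distance in km.
# The constant 32.44 absorbs the 4*pi/c factor for these units.
def fspl_db(freq_mhz, dist_km):
    return 32.44 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

# At 10 km, an 1,800 MHz signal suffers about 6 dB more free-space loss
# than a 900 MHz one; each doubling of frequency adds ~6 dB.
print(round(fspl_db(900, 10), 1))   # → 111.5 dB
print(round(fspl_db(1800, 10), 1))  # → 117.5 dB
```

A 6 dB deficit means roughly a quarter of the received power, which is why carriers historically used 900 MHz for wide-area coverage and 1,800 MHz to add capacity in dense areas.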
https://en.wikipedia.org/wiki/Cellular_frequencies
The Global Positioning System (GPS) is a satellite-based hyperbolic navigation system owned by the United States Space Force and operated by Mission Delta 31.[2][3] It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.[4] It does not require the user to transmit any data, and it operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of GPS positioning information.[5] It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls, and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.[6]

The GPS project was started by the U.S. Department of Defense in 1973.[7] The first prototype spacecraft was launched in 1978, and the full constellation of 24 satellites became operational in 1993.[7] After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan announced that the GPS system would be made available for civilian use as of September 16, 1983;[8] however, this civilian use was initially limited to an average accuracy of 100 meters (330 ft) through Selective Availability (SA), a deliberate error introduced into the GPS data that military receivers could correct for. As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home.
In the 1990s, Differential GPS systems from the US Coast Guard, the Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (which military receivers also corrected for). The U.S. military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, United States President Bill Clinton signed a bill ordering that Selective Availability be disabled on May 1, 2000;[9] and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all. Advances in technology and new demands on the existing system have since led to efforts to modernize GPS and implement the next generation of GPS Block III satellites and the Next Generation Operational Control System (OCX),[10] which was authorized by the U.S. Congress in 2000.

When Selective Availability was discontinued, GPS was accurate to about 5 meters (16 ft). GPS receivers that use the L5 band have much higher accuracy, of 30 centimeters (12 in), while those for high-end applications such as engineering and land surveying are accurate to within 2 cm (3⁄4 in) and can even provide sub-millimeter accuracy with long-term measurements.[9][11][12] Consumer devices such as smartphones can be accurate to 4.9 m (16 ft) or better when used with assistive services like Wi-Fi positioning.[13] As of July 2023[update], 18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.[14]

The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems,[15] combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S.
Department of Defense developed the system, which originally used 24 satellites, for use by the United States military, and it became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it.[16] The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.[17][18] The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator System, developed in the early 1940s. In 1955, Friedwardt Winterberg proposed a test of general relativity: detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predicted that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference, because without doing so, GPS-calculated positions would accumulate errors of up to 10 kilometers per day (6 mi/d).[19] When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL), monitored its radio transmissions.[20] Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC I computer to perform the heavy calculations required. Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's.
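The 38 microseconds per day figure combines two opposing effects: special-relativistic time dilation from the satellite's orbital speed (clocks run slower) and the weaker gravitational potential at orbital altitude (clocks run faster). A rough back-of-the-envelope check, using assumed textbook constants rather than values from this article:

```python
import math

# Assumed physical constants (textbook values, not from the article)
C = 299_792_458.0        # speed of light, m/s
MU = 3.986004418e14      # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.66e7         # GPS orbital radius, m
DAY = 86_400.0           # seconds per day

# Special relativity: circular-orbit speed v = sqrt(mu/r) slows the clock
v = math.sqrt(MU / R_ORBIT)
sr_us_per_day = -(v**2 / (2 * C**2)) * DAY * 1e6   # microseconds per day

# General relativity: higher gravitational potential speeds the clock up
gr_us_per_day = (MU / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY * 1e6

net = sr_us_per_day + gr_us_per_day
print(f"special relativity: {sr_us_per_day:+.1f} us/day")
print(f"general relativity: {gr_us_per_day:+.1f} us/day")
print(f"net:                {net:+.1f} us/day")   # close to +38 us/day
```

The net figure multiplied by the speed of light also recovers the quoted position drift: roughly 38 microseconds per day corresponds to about 11 km of accumulated range error per day.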
(At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system.[21] In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.[22][23][24] TRANSIT was first successfully tested in 1960.[25] It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.[26] In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations,[27] became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy. Although there were wide needs for accurate navigation in military and civilian sectors, almost none of these was seen as justification for the billions of dollars it would cost to research, develop, deploy, and operate a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs.[28] The USAF, with two-thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25), so the need to fix the launch position was similar to the SLBM situation. In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN system. A follow-on study, Project 57, was performed in 1963, and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS"[29] and promised increased accuracy for U.S. Air Force bombers as well as ICBMs. Updates from the Navy's TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites: the first launched in 1967, the second in 1969, the third in 1974, carrying the first atomic clock into orbit, and the fourth in 1977.[30] Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite, used for geodetic surveying.[31] The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely.
The last SECOR satellite was launched in 1969.[32] With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L. Jury of Pan Am Aerospace Division in Florida from 1970 to 1973 used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.[33] During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar.[34] Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging", but it was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym).[35] With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites: Navstar-GPS.[36] Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).[37] The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of the Air Force Cambridge Research Laboratory, renamed the Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location.[38] Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974.
She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.[39] After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying into prohibited airspace because of navigational errors,[40] in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good.[41] The first Block II satellite was launched on February 14, 1989,[42] and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion (equivalent to $11 billion in 2024).[43] Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, with U.S. President Bill Clinton signing a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis.[44] Selective Availability was removed from the GPS architecture beginning with GPS III. Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S.
Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market. As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than 3.5 meters (11 ft),[9] although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy. GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning GPS and related systems.[45] The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff, and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison. The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses". (USA-203 from Block IIR-M is unhealthy.[50] For a more complete list, see List of GPS satellites.) On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the U.S.
Air Force, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago". Two GPS developers received the National Academy of Engineering's Charles Stark Draper Prize for 2003. GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006.[70] Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, on March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B. In 1998, GPS technology was inducted into the Space Foundation's Space Technology Hall of Fame.[71] On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member the American Institute for Aeronautics and Astronautics (AIAA).
The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.[72] On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation.[73][74] On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering, with the chair of the awarding board stating: "Engineering is the foundation of civilisation; ... They've re-written, in a major way, the infrastructure of our world."[75] The GPS satellites carry very stable atomic clocks that are synchronized with one another and with reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly.[76] Since the speed of radio waves (the speed of light)[77] is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision. Each GPS satellite carries an accurate record of its own position and time,[78] and broadcasts that data continuously.
Based on data received from multiple GPS satellites, an end user's GPS receiver can calculate its own four-dimensional position in spacetime; however, at a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).[79] Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes: Conceptually, the receiver measures the times of arrival (TOAs, according to its own clock) of four satellite signals. From the TOAs and the times of transmission (TOTs), the receiver forms four time-of-flight (TOF) values, which are (given the speed of light) approximately equivalent to receiver-satellite ranges plus the time difference between the receiver and the GPS satellites multiplied by the speed of light; these are called pseudoranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs. In practice, the receiver position (in three-dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to GPS time are computed simultaneously, using the navigation equations to process the TOFs. The receiver's Earth-centered solution location is usually converted to latitude, longitude, and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system. Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid.
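The navigation equations described above are commonly solved iteratively: linearize the pseudorange model around the current guess of position and clock bias, then apply Gauss-Newton updates. The sketch below is a minimal illustration, not an actual receiver implementation; all satellite positions and the 1 ms clock bias are made-up values for the demonstration.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solve of the GPS navigation equations.

    sat_pos:       (n, 3) satellite ECEF positions in meters (n >= 4)
    pseudoranges:  (n,) measured pseudoranges in meters
    Returns the (x, y, z) position in meters and the clock bias in seconds.
    """
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)  # geometric ranges
        predicted = rho + C * x[3]                      # pseudorange model
        residual = pseudoranges - predicted
        # Jacobian: negative unit line-of-sight vectors + clock-bias column
        H = np.hstack([-(sat_pos - x[:3]) / rho[:, None],
                       np.full((len(rho), 1), C)])
        x += np.linalg.lstsq(H, residual, rcond=None)[0]
    return x[:3], x[3]

# Hypothetical scenario: four satellites at GPS-like radii, a receiver on
# the surface, and a 1 ms receiver clock bias baked into the measurements.
truth = np.array([6.371e6, 0.0, 0.0])
bias = 1e-3
sats = np.array([[2.66e7, 0, 0], [0, 2.66e7, 0],
                 [0, 0, 2.66e7], [1.6e7, 1.6e7, 1.3e7]])
pr = np.linalg.norm(sats - truth, axis=1) + C * bias
est_pos, est_bias = solve_position(sats, pr)
print(np.round(est_pos), est_bias)  # recovers the position and the 1 ms bias
```

With four satellites the system is exactly determined; real receivers use the same least-squares form with more satellites for redundancy.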
The receiver is located at the point where three hyperboloids intersect.[80][81] It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase. The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times, in effect taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements is processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction. The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement.
GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately.[82] More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS. GPS requires four or more satellites to be visible for accurate navigation.[83] The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time of day, thereby eliminating the need for a more precise, and possibly impractical, receiver-based clock. Applications for GPS such as time transfer, traffic signal timing, and synchronization of cell phone base stations make use of this cheap and highly accurate timing. Some GPS applications use this time for display, or, other than for the basic position calculations, do not use it at all. Although four satellites are required for normal operation, fewer are needed in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known.[a] Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.[84][85][86] The current GPS consists of three major segments: the space segment, the control segment, and the user segment.[55] The U.S. Space Force develops, maintains, and operates the space and control segments.
GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.[87] The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits,[88] but this was modified to six orbital planes with four satellites each.[89] The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection).[90] The orbital period is one-half of a sidereal day, i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations[91] or almost the same locations[92] every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface.[93] The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular differences between satellites in each orbit are 30°, 105°, 120°, and 105°, which sum to 360°.[94] Orbiting at an altitude of approximately 20,200 km (12,600 mi), with an orbital radius of approximately 26,600 km (16,500 mi),[95] each SV makes two complete orbits each sidereal day, repeating the same ground track each day.[96] This was very helpful during development, because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
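The half-sidereal-day period and the quoted orbital radius are consistent with each other, which a quick Kepler's-third-law check illustrates (the gravitational parameter is an assumed textbook value, not from the article):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2 (assumed)
R_ORBIT = 2.66e7      # orbital radius from the article, m

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(R_ORBIT**3 / MU)
hours = period_s / 3600
print(f"orbital period = {hours:.2f} h")  # about 12 h, half a sidereal day
```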
As of February 2019,[97] there are 31 satellites in the GPS constellation, 27 of which are in use at a given time, with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 was still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy, but it also improves reliability and availability of the system, relative to a uniform arrangement, when multiple satellites fail.[98] With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position. The control segment (CS) is composed of: The master control station (MCS) can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado, and Cape Canaveral, Florida, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia, and Washington, DC.[99] The tracking information is sent to the MCS at Schriever Space Force Base, 25 km (16 mi) ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral).
These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.[100] When a satellite's orbit is being adjusted, the satellite is marked unhealthy, so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again. The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification. OCS replaced the 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provided a foundation for a new security architecture that supported U.S. armed forces. OCS will continue to be the ground control system of record until the new segment, the Next Generation GPS Operation Control System[10] (OCX), is fully developed and functional. The U.S. Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for enhancing GPS's mission capabilities, enabling the U.S. Space Force to enhance GPS operational services to U.S. combat forces, civil partners, and domestic and international users.[101][102] The GPS OCX program will also reduce cost, schedule, and technical risk. It is designed to provide 50%[103] sustainment cost savings through efficient software architecture and performance-based logistics. In addition, GPS OCX is expected to cost millions of dollars less than the cost to upgrade OCS while providing four times the capability. The GPS OCX program represents a critical part of GPS modernization and provides information assurance improvements over the current GPS OCS program.
On September 14, 2011,[104] the U.S. Air Force announced the completion of the GPS OCX preliminary design review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program missed major milestones and pushed its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, even the 2021 deadline looked shaky.[105] The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget.[106][107] In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.[108] The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial, and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user. GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. As of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers. Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol.
Although this protocol is officially defined by the National Marine Electronics Association (NMEA),[109] references to this protocol have been compiled from public records, allowing open-source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth. While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well. GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well-synchronized hand-off switching.[87] Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer. The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 60,000 ft (18 km) above sea level and 1,000 kn (500 m/s; 2,000 km/h; 1,000 mph), or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons), which means they require State Department export licenses.[140] This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code. Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach 30 km (100,000 ft). These limits apply only to units or components exported from the United States.
A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free. As of 2009, military GPS applications include: GPS-type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces to navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to being jammed, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.[147] GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grow.[148][149] GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this approach, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting its GLONASS alternative, disrupting Western military exercises, and protecting assets from drones.[150] China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands.[151] North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping, and fishing operations.[152] The Iranian Armed Forces disrupted the GPS of civilian airliner flight PS752 when they shot down the aircraft.[153][154] In the Russo-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.[155] While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time.
The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain new leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset from International Atomic Time (TAI): TAI − GPS = 19 seconds. Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.[85]: Section 1.2.2  The GPS navigation message includes the difference between GPS time and UTC. As of January 2017, GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016.[156] Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits). GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience.[157] Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.[158][159] GPS implements two major corrections to its time signals for relativistic effects: one for the relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.[160][161] As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number.
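The offset arithmetic above can be sketched directly. The 18-second value is the broadcast GPS-UTC offset as of January 2017; a real receiver would take it from the navigation message rather than hard-coding it:

```python
# Relationships described above (offsets in whole seconds):
#   TAI - GPS = 19 s  (fixed)
#   GPS - UTC = broadcast leap-second count (18 s as of January 2017)
TAI_MINUS_GPS = 19

def gps_to_utc_seconds(gps_seconds: int, leap_offset: int = 18) -> int:
    """Convert a GPS timestamp to UTC by subtracting the broadcast offset."""
    return gps_seconds - leap_offset

def gps_to_tai_seconds(gps_seconds: int) -> int:
    """GPS time is a constant 19 s behind TAI."""
    return gps_seconds + TAI_MINUS_GPS

t_gps = 1_000_000_000             # arbitrary second count on the GPS scale
print(gps_to_utc_seconds(t_gps))  # 999999982
print(gps_to_tai_seconds(t_gps))  # 1000000019
```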
The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened for the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future, the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero). The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower-resolution navigation, and an encrypted encoding used by the U.S. military.[162] Each GPS satellite continuously broadcasts a navigation message on the L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12 1⁄2 minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long.
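The frame arithmetic just described can be spelled out directly; every figure below comes from the message structure above.

```python
# Navigation-message arithmetic: 30-bit words, 10 words per subframe,
# 5 subframes per frame, 25 frames per complete message, at 50 bit/s.
BIT_RATE = 50                    # bits per second
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME       # 300 bits (6 s)
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME     # 1,500 bits (30 s)
message_bits = frame_bits * FRAMES_PER_MESSAGE       # 37,500 bits

frame_seconds = frame_bits / BIT_RATE                # 30.0 s per frame
message_seconds = message_bits / BIT_RATE            # 750.0 s, i.e. 12.5 min
```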
At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message. Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.[163] The first subframe of each frame encodes the week number and the time within the week,[164] as well as the data about the health of the satellite. The second and third subframes contain the ephemeris, the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or 12 1⁄2 minutes.[165] All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.[166] The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is typically updated every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload. All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal).
The satellite network uses a CDMA spread-spectrum technique[167]: 607  where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects[168][169] that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code.[94] The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time of day to the user. The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space.[170] One usage is the enforcement of nuclear test ban treaties. The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.[167]: 607  The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite to provide this signal was launched in May 2010.[171] On February 5, 2016, the 12th and final Block IIF satellite was launched.[172] The L5 signal consists of two carrier components that are in phase quadrature with each other.
Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."[173] In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003, and this was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the effect of the lower 10 MHz of spectrum on GPS devices is minimal (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some effect on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses.[174][175] Aviation Week magazine reports that the latest testing (June 2011) confirms "significant jamming" of GPS by LightSquared's system.[176] Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code.
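The separation idea can be sketched numerically: correlating the received chip stream against a satellite's known code yields a strong peak only for that satellite. Note the random ±1 sequences below are toy stand-ins, not real 1,023-chip Gold codes (which are generated from shift registers and have guaranteed low cross-correlation).

```python
# Sketch of CDMA separation on a shared carrier. A matching code
# recovers the data bit exactly; an unrelated code correlates to
# roughly zero. Toy codes here, not actual GPS Gold codes.
import random

def correlate(received, code):
    """Normalized correlation of two equal-length +/-1 chip sequences."""
    return sum(r * c for r, c in zip(received, code)) / len(code)

rng = random.Random(42)
code_a = [rng.choice((-1, 1)) for _ in range(1023)]
code_b = [rng.choice((-1, 1)) for _ in range(1023)]

data_bit = -1                          # one navigation-message bit
received = [data_bit * c for c in code_a]

peak = correlate(received, code_a)     # exactly the data bit: -1.0
cross = correlate(received, code_b)    # near zero for an unrelated code
```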
The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.[177][178] If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data. Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced. The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent (s) are designated as [xi, yi, zi, si], where the subscript i denotes the satellite and has the value 1, 2, ..., n, where n ≥ 4. When the time of message reception indicated by the on-board receiver clock is t̃i, the true reception time is ti = t̃i − b, where b is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is t̃i − b − si, where si is the satellite time.
Assuming the message traveled at the speed of light, c, the distance traveled is (t̃i − b − si) c. For n satellites, the equations to satisfy are: di = (t̃i − b − si) c, where di is the geometric distance or range between receiver and satellite i (the values without subscripts are the x, y, and z components of receiver position): di = √((x − xi)² + (y − yi)² + (z − zi)²). Defining pseudoranges as pi = (t̃i − si) c, we see they are biased versions of the true range: pi = di + bc. Since the equations have four unknowns [x, y, z, b] (the three components of GPS receiver position and the clock bias), signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abell and Chaffee.[80] When n is greater than four, this system is overdetermined and a fitting method must be used. The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used.[181] The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.[182] The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods. The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites.
The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from a minimum of three satellites are required, and their three spheres would typically intersect at two points.[183] One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface. In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be. If the pseudorange between the receiver and satellite i and the pseudorange between the receiver and satellite j are subtracted, pi − pj, the common receiver clock bias (b) cancels out, resulting in a difference of distances di − dj. The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids, each with foci at a pair of satellites.
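The pseudorange relations above can be checked numerically: pi = di + bc, and differencing two pseudoranges cancels the common clock-bias term. The satellite and receiver coordinates and the bias below are made-up illustrative values, not figures from the article.

```python
# Numeric sketch of p_i = d_i + b*c and of bias cancellation under
# differencing: p_i - p_j = d_i - d_j. All coordinates are invented.
import math

C = 299_792_458.0                      # speed of light, m/s

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

receiver = (1.0e6, 2.0e6, 6.0e6)       # metres, ECEF-style coordinates
bias_s = 1.0e-4                        # receiver clock bias b, seconds

satellites = [(15e6, 0.0, 21e6), (-12e6, 18e6, 15e6), (20e6, -5e6, 12e6)]
true_ranges = [distance(receiver, s) for s in satellites]
pseudoranges = [d + bias_s * C for d in true_ranges]

# The common bias term b*c drops out of any pairwise difference:
diff_pseudo = pseudoranges[0] - pseudoranges[1]
diff_true = true_ranges[0] - true_ranges[1]    # equal to diff_pseudo
```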
With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.[80][81][184][185][186][187] The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius bc, given by the receiver clock bias b (scaled by the speed of light c). The insphere location is such that it touches the other spheres. The circumscribing spheres are centered at the GPS satellites, whose radii equal the measured pseudoranges pi. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges di.[186]: 36–37 [188] The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensional spacetime, and signals from a minimum of four satellites are needed. In that case, each of the equations describes a hypercone (or spherical cone),[189] with the cusp located at the satellite and the base a sphere around the satellite. The receiver is at the intersection of four or more such hypercones. When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least-squares method.[179] Both the equations for four satellites and the least-squares equations for more than four are non-linear and need special solution methods. A common approach is iteration on a linearized form of the equations, such as the Gauss–Newton algorithm. The GPS was initially developed assuming use of a numerical least-squares solution method, i.e., before closed-form solutions were found. One closed-form solution to the above set of equations was developed by S. Bancroft.[180][190] Its properties are well known;[80][81][191] in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least-squares methods.[190] Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4×4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.[180] When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least-squares problems, generally provide more accurate solutions.[192] Leick et al. (2015) state that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."[193] Other closed-form solutions were published afterwards,[194][195] although their adoption in practice is unclear. GPS error analysis examines error sources in GPS results and the expected size of those errors.
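A minimal sketch of the iterative linearized (Gauss–Newton) approach is given below, solving the pseudorange equations for [x, y, z, bc]. The satellite positions, receiver position, and clock bias are synthetic, noise-free values chosen only for illustration; a real receiver adds measurement weighting, atmospheric corrections, and better-conditioned linear algebra.

```python
# Gauss-Newton iteration on the linearized pseudorange equations
# p_i = d_i(x, y, z) + b*c. Synthetic noise-free data; invented geometry.
import math

def norm3(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def solve_position(sats, pseudoranges, iters=10):
    x = [0.0, 0.0, 0.0, 0.0]           # start at Earth's centre, zero bias
    n = 4
    for _ in range(iters):
        A, r = [], []
        for s, p in zip(sats, pseudoranges):
            diff = [x[k] - s[k] for k in range(3)]
            d = norm3(diff)
            r.append(p - (d + x[3]))   # residual of p = d + b*c
            A.append([diff[k] / d for k in range(3)] + [1.0])  # Jacobian row
        # Normal equations (A^T A) dx = A^T r, by Gaussian elimination
        # on the augmented 4x5 matrix M.
        M = [[sum(row[a] * row[b] for row in A) for b in range(n)]
             + [sum(A[i][a] * r[i] for i in range(len(A)))] for a in range(n)]
        for col in range(n):
            piv = max(range(col, n), key=lambda i: abs(M[i][col]))
            M[col], M[piv] = M[piv], M[col]
            for i in range(col + 1, n):
                f = M[i][col] / M[col][col]
                for j in range(col, n + 1):
                    M[i][j] -= f * M[col][j]
        dx = [0.0] * n
        for i in range(n - 1, -1, -1):
            dx[i] = (M[i][n] - sum(M[i][j] * dx[j]
                                   for j in range(i + 1, n))) / M[i][i]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Synthetic scenario: five satellites, a receiver, and a 30 km
# clock-bias equivalent (b*c), from which exact pseudoranges are built.
sats = [(15e6, 0.0, 21e6), (-15e6, 0.0, 21e6), (0.0, 15e6, 21e6),
        (0.0, -15e6, 21e6), (10e6, 10e6, 20e6)]
receiver = (1.0e6, 2.0e6, 6.0e6)
bias_m = 3.0e4
pseudoranges = [norm3([receiver[k] - s[k] for k in range(3)]) + bias_m
                for s in sats]
estimate = solve_position(sats, pseudoranges)
```

With five satellites the normal-equations step is exactly the over-determined least-squares fit described above; with four it reduces to solving the linearized square system at each iteration.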
GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of residual errors from these sources depends on geometric dilution of precision. Artificial errors may result from jamming devices that threaten ships and aircraft[196] or from intentional signal degradation through selective availability, which limited accuracy to approximately 6–12 m (20–40 ft) but has been switched off since May 1, 2000.[197][198] GNSS enhancement refers to techniques used to improve the accuracy of positioning information provided by the Global Positioning System or other global navigation satellite systems in general, a network of satellites used for navigation. In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation".[199] With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum".[200] For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue. The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band.[201] Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared.
On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service.[202] In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting into the neighboring GPS band of 1559 to 1610 MHz.[203] In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system, known as the Ancillary Tower Components (ATCs): "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[204] This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S.
Department of Transportation.[205] In January 2011, the FCC conditionally authorized LightSquared's wholesale customers (such as Best Buy, Sharp, and C Spire) to purchase an integrated satellite and ground-based service from LightSquared and re-sell that integrated service on devices equipped to use only the ground-based signal on LightSquared's allocated frequencies of 1525 to 1559 MHz.[206] In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices,[174] although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order made final authorization contingent upon studies of GPS interference issues carried out by a LightSquared-led working group along with GPS industry and federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's conditional waiver order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference. GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services.[207] As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum.[200] This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum".[208] In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[204] In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector".[209] GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband, based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component.[210] To build public support for efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component, versus a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd.
formed the "Coalition To Save Our GPS".[211] The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate.[212][213] According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it".[214] The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.[214] On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time".[215][216] LightSquared is challenging the FCC's action. Following the United States's deployment of GPS, other countries have also developed their own satellite navigation systems. These systems include: In the event of adverse space weather or the deployment of an anti-satellite weapon against GPS, the United States has no terrestrial backup system. The potential cost of such an event to the U.S. economy is estimated at $1 billion per day. The LORAN-C system was turned off in North America in 2010 and in Europe in 2015. eLoran is proposed as an American terrestrial backup system, but as of 2024 has not received approval or funding.[228] China continues to operate LORAN-C transmitters,[229] and Russia has a similar system called CHAYKA ("Seagull").
https://en.wikipedia.org/wiki/Global_Positioning_System
Roaming is a wireless telecommunication term typically used with mobile devices, such as mobile phones. It refers to a mobile phone being used outside the range of its native network and connecting to another available cell network. In more technical terms, roaming refers to the ability of a cellular customer to automatically make and receive voice calls, send and receive data, or access other services, including home data services, when travelling outside the geographical coverage area of the home network, by means of using a visited network. For example, should a subscriber travel beyond their cell phone company's transmitter range, their cell phone would automatically utilize another phone company's service, if available. The process is supported by the telecommunication processes of mobility management, authentication, authorization and accounting billing procedures (known as AAA or 'triple A'). Roaming is divided into "SIM-based roaming" and "username/password-based roaming", whereby the technical term "roaming" also encompasses roaming between networks of different network standards, e.g. WLAN (Wireless Local Area Network) or GSM (Global System for Mobile Communications). Device equipment and functionality, such as SIM card capability, antenna and network interfaces, and power management, determine the access possibilities.[1] Using the example of WLAN/GSM roaming, the following scenarios can be differentiated (cf. GSM Association Permanent Reference Document AA.39[2]): Although these user/network scenarios focus on roaming from GSM network operators' networks, clearly roaming can be bi-directional, i.e. from public WLAN operators to GSM networks. Traditional roaming in networks of the same standard, e.g. from a WLAN to a WLAN or a GSM network to a GSM network, has already been described above and is likewise defined by the foreignness of the network based on the type of subscriber entry in the home subscriber register.
In the case of session continuity, seamless access to these services across different access types is provided. The term "roaming", also known as "e-roaming", is also a concept for charging battery electric vehicles (BEVs) at other charging stations.[3] In practice, e-roaming allows EV drivers to achieve greater interoperability by providing access to public charging points from any owner/operator's EV charging network through a common platform and a single network subscription or contract. There are proprietary (closed) charging networks, such as Tesla Superchargers, and providers sharing charging points through contracts and agreements. Thus, the EV can "roam" between those charging points. "Home network" refers to the network the subscriber is registered with. "Visitor network" refers to a network the subscriber roams on temporarily, outside the bounds of the "home network". The legal roaming business aspects negotiated between the roaming partners for billing of the services obtained are usually stipulated in so-called roaming agreements. The GSM Association broadly outlines the content of such roaming agreements in standardized form for its members. For the legal aspects of authentication, authorization and billing of the visiting subscriber, the roaming agreements typically comprise minimal safety standards, e.g. location update procedures, or financial security or warranty procedures. The details of the roaming process differ among types of cellular networks, but in general, the process resembles the following: Location updating is the mechanism used to determine the location of an MS (mobile station) in the idle state (connected to the network, but with no active call). It occurs, for example, when a call is made to a roaming cell phone. Signaling process: In order for a subscriber to be able to register on a visited network, a roaming agreement needs to be in place between the visited network and the home network.
This agreement is established after a series of testing processes called IREG (International Roaming Expert Group) and TADIG (Transferred Account Data Interchange Group) testing. While the IREG testing checks the proper functioning of the established communication links, the TADIG testing checks the billability of the calls. The usage by a subscriber in a visited network is captured in a file called the TAP (Transferred Account Procedure) file for GSM, or the CIBER (Cellular Intercarrier Billing Exchange Record) file for CDMA, AMPS, etc., and is transferred to the home network. A TAP/CIBER file contains details of the calls made by the subscriber, viz. location, calling party, called party, time of call and duration, etc. The TAP/CIBER files are rated as per the tariffs charged by the visited operator. The home operator then bills these calls to its subscribers and may charge a mark-up or tax applicable locally. As many carriers have recently launched their own retail rate plans and bundles for roaming, TAP records are now generally used only for wholesale inter-operator settlements. Roaming fees are typically charged on a per-minute basis for wireless voice service, per text message sent and received, and per megabyte of data used for data service, and they are typically determined by the service provider's pricing plan. Several carriers in both the United States and India have eliminated these fees in their nationwide pricing plans. Major carriers now offer pricing plans that allow consumers to purchase nationwide roaming-free minutes. However, carriers define "nationwide" in different ways. For example, some carriers define "nationwide" as anywhere in the U.S., whereas others define it as anywhere within the carrier's network.[4] In the UK, the main network providers generally send text alerts to advise users that they will now be charged international rates, so it is clear when this will apply.
UK data roaming charges abroad vary depending on the nature of the phone agreement (either pay-as-you-go or monthly contract). Some carriers, including T-Mobile and Virgin Mobile, do not allow pay-as-you-go customers to use international roaming without pre-purchase of an international "add-on" or "bolt-on".[5] An operator intending to provide roaming services to visitors publishes the tariffs that will be charged in its network at least sixty days prior to implementation under normal circumstances. The visited operator's tariffs may include tax, discounts, etc., and are based on duration in the case of voice calls. For data calls, the charging may be based on the data volume sent and received. Some operators also charge a separate fee for call setup, i.e. for the establishment of a call; this is called a flagfall charge. In the European Union, regulation of roaming charges began on 30 June 2007, forcing service providers to lower their roaming fees across the 28-member bloc. It was later extended to EEA member states. The regulation set a price cap, excluding tax, of €0.39 per minute for outgoing calls (€0.49 in 2007, €0.46 in 2008, €0.43 in 2009) and €0.15 per minute for incoming calls (€0.24 in 2007, €0.22 in 2008, €0.19 in 2009).[6] Having still found that market conditions did not justify lifting the capping of roaming within the EEA, the Commission replaced the law in 2012. Under the 2012 Regulation, retail roaming capping charges expired in 2017 and wholesale capping charges expired in 2022. In mid-2009, a maximum price of €0.11 (excluding tax) per SMS text message was also included in this regulation. On 11 June 2013, the European Commission voted to end mobile roaming charges for the first time.[7] Following a European Commission vote on 15 December 2016, roaming charges within the European Union were to be abolished by June 2017.
While the European Commission (EC) believed that ending roaming charges would stimulate entrepreneurship and trade, mobile operators had their doubts about the changes.[8] On 15 June 2017, Regulation (EU) 2016/2286, nicknamed "Roam like at Home" and signed by the European Parliament and Commission in May of the same year, came into force. It abolished all roaming charges within the EU, Iceland, Liechtenstein and Norway.[9] Countries that do not share a supra-national authority have also begun examining the provision of international roaming services. In April 2011, Singapore and Malaysia announced that they had agreed with operators to reduce voice and SMS rates for roaming between their two countries.[10] In August 2012, Australia and New Zealand published a draft report proposing coordinated action on roaming services.[11] This was followed by a final report in February 2013 recommending that the two countries equip their telecommunications regulators with an extended palette of regulatory remedies when they investigate international roaming.[12] The Australian and New Zealand prime ministers subsequently announced that they would introduce legislation to effect the recommendations of the final report.[13] On 19 February 2020, Bolivia, Colombia, Ecuador and Peru voted, through the auspices of the Andean Community, to eliminate roaming fees amongst themselves. The agreement was set to start in 2022.[14] On 1 July 2021, Serbia, Albania, Montenegro, Bosnia & Herzegovina, North Macedonia and Kosovo abolished roaming fees as part of the Mini Schengen project, allowing SIM holders in those countries to use their domestic packages in another country in the agreement without having to pay a roaming fee. The agreement was signed in April 2019.
There are no additional charges, just like in the EU's "Roam like at Home" project.[15][16] In November 2021, Cameroon, Central African Republic, Congo, Equatorial Guinea, Chad and Gabon committed to bilateral agreements to lift charges and cut interconnection tariffs.[17] In December 2024, Russia and Belarus signed a resolution to permanently abolish roaming between the two countries effective 1 March 2025. Work on abolishing roaming between Russia and Belarus had been underway for many years; the "roadmap" for the gradual abolition of roaming between the two countries was signed back in 2019. According to the resolution, mobile users are guaranteed at least 300 minutes of free incoming voice calls and at least 5 GB of data transfer per month.[18] This type refers to the ability to move from one region to another region inside the national coverage of the mobile operator ("internal roaming"). Initially, operators may have provided commercial offers restricted to a region (sometimes to a town). Due to the success of GSM and the decrease in cost, regional roaming is rarely offered to clients except in nations with wide geographic areas like the US, Russia, India, etc., in which there are a number of regional operators. Prior to 2019, in Russia even country-wide operators charged different tariffs depending on whether the users were within or outside of their "home region". A number of early legislative attempts to remove the "internal roaming" failed due to opposition from operators.[19] Following the annexation of Crimea in 2014, the Russian operators faced significant criticism as they did not offer their services inside Crimea directly, even though formally it is recognized as a regular federal subject inside Russia.[20] In 2019, however, Federal Law No.
527-FZ "On Amendments to Articles 46 and 54 of the Federal Law "On Communications"" was adopted, according to which, starting June 1, 2019, all mobile radiotelephone operators were prohibited from charging a fee for incoming calls and SMS when registering in the network of a third-party operator (roaming partner), regardless of the region of Russia in which the subscriber is located; as a result, all fees for incoming calls and SMS in national roaming were canceled for good.[21][22] This type refers to the ability to move from one mobile operator to another in the same country. For example, a postpaid subscriber of T-Mobile USA who is allowed to roam on AT&T Mobility and/or the regional carriers Viaero Wireless and U.S. Cellular's networks would have national roaming rights; prepaid providers, on the other hand, typically only allow a more restricted national roaming ability for cost reasons. For commercial and license reasons, this type of roaming is not allowed unless under very specific circumstances and under regulatory scrutiny. It has often taken place when a new company is assigned a mobile telephony license (such as Free Italia's 10-year national roaming deal with Wind Tre), to create a more competitive market by allowing the new entrant to offer coverage comparable to that of established operators (by requiring the existing operators to allow roaming while the new entrant has time to build up its own network), or where mobile network infrastructure has been destroyed by natural or man-made means, such as during the 2022 Russian invasion of Ukraine, where Ukrainian mobile operators had to quickly implement national roaming with each other to compensate for network infrastructure destroyed in the invasion.[23] In a country like India, where the number of regional operators is high and the country is divided into telecom circles, this type of roaming is common.
Following the launch of the Pebble Network in the UK on 15 July 2015, national roaming has been possible across the major UK networks at no additional cost using a Pebble Network SIM card. This type of roaming refers to the ability to move to a foreign service provider's network. It is, consequently, of particular interest to international tourists and business travelers. Broadly speaking, international roaming is typically easiest when using the GSM standard, as it is used by over 80% of the world's mobile operators, and most devices support it. However, even then, there may be problems, since countries have allocated different frequency bands for GSM communications (there are two groups of countries: most GSM countries use 900/1800 MHz, but the United States and some other countries in the Americas have allocated 850/1900 MHz): for a phone to work in a country with a different frequency allocation, it must support one or both of that country's frequencies, and thus be tri- or quad-band. While international roaming allows travelers to stay connected during their trip, it can also generate significant costs, due to the trend of carriers pricing international GSM usage very high when the traveler elects not to purchase an optional add-on to their current phone service. In fact, use of mobile networks outside their home country can lead to significant billing by the home mobile data operator without such an add-on.[24]
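The band-compatibility rule above (a phone must support at least one of the destination country's GSM frequencies) can be sketched directly. The handset band sets below are illustrative examples, not real device specifications.

```python
# The two GSM frequency groups described in the text (in MHz).
WORLD_BANDS = {900, 1800}      # most GSM countries
AMERICAS_BANDS = {850, 1900}   # US and some other countries in the Americas

def can_roam(phone_bands, country_bands):
    """A phone works abroad if it supports at least one local band."""
    return bool(set(phone_bands) & set(country_bands))

tri_band = {900, 1800, 1900}   # a typical tri-band handset
dual_band = {900, 1800}        # a dual-band handset for the 900/1800 group

print(can_roam(tri_band, AMERICAS_BANDS))   # True: 1900 MHz is shared
print(can_roam(dual_band, AMERICAS_BANDS))  # False: no common band
```

A quad-band handset (850/900/1800/1900) passes the check for both groups, which is why such phones are marketed as "world phones".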
The CDMA customers arriving in Europe can register on the available GSM networks. Since mobile communication technologies have evolved independently across continents, there is a significant challenge in achieving seamless roaming across these technologies. Typically, these technologies were implemented in accordance with technological standards laid down by different industry bodies, hence the name. A number of the standards-making industry bodies have come together to define and achieve interoperability between the technologies as a means to achieve inter-standards roaming. This is currently an ongoing effort. Mobile signature roaming allows an access point (AP) to get a mobile signature from any end-user, even if the AP and the end-user have not contracted a commercial relationship with the same MSSP. Otherwise, an AP would have to build commercial terms with as many MSSPs as possible, and this might be a cost burden. This means that a mobile signature transaction issued by an Application Provider should be able to reach the appropriate MSSP, and this should be transparent for the AP.[25] Network elements belonging to the same operator but located in different areas (a typical situation where assignment of local licenses is a common practice) may need to pair, depending on the switch and its location. Hence, software changes and greater processing capability are required; furthermore, this situation could introduce the fairly new concept of roaming on a per-MSC basis instead of a per-operator basis. But this is actually a burden, so it is avoided.[26] This type refers to customers who purchase service from a mobile phone operator intending to permanently roam, or be off-network. This becomes possible because of the increasing popularity and availability of "free roaming" service plans, where there is no cost difference between on- and off-network usage.
The benefits of getting service from a mobile phone operator that is not local to a user can include cheaper rates, features and phones that are not available from their local mobile phone operator, or free calls to other customers of a particular mobile phone operator through a free unlimited mobile-to-mobile feature. Most mobile phone operators will require the customer's living or billing address to be inside their coverage area or, less often, inside the government-issued radio frequency license of the mobile phone operator; this is usually determined by a computer estimate, because it is impossible to guarantee coverage. If a potential customer's address is not within the requirements of that mobile phone operator, they will be denied service. In order to permanently roam, customers may use a false address and online billing, or a relative or friend's address which is in the required area together with a third-party billing option. Most mobile phone operators discourage or prohibit permanent roaming, since they must pay per-minute rates to the network operator their customer is roaming onto and cannot pass that extra cost on to customers ("free roaming"). Roaming calls within a local tariff area, when at least one of the phones belongs outside that area, are usually implemented with trombone routing, also known as tromboning.[27]
https://en.wikipedia.org/wiki/Roaming
Long-Term Evolution (LTE) telecommunications networks use several frequency bands with associated bandwidths. From Tables 5.5-1 "E-UTRA Operating Bands" and 5.6.1-1 "E-UTRA Channel Bandwidth" of the latest published version of the 3GPP TS 36.101,[1] the following table lists the specified frequency bands of LTE and the channel bandwidths each band supports. Band numbers can be written prefixed by a "b", as in "b66" for band 66. These bands were defined by the 3GPP but have never been deployed commercially, are not supported by commercial devices, or are no longer used.[1] The following table shows the standardized LTE bands and their regional use. The main LTE bands are in bold print. Bands not yet deployed are marked not available (N/A). Partial deployments vary from country to country, and the details are available at List of LTE networks.
https://en.wikipedia.org/wiki/LTE_frequency_bands
Frequency bands for 5G New Radio (5G NR), which is the air interface or radio access technology of the 5G mobile networks, are separated into two different frequency ranges. First there is Frequency Range 1 (FR1),[1] which includes sub-7 GHz frequency bands, some of which are traditionally used by previous standards, but has been extended to cover potential new spectrum offerings from 410 MHz to 7125 MHz. The other is Frequency Range 2 (FR2),[2] which includes frequency bands from 24.25 GHz to 71.0 GHz. In November and December 2023, a third band, Frequency Range 3 (FR3),[3] covering frequencies from 7.125 GHz to 24.25 GHz, was proposed by the World Radio Conference; as of September 2024, this band has not been added to the official standard. Frequency bands are also available for non-terrestrial networks (NTN)[4] in both the sub-7 GHz and the 17.3 GHz to 30 GHz ranges. From the latest published version (Rel. 18) of the respective 3GPP technical standard (TS 38.101),[5] the following tables list the specified frequency bands and the channel bandwidths of the 5G NR standard. Note that the NR bands are defined with the prefix "n". When an NR band overlaps with a 4G LTE band, they share the same band number.
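The frequency ranges above can be captured in a small classifier. The range edges come from the text (FR1: 410 MHz to 7.125 GHz; proposed FR3: 7.125 GHz to 24.25 GHz; FR2: 24.25 GHz to 71.0 GHz); how the exact boundary values are assigned here is an assumption.

```python
def nr_frequency_range(freq_mhz):
    """Classify a carrier frequency (MHz) into the NR frequency ranges."""
    if 410 <= freq_mhz < 7125:
        return "FR1"
    if 7125 <= freq_mhz < 24250:
        return "FR3 (proposed)"
    if 24250 <= freq_mhz <= 71000:
        return "FR2"
    return "outside defined NR ranges"

print(nr_frequency_range(3500))    # mid-band FR1 (e.g. band n78)
print(nr_frequency_range(28000))   # millimeter-wave FR2 (e.g. band n257)
```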
https://en.wikipedia.org/wiki/5G_NR_frequency_bands
CDMA frequency bands or frequency ranges are the cellular frequencies designated by the ITU for the operation of cdmaOne and CDMA2000 mobile phones and other mobile devices.[1][2][3][4] From the latest published version of the respective 3GPP2 technical standard (C.S0057-F),[5] the following table lists the specified frequency bands of the cdmaOne and CDMA2000 standards.[6][7][8]
https://en.wikipedia.org/wiki/CDMA_frequency_bands
The United States 700 MHz FCC wireless spectrum auction, officially known as Auction 73,[1] was started by the Federal Communications Commission (FCC) on January 24, 2008 for the rights to operate the 700 MHz radio frequency band in the United States. The details of the process were the subject of debate among several telecommunications companies, including Verizon Wireless and AT&T Mobility, as well as the Internet company Google. Much of the debate swirled around the open-access requirements set down by the Second Report and Order released by the FCC determining the process and rules for the auction. All bidding was required by law to commence by January 28.[2] Full-power TV stations were forced to transition to digital broadcasting in order to free 108 MHz of radio spectrum for newer wireless services. Most analog broadcasts ceased on June 12, 2009. The 700 MHz spectrum was previously used for analog television broadcasting, specifically UHF channels 52 through 69. The FCC ruled that the 700 MHz spectrum would no longer be necessary for TV because of the improved spectral efficiency of digital broadcasts. Digital broadcasts allow TV channels to be broadcast on adjacent channels without having to leave empty TV channels as guard bands between them.[3] All broadcasters were required to move to the frequencies occupied by channels 2 through 51 as part of the digital TV transition. A similar reallocation was employed in 1989 to expand analog cellphone service, having previously eliminated TV channels 70-83 at the uppermost UHF frequencies. This created an unusual situation where old TV tuning equipment was able to listen to cellular phone calls, although such activity was made illegal and the FCC prohibited the sale of future devices with that capability. Some of the 700 MHz spectrum licenses were already auctioned in Auctions 44 and 49. Paired channels 54/59 (lower-700 MHz block C) and unpaired channel 55 (block D) were sold and in some areas were already being used for broadcasting and Internet access.
For example, Qualcomm MediaFLO in 2007 started using channel 55 for broadcasting mobile TV to cell phones in some markets.[4] Qualcomm later ended the service and sold (at a large profit) channel 55 nationwide to AT&T Mobility, along with channel 56 in the Northeast Corridor and much of California. Dish Network bought channel 56 (block E) licenses in the remainder of the nation's media markets, so far using it only for testing ATSC-M/H. As of 2015, AT&T did not appear to be using block D or E (band class 29) yet, but planned to use link aggregation for increased download speeds and capacity.[5] For the 700 MHz auction, the FCC designed a new multi-round process that limits the number of package bids each bidder can submit (12 items and 12 package bids) and the prices at which they can be submitted, and provides computationally intensive feedback prices similar to the pricing approach.[6] This package bidding process (often referred to as a combinatorial auction) was the first of its kind to be used by the FCC in an actual auction. Bidders were allowed to bid on individual licenses or to place all-or-nothing bids on up to twelve packages, which the bidder could define at any point in the auction. Running the auction this way allowed bidders to avoid the exposure problem when licenses are complements. The provisional winning bids are the set of consistent bids that maximize total revenue. The 700 MHz auction represented a good test case for package bidding for two reasons. First, it involved only 12 licenses: 2 bands (one 10 MHz and one 20 MHz) in each of the 6 regions.[7] Second, prospective bidders had expressed interest in alternative packaging because some Internet service providers had different needs and the flexibility would benefit them. The FCC issued Public Notice DA00-1486, which adopted and described the package bidding rules for the 700 MHz auction.
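The winner-determination rule described above (the provisional winning bids are the consistent set of bids maximizing total revenue) can be sketched with a brute-force search. The licenses and bid amounts below are hypothetical; real FCC auctions use far more sophisticated solvers, since this problem is NP-hard in general.

```python
from itertools import combinations

# Each bid: (bidder, frozenset of licenses, amount in $M).
bids = [
    ("A", frozenset({"NE-10"}), 40),
    ("B", frozenset({"NE-20"}), 70),
    ("C", frozenset({"NE-10", "NE-20"}), 100),  # all-or-nothing package bid
    ("D", frozenset({"SE-10"}), 30),
]

def winner_determination(bids):
    """Return the consistent bid set maximizing total revenue."""
    best, best_rev = [], 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            licenses = [lic for _, pkg, _ in combo for lic in pkg]
            if len(licenses) != len(set(licenses)):
                continue  # inconsistent: a license would be sold twice
            rev = sum(amt for _, _, amt in combo)
            if rev > best_rev:
                best, best_rev = list(combo), rev
    return best, best_rev

winners, revenue = winner_determination(bids)
print(revenue)  # 140: A+B (110) beat the package bid C (100), plus D
```

Note how the package bid C loses here even though it exceeds either individual bid: the rule compares total revenue across consistent sets, which is exactly what protects a package bidder from the exposure problem without guaranteeing it wins.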
The FCC's original proposal allowed only nine package bids: the six 30 MHz regional bids and three nationwide bids (10, 20, or 30 MHz). Although these nine packages were consistent with the expressed desires of many prospective bidders, others felt that the nine packages were too restrictive. The activity rule is unchanged, aside from a new definition of activity and a lower activity requirement of 50%. A bidder must be active on 50% of its current eligibility, or its eligibility in the next round will be reduced to two times its activity. Bids made in different rounds were treated as mutually exclusive, and a bidder wishing to add a license or package to its provisional winnings had to renew the provisional winning bids in the current round. The FCC placed rules on public safety for the auction. 20 MHz of the valuable 700 MHz spectrum was set aside for the creation of a public/private partnership that would eventually roll out a new nationwide broadband network tailored to the requirements of public safety. The FCC offered the commercial licensee extra spectrum adjacent to the public safety block that the licensee can use as it wants. The licensee is allowed to use whatever bandwidth is available on the public safety side of the network to offer data services of its own.[8] In an effort to encourage network neutrality, groups such as Public Knowledge, MoveOn.org and the Media Access Project, along with individuals such as Craigslist founder Craig Newmark and Harvard Law professor Lawrence Lessig, appealed to the Federal Communications Commission to make the newly freed airwaves open access to the public.[9] Prior to the bidding process, Google asked that the spectrum be free to lease wholesale and that the devices operating on the spectrum be open. At the time, many providers such as Verizon and AT&T used technological measures to block external applications. In return, Google guaranteed a minimum bid of $4.6 billion.
Google's specific requests were the adoption of four open-platform policies: open applications, open devices, open services and open networks. The result of the auction was that Google was outbid by others, triggering the open-platform restrictions Google had asked for without Google having to actually purchase any licenses.[11] Google was actively involved in the bidding process although it had no intention of actually winning any licenses.[12] The reason for this was that it could push up the price of the bidding in order to reach the US$4.6 billion reserve price, thereby triggering the open-platform restrictions listed above. Had Google not been actively involved in the bidding process, it would have made sense for businesses to suppress their bidding strategies in order to trigger a new auction without the restrictions imposed by Google and the FCC.[11] Google's upfront payment of $287 million to participate in the bidding process was largely recovered after the auction, since it had not actually purchased any licenses. Despite this, Google ended up paying interest costs, which resulted in an estimated loss of 13 million dollars.[11] The FCC ruled in favor of Google's requests.[13] Only two of the four requirements were put in place on the upper C-Block: open applications and open devices.[14] Google had wanted the purchaser to allow 'rental' of the blocks to different providers. In retaliation, on September 13, 2007, Verizon filed a lawsuit against the Federal Communications Commission to remove the provisions Google had asked for. Verizon called the rules "arbitrary and capricious, unsupported by substantial evidence and otherwise contrary to law."[15][16][17][18] On October 23, Verizon chose to drop the lawsuit after losing its appeal for a speedy resolution on October 3.
However, CTIA - The Wireless Association challenged the same regulations in a lawsuit filed the same day.[19] On November 13, 2008, CTIA dropped its lawsuit against the FCC.[20] The auction divided UHF spectrum into five blocks:[21] The FCC placed very detailed rules about the process of this auction of the 698-806 MHz part of the wireless spectrum. Bids were anonymous and designed to promote competition. The aggregate reserve price for all block C licenses was approximately $4.6 billion.[22] The total reserve price for all five blocks being auctioned in Auction 73 was just over $10 billion.[22] Auction 73 generally went as planned by telecommunications analysts. In total, Auction 73 raised $19.592 billion.[23] Verizon Wireless and AT&T Mobility together accounted for $16.3 billion of the total revenue.[24] Of the 214 approved applicants, 101 successfully purchased at least one license. Despite its heavy involvement with the auction, Google did not purchase any licenses. However, Google did place the minimum bid on Block C licenses in order to ensure that the license would be required to be open-access.[25][26][27] The results for each of the five blocks: After the end of Auction 73, there remained some licenses from Blocks A and B that either went unsold or were defaulted on by the winning bidder. A new auction, Auction 92, was held on July 19, 2011 to sell the 700 MHz band licenses that were still available. That auction closed on July 28, 2011, with 7 bidders having won 16 licenses worth $19.8 million.[30] Six years after the end of the auction of 700 MHz spectrum, block A remained largely unused, although T-Mobile USA began to deploy its extended-range LTE in 2015 on licenses purchased from Verizon Wireless and cleared of RF interference in several areas by TV stations moving off of channel 51. This delay was caused by technical issues which were regulatory and possibly anticompetitive in nature.
After the March 2008 conclusion of Auction 73, Motorola initiated steps to have 3GPP establish a new industry standard (later designated as band class 17) that would be limited to the lower 700 MHz B and C blocks. In proposing band class 17, Motorola cited the need to address concerns about high-power transmissions of TV stations still broadcasting on channel 51 and the lower-700 MHz D and E blocks. As envisioned and ultimately adopted, the band class 17 standard allows LTE operations in only the lower-700 MHz B and C blocks using a specific signaling protocol that filters out all other frequencies. Although band class 17 operates on two of the three blocks common to band class 12, band class 17 devices use narrower electronic filters, which have the effect of permitting a smaller range of frequencies to pass through the filter. In addition, band class 12 and 17 signaling protocols are not compatible.[31] The creation of two non-interoperable band classes has had numerous effects. Customers are unable to switch between a licensee deploying its service using band class 17 and a licensee that provides its service using band class 12 without purchasing a new device (even when the two operators use the same 2G and 3G technologies and bands), and band class 12 and 17 devices cannot roam on each other's cellular networks.[31] When deploying its LTE network, C Spire Wireless decided not to use the A block because of the lack of band-12 support in mobile devices, issues with roaming, and the increased cost of base stations due to lack of supply.[32] US Cellular deployed a band class 12 LTE network; however, not all of US Cellular's devices were able to access it.
In particular, the iPhone 5S and iPhone 5C could not.[33] Other wireless telecommunication providers launched LTE band class 12 networks, but were not able to offer smartphones that access them, instead resorting to fixed or mobile wireless broadband modems.[34] As of April 2015, only three telecom providers were offering smartphones that use band 12: US Cellular, T-Mobile USA, and Nex-Tech Wireless. While smaller US telecommunication providers were upset at the lack of interoperability, AT&T defended the creation of band 17 and told the other carriers to seek interoperability with Sprint and T-Mobile instead.[35] However, in September 2013, AT&T changed its stance and committed to support and sell band-12 devices.[36] Following AT&T's commitment, the Federal Communications Commission ruled:[31] Consistent with these commitments, AT&T anticipates that its focus and advocacy within the 3GPP standards-setting process will shift to band-12-related projects and work streams. AT&T must place priority within the 3GPP RAN committee on the development of various band-12 carrier-aggregation scenarios. Upon completing implementation of the MFBI feature, AT&T anticipates that its focus on new standards related to the paired lower-700 MHz spectrum will be almost exclusively on band 12 configurations, features and capabilities.[31] Additionally, Dish Network agreed to lower its maximum effective radiated power levels on block E, which is on the lower adjacent channel to the downlink (tower-to-user transmissions) for block A. It did this in exchange for the FCC allowing it to operate the block as a one-way service, effectively making it a broadcast, although it could still be interactive through other means. Since Dish had already been experimentally operating it as a single-frequency network, this should not have a significant effect on whatever service it might offer in the future.
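The band 12 versus band 17 filter difference discussed above reduces to a frequency-range check. The uplink edges below follow the 3GPP E-UTRA band definitions (band 12 spans lower-700 blocks A, B and C; band 17 only B and C), but treat the exact figures and the block A example frequency as illustrative.

```python
# Approximate uplink ranges (MHz) for the two lower-700 MHz band classes.
BAND_UPLINK = {
    12: (699.0, 716.0),  # blocks A, B, C
    17: (704.0, 716.0),  # blocks B, C only (narrower filter)
}

def device_supports(band, carrier_mhz):
    """True if the device's band filter passes the carrier frequency."""
    lo, hi = BAND_UPLINK[band]
    return lo <= carrier_mhz <= hi

block_a_carrier = 701.0  # an uplink frequency inside block A (assumed)

print(device_supports(12, block_a_carrier))  # True
print(device_supports(17, block_a_carrier))  # False: filtered out
```

This is why a band-17-only handset cannot operate on, or roam onto, a block A network, even though both band classes cover blocks B and C; the incompatible signaling protocols mentioned above are a separate, additional barrier.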
https://en.wikipedia.org/wiki/2008_United_States_wireless_spectrum_auction
A mobile broadband modem, also known as a wireless modem or cellular modem, is a type of modem that allows a personal computer or a router to receive wireless Internet access via a mobile broadband connection instead of using telephone or cable television lines. A mobile Internet user can connect using a wireless modem to a wireless Internet Service Provider (ISP) to get Internet access.[1][2] While some analogue mobile phones provided a standard RJ11 telephone socket into which a normal landline modem could be plugged, this only provided slow dial-up connections, usually 2.4 kilobits per second (kbit/s) or less. The next generation of phones, known as 2G (for 'second generation'), were digital and offered faster dial-up speeds of 9.6 kbit/s or 14.4 kbit/s without the need for a separate modem. A further evolution called HSCSD used multiple GSM channels (two or three in each direction) to support up to 43.2 kbit/s. All of these technologies still required their users to have a dial-up ISP to connect to and provide the Internet access; it was not provided by the mobile phone network itself. The release of 2.5G phones with support for packet data changed this. The 2.5G networks break both digital voice and data into small chunks, and mix both onto the network simultaneously in a process called packet switching. This allows the phone to have a voice connection and a data connection at the same time, rather than a single channel that has to be used for one or the other. The network can link the data connection into a company network, but for most users the connection is to the Internet. This allows web browsing on the phone, but a PC can also tap into this service if it connects to the phone. The PC needs to send a special telephone number to the phone to get access to the packet data connection. From the PC's viewpoint, the connection still looks like a normal PPP dial-up link, but it all terminates on the phone, which then handles the exchange of data with the network.
Speeds on 2.5G networks are usually in the 30-50 kbit/s range. The first personal computer with a built-in mobile broadband modem was the ITC 286 CAT, a laptop by Intelligence Technology Corporation. Released in 1988, it featured a Hayes-compatible AMPS modem capable of transmitting data at 1.2 kbit/s.[3][4] 3G networks have taken this approach to a higher level, using different underlying technology but the same principles. They routinely provide speeds over 300 kbit/s. Due to the increased Internet speed, Internet connection sharing via WLAN has become a workable reality. Devices which allow Internet connection sharing or other types of routing on cellular networks are also called cellular routers. A further evolution is the 3.5G technology HSDPA, which provides speeds of multiple megabits per second. Several of the mobile network operators that provide 3G or faster wireless Internet access offer plans and wireless modems that enable computers to connect to and access the Internet. These wireless modems are typically in the form of a small USB-based device or a small, portable mobile hotspot that acts as a Wi-Fi access point to enable multiple devices to connect to the Internet. WiMAX-based services that provide high-speed wireless Internet access are available in some countries and also rely on wireless modems that connect to the provider's wireless network. Wireless USB modems are nicknamed "dongles". Early 3G mobile broadband modems used the PCMCIA or ExpressCard ports, commonly found on legacy laptops. The expression "connect card" (instead of connection card) was first registered and used by Vodafone as a brand for its products, but has become a genericized trademark used in colloquial or commercial speech for similar products made by different manufacturers, too. Major producers are Huawei, Option N.V. and Novatel Wireless. More recently, the expression "connect card" is also used to identify Internet USB keys.
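To put the generation-by-generation speeds above in perspective, the sketch below computes how long a 5 MB file transfer would take at each nominal rate. The speeds are the article's figures (with an assumed 3.6 Mbit/s HSDPA tier and the midpoint of the 2.5G range); real-world throughput varies widely.

```python
# Nominal data rates in kbit/s, taken from the text above.
NOMINAL_KBITS = {
    "analog dial-up": 2.4,
    "2G dial-up": 9.6,
    "HSCSD": 43.2,
    "2.5G": 40,          # midpoint of the 30-50 kbit/s range
    "3G": 300,
    "3.5G HSDPA": 3600,  # assumed 3.6 Mbit/s tier
}

def transfer_seconds(megabytes, kbit_per_s):
    """Idealized transfer time; decimal megabytes (1 MB = 8,000,000 bits)."""
    bits = megabytes * 8 * 1_000_000
    return bits / (kbit_per_s * 1000)

for gen, speed in NOMINAL_KBITS.items():
    print(f"{gen}: {transfer_seconds(5, speed):.0f} s")
```

At analog dial-up rates the 5 MB transfer takes over four and a half hours; at the assumed HSDPA rate it takes around eleven seconds, which is the practical difference the text is describing.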
Vodafone brands this type of device as a Vodem.[5] Often a mobile network operator will supply a 'locked' modem or other wireless device that can only be used on their network. It is possible to use online unlocking services that will remove the 'lock' so the device accepts SIM cards from any network. Standalone mobile broadband modems are designed to be connected directly to one computer. In the past, the PCMCIA and ExpressCard standards were used to connect to the computer. As USB connectivity became almost universal, these various standards were largely superseded by USB modems in the early 21st century. Some models have GPS support, providing geographical location information.[6] Many mobile broadband modems sold nowadays also have built-in routing capabilities. They provide traditional networking interfaces such as Ethernet, USB and Wi-Fi.[7] Numerous smartphones support the Hayes command set and therefore can be used as a mobile broadband modem. Some mobile network operators charge a fee for this facility,[8] if able to detect the tethering. Other networks have an allowance for full-speed mobile broadband access, which, if exceeded, can result in overage charges or slower speeds.[9] An Internet-accessing smartphone may have the same capabilities as a standalone modem and, when connected via a USB cable to a computer, can serve as a modem for the computer. Smartphones with built-in Wi-Fi also typically provide routing and wireless access point facilities. This method of connecting is commonly referred to as "tethering."[9] In most countries there are competing common carriers broadcasting signal.
https://en.wikipedia.org/wiki/Mobile_broadband_modem
Reverse 911 is a public safety communications technology used by public safety organizations in Canada and the United States to communicate with groups of people in a defined geographic area. The system uses a database of telephone numbers and associated addresses, which, when tied into geographic information systems (GIS), can be used to deliver recorded emergency notifications to a selected set of telephone service subscribers.[1] Reverse 911 was developed by Sigma Micro Corporation, later known as Sigma Communications, in 1993.[2] After a number of corporate acquisitions, Motorola Solutions ultimately gained ownership of the technology and rights developed by Sigma, and Motorola has folded Reverse 911 into its Vesta suite of public safety systems.[3] The system is used to notify residents in emergency situations.
https://en.wikipedia.org/wiki/Reverse_911
Affirmed, United States v. Carpenter, 819 F.3d 880 (6th Cir. 2016). Remanded for resentencing, 788 Fed. Appx. 364 (6th Cir. 2019). Affirmed, No. 22-1198 (6th Cir. 2023). Rehearing en banc denied (6th Cir. 2023). Carpenter v. United States, 585 U.S. 296 (2018), is a landmark United States Supreme Court case concerning the privacy of historical cell site location information (CSLI). The Court held that government entities violate the Fourth Amendment to the United States Constitution when accessing historical CSLI records containing the physical locations of cellphones without a search warrant.[1] Prior to Carpenter, government entities could obtain cellphone location records from service providers by claiming the information was required as part of an investigation, without a warrant, but the ruling changed this procedure. Recognizing the influence of new consumer communications devices in the 2010s, the Court expanded its conceptions of constitutional rights toward the privacy of this type of data. However, the Court emphasized that the Carpenter ruling was narrowly restricted to the precise types of information and search procedures that were relevant to this case.[2][3] Cellular telephone service providers are able to find the location of cell phones through either global positioning system (GPS) data or cell site location information (CSLI), in the process of connecting calls and data transmissions. CSLI is captured by nearby cell towers, and this information is used to triangulate the location of phones.[4] Service providers capture and store this data for business purposes, such as troubleshooting, maximizing network efficiencies, and determining whether to charge customers roaming fees for particular calls.[5] The data can also illustrate the historical movements of a cellphone. Thus, anyone with access to this data has the ability to know where the phone has been and what other cell phones were in the same area at a given time.
When users travel with their cellphones, this data can theoretically illustrate every place a person has traveled, and possibly the locations of other people encountered via their corresponding data.[6] Prior to Carpenter, the Supreme Court consistently held that a person had no reasonable expectation of privacy in regard to information voluntarily turned over to third parties such as telephone companies, and therefore a search warrant is not required when government officials seek this information.[7] This legal theory is known as the third-party doctrine, established by the Supreme Court in Smith v. Maryland (1979), in which the Court determined that the government can obtain a list of phone numbers dialed from a suspect's phone.[8] By the 2010s, cellphones and particularly smartphones had become important tools for nearly every person in the United States.[9] Many applications, such as GPS navigation and location tools, require a phone to send and receive information constantly, including the exact location of the phone, often without an affirmative action on the part of its owner. As technology advanced in the 2010s, the Supreme Court began to modify its precedents on government searches of personal communications devices, given new consumer behaviors that may transcend the third-party doctrine.[10] Between December 2010 and March 2011, several individuals in the Detroit, Michigan area conspired and participated in armed robberies at RadioShack and T-Mobile stores across the region.[11] In April 2011, four of the robbers were captured and arrested. The petitioner, Timothy Carpenter, was not among the initial group of arrestees.
One of those arrested confessed and turned over his phone so that FBI agents could review the calls made from his phone around the time of the robberies.[1] The agents obtained a search warrant to inspect the information in that arrestee's phone, in order to find additional contacts of the arrestee and compile more evidence about the crime ring.[12][13] From the historical cell site records on the arrestee's phone, the agents confirmed that Timothy Carpenter was also part of the crime ring, and proceeded to compile information about the location of his phone over 127 days. In turn, this information revealed that Carpenter had been within a two-mile radius of four robberies at the times they were perpetrated.[1] This evidence was used to support Carpenter's arrest. At criminal court, Carpenter was found guilty of several counts of aiding and abetting robberies that affected interstate commerce, and another count of using a firearm during a violent crime. He was sentenced to 116 years in prison.[14] Carpenter appealed his conviction and sentence to the United States Court of Appeals for the Sixth Circuit, arguing that the CSLI evidence used against him should be suppressed because the police had not obtained a warrant pertaining to his CSLI records before searching through them. In 2015, the Circuit Court upheld Carpenter's conviction.[15] This ruling was largely based on the Smith v. Maryland precedent, stating that Carpenter used cellular telephone networks voluntarily, and per the third-party doctrine he had no reasonable expectation that the data should be private. Thus, review of that information by the police did not constitute a "search" and did not require a warrant under the Fourth Amendment.[16] Carpenter appealed this ruling to the U.S.
Supreme Court, which granted certiorari in 2016.[17][18] Twenty amicus curiae briefs were filed by interested organizations, scholars, and corporations for Carpenter's case.[19] Some considered the case to be the most important Fourth Amendment dispute to come before the Supreme Court in a generation.[20][21] The Court issued its decision in 2018, with the majority opinion written by Chief Justice John Roberts. The Court's ruling recognized that the Carpenter case revealed a contradiction between two lines of Supreme Court rulings on the matter of police searches of personal communications information.[1] In United States v. Jones (2012) the Court had ruled that GPS tracking could constitute a search under the Fourth Amendment as a violation of a person's reasonable expectation of privacy.[22] Meanwhile, the Court had held in Smith v. Maryland (1979) that the third-party doctrine absolved the government from warrant requirements when searching through telephone records.[23] Ultimately, in Carpenter the Court determined that the third-party doctrine could not be extended to historical cell site location information (CSLI). Instead, the Court compared "detailed, encyclopedic, and effortlessly compiled" CSLI records to the GPS information at issue in United States v. Jones, recognizing that both forms of data accord the government the ability to track individuals' past movements.[24] Furthermore, the Court noted that CSLI could pose even greater privacy risks than GPS data, as the prevalence of cellphones could accord the government "near perfect surveillance" of an individual's movements. Accordingly, the Court ruled that, under the Fourth Amendment, the government must obtain a search warrant in order to access historical CSLI records.[1] Roberts argued that technology "has afforded law enforcement a powerful new tool to carry out its important responsibilities. At the same time, this tool risks Government encroachment of the sort the Framers [of the U.S.
Constitution], after consulting the lessons of history, drafted the Fourth Amendment to prevent."[25] As stated in the opinion, "Unlike the nosy neighbor who keeps an eye on comings and goings, they [new technologies] are ever alert, and their memory is nearly infallible. There is a world of difference between the limited types of personal information addressed in Smith [...] and the exhaustive chronicle of location information casually collected by wireless carriers today."[26] However, Roberts stressed that the Carpenter decision was a very narrow one and did not affect other uses of the third-party doctrine, such as searches of banking records. Similarly, he noted that the decision did not prevent the collection of CSLI without a warrant in cases of emergency or for issues of national security.[27] Justice Anthony Kennedy, in a dissenting opinion joined by Justices Thomas and Alito, cautioned against the limitations on law enforcement inherent in the majority opinion. According to Kennedy, the ruling "places undue restrictions on the lawful and necessary enforcement powers exercised not only by the Federal Government, but also by law enforcement in every State and locality throughout the Nation. Adherence to this Court's longstanding precedents and analytic framework would have been the proper and prudent way to resolve this case."[28] In another dissent, Justice Samuel Alito wrote: "I fear that today's decision will do far more harm than good.
The Court's reasoning fractures two fundamental pillars of Fourth Amendment law, and in doing so, it guarantees a blizzard of litigation while threatening many legitimate and valuable investigative practices upon which law enforcement has rightfully come to rely."[29] In his dissent, Justice Neil Gorsuch argued that the Fourth Amendment had lost its original meaning based on property rights, stating that it "grants you the right to invoke its guarantees whenever one of your protected things (your person, your house, your papers, or your effects) is unreasonably searched or seized. Period."[30] Gorsuch further recommended that the third-party doctrine, as well as Katz v. United States, be overturned as inconsistent with the original meaning and application of the Fourth Amendment.[31] On the facts of the case, Gorsuch stressed that CSLI data is personal property, and its storage by telephone companies should be immaterial as the company is serving, in effect, as a bailee.[32] Justice Thomas argued, in his dissent, that what mattered was not whether there was a search, but rather whose property was searched.[33] Thomas pointed out that the Fourth Amendment guarantees each person the right to be secure from unreasonable searches in their own property. He argued that the cell phone records were the property of the phone service provider, since Carpenter did not keep the records, maintain the records, nor could he destroy them. Therefore, Carpenter could not bring suit under a Fourth Amendment violation, since it was not his property which was searched. Additionally, Thomas criticized the Katz decision's framework, on which this case, and other cases relied on by the majority such as Smith v. Maryland and United States v. Jones, draw heavily. Thomas said that the Katz test has "no basis in the text or history of the Fourth Amendment.
And, it invites courts to make judgements about policy, not law."[34] Thomas goes on further to write that "The Fourth Amendment, as relevant here, protects '[t]he right of people to be secure in their persons, houses, papers, and effects, against unreasonable searches.' By defining 'search' to mean 'any violation of reasonable expectations of privacy,' the Katz test misconstrues virtually every one of those words."[35] Thomas concludes by saying the Katz test is a failed experiment, and that the Court should reconsider it. After the Supreme Court ruling, Carpenter's criminal conviction was remanded to the Sixth Circuit to determine if it could stand without the CSLI data that required a warrant per the Supreme Court. Carpenter's lawyers argued that the data should have been subject to the exclusionary rule and thrown out as material collected without a proper warrant under the Supreme Court's ruling. However, the Circuit Court judges concluded that the FBI was acting in good faith with respect to collecting the data based on the law at the time the crimes were committed.[36] This type of good-faith exemption is permitted per another Supreme Court precedent, Davis v. United States (2011).[37] The evidence was allowed to stand, and the Sixth Circuit again upheld Carpenter's criminal conviction and prison sentence. His arguments concerning sentencing procedures under the recently enacted First Step Act were rejected.[36] The Supreme Court's ruling in Carpenter was narrow and did not otherwise change the third-party doctrine related to other business records that might incidentally reveal location information, nor did it overrule prior decisions concerning conventional surveillance techniques and tools such as security cameras.
The Court did not extend its ruling to other matters related to cellphones not presented in Carpenter, including real-time CSLI or "tower dumps" (the downloading of information about all the devices that were connected to a particular cell site during a particular interval). The opinion also did not consider other data collection goals involving foreign affairs or national security.[2][3]
https://en.wikipedia.org/wiki/Carpenter_v._United_States
Cellphone surveillance (also known as cellphone spying) may involve tracking, bugging, monitoring, eavesdropping, and recording conversations and text messages on mobile phones.[1] It also encompasses the monitoring of people's movements, which can be tracked using mobile phone signals when phones are turned on.[2] StingRay devices are a technology that mimics a cellphone tower, causing nearby cellphones to connect and pass data through them instead of legitimate towers.[3] This process is invisible to the end-user and allows the device operator full access to any communicated data.[3] They are also capable of capturing information from phones of bystanders.[4] This technology is a form of man-in-the-middle attack.[5] StingRays are used by law enforcement agencies to track people's movements, and intercept and record conversations, names, phone numbers and text messages from mobile phones.[1] Their use entails the monitoring and collection of data from all mobile phones within a target area.[1] Law enforcement agencies in Northern California that have purchased StingRay devices include the Oakland Police Department, San Francisco Police Department, Sacramento County Sheriff's Department, San Jose Police Department and Fremont Police Department.[1] The Fremont Police Department's use of a StingRay device is in a partnership with the Oakland Police Department and Alameda County District Attorney's Office.[1] End-to-end encryption such as Signal protects message and call traffic against StingRay devices using cryptographic strategies.[6] Dirtbox is a technology similar to StingRays, usually mounted on aerial vehicles, that can mimic cell sites and also jam signals. The device uses an IMSI-catcher and is claimed to be able to bypass cryptographic encryption by obtaining IMSI numbers and ESNs (electronic serial numbers).
A tower dump is the sharing of identifying information by a cell tower operator, which can be used to identify where a given individual was at a certain time.[7][8] As mobile phone users move, their devices will connect to nearby cell towers in order to maintain a strong signal even while the phone is not actively in use.[9][8] These towers record identifying information about cellphones connected to them, which then can be used to track individuals.[7][8] In most of the United States, police can get many kinds of cellphone data without obtaining a warrant. Law-enforcement records show police can use initial data from a tower dump to ask for another court order for more information, including addresses, billing records and logs of calls, texts and locations.[8] Cellphone bugs can be created by disabling the ringing feature on a mobile phone, allowing a caller to call the phone to access its microphone and listen in. One example of this was the group FaceTime bug, which enabled people to eavesdrop on conversations without calls being answered by the recipient. In the United States, the FBI has used "roving bugs", which entails the activation of microphones on mobile phones to monitor conversations.[10] Cellphone spying software[11] is a type of cellphone bugging, tracking, and monitoring software that is surreptitiously installed on mobile phones. This software can enable conversations to be heard and recorded from phones upon which it is installed.[12] Cellphone spying software can be downloaded onto cellphones.[13] It enables the monitoring or stalking of a target cellphone from a remote location.[14] Cellphone spying software can enable microphones on mobile phones when phones are not being used, and can be installed by mobile providers.[10] Intentionally hiding a cell phone in a location is a bugging technique.
Some hidden cellphone bugs rely on Wi-Fi hotspots, rather than cellular data, where the tracker rootkit software periodically "wakes up" and signs into a public Wi-Fi hotspot to upload tracker data onto a public internet server. Governments may sometimes legally monitor mobile phone communications, a procedure known as lawful interception.[15] In the United States, the government pays phone companies directly to record and collect cellular communications from specified individuals.[15] U.S. law enforcement agencies can also legally track the movements of people from their mobile phone signals upon obtaining a court order to do so.[2] In 2018, the United States cellphone carriers that sell customers' real-time location data (AT&T, Verizon, T-Mobile, and Sprint) publicly stated they would cease those data sales because the FCC found the companies had been negligent in protecting the personal privacy of their customers' data. Location aggregators, bounty hunters, and others, including law enforcement agencies that did not obtain search warrants, used that information. FCC Chairman Ajit Pai concluded that carriers had apparently violated federal law. However, in 2019, the carriers were continuing to sell real-time location data. In late February 2020, the FCC was seeking fines on the carriers in the case.[16] In 2005, the prime minister of Greece was advised that his mobile phone, along with those of over 100 dignitaries and the mayor of Athens, had been bugged.[12] Kostas Tsalikidis, a Vodafone-Panafon employee, was implicated in the matter as using his position as head of the company's network planning to assist in the bugging.[12] Tsalikidis was found hanged in his apartment the day before the leaders were notified about the bugging, which was reported as "an apparent suicide".[17][18][19][20] Security holes within Signalling System No.
7 (SS7), called Common Channel Signalling System 7 (CCSS7) in the US and Common Channel Interoffice Signaling 7 (CCIS7) in the UK, were demonstrated at Chaos Communication Congress, Hamburg, in 2014.[21][22] During the coronavirus pandemic, Israel authorized its internal security service, Shin Bet, to use its access to historic cellphone metadata[23] to engage in location tracking of COVID-19 carriers.[24] Some indications that cellphone surveillance may be occurring include a mobile phone waking up unexpectedly, using a lot of battery power when idle or not in use, clicking or beeping sounds during conversations, and the circuit board of the phone being warm despite the phone not being used.[30][38][47] However, sophisticated surveillance methods can be completely invisible to the user and may be able to evade detection techniques currently employed by security researchers and ecosystem providers.[48] Preventive measures against cellphone surveillance include not losing or allowing strangers to use a mobile phone and the use of an access password.[13][14] Another technique is turning off the phone and then also removing the battery when not in use.[13][14] Jamming devices or a Faraday cage may also work, the latter obviating removal of the battery.[49]
https://en.wikipedia.org/wiki/Cellphone_surveillance
A geofence warrant or reverse location warrant is a search warrant issued by a court to allow law enforcement to search a database to find all active mobile devices within a particular geofence area. Courts have granted law enforcement geofence warrants to obtain information from databases such as Google's Sensorvault, which collects users' historical geolocation data.[1][2] Geofence warrants are a part of a category of warrants known as reverse search warrants.[3] Geofence warrants were first used in 2016.[4] Google reported that it had received 982 such warrants in 2018, 8,396 in 2019, and 11,554 in 2020.[3] A 2021 transparency report showed that 25% of data requests from law enforcement to Google were geofence data requests.[5] Google is the most common recipient of geofence warrants and the main provider of such data,[4][6] although companies including Apple, Snapchat, Lyft, and Uber have also received such warrants.[4][5] Some lawyers and privacy experts believe reverse search warrants are unconstitutional under the Fourth Amendment to the United States Constitution, which protects people from unreasonable searches and seizures, and requires any search warrants to be specific in what and to whom they apply.[7] The Fourth Amendment specifies that warrants may only be issued "upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."[7] Some lawyers, legal scholars, and privacy experts have likened reverse search warrants to general warrants, which were made illegal by the Fourth Amendment.[7] Groups including the Electronic Frontier Foundation have opposed geofence warrants in amicus briefs filed in motions to quash such orders to disclose geofence data.[8] In 2024 the United States Fifth Circuit Court of Appeals found that geofence warrants are "categorically prohibited by the Fourth Amendment."[9]
https://en.wikipedia.org/wiki/Geofence_warrant
Geopositioning is the process of determining or estimating the geographic position of an object or a person.[1] Geopositioning yields a set of geographic coordinates (such as latitude and longitude) in a given map datum. Geographic positions may also be expressed indirectly, as a distance in linear referencing or as a bearing and range from a known landmark. In turn, positions can determine a meaningful location, such as a street address. Geoposition is sometimes referred to as geolocation, and the process of geopositioning may also be described as geo-localization. Geofencing involves creating a virtual geographic boundary (a geofence), enabling software to trigger a response when a device enters or leaves a particular area.[3] Geopositioning is a prerequisite for geofencing. Geopositioning uses various visual and electronic methods including position lines and position circles, celestial navigation, radio navigation, radio and Wi-Fi positioning systems, and the use of satellite navigation systems. The calculation requires measurements or observations of distances or angles to reference points whose positions are known. In 2D surveys, observations of three reference points are enough to compute a position in a two-dimensional plane. In practice, observations are subject to errors resulting from various physical and atmospheric factors that influence the measurement of distances and angles.[4] A practical example of obtaining a position fix would be for a ship to take bearing measurements on three lighthouses positioned along the coast. These measurements could be made visually using a hand bearing compass, or in case of poor visibility, electronically using radar or radio direction finding. Since all physical observations are subject to errors, the resulting position fix is also subject to inaccuracy.
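The lighthouse example above can be sketched numerically. On a local planar chart, each observed bearing defines a position line through the landmark, and two such lines intersect at the fix. This is a minimal illustration with hypothetical names; coordinates are planar (east, north), which is only an approximation over short distances:

```python
import math

def bearing_fix(lm_a, brg_a, lm_b, brg_b):
    """Planar two-bearing fix: intersect the position lines through two
    landmarks, given compass bearings (degrees true, ship -> landmark).
    Coordinates are (east, north) in any consistent unit."""
    ax, ay = lm_a
    bx, by = lm_b
    # Direction of each position line: bearing measured clockwise from north.
    dax, day = math.sin(math.radians(brg_a)), math.cos(math.radians(brg_a))
    dbx, dby = math.sin(math.radians(brg_b)), math.cos(math.radians(brg_b))
    det = dbx * day - dax * dby
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    # Solve A + s*dA = B + t*dB for s (Cramer's rule on the 2x2 system).
    s = (-(bx - ax) * dby + dbx * (by - ay)) / det
    return (ax + s * dax, ay + s * day)
```

With a third bearing the three lines would in general not meet at one point, producing the 'cocked hat' discussed below; a least-squares fit over all lines then gives the best estimate.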
Although in theory two lines of position (LOP) are enough to define a point, in practice 'crossing' more LOPs provides greater accuracy and confidence, especially if the lines cross at a good angle to each other. Three LOPs are considered the minimum for a practical navigational fix.[5] The three LOPs when drawn on the chart will in general form a triangle, known as a 'cocked hat'. The navigator will have more confidence in a position fix that is formed by a small cocked hat with angles close to those of an equilateral triangle.[6] The area of doubt surrounding a position fix is called an error ellipse. To minimize the error, electronic navigation systems generally use more than three reference points to compute a position fix, increasing the data redundancy. As more redundant reference points are added, the position fix becomes more accurate and the area of the resulting error ellipse decreases.[7] The process of using three reference points to calculate the location is called trilateration, and using more than three points, multilateration. Combining multiple observations to compute a position fix is equivalent to solving a system of linear equations. Navigation systems use regression algorithms such as least squares in order to compute a position fix in 3D space. This is most commonly done by combining distance measurements to four or more GPS satellites, which orbit the Earth along known paths.[8] The result of position fixing is called a position fix (PF), or simply a fix, a position derived from measuring in relation to external reference points.[9] In nautical navigation, the term is generally used with manual or visual techniques, such as the use of intersecting visual or radio position lines, rather than the use of more automated and accurate electronic methods like GPS; in aviation, use of electronic navigation aids is more common. A visual fix can be made by using any sighting device with a bearing indicator.
Two or more objects of known position are sighted, and the bearings recorded. Bearing lines are then plotted on a chart through the locations of the sighted items. The intersection of these lines is the current position of the vessel. Usually, a fix is where two or more position lines intersect at any given time. If three position lines can be obtained, the resulting "cocked hat", where the three lines do not intersect at the same point but create a triangle, gives the navigator an indication of the accuracy. The most accurate fixes occur when the position lines are perpendicular to each other. Fixes are a necessary aspect of navigation by dead reckoning, which relies on estimates of speed and course. The fix confirms the actual position during a journey. A fix can introduce inaccuracies if the reference point is not correctly identified or is inaccurately measured. Geopositioning can refer both to global and outdoor positioning, using for example GPS, and to indoor positioning, for all the situations where satellite GPS is not a viable option and the localization process has to happen indoors. For indoor positioning, tracking and localization there are many technologies that can be used, depending on the specific needs and on the environmental characteristics.[10]
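The least-squares trilateration described earlier can be sketched in two dimensions. This is an illustrative Gauss-Newton iteration over range measurements, with hypothetical names and a planar simplification (real GNSS receivers solve in 3D and also estimate a clock bias):

```python
import math

def trilaterate(beacons, distances, x0=0.0, y0=0.0, iters=20):
    """Gauss-Newton least-squares 2-D position fix from range measurements.

    beacons: list of (x, y) reference points with known positions
    distances: measured ranges to each beacon (same order)
    """
    x, y = x0, y0
    for _ in range(iters):
        # Accumulate the normal equations (J^T J) d = -J^T r,
        # where r_i = |p - b_i| - d_i is the range residual.
        a11 = a12 = a22 = g1 = g2 = 0.0
        for (bx, by), d in zip(beacons, distances):
            est = math.hypot(x - bx, y - by) or 1e-12
            jx, jy = (x - bx) / est, (y - by) / est   # Jacobian row
            r = est - d
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            g1 += jx * r;   g2 += jy * r
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-15:
            break          # degenerate beacon geometry
        dx = (-g1 * a22 + g2 * a12) / det
        dy = (-g2 * a11 + g1 * a12) / det
        x, y = x + dx, y + dy
        if math.hypot(dx, dy) < 1e-9:
            break
    return x, y
```

With exactly three well-placed beacons the residuals can be driven to zero; with more than three (multilateration), the same iteration minimizes the sum of squared residuals, shrinking the error ellipse as redundancy increases.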
https://en.wikipedia.org/wiki/Geolocation
GLONASS (ГЛОНАСС, IPA: [ɡɫɐˈnas]; Russian: Глобальная навигационная спутниковая система, romanized: Global'naya Navigatsionnaya Sputnikovaya Sistema, lit. 'Global Navigation Satellite System') is a Russian satellite navigation system operating as part of a radionavigation-satellite service. It provides an alternative to the Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision. Satellite navigation devices supporting both GPS and GLONASS have more satellites available, meaning positions can be fixed more quickly and accurately, especially in built-up areas where buildings may obscure the view to some satellites.[1][2][3] Owing to its higher orbital inclination, GLONASS supplementation of GPS systems also improves positioning in high latitudes (near the poles).[4] Development of GLONASS began in the Soviet Union in 1976. Beginning on 12 October 1982, numerous rocket launches added satellites to the system until the completion of the constellation in 1995. In 2001, after a decline in capacity during the late 1990s, the restoration of the system was made a government priority, and funding increased substantially. GLONASS is the most expensive program of Roscosmos, consuming a third of its budget in 2010. By 2010, GLONASS had achieved full coverage of Russia's territory. In October 2011, the full orbital constellation of 24 satellites was restored, enabling full global coverage. The GLONASS satellites' designs have undergone several upgrades, with the latest version, GLONASS-K2, launched in 2023.[5] GLONASS is a global navigation satellite system, providing real-time position and velocity determination for military and civilian users.
The satellites are located in middle circular orbit at 19,100 km (11,900 mi) altitude with a 64.8° inclination and an orbital period of 11 hours and 16 minutes (after every 17 revolutions, completed in 8 sidereal days, a satellite passes over the same location[6]).[7][8] GLONASS's orbit makes it especially suited for usage in high latitudes (north or south), where getting a GPS signal can be problematic.[9][10] The constellation operates in three orbital planes, with eight evenly spaced satellites in each.[8] A fully operational constellation with global coverage consists of 24 satellites, while 18 satellites are necessary for covering the territory of Russia. To get a position fix, the receiver must be in the range of at least four satellites.[7] GLONASS satellites transmit two types of signals: an open standard-precision signal L1OF/L2OF, and an obfuscated high-precision signal L1SF/L2SF. The signals use similar DSSS encoding and binary phase-shift keying (BPSK) modulation as in GPS signals. All GLONASS satellites transmit the same code as their standard-precision signal; however, each transmits on a different frequency using a 15-channel frequency-division multiple access (FDMA) technique spanning either side of 1602.0 MHz, known as the L1 band. The center frequency is 1602 MHz + n × 0.5625 MHz, where n is a satellite's frequency channel number (n = −6, ..., 0, ..., 6; previously n = 0, ..., 13). Signals are transmitted in a 38° cone, using right-hand circular polarization, at an EIRP between 25 and 27 dBW (316 to 500 watts). Note that the 24-satellite constellation is accommodated with only 15 channels by using identical frequency channels to support antipodal (opposite side of planet in orbit) satellite pairs, as these satellites are never both in view of an Earth-based user at the same time.
The L2 band signals use the same FDMA as the L1 band signals, but transmit straddling 1246 MHz, with the center frequency 1246 MHz + n × 0.4375 MHz, where n spans the same range as for L1.[11] In the original GLONASS design, only the obfuscated high-precision signal was broadcast in the L2 band, but starting with GLONASS-M, an additional civil reference signal L2OF is broadcast with an identical standard-precision code to the L1OF signal. The open standard-precision signal is generated with modulo-2 addition (XOR) of a 511 kbit/s pseudo-random ranging code, a 50 bit/s navigation message, and an auxiliary 100 Hz meander sequence (Manchester code), all generated using a single time/frequency oscillator. The pseudo-random code is generated with a 9-stage shift register operating with a period of 1 millisecond. The navigational message is modulated at 50 bits per second. The superframe of the open signal is 7500 bits long and consists of 5 frames of 30 seconds, taking 150 seconds (2.5 minutes) to transmit the continuous message. Each frame is 1500 bits long and consists of 15 strings of 100 bits (2 seconds for each string), with 85 bits (1.7 seconds) for data and check-sum bits, and 15 bits (0.3 seconds) for the time mark. Strings 1–4 provide immediate data for the transmitting satellite, and are repeated every frame; the data include ephemeris, clock and frequency offsets, and satellite status. Strings 5–15 provide non-immediate data (i.e. almanac) for each satellite in the constellation, with frames I–IV each describing five satellites, and frame V describing the remaining four satellites. The ephemerides are updated every 30 minutes using data from the ground control segment; they use Earth-centred Earth-fixed (ECEF) Cartesian coordinates in position and velocity, and include lunisolar acceleration parameters. The almanac uses modified orbital elements (Keplerian elements) and is updated daily.
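The channel-to-frequency mapping for the FDMA signals above is straightforward arithmetic; a small sketch (the helper names are my own):

```python
def glonass_l1_hz(n):
    # L1 FDMA centre frequency: 1602 MHz + n * 0.5625 MHz
    return (1602.0 + n * 0.5625) * 1e6

def glonass_l2_hz(n):
    # L2 FDMA centre frequency: 1246 MHz + n * 0.4375 MHz
    return (1246.0 + n * 0.4375) * 1e6
```

A consequence of these two formulas is that the L1/L2 frequency ratio is exactly 9/7 for every channel (1602/1246 = 0.5625/0.4375 = 9/7), which receivers can exploit when combining the two bands for ionospheric correction.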
The more accurate high-precision signal is available to authorized users, such as the Russian military; yet unlike the United States P(Y) code, which is modulated by an encrypting W code, the GLONASS restricted-use codes are broadcast in the clear, relying only on security through obscurity. The details of the high-precision signal have not been disclosed. The modulation (and therefore the tracking strategy) of the data bits on the L2SF code has recently changed from unmodulated to 250 bit/s bursts at random intervals. The L1SF code is modulated by the navigation data at 50 bit/s without a Manchester meander code. The high-precision signal is broadcast in phase quadrature with the standard-precision signal, effectively sharing the same carrier wave but with a ten-times-higher bandwidth than the open signal. The message format of the high-precision signal remains unpublished, although attempts at reverse engineering indicate that the superframe is composed of 72 frames, each containing 5 strings of 100 bits and taking 10 seconds to transmit, for a total length of 36,000 bits or 720 seconds (12 minutes) for the whole navigation message. The additional data appear to be allocated to critical lunisolar acceleration parameters and clock correction terms. At peak efficiency, the standard-precision signal offers horizontal positioning accuracy within 5–10 metres, vertical positioning within 15 m (49 ft), a velocity vector measurement within 100 mm/s (3.9 in/s), and timing within 200 nanoseconds, all based on simultaneous measurements from four first-generation satellites;[12] newer satellites such as GLONASS-M improve on this. GLONASS uses a coordinate datum named "PZ-90" (Earth Parameters 1990 – Parametry Zemli 1990), in which the precise location of the North Pole is given as an average of its position from 1990 to 1995. This is in contrast to the GPS coordinate datum, WGS 84, which uses the location of the North Pole in 1984.
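The reverse-engineered superframe figures quoted above are also mutually consistent, assuming the same 50 bit/s data rate as the open signal (these numbers come from unofficial reverse engineering, as the text notes):

```python
# Consistency check of the reverse-engineered high-precision message
# structure described above (unofficial figures from the text).
BIT_RATE = 50            # bit/s navigation data rate (assumed, as for L1SF)
FRAMES = 72
STRINGS_PER_FRAME = 5
STRING_BITS = 100

frame_bits = STRINGS_PER_FRAME * STRING_BITS     # 500 bits per frame
superframe_bits = FRAMES * frame_bits            # 36,000 bits total

frame_seconds = frame_bits / BIT_RATE            # 10 s per frame
superframe_seconds = superframe_bits / BIT_RATE  # 720 s = 12 minutes
```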
As of 17 September 2007, the PZ-90 datum has been updated to version PZ-90.02, which differs from WGS 84 by less than 400 mm (16 in) in any given direction. Since 31 December 2013, version PZ-90.11 has been broadcast, which is aligned to the International Terrestrial Reference System and Frame 2008 at epoch 2011.0 at the centimetre level, though ideally a conversion to ITRF2008 should still be performed.[13][14] Since 2008, new CDMA signals have been researched for use with GLONASS.[15][16][17][18][19][20][21][22][23] The interface control documents for GLONASS CDMA signals were published in August 2016.[24] According to GLONASS developers, there will be three open and two restricted CDMA signals. The open signal L3OC is centered at 1202.025 MHz and uses BPSK(10) modulation for both data and pilot channels; the ranging code transmits at 10.23 million chips per second, modulated onto the carrier frequency using QPSK with in-phase data and quadrature pilot. The data channel is error-coded with a 5-bit Barker code and the pilot with a 10-bit Neuman-Hoffman code.[25][26] The open L1OC and restricted L1SC signals are centered at 1600.995 MHz, and the open L2OC and restricted L2SC signals are centered at 1248.06 MHz, overlapping with the GLONASS FDMA signals. The open signals L1OC and L2OC use time-division multiplexing to transmit pilot and data signals, with BPSK(1) modulation for data and BOC(1,1) modulation for the pilot; the wide-band restricted signals L1SC and L2SC use BOC(5, 2.5) modulation for both data and pilot, transmitted in quadrature phase to the open signals, which places peak signal strength away from the center frequency of the narrow-band open signals.[21][27] Binary phase-shift keying (BPSK) is used by standard GPS and GLONASS signals. Binary offset carrier (BOC) is the modulation used by Galileo, modernized GPS, and BeiDou-2. The navigation message of the CDMA signals is transmitted as a sequence of text strings.
The message has variable size: each pseudo-frame usually includes six strings and contains ephemerides for the current satellite (string types 10, 11, and 12 in a sequence) and part of the almanac for three satellites (three strings of type 20). To transmit the full almanac for all 24 current satellites, a superframe of 8 pseudo-frames is required. In the future, the superframe will be expanded to 10 pseudo-frames of data to cover a full 30 satellites.[28] The message can also contain Earth's rotation parameters, ionosphere models, long-term orbit parameters for GLONASS satellites, and COSPAS-SARSAT messages. The system time marker is transmitted with each string; UTC leap second correction is achieved by shortening or lengthening (zero-padding) the final string of the day by one second, with abnormal strings being discarded by the receiver.[28] The strings have a version tag to facilitate forward compatibility: future upgrades to the message format will not break older equipment, which will continue to work by ignoring new data (as long as the constellation still transmits the old string types), while up-to-date equipment will be able to use additional information from newer satellites.[29] The navigation message of the L3OC signal is transmitted at 100 bit/s, with each string of symbols taking 3 seconds (300 bits). A pseudo-frame of 6 strings takes 18 seconds (1800 bits) to transmit. A superframe of 8 pseudo-frames is 14,400 bits long and takes 144 seconds (2 minutes 24 seconds) to transmit the full almanac. The navigation message of the L1OC signal is also transmitted at 100 bit/s. Its string is 250 bits long and takes 2.5 seconds to transmit. A pseudo-frame is 1500 bits (15 seconds) long, and a superframe is 12,000 bits or 120 seconds (2 minutes). The L2OC signal does not transmit a navigation message, only the pseudo-range codes: ‡ Glonass-M spacecraft produced since 2014 include the L3OC signal. The Glonass-K1 test satellite launched in 2011 introduced the L3OC signal.
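The CDMA message timings quoted above all follow from the 100 bit/s data rate; a quick consistency check:

```python
# Timing of the CDMA navigation messages described above,
# both transmitted at 100 bit/s.
BIT_RATE = 100  # bit/s for L3OC and L1OC

def seconds(bits: int) -> float:
    """Transmission time for a given number of message bits."""
    return bits / BIT_RATE

# L3OC: 300-bit strings, 6 strings per pseudo-frame, 8 pseudo-frames
# per superframe.
l3_string = seconds(300)          # 3 s per string
l3_pframe = seconds(300 * 6)      # 18 s (1800 bits) per pseudo-frame
l3_super = seconds(300 * 6 * 8)   # 144 s (14,400 bits) per superframe

# L1OC: 250-bit strings, 1500-bit pseudo-frames, 12,000-bit superframes.
l1_string = seconds(250)          # 2.5 s per string
l1_pframe = seconds(1500)         # 15 s per pseudo-frame
l1_super = seconds(12_000)        # 120 s (2 minutes) per superframe
```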
Glonass-M satellites produced since 2014 (s/n 755+) also transmit the L3OC signal for testing purposes. Enhanced Glonass-K1 and Glonass-K2 satellites, to be launched from 2023, will feature a full suite of modernized CDMA signals in the existing L1 and L2 bands (L1SC, L1OC, L2SC, and L2OC), as well as the L3OC signal. The Glonass-K2 series should gradually replace existing satellites starting from 2023, at which point Glonass-M launches will cease.[23][30] Glonass-KM satellites will be launched by 2025. Additional open signals are being studied for these satellites, based on frequencies and formats used by existing GPS, Galileo, and Beidou/COMPASS signals: such an arrangement will allow easier and cheaper implementation of multi-standard GNSS receivers. With the introduction of CDMA signals, the constellation will be expanded to 30 active satellites by 2025; this may require eventual deprecation of the FDMA signals.[32] The new satellites will be deployed into three additional planes, bringing the total to six planes from the current three, aided by the System for Differential Correction and Monitoring (SDCM), a GNSS augmentation system based on a network of ground-based control stations and the communication satellites Luch 5A and Luch 5B.[33][34] GLONASS-KM satellites will also use a new L3SVI open signal to broadcast Precise Point Positioning (PPP) corrections to deliver GLONASS high-accuracy services.[35] Six additional Glonass-V satellites, using Tundra orbits in three orbital planes, will be launched starting in 2025;[5] this regional high-orbit segment will offer increased regional availability and a 25% improvement in precision over the Eastern Hemisphere, similar to the Japanese QZSS system and Beidou-1.[36] The new satellites will form two ground traces with an inclination of 64.8°, an eccentricity of 0.072, a period of 23.9 hours, and ascending node longitudes of 60° and 120°.
Glonass-V vehicles are based on the Glonass-K platform and will broadcast the new CDMA signals only.[36] Previously, a Molniya orbit, geosynchronous orbit, or inclined orbit were also under consideration for the regional segment.[17][28] Roscosmos also plans to launch up to 240 small satellites into low Earth orbit (LEO) to improve signal availability and interference resistance; the LEO satellites will have a limited lifespan of 5 years to allow a faster pace of replenishment.[35] The main contractor of the GLONASS program is Joint Stock Company Information Satellite Systems Reshetnev (ISS Reshetnev, formerly called NPO-PM). The company, located in Zheleznogorsk, is the designer of all GLONASS satellites, in cooperation with the Institute for Space Device Engineering (ru:РНИИ КП) and the Russian Institute of Radio Navigation and Time. Serial production of the satellites is accomplished by the company Production Corporation Polyot in Omsk. Over three decades of development, the satellite designs have gone through numerous improvements and can be divided into three generations: the original GLONASS (since 1982), GLONASS-M (since 2003), and GLONASS-K (since 2011). Each GLONASS satellite has the GRAU designation 11F654, and each also has a military "Cosmos-NNNN" designation.[37] The first generation of GLONASS (also called Uragan) satellites were all three-axis-stabilized vehicles, generally weighing 1,250 kg (2,760 lb), and were equipped with a modest propulsion system to permit relocation within the constellation. Over time they were upgraded to Block IIa, IIb, and IIv vehicles, with each block containing evolutionary improvements. Six Block IIa satellites were launched in 1985–1986 with improved time and frequency standards over the prototypes, and increased frequency stability. These spacecraft also demonstrated a 16-month average operational lifetime.
Block IIb spacecraft, with a two-year design lifetime, appeared in 1987; a total of 12 were launched, but half were lost in launch vehicle accidents. The six spacecraft that made it to orbit worked well, operating for an average of nearly 22 months. Block IIv was the most prolific of the first generation. Used exclusively from 1988 to 2000, and continuing to be included in launches through 2005, a total of 56 of these satellites were launched. The design life was three years, but numerous spacecraft exceeded it, with one late model lasting 68 months, nearly double.[38] Block II satellites were typically launched three at a time from the Baikonur Cosmodrome using Proton-K Blok-DM2 or Proton-K Briz-M boosters. The only exception was on two launches, when an Etalon geodetic reflector satellite was substituted for a GLONASS satellite. The second generation of satellites, known as Glonass-M, was developed beginning in 1990 and first launched in 2003. These satellites possess a substantially increased lifetime of seven years and weigh slightly more, at 1,480 kg (3,260 lb). They are approximately 2.4 m (7 ft 10 in) in diameter and 3.7 m (12 ft) high, with a solar array span of 7.2 m (24 ft) for an electrical power generation capability of 1,600 watts at launch. The aft payload structure houses 12 primary antennas for L-band transmissions. Laser corner-cube reflectors are also carried to aid precise orbit determination and geodetic research. On-board cesium clocks provide the local clock source. 52 Glonass-M satellites have been produced and launched. A total of 41 second-generation satellites were launched through the end of 2013. As with the previous generation, the second-generation spacecraft were launched three at a time using Proton-K Blok-DM2 or Proton-K Briz-M boosters. Some were launched alone with Soyuz-2-1b/Fregat. In July 2015, ISS Reshetnev announced that it had completed the last GLONASS-M (No.
61) spacecraft and was putting it into storage to await launch, along with eight previously built satellites.[39][40] On 22 September 2017, the GLONASS-M No. 52 satellite went into operation, and the orbital grouping again increased to 24 space vehicles.[41] GLONASS-K is a substantial improvement over the previous generation: it is the first unpressurised GLONASS satellite, with a much reduced mass of 750 kg (1,650 lb) versus the 1,450 kg (3,200 lb) of GLONASS-M. It has an operational lifetime of 10 years, compared to the 7-year lifetime of the second-generation GLONASS-M. It will transmit more navigation signals to improve the system's accuracy, including new CDMA signals in the L3 and L5 bands, which will use modulation similar to modernized GPS, Galileo, and BeiDou. The Glonass-K series consists of 26 satellites with satellite indices 65–98, widely used by the Russian military space forces.[42][43] The new satellite's advanced equipment, made solely from Russian components, will allow the doubling of GLONASS's accuracy.[7] As with the previous satellites, these are three-axis stabilized and nadir-pointing, with dual solar arrays.[citation needed] The first GLONASS-K satellite was successfully launched on 26 February 2011.[42][44] Due to their weight reduction, GLONASS-K spacecraft can be launched in pairs from the Plesetsk Cosmodrome launch site using the substantially lower-cost Soyuz-2.1b boosters, or six at a time from the Baikonur Cosmodrome using Proton-K Briz-M launch vehicles.[7][8] The ground control segment of GLONASS is almost entirely located within former Soviet Union territory, except for several stations in Brazil and one in Nicaragua.[45][46][47][48] The GLONASS ground segment consists of:[49] Companies producing GNSS receivers making use of GLONASS: NPO Progress describes a receiver called GALS-A1, which combines GPS and GLONASS reception.
SkyWave Mobile Communications manufactures an Inmarsat-based satellite communications terminal that uses both GLONASS and GPS.[52] As of 2011, some of the latest receivers in the Garmin eTrex line also support GLONASS (along with GPS).[53] Garmin also produces a standalone Bluetooth receiver, the GLO for Aviation, which combines GPS, WAAS, and GLONASS.[54] Various smartphones from 2011 onwards have integrated GLONASS capability in addition to their pre-existing GPS receivers, with the intention of reducing signal acquisition time by allowing the device to pick up more satellites than with a single-network receiver, including devices from: As of 17 February 2024, the GLONASS constellation status is:[62] The system requires 18 satellites for continuous navigation services covering all of Russia, and 24 satellites to provide services worldwide.[citation needed] The GLONASS system covers 100% of worldwide territory. On 2 April 2014, the system experienced a technical failure that resulted in the practical unavailability of the navigation signal for around 12 hours.[63] On 14–15 April 2014, nine GLONASS satellites experienced a technical failure due to software problems.[64] On 19 February 2016, three GLONASS satellites experienced technical failures: the batteries of GLONASS-738 exploded, the batteries of GLONASS-737 were depleted, and GLONASS-736 experienced a station-keeping failure due to human error during maneuvering. GLONASS-737 and GLONASS-736 were expected to be operational again after maintenance, and one new satellite (GLONASS-751), replacing GLONASS-738, was expected to complete commissioning in early March 2016. The full capacity of the satellite group was expected to be restored by mid-March 2016.[65] After the launch of two new satellites and maintenance of two others, the full capacity of the satellite group was restored.
According to data from the Russian System of Differential Correction and Monitoring, as of 2010, the precision of GLONASS navigation solutions (for p = 0.95) in latitude and longitude was 4.46–7.38 m (14.6–24.2 ft), with a mean number of navigation space vehicles (NSVs) of 7–8 (depending on station). In comparison, over the same period the precision of GPS navigation solutions was 2.00–8.76 m (6 ft 7 in – 28 ft 9 in), with a mean number of NSVs of 6–11 (depending on station). Some modern receivers are able to use both GLONASS and GPS satellites together, providing greatly improved coverage in urban canyons and a very fast time to fix, since over 50 satellites are available. Indoors, in urban canyons, or in mountainous areas, accuracy can be greatly improved over using GPS alone. Using both navigation systems simultaneously, the precision of GLONASS/GPS navigation solutions was 2.37–4.65 m (7 ft 9 in – 15 ft 3 in), with a mean number of NSVs of 14–19 (depending on station). In May 2009, Anatoly Perminov, then director of Roscosmos, stated that actions were being undertaken to expand GLONASS's constellation and to improve the ground segment in order to increase the accuracy of GLONASS positioning to 2.8 m (9 ft 2 in) by 2011.[66] In particular, the latest satellite design, GLONASS-K, has the ability to double the system's accuracy once introduced. The system's ground segment is also to undergo improvements. As of early 2012, sixteen positioning ground stations were under construction in Russia and in Antarctica at the Bellingshausen and Novolazarevskaya bases. New stations will be built around the Southern Hemisphere, from Brazil to Indonesia. Together, these improvements are expected to bring GLONASS's accuracy to 0.6 m or better by 2020.[67] The setup of a GLONASS receiving station in the Philippines is also now under negotiation.[68]
https://en.wikipedia.org/wiki/GLONASS
Google Latitude was a location-aware feature of Google Maps, developed by Google as a successor to its earlier SMS-based service Dodgeball. Latitude allowed a mobile phone user to let certain people view their current location. Via their own Google Account, the user's cell phone location was mapped on Google Maps. The user could control the accuracy and detail of what each of the other users could see: an exact location could be allowed, or it could be limited to identifying the city only. For privacy, it could also be turned off by the user, or a location could be manually entered. Users had to explicitly opt into Latitude and were only able to see the locations of those friends who had decided to share their location with them.[1] On July 10, 2013, Google announced plans to shut down Latitude, and it was discontinued on August 9, 2013.[2] After the feature moved to Google+ in the interim, Google incorporated Latitude's location-sharing feature into Google Maps in March 2017.[3][4] Dodgeball was founded in 2000 by New York University students Dennis Crowley and Alex Rainert. The company was acquired by Google in 2005, and Crowley and Rainert were hired,[5] which led to the coinage of the term acquihire. In April 2007, Crowley and Rainert left Google, with Crowley describing their experience there as "incredibly frustrating".[6] After leaving Google, Crowley created a similar service known as Foursquare with the help of Naveen Selvadurai.[7] Dodgeball offered its facility to users by way of SMS. Dodgeball was available for the cities of Seattle, Portland, San Francisco, Los Angeles, Las Vegas, San Diego, Phoenix, Dallas–Fort Worth, Austin, Houston, New Orleans, Miami, Atlanta, Washington, D.C., Philadelphia, New York City, Boston, Detroit, Chicago, Madison, Minneapolis–St.
Paul, and Denver.[8] In January 2009, Vic Gundotra, Vice President of Engineering at Google, announced that the company would "discontinue Dodgeball.com in the next couple of months, after which this service will no longer be available."[9] Dodgeball was shut down and succeeded in February 2009 by Google Latitude.[10] With Google Latitude, the service expanded to PC browsers (using the Geolocation API as well as user-driven input) and automated location detection on mobile phones using cellular positioning, Wi-Fi positioning, and GPS. In November 2009, Google announced a Latitude feature called "Location History", which stored and analyzed a user's location over time, for example attempting to identify the user's home and workplace.[11] Web-based Location History is now provided by Google Maps. In May 2010, Google announced an API for Latitude, allowing developers to incorporate Latitude functionality into their apps. The functionality was opt-in and had to be enabled by users due to the sensitivity of location data.[12] Users had the ability to share their exact location, a more general city-level location, or even a location as a destination. In February 2012, a Leaderboard feature was added that provided point scoring and score comparison with friends.[13][14] On July 10, 2013, Google announced plans to shut down Google Latitude on August 9, 2013.[2] Google then offered location reporting on Google+, but this did not run on all the platforms that Google Maps does (BlackBerry, Windows Mobile, S60, etc.). Later it was fully migrated into Google Maps.[4] Google Latitude was compatible with most devices running iOS, Android, BlackBerry OS, Windows Mobile, and Symbian S60.[15][16] Initially Google stated on the Latitude page that it would be available for Java ME phones,[citation needed] but this claim was later removed from the site.
On most platforms, Latitude could continue to update the user's location in the background when the application was not in use, while on others it only updated the user's location when the application was in use. The Sony Ericsson W995, C905, C903, C510, Elm, and Satio mobile phones supported Google Latitude as part of their built-in Google Maps application. Although this was a Java ME application, it could not be downloaded for use on other mobile phones. Amid concerns over locational privacy,[17] Google announced that Latitude overwrote a user's previous location with new location data and did not keep logs of locations provided to the service.[18][19] It also showed to whom the location was being shared, and sharing remained traceable around the clock. By early 2011, Google Latitude could optionally record a history of places visited and count the time spent at each place. This information was then used to display statistics such as "Time at Work", "Time Spent at Home", and "Time Spent Out". OwnTracks is a free and open-source software package for tracking people without relying on third-party cloud services. It has been described as an alternative to the now-defunct Google Latitude. The project was founded in 2014.[20]
https://en.wikipedia.org/wiki/Google_Latitude
A satellite navigation (satnav) device or GPS device is a device that uses satellites of the Global Positioning System (GPS) or similar global navigation satellite systems (GNSS). A satnav device can determine the user's geographic coordinates and may display the geographical position on a map and offer routing directions (as in turn-by-turn navigation). As of 2023, four GNSS systems are operational: the original United States GPS, the European Union's Galileo, Russia's GLONASS,[1][2] and China's BeiDou Navigation Satellite System. The Indian Regional Navigation Satellite System (IRNSS) will follow, and Japan's Quasi-Zenith Satellite System (QZSS), scheduled for 2023, will augment the accuracy of a number of GNSS. A satellite navigation device can retrieve location and time information from one or more GNSS systems in all weather conditions, anywhere on or near the Earth's surface. Satnav reception requires an unobstructed line of sight to four or more GNSS satellites,[3] and is subject to poor satellite signal conditions. In exceptionally poor signal conditions, for example in urban areas, satellite signals may exhibit multipath propagation, where signals bounce off structures, or may be weakened by meteorological conditions. Obstructed lines of sight may arise from a tree canopy or inside a structure, such as a building, garage, or tunnel. Today, most standalone satnav receivers are used in automobiles. The satnav capability of smartphones may use assisted GNSS (A-GNSS) technology, which can use base stations or cell towers to provide a faster time to first fix (TTFF), especially when satellite signals are poor or unavailable. However, the mobile-network part of the A-GNSS technology is not available when the smartphone is outside the range of the mobile network, while the satnav aspect otherwise continues to be available.
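The four-satellite requirement above comes from the fact that a receiver must solve for four unknowns: its three position coordinates plus its own clock bias. A toy Gauss-Newton sketch of this solve, with four satellites giving four pseudorange equations (all positions, the clock-bias value, and the units here are invented for illustration; this is not a real receiver algorithm):

```python
# Toy illustration of a four-satellite position fix: each pseudorange
# constrains (x, y, z, clock bias b), so four measurements determine
# all four unknowns. Values below are invented; units are arbitrary.
import numpy as np

sats = np.array([              # hypothetical satellite positions
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])
true_pos = np.array([100.0, 200.0, 300.0])
true_bias = 5.0                # clock bias expressed as a range error

# Simulated pseudoranges: geometric range plus the clock-bias term.
rho = np.linalg.norm(sats - true_pos, axis=1) + true_bias

# Gauss-Newton iteration on the linearized measurement equations.
est = np.zeros(4)              # (x, y, z, b), starting from the origin
for _ in range(10):
    ranges = np.linalg.norm(sats - est[:3], axis=1)
    predicted = ranges + est[3]
    # Jacobian: negated unit line-of-sight vectors, plus 1 for the bias.
    H = np.hstack([-(sats - est[:3]) / ranges[:, None], np.ones((4, 1))])
    est += np.linalg.solve(H, rho - predicted)
# est converges to approximately (100, 200, 300, 5)
```

With only three satellites the system above is underdetermined, which is why a fourth measurement (or an external clock) is needed; additional satellites beyond four simply turn the solve into an overdetermined least-squares problem.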
As with many other technological breakthroughs of the latter 20th century, the modern GNSS system can reasonably be argued to be a direct outcome of the Cold War. The multibillion-dollar[citation needed] expense of the US and Russian programs was initially justified by military interest. In contrast, the European Galileo was conceived as purely civilian. In 1960, the US Navy put into service its Transit satellite-based navigation system to aid in naval navigation. In the mid-1960s, the US Navy conducted an experiment to track a missile-carrying submarine using six satellites in polar orbits and was able to observe the satellite changes.[4] Between 1960 and 1982, as the benefits were shown, the US military consistently improved and refined its satellite navigation technology and satellite system. In 1973, the US military began to plan for a comprehensive worldwide navigation system, which eventually became known as GPS (the Global Positioning System). In 1983, in the wake of the tragedy of Korean Air Lines Flight 007, an aircraft shot down while in Soviet airspace due to a navigational error, President Ronald Reagan made the navigation capabilities of the existing military GPS system available for dual civilian use. However, civilian use initially received only a slightly degraded "Selective Availability" positioning signal. This new availability of the US military GPS system for civilian use required a certain technical collaboration with the private sector for some time before it could become a commercial reality. The Macrometer Interferometric Surveyor was the first commercial GNSS-based system for performing geodetic measurements.[5][6] In 1989, Magellan Navigation Inc. unveiled its Magellan NAV 1000, the world's first commercial handheld GPS receiver. These units initially sold for approximately US$2,900 each.
In 1990, Mazda's Eunos Cosmo was the first production car in the world with a built-in satnav system.[7] In 1991, Mitsubishi introduced satnav car navigation on the Mitsubishi Debonair (MMCS: Mitsubishi Multi Communication System).[8] In 1997, a navigation system using Differential GPS was developed as a factory-installed option on the Toyota Prius.[9] In 2000, the Clinton administration removed the military-use signal restrictions, thus providing full commercial access to the US satnav satellite system. As GNSS navigation systems became more widespread and popular, the pricing of such systems began to fall, and their availability steadily increased. Several additional manufacturers of these systems, such as Garmin (1991), Benefon (1999), Mio (2002), and TomTom (2002), entered the market. The Mitac Mio 168 was the first Pocket PC to contain a built-in GPS receiver.[10] Benefon's 1999 entry into the market also presented users with the world's first phone-based GPS navigation system. Later, as smartphone technology developed, a GPS chip eventually became standard equipment for most smartphones. To date, ever more popular satellite navigation systems and devices continue to proliferate with newly developed software and hardware applications; satnav has been incorporated, for example, into cameras. While the American GPS was the first satellite navigation system to be deployed on a fully global scale, and to be made available for commercial use, it is not the only system of its type. Due to military and other concerns, similar global or regional systems have been, or will soon be, deployed by Russia, the European Union, China, India, and Japan. GNSS devices vary in sensitivity, speed, vulnerability to multipath propagation, and other performance parameters. High-sensitivity receivers use large banks of correlators[clarification needed][citation needed] and digital signal processing to search for signals very quickly.
This results in very fast times to first fix when the signals are at their normal levels, for example outdoors. When signals are weak, for example indoors, the extra processing power can be used to integrate weak signals to the point where they can be used to provide a position or timing solution. GNSS signals are already very weak when they arrive at the Earth's surface. The GPS satellites transmit only 27 W (14.3 dBW) from a distance of 20,200 km in orbit above the Earth. By the time the signals arrive at the user's receiver, they are typically as weak as −160 dBW, equivalent to 100 attowatts (10−16 W). This is well below the thermal noise level in the signal's bandwidth. Outdoors, GPS signals are typically around the −155 dBW level (−125 dBm). Conventional GPS receivers integrate the received GPS signals for the duration of one complete C/A code cycle, which is 1 ms. This results in the ability to acquire and track signals down to around the −160 dBW level. High-sensitivity GPS receivers are able to integrate the incoming signals for up to 1,000 times longer than this and can therefore acquire signals up to 1,000 times weaker, resulting in an integration gain of 30 dB. A good high-sensitivity GPS receiver can acquire signals down to −185 dBW, and tracking can be continued down to levels approaching −190 dBW. High-sensitivity GPS can provide positioning in many, but not all, indoor locations. Signals are either heavily attenuated by the building materials or reflected, as in multipath.
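The decibel figures above fit together arithmetically: a 1,000× longer integration time is exactly 30 dB, and −160 dBW is exactly 100 attowatts. A small sketch (helper names are illustrative):

```python
# Decibel arithmetic behind the figures quoted above.
import math

def db(ratio: float) -> float:
    """Express a power (or integration-time) ratio in decibels."""
    return 10 * math.log10(ratio)

def dbw_to_watts(dbw: float) -> float:
    """Convert a dBW level back to watts."""
    return 10 ** (dbw / 10)

gain = db(1000)                # integrating 1000x longer: ~30 dB
tx_power = db(27)              # 27 W transmit power: ~14.3 dBW
rx_power = dbw_to_watts(-160)  # -160 dBW: ~1e-16 W (100 attowatts)
```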
Given that high-sensitivity GPS receivers may be up to 30 dB more sensitive, this is sufficient to track through 3 layers of dry bricks, or up to 20 cm (8 inches) of steel-reinforced concrete, for example.[citation needed] Examples of high-sensitivity receiver chips include SiRFstarIII and MediaTek's MTK II.[11] In aviation, GPS receivers can be "armed" in approach mode for the destination airport, so that when the aircraft is within 30 nmi (56 km; 35 mi), the receiver sensitivity automatically changes from en route (±5 nm) and RAIM (±2 nm) to terminal (±1 nm), and changes again to ±0.3 nm at 2 nmi (3.7 km; 2.3 mi) before reaching the final approach waypoint.[12] A sequential GPS receiver tracks the necessary satellites using typically one or two hardware channels.[13] The set tracks one satellite at a time, time-tags the measurements, and combines them when all four satellite pseudoranges have been measured. These receivers are among the least expensive available, but they cannot operate under high dynamics and have the slowest time-to-first-fix (TTFF) performance. Consumer GNSS navigation devices include: Dedicated devices have various degrees of mobility. Hand-held, outdoor, or sport receivers have replaceable batteries that can run them for several hours, making them suitable for hiking, bicycle touring, and other activities far from an electric power source. Their design is ergonomic, their screens are small, and some do not show color, in part to save power. Some use transflective liquid-crystal displays, allowing use in bright sunlight. Cases are rugged and some are water-resistant. Other receivers, often called mobile, are intended primarily for use in a car, but have a small rechargeable internal battery that can power them for an hour or two[citation needed] away from the car. Special-purpose devices for use in a car may be permanently installed and depend entirely on the automotive electrical system. Many of them have touch-sensitive screens as the input method.
Maps may be stored on a memory card. Some devices offer additional functionality such as a rudimentary music player, image viewer, and video player.[14] The pre-installed embedded software of early receivers did not display maps; 21st-century receivers commonly show interactive street maps (of certain regions) that may also show points of interest, route information, and step-by-step routing directions, often in spoken form with a feature called "text to speech". Manufacturers include: Almost all smartphones now incorporate GNSS receivers[citation needed]. This has been driven both by consumer demand and by service suppliers. There are now many phone apps that depend on location services, such as navigational aids, and multiple commercial opportunities, such as localised advertising. In its early development, access to user location services was driven by European and American emergency services seeking to locate callers.[15] All smartphone operating systems offer free mapping and navigational services that require a data connection; some allow the pre-purchase and downloading of maps, but the demand for this is diminishing, as maps that rely on a data connection can generally be cached anyway. There are many navigation applications, and new versions are constantly being introduced. Major apps include Google Maps Navigation, Apple Maps, and Waze, which require data connections, and iGO for Android, and Maverick and HERE for Windows Phone, which use cached maps and can operate without a data connection. Consequently, almost any smartphone now qualifies as a personal navigation assistant. The use of mobile phones as navigational devices has outstripped the use of standalone GNSS devices.
In 2009, independent analyst firm Berg Insight found that GNSS-enabled GSM/WCDMA handsets in the USA alone numbered 150 million units,[16] against the sale of only 40 million standalone GNSS receivers.[17] Assisted GPS (A-GPS) uses a combination of satellite data and cell tower data to shorten the time to first fix, reduce the need to download a satellite almanac periodically, and help resolve a location when satellite signals are disturbed by the proximity of large buildings. When out of range of a cell tower, the location performance of a phone using A-GPS may be reduced. Phones with an A-GPS-based hybrid positioning system can maintain a location fix when GPS signals are inadequate by using cell tower triangulation and WiFi hotspot locations. Most smartphones download a satellite almanac when online to accelerate a GPS fix when out of cell tower range.[18] Some older, Java-enabled phones lacking integrated GPS may still use external GPS receivers via serial or Bluetooth connections, but the need for this is now rare. By tethering to a laptop, some phones can provide localisation services to a laptop as well.[19] Software companies have made available GPS navigation software programs for in-vehicle use on laptop computers.[20] Benefits of GPS on a laptop include a larger map overview and the ability to use the keyboard to control GPS functions; some GPS software for laptops offers advanced trip-planning features not available on other platforms, such as midway stops and the capability of finding alternative scenic routes as well as a highway-only option. Palms[21] and Pocket PCs can also be equipped with GPS navigation.[22] A pocket PC differs from a dedicated navigation device in that it has its own operating system and can also run other applications. Other GPS devices need to be connected to a computer in order to work. This computer can be a home computer, laptop, PDA, digital camera, or smartphone.
Depending on the type of computer and available connectors, connections can be made through a serial or USB cable, as well as Bluetooth, CompactFlash, SD, PCMCIA and the newer ExpressCard.[23] Some PCMCIA/ExpressCard GPS units also include a wireless modem.[24] Devices usually do not come with pre-installed GPS navigation software; once purchased, the user must install or write their own software. As the user can choose which software to use, it can be better matched to their personal taste. It is very common for a PC-based GPS receiver to come bundled with a navigation software suite. Also, software modules are significantly cheaper than complete stand-alone systems (around €50 to €100). The software may include maps only for a particular region, or the entire world, if software such as Google Maps is used. Some hobbyists have also made their own satnav devices and open-sourced the plans. Examples include the Elektor GPS units.[25][26] These are based around a SiRFstarIII chip and are comparable to their commercial counterparts. Other chips and software implementations are also available.[27] An automotive navigation system takes its location from a GNSS system and, depending on the installed software, may offer a range of services. Aviators use satnav to navigate and to improve safety and the efficiency of flight. This may allow pilots to be independent of ground-based navigational aids, enable more efficient routes, and provide navigation into airports that lack ground-based navigation and surveillance equipment. Some GPS units allow aviators to use augmented satellite signals for safe landings in poor-visibility conditions. Two new GPS signals have been introduced: the first helps in critical conditions in the sky, and the other makes GPS a more robust navigation service.
Many aviation services now require the use of GPS.[28] Commercial aviation applications include GNSS devices that calculate location and feed that information to large multi-input navigational computers for autopilot, course information and correction displays for the pilots, and course tracking and recording devices. Military applications include devices similar to consumer sport products for foot soldiers (commanders and regular soldiers), small vehicles and ships, and devices similar to commercial aviation applications for aircraft and missiles. Examples are the United States military's Commander's Digital Assistant and the Soldier Digital Assistant.[29][30][31][32] Prior to May 2000, only the military had access to the full accuracy of GPS. Consumer devices were restricted by selective availability (SA), which was scheduled to be phased out but was removed abruptly by President Clinton.[33] Differential GPS is a method of cancelling out the error of SA and improving GPS accuracy, and has been routinely available in commercial applications such as golf carts.[34] GPS is limited to about 15-meter accuracy even without SA; DGPS can be accurate to within a few centimeters.[35] GPS maps and directions are occasionally imprecise.[citation needed] Some people have gotten lost by asking for the shortest route.[36][37][38][39] Brad Preston of Oregon claims that people are routed into his driveway five to eight times a week because their satnav shows a street through his property.[39] Other hazards involve an alley being listed as a street, a lane being identified as a road,[40] or rail tracks as a road.[41] User privacy may be compromised if satnav-equipped handheld devices such as mobile phones upload user geo-location data through associated software installed on the device.
User geo-location is currently the basis for navigational apps such as Google Maps, and for location-based advertising, which can promote nearby shops and may allow an advertising agency to track user movements and habits for future use. Regulatory bodies differ between countries regarding the treatment of geo-location data as privileged or not. Privileged data cannot be stored, or otherwise used, without the user's consent.[42] Vehicle tracking systems allow employers to track their employees' location, raising questions regarding violation of employee privacy. There are cases where employers continued to collect geo-location data when an employee was off duty, on private time.[43] Rental car services may use the same technique to geo-fence their customers to the areas they have paid for, charging additional fees for violations.[44] In 2010, the New York Civil Liberties Union filed a case against the Labor Department for firing Michael Cunningham after tracking his daily activity and locations using a satnav device attached to his car.[45] Private investigators use planted GPS devices to provide information to their clients on a target's movements.
https://en.wikipedia.org/wiki/GPS_phone
An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.[1] A large variety of techniques and devices are used to provide indoor positioning, ranging from reconfigured devices already deployed, such as smartphones, WiFi and Bluetooth antennas, digital cameras, and clocks, to purpose-built installations with relays and beacons strategically placed throughout a defined space. Lights, radio waves, magnetic fields, acoustic signals, and behavioral analytics are all used in IPS networks.[2][3] IPS can achieve position accuracy of 2 cm,[4] which is on par with RTK-enabled GNSS receivers that can achieve 2 cm accuracy outdoors.[5] IPSs use different technologies, including distance measurement to nearby anchor nodes (nodes with known fixed positions, e.g. WiFi/LiFi access points, Bluetooth beacons or Ultra-Wideband beacons), magnetic positioning, and dead reckoning.[6] They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to be sensed.[7][8][9] The localized nature of an IPS has resulted in design fragmentation, with systems making use of various optical,[10] radio,[11][12][13][14][15][16][17] or even acoustic[18][19] technologies. IPS has broad applications in commercial, military, retail, and inventory tracking industries. There are several commercial systems on the market, but no standards for an IPS system. Instead, each installation is tailored to spatial dimensions, building materials, accuracy needs, and budget constraints. To smooth out stochastic (unpredictable) errors, there must be a sound method for reducing the error budget significantly. The system might include information from other systems to cope with physical ambiguity and to enable error compensation.
Detecting the device's orientation (often referred to as the compass direction in order to disambiguate it from smartphone vertical orientation) can be achieved either by detecting landmarks inside images taken in real time, or by using trilateration with beacons.[20] There also exist technologies for detecting magnetometric information inside buildings, locations with steel structures, or iron ore mines.[21] Due to the signal attenuation caused by construction materials, the satellite-based Global Positioning System (GPS) loses significant power indoors, affecting the coverage (by at least four satellites) required for receivers. In addition, multiple reflections at surfaces cause multi-path propagation, producing uncontrollable errors. These same effects degrade all known solutions for indoor locating that use electromagnetic waves from indoor transmitters to indoor receivers. A bundle of physical and mathematical methods are applied to compensate for these problems. A promising direction for radio-frequency positioning error correction is the use of alternative sources of navigational information, such as an inertial measurement unit (IMU), monocular-camera simultaneous localization and mapping (SLAM) and WiFi SLAM. Integration of data from various navigation systems with different physical principles can increase the accuracy and robustness of the overall solution.[22] The U.S. Global Positioning System (GPS) and other similar global navigation satellite systems (GNSS) are generally not suitable to establish indoor locations, since microwaves will be attenuated and scattered by roofs, walls and other objects.
However, in order to make positioning signals ubiquitous, integration between GPS and indoor positioning can be made.[23][24][25][26][27][28][29][30] Currently, GNSS receivers are becoming more and more sensitive due to increasing microchip processing power. High-sensitivity GNSS receivers are able to receive satellite signals in most indoor environments, and attempts to determine the 3D position indoors have been successful.[31] Besides increasing the sensitivity of the receivers, the technique of A-GPS is used, where the almanac and other information are transferred through a mobile phone. However, despite the fact that proper coverage by the four satellites required to locate a receiver is not achieved with all current designs (2008–11) for indoor operations, GPS emulation has been deployed successfully in the Stockholm metro.[32] GPS coverage extension solutions have been able to provide zone-based positioning indoors, accessible with standard GPS chipsets like the ones used in smartphones.[32] While most current IPS are able to detect the location of an object, they are so coarse that they cannot be used to detect the orientation or direction of an object.[33] One method of achieving sufficient operational suitability is "tracking", in which a sequence of determined locations forms a trajectory from the first to the most recent location. Statistical methods then smooth the locations determined along the track, consistent with the physical ability of the object to move. This smoothing must be applied both when a target moves and when it is stationary, to compensate for erratic measurements; otherwise the single stationary location, or even the followed trajectory, would consist of an erratic sequence of jumps. In most applications the population of targets is larger than just one. Hence the IPS must provide a specific identification for each observed target and must be capable of segregating and separating the targets individually within the group.
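The track smoothing described above can be illustrated with a minimal exponentially weighted moving average over successive position fixes. Real systems typically use Kalman or particle filters; the function name and the smoothing factor here are arbitrary illustrative choices:

```python
def smooth_track(fixes, alpha=0.3):
    """Exponentially weighted moving average over (x, y) position fixes.

    alpha is an illustrative smoothing factor: lower values damp the
    erratic jumps of raw indoor fixes more strongly, at the cost of
    lagging behind a genuinely moving target.
    """
    if not fixes:
        return []
    sx, sy = fixes[0]
    out = [(sx, sy)]
    for x, y in fixes[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        out.append((sx, sy))
    return out

# A stationary target whose raw fixes jump around the true point (5, 5):
raw = [(5, 5), (7, 4), (3, 6), (6, 7), (4, 4)]
print(smooth_track(raw)[-1])  # stays close to (5, 5)
```

This is the simplest case of the statistical smoothing the text describes; it turns an itinerant sequence of jumps for a stationary target into a slowly varying estimate.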
An IPS must be able to identify the entities being tracked, despite the "non-interesting" neighbors. Depending on the design, either a sensor network must know from which tag it has received information, or a locating device must be able to identify the targets directly. Any wireless technology can be used for locating, and many different systems take advantage of existing wireless infrastructure for indoor positioning. There are three primary system topology options for hardware and software configuration: network-based, terminal-based, and terminal-assisted. Positioning accuracy can be increased at the expense of wireless infrastructure equipment and installations. A Wi-Fi positioning system (WPS) is used where GPS is inadequate. The localization technique used for positioning with wireless access points is based on measuring the intensity of the received signal (received signal strength, RSS) and the method of "fingerprinting".[34][35][36][37] To increase the accuracy of fingerprinting methods, statistical post-processing techniques (like Gaussian process theory) can be applied to transform a discrete set of "fingerprints" into a continuous distribution of RSSI of each access point over the entire location.[38][39][40] Typical parameters useful to geolocate the Wi-Fi hotspot or wireless access point include the SSID and the MAC address of the access point. The accuracy depends on the number of positions that have been entered into the database. Signal fluctuations can increase errors and inaccuracies in the path of the user.[41][42] Originally, Bluetooth was concerned with proximity, not exact location.[43] Bluetooth was not intended to offer a pinned location like GPS; however, it is known as a geo-fence or micro-fence solution, which makes it an indoor proximity solution, not an indoor positioning solution.
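The Wi-Fi fingerprinting approach described above can be sketched as a nearest-neighbour match in signal space: surveyed positions are stored with their RSSI vectors, and a live reading is matched against them. All positions, access points and signal values below are invented for illustration:

```python
import math

# Illustrative fingerprint database: surveyed (x, y) positions mapped to
# RSSI readings (dBm) from three access points. All values are made up.
FINGERPRINTS = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -50, -75],
    (0.0, 5.0): [-60, -72, -50],
    (5.0, 5.0): [-70, -55, -45],
}

def locate(rssi, k=2):
    """Estimate position as the centroid of the k nearest fingerprints,
    ranked by Euclidean distance between signal-strength vectors."""
    ranked = sorted(
        FINGERPRINTS.items(),
        key=lambda item: math.dist(item[1], rssi),
    )
    nearest = [pos for pos, _ in ranked[:k]]
    return (
        sum(x for x, _ in nearest) / k,
        sum(y for _, y in nearest) / k,
    )

# A reading resembling the two fingerprints along the x-axis:
print(locate([-45, -60, -78]))  # → (2.5, 0.0)
```

As the text notes, the estimate is only as good as the survey density, and any significant change to the environment invalidates the stored fingerprints.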
Micromapping and indoor mapping[44] have been linked to Bluetooth[45] and to the Bluetooth LE-based iBeacon promoted by Apple Inc. Large-scale indoor positioning systems based on iBeacons have been implemented and applied in practice.[46][47] Bluetooth speaker positions and home networks can be used for broad reference. In 2021, Apple released their AirTags, which combine Bluetooth and UWB technology to track Apple devices through the Find My network, causing a surge of popularity for tracking technology. A simple concept is location indexing and presence reporting for tagged objects, using known sensor identification only.[16] This is usually the case with passive radio-frequency identification (RFID) / NFC systems, which do not report the signal strengths and various distances of single tags or of a bulk of tags, and do not update any previously known location coordinates of the sensor or current location of any tags. Operability of such approaches requires a narrow passage to prevent tags from passing by out of range. Instead of long-range measurement, a dense network of low-range receivers may be arranged, e.g. in a grid pattern for economy, throughout the space being observed. Due to the low range, a tagged entity will be identified by only a few close, networked receivers. An identified tag must be within range of the identifying reader, allowing a rough approximation of the tag location. Advanced systems combine visual coverage from a camera grid with the wireless coverage for the rough location. Most systems use a continuous physical measurement (such as angle and distance, or distance only) along with the identification data in one combined signal. The reach of these sensors mostly covers an entire floor, an aisle or just a single room. Short-reach solutions are applied with multiple sensors and overlapping reach. Angle of arrival (AoA) is the angle from which a signal arrives at a receiver.
AoA is usually determined by measuring the time difference of arrival (TDOA) between multiple antennas in a sensor array. In other receivers, it is determined by an array of highly directional sensors; the angle can be determined by which sensor received the signal. AoA is usually used with triangulation and a known baseline to find the location relative to two anchor transmitters. Time of arrival (ToA, also time of flight) is the amount of time a signal takes to propagate from transmitter to receiver. Because the signal propagation rate is constant and known (ignoring differences in mediums), the travel time of a signal can be used to directly calculate distance. Multiple measurements can be combined with trilateration and multilateration to find a location. This is the technique used by GPS and Ultra-Wideband systems. Systems which use ToA generally require a complicated synchronization mechanism to maintain a reliable source of time for sensors (though this can be avoided in carefully designed systems by using repeaters to establish coupling[17]). The accuracy of ToA-based methods often suffers from massive multipath conditions in indoor localization, caused by the reflection and diffraction of the RF signal from objects (e.g., interior walls, doors or furniture) in the environment. However, it is possible to reduce the effect of multipath by applying temporal or spatial sparsity based techniques.[48][49] Joint estimation of angles and times of arrival is another method of estimating the location of the user.
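The time-of-arrival ranging and trilateration described above can be sketched as follows. The anchor layout and function names are illustrative, and the exact three-circle solve assumes noise-free ranges; real systems fit noisy ranges by least squares instead:

```python
import math

C = 299_792_458.0  # signal propagation speed in free space, m/s

def toa_to_distance(seconds):
    """Convert a measured one-way time of flight to a range in metres."""
    return C * seconds

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for the (x, y) point at distances r1, r2, r3 from three
    known anchors, by subtracting circle equations to obtain two
    linear equations in x and y."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors at three corners of a 10 m square; the true position is (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(a, (3, 4)) for a in anchors]
print(trilaterate_2d(anchors[0], ranges[0],
                     anchors[1], ranges[1],
                     anchors[2], ranges[2]))  # ≈ (3.0, 4.0)
```

Subtracting pairs of circle equations cancels the quadratic terms, which is why three anchors suffice for a unique 2-D fix.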
Indeed, instead of requiring multiple access points and techniques such as triangulation and trilateration, a single access point can locate a user with combined angles and times of arrival.[50] Moreover, techniques that leverage both space and time dimensions can increase the degrees of freedom of the whole system and create more virtual resources to resolve more sources, via subspace approaches.[51] Received signal strength indication (RSSI) is a measurement of the power level received by a sensor. Because radio waves propagate according to the inverse-square law, distance can be approximated (typically to within 1.5 meters in ideal conditions and 2 to 4 meters in standard conditions[52]) based on the relationship between transmitted and received signal strength (the transmission strength is a constant based on the equipment being used), as long as no other errors contribute to faulty results. The inside of buildings is not free space, so accuracy is significantly impacted by reflection and absorption from walls. Non-stationary objects such as doors, furniture, and people can pose an even greater problem, as they can affect the signal strength in dynamic, unpredictable ways. Many systems use enhanced Wi-Fi infrastructure to provide location information.[12][14][15] None of these systems works properly with arbitrary existing infrastructure as-is. Unfortunately, Wi-Fi signal strength measurements are extremely noisy, so there is ongoing research focused on making more accurate systems. Non-radio technologies can be used for positioning without using the existing wireless infrastructure. This can provide increased accuracy at the expense of costly equipment and installations. Magnetic positioning can offer pedestrians with smartphones an indoor accuracy of 1–2 meters with a 90% confidence level, without using the additional wireless infrastructure for positioning.
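The signal-strength-to-distance relationship described above is commonly modelled with the log-distance path-loss form, which reduces to the inverse-square law in free space. The reference power at 1 m and the path-loss exponent below are illustrative assumptions, not measured values:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model to estimate range in metres.

    The model is RSSI(d) = RSSI(1 m) - 10 * n * log10(d), so
    d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).

    A path-loss exponent n of 2 corresponds to free space; indoors it is
    typically higher, and reflections and absorption make the estimate
    noisy, as the text explains.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

print(rssi_to_distance(-40.0))  # 1.0 m at the reference power
print(rssi_to_distance(-60.0))  # 10.0 m under the free-space exponent
```

Every 20 dB of additional loss multiplies the estimated free-space distance by ten, which is why small RSSI fluctuations translate into large ranging errors.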
Magnetic positioning is based on the iron inside buildings, which creates local variations in the Earth's magnetic field. Un-optimized compass chips inside smartphones can sense and record these magnetic variations to map indoor locations.[55] Pedestrian dead reckoning and other approaches for positioning of pedestrians propose an inertial measurement unit carried by the pedestrian, either measuring steps indirectly (step counting) or in a foot-mounted approach,[56] sometimes referring to maps or other additional sensors to constrain the inherent sensor drift encountered with inertial navigation. MEMS inertial sensors suffer from internal noises which result in cubically growing position error with time. To reduce the error growth in such devices, a Kalman filtering based approach is often used.[57][58][59][60] To make the system capable of building a map itself, a SLAM algorithm framework[61] can be used.[62][63][64] Inertial measurements generally capture the differentials of motion, so the location is determined by integration and thus requires integration constants to provide results.[65][66] The actual position estimate can be found as the maximum of a 2-D probability distribution which is recomputed at each step, taking into account the noise model of all the sensors involved and the constraints posed by walls and furniture.[67] Based on the motions and users' walking behaviors, an IPS is able to estimate users' locations by machine learning algorithms.[68] A visual positioning system can determine the location of a camera-enabled mobile device by decoding location coordinates from visual markers. In such a system, markers are placed at specific locations throughout a venue, each marker encoding that location's coordinates: latitude, longitude and height off the floor. Measuring the visual angle from the device to the marker enables the device to estimate its own location coordinates in reference to the marker.
Coordinates include latitude, longitude, level and altitude off the floor.[69][70] As visual markers are usually not symmetric, the orientation of the user can also be determined.[71] A collection of successive snapshots from a mobile device's camera can build a database of images that is suitable for estimating location in a venue. Once the database is built, a mobile device moving through the venue can take snapshots that can be interpolated into the venue's database, yielding location coordinates. These coordinates can be used in conjunction with other location techniques for higher accuracy. Note that this can be a special case of sensor fusion, where a camera plays the role of yet another sensor. Once sensor data has been collected, an IPS tries to determine the location from which the received transmission was most likely collected. The data from a single sensor is generally ambiguous and must be resolved by a series of statistical procedures that combine several sensor input streams. One way to determine position is to match the data from the unknown location against a large set of known locations using an algorithm such as k-nearest neighbor. This technique requires a comprehensive on-site survey and will be inaccurate with any significant change in the environment (due to moving persons or moved objects). Alternatively, the location can be calculated mathematically by approximating signal propagation and finding angles and/or distances, with inverse trigonometry then used to determine the location. Advanced systems combine more accurate physical models with statistical procedures. The major consumer benefit of indoor positioning is the expansion of location-aware mobile computing indoors. As mobile devices become ubiquitous, contextual awareness for applications has become a priority for developers. Most applications currently rely on GPS, however, and function poorly indoors. Many kinds of application benefit from indoor location.
https://en.wikipedia.org/wiki/Indoor_positioning
The International Mobile Equipment Identity (IMEI)[1] is a numeric identifier, usually unique,[2][3] for 3GPP and iDEN mobile phones, as well as some satellite phones. It is usually found printed inside the battery compartment of the phone, but can also be displayed on-screen on most phones by entering the MMI Supplementary Service code *#06# on the dialpad, or alongside other system information in the settings menu on smartphone operating systems. GSM networks use the IMEI number to identify valid devices, and can stop a stolen phone from accessing the network. For example, if a mobile phone is stolen, the owner can have their network provider use the IMEI number to blocklist the phone. This renders the phone useless on that network, and sometimes on other networks, even if the thief changes the phone's SIM card. Devices without a SIM card slot or eSIM capability usually do not have an IMEI, except for certain early Sprint LTE devices, such as the Samsung Galaxy Nexus and S III, which emulated a SIM-free CDMA activation experience and lacked roaming capabilities in 3GPP-only countries.[4] However, the IMEI only identifies the device and has no particular relationship to the subscriber. The phone identifies the subscriber by transmitting the International Mobile Subscriber Identity (IMSI) number, which is stored on a SIM card that can, in theory, be transferred to any handset. However, the network's ability to know a subscriber's current, individual device enables many network and security features.[citation needed] Dual-SIM phones will normally have two IMEI numbers, except for devices such as the Pixel 3 (which has an eSIM and one physical SIM) which only allow one SIM card to be active at once. Many countries have acknowledged the use of the IMEI in reducing the effect of mobile phone thefts.
For example, in the United Kingdom, under the Mobile Telephones (Re-programming) Act, changing the IMEI of a phone, or possessing equipment that can change it, is considered an offence under some circumstances.[5][6] A bill was introduced in the United States by Senator Chuck Schumer in 2012 that would have made the changing of an IMEI illegal, but the bill was not enacted.[7] IMEI blocking is not the only way to fight phone theft. Instead, mobile operators are encouraged to take measures such as the immediate suspension of service and replacement of SIM cards in case of loss or theft.[8] The existence of a formally allocated IMEI number range for a GSM terminal does not mean that the terminal is approved or complies with regulatory requirements. The linkage between regulatory approval and IMEI allocation was removed in April 2000, with the introduction of the European R&TTE Directive.[9] Since that date, IMEIs have been allocated by BABT (or one of several other regional administrators acting on behalf of the GSM Association) to legitimate GSM terminal manufacturers without the need to provide evidence of approval. When someone has their mobile equipment stolen or lost, they can ask their service provider to block the phone from their network, and the operator may do so, especially if required by law. If the local operator maintains an Equipment Identity Register (EIR), it adds the device IMEI to it. Optionally, it also adds the IMEI to shared registries, such as the Central Equipment Identity Register (CEIR), which blocklists the device with other operators that use the CEIR. This blocklisting makes the device unusable on any operator that uses the CEIR, which makes mobile equipment theft pointless, except for parts. To make blocklisting effective, the IMEI number is supposed to be difficult to change.
However, a phone's IMEI may be easy to change with special tools.[10][better source needed] In addition, the IMEI is an un-authenticated mobile identifier (as opposed to the IMSI, which is routinely authenticated by home and serving mobile networks). Using a spoofed IMEI can thwart some efforts to track handsets, or to target handsets for lawful intercept.[citation needed] Australia was the first nation to implement IMEI blocking across all GSM networks, in 2003.[11] In Australia, the Electronic Information Exchange (EIE) Administration Node provides a blocked IMEI lookup service for Australian customers.[12] In the UK, a voluntary charter operated by the mobile networks ensures that any operator's blocklisting of a handset is communicated to the CEIR and subsequently to all other networks. This ensures that the handset quickly becomes unusable for calls, at most within 48 hours. Some UK police forces, including the Metropolitan Police Service, actively check the IMEI numbers of phones found involved in crime. In New Zealand, the NZ Telecommunications Forum Inc[13] provides a blocked IMEI lookup service for New Zealand consumers. The service allows up to three lookups per day[14] and checks against a database that is updated daily by the three major mobile network operators. A blocked IMEI cannot be connected to any of these three operators. In Latvia, the SIA "Datorikas institūts DIVI"[15] provides a blocked IMEI lookup service for checks against a database that is updated by all major mobile network operators in Latvia. In some countries, such blocklisting is not customary. In 2012, major network companies in the United States, under government pressure, committed to introducing a blocklisting service, but it is not clear whether it will interoperate with the CEIR.[16][17] GSM carriers AT&T and T-Mobile began blocking newly reported IMEIs in November 2012.[18] Thefts reported prior to November 2012 were not added to the database.
The CTIA refers users to websites at www.stolenphonechecker.org[19] and the GSMA[19] where consumers can check whether a smartphone has been reported as lost or stolen to its member carriers. The relationship between the former and any national or international IMEI blocklists is unclear.[19] It is unclear whether local barring of an IMEI has any positive effect, as it may result in international smuggling of stolen phones.[20] IMEIs can sometimes be removed from a blocklist, depending on local arrangements. This would typically include quoting a password chosen at the time of blocklisting.[citation needed] Law enforcement and intelligence services can use an IMEI number as input for tracking devices that are able to locate a cell phone with an accuracy of a few meters. Saudi Arabian government agencies have reportedly used IMEI numbers retrieved from cell phone packaging to locate and detain women who fled Saudi Arabia's patriarchal society in other countries.[21] An IMEI number retrieved from the remnants of a Nokia 5110 was used to trace and identify the perpetrators behind the 2002 Bali bombings.[22] Some countries use allowlists instead of blocklists for IMEI numbers, so that any mobile phone needs to be legally registered in the country in order to be able to access its mobile networks, with possible exceptions for international roaming and a grace period for registering.[23] These include Chile,[24] Turkey,[25] Azerbaijan,[26] Colombia,[27] and Nepal.[28] Other countries that have adopted some form of mandatory IMEI registration include India, Pakistan, Indonesia, Cambodia, Thailand, Iran, Nigeria, Ecuador, Ukraine, Lebanon,[29] and Kenya.[30] Prior to their merger with T-Mobile, Sprint in the United States used an allowlist of devices where a user had to register their IMEI and SIM card before an LTE-capable device could be used, despite no US law mandating it.[31] If a user changed their device, they had to register their new IMEI and SIM card.
This is not the case with other CDMA carriers like Verizon, which only used allowlists for 3G (a requirement of CDMA); T-Mobile does not use an allowlist but instead a blocklist, including for former Sprint customers. AT&T[32] and Telus[33] also use an allowlist for VoLTE access, but do not require IMEI registration by customers. Instead, phone manufacturers are required to register their devices in AT&T's or Telus' databases, and customers are able to freely swap SIM cards or eSIMs into any allowlisted device. This has the problem that imported phones, and some non-imported phones such as older OnePlus models or select CDMA-capable LTE devices (including models sold on Verizon or Sprint), will not work for voice calls even if they have the LTE/5G bands for AT&T and Telus and support VoLTE on competitors or via VoLTE roaming. The IMEI (15 decimal digits: 14 digits plus a check digit) or IMEISV (16 decimal digits: 14 digits plus two software version digits) includes information on the origin, model, and serial number of the device. The structure of the IMEI/SV is specified in 3GPP TS 23.003. The model and origin comprise the initial 8-digit portion of the IMEI/SV, known as the Type Allocation Code (TAC). The remainder of the IMEI is manufacturer-defined, with a Luhn check digit at the end. For the IMEI format prior to 2003, the GSMA guideline was to have this check digit always transmitted to the network as zero. This guideline seems to have disappeared for the format valid from 2003 onwards.[34] As of 2004, the format of the IMEI is AA-BBBBBB-CCCCCC-D, although it may not always be displayed this way. The IMEISV does not have the Luhn check digit but instead has two digits for the Software Version Number (SVN), making the format AA-BBBBBB-CCCCCC-EE. Prior to 2002, the TAC was six digits, followed by a two-digit Final Assembly Code (FAC), a manufacturer-specific code indicating the location of the device's construction.
From January 1, 2003 until April 1, 2004, the FAC for all phones was 00. After April 1, 2004, the Final Assembly Code ceased to exist and the Type Allocation Code increased to eight digits in length. In any of the above cases, the first two digits of the TAC are the Reporting Body Identifier, which identifies the GSMA-approved group that allocated the TAC. The RBI numbers are allocated by the Global Decimal Administrator. IMEI numbers being decimal helps distinguish them from an MEID, which is hexadecimal and always has 0xA0 or larger as the first two hexadecimal digits. For example, the old-style IMEI code 35-209900-176148-1 or IMEISV code 35-209900-176148-23 tells us the following:

- TAC: 35-2099 - issued by the BABT (code 35) with the allocation number 2099
- FAC: 00 - indicating the phone was made during the transition period when FACs were being removed
- SNR: 176148 - uniquely identifying a unit of this model
- CD: 1 - so it is a GSM Phase 2 or higher
- SVN: 23 - the software version number identifying the revision of the software installed on the phone (99 is reserved)

By contrast, the new-style IMEI code 49-015420-323751-8 has an 8-digit TAC of 49-015420. The CDMA Mobile Equipment Identifier uses the same basic format as the IMEI but gives more flexibility in allocation sizes and usage. The last number of the IMEI is a check digit, calculated using the Luhn algorithm, as defined in the IMEI Allocation and Approval Guidelines: The Check Digit shall be calculated according to the Luhn formula (ISO/IEC 7812). (See GSM 02.16 / 3GPP 22.016). The Check Digit is a function of all other digits in the IMEI. The Software Version Number (SVN) of a mobile is not included in the calculation. The purpose of the Check Digit is to help guard against the possibility of incorrect entries to the CEIR and EIR equipment. The presentation of the Check Digit both electronically and in printed form on the label and packaging is very important.
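The post-2004 field layout described above (AA-BBBBBB-CCCCCC-D) can be sketched in a few lines of Python. The function and field names here are illustrative only, not part of any standard or official tool:

```python
from typing import NamedTuple

class IMEIFields(NamedTuple):
    tac: str  # Type Allocation Code (8 digits in the post-April-2004 format)
    snr: str  # Serial Number (6 manufacturer-defined digits)
    cd: str   # Luhn check digit
    rbi: str  # Reporting Body Identifier (first 2 digits of the TAC)

def split_imei(imei: str) -> IMEIFields:
    """Split a 15-digit IMEI string into its constituent fields."""
    if len(imei) != 15 or not imei.isdigit():
        raise ValueError("an IMEI is 15 decimal digits")
    return IMEIFields(tac=imei[:8], snr=imei[8:14], cd=imei[14], rbi=imei[:2])
```

For the new-style example above, `split_imei("490154203237518")` yields the TAC "49015420", the serial number "323751", and the check digit "8".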
Logistics (using bar-code readers) and EIR/CEIR administration cannot use the Check Digit unless it is printed outside of the packaging, and on the ME IMEI/Type Accreditation label. The check digit is not transmitted over the radio interface, nor is it stored in the EIR database at any point. Therefore, all references to the last three or six digits of an IMEI refer to the actual IMEI number, to which the check digit does not belong. The check digit is validated in three steps: double every second digit of the IMEI (counting from either end, since the IMEI has an odd number of digits); sum all the resulting digits, treating two-digit products as the sum of their digits; the IMEI is valid if the total is divisible by 10. Conversely, one can calculate the IMEI by choosing the check digit that would give a sum divisible by 10. For the example IMEI 49015420323751?, doubling every second digit and summing as above gives 52, so to make the sum divisible by 10 we set x = 8, and the complete IMEI becomes 490154203237518. IMEI validation[35] is the process of verifying the authenticity and integrity of a mobile device's 15-digit IMEI number, ensuring it conforms to global registry standards and has not been tampered with. An IMEI consists of four parts: the Type Allocation Code (TAC), which identifies the device model; the Final Assembly Code (FAC), denoting the manufacturing site; the Serial Number (SNR), unique to each unit; and the Check Digit, calculated via the Luhn algorithm. During validation, the first 14 digits are processed through the Luhn checksum procedure and compared to the Check Digit; any mismatch indicates an invalid or forged IMEI. Widely employed by GSM network operators, regulatory agencies, and anti-theft platforms, IMEI validation helps combat device cloning, unauthorized resale, and mobile phone theft, while maintaining network security and consumer trust. The Broadband Global Area Network (BGAN), Iridium and Thuraya satellite phone networks all use IMEI numbers on their transceiver units as well as SIM cards in much the same way as GSM phones do. The Iridium 9601 modem relies solely on its IMEI number for identification and uses no SIM card; however, Iridium is a proprietary network and the device is incompatible with terrestrial GSM networks.
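The check-digit arithmetic above can be reproduced with the standard Luhn computation. This is a sketch of that well-known algorithm, not any official GSMA tool, and the function names are this sketch's own:

```python
def luhn_check_digit(body14: str) -> int:
    """Compute the Luhn check digit for the first 14 digits of an IMEI."""
    total = 0
    # Walking from the digit nearest the check digit, double every second digit
    for i, ch in enumerate(reversed(body14)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:       # treat two-digit products as the sum of their digits
                d -= 9
        total += d
    return (10 - total % 10) % 10

def validate_imei(imei: str) -> bool:
    """True if a 15-digit IMEI's last digit matches its Luhn check digit."""
    return (len(imei) == 15 and imei.isdigit()
            and luhn_check_digit(imei[:14]) == int(imei[14]))
```

For the worked example, `luhn_check_digit("49015420323751")` returns 8, and `validate_imei("490154203237518")` returns True.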
https://en.wikipedia.org/wiki/IMEI_number
A positioning system is a system for determining the position of an object in space.[1] Positioning system technologies exist ranging from interplanetary coverage with meter accuracy to workspace and laboratory coverage with sub-millimeter accuracy. A major subclass is made of geopositioning systems, used for determining an object's position with respect to Earth, i.e., its geographical position; one of the most well-known and commonly used geopositioning systems is the Global Positioning System (GPS) and similar global navigation satellite systems (GNSS). Interplanetary-radio communication systems not only communicate with spacecraft, but are also used to determine their position. Radar can track targets near the Earth, but spacecraft in deep space must have a working transponder on board to echo a radio signal back. Orientation information can be obtained using star trackers. Global navigation satellite systems (GNSS) allow specialized radio receivers to determine their 3-D space position, as well as time, with an accuracy of 2–20 metres or tens of nanoseconds. Currently deployed systems use microwave signals that can only be received reliably outdoors and that cover most of Earth's surface, as well as near-Earth space. Networks of land-based positioning transmitters allow specialized radio receivers to determine their 2-D position on the surface of the Earth. They are generally less accurate than GNSS because their signals are not entirely restricted to line-of-sight propagation, and they have only regional coverage. However, they remain useful for special purposes and as a backup where their signals are more reliably received, including underground and indoors, and receivers can be built that consume very low battery power. LORAN is an example of such a system.
A local positioning system (LPS) is a navigation system that provides location information in all weather, anywhere within the coverage of the network, where there is an unobstructed line of sight to three or more signaling beacons whose exact positions on Earth are known.[2][3][4][5] Unlike GPS or other global navigation satellite systems, local positioning systems do not provide global coverage. Instead, they use beacons which have a limited range, requiring the user to be near them. Beacons include cellular base stations, Wi-Fi and LiFi access points, and radio broadcast towers. In the past, long-range LPSs have been used for navigation of ships and aircraft. Examples are the Decca Navigator System and LORAN. Nowadays, local positioning systems are often used as a complementary (and in some cases alternative) positioning technology to GPS, especially in areas where GPS does not reach or is weak, for example inside buildings or in urban canyons. Local positioning using cellular and broadcast towers can be used on cell phones that do not have a GPS receiver. Even if the phone has a GPS receiver, battery life will be extended if cell tower location accuracy is sufficient. They are also used in trackless amusement rides like Pooh's Hunny Hunt and Mystic Manor. Indoor positioning systems are optimized for use within individual rooms, buildings, or construction sites. They typically offer centimeter accuracy. Some provide 6-D location and orientation information. Other systems are designed to cover only a restricted workspace, typically a few cubic meters, but can offer accuracy in the millimeter range or better. They typically provide 6-D position and orientation. Example applications include virtual reality environments, alignment tools for computer-assisted surgery or radiology, and cinematography (motion capture, match moving).
Examples: Wii Remote with Sensor Bar, Polhemus Tracker, Precision Motion Tracking Solutions InterSense.[6] A high-performance positioning system is used in manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration, high deceleration, high velocity and low settling time. It is designed to quickly stop its motion and accurately place the moving object at its desired final position and orientation with minimal jittering. Examples: high-velocity machine tools, laser scanning, wire bonding, printed circuit board inspection, lab automation assaying, flight simulators. Multiple technologies exist to determine the position and orientation of an object or person in a room, building or in the world. Time-of-flight systems determine the distance by measuring the time of propagation of pulsed signals between a transmitter and receiver. When the distances from at least three locations are known, a fourth position can be determined using trilateration. The Global Positioning System is an example. Optical trackers, such as laser ranging trackers, suffer from line-of-sight problems and their performance is adversely affected by ambient light and infrared radiation. On the other hand, they do not suffer from distortion effects in the presence of metals and can have high update rates because of the speed of light.[7] Ultrasonic trackers have a more limited range because of the loss of energy with the distance traveled. They are also sensitive to ultrasonic ambient noise and have a low update rate. Their main advantage is that they do not need line of sight. Systems using radio waves, such as global navigation satellite systems, are not affected by ambient light but still need line of sight. A spatial scan system uses (optical) beacons and sensors. Two categories can be distinguished. By aiming the sensor at the beacon, the angle between them can be measured.
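As a sketch of the trilateration step, assuming exact noise-free ranges in a 2-D plane (real systems solve a noisy 3-D version by least squares), subtracting the circle equations pairwise cancels the quadratic terms and leaves a small linear system:

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three beacon positions and measured ranges.

    Subtracting (x-xi)^2 + (y-yi)^2 = di^2 pairwise removes x^2 and y^2,
    leaving two linear equations solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("beacons are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

For beacons at (0, 0), (10, 0) and (0, 10) with ranges measured from the point (3, 4), the function recovers (3, 4).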
With triangulation the position of the object can be determined. The main advantage of inertial sensing is that it does not require an external reference. Instead, it measures rotation with a gyroscope or position with an accelerometer with respect to a known starting position and orientation. Because these systems measure relative rather than absolute positions, they can suffer from accumulated errors and are therefore subject to drift. Periodic re-calibration of the system provides more accuracy. Mechanical tracking systems use mechanical linkages between the reference and the target. Two types of linkages have been used. One is an assembly of mechanical parts that can each rotate, providing the user with multiple rotation capabilities. The orientation of the linkages is computed from the various linkage angles, measured with incremental encoders or potentiometers. The other type of mechanical linkage is wires rolled in coils; a spring system keeps the wires tensed in order to measure the distance accurately. The degrees of freedom sensed by mechanical linkage trackers depend on the constitution of the tracker's mechanical structure. While six degrees of freedom are most often provided, typically only a limited range of motion is possible because of the kinematics of the joints and the length of each link. Also, the weight and the deformation of the structure increase with the distance of the target from the reference and impose a limit on the working volume.[8] Phase difference systems measure the shift in phase of an incoming signal from an emitter on a moving target compared to the phase of an incoming signal from a reference emitter. With this, the relative motion of the emitter with respect to the receiver can be calculated.
Like inertial sensing systems, phase-difference systems can suffer from accumulated errors and are therefore subject to drift, but because the phase can be measured continuously they are able to generate high data rates. Omega (navigation system) is an example. Direct field sensing systems use a known field to derive orientation or position: a simple compass uses the Earth's magnetic field to know its orientation in two directions,[8] and an inclinometer uses the Earth's gravitational field to know its orientation in the remaining third direction. The field used for positioning does not need to originate from nature, however. A system of three electromagnets placed perpendicular to each other can define a spatial reference. On the receiver, three sensors measure the components of the field's flux received as a consequence of magnetic coupling. Based on these measurements, the system determines the position and orientation of the receiver with respect to the emitters' reference. Optical positioning systems are based on optical components, such as in total stations.[9] Magnetic positioning is an indoor positioning system (IPS) solution that takes advantage of the magnetic field anomalies typical of indoor settings by using them as distinctive place-recognition signatures. The first citation of positioning based on magnetic anomaly can be traced back to military applications in 1970.[10] The use of magnetic field anomalies for indoor positioning was first claimed in 1999,[11] with later publications related to robotics in the early 2000s.[12][13] Recent applications can employ magnetic sensor data from a smartphone to wirelessly locate objects or people inside a building.[14] Because every technology has its pros and cons, most systems use more than one technology. A system based on relative position changes, like an inertial system, needs periodic calibration against a system with absolute position measurement.
Systems combining two or more technologies are called hybrid positioning systems.[16] Hybrid positioning systems are systems for finding the location of a mobile device using several different positioning technologies. Usually GPS (Global Positioning System) is one major component of such systems, combined with cell tower signals, wireless internet signals, Bluetooth sensors, IP addresses and network environment data.[17] These systems are specifically designed to overcome the limitations of GPS, which is very exact in open areas but works poorly indoors or between tall buildings (the urban canyon effect). By comparison, cell tower signals are not hindered by buildings or bad weather, but usually provide less precise positioning. Wi-Fi positioning systems may give very exact positioning in urban areas with high Wi-Fi density, and depend on a comprehensive database of Wi-Fi access points. Hybrid positioning systems are increasingly being explored for certain civilian and commercial location-based services and location-based media, which need to work well in urban areas in order to be commercially and practically viable. Early work in this area includes the Place Lab project, which started in 2003 and went inactive in 2006. Later methods let smartphones combine the accuracy of GPS with the low power consumption of cell-ID transition point finding.[18] In 2022, the satellite-free positioning system SuperGPS, with higher resolution than GPS, was demonstrated using existing telecommunications networks.[19][20]
https://en.wikipedia.org/wiki/Local_positioning_system
Mobile dating services, also known as cell dating, cellular dating, or cell phone dating, allow individuals to chat, flirt, meet, and possibly become romantically involved by means of text messaging, mobile chatting, and the mobile web. These services allow their users to provide information about themselves in a short profile which is either stored in their phones as a dating ID or as a username on the mobile dating site. They can then search for other IDs online or by calling a certain phone number dictated by the service. The criteria include age, gender and sexual preference. Usually these sites are free to use, but standard text messaging fees may still apply, as well as a small fee the dating service charges per message. Mobile dating websites, in order to increase the opportunities for meeting, focus attention on users that share the same social network and proximity. Some companies even offer services such as homing devices to alert users when another user is within thirty feet of one another.[1] Some systems involve Bluetooth technology to connect users in locations such as bars and clubs. This is known as proximity dating. These systems are actually more popular in some countries in Europe and Asia than online dating. With the advent of GPS phones and GSM localization, proximity dating is likely to rise sharply in popularity.[needs update] According to the San Francisco Chronicle in 2005, "Mobile dating is the next big leap in online socializing."[1] More than 3.6 million cell phone users logged into mobile dating sites in March 2007,[2] with most users falling between the ages of 27 and 35. Some experts[who?] believe that the rise in mobile dating is due to the growing popularity of online dating. Others believe it is all about choice, as Joe Brennan Jr., vice president of Webdate, says, "It's about giving people a choice. They don't have to date on their computer.
They can date on their handset; it's all about letting people decide what path is best for them."[1] A study published in 2015 showed that 7.8 million singles per month in the UK were searching for a partner online, a significant increase from 2011 (6.3 million). This increase is allegedly caused by mobile dating, due to social dating services like Tinder or Badoo, which allow people to quickly make new contacts on the go.[3] The rise of mobile dating, and in particular the dating app Tinder, has changed the way people meet potential partners and date. Some believe that the proliferation of such apps has fueled modern dating behaviors.[4] Some avoid these services for fear that the technology could be used to electronically harass users.[5] Another issue is "asymmetry of interests", i.e. an attractive user receives excessive attention from other users and leaves, which may result in deterioration of membership.[6] At the 2012 iDate Mobile Dating Conference, the first ever consumer focus group for mobile dating apps unanimously reiterated the same complaints from years prior. All participants had some concerns about risk. These concerns varied between participants and included physical, emotional and sexual risks, the risk of being scammed, the risk of encountering dangerous and dodgy people, the risk of pregnancy, risks to family and the risk of lies and deceit. To counter these risks, participants undertook various activities that made use of the technological resources available to them and also assessed how others did or did not use technology.[7] An issue amplified by dating apps is a phenomenon known as "ghosting", whereby one party in a relationship cuts off all communication with the other party without warning or explanation. Ghosting poses a serious problem for dating apps, as it can lead to users deleting the apps.
For this reason, companies like Bumble and Badoo are cracking down on the practice with new features that make it easier for users to end chat conversations more politely. Entering a different era with many technological advancements, a "technosexual" era, we also enter a more "sexualized" era of dating.[8] Mobile dating began to take shape in 2003.[9] ProxiDating was one of the first dating services using Bluetooth. In 2004, Match.com, Webdate and Lavalife were the mobile dating early leaders. It wasn't until the iPhone arrived in 2007 that mobile dating took off, and 2010 was the year mobile dating became mainstream. Starting from 2012, mobile dating has been gradually overtaking online dating. Match.com and POF.com[10] now see over 40% of their log-ins coming from mobile phones. The mobile dating market is expected to grow to $1.4B by 2013.[needs update][11] 3G dating is emerging as 3G networks and video mobiles become more widespread.[needs update] The potential for one-to-one video calling offers additional safety and helps ensure members are real. In the dating market, online dating sites are adding mobile web versions and applications to phones. Some sites are offered as mobile only for phones and pads, with no access to web versions. The mobile dating apps market is estimated to be worth $2.1 billion.[12] In 2013 there was "exponential growth" of dating websites creating apps and dating apps being used through a mobile device.[13] Tinder has been competing strongly in this market: as of October 2014, the app had more than fifty million users and was valued anywhere from $750 million to $1 billion.[14]
https://en.wikipedia.org/wiki/Mobile_dating
Mobile device forensics is a branch of digital forensics relating to recovery of digital evidence or data from a mobile device under forensically sound conditions. The phrase mobile device usually refers to mobile phones; however, it can also relate to any digital device that has both internal memory and communication ability, including PDA devices, GPS devices and tablet computers. Mobile devices can be used to save several types of personal information such as contacts, photos, calendars and notes, and SMS and MMS messages. Smartphones may additionally contain video, email, web browsing information, location information, and social networking messages and contacts. There is a growing need for mobile forensics for several reasons. Mobile device forensics can be particularly challenging on a number of levels,[2] and both evidential and technical challenges exist. For example, cell site analysis, which follows from the use of mobile phone network coverage, is not an exact science. Consequently, whilst it is possible to determine roughly the cell site zone from which a call was made or received, it is not yet possible to say with any degree of certainty that a mobile phone call emanated from a specific location, e.g. a residential address. As a result of these challenges, a wide variety of tools exist to extract evidence from mobile devices; no one tool or method can acquire all the evidence from all devices. It is therefore recommended that forensic examiners, especially those wishing to qualify as expert witnesses in court, undergo extensive training in order to understand how each tool and method acquires evidence; how it maintains standards for forensic soundness; and how it meets legal requirements such as the Daubert standard or Frye standard. As a field of study, forensic examination of mobile devices dates from the late 1990s and early 2000s. The role of mobile phones in crime had long been recognized by law enforcement.
With the increased availability of such devices on the consumer market and the wider array of communication platforms they support (e.g. email, web browsing), demand for forensic examination grew.[4] Early efforts to examine mobile devices used similar techniques to the first computer forensics investigations: analyzing phone contents directly via the screen and photographing important content.[4] However, this proved to be a time-consuming process, and as the number of mobile devices began to increase, investigators called for more efficient means of extracting data. Enterprising mobile forensic examiners sometimes used cell phone or PDA synchronization software to "back up" device data to a forensic computer for imaging, or sometimes simply performed computer forensics on the hard drive of a suspect computer where data had been synchronized. However, this type of software could write to the phone as well as read from it, and could not retrieve deleted data.[5] Some forensic examiners found that they could retrieve even deleted data using "flasher" or "twister" boxes, tools developed by OEMs to "flash" a phone's memory for debugging or updating. However, flasher boxes are invasive and can change data; can be complicated to use; and, because they are not developed as forensic tools, provide neither hash verification nor (in most cases) audit trails.[6] For physical forensic examinations, therefore, better alternatives remained necessary. To meet these demands, commercial tools appeared which allowed examiners to recover phone memory with minimal disruption and analyze it separately.[4] Over time these commercial techniques have developed further and the recovery of deleted data from proprietary mobile devices has become possible with some specialist tools.
Moreover, commercial tools have even automated much of the extraction process, making it possible even for minimally trained first responders (who currently are much more likely to encounter suspects with mobile devices in their possession than with computers) to perform basic extractions for triage and data preview purposes. Mobile device forensics is best known for its application to law enforcement investigations, but it is also useful for military intelligence, corporate investigations, private investigations, criminal and civil defense, and electronic discovery. As mobile device technology advances, the amount and types of data that can be found on a mobile device are constantly increasing. Evidence that can potentially be recovered from a mobile phone may come from several different sources, including handset memory, the SIM card, and attached memory cards such as SD cards. Traditionally, mobile phone forensics has been associated with recovering SMS and MMS messaging, as well as call logs, contact lists and phone IMEI/ESN information. However, newer generations of smartphones also include wider varieties of information: web browsing data, wireless network settings, geolocation information (including geotags contained within image metadata), email and other forms of rich internet media, including important data (such as social networking service posts and contacts) now retained in smartphone apps.[7] Nowadays, mobile devices mostly use flash memory of the NAND or NOR type.[8] External memory devices are SIM cards, SD cards (commonly found within GPS devices as well as mobile phones), MMC cards, CF cards, and the Memory Stick. Although not technically part of mobile device forensics, the call detail records (and occasionally, text messages) from wireless carriers often serve as "back up" evidence obtained after the mobile phone has been seized. These are useful when the call history and/or text messages have been deleted from the phone, or when location-based services are not turned on.
Call detail records and cell site (tower) dumps can show the phone owner's location, and whether they were stationary or moving (i.e., whether the phone's signal bounced off the same side of a single tower, or different sides of multiple towers along a particular path of travel).[9] Carrier data and device data together can be used to corroborate information from other sources, for instance video surveillance footage or eyewitness accounts, or to determine the general location where a non-geotagged image or video was taken. The European Union requires its member countries to retain certain telecommunications data for use in investigations. This includes data on calls made and retrieved. The location of a mobile phone can be determined, and this geographical data must also be retained. In the United States, however, no such requirement exists, and no standards govern how long carriers should retain data or even what they must retain. For example, text messages may be retained for only a week or two, while call logs may be retained anywhere from a few weeks to several months. To reduce the risk of evidence being lost, law enforcement agents must submit a preservation letter to the carrier, which they then must back up with a search warrant.[9] The forensics process for mobile devices broadly matches that of other branches of digital forensics; however, some particular concerns apply. Generally, the process can be broken down into three main categories: seizure, acquisition, and examination/analysis. Other aspects of the computer forensic process, such as intake, validation, documentation/reporting, and archiving, still apply.[2] Seizing mobile devices is covered by the same legal considerations as other digital media.
Mobiles will often be recovered switched on; as the aim of seizure is to preserve evidence, the device will often be transported in the same state to avoid a shutdown, which would change files.[10] In addition, the investigator or first responder would risk user lock activation. However, leaving the phone on carries another risk: the device can still make a network/cellular connection. This may bring in new data, overwriting evidence. To prevent a connection, mobile devices will often be transported and examined from within a Faraday cage (or bag). Even so, there are two disadvantages to this method. First, most bags render the device unusable, as its touch screen or keypad cannot be used. However, special cages can be acquired that allow the use of the device through see-through glass and special gloves. The advantage of this option is the ability to also connect to other forensic equipment while blocking the network connection, as well as charging the device. If this option is not available, network isolation is advisable, either by placing the device in airplane mode or by cloning its SIM card (a technique which can also be useful when the device is missing its SIM card entirely).[2] Note that while this technique can prevent triggering a remote wipe (or tampering) of the device, it does nothing against a local dead man's switch. The second step in the forensic process is acquisition, in this case usually referring to retrieval of material from a device (as compared to the bit-copy imaging used in computer forensics).[10] Due to the proprietary nature of mobiles, it is often not possible to acquire data with the device powered down; most mobile device acquisition is performed live. With more advanced smartphones using advanced memory management, connecting the device to a recharger and putting it into a Faraday cage may not be good practice.
The mobile device would recognize the network disconnection and therefore change its status information, which can trigger the memory manager to write data.[11] Most acquisition tools for mobile devices are commercial in nature and consist of a hardware and a software component, often automated. As an increasing number of mobile devices use high-level file systems similar to the file systems of computers, methods and tools can be taken over from hard disk forensics or need only slight changes.[12] The FAT file system is generally used on NAND memory.[13] A difference is the block size used, which is larger than the 512 bytes used for hard disks and depends on the memory type, e.g., NOR type 64, 128 or 256 kilobytes and NAND memory 16, 128, 256, or 512 kilobytes. Different software tools can extract the data from the memory image. One could use specialized and automated forensic software products or generic file viewers such as any hex editor to search for characteristics of file headers. The advantage of the hex editor is the deeper insight into the memory management, but working with a hex editor means a lot of handwork and requires knowledge of the file system and file headers. In contrast, specialized forensic software simplifies the search and extracts the data, but may not find everything. AccessData, Sleuthkit, ESI Analyst and EnCase, to mention only some, are forensic software products that can analyze memory images.[14] Since there is no tool that extracts all possible information, it is advisable to use two or more tools for examination. There is currently (February 2010) no software solution to recover all evidence from flash memory.[8] Mobile device data extraction can be classified according to a continuum along which methods become more technical and "forensically sound", tools become more expensive, analysis takes longer, examiners need more training, and some methods can even become more invasive.[15] In manual acquisition, the examiner utilizes the user interface to investigate the content of the phone's memory.
Therefore, the device is used as normal, with the examiner taking pictures of each screen's contents. This method has the advantage that the operating system makes it unnecessary to use specialized tools or equipment to transform raw data into human-interpretable information. In practice this method is applied to cell phones, PDAs and navigation systems.[16] Disadvantages are that only data visible to the operating system can be recovered, that all data is only available in the form of pictures, and that the process itself is time-consuming. Logical acquisition implies a bit-by-bit copy of logical storage objects (e.g., directories and files) that reside on a logical store (e.g., a file system partition). Logical acquisition has the advantage that system data structures are easier for a tool to extract and organize. Logical extraction acquires information from the device using the original equipment manufacturer's application programming interface for synchronizing the phone's contents with a personal computer. A logical extraction is generally easier to work with as it does not produce a large binary blob. However, a skilled forensic examiner will be able to extract far more information from a physical extraction. Logical extraction usually does not produce any deleted information, due to it normally being removed from the phone's file system. However, in some cases, particularly with platforms built on SQLite, such as iOS and Android, the phone may keep a database file of information which does not overwrite the information but simply marks it as deleted and available for later overwriting. In such cases, if the device allows file system access through its synchronization interface, it is possible to recover deleted information.
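The SQLite behavior described above can be demonstrated with a small Python sketch. The table and marker string here are hypothetical, and whether deleted bytes actually survive depends on build settings (notably the secure_delete pragma, set explicitly below) and on whether the database has since been vacuumed:

```python
import os
import sqlite3
import tempfile

# A distinctive string standing in for a deleted message body
marker = "FORENSIC-MARKER-deleted-but-recoverable"
path = os.path.join(tempfile.mkdtemp(), "evidence.db")

con = sqlite3.connect(path)
# With secure_delete off (the default in most builds), deleted cells are
# added to the page's free list rather than being zeroed out.
con.execute("PRAGMA secure_delete = OFF")
con.execute("CREATE TABLE messages (body TEXT)")
con.execute("INSERT INTO messages VALUES (?)", (marker,))
con.commit()
con.execute("DELETE FROM messages")
con.commit()
# The logical view now shows no rows...
remaining = con.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
con.close()

# ...but a byte-level scan of the raw file can still find the deleted content.
raw = open(path, "rb").read()
found = marker.encode() in raw
```

This is why a file-system or physical extraction can recover records that a purely logical query of the live database would never return.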
File system extraction is useful for understanding the file structure, web browsing history, or app usage, as well as providing the examiner with the ability to perform an analysis with traditional computer forensic tools.[17]

Physical acquisition implies a bit-for-bit copy of an entire physical store (e.g., flash memory); it is therefore the method most similar to the examination of a personal computer. A physical acquisition has the advantage of allowing deleted files and data remnants to be examined. Physical extraction acquires information from the device by direct access to the flash memories. Generally this is harder to achieve because the device's original equipment manufacturer needs to secure against arbitrary reading of memory; therefore, a device may be locked to a certain operator. To get around this security, mobile forensics tool vendors often develop their own boot loaders, enabling the forensic tool to access the memory (and often, also to bypass user passcodes or pattern locks).[18] Generally the physical extraction is split into two steps, the dumping phase and the decoding phase.

Brute-force acquisition can be performed by third-party passcode brute-force tools that send a series of passcodes/passwords to the mobile device.[19] The brute-force attack is a time-consuming method, but effective nonetheless. This technique uses trial and error in an attempt to find the correct password or PIN to authenticate access to the mobile device. Despite the process taking an extensive amount of time, it is still one of the best methods to employ if the forensic professional is unable to obtain the passcode. With currently available software and hardware it has become quite easy to break the encryption on a mobile device's password file to obtain the passcode.[20] Two manufacturers have become public since the release of the iPhone 5:[21] Cellebrite and GrayShift. Their tools are intended for law enforcement agencies and police departments.
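The sequential trial-and-error search described above amounts to a simple loop over the candidate space. In this sketch, `try_passcode` is a hypothetical stand-in for whatever channel a real tool uses to submit a candidate code to the device; it is not the API of any actual product.

```python
def enumerate_pins(try_passcode, digits=4):
    """Try every numeric passcode of the given length in sequence,
    returning the first one accepted, or None if none works."""
    for n in range(10 ** digits):
        candidate = str(n).zfill(digits)  # 0 -> "0000", 42 -> "0042"
        if try_passcode(candidate):       # hypothetical device interface
            return candidate
    return None
```

A 4-digit PIN has only 10,000 combinations; the time cost in practice comes from the device enforcing delays or attempt limits between entries, not from the search itself.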
The Cellebrite UFED Ultimate[22] unit costs over US$40,000 and GrayShift's system costs $15,000.[23] Brute-forcing tools are connected to the device and, on iOS devices, will physically send codes starting from 0000 to 9999 in sequence until the correct code is successfully entered. Once the code entry has been successful, full access to the device is given and data extraction can commence.

Early investigations consisted of live manual analysis of mobile devices, with examiners photographing or writing down useful material for use as evidence. Without forensic photography equipment such as Fernico ZRT, EDEC Eclipse, or Project-a-Phone, this had the disadvantage of risking the modification of the device content, as well as leaving many parts of the proprietary operating system inaccessible.

In recent years a number of hardware/software tools have emerged to recover logical and physical evidence from mobile devices. Most tools consist of both hardware and software portions. The hardware includes a number of cables to connect the mobile device to the acquisition machine; the software exists to extract the evidence and, occasionally, even to analyze it.

Most recently, mobile device forensic tools have been developed for the field. This is in response both to military units' demand for fast and accurate anti-terrorism intelligence, and to law enforcement demand for forensic previewing capabilities at a crime scene, during search warrant execution, or in exigent circumstances. Such mobile forensic tools are often ruggedized for harsh environments (e.g. the battlefield) and rough treatment (e.g.
being dropped or submerged in water).[24]

Generally, because it is impossible for any one tool to capture all evidence from all mobile devices, mobile forensic professionals recommend that examiners establish entire toolkits consisting of a mix of commercial, open source, broad support, and narrow support forensic tools, together with accessories such as battery chargers, Faraday bags or other signal disruption equipment, and so forth.[25]

Some current tools include Belkasoft Evidence Center, Cellebrite UFED, Oxygen Forensic Detective, Elcomsoft Mobile Forensic Bundle, Susteen Secure View, MOBILEdit Forensic Express, and Micro Systemation XRY. Some tools have additionally been developed to address increasing criminal usage of phones manufactured with Chinese chipsets, such as MediaTek (MTK), Spreadtrum and MStar. Such tools include Cellebrite's CHINEX and XRY PinPoint.

Most open source mobile forensics tools are platform-specific and geared toward smartphone analysis. Though not originally designed to be a forensics tool, BitPim has been widely used on CDMA phones as well as LG VX4400/VX6000 and many Sanyo Sprint cell phones.[26]

Commonly referred to as the "chip-off" technique within the industry, the last and most intrusive method to get a memory image is to desolder the non-volatile memory chip and connect it to a memory chip reader. This method carries the potential danger of total data destruction: it is possible to destroy the chip and its content because of the heat required during desoldering. Before the invention of BGA technology it was possible to attach probes to the pins of the memory chip and to recover the memory through these probes. The BGA technique bonds the chips directly onto the PCB through molten solder balls, such that it is no longer possible to attach probes. Desoldering the chips is done carefully and slowly, so that the heat does not destroy the chip or data. Before the chip is desoldered the PCB is baked in an oven to eliminate remaining water.
This prevents the so-called popcorn effect, in which the remaining water would burst the chip package during desoldering. There are mainly three methods to melt the solder: hot air, infrared light, and steam-phasing. The infrared light technology works with a focused infrared light beam directed onto a specific integrated circuit and is used for small chips. The hot air and steam methods cannot focus as well as the infrared technique.

After desoldering the chip, a re-balling process cleans the chip and adds new tin balls to it. Re-balling can be done in two different ways. A third method makes the entire re-balling process unnecessary: the chip is connected to an adapter with Y-shaped springs or spring-loaded pogo pins. The Y-shaped springs need a ball on the pin to establish an electric connection, but the pogo pins can be used directly on the pads of the chip without the balls.[11][12]

The advantage of forensic desoldering is that the device does not need to be functional and that a copy without any changes to the original data can be made. The disadvantage is that the re-balling devices are expensive, so this process is very costly, and there is some risk of total data loss. Hence, forensic desoldering should only be done by experienced laboratories.[13]

Existing standardized interfaces for reading data are built into several mobile devices, e.g., to get position data from GPS equipment (NMEA) or to get deceleration information from airbag units.[16] Not all mobile devices provide such a standardized interface, nor does there exist a standard interface for all mobile devices, but all manufacturers have one problem in common: the miniaturization of device parts raises the question of how to automatically test the functionality and quality of the soldered integrated components. For this problem an industry group, the Joint Test Action Group (JTAG), developed a test technology called boundary scan.
Despite the standardization, four tasks remain before the JTAG device interface can be used to recover the memory. To find the correct bits in the boundary scan register, one must know which processor and memory circuits are used and how they are connected to the system bus. When not accessible from outside, one must find the test points for the JTAG interface on the printed circuit board and determine which test point is used for which signal. The JTAG port is not always soldered with connectors, such that it is sometimes necessary to open the device and re-solder the access port.[12] Finally, the protocol for reading the memory must be known, and the correct voltage must be determined to prevent damage to the circuit.[11]

The boundary scan produces a complete forensic image of the volatile and non-volatile memory. The risk of data change is minimized, and the memory chip doesn't have to be desoldered. Generating the image can be slow, however, and not all mobile devices are JTAG enabled. Also, it can be difficult to find the test access port.[13]

Mobile devices do not provide the possibility to run or boot from a CD, or to connect to a network share or another device with clean tools. Therefore, system commands could be the only way to save the volatile memory of a mobile device. Given the risk of modified system commands, it must be estimated whether the volatile memory is really important. A similar problem arises when no network connection is available and no secondary memory can be connected to the mobile device: the volatile memory image must then be saved on the internal non-volatile memory, where the user data is stored, and most likely deleted important data will be lost. System commands are the cheapest method, but imply some risk of data loss. Every command usage, with options and output, must be documented.

AT commands are old modem commands, e.g., the Hayes command set and Motorola phone AT commands, and can therefore only be used on a device that has modem support.
Using these commands one can only obtain information through the operating system, such that no deleted data can be extracted.[11]

For external memory and USB flash drives, appropriate software, e.g., the Unix command dd, is needed to make the bit-level copy. Furthermore, USB flash drives with memory protection do not need special hardware and can be connected to any computer. Many USB drives and memory cards have a write-lock switch that can be used to prevent data changes while making a copy. If the USB drive has no protection switch, a write blocker can be used to mount the drive in read-only mode or, in an exceptional case, the memory chip can be desoldered. The SIM and memory cards need a card reader to make the copy.[29] The SIM card is soundly analyzed, such that it is possible to recover (deleted) data like contacts or text messages.[11]

The Android operating system includes the dd command. In a blog post on Android forensic techniques, a method to live image an Android device using the dd command is demonstrated.[30]

A flasher tool is programming hardware and/or software that can be used to program (flash) the device memory, e.g., EEPROM or flash memory. These tools mainly originate from the manufacturer or from service centers for debugging, repair, or upgrade services. They can overwrite the non-volatile memory, and some, depending on the manufacturer or device, can also read the memory to make a copy, originally intended as a backup. The memory can be protected from reading, e.g., by software command or by destruction of fuses in the read circuit.[31] Note that this would not prevent writing to, or internal use of, the memory by the CPU. The flasher tools are easy to connect and use, but some can change the data, have other dangerous options, or do not make a complete copy.[12]

In general, there exists no standard for what constitutes a supported device in a specific product. This has led to a situation where different vendors define a supported device differently.
A situation such as this makes it much harder to compare products based on vendor-provided lists of supported devices. For instance, a device for which logical extraction using one product only produces a list of calls made by the device may be listed as supported by that vendor, while another vendor's product can produce much more information. Furthermore, different products extract different amounts of information from different devices. This leads to a very complex landscape when trying to survey the products. In general, this means that testing a product extensively before purchase is strongly recommended. It is quite common to use at least two products which complement each other.

Mobile phone technology is evolving at a rapid pace. Digital forensics relating to mobile devices seems to be at a standstill or evolving slowly. For mobile phone forensics to catch up with the release cycles of mobile phones, a more comprehensive and in-depth framework for evaluating mobile forensic toolkits should be developed, and data on appropriate tools and techniques for each type of phone should be made available in a timely manner.[32]

Anti-computer forensics is more difficult because of the small size of the devices and the user's restricted data accessibility.[13] Nevertheless, there are developments to secure the memory in hardware with security circuits in the CPU and memory chip, such that the memory chip cannot be read even after desoldering.[33][34]
https://en.wikipedia.org/wiki/Mobile_device_forensics
The mobile identification number (MIN) or mobile subscription identification number (MSIN) refers to the unique 10-digit number that a wireless carrier uses to identify a mobile phone; it is the last part of the international mobile subscriber identity (IMSI). The MIN is a number that uniquely identifies a mobile phone working under TIA standards for cellular and PCS technologies (e.g. EIA/TIA-553 analog, IS-136 TDMA, IS-95 or IS-2000 CDMA). MIN usage became prevalent with mobile number portability for switching providers. It can also be called the MSID (Mobile Station ID) or IMSI_S (Short IMSI).

The mobile identification number is derived from the 10-digit directory telephone number assigned to a mobile station. The rules for deriving the MIN from the 10-digit telephone number are given in the IS-95 standard. MIN1 is the first, or least significant, 24 binary digits of the MIN. MIN2 is the second part of the MIN, containing the 10 most significant binary digits. MIN1 and the ESN, along with other digital input, are used during the authentication process.

The MIN is used to identify a mobile station. In the case of analog cellular, the MIN is used to route the call. In most second-generation systems, temporary numbers are assigned to the handset when routing calls as a security precaution.
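The MIN1/MIN2 split described above is plain bit arithmetic over a 34-bit value. This sketch only illustrates that split; the full IS-95 rules for encoding the 10-digit directory number into those bits are more involved and are not reproduced here.

```python
def split_min(min_value):
    """Split a 34-bit MIN into MIN1 (least significant 24 bits)
    and MIN2 (most significant 10 bits)."""
    if min_value >> 34:
        raise ValueError("MIN must fit in 34 bits")
    min1 = min_value & ((1 << 24) - 1)   # low 24 bits
    min2 = min_value >> 24               # high 10 bits
    return min1, min2

def join_min(min1, min2):
    """Recombine MIN2 and MIN1 into the full 34-bit MIN."""
    return (min2 << 24) | min1
```

Splitting and rejoining are inverse operations, which is the property authentication hardware relies on when it consumes MIN1 separately from MIN2.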
https://en.wikipedia.org/wiki/Mobile_identification_number
A positioning system is a system for determining the position of an object in space.[1] Positioning system technologies range from interplanetary coverage with meter accuracy to workspace and laboratory coverage with sub-millimeter accuracy. A major subclass is made of geopositioning systems, used for determining an object's position with respect to Earth, i.e., its geographical position; among the best-known and most commonly used geopositioning systems are the Global Positioning System (GPS) and similar global navigation satellite systems (GNSS).

Interplanetary radio communication systems not only communicate with spacecraft, but are also used to determine their position. Radar can track targets near the Earth, but spacecraft in deep space must have a working transponder on board to echo a radio signal back. Orientation information can be obtained using star trackers.

Global navigation satellite systems (GNSS) allow specialized radio receivers to determine their 3-D space position, as well as time, with an accuracy of 2–20 metres or tens of nanoseconds. Currently deployed systems use microwave signals that can only be received reliably outdoors and that cover most of Earth's surface, as well as near-Earth space.

Networks of land-based positioning transmitters allow specialized radio receivers to determine their 2-D position on the surface of the Earth. They are generally less accurate than GNSS because their signals are not entirely restricted to line-of-sight propagation, and they have only regional coverage. However, they remain useful for special purposes and as a backup where their signals are more reliably received, including underground and indoors, and receivers can be built that consume very little battery power. LORAN is an example of such a system.
A local positioning system (LPS) is a navigation system that provides location information in all weather, anywhere within the coverage of the network, where there is an unobstructed line of sight to three or more signaling beacons whose exact position on Earth is known.[2][3][4][5] Unlike GPS or other global navigation satellite systems, local positioning systems don't provide global coverage. Instead, they use beacons which have a limited range, requiring the user to be near them. Beacons include cellular base stations, Wi-Fi and LiFi access points, and radio broadcast towers.

In the past, long-range LPSs have been used for navigation of ships and aircraft; examples are the Decca Navigator System and LORAN. Nowadays, local positioning systems are often used as a complementary (and in some cases alternative) positioning technology to GPS, especially in areas where GPS does not reach or is weak, for example inside buildings or in urban canyons. Local positioning using cellular and broadcast towers can be used on cell phones that do not have a GPS receiver. Even if the phone has a GPS receiver, battery life will be extended if cell tower location accuracy is sufficient. They are also used in trackless amusement rides like Pooh's Hunny Hunt and Mystic Manor.

Indoor positioning systems are optimized for use within individual rooms, buildings, or construction sites. They typically offer centimeter accuracy. Some provide 6-D location and orientation information.

Workspace-scale systems are designed to cover only a restricted workspace, typically a few cubic meters, but can offer accuracy in the millimeter range or better. They typically provide 6-D position and orientation. Example applications include virtual reality environments, alignment tools for computer-assisted surgery or radiology, and cinematography (motion capture, match moving).
Examples: Wii Remote with Sensor Bar, Polhemus Tracker, Precision Motion Tracking Solutions InterSense.[6]

A high-performance positioning system is used in manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration, high deceleration, high velocity and low settling time. It is designed to quickly stop its motion and accurately place the moving object at its desired final position and orientation with minimal jitter. Examples: high-velocity machine tools, laser scanning, wire bonding, printed circuit board inspection, lab automation assaying, flight simulators.

Multiple technologies exist to determine the position and orientation of an object or person in a room, a building, or the world.

Time-of-flight systems determine distance by measuring the time of propagation of pulsed signals between a transmitter and receiver. When the distances to at least three known locations are available, the position can be determined using trilateration; the Global Positioning System is an example. Optical trackers, such as laser ranging trackers, suffer from line-of-sight problems, and their performance is adversely affected by ambient light and infrared radiation. On the other hand, they do not suffer from distortion effects in the presence of metals and can have high update rates because of the speed of light.[7] Ultrasonic trackers have a more limited range because of the loss of energy with the distance traveled. They are also sensitive to ultrasonic ambient noise and have a low update rate, but their main advantage is that they do not need line of sight. Systems using radio waves, such as global navigation satellite systems, do not suffer from ambient light, but still need line of sight.

A spatial scan system uses (optical) beacons and sensors. By aiming the sensor at the beacon, the angle between them can be measured.
With triangulation the position of the object can be determined.

The main advantage of inertial sensing is that it does not require an external reference. Instead it measures rotation with a gyroscope, or position with an accelerometer, with respect to a known starting position and orientation. Because these systems measure relative positions instead of absolute positions, they can suffer from accumulated error and are therefore subject to drift. Periodic re-calibration of the system provides more accuracy.

Mechanical-linkage tracking systems use mechanical linkages between the reference and the target. Two types of linkages have been used. One is an assembly of mechanical parts that can each rotate, providing the user with multiple rotation capabilities; the orientation of the linkages is computed from the various linkage angles, measured with incremental encoders or potentiometers. The other type of linkage is a wire rolled on a coil; a spring system keeps the wire tensioned so that the distance can be measured accurately. The degrees of freedom sensed by mechanical linkage trackers depend on the tracker's mechanical structure. While six degrees of freedom are most often provided, typically only a limited range of motion is possible because of the kinematics of the joints and the length of each link. Also, the weight and the deformation of the structure increase with the distance of the target from the reference and impose a limit on the working volume.[8]

Phase-difference systems measure the shift in phase of an incoming signal from an emitter on a moving target compared to the phase of an incoming signal from a reference emitter. With this, the relative motion of the emitter with respect to the receiver can be calculated.
Like inertial sensing systems, phase-difference systems can suffer from accumulated error and are therefore subject to drift, but because the phase can be measured continuously, they are able to generate high data rates. Omega (navigation system) is an example.

Direct field sensing systems use a known field to derive orientation or position: a simple compass uses the Earth's magnetic field to know its orientation in two directions,[8] and an inclinometer uses the Earth's gravitational field to know its orientation in the remaining third direction. The field used for positioning does not need to originate from nature, however. A system of three electromagnets placed perpendicular to each other can define a spatial reference. On the receiver, three sensors measure the components of the field's flux received as a consequence of magnetic coupling. Based on these measurements, the system determines the position and orientation of the receiver with respect to the emitters' reference.

Optical positioning systems are based on optical components, as in total stations.[9]

Magnetic positioning is an indoor positioning system (IPS) solution that takes advantage of the magnetic field anomalies typical of indoor settings by using them as distinctive place-recognition signatures. The first citation of positioning based on magnetic anomaly can be traced back to military applications in 1970.[10] The use of magnetic field anomalies for indoor positioning was first claimed in 1999,[11] with later publications related to robotics in the early 2000s.[12][13] Recent applications can employ magnetic sensor data from a smartphone to wirelessly locate objects or people inside a building.[14]

Because every technology has its pros and cons, most systems use more than one technology. A system based on relative position changes, like an inertial system, needs periodic calibration against a system with absolute position measurement.
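As a concrete illustration of the time-of-flight approach described earlier, here is a minimal 2-D trilateration sketch: distances to three beacons at known positions (obtained, e.g., by multiplying propagation time by signal speed) are turned into a position by linearizing the three circle equations. The beacon coordinates and distances in the usage example are made-up values.

```python
def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    """Estimate a 2-D position from distances d1, d2, d3 to three
    beacons at known positions p1, p2, p3, by linearizing the three
    circle equations (subtracting circle 1 from circles 2 and 3
    cancels the quadratic terms, leaving a 2x2 linear system)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("beacons are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three beacons and noiseless distances measured from the point (3, 4):
x, y = trilaterate_2d((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5)
```

Real systems solve an over-determined, noisy version of this with least squares, and GNSS adds a fourth unknown for the receiver clock offset; the geometry, however, is the same.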
Systems combining two or more technologies are called hybrid positioning systems.[16] Hybrid positioning systems find the location of a mobile device using several different positioning technologies. Usually GPS (Global Positioning System) is one major component of such systems, combined with cell tower signals, wireless internet signals, Bluetooth sensors, IP addresses and network environment data.[17]

These systems are specifically designed to overcome the limitations of GPS, which is very exact in open areas but works poorly indoors or between tall buildings (the urban canyon effect). By comparison, cell tower signals are not hindered by buildings or bad weather, but usually provide less precise positioning. Wi-Fi positioning systems may give very exact positioning in urban areas with high Wi-Fi density, but depend on a comprehensive database of Wi-Fi access points.

Hybrid positioning systems are increasingly being explored for certain civilian and commercial location-based services and location-based media, which need to work well in urban areas in order to be commercially and practically viable. Early work in this area includes the Place Lab project, which started in 2003 and went inactive in 2006. Later methods let smartphones combine the accuracy of GPS with the low power consumption of cell-ID transition point finding.[18] In 2022, SuperGPS, a satellite-free positioning system with higher resolution than GPS that uses existing telecommunications networks, was demonstrated.[19][20]
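One simple way a hybrid system can combine estimates from two technologies is inverse-variance weighting: the less certain source contributes proportionally less to the fused position. This is a generic textbook illustration, not how any particular commercial system works, and the coordinates and variances are made-up example values.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent position estimates (e.g., a GPS fix and a
    Wi-Fi fix) by inverse-variance weighting, per coordinate.
    Returns the fused position and its (smaller) variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = [(w_a * a + w_b * b) / (w_a + w_b) for a, b in zip(est_a, est_b)]
    fused_var = 1.0 / (w_a + w_b)   # fusion always reduces uncertainty
    return fused, fused_var

# A GPS fix with 25 m^2 variance fused with a Wi-Fi fix with 100 m^2 variance:
pos, var = fuse([100.0, 200.0], 25.0, [110.0, 190.0], 100.0)
```

The fused variance is always smaller than either input variance, which is the formal reason combining technologies pays off even when one source is much noisier than the other.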
https://en.wikipedia.org/wiki/Positioning_technology