In vector calculus, the surface gradient is a vector differential operator that is similar to the conventional gradient. The distinction is that the surface gradient takes effect along a surface.
For a surface S in a scalar field u, the surface gradient is defined and notated as
∇_S u = ∇u − n̂ (n̂ · ∇u),
where n̂ is a unit normal to the surface.[1] Examining the definition shows that the surface gradient is the (conventional) gradient with the component normal to the surface removed (subtracted), hence this gradient is tangent to the surface. In other words, the surface gradient is the orthographic projection of the gradient onto the surface.
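As an illustrative sketch (not part of the article), the projection above can be computed numerically; the function name and the use of NumPy are assumptions made purely for illustration:

```python
import numpy as np

def surface_gradient(grad_u, n_hat):
    """Project the conventional gradient onto the surface.

    grad_u: the conventional gradient of u at a point on the surface.
    n_hat:  unit normal to the surface at that point.
    Returns grad_u minus its component along n_hat (the tangential part).
    """
    grad_u = np.asarray(grad_u, dtype=float)
    n_hat = np.asarray(n_hat, dtype=float)
    return grad_u - n_hat * np.dot(n_hat, grad_u)

# Example: gradient (1, 2, 3) at a point where the surface normal is +z;
# the surface gradient keeps only the tangential part (1, 2, 0).
print(surface_gradient([1.0, 2.0, 3.0], [0.0, 0.0, 1.0]))
```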
The surface gradient arises whenever the gradient of a quantity over a surface is important. In the study of capillary surfaces, for example, the conventional gradient of a spatially varying surface tension is not meaningful, since the tension is defined only on the surface, but the surface gradient is well defined and serves certain purposes.
https://en.wikipedia.org/wiki/Surface_gradient
In computing, code generation is part of the process chain of a compiler, in which an intermediate representation of source code is converted into a form (e.g., machine code) that can be readily executed by the target system.
Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage process is used because many algorithms for code optimization are easier to apply one at a time, or because the input to one optimization relies on the completed processing performed by another optimization. This organization also facilitates the creation of a single compiler that can target multiple architectures, as only the last of the code generation stages (the backend) needs to change from target to target. (For more information on compiler design, see Compiler.)
The input to the code generator typically consists of a parse tree or an abstract syntax tree.[1] The tree is converted into a linear sequence of instructions, usually in an intermediate language such as three-address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, a peephole optimization pass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass.)
In addition to the basic conversion from an intermediate representation into a linear sequence of machine instructions, a typical code generator tries to optimize the generated code in some way.
Tasks which are typically part of a sophisticated compiler's "code generation" phase include:
- instruction selection: choosing which target instructions to use;
- instruction scheduling: choosing the order in which to issue those instructions;
- register allocation: assigning variables to processor registers;
- generation of debug data, if required, so the code can be debugged.
Instruction selection is typically carried out by doing a recursive postorder traversal on the abstract syntax tree, matching particular tree configurations against templates; for example, the tree W := ADD(X, MUL(Y, Z)) might be transformed into a linear sequence of instructions by recursively generating the sequences for t1 := X and t2 := MUL(Y, Z), and then emitting the instruction ADD W, t1, t2.
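As a rough sketch of the traversal just described (illustrative only: the Node class, the temporary-naming scheme, and the emitted mnemonics are assumptions, not taken from any particular compiler):

```python
class Node:
    """A tiny AST node: either a leaf (a variable name) or an operator with two children."""
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

code = []          # emitted three-address instructions
temp_count = 0

def select(node):
    """Recursive postorder instruction selection.

    Returns the name of the location holding the node's value and appends
    instructions for the subtree to `code` as a side effect.
    """
    global temp_count
    if node.left is None and node.right is None:
        return node.op                       # a leaf such as X, Y, or Z
    left = select(node.left)                 # operands are generated first (postorder)
    right = select(node.right)
    temp_count += 1
    temp = f"t{temp_count}"
    code.append(f"{node.op} {temp}, {left}, {right}")   # e.g. "MUL t1, Y, Z"
    return temp

# W := ADD(X, MUL(Y, Z))
tree = Node("ADD", Node("X"), Node("MUL", Node("Y"), Node("Z")))
code.append(f"MOV W, {select(tree)}")
print("\n".join(code))   # MUL t1, Y, Z / ADD t2, X, t1 / MOV W, t2
```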
In a compiler that uses an intermediate language, there may be two instruction selection stages: one to convert the parse tree into intermediate code, and a second phase much later to convert the intermediate code into instructions from the instruction set of the target machine. This second phase does not require a tree traversal; it can be done linearly, and typically involves a simple replacement of intermediate-language operations with their corresponding opcodes. However, if the compiler is actually a language translator (for example, one that converts Java to C++), then the second code-generation phase may involve building a tree from the linear intermediate code.
When code generation occurs at runtime, as in just-in-time compilation (JIT), it is important that the entire process be efficient with respect to space and time. For example, when regular expressions are interpreted and used to generate code at runtime, a non-deterministic finite-state machine is often generated instead of a deterministic one, because usually the former can be created more quickly and occupies less memory space than the latter. Despite generally generating less efficient code, JIT code generation can take advantage of profiling information that is available only at runtime.
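To make the NFA-versus-DFA trade-off concrete, here is a minimal sketch (illustrative only; this particular automaton and its dictionary representation are assumptions, not from the article) of simulating a small NFA directly as a set of active states rather than first converting it to a DFA:

```python
# Transition table of a tiny NFA for the pattern a(b|c)*d:
# (state, symbol) -> set of possible next states.
# Building this table is cheap and compact; the equivalent DFA could
# need many more states (exponentially more in the worst case).
NFA = {
    (0, "a"): {1},
    (1, "b"): {1},
    (1, "c"): {1},
    (1, "d"): {2},
}
ACCEPT = {2}

def matches(text):
    """Simulate the NFA by tracking the set of currently active states."""
    states = {0}
    for ch in text:
        states = set().union(*(NFA.get((q, ch), set()) for q in states))
        if not states:          # no live states left: reject early
            return False
    return bool(states & ACCEPT)

print(matches("abbcd"))  # True
print(matches("ad"))     # True
print(matches("abx"))    # False
```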
The fundamental task of taking input in one language and producing output in a non-trivially different language can be understood in terms of the core transformational operations of formal language theory. Consequently, some techniques that were originally developed for use in compilers have come to be employed in other ways as well. For example, YACC (Yet Another Compiler-Compiler) takes input in Backus–Naur form and converts it to a parser in C. Though it was originally created for automatic generation of a parser for a compiler, yacc is also often used to automate writing code that needs to be modified each time specifications are changed.[3]
Many integrated development environments (IDEs) support some form of automatic source-code generation, often using algorithms in common with compiler code generators, although commonly less complicated. (See also: Program transformation, Data transformation.)
In general, a syntax and semantic analyzer tries to retrieve the structure of the program from the source code, while a code generator uses this structural information (e.g., data types) to produce code. In other words, the former adds information while the latter loses some of the information. One consequence of this information loss is that reflection becomes difficult or even impossible. To counter this problem, code generators often embed syntactic and semantic information in addition to the code necessary for execution.
https://en.wikipedia.org/wiki/Code_generation_(compiler)
Internet homicide, also called internet assassination, refers to a killing in which victim and perpetrator met online, in some cases having known each other previously only through the Internet.[1][2][3] Internet killer is also an appellation found in media reports for a person who broadcasts the crime of murder online or who murders a victim met through the Internet.[4][5] Depending on the venue used, other terms used in the media are Internet chat room killer, Craigslist killer, and Facebook serial killer. Internet homicide can also be part of an Internet suicide pact or consensual homicide.[4] Some commentators believe that reports on these homicides have overemphasized their connection to the Internet.[6]
Serial killers are murderers who target three or more victims sequentially, with a "cooling off" period between each murder, and whose motivation for killing is largely based on psychological gratification.[7][8] Such killers used forms of social networking to attract victims long before the advent of the Internet. For example, between 1900 and 1914, the Hungarian serial killer Béla Kiss lured his 24 victims by using personal ads published in newspapers.[9]
According to Paul Bocij, the author of Cyberstalking: Harassment in the Internet Age and How to Protect Your Family, "The idea that a serial killer may have operated via the Internet is, understandably, one that has resulted in a great deal of public anxiety."[10] In Harold Schechter's A to Z Encyclopedia of Serial Killers, the entry for "Internet" reads in part: "If the Internet has become a very useful tool for people interested in serial killers, there's some indication that it may also prove to be a resource for serial killers themselves."[11] Maurice Godwin, a forensic consultant, argued that "There are some sadistic predators that rely on the Mardi Gras Effect ["the ability to hide one's identity on the Internet"] to lure and murder repeatedly."[12] The first serial killer known to have used the Internet to find victims was John Edward Robinson, who was arrested in 2000 and was referred to in Law Enforcement News as the "USA's first Internet serial killer" and "the nation's first documented serial killer to use the Internet as a means of luring victims."[13][14]
Online predators, participants in internet suicide and suicide-homicide pacts, and internet killers may seek out victims through internet forums, chat rooms, listservs, email, bulletin boards, social networking sites, online role-playing games, online dating services, Yahoo groups, or Usenet.[15][16][17]
Online chatrooms are sometimes used by killers to meet and bait potential victims.[2][3][18] For example, the Japanese serial murderer Hiroshi Maeue is known to have found victims by using online suicide chat rooms.[19] The killer Lisa Marie Montgomery is reported to have met her victim in an online chatroom for rat terrier lovers called "Ratter Chatter."[20]
Online chatrooms are also used, in some cases, to plan consensual homicides. For example, in 1996 a Maryland woman, Sharon Lopatka, apparently agreed to be killed by torture and strangulation in a conversation with a man in an online chatroom.[21][22] Robert Frederick Glass pleaded guilty to killing Lopatka and later died in prison while serving his sentence. In a case that might be regarded as a quasi-consensual homicide, "John," a teenage boy from Altrincham, England, allegedly tricked another teenager into killing him using long conversations in an online chatroom. The other teenager, Mark, apparently believed he was being recruited by a female Secret Service agent. The suicide-by-homicide failed, and on May 29, 2004, John pleaded guilty to inciting someone to murder him and was sentenced to three years' supervision. Mark pleaded guilty to attempted murder and was sentenced to two years' supervision. The boys were forbidden to contact each other.[23]
As an article in the New York Daily News explained in 2009, "Long before there was a craigslist or dot-com dating, there were places where men and women who were too shy or busy to meet face to face could find romance. Calling themselves "matrimonial bureaus," these organizations were known mostly as the "lonely hearts clubs," and they flourished through the middle of the 20th century."[24] It was in venues like these (print media such as newspaper classified ads and personal or lonely hearts club ads) that 20th-century murderers such as Harry Powers, the so-called "Matrimonial Bureau Murderer,"[24] and Harvey Carignan, "the Want Ad Killer,"[25] met their victims.
Electronic advertising has gradually replaced printed ads, and the Internet is now a venue where murderers who employ a similar modus operandi can meet their victims; in Schechter's Encyclopedia, the entry for "Ads" mentions Internet dating and the use of Internet ads by the so-called "Internet Cannibal" Armin Meiwes.[11] Since 2007, several accused and convicted killers have contacted victims through advertising services such as Craigslist, a popular classified advertising website. These killers are sometimes referred to in the media as "Craigslist killers";[26][27][28] the first use of the term Craigslist killings may date to October 31, 2007, when the phrase appeared in a headline in the Saint Paul Pioneer Press in Minnesota, in reference to the murder of Katherine Olson by Michael John Anderson, who was then dubbed "the Craigslist killer".[29]
Since 2007, several suspected and convicted perpetrators have met their victims or solicited murder through Craigslist. Of those cases, two were convicted for crimes in the three-month period from February to April 2009, and a further four were accused of crimes during the 13-month span of March 2008 through April 2009.[26][27][28][30] Although, by definition, Craigslist will have been the initial contact point and a killing will have taken place in order for the suspected, accused, or convicted perpetrator to be dubbed a Craigslist killer, the actual motivations of these criminals are varied. The victims' deaths may result from a robbery or a sexual encounter that turned violent. Some of these perpetrators may not have intended to commit murder, but killed their victims during the course of a struggle or to prevent capture. Each case is different.
In 1995, Match.com was launched as the first online dating application. In the following decades, internet dating became the second-largest paid Internet industry. However, people suffering from relatedness frustration will often seek affection and care online, only to find that their needs are not met there. Self-esteem enhancement was found to produce problematic usage of internet dating apps due to the sex motive.[31]
According to Michael Largo, the author of Final Exits: The Illustrated Encyclopedia of How We Die,[32] "Internet dating is becoming very popular, but since 1995, there's been [...] over 400 instances where a homicide has been related to the person that [the victim] met online."[33][failed verification]
Several legal and technology experts have questioned the idea that there is a phenomenon of Internet killings. One legal theorist, pressed by a journalist for an Internet angle on a murder, related: "I asked her whether, if I called her up and asked her out on a blind date and murdered her, she would think it was a 'telephone-related murder'?"[34]
Leslie Harris, CEO of the Center for Democracy and Technology, said of the term "Craigslist Killer" that "A great many of the tragic incidents that tangentially involve the Internet have little or nothing to do with the Internet itself. The Craigslist case is the latest example of that phenomenon. Craigslist is an innovative and valuable resource, which frankly, is being unfairly smeared because it is an Internet site."[6] The book Hypercrime argues that "The more one looks, the more these widely circulated instances of 'cyberkilling' appear to vanish into the smoke of a 'cyberspace'."[4]
Susan Brenner, a professor of law and technology, wrote: "Is it a cybercrime for John to meet Mary on the Internet, correspond with her and use e-mail to lure her to a meeting where he kills her? News stories often describe conduct such as this as a cybercrime, or as 'Internet murder.' But why is this anything other than murder? We do not, for example, refer to killings orchestrated over the telephone as 'tele-murder' or by snail mail as 'mail murder.' It seems that this is not a cybercrime, that it is simply a real-world crime the commission of which happens to involve the use of computer technology," but she conceded that "there may be reasons to treat conduct such as this differently and to construe it as something other than a conventional crime."[35]
The following individuals have been arrested and/or convicted of crimes in which police claimed that Internet services such as chat rooms and Craigslist advertisements were used to contact victims or hire a murderer. Despite sharing a similar method of contacting victims, they apparently have varied motivations. In the list below, the victims' deaths may have been premeditated, especially if the perpetrator is a serial killer, but they may also have resulted from a robbery, insurance fraud, or a sexual encounter that turned violent.
https://en.wikipedia.org/wiki/Internet_homicide
A Markov number or Markoff number is a positive integer x, y or z that is part of a solution to the Markov Diophantine equation
x^2 + y^2 + z^2 = 3xyz,
studied by Andrey Markoff (1879, 1880).
The first few Markov numbers are
1, 2, 5, 13, 29, 34, 89, 169, 194, 233, 433, 610, 985, 1325, ...,
appearing as coordinates of the Markov triples
(1, 1, 1), (1, 1, 2), (1, 2, 5), (1, 5, 13), (2, 5, 29), (1, 13, 34), (1, 34, 89), (2, 29, 169), (5, 13, 194), (1, 89, 233), (5, 29, 433), (1, 233, 610), (2, 169, 985), (13, 34, 1325), ...
There are infinitely many Markov numbers and Markov triples.
There are two simple ways to obtain a new Markov triple from an old one (x, y, z). First, one may permute the 3 numbers x, y, z, so in particular one can normalize the triples so that x ≤ y ≤ z. Second, if (x, y, z) is a Markov triple then so is (x, y, 3xy − z). Applying this operation twice returns the same triple one started with. Joining each normalized Markov triple to the 1, 2, or 3 normalized triples one can obtain from this gives a graph starting from (1, 1, 1) as in the diagram. This graph is connected; in other words every Markov triple can be connected to (1, 1, 1) by a sequence of these operations.[1] If one starts, as an example, with (1, 5, 13) we get its three neighbors (5, 13, 194), (1, 13, 34) and (1, 2, 5) in the Markov tree if z is set to 1, 5 and 13, respectively. For instance, starting with (1, 1, 2) and trading y and z before each iteration of the transform lists Markov triples with Fibonacci numbers. Starting with that same triplet and trading x and z before each iteration gives the triples with Pell numbers.
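As an illustrative sketch (not part of the article), the branching rule just described can be used to enumerate normalized Markov triples breadth-first; the function name and the bound parameter are assumptions made for illustration:

```python
from collections import deque

def markov_triples(limit):
    """Enumerate normalized Markov triples (x <= y <= z) with z <= limit.

    Starts from (1, 1, 1) and repeatedly applies the mutation
    (x, y, z) -> (x, y, 3*x*y - z) to each of the three coordinates.
    """
    seen = set()
    queue = deque([(1, 1, 1)])
    while queue:
        x, y, z = queue.popleft()
        if (x, y, z) in seen or z > limit:
            continue
        seen.add((x, y, z))
        for a, b, c in ((y, z, x), (x, z, y), (x, y, z)):
            queue.append(tuple(sorted((a, b, 3 * a * b - c))))
    return sorted(seen, key=lambda t: (t[2], t[1], t[0]))

for triple in markov_triples(200):
    print(triple)   # (1, 1, 1), (1, 1, 2), (1, 2, 5), ..., (5, 13, 194)
```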
All the Markov numbers on the regions adjacent to 2's region are odd-indexed Pell numbers (or numbers n such that 2n^2 − 1 is a square, OEIS: A001653), and all the Markov numbers on the regions adjacent to 1's region are odd-indexed Fibonacci numbers (OEIS: A001519). Thus, there are infinitely many Markov triples of the form
(1, F_{2n−1}, F_{2n+1}),
where F_k is the kth Fibonacci number. Likewise, there are infinitely many Markov triples of the form
(2, P_{2n−1}, P_{2n+1}),
where P_k is the kth Pell number.[2]
Aside from the two smallest singular triples (1, 1, 1) and (1, 1, 2), every Markov triple consists of three distinct integers.[3]
The unicity conjecture, as remarked by Frobenius in 1913,[4] states that for a given Markov number c, there is exactly one normalized solution having c as its largest element: proofs of this conjecture have been claimed, but none seems to be correct.[5] Martin Aigner[6] examines several weaker variants of the unicity conjecture. His fixed numerator conjecture was proved by Rabideau and Schiffler in 2020,[7] while the fixed denominator conjecture and fixed sum conjecture were proved by Lee, Li, Rabideau and Schiffler in 2023.[8]
None of the prime divisors of a Markov number is congruent to 3 modulo 4, which implies that an odd Markov number is 1 more than a multiple of 4.[9] Furthermore, if m is a Markov number then none of the prime divisors of 9m^2 − 4 is congruent to 3 modulo 4. An even Markov number is 2 more than a multiple of 32.[10]
In his 1982 paper, Don Zagier conjectured that the nth Markov number is asymptotically given by
m_n = (1/3) e^{C√(n + o(1))}, where C ≈ 2.3523 is an explicit constant.
The error o(1) = (log(3m_n)/C)^2 − n is plotted below.
Moreover, he pointed out that x^2 + y^2 + z^2 = 3xyz + 4/9, an approximation of the original Diophantine equation, is equivalent to f(x) + f(y) = f(z) with f(t) = arcosh(3t/2).[11] The conjecture was proved[disputed] by Greg McShane and Igor Rivin in 1995 using techniques from hyperbolic geometry.[12]
The nth Lagrange number can be calculated from the nth Markov number with the formula
L_n = √(9 − 4/m_n^2).
The Markov numbers are sums of (non-unique) pairs of squares.
Markoff (1879, 1880) showed that if
f(x, y) = ax^2 + bxy + cy^2
is an indefinite binary quadratic form with real coefficients and discriminant D = b^2 − 4ac, then there are integers x, y for which f takes a nonzero value of absolute value at most
√D/3,
unless f is a Markov form:[13] a constant times a form
such that
where (p, q, r) is a Markov triple.
Let tr denote the trace function over matrices. If X and Y are in SL_2(ℂ), then
tr(X) tr(Y) tr(XY) = tr(X)^2 + tr(Y)^2 + tr(XY)^2 + tr(XYX^{-1}Y^{-1}) − 2,
so that if tr(XYX^{-1}Y^{-1}) = −2 then
tr(X)^2 + tr(Y)^2 + tr(XY)^2 = tr(X) tr(Y) tr(XY).
In particular, if X and Y also have integer entries then tr(X)/3, tr(Y)/3, and tr(XY)/3 are a Markov triple. If XYZ = I then tr(XY) = tr(Z), so more symmetrically, if X, Y, and Z are in SL_2(ℤ) with XYZ = I and the commutator of two of them has trace −2, then their traces divided by 3 form a Markov triple.[14]
https://en.wikipedia.org/wiki/Markov_number
In cryptography, the Zimmermann–Sassaman key-signing protocol is a protocol to speed up the public key fingerprint verification part of a key signing party. It requires some work before the event.
The protocol was invented during a key signing party with Len Sassaman, Werner Koch, Phil Zimmermann, and others.
The Sassaman-Efficient method is the first of the two variants developed. Before the event, all participants email their public keys to the keysigning coordinator. The coordinator then makes a text file of all the keys with their accompanying fingerprints, and hashes it. The coordinator makes the text file and its checksum available to all participants. The participants download the file and check its validity using the hash. Then the participants print out the list and make sure that their own key is correct.
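A minimal sketch of the coordinator's preparation step described above (illustrative only; the file name, the entry format, and the choice of SHA-256 are assumptions, since the protocol does not mandate a specific hash function):

```python
import hashlib

# Hypothetical (key ID, fingerprint) pairs collected from participants by email.
keys = [
    ("0xDEADBEEF", "ABCD 1234 EF56 7890 ABCD 1234 EF56 7890 ABCD 1234"),
    ("0xCAFEBABE", "1111 2222 3333 4444 5555 6666 7777 8888 9999 0000"),
]

# Build the text file that every participant will download and print.
keylist = "".join(f"{key_id}  {fingerprint}\n" for key_id, fingerprint in keys)
with open("keylist.txt", "w") as f:
    f.write(keylist)

# Publish the checksum alongside the file; each participant recomputes it
# locally to confirm the downloaded list has not been manipulated.
print("SHA-256:", hashlib.sha256(keylist.encode()).hexdigest())
```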
Everyone brings their own printed key list so that they know it is correct and has not been manipulated. The coordinator then reads aloud or projects the checksums of the keys. Each participant verifies and states that their key is correct, and once that is established a check mark can be put by that key. Once all the keys have been checked, the line folds upon itself and the participants show each other at least two government-issued IDs. Once the authenticity of the person has been sufficiently verified, the other participant puts a second check mark by their name.
The participants then fetch the keys from a server or obtain a keyring made for the event. They sign each key on their list that has two check marks, making sure that the fingerprints match. The signatures are then uploaded to the server or mailed directly to the key owner (if requested).[1]
The Sassaman-Projected method is a modified version of the Sassaman-Efficient method, intended for large groups. The two proceed in the same way except for identity verification: instead of verifying individually, the two forms of ID are projected for everyone to see at once. Once the person has verified that it is their key, the rest of the participants make two check marks next to the key.[2]
https://en.wikipedia.org/wiki/Zimmermann%E2%80%93Sassaman_key-signing_protocol
The California Online Privacy Protection Act of 2003 (CalOPPA),[1] effective as of July 1, 2004 and amended in 2013, is the first state law in the United States requiring commercial websites on the World Wide Web and online services to include a privacy policy on their website. According to this California state law, under the Business and Professions Code, Division 8 Special Business Regulations, Chapter 22 Internet Privacy Requirements, operators of commercial websites that collect personally identifiable information (PII) from California's residents are required to conspicuously post and comply with a privacy policy that meets specific requirements.[2] A website operator who fails to post their privacy policy within 30 days after being notified about noncompliance will be deemed in violation. PII includes information such as name, street address, email address, telephone number, date of birth, Social Security number, or other details about a person that could allow a consumer to be contacted physically or online.
According to the act, the operator of a website must post a distinctive and easily found link to the website's privacy policy, commonly listed under the heading "Your California Privacy Rights". The privacy policy must detail the kinds of information gathered by the website, how the information will or could be shared with other parties, and, if such a process exists, describe the process the users can use to review and make changes to their stored information. It also must include the policy's effective date and an update on any changes made since then.
The owner of a website can be subject to legal action under CalOPPA within 30 days of being notified for not posting the privacy policy or not meeting the law's criteria. The owner could be faulted for negligence, possibly even conscious negligence, over their failure to comply with the act, which ultimately results in charges filed against them for this noncompliance.[3]
CalOPPA non-compliance violations may be reported to the California Attorney General's office via their website.[4][2]
The act is broad in scope, reaching well beyond California's border. Neither the web server nor the company that created the website has to be in California in order to be within the scope of the law. The website only has to be accessible by California residents.[5] Many American websites thus include a boilerplate disclaimer, usually under the titled hyperlink of "Your California Privacy Rights", in their site's footer section by default for all-page access.[6]
As it does not contain enforcement provisions of its own, CalOPPA is expected to be enforced through California's Unfair Competition Law (UCL),[7] which prohibits unlawful, unfair, or fraudulent business acts or practices. UCL may be enforced for violations of CalOPPA by government officials seeking civil penalties or equitable relief, or by private parties seeking private claims.[8]
In May 2007, getting to Google's privacy policy required clicking on "About Google" on its home page, which brought up a page that included a link to its privacy policy. New York Times reporter Saul Hansell posted a blog entry[9] raising questions about Google's compliance with this act. A coalition of privacy groups also sent a letter[10] to Google's CEO, Eric Schmidt, questioning the absence of a privacy policy link on its home page. According to Electronic Privacy Information Center director Marc Rotenberg, a lawsuit challenging Google's privacy policy practices as a violation of California law was not filed in the hope that the informal complaints could be resolved through discussions.[11] Later, Google added a direct link to its privacy policy on its homepage.[12]
Assembly Bill 370 (Muratsuchi), which was signed into law in 2013, amended CalOPPA to require new privacy policy disclosures for websites and online services that track visitors. Tracking was defined in the legislative analysis of the bill as "the monitoring of an individual across multiple websites to build a profile of behavior and interests."[13][14] The amendment required privacy policies to either contain a disclosure, or link to a disclosure on a separate page, detailing how websites responded to the Do Not Track header and "other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about an individual consumer's online activities over time and across third-party Web sites or online services", if websites tracked the personally identifiable information of users. It also required privacy policies to disclose if websites allowed third parties to engage in cross-site tracking of their users. See Cal. Assembly Bill 370, which became effective on January 1, 2014.
On February 6, 2013, Assembly Member Ed Chau introduced AB 242, which would have amended the act to impose additional requirements on privacy policies.[15] The amendments would have required:
AB 242 died in the Assembly Judiciary Committee.[16]
https://en.wikipedia.org/wiki/California_Privacy_Rights
The IBM 1620 was a model of scientific minicomputer produced by IBM. It was announced on October 21, 1959,[1] and was then marketed as an inexpensive scientific computer.[2] After a total production of about two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process Control Systems (making it the first digital computer considered reliable enough for real-time process control of factory equipment).[1]
Being variable-word-length decimal, as opposed to fixed-word-length pure binary, made it an especially attractive first computer to learn on, and hundreds of thousands of students had their first experiences with a computer on the IBM 1620.
Core memory cycle times were 20 microseconds for the (earlier) Model I and 10 microseconds for the Model II (about a thousand times slower than typical computer main memory in 2006). The Model II was introduced in 1962.[3]
The IBM 1620 Model I was a variable "word" length decimal (BCD) computer using core memory. The Model I core could hold 20,000 decimal digits, with each digit stored in six bits.[4][3] More memory could be added with the IBM 1623 Storage Unit: Model 1, which held 40,000 digits, or the 1623 Model 2, which held 60,000.[1]
The Model II deployed the IBM 1625 core-storage memory unit,[5][6] whose memory cycle time was halved compared to the Model I's (internal or 1623 memory unit) by using faster cores: to 10 μs (i.e., the cycle speed was raised to 100 kHz).
While the five-digit addresses of either model could have addressed 100,000 decimal digits, no machine larger than 60,000 decimal digits was ever marketed.[7]
Memory was accessed two decimal digits at the same time (an even-odd digit pair for numeric data, or one alphameric character for text data). Each decimal digit was six bits, composed of an odd parity Check bit, a Flag bit, and four BCD bits for the value of the digit in the following format:[8]
The Flag bit had several uses:
In addition to the valid BCD digit values there were three special digit values (these could not be used in calculations):
Instructions were fixed length (12 decimal digits), consisting of a two-digit "op code", a five-digit "P Address" (usually the destination address), and a five-digit "Q Address" (usually the source address or the source immediate value). Some instructions, such as the B (branch) instruction, used only the P Address, and later smart assemblers included a "B7" instruction that generated a seven-digit branch instruction (op code, P address, and one extra digit because the next instruction had to start on an even-numbered digit).
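A small illustrative sketch (not from the article) of splitting a 12-digit 1620 instruction into its fields; the example instruction 160001000000 is the memory-clearing Transmit Field Immediate discussed later in the text:

```python
def decode(instruction):
    """Split a 12-digit IBM 1620 instruction into op code, P address and Q address."""
    assert len(instruction) == 12 and instruction.isdigit()
    op = instruction[0:2]    # two-digit op code
    p = instruction[2:7]     # five-digit P address (usually the destination)
    q = instruction[7:12]    # five-digit Q address (source address or immediate value)
    return op, p, q

# 16 = Transmit Field Immediate, P address 00010, immediate field 00000
print(decode("160001000000"))   # ('16', '00010', '00000')
```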
Fixed-point data "words" could be any size from two decimal digits up to all of memory not used for other purposes.
Floating-point data "words" (using the hardware floating-point option) could be any size from 4 decimal digits up to 102 decimal digits (2 to 100 digits for the mantissa and two digits for the exponent).
The Fortran II compiler offered limited access to this flexibility via a "Source Program Control Card" preceding the Fortran source in a fixed format:
The * in column one, ff the number of digits for the mantissa of floating-point numbers (allowing 02 to 28), kk the number of digits for fixed-point numbers (allowing 04 to 10), and s specifying the memory size of the computer that is to run the code, if it is not the current computer: 2, 4, or 6 for memories of 20,000, 40,000, or 60,000 digits.
The machine had no programmer-accessible registers: all operations were memory to memory (including the index registers of the 1620 II).
The table below lists Alphameric mode characters (and op codes).
The table below lists numeric mode characters.
The Model I used the Cyrillic character Ж (pronounced zh) on the typewriter as a general purpose invalid character with correct parity (invalid parity being indicated with an overstrike "–"). In some 1620 installations it was called a SMERSH, as used in the James Bond novels that had become popular in the late 1960s. The Model II used a new character ❚ (called "pillow") as a general purpose invalid character with correct parity.
Although the IBM 1620's architecture was very popular in the scientific and engineering community, computer scientist Edsger Dijkstra pointed out several flaws in its design in EWD37, "A review of the IBM 1620 data processing system".[9] Among these are that the machine's Branch and Transmit instruction together with Branch Back allow only one level of nested subroutine call, forcing the programmer of any code with more than one level to decide where the use of this "feature" would be most effective. He also showed how the machine's paper tape reading support could not properly read tapes containing record marks, since record marks are used to terminate the characters read into storage. One effect of this is that the 1620 cannot duplicate a tape with record marks in a straightforward way: when the record mark is encountered, the punch instruction punches an EOL character instead and terminates. However, this was not a crippling problem:
Most 1620 installations used the more convenient punched card input/output,[10]rather than paper tape.
The successor to the 1620, the IBM 1130,[11] was based on a totally different, 16-bit binary architecture. (The 1130 line retained one 1620 peripheral, the IBM 1627 drum plotter.)
IBM supplied the following software for the 1620:
The Monitors provided disk-based versions of 1620 SPS IId and FORTRAN IId, as well as a DUP (Disk Utility Program). Both Monitor systems required 20,000 digits or more of memory and one or more 1311 disk drives.
A collection of IBM 1620 related manuals in PDF format exists at bitsavers.[13]
Since the Model I used in-memory lookup tables for addition and subtraction,[14] limited-base (5 to 9) unsigned arithmetic could be performed by changing the contents of these tables, noting that the hardware included a ten's complementer for subtraction (and for addition of oppositely signed numbers).
To do fully signed addition and subtraction in bases 2 to 4 required detailed understanding of the hardware to create a "folded" addition table that would fake out the complementer and carry logic.
Also the addition table would have to be reloaded for normal base 10 operation every time address calculations were required in the program, then reloaded again for the alternate base. This made the "trick" somewhat less than useful for any practical application.
Since the Model II had addition and subtraction fully implemented in hardware, changing the table in memory could not be used as a "trick" to change arithmetic bases. However, an optional special hardware feature for octal input/output, logical operations, and base conversion to/from decimal was available. Although bases other than 8 and 10 were not supported, this made the Model II very practical for applications that needed to manipulate data formatted in octal by other computers (e.g., the IBM 7090).
The IBM 1620 Model I (commonly called the "1620" from 1959 until the 1962 introduction of the Model II) was the original machine. It was produced as inexpensively as possible, to keep the price low.
The IBM 1620 Model II (commonly called simply the Model II) was a vastly improved implementation compared to the original Model I. The Model II was introduced in 1962.
While the Lower console of both the Model 1[18] and the Model 2[19] IBM 1620 systems had the same lamps and switches, the Upper consoles of the pair were partly different.
The balance of the Upper console was the same on both models:
The Model I console typewriter was a modified Model B1, interfaced by a set of relays, and it typed at only 10 characters per second.
There were a set of instructions that wrote to the typewriter, or read from it. The general RN (read numeric) and WN (write numeric) instructions had assembly language mnemonics that supplied the "device" code in the second address field, and the control code in the low-order digit of the second address field.
To simplify input and output, there were two instructions:
The Model II used a modified Selectric typewriter, which could type at 15.5 cps, a 55% improvement.
Available peripherals were:
The standard "output" mechanism for a program was to punch cards, which was faster than using the typewriter. These punched cards were then fed through anIBM 407mechanical calculator which could be programmed to print two cards, thus being able to use the additional print columns available on the 407. All output was synchronous, and the processor paused while theInput/Output(I/O) device produced the output, so the typewriter output could completely dominate program running time.
A faster output option, theIBM 1443printer was introduced May 6, 1963,[22]and its 150–600 lines/minute capability was available for use with either model of the 1620.[23][24]
It could print either 120 or 144 columns. The character width was fixed, so it was the paper size that changed; the printer printed 10 characters to the inch, so a printer could print a maximum of 12 inches or 14.4 inches of text. In addition, the printer had a buffer, so the I/O delay for the processor was reduced. However, the print instruction would block if the line had not completed.
The "operating system" for the computer constituted the human operator, who would use controls on the computerconsole, which consisted of afront paneland typewriter, to load programs from the available bulk storage media such as decks of punched cards or rolls of paper tape that were kept in cabinets nearby. Later, the model 1311 disc storage device attached to the computer enabled a reduction in the fetch and carry of card decks or paper tape rolls, and a simple "Monitor" operating system could be loaded to help in selecting what to load from disc.[20][25]
A standard preliminary was to clear the computer memory of any previous user's detritus; being magnetic cores, the memory retained its last state even if the power had been switched off. This was effected by using the console facilities to load a simple computer program by typing its machine code at the console typewriter, running it, and stopping it. This was not challenging, as only one instruction was needed, such as 160001000000, loaded at address zero and following. This meant transmit field immediate (the 16: two-digit op code) to address 00010, the immediate constant field having the value 00000 (five-digit operand fields, the second being from address 11 back to 7), decrementing source and destination addresses until a digit with a "flag" was copied. This was the normal machine code means of copying a constant of up to five digits. The digit string was addressed at its low-order end and extended through lower addresses until a digit with a flag marked its end. But for this instruction no flag would ever be found, because the source digits had shortly before been overwritten by digits lacking a flag. Thus the operation would roll around memory (even overwriting itself), filling it with zeroes, until the operator grew tired of watching the roiling of the indicator lights and pressed the Instant Stop - Single Cycle Execute button. Each 20,000-digit module of memory took just under one second to clear.
On the 1620 II this instruction would not work (due to certain optimizations in the implementation). Instead, there was a button on the console called Modify which, when pressed together with the Check Reset button while the computer was in Manual mode, set the computer in a mode that would clear all of memory in a tenth of a second when Start was pressed, regardless of how much memory was installed. It also stopped automatically when memory was cleared, instead of requiring the operator to stop it.
Other than typing machine code at the console, a program could be loaded via the paper tape reader, the card reader, or any disk drive. Loading from either tape or disk required first typing a "bootstrap" routine on the console typewriter.
The card reader made things easier because it had a special Load button to signify that the first card was to be read into the computer's memory (starting at address 00000) and executed, as opposed to just starting the card reader, which then awaits commands from the computer to read cards. This is the "bootstrap" process that gets just enough code into the computer to read in the rest of the code (from the card reader, disk, or elsewhere) that constitutes the loader, which in turn reads in and executes the desired program.
Programs were prepared ahead of time, offline, on paper tape or punched cards. But usually the programmers were allowed to run the programs personally, hands-on, instead of submitting them to operators as was the case with mainframe computers at that time. And the console typewriter allowed entering data and getting output in an interactive fashion, instead of just getting the normal printed output from a blind batch run on a pre-packaged data set. As well, there were four program switches on the console whose state a running program could test and so have its behavior directed by its user. The computer operator could also stop a running program (or it might come to a deliberately programmed stop), then investigate or modify the contents of memory: being decimal-based, this was quite easy; even floating-point numbers could be read at a glance. Execution could then be resumed, from any desired point. Aside from debugging, scientific programming is typically exploratory, by contrast to commercial data processing where the same work is repeated on a regular schedule.
The most important items on the 1620's console were a pair of buttons labeled Insert and Release, and the console typewriter.
The typewriter was used for operator input/output, both as the main console control of the computer and for program-controlled input/output. Later models of the typewriter had a special key marked R-S that combined the functions of the console Release and Start buttons (this would be considered equivalent to an Enter key on a modern keyboard). Note: several keys on the typewriter did not generate input characters; these included Tab and Return (the 1620's alphameric and numeric BCD character sets lacked character codes for these keys).
The next most important items on the console were the buttons labeled Start, Stop-SIE, and Instant Stop-SCE.
For program debugging there were the buttons labeled Save and Display MAR.
When a Branch Back instruction was executed in Save mode, it copied the saved value back to the program counter (instead of copying the return address register, as it normally did) and deactivated Save mode.
This was used during debugging to remember where the program had been stopped, so that it could be resumed after the debugging instructions the operator had typed on the typewriter had finished. Note: the MARS register used to save the program counter was also used by the Multiply instruction, so this instruction and Save mode were incompatible. However, there was no need to use multiply in debugging code, so this was not considered to be a problem.
All of main memory could be cleared from the console by entering and executing a transfer instruction from an address to that address + 1; this would overwrite any word mark that would normally stop a transfer instruction, and wrap around at the end of memory. After a moment, pressing Stop would halt the transfer instruction, and memory would be cleared.
The IBM 1621 Paper Tape Reader could read a maximum of 150 characters per second; the IBM 1624 Paper Tape Punch could output a maximum of 15 characters per second.[1]
Both units:
The 1621 Tape Reader and 1624 Tape Punch included controls for:
The IBM 1622 Card Reader/Punch could:
The 1622's controls were divided into three groups: 3 punch control rocker switches, 6 buttons, and 2 reader control rocker switches.
Punch Rocker switches:
Buttons:
Reader Rocker switches:
The 1311 Disk Drive controls.
The FORTRAN II compiler and SPS assembler were somewhat cumbersome to use[26][27] by modern standards; however, with repetition, the procedure soon became automatic and one no longer thought about the details involved.
GOTRAN was much simpler to use, as it directly produced an executable in memory. However it was not a complete FORTRAN implementation.
To improve on this, various third-party FORTRAN compilers were developed. One of these was developed by Bob Richardson,[28][29] a programmer at Rice University: the FLAG (FORTRAN Load-and-Go) compiler. Once the FLAG deck had been loaded, all that was needed was to load the source deck to get directly to the output deck; FLAG stayed in memory, so it was immediately ready to accept the next source deck. This was particularly convenient for dealing with many small jobs. For instance, at Auckland University a batch job processor for student assignments (typically, many small programs not requiring much memory) chugged through a class lot rather faster than the later IBM 1130 did with its disk-based system. The compiler remained in memory, and the student's program had its chance in the remaining memory to succeed or fail, though a bad failure might disrupt the resident compiler.
Later, disk storage devices were introduced, removing the need for working storage on card decks. The various decks of cards constituting the compiler and loader no longer needed to be fetched from their cabinets but could be stored on disk and loaded under the control of a simple disk-based operating system: a lot of activity became less visible, but still went on.
Since the punch side of the card reader-punch did not edge-print the characters across the top of the cards, one had to take any output decks over to a separate machine, typically an IBM 557 Alphabetic Interpreter, that read each card and printed its contents along the top. Listings were usually generated by punching a listing deck and using an IBM 407 accounting machine to print the deck.
Most of the logic circuitry of the 1620 was a type of resistor–transistor logic (RTL) using "drift" transistors (a type of transistor invented by Herbert Kroemer in 1953) for their speed, which IBM referred to as Saturated Drift Transistor Resistor Logic (SDTRL). Other IBM circuit types used were referred to as: Alloy (some logic, but mostly various non-logic functions, named for the kind of transistors used), CTRL (another type of RTL, but slower than SDTRL), CTDL (a type of diode–transistor logic (DTL)), and DL (another type of RTL, named for the kind of transistor used, "drift" transistors). Typical logic levels of all these circuits (S level) were high: 0 V to −0.5 V, low: −6 V to −12 V. Transmission line logic levels of SDTRL circuits (C level) were high: 1 V, low: −1 V. Relay circuits used either of two logic levels: (T level) high: 51 V to 46 V, low: 16 V to 0 V, or (W level) high: 24 V, low: 0 V.
These circuits were constructed of individual discrete components mounted on single-sided paper-epoxy printed circuit boards 2.5 by 4.5 inches (64 by 114 millimeters) with a 16-pin gold-plated edge connector, which IBM referred to as SMS cards (Standard Modular System). The amount of logic on one card was similar to that in one 7400 series SSI or simpler MSI package (e.g., 3 to 5 logic gates or a couple of flip-flops).
These boards were inserted into sockets mounted in door-like racks which IBM referred to as gates. The machine had the following "gates" in its basic configuration:
There were two different types of core memory used in the 1620:
The address decoding logic of the main memory also used two planes of 100 pulse transformer cores per module to generate the X-Y line half-current pulses.
There were two models of the 1620, each having totally different hardware implementations:
In 1958 IBM assembled a team at the Poughkeepsie, New York development laboratory to study the "small scientific market". Initially the team consisted of Wayne Winger (Manager), Robert C. Jackson, and William H. Rhodes.
The competing computers in this market were the Librascope LGP-30 and the Bendix G-15; both were drum memory machines. IBM's smallest computer at the time was the popular IBM 650, a fixed word length decimal machine that also used drum memory. All three used vacuum tubes. It was concluded that IBM could offer nothing really new in that area. To compete effectively would require use of technologies that IBM had developed for larger computers, yet the machine would have to be produced at the least possible cost.
To meet this objective, the team set the following requirements:
The team expanded with the addition of Anne Deckman, Kelly B. Day, William Florac, and James Brenza. They completed the (codenamed) CADET prototype in the spring of 1959.
Meanwhile, the San Jose, California facility was working on a proposal of its own. IBM could only build one of the two, and the Poughkeepsie proposal won because "the San Jose version is top of the line and not expandable, while your proposal has all kinds of expansion capability - never offer a machine that cannot be expanded".
in the IBM announcement of the machine.
Management was not entirely convinced that core memory could be made to work in small machines, so Gerry Ottaway was loaned to the team to design a drum memory as a backup. During acceptance testing by the Product Test Lab, repeated core memory failures were encountered, and it looked likely that management's predictions would come true. However, at the last minute it was found that the muffin fan used to blow hot air through the core stack was malfunctioning, causing the core to pick up noise pulses and fail to read correctly. After the fan problem was fixed, there were no further problems with the core memory, and the drum memory design effort was discontinued as unnecessary.
Following the announcement of the IBM 1620 on October 21, 1959, due to an internal reorganization of IBM, it was decided to transfer the computer from the Data Processing Division at Poughkeepsie (large scale mainframe computers only) to the General Products Division at San Jose (small computers and support products only) for manufacturing.
Following the transfer to San Jose, someone there jokingly suggested that the code name CADET actually stood for "Can't Add, Doesn't Even Try", referring to the use of addition tables in memory rather than dedicated addition circuitry (and SDTRL actually standing for "Sold Down The River Logic" became a common joke among the CEs). This stuck and became very well known among the user community.[30][31][32]
An IBM 1620 Model II was used by Vearl N. Huff, NASA Headquarters (FOB 10B, Washington, D.C.) to program a three-dimensional Fortran simulation of the tethered Gemini capsule–Agena rocket module two-body problem, at a time when it was not completely understood whether it was safe to tether two objects together in space due to possible elastic-tether-induced collisions. The same computer was also used to simulate the orbits of the Gemini flights, producing printer-art charts of each orbit. These simulations were run overnight and the data examined the next day.[33]
In 1963 an IBM 1620 was installed at IIT Kanpur providing the kicker for India's software prowess.[34]
In 1964, at the Australian National University, Martin Ward used an IBM 1620 Model I to calculate the order of the Janko group J1.[35]
In 1966 the ITU produced an explanatory film on a 1963 system for typesetting by computer at the Washington Evening Star, using an IBM 1620 and a Linofilm phototypesetter.[36]
In 1964 an IBM 1620 was installed at the University of Iceland, becoming the first computer in Iceland.[37]
Many in the user community recall the 1620 being referred to as CADET, jokingly meaning "Can't Add, Doesn't Even Try", referring to the use of addition tables in memory rather than dedicated addition circuitry.[41]
See development history for an explanation of all three known interpretations of the machine's code name.
The internal code name CADET was selected for the machine. One of the developers says that this stood for "Computer with ADvanced Economic Technology"; however, others recall it as simply being one half of "SPACE - CADET", where SPACE was the internal code name of the IBM 1401 machine, also then under development.[citation needed]
https://en.wikipedia.org/wiki/IBM_1620
MalwareMustDie (MMD), NPO,[1][2] is a white hat hacking research workgroup that was launched in August 2012. MalwareMustDie is a registered nonprofit organization that serves as a medium for IT professionals and security researchers to form a workflow for reducing malware infection on the Internet. The group is known for its malware analysis blog.[3] It maintains a list[4] of Linux malware research and botnet analyses it has completed. The team communicates information about malware in general and advocates for better detection of Linux malware.[5]
MalwareMustDie is also known for its original analysis of newly emerged malware and botnets, for sharing the malware source code it finds[6] with law enforcement and the security industry, for operations to dismantle several malicious infrastructures,[7][8] for technical analysis of specific malware's infection methods, and for reports on emerging cybercrime toolkits.
Several notable internet threats that were first discovered and announced by MalwareMustDie are:
MalwareMustDie has also been active in analyzing vulnerabilities used in client-side attack vectors. For example, Adobe Flash CVE-2013-0634 (the LadyBoyle SWF exploit)[56][57] and other undisclosed Adobe vulnerabilities in 2014 earned the team Security Acknowledgments for Independent Security Researchers from Adobe.[58] Another vulnerability researched by the team was the reverse engineering of a proof of concept for a backdoor (CVE-2016-6564) in one brand of Android phone, which was later found to affect 2 billion devices.[59]
Recent activity of the team can still be seen in several noted threat disclosures, for example the "FHAPPI" state-sponsored malware attack,[60] the finding of the first ARC processor malware,[61][62][63] and the "Strudel" threat analysis (a credential-stealing scheme).[64] The team continues to post new Linux malware research on Twitter and their subreddit.
MalwareMustDie compares their mission to the Crusades, emphasizing the importance of fighting online threats out of a sense of moral duty. Many people have joined the group because they want to help the community by contributing to this effort.[65]
https://en.wikipedia.org/wiki/MalwareMustDie
Automatic parallelization, also auto parallelization or autoparallelization, refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine.[1] Fully automatic parallelization of sequential programs is a challenge because it requires complex program analysis, and the best approach may depend upon parameter values that are not known at compilation time.[2]
The programming control structures on which autoparallelization places the most focus are loops, because, in general, most of the execution time of a program takes place inside some form of loop.
There are two main approaches to parallelization of loops: pipelined multi-threading and cyclic multi-threading.[3] For example, consider a loop that on each iteration applies a hundred operations and runs for a thousand iterations. This can be thought of as a grid of 100 columns by 1000 rows, a total of 100,000 operations. Cyclic multi-threading assigns each row to a different thread. Pipelined multi-threading assigns each column to a different thread.
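A rough illustration of the cyclic scheme described above (illustrative only; Python threads are used purely for exposition, since CPython's GIL would serialize CPU-bound work, and a real compiler would emit lower-level threading code):

```python
from concurrent.futures import ThreadPoolExecutor

def row_work(i):
    """One loop iteration (one "row"): apply a hundred operations to element i."""
    x = i
    for _ in range(100):
        x = (x * 31 + 7) % 1_000_003
    return x

# Cyclic multi-threading: the 1000 independent iterations (rows) are
# distributed across worker threads; their execution order does not matter.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(row_work, range(1000)))

print(results[:5])
```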
This is the first stage, in which the scanner reads the input source files to identify all static and extern usages. Each line in the file is checked against pre-defined patterns to segregate it into tokens. These tokens are stored in a file which will be used later by the grammar engine. The grammar engine checks patterns of tokens that match pre-defined rules to identify variables, loops, control statements, functions, etc. in the code.
The analyzer is used to identify sections of code that can be executed concurrently. The analyzer uses the static data information provided by the scanner-parser. The analyzer first finds all the totally independent functions and marks them as individual tasks. It then finds which tasks have dependencies.
The scheduler lists all the tasks and their dependencies on each other in terms of execution and start times. The scheduler produces the optimal schedule in terms of the number of processors to be used or the total execution time for the application.
The scheduler generates a list of all the tasks and the details of the cores on which they will execute, along with their execution times. The code generator inserts special constructs in the code that will be read during execution by the scheduler. These constructs instruct the scheduler on which core a particular task will execute, along with its start and end times.
A cyclic multi-threading parallelizing compiler tries to split up a loop so that each iteration can be executed on a separate processor concurrently.
The compiler usually conducts two passes of analysis before actual parallelization in order to determine the following:
The first pass of the compiler performs a data dependence analysis of the loop to determine whether each iteration of the loop can be executed independently of the others. Data dependence can sometimes be dealt with, but it may incur additional overhead in the form of message passing, synchronization of shared memory, or some other method of processor communication.
The second pass attempts to justify the parallelization effort by comparing the theoretical execution time of the code after parallelization to the code's sequential execution time. Somewhat counterintuitively, code does not always benefit from parallel execution. The extra overhead that can be associated with using multiple processors can eat into the potential speedup of parallelized code.
A loop is called DOALL if all of its iterations, in any given invocation, can be executed concurrently.
The Fortran code below is DOALL, and can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array z will be correct regardless of the execution order of the other iterations.
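The loop itself is missing from this copy of the article; a minimal reconstruction consistent with the description (an element-wise assignment into z in which every iteration is independent) is:

```fortran
      do i = 1, n
         z(i) = x(i) + y(i)
      end do
```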
There are manypleasingly parallelproblems that have such DOALL loops. For example, whenrenderinga ray-traced movie, each frame of the movie can be independently rendered, and each pixel of a single frame may be independently rendered.
On the other hand, the following code cannot be auto-parallelized, because the value ofz(i)depends on the result of the previous iteration,z(i - 1).
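A Python sketch of a loop with this kind of carried dependence (the concrete recurrence, doubling the previous element, is an assumption for illustration):

    # z[i] depends on z[i - 1], so iteration i cannot start before i - 1 finishes.
    n = 1000
    z = [0.0] * n
    z[0] = 1.0
    for i in range(1, n):
        z[i] = 2.0 * z[i - 1]   # loop-carried dependence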
This does not mean that the code cannot be parallelized. Indeed, it is equivalent to the DOALL loop
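Under that assumed recurrence, the closed form below removes the dependence, so every iteration may run concurrently and in any order (again only an illustrative sketch):

    # Equivalent DOALL form: z[i] depends only on z[0] and i.
    n = 1000
    z = [0.0] * n
    z[0] = 1.0
    for i in range(1, n):
        z[i] = z[0] * 2.0 ** i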
However, current parallelizing compilers are not usually capable of bringing out these parallelisms automatically, and it is questionable whether this code would benefit from parallelization in the first place.
A pipelined multi-threading parallelizing compiler tries to break up the sequence of operations inside a loop into a series of code blocks, such that each code block can be executed on separateprocessorsconcurrently.
There are many pleasingly parallel problems that have such relatively independent code blocks, in particular systems usingpipes and filters.
For example, when producing live broadcast television, the following tasks must be performed many times a second:
A pipelined multi-threading parallelizing compiler could assign each of these six operations to a different processor, perhaps arranged in asystolic array, inserting the appropriate code to forward the output of one processor to the next processor.
Recent research focuses on using the power of GPUs[4] and multicore systems[5] to compute such independent code blocks (or simply independent iterations of a loop) at runtime.
The memory accessed (whether direct or indirect) can be simply marked for different iterations of a loop and can be compared for dependency detection. Using this information, the iterations are grouped into levels such that iterations belonging to the same level are independent of each other, and can be executed in parallel.
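A sketch of this grouping idea (the representation of per-iteration read and write sets is an assumption for illustration):

    # Group loop iterations into dependence levels: an iteration is placed one
    # level after the latest earlier iteration it conflicts with (write-write,
    # write-read, or read-write on the same location).
    def dependence_levels(reads, writes):
        # reads[i] / writes[i]: sets of memory locations touched by iteration i.
        n = len(reads)
        level = [0] * n
        for i in range(n):
            for j in range(i):
                if (writes[i] & writes[j]) or (writes[i] & reads[j]) or (reads[i] & writes[j]):
                    level[i] = max(level[i], level[j] + 1)
        return level  # iterations sharing a level are independent of each other

    # Example: iteration i writes location i and reads location i - 1 (a carried
    # dependence), so every iteration lands on its own level.
    reads = [set() if i == 0 else {i - 1} for i in range(5)]
    writes = [{i} for i in range(5)]
    print(dependence_levels(reads, writes))   # [0, 1, 2, 3, 4]

Iterations assigned to the same level can then be dispatched in parallel at runtime.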
Automatic parallelization by compilers or tools is very difficult due to the following reasons:[6]
Due to the inherent difficulties in full automatic parallelization, several easier approaches exist to get a parallel program in higher quality.
One of these is to allow programmers to add "hints" to their programs to guide compiler parallelization, such asHPFfordistributed memorysystems andOpenMPorOpenHMPPforshared memorysystems.
Another approach is to build an interactive system between programmers and parallelizing tools/compilers. Notable examples are Vector Fabrics' Pareon, SUIF Explorer (the Stanford University Intermediate Format compiler), the Polaris compiler, and ParaWise (formerly CAPTools).
Finally, another approach is hardware-supportedspeculative multithreading.
Most researchcompilersfor automatic parallelization considerFortranprograms,[citation needed]because Fortran makes stronger guarantees aboutaliasingthan languages such asC. Typical examples are:
Recently, Aubert, Rubiano, Rusch, andSeiller[8]used a dependency analysis technique[9]to automatically parallelise loops inCcode.
|
https://en.wikipedia.org/wiki/Automatic_parallelization
|
Insoftware engineering,portingis the process of adaptingsoftwarefor the purpose of achieving some form of execution in acomputing environmentthat is different from the one that a given program (meant for such execution) was originally designed for (e.g., differentCPU, operating system, or third partylibrary). The term is also used when software/hardware is changed to make them usable in different environments.[1][2]
Software isportablewhen the cost of porting it to a new platform is significantly less than the cost of writing it from scratch. The lower the cost of porting software relative to its implementation cost, the more portable it is said to be. This is distinct fromcross-platform software, which is designed from the ground up without any single "native" platform.
The term "port" is derived from the Latinportāre, meaning "to carry".[3]When code is not compatible with a particularoperating systemorarchitecture, the code must be "carried" to the new system.
The term is not generally applied to the process of adapting software to run with less memory on the same CPU and operating system.
Software developers often claim that the software they write isportable, meaning that little effort is needed to adapt it to a new environment. The amount of effort actually needed depends on several factors, including the extent to which the original environment (thesource platform) differs from the new environment (thetarget platform), the experience of the original authors in knowing whichprogramming languageconstructs and third party library calls are unlikely to be portable, and the amount of effort invested by the original authors in only using portable constructs (platform specific constructs often provide a cheaper solution).
The number of significantly different CPUs and operating systems used on the desktop today is much smaller than in the past. The dominance of thex86architecturemeans that most desktop software is never ported to a different CPU. In that same market, the choice of operating systems has effectively been reduced to three:Microsoft Windows,macOS, andLinux. However, in theembedded systemsandmobilemarkets,portabilityremains a significant issue, with theARMbeing a widely used alternative.
International standards, such as those promulgated by theISO, greatly facilitate porting by specifying details of the computing environment in a way that helps reduce differences between different standards-conformingplatforms. Writing software that stays within the bounds specified by these standards represents a practical although nontrivial effort. Porting such a program between two standards-compliant platforms (such asPOSIX.1) can be just a matter of loading the source code andrecompilingit on the new platform, but practitioners often find that various minor corrections are required, due to subtle platform differences. Most standards suffer from "gray areas" where differences in interpretation of standards lead to small variations from platform to platform.
There also exists an ever-increasing number of tools to facilitate porting, such as theGNU Compiler Collection, which provides consistent programming languages on different platforms, andAutotools, which automates the detection of minor variations in the environment and adapts the software accordingly before compilation.
The compilers for somehigh-level programming languages(e.g.Eiffel,Esterel) gain portability by outputting source code in another high levelintermediate language(such asC) for which compilers for many platforms are generally available.
Two activities related to (but distinct from) porting areemulatingandcross-compiling.
Instead of translating directly intomachine code, moderncompilerstranslate to a machine independentintermediate codein order to enhance portability of the compiler and minimize design efforts. The intermediate language defines avirtual machinethat can execute all programs written in theintermediate language(a machine is defined by its language and vice versa).[4]The intermediate code instructions are translated into equivalent machine code sequences by acode generatorto createexecutable code. It is also possible to skip the generation of machine code by actually implementing aninterpreterorJITfor the virtual machine.[5]
The use of intermediate code enhances portability of the compiler, because only the machine dependent code (the interpreter or the code generator) of the compiler itself needs to be ported to the target machine. The remainder of the compiler can be imported as intermediate code and then further processed by the ported code generator or interpreter, thus producing the compiler software or directly executing the intermediate code on the interpreter. The machine independent part can be developed andtestedon another machine (thehost machine). This greatly reduces design efforts, because the machine independent part needs to be developed only once to create portable intermediate code.[6]
An interpreter is less complex and therefore easier to port than a code generator, because it cannot perform code optimizations, owing to its limited view of the program code (it sees only one instruction at a time, whereas optimization requires examining a sequence of instructions). Some interpreters are extremely easy to port because they make only minimal assumptions about the instruction set of the underlying hardware; the virtual machine they implement is even simpler than the target CPU.[7]
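A toy illustration (the instruction set below is invented for this sketch): an interpreter for a tiny stack-based intermediate code needs nothing machine specific beyond ordinary arithmetic and a loop, which is why such interpreters port easily:

    # Minimal interpreter for a made-up stack-based intermediate code.
    # Nothing here depends on the host CPU.
    def run(program):
        stack, pc = [], 0
        while pc < len(program):
            op = program[pc]
            if op[0] == "PUSH":
                stack.append(op[1])
            elif op[0] == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op[0] == "PRINT":
                print(stack.pop())
            pc += 1

    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])   # prints 5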
Writing the compiler sources entirely in the programming language the compiler is supposed to translate makes the following approach, better known as compiler bootstrapping, feasible on the target machine:
The difficult part of coding the optimization routines is done using the high-level language instead of the assembly language of the target.
According to the designers of theBCPLlanguage, interpreted code (in the BCPL case) is more compact than machine code, typically by a factor of two to one. Interpreted code however runs about ten times slower than compiled code on the same machine.[8]
The designers of theJava programming languagetry to take advantage of the compactness of interpreted code, because a Java program may need to be transmitted over the Internet before execution can start on the target'sJava virtual machine(JVM).
Porting is also the term used when avideo gamedesigned to run on one platform, be it anarcade,video game console, orpersonal computer, is converted to run on a different platform, perhaps with some minor differences.[9]From the beginning of video games through to the 1990s, "ports", at the time often known as "conversions", were often not true ports, but rather reworked versions of the games due to the limitations of different systems. For example, the 1982 gameThe Hobbit, a text adventure augmented with graphic images, has significantly different graphic styles across the range of personal computers that its ports were developed for.[10]However, many 21st century video games are developed using software (often inC++) that can output code for one or more consoles as well as for a PC without the need for actual porting (instead relying on the common porting of individual componentlibraries).[10]
Porting arcade games to home systems with inferior hardware was difficult. The ported version of Pac-Man for the Atari 2600 omitted many of the visual features of the original game to compensate for the lack of ROM space, and the hardware struggled when multiple ghosts appeared on the screen, creating a flickering effect. The poor performance of the Atari 2600 Pac-Man is cited by some scholars as a cause of the video game crash of 1983.[11]
Many early ports suffered significant gameplay quality issues because computers greatly differed.[12]Richard Garriottstated in 1984 atOrigins Game FairthatOrigin Systemsdeveloped video games for theApple IIfirst then ported them toCommodore 64andAtari 8-bit computers, because the latter machines'spritesand other sophisticated features made porting from them to Apple "far more difficult, perhaps even impossible".[13]Reviews complained of ports that suffered from "Apple conversionitis",[14]retaining the Apple's "lousy sound and black-white-green-purple graphics";[15][16]after Garriott's statement, whenDan Buntenasked "Atari and Commodore people in the audience, are you happy with the Apple rewrites?" the audience shouted "No!" Garriott responded, "[otherwise] the Apple version will never get done. From a publisher's point of view that's not money wise".[13]
Others worked differently.Ozark Softscape, for example, wroteM.U.L.E.for the Atari first because it preferred to develop for the most advanced computers, removing or altering features as necessary during porting. Such a policy was not always feasible; Bunten stated that "M.U.L.E. can't be done for an Apple",[12]and that the non-Atari versions ofThe Seven Cities of Goldwere inferior.[17]Compute!'s Gazettewrote in 1986 that when porting from Atari to Commodore the original was usually superior. The latter's games' quality improved when developers began creating new software for it in late 1983, the magazine stated.[18]
In portingarcade games, the terms "arcade perfect" or "arcade accurate" were often used to describe how closely the gameplay, graphics, and other assets on the ported version matched the arcade version. Many arcade ports in the early 1980s were far from arcade perfect as home consoles and computers lacked the sophisticated hardware in arcade games, but games could still approximate the gameplay. Notably,Space Invaderson theAtari VCSbecame the console'skiller appdespite its differences,[19]while the laterPac-Manportwas notorious for its deviations from the arcade version.[20]Arcade-accurate games became more prevalent starting in the 1990s as home consoles caught up to the power of arcade systems. Notably, theNeo Geosystem fromSNK, which was introduced as a multi-game arcade system, would also be offered as a home console with the same specifications. This allowed arcade perfect games to be played at home.[10]
A "console port" is a game that was originally or primarily made for a console before a version is created which can be played on apersonal computer. The process of porting games from console to PC is often regarded more cynically than other types of port due to the more powerful hardware some PCs have even at console launch being underutilized, partially due to console hardware being fixed throughout eachgenerationas newer PCs constantly become even more powerful. While broadly similar today, some architectural differences persist, such as the use ofunified memoryand smallerOSson consoles. Other objections arise fromuser interfacedifferences conventional to consoles, such asgamepads,TFUIsaccompanied by narrowFoV, fixedcheckpoints,onlinerestricted to officialserversorP2P, poor or nomoddingsupport, as well as the generally greater reliance among console developers on internalhard codinganddefaultsinstead of externalAPIsandconfigurability, all of which may require expensive deep reaching redesign to avoid a "lazy" feeling port to PC.[21]
|
https://en.wikipedia.org/wiki/Porting
|
Intel MPX(Memory Protection Extensions) are a discontinued set of extensions to thex86instruction set architecture. Withcompiler,runtime libraryandoperating systemsupport, Intel MPX claimed to enhance security tosoftwareby checkingpointer referenceswhose normal compile-time intentions are maliciously exploited at runtime due tobuffer overflows. In practice, there have been too many flaws discovered in the design for it to be useful, and support has been deprecated or removed from most compilers and operating systems.Intelhas listed MPX as removed in 2019 and onward hardware in section 2.5 of its Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 1.[1]
Intel MPX introduces new boundsregisters, and newinstruction setextensions that operate on these registers. Additionally, there is a new set of "bound tables" that store bounds beyond what can fit in the bounds registers.[2][3][4][5][6]
MPX uses four new 128-bit bounds registers,BND0toBND3, each storing a pair of 64-bit lower bound (LB) and upper bound (UB) values of a buffer. The upper bound is stored inones' complementform, withBNDMK(create bounds) andBNDCU(check upper bound) performing the conversion. The architecture includes two configuration registersBNDCFGx(BNDCFGUin user space andBNDCFGSin kernel mode), and a status registerBNDSTATUS, which provides a memory address and error code in case of an exception.[7][8]
Two-level address translation is used for storing bounds in memory. The top layer consists of a Bounds Directory (BD) created on the application startup. Each BD entry is either empty or contains a pointer to a dynamically created Bounds Table (BT), which in turn contains a set of pointer bounds along with the linear addresses of the pointers. The bounds load (BNDLDX) and store (BNDSTX) instructions transparently perform the address translation and access bounds in the proper BT entry.[7][8]
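A rough software model of this two-level lookup (the address split and helper names are assumptions for illustration; the real mechanism is implemented in hardware by the BNDLDX/BNDSTX, BNDCL and BNDCU instructions):

    # Software model of the MPX bounds lookup: a bounds directory maps part of
    # a pointer's linear address to a bounds table, which holds (lower, upper).
    bounds_directory = {}          # BD: directory index -> bounds table (dict)

    def store_bounds(ptr, lower, upper):
        bd_index = ptr >> 20       # hypothetical split of the linear address
        table = bounds_directory.setdefault(bd_index, {})
        table[ptr] = (lower, upper)                      # analogous to BNDSTX

    def check_access(ptr, address):
        bd_index = ptr >> 20
        lower, upper = bounds_directory[bd_index][ptr]   # analogous to BNDLDX
        if not (lower <= address <= upper):              # BNDCL / BNDCU checks
            raise MemoryError("#BR: bound range exceeded")

    store_bounds(0x100000, 0x100000, 0x1000FF)
    check_access(0x100000, 0x100080)      # in bounds, no exception raised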
Intel MPX was introduced as part of theSkylakemicroarchitecture.[9]
IntelGoldmontmicroarchitecture also supports Intel MPX.[9]
A study presented a detailed cross-layer dissection of the MPX system stack and a comparison with three prominent software-based memory protection mechanisms (AddressSanitizer, SAFECode, and SoftBound), reaching the following conclusions.[8]
In addition, a review concluded MPX was not production ready, andAddressSanitizerwas a better option.[8]A review by Kostya Serebryany at Google, AddressSanitizer's developer,[22]had similar findings.[23]
Another study[24]exploring the scope ofSpectreandMeltdownsecurity vulnerabilities discovered that Meltdown can be used to bypass Intel MPX, using the Bound Range Exceeded (#BR) hardware exception. According to their publication, the researchers were able to leak information through a Flush+Reload covert channel from an out-of-bound access on an array safeguarded by the MPX system. Their Proof Of Concept has not been publicly disclosed.
|
https://en.wikipedia.org/wiki/Intel_MPX
|
iBeaconis a protocol developed byAppleand introduced at theApple Worldwide Developers Conferencein 2013.[1]Various vendors have since made iBeacon-compatible hardware transmitters – typically calledbeacons– a class ofBluetooth Low Energy(BLE) devices that broadcast their identifier to nearbyportable electronicdevices. The technology enablessmartphones,tabletsand other devices to perform actions when in proximity to an iBeacon.[2][3]
iBeacon is based onBluetooth low energy proximity sensingby transmitting auniversally unique identifier[4]picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location,[5]track customers, or trigger alocation-basedaction on the device such as acheck-in on social mediaor apush notification.
iBeacon can also be used with an application as anindoor positioning system,[6][7][8]which helps smartphones determine their approximate location or context. With the help of an iBeacon, a smartphone's software can approximately find its relative location to an iBeacon in a store.Brick and mortarretail stores use the beacons formobile commerce, offering customers special deals throughmobile marketing,[9]and can enablemobile paymentsthroughpoint of salesystems.
Another application is distributing messages at a specificPoint of Interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based onGPS, but with a much reduced impact on battery life and better precision.
iBeacon differs from some other location-based technologies as the broadcasting device (beacon) is only a 1-way transmitter to the receiving smartphone or receiving device, and necessitates a specific app installed on the device to interact with the beacons. This ensures that only the installed app (not the iBeacon transmitter) can track users as they walk around the transmitters.
iBeacon compatible transmitters come in a variety of form factors, including small coin cell devices, USB sticks, and generic Bluetooth 4.0 capable USBdongles.[10]
An iBeacon deployment consists of one or more iBeacon devices that transmit their own unique identification number to the local area. Software on a receiving device may then look up the iBeacon and perform various functions, such as notifying the user. Receiving devices can also connect to the iBeacons to retrieve values from iBeacon's GATT (generic attribute profile) service. iBeacons do not push notifications to receiving devices (other than their own identity). However, mobile software can use signals received from iBeacons to trigger their own push notifications.[11]
Region monitoring (limited to 20 regions on iOS) can function in the background (of the listening device) and has different delegates to notify the listening app (and user) of entry/exit in the region - even if the app is in the background or the phone is locked. Region monitoring also allows for a small window in which iOS gives a closed app an opportunity to react to the entry of a region.
As opposed to monitoring, which enables users to detect movement in-and-out of range of the beacons, ranging provides a list of beacons detected in a given region, along with the estimated distance from the user's device to each beacon.[12]Ranging works only in the foreground but will return (to the listening device) an array (unlimited) of all iBeacons found along with their properties (UUID, etc.)[13]
An iOS device receiving an iBeacon transmission can approximate the distance from the iBeacon. The distance (between transmitting iBeacon and receiving device) is categorized into three distinct ranges: immediate, near, and far.[14]
An iBeacon broadcast has the ability to approximate when a user has entered, exited, or lingered in a region. Depending on a customer's proximity to a beacon, they are able to receive different levels of interaction at each of these three ranges.[15]
The maximum range of an iBeacon transmission will depend on the location and placement, obstructions in the environment and where the device is being stored (e.g. in a leather handbag or with a thick case). Standard beacons have an approximate range of 70 meters. Long range beacons can reach up to 450 meters.
The frequency of the iBeacon transmission depends on the configuration of the iBeacon and can be altered using device specific methods. Both the rate and the transmit power have an effect on the iBeacon battery life. iBeacons come with predefined settings and several of them can be changed by the developer, including the rate, the transmit power, and the Major and Minor values. The Major and Minor values are settings which can be used to connect to specific iBeacons or to work with more than one iBeacon at the same time. Typically, multiple iBeacon deployment at a venue will have the same UUID, and use the major and minor pairs to segment and distinguish subspaces within the venue. For example, the Major values of all the iBeacons in a specific store can be set to the same value and the Minor value can be used to identify a specific iBeacon within the store.
The Bluetooth LE protocol is significantly more power efficient than Bluetooth Classic. Several chipset makers, including Texas Instruments[17] and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. A study of 16 different iBeacon vendors reports that battery life can range between 1–24 months. Apple's recommended setting of a 100 ms advertising interval with a coin cell battery provides for 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms.[18]
Battery consumption of the phones is a factor that must be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery in the vicinity of iBeacons, while newer phones can be more efficient in the same environment.[19] In addition to the time spent by the phone scanning, the number of scans and the number of beacons in the vicinity are also significant factors for battery drain, as pointed out by the Aislelabs report.[20] In a follow-up report, Aislelabs found a drastic improvement in battery consumption for the iPhone 5s and iPhone 5c versus the older iPhone 4s. At 10 surrounding iBeacons, the iPhone 4s can consume up to 11% of battery per hour, whereas the iPhone 5s consumes a little less than 5% per hour.[21] An energy-efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption.
In mid-2013Appleintroduced iBeacons and experts wrote about how it is designed to help the retail industry by simplifying payments and enabling on-site offers. On December 6, 2013, Apple activated iBeacons across its 254 US retail stores.[22]McDonald'shas used the devices to give special offers to consumers in its fast-food stores.[9]
As of May 2014, different hardware iBeacons can be purchased for as little as $5 per device to more than $30 per device.[23]Each of these different iBeacons have varying default settings for their default transmit power and iBeacon advertisement frequency. Some hardware iBeacons advertise at frequencies as low as 1 Hz while others can be as high as 10 Hz.
iBeacon technology is still in its infancy. One well-reported software quirk exists on 4.2 and 4.3 Android systems whereby the system's bluetooth stack crashes when presented with many iBeacons.[24]This was reportedly fixed in Android 4.4.4.[25]
Bluetooth low energydevices can operate in an advertisement mode to notify nearby devices of their presence.[26]In the simplest form, an iBeacon is a Bluetooth low energy device emitting advertisements following a strict format, that being an Apple-defined iBeacon prefix, followed by a variable UUID, and a major, minor pair.[27]An example iBeacon advertisement frame could look like:
wherefb0b57a2-8228-44cd-913a-94a122ba1206is the UUID.
Since iBeacon advertising is just an application of the general Bluetooth Low Energy advertisement, the above iBeacon can be emitted by issuing the following commands on Linux to a supported Bluetooth 4 Low Energy device on a modern kernel:[28]
For the retransmission interval setting (first of above commands) to work again, the transmission must be stopped with:
Devices running theAndroid operating systemprior to version 4.3 can only receive iBeacon advertisements but cannot emit iBeacon advertisements. Android 5.0 ("Lollipop") added the support for both central and peripheral modes.[29]
Byte 0-2: Standard BLE Flags (Not necessary but standard)
Byte 3-29: Apple Defined iBeacon Data
Unlike iOS, Android does not have native iBeacon support. Due to this, to use iBeacon on Android, a developer either has to use an existing library or create code that parses BLE packets to find iBeacon advertisements.
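A minimal Python sketch of such parsing (the sample payload constructed below is an assumption for illustration, reusing the UUID quoted earlier; a real scanner would first extract the manufacturer-specific data from the full advertisement):

    # Parse the manufacturer-specific part of an iBeacon advertisement:
    # 4C 00 (Apple) 02 15 (iBeacon, 21 bytes follow) UUID(16) major(2) minor(2) power(1)
    import uuid

    def parse_ibeacon(mfg_data: bytes):
        if mfg_data[0:4] != bytes.fromhex("4c000215"):
            return None                      # not an iBeacon frame
        beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
        major = int.from_bytes(mfg_data[20:22], "big")
        minor = int.from_bytes(mfg_data[22:24], "big")
        tx_power = int.from_bytes(mfg_data[24:25], "big", signed=True)
        return beacon_uuid, major, minor, tx_power

    # Sample payload built for illustration from the UUID quoted above.
    sample = (bytes.fromhex("4c000215")
              + uuid.UUID("fb0b57a2-8228-44cd-913a-94a122ba1206").bytes
              + (1).to_bytes(2, "big") + (2).to_bytes(2, "big")
              + (-59 & 0xFF).to_bytes(1, "big"))
    print(parse_ibeacon(sample))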
BLE support was introduced in Android Jelly Bean with major bug fixes in Android KitKat. Stability improvements and additional BLE features have been progressively added thereafter, with a major stability improvement in version 6.0.1 of Android Marshmallow that prevents inter-app connection leaking.
By design, the iBeacon advertisement frame is plainly visible.
This leaves the door open for interested parties to capture, copy and reproduce the iBeacon advertisement frames at different physical locations.
This can be done simply by issuing the right sequence of commands to compatible Bluetooth 4.0 USB dongles.
Successful spoofing of Apple store iBeacons was reported in February 2014.[30]This is not a security flaw in the iBeacon per se, but application developers must keep this in mind when designing their applications with iBeacons.
PayPalhas taken a more robust approach, where the iBeacon is purely the start of a complex security negotiation (Challenge–response authentication). This is not likely to be hacked, nor is it likely that it would be disrupted by copies of beacons.[31]
Listening for iBeacon can be achieved using the following commands with a modern Linux distribution:
On another terminal, launch the protocol dump program:
See Bluetooth Core Spec. Volume 4, Part E, 7.7.65.2: LE Meta Event::LE Advertising Report Sub-Event, for details on the hcidump output.
TheMAC addressof the iBeacon along with its iBeacon payload is clearly identifiable. The sequence of commands intechnical detailscan then be used to reproduce the iBeacon frame.
Even though the NFC environment is very different and has many non-overlapping applications, it is still frequently compared with iBeacon.
The NFC range is up to 20 cm (7.87 inches) but the optimum range is less than 4 cm (1.57 inches). iBeacons have a significantly higher range.
Not all phones carry NFC chips. Apple's first iPhone model containing NFC chips was the iPhone 6, introduced September 2014, but most modern phones have had Bluetooth 4.0 or later capability for several years prior to this.
|
https://en.wikipedia.org/wiki/IBeacon
|
JPL sequences or JPL codes consist of two linear feedback shift registers (LFSRs) whose code sequence lengths La and Lb are relatively prime (coprime).[1] In this case the code sequence length Lc of the generated overall sequence is equal to the product Lc = La · Lb.
It is also possible for more than two LFSRs to be interconnected through multipleXORsat the output for as long as all code sequence lengths of the individual LFSR are relatively prime to one another.
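A small Python sketch of the construction (the LFSR tap positions are chosen here purely for illustration): two LFSRs with coprime periods are combined with XOR, and the resulting period is their product:

    # Two short Fibonacci LFSRs with coprime periods (7 and 3).  XORing their
    # outputs gives a combined sequence whose period is the product, 21.
    def lfsr(taps, state):
        # taps: 1-indexed feedback positions; state: list of bits.
        while True:
            out = state[-1]
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]
            yield out
            state[:] = [feedback] + state[:-1]

    def period(seq):
        for p in range(1, len(seq) // 2):
            if all(seq[i] == seq[i + p] for i in range(len(seq) - p)):
                return p

    a = lfsr([3, 2], [1, 0, 0])          # x^3 + x^2 + 1, period 7
    b = lfsr([2, 1], [1, 0])             # x^2 + x + 1, period 3
    combined = [next(a) ^ next(b) for _ in range(200)]
    print(period(combined))              # 21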
JPL sequences were originally developed at the Jet Propulsion Laboratory, from which the name for these code sequences is derived.
Areas of application include distance measurements utilizing spread spectrum signals for satellites and in space technology. They are also utilized in the more precise military P/Y code used in the Global Positioning System (GPS).[2] However, they are currently being replaced by the newer M-code.
Due to the relatively long spreading sequences, they can be used to measure relatively long ranges without ambiguities, as required for deep space missions. With a rough synchronization between receiver and transmitter, this can be achieved with shorter sequences as well.
Their major advantage is that they produce relatively long sequences with only two LFSRs, which makes them energy efficient and very hard to detect due to the huge spreading factor. The same structure can be used to realize a dither generator, used as an additive noise source to remove a numerical bias in digital computations (due to fixed-point arithmetic, which has one more negative than positive representable value, so the mean value is slightly negative).
|
https://en.wikipedia.org/wiki/JPL_sequence
|
Technology education[1]is the study oftechnology, in which students "learn about the processes and knowledge related to technology".[2]As a field of study, it covers the human's ability to shape and change the physical world to meet needs, by manipulating materials andtoolswith techniques. It addresses the disconnect between wide usage and the lack of knowledge about technical components of technologies used and how to fix them.[3]This emergent discipline seeks to contribute to the learners' overallscientificandtechnological literacy,[4]andtechnacy.
Technology education should not be confused witheducational technology. Educational technology focuses on a more narrow subset of technology use that revolves around the use of technology in and for education as opposed to technology education's focus on technology's use in general.[5]
Technology education is an offshoot of theIndustrial Artstradition in theUnited Statesand the Craft teaching or vocational education in other countries.[4]In 1980, through what was called the "Futuring Project", the name of "industrial arts education" was changed to be "technology education" inNew York State; the goal of this movement was to increase students' technological literacy.[6]Since the nature of technology education is significantly different from its predecessor, Industrial Arts teachers underwent inservice education in the mid-1980s while a Technology Training Network was also established by the New York State Education Department (NYSED).[4]
In Sweden, technology as a new subject emerged from the tradition of crafts subjects while in countries like Taiwan and Australia, its elements are discernible in historical vocational programs.[7]
In the 21st century, Mars suit design was utilized as a topic for technology education.[8]
TeachThought, a private entity, described technology education as being in a “status of childhood and bold experimentation”.[9] A survey of teachers across the United States by an independent market research company found that 86 percent of teacher-respondents agree that technology must be used in the classroom, 96 percent say it promotes student engagement, and 89 percent agree technology improves student outcomes.[10] Technology is present in many education systems. As of July 2018, American public schools provided one desktop computer for every five students and spent over $3 billion annually on digital content.[11] In the 2015–2016 school year, more state-standardized testing for elementary and middle levels was conducted through digital platforms than through the traditional pen-and-paper method.[12]
The digital revolution offers fresh learning prospects. Students can learn online even if they are not inside the classroom. Advancement in technology entails new approaches of combining present and future technological improvements and incorporating these innovations into the public education system.[13]With technology incorporated into everyday learning, this creates a new environment with new personalized and blended learning. Students are able to complete work based on their own needs as well as having the versatility of individualized study and it evolves the overall learning experience. Technology space in education is huge. It advances and changes rapidly.[14]In the United Kingdom, computer technology helped elevate standards in different schools to confront various challenges.[15]The UK adopted the “Flipped Classroom” concept after it became popular in the United States. The idea is to reverse conventional teaching methods through the delivery of instructions online and outside of traditional classrooms.[16]
In Europe, the European Commission espoused a Digital Education Plan in January 2018. The program consists of 11 initiatives that support utilization of technology and digital capabilities in education development.[17]The Commission also adopted an action plan called the Staff Working Document[18]which details its strategy in implementing digital education. This plan includes three priorities formulating measures to assist European Union member-states to tackle all related concerns.[19]The whole framework will support the European Qualifications Framework for Lifelong Learning[20]and European Classification of Skills, Competences, Qualifications, and Occupations.[21]
In East Asia, the World Bank and South Korea's Ministry of Education, Science, and Technology co-sponsored a yearly two-day international symposium,[22] held in October 2017, to support education and ICT concerns for industry practitioners and senior policymakers. Participants plan and discuss issues in the use of new technologies for schools within the region.[23]
|
https://en.wikipedia.org/wiki/Tech_ed
|
Instatistics, thesample maximumandsample minimum,also called thelargest observationandsmallest observation,are the values of the greatest and least elements of asample.[1]They are basicsummary statistics, used indescriptive statisticssuch as thefive-number summaryandBowley's seven-figure summaryand the associatedbox plot.
The minimum and the maximum value are the first and lastorder statistics(often denotedX(1)andX(n)respectively, for a sample size ofn).
If the sample hasoutliers, they necessarily include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum need not be outliers, if they are not unusually far from other observations.
The sample maximum and minimum are theleastrobust statistics: they are maximally sensitive to outliers.
This can either be an advantage or a drawback: if extreme values are real (not measurement errors), and of real consequence, as in applications ofextreme value theorysuch as building dikes or financial loss, then outliers (as reflected in sample extrema) are important. On the other hand, if outliers have little or no impact on actual outcomes, then using non-robust statistics such as the sample extrema simply clouds the statistics, and robust alternatives should be used, such as otherquantiles: the 10th and 90thpercentiles(first and lastdecile) are more robust alternatives.
In addition to being a component of every statistic that uses all elements of the sample, the sample extrema are important parts of therange, a measure of dispersion, andmid-range, a measure of location. They also realize themaximum absolute deviation: one of them is thefurthestpoint from any given point, particularly a measure of center such as the median or mean.
For a sample set, the maximum function is non-smooth and thus non-differentiable. For optimization problems that occur in statistics it often needs to be approximated by a smooth function that is close to the maximum of the set.
A smooth maximum, for example the log-sum-exp function g(x1,…,xn)=log⁡(exp⁡(x1)+⋯+exp⁡(xn)){\displaystyle g(x_{1},\dots ,x_{n})=\log(\exp(x_{1})+\cdots +\exp(x_{n}))}, is a good approximation of the sample maximum.
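A quick numerical check (illustrative only; the sharpness parameter beta is an added generalization of the formula above, and the sample values are invented):

    # The log-sum-exp "smooth maximum" approaches the true sample maximum as
    # the sharpness parameter beta grows.
    import math

    def smooth_max(xs, beta=10.0):
        return math.log(sum(math.exp(beta * x) for x in xs)) / beta

    sample = [1.2, 3.7, 2.9, 3.5]
    print(max(sample))            # 3.7
    print(smooth_max(sample))     # close to 3.7 (about 3.71)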
The sample maximum and minimum are basicsummary statistics, showing the most extreme observations, and are used in thefive-number summaryand a version of theseven-number summaryand the associatedbox plot.
The sample maximum and minimum provide a non-parametricprediction interval:
in a sample from a population, or more generally anexchangeable sequenceof random variables, each observation is equally likely to be the maximum or minimum.
Thus if one has a sample{X1,…,Xn},{\displaystyle \{X_{1},\dots ,X_{n}\},}and one picks another observationXn+1,{\displaystyle X_{n+1},}then this has1/(n+1){\displaystyle 1/(n+1)}probability of being the largest value seen so far,1/(n+1){\displaystyle 1/(n+1)}probability of being the smallest value seen so far, and thus the other(n−1)/(n+1){\displaystyle (n-1)/(n+1)}of the time,Xn+1{\displaystyle X_{n+1}}falls between the sample maximum and sample minimum of{X1,…,Xn}.{\displaystyle \{X_{1},\dots ,X_{n}\}.}Thus, denoting the sample maximum and minimum byMandm,this yields an(n−1)/(n+1){\displaystyle (n-1)/(n+1)}prediction interval of [m,M].
For example, ifn= 19, then [m,M] gives an 18/20 = 90% prediction interval – 90% of the time, the 20th observation falls between the smallest and largest observation seen heretofore. Likewise,n= 39 gives a 95% prediction interval, andn= 199 gives a 99% prediction interval.
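A short Monte Carlo check of the n = 19 case (illustrative only):

    # Empirically, a 20th draw falls inside [min, max] of the first 19 draws
    # about (n - 1)/(n + 1) = 18/20 = 90% of the time.
    import random

    random.seed(0)
    hits, trials = 0, 100_000
    for _ in range(trials):
        sample = [random.random() for _ in range(19)]
        new = random.random()
        if min(sample) <= new <= max(sample):
            hits += 1
    print(hits / trials)   # roughly 0.90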
Due to their sensitivity to outliers, the sample extrema cannot reliably be used asestimatorsunless data is clean – robust alternatives include the first and lastdeciles.
However, with clean data or in theoretical settings, they can sometimes prove very good estimators, particularly forplatykurticdistributions, where for small data sets themid-rangeis the mostefficientestimator.
They are inefficient estimators of location for mesokurtic distributions, such as thenormal distribution, and leptokurtic distributions, however.
For sampling without replacement from auniform distributionwith one or two unknown endpoints (so1,2,…,N{\displaystyle 1,2,\dots ,N}withNunknown, orM,M+1,…,N{\displaystyle M,M+1,\dots ,N}with bothMandNunknown), the sample maximum, or respectively the sample maximum and sample minimum, aresufficientandcompletestatistics for the unknown endpoints; thus an unbiased estimator derived from these will beUMVUestimator.
If only the top endpoint is unknown, the sample maximum is a biased estimator for the population maximum, but the unbiased estimatork+1km−1{\displaystyle {\frac {k+1}{k}}m-1}(wheremis the sample maximum andkis the sample size) is the UMVU estimator; seeGerman tank problemfor details.
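A small simulation of this estimator (illustrative only), drawing without replacement from 1..N with N hidden from the estimator:

    # The corrected sample maximum (k + 1)/k * m - 1 is unbiased for N,
    # while the raw sample maximum m systematically underestimates it.
    import random

    random.seed(1)
    N, k, trials = 1000, 10, 50_000
    raw, corrected = 0.0, 0.0
    for _ in range(trials):
        m = max(random.sample(range(1, N + 1), k))
        raw += m
        corrected += (k + 1) / k * m - 1
    print(raw / trials)        # noticeably below 1000 (about 910)
    print(corrected / trials)  # close to 1000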
If both endpoints are unknown, then the sample range is a biased estimator for the population range, but correcting as for maximum above yields the UMVU estimator.
If both endpoints are unknown, then themid-rangeis an unbiased (and hence UMVU) estimator of the midpoint of the interval (here equivalently the population median, average, or mid-range).
The reason the sample extrema are sufficient statistics is that the conditional distribution of the non-extreme samples is just the distribution for the uniform interval between the sample maximum and minimum – once the endpoints are fixed, the values of the interior points add no additional information.
The sample extrema can be used for a simplenormality test, specifically of kurtosis: one computes thet-statisticof the sample maximum and minimum (subtractssample meanand divides by thesample standard deviation), and if they are unusually large for the sample size (as per thethree sigma ruleand table therein, or more precisely aStudent's t-distribution), then the kurtosis of the sample distribution deviates significantly from that of the normal distribution.
For instance, a daily process should expect a 3σ event once per year (of calendar days; once every year and a half of business days), while a 4σ event happens on average every 40 years of calendar days, 60 years of business days (once in a lifetime), 5σ events happen every 5,000 years (once in recorded history), and 6σ events happen every 1.5 million years (essentially never). Thus if the sample extrema are 6 sigmas from the mean, one has a significant failure of normality.
Further, this test is very easy to communicate without involved statistics.
These tests of normality can be applied if one faceskurtosis risk, for instance.
Sample extrema play two main roles inextreme value theory:
However, caution must be used in using sample extrema as guidelines: inheavy-tailed distributionsor fornon-stationaryprocesses, extreme events can be significantly more extreme than any previously observed event. This is elaborated inblack swan theory.
|
https://en.wikipedia.org/wiki/Sample_maximum_and_minimum
|
Inneuroscience,synaptic plasticityis the ability ofsynapsestostrengthen or weakenover time, in response to increases or decreases in their activity.[1]Sincememoriesare postulated to be represented by vastly interconnectedneural circuitsin thebrain, synaptic plasticity is one of the important neurochemical foundations oflearningandmemory(seeHebbian theory).
Plastic change often results from the alteration of the number ofneurotransmitter receptorslocated on a synapse.[2]There are several underlying mechanisms that cooperate to achieve synaptic plasticity, including changes in the quantity ofneurotransmittersreleased into a synapse and changes in how effectively cells respond to those neurotransmitters.[3]Synaptic plasticity in bothexcitatoryandinhibitorysynapses has been found to be dependent uponpostsynapticcalciumrelease.[2]
In 1973,Terje LømoandTim Blissfirst described the now widely studied phenomenon oflong-term potentiation(LTP) in a publication in theJournal of Physiology. The experiment described was conducted on the synapse between theperforant pathanddentate gyrusin thehippocampiof anaesthetised rabbits. They were able to show a burst of tetanic (100 Hz) stimulus on perforant path fibres led to a dramatic and long-lasting augmentation in the post-synaptic response of cells onto which these fibres synapse in the dentate gyrus. In the same year, the pair published very similar data recorded from awake rabbits. This discovery was of particular interest due to the proposed role of the hippocampus in certain forms of memory.
Two molecular mechanisms for synaptic plasticity involve theNMDAandAMPAglutamate receptors. Opening of NMDA channels (which relates to the level of cellulardepolarization) leads to a rise in post-synaptic Ca2+concentration and this has been linked to long-term potentiation, LTP (as well as to proteinkinaseactivation); strong depolarization of the post-synaptic cell completely displaces themagnesiumions that block NMDA ion channels and allows calcium ions to enter a cell – probably causing LTP, while weaker depolarization only partially displaces the Mg2+ions, resulting in less Ca2+entering the post-synaptic neuron and lower intracellular Ca2+concentrations (which activate protein phosphatases and inducelong-term depression, LTD).[4]
These activated protein kinases serve to phosphorylate post-synaptic excitatory receptors (e.g.AMPA receptors), improving cation conduction, and thereby potentiating the synapse. Also, these signals recruit additional receptors into the post-synaptic membrane, stimulating the production of a modified receptor type, thereby facilitating an influx of calcium. This in turn increases post-synaptic excitation by a given pre-synaptic stimulus. This process can be reversed via the activity of protein phosphatases, which act to dephosphorylate these cation channels.[5]
The second mechanism depends on asecond messengercascade regulatinggene transcriptionand changes in the levels of key proteins such asCaMKIIand PKAII. Activation of the second messenger pathway leads to increased levels of CaMKII and PKAII within thedendritic spine. These protein kinases have been linked to growth in dendritic spine volume and LTP processes such as the addition of AMPA receptors to theplasma membraneand phosphorylation of ion channels for enhanced permeability.[6]Localization or compartmentalization of activated proteins occurs in the presence of their given stimulus which creates local effects in the dendritic spine. Calcium influx from NMDA receptors is necessary for the activation of CaMKII. This activation is localized to spines with focal stimulation and is inactivated before spreading to adjacent spines or the shaft, indicating an important mechanism of LTP in that particular changes in protein activation can be localized or compartmentalized to enhance the responsivity of single dendritic spines. Individual dendritic spines are capable of forming unique responses to presynaptic cells.[7]This second mechanism can be triggered byprotein phosphorylationbut takes longer and lasts longer, providing the mechanism for long-lasting memory storage. The duration of the LTP can be regulated by breakdown of thesesecond messengers.Phosphodiesterase, for example, breaks down the secondary messengercAMP, which has been implicated in increased AMPA receptor synthesis in the post-synaptic neuron[citation needed].
Long-lasting changes in the efficacy of synaptic connections (long-term potentiation, or LTP) between two neurons can involve the making and breaking of synaptic contacts. Genes such as activin ß-A, which encodes a subunit ofactivin A, are up-regulated during early stage LTP. The activin molecule modulates the actin dynamics in dendritic spines through theMAP-kinase pathway. By changing theF-actincytoskeletalstructure of dendritic spines, spine necks are lengthened producing increased electrical isolation.[8]The end result is long-term maintenance of LTP.[9]
The number ofion channelson the post-synaptic membrane affects the strength of the synapse.[10]Research suggests that the density of receptors on post-synaptic membranes changes, affecting the neuron's excitability in response to stimuli. In a dynamic process that is maintained in equilibrium,N-methyl D-aspartate receptor (NMDA receptor)and AMPA receptors are added to the membrane byexocytosisand removed byendocytosis.[11][12][13]These processes, and by extension the number of receptors on the membrane, can be altered by synaptic activity.[11][13]Experiments have shown that AMPA receptors are delivered to the synapse through vesicularmembrane fusionwith the postsynaptic membrane via the protein kinase CaMKII, which is activated by the influx of calcium through NMDA receptors. CaMKII also improves AMPA ionic conductance through phosphorylation.[14]When there is high-frequency NMDA receptor activation, there is an increase in the expression of a proteinPSD-95that increases synaptic capacity for AMPA receptors.[15]This is what leads to a long-term increase in AMPA receptors and thus synaptic strength and plasticity.
If the strength of a synapse is only reinforced by stimulation or weakened by its lack, apositive feedback loopwill develop, causing some cells never to fire and some to fire too much. But two regulatory forms of plasticity, called scaling andmetaplasticity, also exist to providenegative feedback.[13]Synaptic scaling is a primary mechanism by which a neuron is able to stabilize firing rates up or down.[16]
Synaptic scaling serves to maintain the strengths of synapses relative to each other, lowering amplitudes of small excitatory postsynaptic potentials in response to continual excitation and raising them after prolonged blockage or inhibition.[13] This effect occurs gradually over hours or days, by changing the numbers of NMDA receptors at the synapse (Pérez-Otaño and Ehlers, 2005). Metaplasticity varies the threshold level at which plasticity occurs, allowing integrated responses to synaptic activity spaced over time and preventing saturated states of LTP and LTD. Since LTP and LTD (long-term depression) rely on the influx of Ca2+ through NMDA channels, metaplasticity may be due to changes in NMDA receptors, altered calcium buffering, altered states of kinases or phosphatases, and a priming of protein synthesis machinery.[17] Synaptic scaling is a primary mechanism by which a neuron is able to be selective to its varying inputs.[18] The neuronal circuitry affected by LTP/LTD and modified by scaling and metaplasticity leads to reverberatory neural circuit development and regulation in a Hebbian manner, which is manifested as memory, whereas the changes in neural circuitry, which begin at the level of the synapse, are an integral part of the ability of an organism to learn.[19]
There is also a specificity element of biochemical interactions to create synaptic plasticity, namely the importance of location. Processes occur at microdomains – such asexocytosisof AMPA receptors is spatially regulated by thet-SNARESTX4.[20]Specificity is also an important aspect of CAMKII signaling involving nanodomain calcium.[7]The spatial gradient of PKA between dendritic spines and shafts is also important for the strength and regulation of synaptic plasticity.[6]It is important to remember that the biochemical mechanisms altering synaptic plasticity occur at the level of individual synapses of a neuron. Since the biochemical mechanisms are confined to these "microdomains," the resulting synaptic plasticity affects only the specific synapse at which it took place.
A bidirectional model, describing both LTP and LTD, of synaptic plasticity has proved necessary for a number of different learning mechanisms incomputational neuroscience,neural networks, andbiophysics. Three major hypotheses for the molecular nature of this plasticity have been well-studied, and none are required to be the exclusive mechanism:
Of these, the latter two hypotheses have recently been mathematically examined and shown to have identical calcium-dependent dynamics, which provides strong theoretical evidence for a calcium-based model of plasticity. In a linear model in which the total number of receptors is conserved, the synaptic weight W evolves as
{\displaystyle {\frac {dW}{dt}}={\frac {\Omega ([\mathrm {Ca} ^{2+}])-W}{\tau ([\mathrm {Ca} ^{2+}])}},}
where Ω{\displaystyle \Omega } is the calcium-dependent steady-state weight and τ{\displaystyle \tau } is the calcium-dependent time constant.
Both Ω{\displaystyle \Omega } and τ{\displaystyle \tau } are found experimentally and agree with results from both hypotheses. The model makes important simplifications that make it unsuited for actual experimental predictions, but provides a significant basis for the hypothesis of a calcium-based synaptic plasticity dependence.[21]
Short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes unlike long-term plasticity, which lasts from minutes to hours. Short-term plasticity can either strengthen or weaken a synapse.
Short-term synaptic enhancement results from an increased probability of synaptic terminals releasing transmitters in response to pre-synaptic action potentials. Synapses will strengthen for a short time because of an increase in the amount of packaged transmitter released in response to each action potential.[22]Depending on the time scales over which it acts synaptic enhancement is classified asneural facilitation,synaptic augmentationorpost-tetanic potentiation.
Synaptic fatigue or depression is usually attributed to the depletion of the readily releasable vesicles. Depression can also arise from post-synaptic processes and from feedback activation of presynaptic receptors.[23] Heterosynaptic depression is thought to be linked to the release of adenosine triphosphate (ATP) from astrocytes.[24]
Long-term depression(LTD) andlong-term potentiation(LTP) are two forms of long-term plasticity, lasting minutes or more, that occur at excitatory synapses.[2]NMDA-dependent LTD and LTP have been extensively researched, and are found to require the binding ofglutamate, andglycineorD-serinefor activation of NMDA receptors.[24]The turning point for the synaptic modification of a synapse has been found to be modifiable itself, depending on the history of the synapse.[25]Recently, a number of attempts have been made to offer a comprehensive model that could account for most forms of synaptic plasticity.[26]
Brief activation of an excitatory pathway can produce what is known as long-term depression (LTD) of synaptic transmission in many areas of the brain. LTD is induced by a minimum level of postsynaptic depolarization and simultaneous increase in the intracellular calcium concentration at the postsynaptic neuron. LTD can be initiated at inactive synapses if the calcium concentration is raised to the minimum required level by heterosynaptic activation, or if the extracellular concentration is raised. These alternative conditions capable of causing LTD differ from the Hebb rule, and instead depend on synaptic activity modifications.D-serinerelease byastrocyteshas been found to lead to a significant reduction of LTD in the hippocampus.[24]Activity-dependent LTD was investigated in 2011 for the electrical synapses (modification of Gap Junctions efficacy through their activity).[27]In the brain, cerebellum is one of the structures where LTD is a form of neuroplasticity.[28]
Long-term potentiation, commonly referred to as LTP, is an increase in synaptic response following potentiating pulses of electrical stimuli that sustains at a level above the baseline response for hours or longer. LTP involves interactions between postsynaptic neurons and the specific presynaptic inputs that form a synaptic association, and is specific to the stimulated pathway of synaptic transmission.
The long-term stabilization of synaptic changes is determined by a parallel increase of pre- and postsynaptic structures such asaxonal bouton,dendritic spineandpostsynaptic density.[15]On the molecular level, an increase of the postsynaptic scaffolding proteinsPSD-95andHomer1chas been shown to correlate with the stabilization of synaptic enlargement.[15]
Modification of astrocyte coverage at the synapses in the hippocampus has been found to result from the induction of LTP, which has been found to be linked to the release of D-serine, nitric oxide, and the chemokine s100B by astrocytes.[24] LTP is also a model for studying the synaptic basis of Hebbian plasticity. Induction conditions resemble those described for the initiation of long-term depression (LTD), but a stronger depolarization and a greater increase of calcium are necessary to achieve LTP.[29] Experiments performed by stimulating an array of individual dendritic spines have shown that synaptic cooperativity by as few as two adjacent dendritic spines prevents LTD, allowing only LTP.[30]
The modification ofsynaptic strengthis referred to as functional plasticity. Changes in synaptic strength involve distinct mechanisms of particular types ofglial cells, the most researched type beingastrocytes.[24]
Every kind of synaptic plasticity has different computational uses.[31]Short-term facilitation has been demonstrated to serve as both working memory and mapping input for readout, short-term depression for removing auto-correlation. Long-term potentiation is used for spatial memory storage while long-term depression for both encoding space features, selective weakening of synapses and clearing old memory traces respectively. Forwardspike-timing-dependent plasticityis used for long range temporal correlation, temporal coding and spatiotemporal coding. The reversedspike-timing-dependent plasticityacts as sensory filtering.
|
https://en.wikipedia.org/wiki/Synaptic_plasticity
|
This is a list ofcomputability and complexity topics, by Wikipedia page.
Computability theoryis the part of the theory ofcomputationthat deals with what can be computed, in principle.Computational complexity theorydeals with how hard computations are, in quantitative terms, both with upper bounds (algorithmswhose complexity in the worst cases, as use of computing resources, can be estimated), and from below (proofs that no procedure to carry out some task can be very fast).
For more abstract foundational matters, see thelist of mathematical logic topics. See alsolist of algorithms,list of algorithm general topics.
See thelist of complexity classes
|
https://en.wikipedia.org/wiki/List_of_computability_and_complexity_topics
|
Triggerfish describes a technology of cell phone interception and surveillance using a mobile cellular base station (microcell or picocell). The devices are also known as cell-site simulators or digital analyzers.
Neither the user nor the cell phone provider needs to know about Triggerfish for it to be used successfully.[2] A court order is required, but the device circumvents provisions of CALEA barring the use of pen register or trap-and-trace devices.[3]
The device is similar to but distinct from an IMSI catcher.[4]
On March 28, 2013, the Washington Post reported that federal investigators "routinely" use the systems to track criminal suspects, but sometimes fail to explain the technology sufficiently to the magistrate judges from whom they seek search warrants.[5]
|
https://en.wikipedia.org/wiki/Triggerfish_(surveillance)
|
In mathematics, more specifically in functional analysis, a Banach space (/ˈbɑː.nʌx/, Polish pronunciation: [ˈba.nax]) is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly.[1] Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".[2] Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
ABanach spaceis acompletenormed space(X,‖⋅‖).{\displaystyle (X,\|{\cdot }\|).}A normed space is a pair[note 1](X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}consisting of avector spaceX{\displaystyle X}over a scalar fieldK{\displaystyle \mathbb {K} }(whereK{\displaystyle \mathbb {K} }is commonlyR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) together with a distinguished[note 2]norm‖⋅‖:X→R.{\displaystyle \|{\cdot }\|:X\to \mathbb {R} .}Like all norms, this norm induces atranslation invariant[note 3]distance function, called thecanonicalor(norm) induced metric, defined for all vectorsx,y∈X{\displaystyle x,y\in X}by[note 4]d(x,y):=‖y−x‖=‖x−y‖.{\displaystyle d(x,y):=\|y-x\|=\|x-y\|.}This makesX{\displaystyle X}into ametric space(X,d).{\displaystyle (X,d).}A sequencex1,x2,…{\displaystyle x_{1},x_{2},\ldots }is calledCauchy in(X,d){\displaystyle (X,d)}ord{\displaystyle d}-Cauchyor‖⋅‖{\displaystyle \|{\cdot }\|}-Cauchyif for every realr>0,{\displaystyle r>0,}there exists some indexN{\displaystyle N}such thatd(xn,xm)=‖xn−xm‖<r{\displaystyle d(x_{n},x_{m})=\|x_{n}-x_{m}\|<r}wheneverm{\displaystyle m}andn{\displaystyle n}are greater thanN.{\displaystyle N.}The normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is called aBanach spaceand the canonical metricd{\displaystyle d}is called acomplete metricif(X,d){\displaystyle (X,d)}is acomplete metric space, which by definition means for everyCauchy sequencex1,x2,…{\displaystyle x_{1},x_{2},\ldots }in(X,d),{\displaystyle (X,d),}there exists somex∈X{\displaystyle x\in X}such thatlimn→∞xn=xin(X,d),{\displaystyle \lim _{n\to \infty }x_{n}=x\;{\text{ in }}(X,d),}where because‖xn−x‖=d(xn,x),{\displaystyle \|x_{n}-x\|=d(x_{n},x),}this sequence's convergence tox{\displaystyle x}can equivalently be expressed aslimn→∞‖xn−x‖=0inR.{\displaystyle \lim _{n\to \infty }\|x_{n}-x\|=0\;{\text{ in }}\mathbb {R} .}
The norm‖⋅‖{\displaystyle \|{\cdot }\|}of a normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is called acomplete normif(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space.
For any normed space(X,‖⋅‖),{\displaystyle (X,\|{\cdot }\|),}there exists anL-semi-inner product⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }onX{\displaystyle X}such that‖x‖=⟨x,x⟩{\textstyle \|x\|={\sqrt {\langle x,x\rangle }}}for allx∈X.{\displaystyle x\in X.}[3]In general, there may be infinitely many L-semi-inner products that satisfy this condition and the proof of the existence of L-semi-inner products relies on the non-constructiveHahn–Banach theorem[3]. L-semi-inner products are a generalization ofinner products, which are what fundamentally distinguishHilbert spacesfrom all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces.
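A concrete way to see how inner products single out Hilbert spaces among Banach spaces is the Jordan–von Neumann criterion, sketched below as a worked LaTeX note; it is a standard fact included here for illustration and is separate from the L-semi-inner products discussed above.

```latex
% Jordan--von Neumann criterion (standard fact): a norm on X comes from an
% inner product if and only if it satisfies the parallelogram law
\|x + y\|^{2} + \|x - y\|^{2} = 2\|x\|^{2} + 2\|y\|^{2}
\qquad \text{for all } x, y \in X,
% in which case the inner product is recovered by polarization, e.g. over the
% reals:
\langle x, y \rangle = \tfrac{1}{4}\bigl(\|x + y\|^{2} - \|x - y\|^{2}\bigr).
```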
The vector space structure allows one to relate the behavior of Cauchy sequences to that of convergingseries of vectors.
A normed spaceX{\displaystyle X}is a Banach space if and only if eachabsolutely convergentseries inX{\displaystyle X}converges to a value that lies withinX,{\displaystyle X,}[4]symbolically∑n=1∞‖vn‖<∞⟹∑n=1∞vnconverges inX.{\displaystyle \sum _{n=1}^{\infty }\|v_{n}\|<\infty \implies \sum _{n=1}^{\infty }v_{n}{\text{ converges in }}X.}
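To see that completeness is genuinely needed in this criterion, the following worked example (a standard one, added here for illustration) takes the non-complete space c00 of finitely supported sequences with the supremum norm:

```latex
% In X = c_{00} (finitely supported sequences) with the sup norm, consider
v_{n} = 2^{-n} e_{n}, \qquad
\sum_{n=1}^{\infty} \|v_{n}\|_{\infty} = \sum_{n=1}^{\infty} 2^{-n} = 1 < \infty,
% so the series is absolutely convergent, yet its partial sums converge in the
% sup norm to the sequence (2^{-1}, 2^{-2}, \ldots), which has infinitely many
% nonzero terms and therefore lies outside c_{00}.  Hence c_{00} is not a
% Banach space, in agreement with the criterion above.
```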
The canonical metricd{\displaystyle d}of a normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}induces the usualmetric topologyτd{\displaystyle \tau _{d}}onX,{\displaystyle X,}which is referred to as thecanonicalornorm inducedtopology.
Every normed space is automatically assumed to carry thisHausdorfftopology, unless indicated otherwise.
With this topology, every Banach space is aBaire space, although there exist normed spaces that are Baire but not Banach.[5]The norm‖⋅‖:X→R{\displaystyle \|{\cdot }\|:X\to \mathbb {R} }is always acontinuous functionwith respect to the topology that it induces.
The open and closed balls of radiusr>0{\displaystyle r>0}centered at a pointx∈X{\displaystyle x\in X}are, respectively, the setsBr(x):={z∈X∣‖z−x‖<r}andCr(x):={z∈X∣‖z−x‖≤r}.{\displaystyle B_{r}(x):=\{z\in X\mid \|z-x\|<r\}\qquad {\text{ and }}\qquad C_{r}(x):=\{z\in X\mid \|z-x\|\leq r\}.}Any such ball is aconvexandbounded subsetofX,{\displaystyle X,}but acompactball/neighborhoodexists if and only ifX{\displaystyle X}isfinite-dimensional.
In particular, no infinite–dimensional normed space can belocally compactor have theHeine–Borel property.
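A standard concrete instance of this failure of compactness (included here purely as an illustrative example) is the closed unit ball of ℓ2(N):

```latex
% In \ell^{2}(\mathbb{N}) the standard unit vectors e_{1}, e_{2}, \ldots all lie
% in the closed unit ball, but for n \neq m
\|e_{n} - e_{m}\|_{2} = \sqrt{2},
% so no subsequence is Cauchy and the closed unit ball is not (sequentially)
% compact, illustrating how the Heine--Borel property fails in infinite
% dimensions.
```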
Ifx0{\displaystyle x_{0}}is a vector ands≠0{\displaystyle s\neq 0}is a scalar, thenx0+sBr(x)=B|s|r(x0+sx)andx0+sCr(x)=C|s|r(x0+sx).{\displaystyle x_{0}+s\,B_{r}(x)=B_{|s|r}(x_{0}+sx)\qquad {\text{ and }}\qquad x_{0}+s\,C_{r}(x)=C_{|s|r}(x_{0}+sx).}Usings=1{\displaystyle s=1}shows that the norm-induced topology istranslation invariant, which means that for anyx∈X{\displaystyle x\in X}andS⊆X,{\displaystyle S\subseteq X,}the subsetS{\displaystyle S}isopen(respectively,closed) inX{\displaystyle X}if and only if its translationx+S:={x+s∣s∈S}{\displaystyle x+S:=\{x+s\mid s\in S\}}is open (respectively, closed).
Consequently, the norm induced topology is completely determined by anyneighbourhood basisat the origin. Some common neighborhood bases at the origin include{Br(0)∣r>0},{Cr(0)∣r>0},{Brn(0)∣n∈N},and{Crn(0)∣n∈N},{\displaystyle \{B_{r}(0)\mid r>0\},\qquad \{C_{r}(0)\mid r>0\},\qquad \{B_{r_{n}}(0)\mid n\in \mathbb {N} \},\qquad {\text{ and }}\qquad \{C_{r_{n}}(0)\mid n\in \mathbb {N} \},}wherer1,r2,…{\displaystyle r_{1},r_{2},\ldots }can be any sequence of positive real numbers that converges to0{\displaystyle 0}inR{\displaystyle \mathbb {R} }(common choices arern:=1n{\displaystyle r_{n}:={\tfrac {1}{n}}}orrn:=1/2n{\displaystyle r_{n}:=1/2^{n}}).
So, for example, any open subsetU{\displaystyle U}ofX{\displaystyle X}can be written as a unionU=⋃x∈IBrx(x)=⋃x∈Ix+Brx(0)=⋃x∈Ix+rxB1(0){\displaystyle U=\bigcup _{x\in I}B_{r_{x}}(x)=\bigcup _{x\in I}x+B_{r_{x}}(0)=\bigcup _{x\in I}x+r_{x}\,B_{1}(0)}indexed by some subsetI⊆U,{\displaystyle I\subseteq U,}where eachrx{\displaystyle r_{x}}may be chosen from the aforementioned sequencer1,r2,….{\displaystyle r_{1},r_{2},\ldots .}(The open balls can also be replaced with closed balls, although the indexing setI{\displaystyle I}and radiirx{\displaystyle r_{x}}may then also need to be replaced).
Additionally,I{\displaystyle I}can always be chosen to becountableifX{\displaystyle X}is aseparable space, which by definition means thatX{\displaystyle X}contains some countabledense subset.
All finite–dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic.
Every separable infinite–dimensionalHilbert spaceis linearly isometrically isomorphic to the separable Hilbertsequence spaceℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}with its usual norm‖⋅‖2.{\displaystyle \|{\cdot }\|_{2}.}
TheAnderson–Kadec theoremstates that every infinite–dimensional separableFréchet spaceishomeomorphicto theproduct space∏i∈NR{\textstyle \prod _{i\in \mathbb {N} }\mathbb {R} }of countably many copies ofR{\displaystyle \mathbb {R} }(this homeomorphism need not be alinear map).[6][7]Thus all infinite–dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is uniqueup toa homeomorphism).
Since every Banach space is a Fréchet space, this is also true of all infinite–dimensional separable Banach spaces, includingℓ2(N).{\displaystyle \ell ^{2}(\mathbb {N} ).}In fact,ℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}is evenhomeomorphicto its ownunitsphere{x∈ℓ2(N)∣‖x‖2=1},{\displaystyle \{x\in \ell ^{2}(\mathbb {N} )\mid \|x\|_{2}=1\},}which stands in sharp contrast to finite–dimensional spaces (theEuclidean planeR2{\displaystyle \mathbb {R} ^{2}}is not homeomorphic to theunit circle, for instance).
This pattern inhomeomorphism classesextends to generalizations ofmetrizable(locally Euclidean)topological manifoldsknown asmetricBanach manifolds, which aremetric spacesthat are around every point,locally homeomorphicto some open subset of a given Banach space (metricHilbert manifoldsand metricFréchet manifoldsare defined similarly).[7]For example, every open subsetU{\displaystyle U}of a Banach spaceX{\displaystyle X}is canonically a metric Banach manifold modeled onX{\displaystyle X}since theinclusion mapU→X{\displaystyle U\to X}is anopenlocal homeomorphism.
Using Hilbert spacemicrobundles, David Henderson showed[8]in 1969 that every metric manifold modeled on a separable infinite–dimensional Banach (orFréchet) space can betopologically embeddedas anopensubsetofℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}and, consequently, also admits a uniquesmooth structuremaking it into aC∞{\displaystyle C^{\infty }}Hilbert manifold.
There is a compact subset S{\displaystyle S} of ℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )} whose convex hull co(S){\displaystyle \operatorname {co} (S)} is not closed and thus also not compact.[note 5][9] However, as in all Banach spaces, the closed convex hull co¯S{\displaystyle {\overline {\operatorname {co} }}S} of this (and every other) compact subset will be compact.[10] In a normed space that is not complete, it is in general not guaranteed that co¯S{\displaystyle {\overline {\operatorname {co} }}S} will be compact whenever S{\displaystyle S} is; an example[note 5] can even be found in a (non-complete) pre-Hilbert vector subspace of ℓ2(N).{\displaystyle \ell ^{2}(\mathbb {N} ).}
This norm-induced topology also makes(X,τd){\displaystyle (X,\tau _{d})}into what is known as atopological vector space(TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous. It is emphasized that the TVS(X,τd){\displaystyle (X,\tau _{d})}isonlya vector space together with a certain type of topology; that is to say, when considered as a TVS, it isnotassociated withanyparticular norm or metric (both of which are "forgotten"). This Hausdorff TVS(X,τd){\displaystyle (X,\tau _{d})}is evenlocally convexbecause the set of all open balls centered at the origin forms aneighbourhood basisat the origin consisting of convexbalancedopen sets. This TVS is alsonormable, which by definition refers to any TVS whose topology is induced by some (possibly unknown)norm. Normable TVSsare characterized bybeing Hausdorff and having aboundedconvexneighborhood of the origin.
All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example); this guarantees that the Banach–Steinhaus theorem holds.
Theopen mapping theoremimplies that whenτ1{\displaystyle \tau _{1}}andτ2{\displaystyle \tau _{2}}are topologies onX{\displaystyle X}that make both(X,τ1){\displaystyle (X,\tau _{1})}and(X,τ2){\displaystyle (X,\tau _{2})}intocomplete metrizable TVSes(for example, Banach orFréchet spaces), if one topology isfiner or coarserthan the other, then they must be equal (that is, ifτ1⊆τ2{\displaystyle \tau _{1}\subseteq \tau _{2}}orτ2⊆τ1{\displaystyle \tau _{2}\subseteq \tau _{1}}thenτ1=τ2{\displaystyle \tau _{1}=\tau _{2}}).[11]So, for example, if(X,p){\displaystyle (X,p)}and(X,q){\displaystyle (X,q)}are Banach spaces with topologiesτp{\displaystyle \tau _{p}}andτq,{\displaystyle \tau _{q},}and if one of these spaces has some open ball that is also an open subset of the other space (or, equivalently, if one ofp:(X,τq)→R{\displaystyle p:(X,\tau _{q})\to \mathbb {R} }orq:(X,τp)→R{\displaystyle q:(X,\tau _{p})\to \mathbb {R} }is continuous), then their topologies are identical and the normsp{\displaystyle p}andq{\displaystyle q}areequivalent.
Two norms,p{\displaystyle p}andq,{\displaystyle q,}on a vector spaceX{\displaystyle X}are said to beequivalentif they induce the same topology;[12]this happens if and only if there exist real numbersc,C>0{\displaystyle c,C>0}such thatcq(x)≤p(x)≤Cq(x){\textstyle c\,q(x)\leq p(x)\leq C\,q(x)}for allx∈X.{\displaystyle x\in X.}Ifp{\displaystyle p}andq{\displaystyle q}are two equivalent norms on a vector spaceX{\displaystyle X}then(X,p){\displaystyle (X,p)}is a Banach space if and only if(X,q){\displaystyle (X,q)}is a Banach space.
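As a small numeric illustration of norm equivalence (a sketch using NumPy; the specific vectors are arbitrary), the ℓ1, ℓ2, and ℓ∞ norms on Rⁿ satisfy ‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1 ≤ n‖x‖∞, so they all induce the same topology and the same notion of completeness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
for _ in range(3):
    x = rng.normal(size=n)               # an arbitrary test vector in R^n
    l1 = np.linalg.norm(x, 1)
    l2 = np.linalg.norm(x, 2)
    linf = np.linalg.norm(x, np.inf)
    # The chain of inequalities below witnesses the equivalence constants
    # c = 1 and C = n between the sup norm and the 1-norm.
    assert linf <= l2 <= l1 <= n * linf
    print(f"||x||_inf={linf:.3f}  ||x||_2={l2:.3f}  ||x||_1={l1:.3f}")
```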
See this footnote for an example of a continuous norm on a Banach space that isnotequivalent to that Banach space's given norm.[note 6][12]All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space.[13]
A metricD{\displaystyle D}on a vector spaceX{\displaystyle X}is induced by a norm onX{\displaystyle X}if and only ifD{\displaystyle D}istranslation invariant[note 3]andabsolutely homogeneous, which means thatD(sx,sy)=|s|D(x,y){\displaystyle D(sx,sy)=|s|D(x,y)}for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function‖x‖:=D(x,0){\displaystyle \|x\|:=D(x,0)}defines a norm onX{\displaystyle X}and the canonical metric induced by‖⋅‖{\displaystyle \|{\cdot }\|}is equal toD.{\displaystyle D.}
Suppose that(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a normed space and thatτ{\displaystyle \tau }is the norm topology induced onX.{\displaystyle X.}Suppose thatD{\displaystyle D}isanymetriconX{\displaystyle X}such that the topology thatD{\displaystyle D}induces onX{\displaystyle X}is equal toτ.{\displaystyle \tau .}IfD{\displaystyle D}istranslation invariant[note 3]then(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space if and only if(X,D){\displaystyle (X,D)}is a complete metric space.[14]IfD{\displaystyle D}isnottranslation invariant, then it may be possible for(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}to be a Banach space but for(X,D){\displaystyle (X,D)}tonotbe a complete metric space[15](see this footnote[note 7]for an example). In contrast, a theorem of Klee,[16][17][note 8]which also applies to allmetrizable topological vector spaces, implies that if there existsany[note 9]complete metricD{\displaystyle D}onX{\displaystyle X}that induces the norm topologyτ{\displaystyle \tau }onX,{\displaystyle X,}then(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space.
AFréchet spaceis alocally convex topological vector spacewhose topology is induced by some translation-invariant complete metric.
Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as thespace of real sequencesRN=∏i∈NR{\textstyle \mathbb {R} ^{\mathbb {N} }=\prod _{i\in \mathbb {N} }\mathbb {R} }with theproduct topology).
However, the topology of every Fréchet space is induced by somecountablefamily of real-valued (necessarily continuous) maps calledseminorms, which are generalizations ofnorms.
It is even possible for a Fréchet space to have a topology that is induced by a countable family ofnorms(such norms would necessarily be continuous)[note 10][18]but to not be a Banach/normable spacebecause its topology can not be defined by anysinglenorm.
An example of such a space is theFréchet spaceC∞(K),{\displaystyle C^{\infty }(K),}whose definition can be found in the article onspaces of test functions and distributions.
There is another notion of completeness besides metric completeness and that is the notion of acomplete topological vector space(TVS) or TVS-completeness, which uses the theory ofuniform spaces.
Specifically, the notion of TVS-completeness uses a unique translation-invariantuniformity, called thecanonical uniformity, that dependsonlyon vector subtraction and the topologyτ{\displaystyle \tau }that the vector space is endowed with, and so in particular, this notion of TVS completeness is independent of whatever norm induced the topologyτ{\displaystyle \tau }(and even applies to TVSs that arenoteven metrizable).
Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space.
If(X,τ){\displaystyle (X,\tau )}is ametrizable topological vector space(such as any norm induced topology, for example), then(X,τ){\displaystyle (X,\tau )}is a complete TVS if and only if it is asequentiallycomplete TVS, meaning that it is enough to check that every Cauchysequencein(X,τ){\displaystyle (X,\tau )}converges in(X,τ){\displaystyle (X,\tau )}to some point ofX{\displaystyle X}(that is, there is no need to consider the more general notion of arbitrary Cauchynets).
If(X,τ){\displaystyle (X,\tau )}is a topological vector space whose topology is induced bysome(possibly unknown) norm (such spaces are callednormable), then(X,τ){\displaystyle (X,\tau )}is a complete topological vector space if and only ifX{\displaystyle X}may be assigned anorm‖⋅‖{\displaystyle \|{\cdot }\|}that induces onX{\displaystyle X}the topologyτ{\displaystyle \tau }and also makes(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}into a Banach space.
AHausdorfflocally convex topological vector spaceX{\displaystyle X}isnormableif and only if itsstrong dual spaceXb′{\displaystyle X'_{b}}is normable,[19]in which caseXb′{\displaystyle X'_{b}}is a Banach space (Xb′{\displaystyle X'_{b}}denotes thestrong dual spaceofX,{\displaystyle X,}whose topology is a generalization of thedual norm-induced topology on thecontinuous dual spaceX′{\displaystyle X'}; see this footnote[note 11]for more details).
IfX{\displaystyle X}is ametrizablelocally convex TVS, thenX{\displaystyle X}is normable if and only ifXb′{\displaystyle X'_{b}}is aFréchet–Urysohn space.[20]This shows that in the category oflocally convex TVSs, Banach spaces are exactly those complete spaces that are bothmetrizableand have metrizablestrong dual spaces.
Every normed space can beisometricallyembedded onto a dense vector subspace of a Banach space, where this Banach space is called acompletionof the normed space. This Hausdorff completion is unique up toisometricisomorphism.
More precisely, for every normed spaceX,{\displaystyle X,}there exists a Banach spaceY{\displaystyle Y}and a mappingT:X→Y{\displaystyle T:X\to Y}such thatT{\displaystyle T}is anisometric mappingandT(X){\displaystyle T(X)}is dense inY.{\displaystyle Y.}IfZ{\displaystyle Z}is another Banach space such that there is an isometric isomorphism fromX{\displaystyle X}onto a dense subset ofZ,{\displaystyle Z,}thenZ{\displaystyle Z}is isometrically isomorphic toY.{\displaystyle Y.}The Banach spaceY{\displaystyle Y}is the Hausdorffcompletionof the normed spaceX.{\displaystyle X.}The underlying metric space forY{\displaystyle Y}is the same as the metric completion ofX,{\displaystyle X,}with the vector space operations extended fromX{\displaystyle X}toY.{\displaystyle Y.}The completion ofX{\displaystyle X}is sometimes denoted byX^.{\displaystyle {\widehat {X}}.}
IfX{\displaystyle X}andY{\displaystyle Y}are normed spaces over the sameground fieldK,{\displaystyle \mathbb {K} ,}the set of allcontinuousK{\displaystyle \mathbb {K} }-linear mapsT:X→Y{\displaystyle T:X\to Y}is denoted byB(X,Y).{\displaystyle B(X,Y).}In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed spaceX{\displaystyle X}to another normed space is continuous if and only if it isboundedon the closedunit ballofX.{\displaystyle X.}Thus, the vector spaceB(X,Y){\displaystyle B(X,Y)}can be given theoperator norm‖T‖=sup{‖Tx‖Y∣x∈X,‖x‖X≤1}.{\displaystyle \|T\|=\sup\{\|Tx\|_{Y}\mid x\in X,\ \|x\|_{X}\leq 1\}.}
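For linear maps between finite-dimensional normed spaces, the supremum in the definition of the operator norm can be evaluated directly; the sketch below (using NumPy, with an arbitrary example matrix) does this for the Euclidean norms, where the operator norm coincides with the largest singular value:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])              # arbitrary example operator R^2 -> R^2

# Operator norm of A : (R^2, ||.||_2) -> (R^2, ||.||_2) equals the largest
# singular value of A.
op_norm = np.linalg.norm(A, 2)

# Sanity check against the definition sup{||Ax|| : ||x|| <= 1} by sampling
# many unit vectors (a crude lower bound that approaches the true value).
angles = np.linspace(0.0, 2 * np.pi, 10_000)
unit_vectors = np.stack([np.cos(angles), np.sin(angles)])
sampled = np.linalg.norm(A @ unit_vectors, axis=0).max()

print(op_norm, sampled)                  # sampled <= op_norm, nearly equal
```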
ForY{\displaystyle Y}a Banach space, the spaceB(X,Y){\displaystyle B(X,Y)}is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict thefunction spacebetween two Banach spaces to only theshort maps; in that case the spaceB(X,Y){\displaystyle B(X,Y)}reappears as a naturalbifunctor.[21]
IfX{\displaystyle X}is a Banach space, the spaceB(X)=B(X,X){\displaystyle B(X)=B(X,X)}forms a unitalBanach algebra; the multiplication operation is given by the composition of linear maps.
IfX{\displaystyle X}andY{\displaystyle Y}are normed spaces, they areisomorphic normed spacesif there exists a linear bijectionT:X→Y{\displaystyle T:X\to Y}such thatT{\displaystyle T}and its inverseT−1{\displaystyle T^{-1}}are continuous. If one of the two spacesX{\displaystyle X}orY{\displaystyle Y}is complete (orreflexive,separable, etc.) then so is the other space. Two normed spacesX{\displaystyle X}andY{\displaystyle Y}areisometrically isomorphicif in addition,T{\displaystyle T}is anisometry, that is,‖T(x)‖=‖x‖{\displaystyle \|T(x)\|=\|x\|}for everyx{\displaystyle x}inX.{\displaystyle X.}TheBanach–Mazur distanced(X,Y){\displaystyle d(X,Y)}between two isomorphic but not isometric spacesX{\displaystyle X}andY{\displaystyle Y}gives a measure of how much the two spacesX{\displaystyle X}andY{\displaystyle Y}differ.
Everycontinuous linear operatoris abounded linear operatorand if dealing only with normed spaces then the converse is also true. That is, alinear operatorbetween two normed spaces isboundedif and only if it is acontinuous function. So in particular, because the scalar field (which isR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) is a normed space, alinear functionalon a normed space is abounded linear functionalif and only if it is acontinuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces.
Iff:X→R{\displaystyle f:X\to \mathbb {R} }is asubadditive function(such as a norm, asublinear function, or real linear functional), then[22]f{\displaystyle f}iscontinuous at the originif and only iff{\displaystyle f}isuniformly continuouson all ofX{\displaystyle X}; and if in additionf(0)=0{\displaystyle f(0)=0}thenf{\displaystyle f}is continuous if and only if itsabsolute value|f|:X→[0,∞){\displaystyle |f|:X\to [0,\infty )}is continuous, which happens if and only if{x∈X∣|f(x)|<1}{\displaystyle \{x\in X\mid |f(x)|<1\}}is an open subset ofX.{\displaystyle X.}[22][note 12]And very importantly for applying theHahn–Banach theorem, a linear functionalf{\displaystyle f}is continuous if and only if this is true of itsreal partRef{\displaystyle \operatorname {Re} f}and moreover,‖Ref‖=‖f‖{\displaystyle \|\operatorname {Re} f\|=\|f\|}andthe real partRef{\displaystyle \operatorname {Re} f}completely determinesf,{\displaystyle f,}which is why the Hahn–Banach theorem is often stated only for real linear functionals.
Also, a linear functionalf{\displaystyle f}onX{\displaystyle X}is continuous if and only if theseminorm|f|{\displaystyle |f|}is continuous, which happens if and only if there exists a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }such that|f|≤p{\displaystyle |f|\leq p}; this last statement involving the linear functionalf{\displaystyle f}and seminormp{\displaystyle p}is encountered in many versions of the Hahn–Banach theorem.
The Cartesian productX×Y{\displaystyle X\times Y}of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used,[23]such as‖(x,y)‖1=‖x‖+‖y‖,‖(x,y)‖∞=max(‖x‖,‖y‖){\displaystyle \|(x,y)\|_{1}=\|x\|+\|y\|,\qquad \|(x,y)\|_{\infty }=\max(\|x\|,\|y\|)}which correspond (respectively) to thecoproductandproductin the category of Banach spaces and short maps (discussed above).[21]For finite (co)products, these norms give rise to isomorphic normed spaces, and the productX×Y{\displaystyle X\times Y}(or the direct sumX⊕Y{\displaystyle X\oplus Y}) is complete if and only if the two factors are complete.
IfM{\displaystyle M}is aclosedlinear subspaceof a normed spaceX,{\displaystyle X,}there is a natural norm on the quotient spaceX/M,{\displaystyle X/M,}‖x+M‖=infm∈M‖x+m‖.{\displaystyle \|x+M\|=\inf \limits _{m\in M}\|x+m\|.}
The quotientX/M{\displaystyle X/M}is a Banach space whenX{\displaystyle X}is complete.[24]The quotient map fromX{\displaystyle X}ontoX/M,{\displaystyle X/M,}sendingx∈X{\displaystyle x\in X}to its classx+M,{\displaystyle x+M,}is linear, onto, and of norm1,{\displaystyle 1,}except whenM=X,{\displaystyle M=X,}in which case the quotient is the null space.
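A small worked example of the quotient norm (added here for illustration): take X = R² with the Euclidean norm and M the first coordinate axis.

```latex
% X = \mathbb{R}^{2} with the Euclidean norm, M = \{(t, 0) : t \in \mathbb{R}\}.
\|(x_{1}, x_{2}) + M\| = \inf_{t \in \mathbb{R}} \sqrt{(x_{1} + t)^{2} + x_{2}^{2}}
                       = |x_{2}|,
% so X/M is isometrically the real line, and the quotient map
% (x_{1}, x_{2}) \mapsto x_{2} has norm 1, as stated above.
```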
The closed linear subspaceM{\displaystyle M}ofX{\displaystyle X}is said to be acomplemented subspaceofX{\displaystyle X}ifM{\displaystyle M}is therangeof asurjectivebounded linearprojectionP:X→M.{\displaystyle P:X\to M.}In this case, the spaceX{\displaystyle X}is isomorphic to the direct sum ofM{\displaystyle M}andkerP,{\displaystyle \ker P,}the kernel of the projectionP.{\displaystyle P.}
Suppose thatX{\displaystyle X}andY{\displaystyle Y}are Banach spaces and thatT∈B(X,Y).{\displaystyle T\in B(X,Y).}There exists a canonical factorization ofT{\displaystyle T}as[24]T=T1∘π,T:X⟶πX/kerT⟶T1Y{\displaystyle T=T_{1}\circ \pi ,\quad T:X{\overset {\pi }{{}\longrightarrow {}}}X/\ker T{\overset {T_{1}}{{}\longrightarrow {}}}Y}where the first mapπ{\displaystyle \pi }is the quotient map, and the second mapT1{\displaystyle T_{1}}sends every classx+kerT{\displaystyle x+\ker T}in the quotient to the imageT(x){\displaystyle T(x)}inY.{\displaystyle Y.}This is well defined because all elements in the same class have the same image. The mappingT1{\displaystyle T_{1}}is a linear bijection fromX/kerT{\displaystyle X/\ker T}onto the rangeT(X),{\displaystyle T(X),}whose inverse need not be bounded.
Basic examples[25]of Banach spaces include: theLp spacesLp{\displaystyle L^{p}}and their special cases, thesequence spacesℓp{\displaystyle \ell ^{p}}that consist of scalar sequences indexed bynatural numbersN{\displaystyle \mathbb {N} }; among them, the spaceℓ1{\displaystyle \ell ^{1}}ofabsolutely summablesequences and the spaceℓ2{\displaystyle \ell ^{2}}of square summable sequences; the spacec0{\displaystyle c_{0}}of sequences tending to zero and the spaceℓ∞{\displaystyle \ell ^{\infty }}of bounded sequences; the spaceC(K){\displaystyle C(K)}of continuous scalar functions on a compact Hausdorff spaceK,{\displaystyle K,}equipped with the max norm,‖f‖C(K)=max{|f(x)|∣x∈K},f∈C(K).{\displaystyle \|f\|_{C(K)}=\max\{|f(x)|\mid x\in K\},\quad f\in C(K).}
According to theBanach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of someC(K).{\displaystyle C(K).}[26]For every separable Banach spaceX,{\displaystyle X,}there is a closed subspaceM{\displaystyle M}ofℓ1{\displaystyle \ell ^{1}}such thatX:=ℓ1/M.{\displaystyle X:=\ell ^{1}/M.}[27]
AnyHilbert spaceserves as an example of a Banach space. A Hilbert spaceH{\displaystyle H}onK=R,C{\displaystyle \mathbb {K} =\mathbb {R} ,\mathbb {C} }is complete for a norm of the form‖x‖H=⟨x,x⟩,{\displaystyle \|x\|_{H}={\sqrt {\langle x,x\rangle }},}where⟨⋅,⋅⟩:H×H→K{\displaystyle \langle \cdot ,\cdot \rangle :H\times H\to \mathbb {K} }is theinner product, linear in its first argument that satisfies the following:⟨y,x⟩=⟨x,y⟩¯,for allx,y∈H⟨x,x⟩≥0,for allx∈H⟨x,x⟩=0if and only ifx=0.{\displaystyle {\begin{aligned}\langle y,x\rangle &={\overline {\langle x,y\rangle }},\quad {\text{ for all }}x,y\in H\\\langle x,x\rangle &\geq 0,\quad {\text{ for all }}x\in H\\\langle x,x\rangle =0{\text{ if and only if }}x&=0.\end{aligned}}}
For example, the spaceL2{\displaystyle L^{2}}is a Hilbert space.
The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to Lp{\displaystyle L^{p}} spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among them.
ABanach algebrais a Banach spaceA{\displaystyle A}overK=R{\displaystyle \mathbb {K} =\mathbb {R} }orC,{\displaystyle \mathbb {C} ,}together with a structure ofalgebra overK{\displaystyle \mathbb {K} }, such that the product mapA×A∋(a,b)↦ab∈A{\displaystyle A\times A\ni (a,b)\mapsto ab\in A}is continuous. An equivalent norm onA{\displaystyle A}can be found so that‖ab‖≤‖a‖‖b‖{\displaystyle \|ab\|\leq \|a\|\|b\|}for alla,b∈A.{\displaystyle a,b\in A.}
IfX{\displaystyle X}is a normed space andK{\displaystyle \mathbb {K} }the underlyingfield(either therealsor thecomplex numbers), thecontinuous dual spaceis the space of continuous linear maps fromX{\displaystyle X}intoK,{\displaystyle \mathbb {K} ,}orcontinuous linear functionals.
The notation for the continuous dual isX′=B(X,K){\displaystyle X'=B(X,\mathbb {K} )}in this article.[28]SinceK{\displaystyle \mathbb {K} }is a Banach space (using theabsolute valueas norm), the dualX′{\displaystyle X'}is a Banach space, for every normed spaceX.{\displaystyle X.}TheDixmier–Ng theoremcharacterizes the dual spaces of Banach spaces.
The main tool for proving the existence of continuous linear functionals is theHahn–Banach theorem.
Hahn–Banach theorem—Let X{\displaystyle X} be a vector space over the field K=R,C.{\displaystyle \mathbb {K} =\mathbb {R} ,\mathbb {C} .} Let further Y⊆X{\displaystyle Y\subseteq X} be a linear subspace, p:X→R{\displaystyle p:X\to \mathbb {R} } a sublinear function, and f:Y→K{\displaystyle f:Y\to \mathbb {K} } a linear functional such that Re(f(y))≤p(y){\displaystyle \operatorname {Re} (f(y))\leq p(y)} for all y∈Y.{\displaystyle y\in Y.}
Then, there exists a linear functionalF:X→K{\displaystyle F:X\to \mathbb {K} }so thatF|Y=f,andfor allx∈X,Re(F(x))≤p(x).{\displaystyle F{\big \vert }_{Y}=f,\quad {\text{ and }}\quad {\text{ for all }}x\in X,\ \ \operatorname {Re} (F(x))\leq p(x).}
In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional.[29]An important special case is the following: for every vectorx{\displaystyle x}in a normed spaceX,{\displaystyle X,}there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(x)=‖x‖X,‖f‖X′≤1.{\displaystyle f(x)=\|x\|_{X},\quad \|f\|_{X'}\leq 1.}
Whenx{\displaystyle x}is not equal to the0{\displaystyle \mathbf {0} }vector, the functionalf{\displaystyle f}must have norm one, and is called anorming functionalforx.{\displaystyle x.}
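For a concrete instance of a norming functional (a standard example, included here for illustration), take a nonzero real sequence x in ℓ1, whose dual is ℓ∞:

```latex
% For 0 \neq x = (x_{n}) \in \ell^{1} (real scalars), define
% f = (\operatorname{sgn} x_{n})_{n} \in \ell^{\infty} = (\ell^{1})'.  Then
\|f\|_{\infty} \le 1, \qquad f(x) = \sum_{n} |x_{n}| = \|x\|_{1},
% so f is a norming functional for x.  In this space it can be written down
% explicitly, whereas in a general normed space its existence relies on the
% Hahn--Banach theorem.
```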
TheHahn–Banach separation theoremstates that two disjoint non-emptyconvex setsin a real Banach space, one of them open, can be separated by a closedaffinehyperplane.
The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane.[30]
A subsetS{\displaystyle S}in a Banach spaceX{\displaystyle X}istotalif thelinear spanofS{\displaystyle S}isdenseinX.{\displaystyle X.}The subsetS{\displaystyle S}is total inX{\displaystyle X}if and only if the only continuous linear functional that vanishes onS{\displaystyle S}is the0{\displaystyle \mathbf {0} }functional: this equivalence follows from the Hahn–Banach theorem.
IfX{\displaystyle X}is the direct sum of two closed linear subspacesM{\displaystyle M}andN,{\displaystyle N,}then the dualX′{\displaystyle X'}ofX{\displaystyle X}is isomorphic to the direct sum of the duals ofM{\displaystyle M}andN.{\displaystyle N.}[31]IfM{\displaystyle M}is a closed linear subspace inX,{\displaystyle X,}one can associate theorthogonal ofM{\displaystyle M}in the dual,M⊥={x′∈X∣x′(m)=0for allm∈M}.{\displaystyle M^{\bot }=\{x'\in X\mid x'(m)=0{\text{ for all }}m\in M\}.}
The orthogonalM⊥{\displaystyle M^{\bot }}is a closed linear subspace of the dual. The dual ofM{\displaystyle M}is isometrically isomorphic toX′/M⊥.{\displaystyle X'/M^{\bot }.}The dual ofX/M{\displaystyle X/M}is isometrically isomorphic toM⊥.{\displaystyle M^{\bot }.}[32]
The dual of a separable Banach space need not be separable, but:
Theorem[33]—LetX{\displaystyle X}be a normed space. IfX′{\displaystyle X'}isseparable, thenX{\displaystyle X}is separable.
WhenX′{\displaystyle X'}is separable, the above criterion for totality can be used for proving the existence of a countable total subset inX.{\displaystyle X.}
Theweak topologyon a Banach spaceX{\displaystyle X}is thecoarsest topologyonX{\displaystyle X}for which all elementsx′{\displaystyle x'}in the continuous dual spaceX′{\displaystyle X'}are continuous.
The norm topology is thereforefinerthan the weak topology.
It follows from the Hahn–Banach separation theorem that the weak topology isHausdorff, and that a norm-closedconvex subsetof a Banach space is also weakly closed.[34]A norm-continuous linear map between two Banach spacesX{\displaystyle X}andY{\displaystyle Y}is alsoweakly continuous, that is, continuous from the weak topology ofX{\displaystyle X}to that ofY.{\displaystyle Y.}[35]
If X{\displaystyle X} is infinite-dimensional, there exist linear maps which are not continuous. The space X∗{\displaystyle X^{*}} of all linear maps from X{\displaystyle X} to the underlying field K{\displaystyle \mathbb {K} } (this space X∗{\displaystyle X^{*}} is called the algebraic dual space, to distinguish it from X′{\displaystyle X'}) also induces a topology on X{\displaystyle X} which is finer than the weak topology and much less used in functional analysis.
On a dual spaceX′,{\displaystyle X',}there is a topology weaker than the weak topology ofX′,{\displaystyle X',}called theweak* topology.
It is the coarsest topology onX′{\displaystyle X'}for which all evaluation mapsx′∈X′↦x′(x),{\displaystyle x'\in X'\mapsto x'(x),}wherex{\displaystyle x}ranges overX,{\displaystyle X,}are continuous.
Its importance comes from theBanach–Alaoglu theorem.
Banach–Alaoglu theorem—LetX{\displaystyle X}be anormed vector space. Then theclosedunit ballB={x∈X∣‖x‖≤1}{\displaystyle B=\{x\in X\mid \|x\|\leq 1\}}of the dual space iscompactin the weak* topology.
The Banach–Alaoglu theorem can be proved usingTychonoff's theoremabout infinite products of compact Hausdorff spaces.
WhenX{\displaystyle X}is separable, the unit ballB′{\displaystyle B'}of the dual is ametrizablecompact in the weak* topology.[36]
The dual ofc0{\displaystyle c_{0}}is isometrically isomorphic toℓ1{\displaystyle \ell ^{1}}: for every bounded linear functionalf{\displaystyle f}onc0,{\displaystyle c_{0},}there is a unique elementy={yn}∈ℓ1{\displaystyle y=\{y_{n}\}\in \ell ^{1}}such thatf(x)=∑n∈Nxnyn,x={xn}∈c0,and‖f‖(c0)′=‖y‖ℓ1.{\displaystyle f(x)=\sum _{n\in \mathbb {N} }x_{n}y_{n},\qquad x=\{x_{n}\}\in c_{0},\ \ {\text{and}}\ \ \|f\|_{(c_{0})'}=\|y\|_{\ell _{1}}.}
The dual ofℓ1{\displaystyle \ell ^{1}}is isometrically isomorphic toℓ∞{\displaystyle \ell ^{\infty }}.
The dual ofLebesgue spaceLp([0,1]){\displaystyle L^{p}([0,1])}is isometrically isomorphic toLq([0,1]){\displaystyle L^{q}([0,1])}when1≤p<∞{\displaystyle 1\leq p<\infty }and1p+1q=1.{\displaystyle {\frac {1}{p}}+{\frac {1}{q}}=1.}
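The identification of the dual of Lp with Lq rests on Hölder's inequality; the following sketch (a standard computation, added for illustration) shows how each g in Lq defines a bounded functional of norm at most ‖g‖q:

```latex
% For g \in L^{q}([0,1]) define \Lambda_{g}(h) = \int_{0}^{1} g(t)\,h(t)\,dt
% for h \in L^{p}([0,1]).  Hölder's inequality gives
|\Lambda_{g}(h)| \le \int_{0}^{1} |g\,h|\,dt \le \|g\|_{q}\,\|h\|_{p},
% so \Lambda_{g} \in (L^{p})' with \|\Lambda_{g}\| \le \|g\|_{q}; for
% 1 \le p < \infty this is in fact an equality, and every continuous linear
% functional on L^{p}([0,1]) arises in this way, which is the duality stated
% above.
```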
For every vectory{\displaystyle y}in a Hilbert spaceH,{\displaystyle H,}the mappingx∈H→fy(x)=⟨x,y⟩{\displaystyle x\in H\to f_{y}(x)=\langle x,y\rangle }
defines a continuous linear functionalfy{\displaystyle f_{y}}onH.{\displaystyle H.}TheRiesz representation theoremstates that every continuous linear functional onH{\displaystyle H}is of the formfy{\displaystyle f_{y}}for a uniquely defined vectory{\displaystyle y}inH.{\displaystyle H.}The mappingy∈H→fy{\displaystyle y\in H\to f_{y}}is anantilinearisometric bijection fromH{\displaystyle H}onto its dualH′.{\displaystyle H'.}When the scalars are real, this map is an isometric isomorphism.
WhenK{\displaystyle K}is a compact Hausdorff topological space, the dualM(K){\displaystyle M(K)}ofC(K){\displaystyle C(K)}is the space ofRadon measuresin the sense of Bourbaki.[37]The subsetP(K){\displaystyle P(K)}ofM(K){\displaystyle M(K)}consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball ofM(K).{\displaystyle M(K).}Theextreme pointsofP(K){\displaystyle P(K)}are theDirac measuresonK.{\displaystyle K.}The set of Dirac measures onK,{\displaystyle K,}equipped with the w*-topology, ishomeomorphictoK.{\displaystyle K.}
Banach–Stone Theorem—IfK{\displaystyle K}andL{\displaystyle L}are compact Hausdorff spaces and ifC(K){\displaystyle C(K)}andC(L){\displaystyle C(L)}are isometrically isomorphic, then the topological spacesK{\displaystyle K}andL{\displaystyle L}arehomeomorphic.[38][39]
The result has been extended by Amir[40]and Cambern[41]to the case when the multiplicativeBanach–Mazur distancebetweenC(K){\displaystyle C(K)}andC(L){\displaystyle C(L)}is<2.{\displaystyle <2.}The theorem is no longer true when the distance is=2.{\displaystyle =2.}[42]
In the commutativeBanach algebraC(K),{\displaystyle C(K),}themaximal idealsare precisely kernels of Dirac measures onK,{\displaystyle K,}Ix=kerδx={f∈C(K)∣f(x)=0},x∈K.{\displaystyle I_{x}=\ker \delta _{x}=\{f\in C(K)\mid f(x)=0\},\quad x\in K.}
More generally, by theGelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with itscharacters—not merely as sets but as topological spaces: the former with thehull-kernel topologyand the latter with the w*-topology.
In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dualA′.{\displaystyle A'.}
Theorem—IfK{\displaystyle K}is a compact Hausdorff space, then the maximal ideal spaceΞ{\displaystyle \Xi }of the Banach algebraC(K){\displaystyle C(K)}ishomeomorphictoK.{\displaystyle K.}[38]
Not every unital commutative Banach algebra is of the formC(K){\displaystyle C(K)}for some compact Hausdorff spaceK.{\displaystyle K.}However, this statement holds if one placesC(K){\displaystyle C(K)}in the smaller category of commutativeC*-algebras.Gelfand'srepresentation theoremfor commutative C*-algebras states that every commutative unitalC*-algebraA{\displaystyle A}is isometrically isomorphic to aC(K){\displaystyle C(K)}space.[43]The Hausdorff compact spaceK{\displaystyle K}here is again the maximal ideal space, also called thespectrumofA{\displaystyle A}in the C*-algebra context.
IfX{\displaystyle X}is a normed space, the (continuous) dualX″{\displaystyle X''}of the dualX′{\displaystyle X'}is called thebidualorsecond dualofX.{\displaystyle X.}For every normed spaceX,{\displaystyle X,}there is a natural map,{FX:X→X″FX(x)(f)=f(x)for allx∈X,and for allf∈X′{\displaystyle {\begin{cases}F_{X}\colon X\to X''\\F_{X}(x)(f)=f(x)&{\text{ for all }}x\in X,{\text{ and for all }}f\in X'\end{cases}}}
This definesFX(x){\displaystyle F_{X}(x)}as a continuous linear functional onX′,{\displaystyle X',}that is, an element ofX″.{\displaystyle X''.}The mapFX:x→FX(x){\displaystyle F_{X}\colon x\to F_{X}(x)}is a linear map fromX{\displaystyle X}toX″.{\displaystyle X''.}As a consequence of the existence of anorming functionalf{\displaystyle f}for everyx∈X,{\displaystyle x\in X,}this mapFX{\displaystyle F_{X}}is isometric, thusinjective.
For example, the dual ofX=c0{\displaystyle X=c_{0}}is identified withℓ1,{\displaystyle \ell ^{1},}and the dual ofℓ1{\displaystyle \ell ^{1}}is identified withℓ∞,{\displaystyle \ell ^{\infty },}the space of bounded scalar sequences.
Under these identifications,FX{\displaystyle F_{X}}is the inclusion map fromc0{\displaystyle c_{0}}toℓ∞.{\displaystyle \ell ^{\infty }.}It is indeed isometric, but not onto.
IfFX{\displaystyle F_{X}}issurjective, then the normed spaceX{\displaystyle X}is calledreflexive(seebelow).
Being the dual of a normed space, the bidualX″{\displaystyle X''}is complete, therefore, every reflexive normed space is a Banach space.
Using the isometric embeddingFX,{\displaystyle F_{X},}it is customary to consider a normed spaceX{\displaystyle X}as a subset of its bidual.
WhenX{\displaystyle X}is a Banach space, it is viewed as a closed linear subspace ofX″.{\displaystyle X''.}IfX{\displaystyle X}is not reflexive, the unit ball ofX{\displaystyle X}is a proper subset of the unit ball ofX″.{\displaystyle X''.}TheGoldstine theoremstates that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual.
In other words, for everyx″{\displaystyle x''}in the bidual, there exists anet(xi)i∈I{\displaystyle (x_{i})_{i\in I}}inX{\displaystyle X}so thatsupi∈I‖xi‖≤‖x″‖,x″(f)=limif(xi),f∈X′.{\displaystyle \sup _{i\in I}\|x_{i}\|\leq \|x''\|,\ \ x''(f)=\lim _{i}f(x_{i}),\quad f\in X'.}
The net may be replaced by a weakly*-convergent sequence when the dualX′{\displaystyle X'}is separable.
On the other hand, elements of the bidual ofℓ1{\displaystyle \ell ^{1}}that are not inℓ1{\displaystyle \ell ^{1}}cannot be weak*-limit ofsequencesinℓ1,{\displaystyle \ell ^{1},}sinceℓ1{\displaystyle \ell ^{1}}isweakly sequentially complete.
Here are the main general results about Banach spaces that go back to the time of Banach's book (Banach (1932)) and are related to theBaire category theorem.
According to this theorem, a complete metric space (such as a Banach space, aFréchet spaceor anF-space) cannot be equal to a union of countably many closed subsets with emptyinteriors.
Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countableHamel basisis finite-dimensional.
Banach–Steinhaus Theorem—LetX{\displaystyle X}be a Banach space andY{\displaystyle Y}be anormed vector space. Suppose thatF{\displaystyle F}is a collection of continuous linear operators fromX{\displaystyle X}toY.{\displaystyle Y.}The uniform boundedness principle states that if for allx{\displaystyle x}inX{\displaystyle X}we havesupT∈F‖T(x)‖Y<∞,{\displaystyle \sup _{T\in F}\|T(x)\|_{Y}<\infty ,}thensupT∈F‖T‖Y<∞.{\displaystyle \sup _{T\in F}\|T\|_{Y}<\infty .}
The Banach–Steinhaus theorem is not limited to Banach spaces.
It can be extended for example to the case whereX{\displaystyle X}is aFréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhoodU{\displaystyle U}of0{\displaystyle \mathbf {0} }inX{\displaystyle X}such that allT{\displaystyle T}inF{\displaystyle F}are uniformly bounded onU,{\displaystyle U,}supT∈Fsupx∈U‖T(x)‖Y<∞.{\displaystyle \sup _{T\in F}\sup _{x\in U}\;\|T(x)\|_{Y}<\infty .}
The Open Mapping Theorem—LetX{\displaystyle X}andY{\displaystyle Y}be Banach spaces andT:X→Y{\displaystyle T:X\to Y}be a surjective continuous linear operator, thenT{\displaystyle T}is an open map.
Corollary—Every one-to-one bounded linear operator from a Banach space onto a Banach space is an isomorphism.
The First Isomorphism Theorem for Banach spaces—Suppose thatX{\displaystyle X}andY{\displaystyle Y}are Banach spaces and thatT∈B(X,Y).{\displaystyle T\in B(X,Y).}Suppose further that the range ofT{\displaystyle T}is closed inY.{\displaystyle Y.}ThenX/kerT{\displaystyle X/\ker T}is isomorphic toT(X).{\displaystyle T(X).}
This result is a direct consequence of the precedingBanach isomorphism theoremand of the canonical factorization of bounded linear maps.
Corollary—If a Banach spaceX{\displaystyle X}is the internal direct sum of closed subspacesM1,…,Mn,{\displaystyle M_{1},\ldots ,M_{n},}thenX{\displaystyle X}is isomorphic toM1⊕⋯⊕Mn.{\displaystyle M_{1}\oplus \cdots \oplus M_{n}.}
This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection fromM1⊕⋯⊕Mn{\displaystyle M_{1}\oplus \cdots \oplus M_{n}}ontoX{\displaystyle X}sendingm1,⋯,mn{\displaystyle m_{1},\cdots ,m_{n}}to the summ1+⋯+mn.{\displaystyle m_{1}+\cdots +m_{n}.}
The Closed Graph Theorem—LetT:X→Y{\displaystyle T:X\to Y}be a linear mapping between Banach spaces. The graph ofT{\displaystyle T}is closed inX×Y{\displaystyle X\times Y}if and only ifT{\displaystyle T}is continuous.
The normed spaceX{\displaystyle X}is calledreflexivewhen the natural map{FX:X→X″FX(x)(f)=f(x)for allx∈X,and for allf∈X′{\displaystyle {\begin{cases}F_{X}:X\to X''\\F_{X}(x)(f)=f(x)&{\text{ for all }}x\in X,{\text{ and for all }}f\in X'\end{cases}}}is surjective. Reflexive normed spaces are Banach spaces.
Theorem—IfX{\displaystyle X}is a reflexive Banach space, every closed subspace ofX{\displaystyle X}and every quotient space ofX{\displaystyle X}are reflexive.
This is a consequence of the Hahn–Banach theorem.
Further, by the open mapping theorem, if there is a bounded linear operator from the Banach spaceX{\displaystyle X}onto the Banach spaceY,{\displaystyle Y,}thenY{\displaystyle Y}is reflexive.
Theorem—IfX{\displaystyle X}is a Banach space, thenX{\displaystyle X}is reflexive if and only ifX′{\displaystyle X'}is reflexive.
Corollary—LetX{\displaystyle X}be a reflexive Banach space. ThenX{\displaystyle X}isseparableif and only ifX′{\displaystyle X'}is separable.
Indeed, if the dualY′{\displaystyle Y'}of a Banach spaceY{\displaystyle Y}is separable, thenY{\displaystyle Y}is separable.
IfX{\displaystyle X}is reflexive and separable, then the dual ofX′{\displaystyle X'}is separable, soX′{\displaystyle X'}is separable.
Theorem—Suppose thatX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}are normed spaces and thatX=X1⊕⋯⊕Xn.{\displaystyle X=X_{1}\oplus \cdots \oplus X_{n}.}ThenX{\displaystyle X}is reflexive if and only if eachXj{\displaystyle X_{j}}is reflexive.
Hilbert spaces are reflexive. TheLp{\displaystyle L^{p}}spaces are reflexive when1<p<∞.{\displaystyle 1<p<\infty .}More generally,uniformly convex spacesare reflexive, by theMilman–Pettis theorem.
The spacesc0,ℓ1,L1([0,1]),C([0,1]){\displaystyle c_{0},\ell ^{1},L^{1}([0,1]),C([0,1])}are not reflexive.
In these examples of non-reflexive spacesX,{\displaystyle X,}the bidualX″{\displaystyle X''}is "much larger" thanX.{\displaystyle X.}Namely, under the natural isometric embedding ofX{\displaystyle X}intoX″{\displaystyle X''}given by the Hahn–Banach theorem, the quotientX″/X{\displaystyle X''/X}is infinite-dimensional, and even nonseparable.
However, Robert C. James has constructed an example[44]of a non-reflexive space, usually called "the James space" and denoted byJ,{\displaystyle J,}[45]such that the quotientJ″/J{\displaystyle J''/J}is one-dimensional.
Furthermore, this spaceJ{\displaystyle J}is isometrically isomorphic to its bidual.
Theorem—A Banach spaceX{\displaystyle X}is reflexive if and only if its unit ball iscompactin theweak topology.
WhenX{\displaystyle X}is reflexive, it follows that all closed and boundedconvex subsetsofX{\displaystyle X}are weakly compact.
In a Hilbert spaceH,{\displaystyle H,}the weak compactness of the unit ball is very often used in the following way: every bounded sequence inH{\displaystyle H}has weakly convergent subsequences.
Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certainoptimization problems.
For example, everyconvexcontinuous function on the unit ballB{\displaystyle B}of a reflexive space attains its minimum at some point inB.{\displaystyle B.}
As a special case of the preceding result, whenX{\displaystyle X}is a reflexive space overR,{\displaystyle \mathbb {R} ,}every continuous linear functionalf{\displaystyle f}inX′{\displaystyle X'}attains its maximum‖f‖{\displaystyle \|f\|}on the unit ball ofX.{\displaystyle X.}The followingtheorem of Robert C. Jamesprovides a converse statement.
James' Theorem—For a Banach space X{\displaystyle X} the following two properties are equivalent: X{\displaystyle X} is reflexive; and every continuous linear functional f∈X′{\displaystyle f\in X'} attains its norm on the closed unit ball of X{\displaystyle X} (that is, there exists x{\displaystyle x} with ‖x‖≤1{\displaystyle \|x\|\leq 1} and f(x)=‖f‖{\displaystyle f(x)=\|f\|}).
The theorem can be extended to give a characterization of weakly compact convex sets.
On every non-reflexive Banach spaceX,{\displaystyle X,}there exist continuous linear functionals that are notnorm-attaining.
However, theBishop–Phelpstheorem[46]states that norm-attaining functionals are norm dense in the dualX′{\displaystyle X'}ofX.{\displaystyle X.}
A sequence{xn}{\displaystyle \{x_{n}\}}in a Banach spaceX{\displaystyle X}isweakly convergentto a vectorx∈X{\displaystyle x\in X}if{f(xn)}{\displaystyle \{f(x_{n})\}}converges tof(x){\displaystyle f(x)}for every continuous linear functionalf{\displaystyle f}in the dualX′.{\displaystyle X'.}The sequence{xn}{\displaystyle \{x_{n}\}}is aweakly Cauchy sequenceif{f(xn)}{\displaystyle \{f(x_{n})\}}converges to a scalar limitL(f){\displaystyle L(f)}for everyf{\displaystyle f}inX′.{\displaystyle X'.}A sequence{fn}{\displaystyle \{f_{n}\}}in the dualX′{\displaystyle X'}isweakly* convergentto a functionalf∈X′{\displaystyle f\in X'}iffn(x){\displaystyle f_{n}(x)}converges tof(x){\displaystyle f(x)}for everyx{\displaystyle x}inX.{\displaystyle X.}Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of theBanach–Steinhaustheorem.
When the sequence{xn}{\displaystyle \{x_{n}\}}inX{\displaystyle X}is a weakly Cauchy sequence, the limitL{\displaystyle L}above defines a bounded linear functional on the dualX′,{\displaystyle X',}that is, an elementL{\displaystyle L}of the bidual ofX,{\displaystyle X,}andL{\displaystyle L}is the limit of{xn}{\displaystyle \{x_{n}\}}in the weak*-topology of the bidual.
The Banach spaceX{\displaystyle X}isweakly sequentially completeif every weakly Cauchy sequence is weakly convergent inX.{\displaystyle X.}It follows from the preceding discussion that reflexive spaces are weakly sequentially complete.
Theorem[47]—For every measureμ,{\displaystyle \mu ,}the spaceL1(μ){\displaystyle L^{1}(\mu )}is weakly sequentially complete.
An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the0{\displaystyle \mathbf {0} }vector.
Theunit vector basisofℓp{\displaystyle \ell ^{p}}for1<p<∞,{\displaystyle 1<p<\infty ,}or ofc0,{\displaystyle c_{0},}is another example of aweakly null sequence, that is, a sequence that converges weakly to0.{\displaystyle \mathbf {0} .}For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to0.{\displaystyle \mathbf {0} .}[48]
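The following small NumPy sketch (illustrative only; the vectors are truncated to finitely many coordinates, and the fixed vector y is an arbitrary choice) shows the two phenomena mentioned above for the unit vector basis of ℓ2: the pairings ⟨e_n, y⟩ of any fixed y tend to 0, and the convex combinations obtained by averaging the first n basis vectors converge to 0 in norm, even though each e_n has norm 1.

```python
import numpy as np

N = 1000                                   # truncation dimension for the demo
e = np.eye(N)                              # rows stand in for the unit vectors e_1..e_N
y = 1.0 / np.arange(1, N + 1)              # a fixed square-summable vector (truncated)

# Weak nullity: the pairings <e_n, y> = y_n tend to 0 as n grows.
print([float(e[n] @ y) for n in (0, 9, 99, 999)])

# Convex combinations converging in norm: averaging the first n unit vectors
# gives a convex combination of norm 1/sqrt(n) -> 0, although ||e_n|| = 1.
for n in (1, 10, 100, 1000):
    avg = e[:n].mean(axis=0)
    print(n, float(np.linalg.norm(avg)))   # prints approximately 1/sqrt(n)
```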
The unit vector basis ofℓ1{\displaystyle \ell ^{1}}is not weakly Cauchy.
Weakly Cauchy sequences inℓ1{\displaystyle \ell ^{1}}are weakly convergent, sinceL1{\displaystyle L^{1}}-spaces are weakly sequentially complete.
Actually, weakly convergent sequences inℓ1{\displaystyle \ell ^{1}}are norm convergent.[49]This means thatℓ1{\displaystyle \ell ^{1}}satisfiesSchur's property.
Weakly Cauchy sequences and theℓ1{\displaystyle \ell ^{1}}basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal.[50]
Theorem[51]—Let{xn}n∈N{\displaystyle \{x_{n}\}_{n\in \mathbb {N} }}be a bounded sequence in a Banach space. Either{xn}n∈N{\displaystyle \{x_{n}\}_{n\in \mathbb {N} }}has a weakly Cauchy subsequence, or it admits a subsequenceequivalentto the standard unit vector basis ofℓ1.{\displaystyle \ell ^{1}.}
A complement to this result is due to Odell and Rosenthal (1975).
Theorem[52]—Let X{\displaystyle X} be a separable Banach space. The following are equivalent: X{\displaystyle X} contains no closed subspace isomorphic to ℓ1{\displaystyle \ell ^{1}}; and every element of the bidual X″{\displaystyle X''} is the weak*-limit of a sequence in X.{\displaystyle X.}
By the Goldstine theorem, every element of the unit ballB″{\displaystyle B''}ofX″{\displaystyle X''}is weak*-limit of a net in the unit ball ofX.{\displaystyle X.}WhenX{\displaystyle X}does not containℓ1,{\displaystyle \ell ^{1},}every element ofB″{\displaystyle B''}is weak*-limit of asequencein the unit ball ofX.{\displaystyle X.}[53]
When the Banach spaceX{\displaystyle X}is separable, the unit ball of the dualX′,{\displaystyle X',}equipped with the weak*-topology, is a metrizable compact spaceK,{\displaystyle K,}[36]and every elementx″{\displaystyle x''}in the bidualX″{\displaystyle X''}defines a bounded function onK{\displaystyle K}:x′∈K↦x″(x′),|x″(x′)|≤‖x″‖.{\displaystyle x'\in K\mapsto x''(x'),\quad |x''(x')|\leq \|x''\|.}
This function is continuous for the compact topology ofK{\displaystyle K}if and only ifx″{\displaystyle x''}is actually inX,{\displaystyle X,}considered as subset ofX″.{\displaystyle X''.}Assume in addition for the rest of the paragraph thatX{\displaystyle X}does not containℓ1.{\displaystyle \ell ^{1}.}By the preceding result of Odell and Rosenthal, the functionx″{\displaystyle x''}is thepointwise limitonK{\displaystyle K}of a sequence{xn}⊆X{\displaystyle \{x_{n}\}\subseteq X}of continuous functions onK,{\displaystyle K,}it is therefore afirst Baire class functiononK.{\displaystyle K.}The unit ball of the bidual is a pointwise compact subset of the first Baire class onK.{\displaystyle K.}[54]
WhenX{\displaystyle X}is separable, the unit ball of the dual is weak*-compact by theBanach–Alaoglu theoremand metrizable for the weak* topology,[36]hence every bounded sequence in the dual has weakly* convergent subsequences.
This applies to separable reflexive spaces, but more is true in this case, as stated below.
The weak topology of a Banach spaceX{\displaystyle X}is metrizable if and only ifX{\displaystyle X}is finite-dimensional.[55]If the dualX′{\displaystyle X'}is separable, the weak topology of the unit ball ofX{\displaystyle X}is metrizable.
This applies in particular to separable reflexive Banach spaces.
Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences.
Eberlein–Šmulian theorem[56]—A setA{\displaystyle A}in a Banach space is relatively weakly compact if and only if every sequence{an}{\displaystyle \{a_{n}\}}inA{\displaystyle A}has a weakly convergent subsequence.
A Banach spaceX{\displaystyle X}is reflexive if and only if each bounded sequence inX{\displaystyle X}has a weakly convergent subsequence.[57]
A weakly compact subsetA{\displaystyle A}inℓ1{\displaystyle \ell ^{1}}is norm-compact. Indeed, every sequence inA{\displaystyle A}has weakly convergent subsequences by Eberlein–Šmulian, that are norm convergent by the Schur property ofℓ1.{\displaystyle \ell ^{1}.}
A way to classify Banach spaces is through the probabilistic notions of type and cotype; these two measure how far a Banach space is from a Hilbert space.
ASchauder basisin a Banach spaceX{\displaystyle X}is a sequence{en}n≥0{\displaystyle \{e_{n}\}_{n\geq 0}}of vectors inX{\displaystyle X}with the property that for every vectorx∈X,{\displaystyle x\in X,}there existuniquelydefined scalars{xn}n≥0{\displaystyle \{x_{n}\}_{n\geq 0}}depending onx,{\displaystyle x,}such thatx=∑n=0∞xnen,i.e.,x=limnPn(x),Pn(x):=∑k=0nxkek.{\displaystyle x=\sum _{n=0}^{\infty }x_{n}e_{n},\quad {\textit {i.e.,}}\quad x=\lim _{n}P_{n}(x),\ P_{n}(x):=\sum _{k=0}^{n}x_{k}e_{k}.}
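As a minimal illustration of the partial-sum projections Pn in the definition above (a sketch with NumPy, using the unit vector basis of ℓ2 truncated to finitely many coordinates; the particular coefficient sequence is an arbitrary choice):

```python
import numpy as np

N = 200
x = 1.0 / (np.arange(1, N + 1) ** 2)       # coefficients of x in the unit vector basis

def P(n, x):
    """Partial-sum projection P_n: keep the first n+1 basis coefficients."""
    out = np.zeros_like(x)
    out[: n + 1] = x[: n + 1]
    return out

for n in (0, 4, 19, 99):
    err = np.linalg.norm(x - P(n, x))      # ||x - P_n(x)||_2 shrinks as n grows
    print(n, float(err))
```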
Banach spaces with a Schauder basis are necessarilyseparable, because the countable set of finite linear combinations with rational coefficients (say) is dense.
It follows from the Banach–Steinhaus theorem that the linear mappings{Pn}{\displaystyle \{P_{n}\}}are uniformly bounded by some constantC.{\displaystyle C.}Let{en∗}{\displaystyle \{e_{n}^{*}\}}denote the coordinate functionals which assign to everyx{\displaystyle x}inX{\displaystyle X}the coordinatexn{\displaystyle x_{n}}ofx{\displaystyle x}in the above expansion.
They are calledbiorthogonal functionals. When the basis vectors have norm1,{\displaystyle 1,}the coordinate functionals{en∗}{\displaystyle \{e_{n}^{*}\}}have norm≤2C{\displaystyle {}\leq 2C}in the dual ofX.{\displaystyle X.}
Most classical separable spaces have explicit bases.
TheHaar system{hn}{\displaystyle \{h_{n}\}}is a basis forLp([0,1]){\displaystyle L^{p}([0,1])}when1≤p<∞.{\displaystyle 1\leq p<\infty .}Thetrigonometric systemis a basis inLp(T){\displaystyle L^{p}(\mathbf {T} )}when1<p<∞.{\displaystyle 1<p<\infty .}TheSchauder systemis a basis in the spaceC([0,1]).{\displaystyle C([0,1]).}[58]The question of whether the disk algebraA(D){\displaystyle A(\mathbf {D} )}has a basis[59]remained open for more than forty years, until Bočkarev showed in 1974 thatA(D){\displaystyle A(\mathbf {D} )}admits a basis constructed from theFranklin system.[60]
Since every vectorx{\displaystyle x}in a Banach spaceX{\displaystyle X}with a basis is the limit ofPn(x),{\displaystyle P_{n}(x),}withPn{\displaystyle P_{n}}of finite rank and uniformly bounded, the spaceX{\displaystyle X}satisfies thebounded approximation property.
The first example byEnfloof a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis.[61]
Robert C. James characterized reflexivity in Banach spaces with a basis: the spaceX{\displaystyle X}with a Schauder basis is reflexive if and only if the basis is bothshrinking and boundedly complete.[62]In this case, the biorthogonal functionals form a basis of the dual ofX.{\displaystyle X.}
LetX{\displaystyle X}andY{\displaystyle Y}be twoK{\displaystyle \mathbb {K} }-vector spaces. Thetensor productX⊗Y{\displaystyle X\otimes Y}ofX{\displaystyle X}andY{\displaystyle Y}is aK{\displaystyle \mathbb {K} }-vector spaceZ{\displaystyle Z}with a bilinear mappingT:X×Y→Z{\displaystyle T:X\times Y\to Z}which has the followinguniversal property:
The image underT{\displaystyle T}of a couple(x,y){\displaystyle (x,y)}inX×Y{\displaystyle X\times Y}is denoted byx⊗y,{\displaystyle x\otimes y,}and called asimple tensor.
Every elementz{\displaystyle z}inX⊗Y{\displaystyle X\otimes Y}is a finite sum of such simple tensors.
There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others theprojective cross normandinjective cross normintroduced byA. Grothendieckin 1955.[63]
In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that theprojective tensor product[64]of two Banach spacesX{\displaystyle X}andY{\displaystyle Y}is thecompletionX⊗^πY{\displaystyle X{\widehat {\otimes }}_{\pi }Y}of the algebraic tensor productX⊗Y{\displaystyle X\otimes Y}equipped with the projective tensor norm, and similarly for theinjective tensor product[65]X⊗^εY.{\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y.}Grothendieck proved in particular that[66]
C(K)⊗^εY≃C(K,Y),L1([0,1])⊗^πY≃L1([0,1],Y),{\displaystyle {\begin{aligned}C(K){\widehat {\otimes }}_{\varepsilon }Y&\simeq C(K,Y),\\L^{1}([0,1]){\widehat {\otimes }}_{\pi }Y&\simeq L^{1}([0,1],Y),\end{aligned}}}whereK{\displaystyle K}is a compact Hausdorff space,C(K,Y){\displaystyle C(K,Y)}the Banach space of continuous functions fromK{\displaystyle K}toY{\displaystyle Y}andL1([0,1],Y){\displaystyle L^{1}([0,1],Y)}the space of Bochner-measurable and integrable functions from[0,1]{\displaystyle [0,1]}toY,{\displaystyle Y,}and where the isomorphisms are isometric.
The two isomorphisms above are the respective extensions of the map sending the tensorf⊗y{\displaystyle f\otimes y}to the vector-valued functions∈K→f(s)y∈Y.{\displaystyle s\in K\to f(s)y\in Y.}
LetX{\displaystyle X}be a Banach space. The tensor productX′⊗^εX{\displaystyle X'{\widehat {\otimes }}_{\varepsilon }X}is identified isometrically with the closure inB(X){\displaystyle B(X)}of the set of finite rank operators.
WhenX{\displaystyle X}has theapproximation property, this closure coincides with the space ofcompact operatorsonX.{\displaystyle X.}
For every Banach spaceY,{\displaystyle Y,}there is a natural norm1{\displaystyle 1}linear mapY⊗^πX→Y⊗^εX{\displaystyle Y{\widehat {\otimes }}_{\pi }X\to Y{\widehat {\otimes }}_{\varepsilon }X}obtained by extending the identity map of the algebraic tensor product. Grothendieck related theapproximation problemto the question of whether this map is one-to-one whenY{\displaystyle Y}is the dual ofX.{\displaystyle X.}Precisely, for every Banach spaceX,{\displaystyle X,}the mapX′⊗^πX⟶X′⊗^εX{\displaystyle X'{\widehat {\otimes }}_{\pi }X\ \longrightarrow X'{\widehat {\otimes }}_{\varepsilon }X}is one-to-one if and only ifX{\displaystyle X}has the approximation property.[67]
Grothendieck conjectured thatX⊗^πY{\displaystyle X{\widehat {\otimes }}_{\pi }Y}andX⊗^εY{\displaystyle X{\widehat {\otimes }}_{\varepsilon }Y}must be different wheneverX{\displaystyle X}andY{\displaystyle Y}are infinite-dimensional Banach spaces.
This was disproved byGilles Pisierin 1983.[68]Pisier constructed an infinite-dimensional Banach spaceX{\displaystyle X}such thatX⊗^πX{\displaystyle X{\widehat {\otimes }}_{\pi }X}andX⊗^εX{\displaystyle X{\widehat {\otimes }}_{\varepsilon }X}are equal. Furthermore, just asEnflo'sexample, this spaceX{\displaystyle X}is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical spaceB(ℓ2){\displaystyle B(\ell ^{2})}does not have the approximation property.[69]
A necessary and sufficient condition for the norm of a Banach spaceX{\displaystyle X}to be associated to an inner product is theparallelogram identity:
Parallelogram identity—for allx,y∈X:‖x+y‖2+‖x−y‖2=2(‖x‖2+‖y‖2).{\displaystyle x,y\in X:\qquad \|x+y\|^{2}+\|x-y\|^{2}=2(\|x\|^{2}+\|y\|^{2}).}
It follows, for example, that theLebesgue spaceLp([0,1]){\displaystyle L^{p}([0,1])}is a Hilbert space only whenp=2.{\displaystyle p=2.}If this identity is satisfied, the associated inner product is given by thepolarization identity. In the case of real scalars, this gives:⟨x,y⟩=14(‖x+y‖2−‖x−y‖2).{\displaystyle \langle x,y\rangle ={\tfrac {1}{4}}(\|x+y\|^{2}-\|x-y\|^{2}).}
For complex scalars, defining theinner productso as to beC{\displaystyle \mathbb {C} }-linear inx,{\displaystyle x,}antilineariny,{\displaystyle y,}the polarization identity gives:⟨x,y⟩=14(‖x+y‖2−‖x−y‖2+i(‖x+iy‖2−‖x−iy‖2)).{\displaystyle \langle x,y\rangle ={\tfrac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}+i(\|x+iy\|^{2}-\|x-iy\|^{2})\right).}
To see that the parallelogram law is sufficient, one observes in the real case that⟨x,y⟩{\displaystyle \langle x,y\rangle }is symmetric, and in the complex case, that it satisfies theHermitian symmetryproperty and⟨ix,y⟩=i⟨x,y⟩.{\displaystyle \langle ix,y\rangle =i\langle x,y\rangle .}The parallelogram law implies that⟨x,y⟩{\displaystyle \langle x,y\rangle }is additive inx.{\displaystyle x.}It follows that it is linear over the rationals, thus linear by continuity.
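As a concrete illustration, the identities above can be checked numerically. The sketch below (assuming only NumPy, and using finite-dimensional vectors as stand-ins for elements of a normed space) verifies the parallelogram law and the real polarization identity for the Euclidean (ℓ²) norm, and shows that the ℓ¹ norm generally fails the law, consistent with L^p being a Hilbert space only for p = 2.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

def parallelogram(norm):
    # parallelogram law: ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2)
    lhs = norm(x + y) ** 2 + norm(x - y) ** 2
    rhs = 2 * (norm(x) ** 2 + norm(y) ** 2)
    return lhs, rhs

l2 = lambda v: np.linalg.norm(v, 2)
l1 = lambda v: np.linalg.norm(v, 1)

print(parallelogram(l2))   # the two numbers agree (up to rounding) for the l2 norm
print(parallelogram(l1))   # they generally differ for the l1 norm

# real polarization identity recovers the inner product from the l2 norm
inner = 0.25 * (l2(x + y) ** 2 - l2(x - y) ** 2)
print(np.isclose(inner, np.dot(x, y)))  # True
```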
Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available.
The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constantc≥1{\displaystyle c\geq 1}: Kwapień proved that ifc−2∑k=1n‖xk‖2≤Ave±‖∑k=1n±xk‖2≤c2∑k=1n‖xk‖2{\displaystyle c^{-2}\sum _{k=1}^{n}\|x_{k}\|^{2}\leq \operatorname {Ave} _{\pm }\left\|\sum _{k=1}^{n}\pm x_{k}\right\|^{2}\leq c^{2}\sum _{k=1}^{n}\|x_{k}\|^{2}}for every integern{\displaystyle n}and all families of vectors{x1,…,xn}⊆X,{\displaystyle \{x_{1},\ldots ,x_{n}\}\subseteq X,}then the Banach spaceX{\displaystyle X}is isomorphic to a Hilbert space.[70]Here,Ave±{\displaystyle \operatorname {Ave} _{\pm }}denotes the average over the2n{\displaystyle 2^{n}}possible choices of signs±1.{\displaystyle \pm 1.}In the same article, Kwapień proved that the validity of a Banach-valuedParseval's theoremfor the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces.
Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space.[71]The proof rests uponDvoretzky's theoremabout Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integern,{\displaystyle n,}any finite-dimensional normed space, with dimension sufficiently large compared ton,{\displaystyle n,}contains subspaces nearly isometric to then{\displaystyle n}-dimensional Euclidean space.
The next result gives the solution of the so-calledhomogeneous space problem. An infinite-dimensional Banach spaceX{\displaystyle X}is said to behomogeneousif it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic toℓ2{\displaystyle \ell ^{2}}is homogeneous, and Banach asked for the converse.[72]
Theorem[73]—A Banach space isomorphic to all its infinite-dimensional closed subspaces is isomorphic to a separable Hilbert space.
An infinite-dimensional Banach space ishereditarily indecomposablewhen no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces.
TheGowersdichotomy theorem[73]asserts that every infinite-dimensional Banach spaceX{\displaystyle X}contains, either a subspaceY{\displaystyle Y}withunconditional basis, or a hereditarily indecomposable subspaceZ,{\displaystyle Z,}and in particular,Z{\displaystyle Z}is not isomorphic to its closed hyperplanes.[74]IfX{\displaystyle X}is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski andTomczak–Jaegermann, for spaces with an unconditional basis,[75]thatX{\displaystyle X}is isomorphic toℓ2.{\displaystyle \ell ^{2}.}
IfT:X→Y{\displaystyle T:X\to Y}is anisometryfrom the Banach spaceX{\displaystyle X}onto the Banach spaceY{\displaystyle Y}(where bothX{\displaystyle X}andY{\displaystyle Y}are vector spaces overR{\displaystyle \mathbb {R} }), then theMazur–Ulam theoremstates thatT{\displaystyle T}must be an affine transformation.
In particular, if T(0_X) = 0_Y, that is, T maps the zero of X to the zero of Y, then T must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure.
Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces.
The Anderson–Kadec theorem (1965–66) states[76] that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Torunczyk, who proved[77] that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset.
When two compact Hausdorff spacesK1{\displaystyle K_{1}}andK2{\displaystyle K_{2}}arehomeomorphic, the Banach spacesC(K1){\displaystyle C(K_{1})}andC(K2){\displaystyle C(K_{2})}are isometric. Conversely, whenK1{\displaystyle K_{1}}is not homeomorphic toK2,{\displaystyle K_{2},}the (multiplicative) Banach–Mazur distance betweenC(K1){\displaystyle C(K_{1})}andC(K2){\displaystyle C(K_{2})}must be greater than or equal to2,{\displaystyle 2,}see above theresults by Amir and Cambern.
Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin:[78]
Theorem[79]—LetK{\displaystyle K}be an uncountable compact metric space. ThenC(K){\displaystyle C(K)}is isomorphic toC([0,1]).{\displaystyle C([0,1]).}
The situation is different forcountably infinitecompact Hausdorff spaces.
Every countably infinite compactK{\displaystyle K}is homeomorphic to some closed interval ofordinal numbers⟨1,α⟩={γ∣1≤γ≤α}{\displaystyle \langle 1,\alpha \rangle =\{\gamma \mid 1\leq \gamma \leq \alpha \}}equipped with theorder topology, whereα{\displaystyle \alpha }is a countably infinite ordinal.[80]The Banach spaceC(K){\displaystyle C(K)}is then isometric toC(⟨1,α⟩). Whenα,β{\displaystyle \alpha ,\beta }are two countably infinite ordinals, and assumingα≤β,{\displaystyle \alpha \leq \beta ,}the spacesC(⟨1,α⟩)andC(⟨1,β⟩)are isomorphic if and only ifβ<αω.[81]For example, the Banach spacesC(⟨1,ω⟩),C(⟨1,ωω⟩),C(⟨1,ωω2⟩),C(⟨1,ωω3⟩),⋯,C(⟨1,ωωω⟩),⋯{\displaystyle C(\langle 1,\omega \rangle ),\ C(\langle 1,\omega ^{\omega }\rangle ),\ C(\langle 1,\omega ^{\omega ^{2}}\rangle ),\ C(\langle 1,\omega ^{\omega ^{3}}\rangle ),\cdots ,C(\langle 1,\omega ^{\omega ^{\omega }}\rangle ),\cdots }are mutually non-isomorphic.
Several concepts of a derivative may be defined on a Banach space. See the articles on theFréchet derivativeand theGateaux derivativefor details.
The Fréchet derivative allows for an extension of the concept of atotal derivativeto Banach spaces. The Gateaux derivative allows for an extension of adirectional derivativetolocally convextopological vector spaces.
Fréchet differentiability is a stronger condition than Gateaux differentiability.
Thequasi-derivativeis another generalization of directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability.
Several important spaces in functional analysis, for instance the space of all infinitely often differentiable functionsR→R,{\displaystyle \mathbb {R} \to \mathbb {R} ,}or the space of alldistributionsonR,{\displaystyle \mathbb {R} ,}are complete but are not normed vector spaces and hence not Banach spaces.
InFréchet spacesone still has a completemetric, whileLF-spacesare completeuniformvector spaces arising as limits of Fréchet spaces.
|
https://en.wikipedia.org/wiki/Banach_space
|
Cooperative multitasking, also known asnon-preemptive multitasking, is acomputer multitaskingtechnique in which theoperating systemnever initiates acontext switchfrom a runningprocessto another process. Instead, in order to run multiple applications concurrently, processes voluntarilyyield controlperiodically or when idle or logicallyblocked. This type of multitasking is calledcooperativebecause all programs must cooperate for the scheduling scheme to work.
In this scheme, theprocess schedulerof an operating system is known as acooperative schedulerwhose role is limited to starting the processes and letting them return control back to it voluntarily.[1][2]
This is related to theasynchronous programmingapproach.
Although it is rarely used as the primary scheduling mechanism in modern operating systems, it is widely used in memory-constrained embedded systems and also in specific applications such as CICS or the JES2 subsystem. Cooperative multitasking was the primary scheduling scheme for 16-bit applications employed by Microsoft Windows before Windows 95 and Windows NT, and by the classic Mac OS. Windows 9x used non-preemptive multitasking for 16-bit legacy applications, and the PowerPC versions of Mac OS X prior to Leopard used it for classic applications.[1] NetWare, a network-oriented operating system, used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used on RISC OS systems.[3]
Cooperative multitasking is similar to async/await in languages, such as JavaScript or Python, that feature a single-threaded event loop in their runtime. It differs from general cooperative multitasking in that await can be invoked only from within an async function, which is a kind of coroutine, rather than from arbitrary code.[4][5]
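A minimal Python sketch of the idea (illustrative only, not how any particular operating system implements it): each "task" is a generator that voluntarily yields control, and a toy round-robin scheduler runs them until they finish. A task that never yields would starve the others, which is exactly the fragility discussed below.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: does a bit of work, then voluntarily yields control."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield          # the voluntary yield point; without it, this task would hog the CPU

def run(tasks):
    """A toy cooperative (round-robin) scheduler."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # resume the task until its next yield
            ready.append(current)  # it yielded, so put it back in the ready queue
        except StopIteration:
            pass                   # the task finished; drop it

run([task("A", 3), task("B", 2)])
```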
Cooperative multitasking allows much simpler implementation of applications because their execution is never unexpectedly interrupted by the process scheduler; for example, variousfunctionsinside the application do not need to bereentrant.[2]
As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that is often considered to make the entire environment unacceptably fragile,[1] though, as noted above, cooperative multitasking has been used frequently in server environments, including NetWare and CICS.
In contrast,preemptivemultitasking interrupts applications and gives control to other processes outside the application's control.
The potential for system hang can be alleviated by using awatchdog timer, often implemented in hardware; this typically invokes ahardware reset.
|
https://en.wikipedia.org/wiki/Cooperative_multitasking
|
TheBSD checksum algorithmwas a commonly used, legacychecksumalgorithm. It has been implemented in oldBSDand is also available through thesumcommand line utility.
This algorithm is useless from a security perspective, and is weaker than theCRC-32cksumfor error detection.[1][2]
The algorithm, as implemented in the GNU sum utility (GPL licensed), computes a 16-bit checksum by adding up all bytes (8-bit words) of the input data stream. To avoid many of the weaknesses of simply adding the data, the checksum accumulator is rotated circularly to the right by one bit at each step before the new byte is added.
As mentioned above, this algorithm computes a checksum by segmenting the data and adding it to an accumulator that is circular right shifted between each summation. To keep the accumulator within return value bounds, bit-masking with 1's is done.
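The original implementation in GNU sum is in C; the following Python sketch is an illustrative re-implementation of the rotate-and-add loop described above, not the GNU source itself.

```python
def bsd_checksum(data: bytes) -> int:
    """16-bit BSD checksum of a byte string (illustrative re-implementation)."""
    checksum = 0
    for byte in data:
        # circular right rotation of the 16-bit accumulator by one bit
        checksum = (checksum >> 1) | ((checksum & 1) << 15)
        # add the next byte and mask back down to 16 bits
        checksum = (checksum + byte) & 0xFFFF
    return checksum

print(bsd_checksum(b"hello world"))
```

On systems that still ship the legacy sum utility in BSD mode, running it on the same data should report the same 16-bit value, followed by a block count.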
Example:Calculating a 4-bit checksum using 4-bit sized segments (big-endian)
Iteration 1:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Iteration 2:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Iteration 3:
a) Apply circular shift to the checksum:
b) Add checksum and segment together, apply bitmask onto the obtained result:
Final checksum:1000
|
https://en.wikipedia.org/wiki/BSD_checksum
|
Indifferential topology, theHopf fibration(also known as theHopf bundleorHopf map) describes a3-sphere(ahypersphereinfour-dimensional space) in terms ofcirclesand an ordinarysphere. Discovered byHeinz Hopfin 1931, it is an influential early example of afiber bundle. Technically, Hopf found a many-to-onecontinuous function(or "map") from the3-sphere onto the2-sphere such that each distinctpointof the2-sphere is mapped from a distinctgreat circleof the3-sphere (Hopf 1931).[1]Thus the3-sphere is composed of fibers, where each fiber is a circle — one for each point of the2-sphere.
This fiber bundle structure is denoted S1 ↪ S3 → S2 (with projection map p),
meaning that the fiber spaceS1(a circle) isembeddedin the total spaceS3(the3-sphere), andp:S3→S2(Hopf's map) projectsS3onto the base spaceS2(the ordinary2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it islocallyaproduct space. However it is not atrivialfiber bundle, i.e.,S3is notgloballya product ofS2andS1although locally it is indistinguishable from it.
This has many implications: for example the existence of this bundle shows that the higherhomotopy groups of spheresare not trivial in general. It also provides a basic example of aprincipal bundle, by identifying the fiber with thecircle group.
Stereographic projectionof the Hopf fibration induces a remarkable structure onR3, in which all of 3-dimensional space, except for the z-axis, is filled with nestedtorimade of linkingVillarceau circles. Here each fiber projects to acirclein space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of theinverse imageof a circle of latitude of the2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. WhenR3is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (seeTopology and geometry). The loops arehomeomorphicto circles, although they are not geometriccircles.
There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space Cn+1 fibers naturally over the complex projective space CPn with circles as fibers, and there are also real, quaternionic,[2] and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres: S0 ↪ S1 → S1, S1 ↪ S3 → S2, S3 ↪ S7 → S4, and S7 ↪ S15 → S8.
ByAdams's theoremsuch fibrations can occur only in these dimensions.
For any natural number n, an n-dimensional sphere, or n-sphere, can be defined as the set of points in an (n+1)-dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this convention, the n-sphere, S^n, consists of the points (x_1, x_2, …, x_{n+1}) in R^{n+1} with x_1^2 + x_2^2 + ⋯ + x_{n+1}^2 = 1. For example, the 3-sphere consists of the points (x_1, x_2, x_3, x_4) in R^4 with x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1.
The Hopf fibrationp:S3→S2of the3-sphere over the2-sphere can be defined in several ways.
Identify R4 with C2 (where C denotes the complex numbers) by writing z0 = x1 + ix2 and z1 = x3 + ix4,
and identify R3 with C×R by writing (x1, x2, x3) ↦ (x1 + ix2, x3).
Thus S3 is identified with the subset of all (z0, z1) in C2 such that |z0|2 + |z1|2 = 1, and S2 is identified with the subset of all (z, x) in C×R such that |z|2 + x2 = 1. (Here, for a complex number z = x + iy, its squared absolute value is |z|2 = z z∗ = x2 + y2, where the star denotes the complex conjugate.) Then the Hopf fibration p is defined by p(z0, z1) = (2 z0 z1∗, |z0|2 − |z1|2).
The first component is a complex number, whereas the second component is real. Any point on the3-sphere must have the property that|z0|2+ |z1|2= 1. If that is so, thenp(z0,z1)lies on the unit2-sphere inC×R, as may be shown by adding the squares of the absolute values of the complex and real components ofp
Furthermore, if two points on the 3-sphere map to the same point on the 2-sphere, i.e., ifp(z0,z1) =p(w0,w1), then(w0,w1)must equal(λz0,λz1)for some complex numberλwith|λ|2= 1. The converse is also true; any two points on the3-sphere that differ by a common complex factorλmap to the same point on the2-sphere. These conclusions follow, because the complex factorλcancels with its complex conjugateλ∗in both parts ofp: in the complex2z0z1∗component and in the real component|z0|2− |z1|2.
Since the set of complex numbersλwith|λ|2= 1form the unit circle in the complex plane, it follows that for each pointminS2, theinverse imagep−1(m)is a circle, i.e.,p−1m≅S1. Thus the3-sphere is realized as adisjoint unionof these circular fibers.
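A small numeric sanity check of the map written above, p(z0, z1) = (2 z0 z1∗, |z0|² − |z1|²), as a Python sketch using NumPy: a random point of the 3-sphere is mapped to a point of the 2-sphere, and multiplying (z0, z1) by a unit complex number λ does not change the image, illustrating that each fiber is a circle.

```python
import numpy as np

def hopf(z0: complex, z1: complex):
    """Hopf map p(z0, z1) = (2 z0 conj(z1), |z0|^2 - |z1|^2), a point of C x R."""
    return 2 * z0 * np.conj(z1), abs(z0) ** 2 - abs(z1) ** 2

rng = np.random.default_rng(1)
v = rng.standard_normal(4)
v /= np.linalg.norm(v)                        # a random point of the 3-sphere
z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])

z, x = hopf(z0, z1)
print(np.isclose(abs(z) ** 2 + x ** 2, 1.0))  # True: the image lies on the 2-sphere

lam = np.exp(1j * 0.7)                        # a unit complex number
print(np.allclose(hopf(lam * z0, lam * z1), (z, x)))  # True: same image along the fiber
```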
A direct parametrization of the3-sphere employing the Hopf map is as follows.[3]
or in EuclideanR4
Whereηruns over the range from0toπ/2,ξ1runs over the range from0to2π, andξ2can take any value from0to4π. Every value ofη, except0andπ/2which specify circles, specifies a separateflat torusin the3-sphere, and one round trip (0to4π) of eitherξ1orξ2causes you to make one full circle of both limbs of the torus.
A mapping of the above parametrization to the2-sphere is as follows, with points on the circles parametrized byξ2.
A geometric interpretation of the fibration may be obtained using thecomplex projective line,CP1, which is defined to be the set of all complex one-dimensionalsubspacesofC2. Equivalently,CP1is thequotientofC2\{0}by theequivalence relationwhich identifies(z0,z1)with(λz0,λz1)for any nonzero complex numberλ. On any complex line inC2there is a circle of unit norm, and so the restriction of thequotient mapto the points of unit norm is a fibration ofS3overCP1.
CP1is diffeomorphic to a2-sphere: indeed it can be identified with theRiemann sphereC∞=C∪ {∞}, which is theone point compactificationofC(obtained by adding apoint at infinity). The formula given forpabove defines an explicit diffeomorphism between the complex projective line and the ordinary2-sphere in3-dimensional space. Alternatively, the point(z0,z1)can be mapped to the ratioz1/z0in the Riemann sphereC∞.
The Hopf fibration defines afiber bundle, with bundle projectionp. This means that it has a "local product structure", in the sense that every point of the2-sphere has someneighborhoodUwhose inverse image in the3-sphere can beidentifiedwith theproductofUand a circle:p−1(U) ≅U×S1. Such a fibration is said to belocally trivial.
For the Hopf fibration, it is enough to remove a single pointmfromS2and the corresponding circlep−1(m)fromS3; thus one can takeU=S2\{m}, and any point inS2has a neighborhood of this form.
Another geometric interpretation of the Hopf fibration can be obtained by considering rotations of the2-sphere in ordinary3-dimensional space. Therotation group SO(3)has adouble cover, thespin groupSpin(3),diffeomorphicto the3-sphere. The spin group actstransitivelyonS2by rotations. Thestabilizerof a point is isomorphic to thecircle group; its elements are angles of rotation leaving the given point unmoved, all sharing the axis connecting that point to the sphere's center. It follows easily that the3-sphere is aprincipal circle bundleover the2-sphere, and this is the Hopf fibration.
To make this more explicit, there are two approaches: the groupSpin(3)can either be identified with the groupSp(1)ofunit quaternions, or with thespecial unitary groupSU(2).
In the first approach, a vector (x1, x2, x3, x4) in R4 is interpreted as a quaternion q ∈ H by writing q = x1 + i x2 + j x3 + k x4.
The3-sphere is then identified with theversors, the quaternions of unit norm, thoseq∈Hfor which|q|2= 1, where|q|2=q q∗, which is equal tox12+x22+x32+x42forqas above.
On the other hand, a vector (y1, y2, y3) in R3 can be interpreted as a pure quaternion p = i y1 + j y2 + k y3.
Then, as is well known since Cayley (1845), the mapping p ↦ q p q∗
is a rotation inR3: indeed it is clearly anisometry, since|q p q∗|2=q p q∗q p∗q∗=q p p∗q∗= |p|2, and it is not hard to check that it preserves orientation.
In fact, this identifies the group ofversorswith the group of rotations ofR3, modulo the fact that the versorsqand−qdetermine the same rotation. As noted above, the rotations act transitively onS2, and the set of versorsqwhich fix a given right versorphave the formq=u+vp, whereuandvare real numbers withu2+v2= 1. This is a circle subgroup. For concreteness, one can takep=k, and then the Hopf fibration can be defined as the map sending a versorωtoωkω∗. All the quaternionsωq, whereqis one of the circle of versors that fixk, get mapped to the same thing (which happens to be one of the two180°rotations rotatingkto the same place asωdoes).
Another way to look at this fibration is that every versor ω moves the plane spanned by{1,k}to a new plane spanned by{ω,ωk}. Any quaternionωq, whereqis one of the circle of versors that fixk, will have the same effect. We put all these into one fibre, and the fibres can be mapped one-to-one to the2-sphere of180°rotations which is the range ofωkω*.
This approach is related to the direct construction by identifying a quaternionq=x1+ix2+jx3+kx4with the2×2matrix:
This identifies the group of versors withSU(2), and the imaginary quaternions with the skew-hermitian2×2matrices (isomorphic toC×R).
The rotation induced by a unit quaternionq=w+ix+jy+kzis given explicitly by theorthogonal matrix
Here we find an explicit real formula for the bundle projection by noting that the fixed unit vector along the z axis, (0, 0, 1), rotates to another unit vector, (2(xz + wy), 2(yz − wx), w² − x² − y² + z²),
which is a continuous function of(w,x,y,z). That is, the image ofqis the point on the2-sphere where it sends the unit vector along thezaxis. The fiber for a given point onS2consists of all those unit quaternions that send the unit vector there.
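Continuing the quaternion picture, the following Python sketch (again illustrative; the helper names qmul, conj and project are ad-hoc, not from any library) rotates the unit vector along the z axis by a random unit quaternion, confirms that the result agrees with the explicit formula above, and checks that multiplying on the right by cos θ + k sin θ leaves the image unchanged, i.e., moves along the fiber.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def conj(q):
    """Quaternion conjugate."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def project(q):
    """Image of the versor q under the bundle projection: q k q* as a vector in R^3."""
    k = np.array([0.0, 0.0, 0.0, 1.0])        # the pure quaternion k (the z axis)
    return qmul(qmul(q, k), conj(q))[1:]

rng = np.random.default_rng(2)
q = rng.standard_normal(4)
q /= np.linalg.norm(q)                         # a random versor (unit quaternion)

w, x, y, z = q
explicit = np.array([2*(x*z + w*y), 2*(y*z - w*x), w*w - x*x - y*y + z*z])
print(np.allclose(project(q), explicit))       # True: matches the formula above

theta = 0.9
fiber_move = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])  # cos(theta) + k sin(theta)
print(np.allclose(project(qmul(q, fiber_move)), project(q)))     # True: same point on S^2
```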
We can also write an explicit formula for the fiber over a point (a, b, c) in S2. Multiplication of unit quaternions produces composition of rotations, and q_θ = cos θ + k sin θ
is a rotation by2θaround thezaxis. Asθvaries, this sweeps out agreat circleofS3, our prototypical fiber. So long as the base point,(a,b,c), is not the antipode,(0, 0, −1), the quaternion
will send(0, 0, 1)to(a,b,c). Thus the fiber of(a,b,c)is given by quaternions of the formq(a,b,c)qθ, which are theS3points
Since multiplication byq(a,b,c)acts as a rotation of quaternion space, the fiber is not merely a topological circle, it is a geometric circle.
The final fiber, for(0, 0, −1), can be given by definingq(0,0,−1)to equali, producing
which completes the bundle. But note that this one-to-one mapping betweenS3andS2×S1is not continuous on this circle, reflecting the fact thatS3is not topologically equivalent toS2×S1.
Thus, a simple way of visualizing the Hopf fibration is as follows. Any point on the3-sphere is equivalent to aquaternion, which in turn is equivalent to a particular rotation of aCartesian coordinate framein three dimensions. The set of all possible quaternions produces the set of all possible rotations, which moves the tip of one unit vector of such a coordinate frame (say, thezvector) to all possible points on a unit2-sphere. However, fixing the tip of thezvector does not specify the rotation fully; a further rotation is possible about thez-axis. Thus, the3-sphere is mapped onto the2-sphere, plus a single rotation.
The rotation can be represented using theEuler anglesθ,φ, andψ. The Hopf mapping maps the rotation to the point on the 2-sphere given by θ and φ, and the associated circle is parametrized by ψ. Note that when θ = π the Euler angles φ and ψ are not well defined individually, so we do not have a one-to-one mapping (or a one-to-two mapping) between the3-torusof (θ,φ,ψ) andS3.
If the Hopf fibration is treated as a vector field in 3 dimensional space then there is a solution to the (compressible, non-viscous)Navier–Stokes equationsof fluid dynamics in which the fluid flows along the circles of the projection of the Hopf fibration in 3 dimensional space. The size of the velocities, the density and the pressure can be chosen at each point to satisfy the equations. All these quantities fall to zero going away from the centre. If a is the distance to the inner ring, the velocities, pressure and density fields are given by:
for arbitrary constantsAandB. Similar patterns of fields are found assolitonsolutions ofmagnetohydrodynamics:[4]
The Hopf construction, viewed as a fiber bundlep:S3→CP1, admits several generalizations, which are also often known as Hopf fibrations. First, one can replace the projective line by ann-dimensionalprojective space. Second, one can replace the complex numbers by any (real)division algebra, including (forn= 1) theoctonions.
A real version of the Hopf fibration is obtained by regarding the circleS1as a subset ofR2in the usual way and by
identifying antipodal points. This gives a fiber bundleS1→RP1over thereal projective linewith fiberS0= {1, −1}. Just asCP1is diffeomorphic to a sphere,RP1is diffeomorphic to a circle.
More generally, then-sphereSnfibers overreal projective spaceRPnwith fiberS0.
The Hopf construction gives circle bundlesp:S2n+1→CPnovercomplex projective space. This is actually the restriction of thetautological line bundleoverCPnto the unit sphere inCn+1.
Similarly, one can regardS4n+3as lying inHn+1(quaternionicn-space) and factor out by unit quaternion (=S3) multiplication to get thequaternionic projective spaceHPn. In particular, sinceS4=HP1, there is a bundleS7→S4with fiberS3.
A similar construction with theoctonionsyields a bundleS15→S8with fiberS7. But the sphereS31does not fiber overS16with fiberS15. One can regardS8as theoctonionic projective lineOP1. Although one can also define anoctonionic projective planeOP2, the sphereS23does not fiber overOP2with fiberS7.[5][6]
Sometimes the term "Hopf fibration" is restricted to the fibrations between spheres obtained above, which are S0 ↪ S1 → S1, S1 ↪ S3 → S2, S3 ↪ S7 → S4, and S7 ↪ S15 → S8.
As a consequence ofAdams's theorem, fiber bundles withspheresas total space, base space, and fiber can occur only in these dimensions.
Fiber bundles with similar properties, but different from the Hopf fibrations, were used byJohn Milnorto constructexotic spheres.
The Hopf fibration has many implications, some purely attractive, others deeper. For example,stereographic projectionS3→R3induces a remarkable structure inR3, which in turn illuminates the topology of the bundle (Lyons 2003). Stereographic projection preserves circles and maps the Hopf fibers to geometrically perfect circles inR3which fill space. Here there is one exception: the Hopf circle containing the projection point maps to a straight line inR3— a "circle through infinity".
The fibers over a circle of latitude onS2form atorusinS3(topologically, a torus is the product of two circles) and these project to nestedtorusesinR3which also fill space. The individual fibers map to linkingVillarceau circleson these tori, with the exception of the circle through the projection point and the one through itsopposite point: the former maps to a straight line, the latter to a unit circle perpendicular to, and centered on, this line, which may be viewed as a degenerate torus whose minor radius has shrunken to zero. Every other fiber image encircles the line as well, and so, by symmetry, each circle is linked througheverycircle, both inR3and inS3. Two such linking circles form aHopf linkinR3
Hopf proved that the Hopf map hasHopf invariant1, and therefore is notnull-homotopic. In fact it generates thehomotopy groupπ3(S2) and has infinite order.
Inquantum mechanics, the Riemann sphere is known as theBloch sphere, and the Hopf fibration describes the topological structure of a quantum mechanicaltwo-level systemorqubit. Similarly, the topology of a pair of entangled two-level systems is given by the Hopf fibration
(Mosseri & Dandoloff 2001). Moreover, the Hopf fibration is equivalent to the fiber bundle structure of theDirac monopole.[7]
Hopf fibration also found applications inrobotics, where it was used to generate uniform samples onSO(3)for theprobabilistic roadmapalgorithm in motion planning.[8]It also found application in theautomatic controlofquadrotors.[9][10]
|
https://en.wikipedia.org/wiki/Hopf_bundle
|
In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function[1][2] is an activation function defined as the non-negative part of its argument, i.e., the ramp function: f(x) = x⁺ = max(0, x) = (x + |x|)/2,
wherex{\displaystyle x}is the input to aneuron. This is analogous tohalf-wave rectificationinelectrical engineering.
ReLU is one of the most popular activation functions for artificial neural networks,[3]and finds application incomputer vision[4]andspeech recognition[5][6]usingdeep neural netsandcomputational neuroscience.[7][8]
The ReLU was first used byAlston Householderin 1941 as a mathematical abstraction of biological neural networks.[9]
Kunihiko Fukushimain 1969 used ReLU in the context of visual feature extraction in hierarchical neural networks.[10][11]30 years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria.[12][13]
Prior to 2010, most activation functions used were thelogistic sigmoid(which is inspired byprobability theory; seelogistic regression) and its more numerically efficient[14]counterpart, thehyperbolic tangent. Around 2010, the use of ReLU became common again.
Jarrett et al. (2009) noted that rectification by eitherabsoluteor ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allowsaverage poolingwithout neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs.[15]
Nair and Hinton (2010) made a theoretical argument that thesoftplusactivation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in aBoltzmann machinethat takesx{\displaystyle x}as input, and produces 1 as output with probabilityσ(x)=11+e−x{\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}}. They then considered extending its range of output by making infinitely many copies of itX1,X2,X3,…{\displaystyle X_{1},X_{2},X_{3},\dots }, that all take the same input, offset by an amount0.5,1.5,2.5,…{\displaystyle 0.5,1.5,2.5,\dots }, then their outputs are added together as∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}. They then demonstrated that∑i=1∞Xi{\displaystyle \sum _{i=1}^{\infty }X_{i}}is approximately equal toN(log(1+ex),σ(x)){\displaystyle {\mathcal {N}}(\log(1+e^{x}),\sigma (x))}, which is also approximately equal toReLU(N(x,σ(x))){\displaystyle \operatorname {ReLU} ({\mathcal {N}}(x,\sigma (x)))}, whereN{\displaystyle {\mathcal {N}}}stands for thegaussian distribution.
They also argued for another reason for using ReLU: that it allows "intensity equivariance" in image recognition. That is, multiplying input image by a constantk{\displaystyle k}multiplies the output also. In contrast, this is false for other activation functions like sigmoid or tanh. They found that ReLU activation allowed good empirical performance inrestricted Boltzmann machines.[16]
Glorot et al (2011) argued that ReLU has the following advantages over sigmoid or tanh. ReLU is more similar to biological neurons' responses in their main operating regime. ReLU avoids vanishing gradients. ReLU is cheaper to compute. ReLU creates sparse representation naturally, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performancewithoutunsupervised pre-training, especially on large, purely supervised tasks.[4]
Advantages of ReLU include:
Possible downsides can include:
Leaky ReLUallows a small, positive gradient when the unit is inactive,[6]helping to mitigate the vanishing gradient problem. This gradient is defined by a parameterα{\displaystyle \alpha }, typically set to 0.01–0.3.[17][18]
The same function can also be expressed without the piecewise notation as:
Parametric ReLU (PReLU)takes this idea further by makingα{\displaystyle \alpha }a learnable parameter along with the other network parameters.[19]
Note that for α ≤ 1, this is equivalent to f(x) = max(x, αx)
and thus has a relation to "maxout" networks.[19]
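A compact NumPy sketch of the variants discussed so far (illustrative only; α is the slope parameter described above, with the commonly cited default 0.01 for the leaky case):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: the non-negative part of x."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: a small slope alpha for negative inputs instead of a hard zero."""
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    """Parametric ReLU: alpha is learned; for alpha <= 1 this equals max(x, alpha*x)."""
    return np.maximum(x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x), leaky_relu(x), prelu(x, 0.25), sep="\n")
```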
Concatenated ReLU (CReLU)preserves positive and negative phase information:[20]
ExtendeD Exponential Linear Unit (DELU) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process for higher performance. Thanks to its unique design, it has been shown that DELU may obtain higher classification accuracy than ReLU and ELU.[21]
In these formulas,a{\displaystyle a},b{\displaystyle b}andxc{\displaystyle x_{c}}arehyperparametervalues which could be set as default constraintsa=1{\displaystyle a=1},b=2{\displaystyle b=2}andxc=1.25643{\displaystyle x_{c}=1.25643}, as done in the original work.
GELU is a smooth approximation to the rectifier: GELU(x) = x Φ(x),
whereΦ(x)=P(X⩽x){\displaystyle \Phi (x)=P(X\leqslant x)}is thecumulative distribution functionof the standardnormal distribution.
This activation function is illustrated in the figure at the start of this article. It has a "bump" to the left ofx< 0 and serves as the default activation for models such asBERT.[22]
The SiLU (sigmoid linear unit) or swish function[23] is another smooth approximation which uses the sigmoid function, first introduced in the GELU paper:[22] silu(x) = x σ(x) = x / (1 + e^(−x)).
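A short sketch of both smooth approximations, assuming NumPy and SciPy (SciPy is used only for the Gaussian CDF via the error function):

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    """GELU(x) = x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def silu(x):
    """SiLU / swish: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

x = np.linspace(-4, 4, 9)
print(gelu(x))
print(silu(x))
```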
A smooth approximation to the rectifier is the analytic function f(x) = ln(1 + e^x),
which is called thesoftplus[24][4]orSmoothReLUfunction.[25]For large negativex{\displaystyle x}it is roughlyln1{\displaystyle \ln 1}, so just above 0, while for large positivex{\displaystyle x}it is roughlyln(ex){\displaystyle \ln(e^{x})}, so just abovex{\displaystyle x}.
This function can be approximated as:
By making the change of variablesx=yln(2){\displaystyle x=y\ln(2)}, this is equivalent to
A sharpness parameterk{\displaystyle k}may be included:
The derivative of softplus is thelogistic function.
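A small NumPy sketch (illustrative; the stabilized form ln(1 + e^x) = max(x, 0) + ln(1 + e^{−|x|}) is an assumption introduced here purely to avoid overflow) that also checks numerically that the derivative of softplus is the logistic function:

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x)) = max(x, 0) + log1p(exp(-|x|))."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def sigmoid(x):
    """Logistic function, the derivative of softplus."""
    return np.where(x >= 0, 1.0 / (1.0 + np.exp(-x)), np.exp(x) / (1.0 + np.exp(x)))

x = np.array([-30.0, -1.0, 0.0, 1.0, 30.0])
print(softplus(x))   # ~0 for very negative x, ~x for very positive x, log 2 at x = 0

# finite-difference check that d/dx softplus = sigmoid
h = 1e-6
print(np.allclose((softplus(x + h) - softplus(x - h)) / (2 * h), sigmoid(x), atol=1e-5))
```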
The logisticsigmoid functionis a smooth approximation of the derivative of the rectifier, theHeaviside step function.
The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero: LSE0+(x_1, …, x_n) := LSE(0, x_1, …, x_n) = ln(1 + e^{x_1} + ⋯ + e^{x_n}).
The LogSumExp function is LSE(x_1, …, x_n) = ln(e^{x_1} + ⋯ + e^{x_n}),
and its gradient is thesoftmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
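A brief NumPy sketch of the pair (illustrative; the max-shift trick is a standard stabilization, not part of the definitions above):

```python
import numpy as np

def logsumexp(x):
    """Stable log(sum(exp(x))): shift by the maximum before exponentiating."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def softmax(x):
    """Gradient of logsumexp; a probability vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
# multivariable softplus: logsumexp with a zero prepended
print(logsumexp(np.concatenate(([0.0], x))))
print(softmax(x), softmax(x).sum())
```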
Exponential linear units try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.[26] The ELU is given by f(x) = x for x > 0 and f(x) = α(e^x − 1) for x ≤ 0.
In these formulas,α{\displaystyle \alpha }is ahyperparameterto be tuned with the constraintα≥0{\displaystyle \alpha \geq 0}.
Given the same interpretation ofα{\displaystyle \alpha }, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the formf(x)=max(−α,x){\displaystyle f(x)=\max(-\alpha ,x)}.
The mish function can also be used as a smooth approximation of the rectifier.[23]It is defined as
wheretanh(x){\displaystyle \tanh(x)}is thehyperbolic tangent, andsoftplus(x){\displaystyle \operatorname {softplus} (x)}is thesoftplusfunction.
Mish is non-monotonicandself-gated.[27]It was inspired bySwish, itself a variant ofReLU.[27]
Squareplus[28] is the function squareplus(x) = (x + √(x² + b)) / 2,
whereb≥0{\displaystyle b\geq 0}is a hyperparameter that determines the "size" of the curved region nearx=0{\displaystyle x=0}. (For example, lettingb=0{\displaystyle b=0}yields ReLU, and lettingb=4{\displaystyle b=4}yields themetallic meanfunction.)
Squareplus shares many properties with softplus: It ismonotonic, strictlypositive, approaches 0 asx→−∞{\displaystyle x\to -\infty }, approaches the identity asx→+∞{\displaystyle x\to +\infty }, and isC∞{\displaystyle C^{\infty }}smooth. However, squareplus can be computed using onlyalgebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability whenx{\displaystyle x}is large.
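Under the formula given above, a short NumPy sketch, including a check that b = 0 recovers ReLU:

```python
import numpy as np

def squareplus(x, b=4.0):
    """squareplus(x) = (x + sqrt(x**2 + b)) / 2; b = 0 recovers ReLU."""
    return 0.5 * (x + np.sqrt(x * x + b))

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(squareplus(x, b=4.0))
print(np.allclose(squareplus(x, b=0.0), np.maximum(x, 0.0)))  # True: b = 0 is ReLU
```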
|
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)
|
OpenHarmony (OHOS, OH) is a family of open-source distributed operating systems based on HarmonyOS and derived from LiteOS, whose L0–L2 branch source code was donated by Huawei to the OpenAtom Foundation. Similar to HarmonyOS, the open-source distributed operating system is designed with a layered architecture, consisting of four layers from the bottom to the top: the kernel layer, system service layer, framework layer, and application layer. It is also an extensive collection of free software, which can be used as an operating system on its own or in parts with other operating systems via Kernel Abstraction Layer subsystems.[3][4]
OpenHarmony supports various devices running a mini system, such as printers, speakers, smartwatches, and other smart devices with memory as small as 128 KB, or running a standard system with memory greater than 128 MB.[5]
The system contains the basic and some advanced capabilities of HarmonyOS, such as DSoftBus technology with a distributed device virtualization platform,[6] which is a departure from the traditional virtualised guest OS approach for connected devices.[7]
The operating system is oriented towards theInternet of things(IoT) andembedded devicesmarket with a diverse range of device support, includingsmartphones,tablets,smart TVs,smart watches,personal computersand othersmart devices.[8]
The first version of OpenHarmony was launched by the OpenAtom Foundation on September 10, 2020, after receiving a donation of the open-source code from Huawei.[9]
In December 2020, theOpenAtom Foundationand Runhe Software officially launched OpenHarmony open source project with seven units includingHuaweiand Software Institute of the Chinese Academy of Sciences.
The OpenHarmony 2.0 (Canary version) was launched in June 2021, supporting a variety of smart terminal devices.[9]
Based on its earlier version, OpenAtom Foundation launched OpenHarmony 3.0 on September 30, 2021, and brought substantial improvements over the past version to optimize the operating system, including supports for file security access (the ability to convert files into URIs and resolve URIs to open files) and support for basic capabilities of relational databases and distributed data management.[10]
A release of OpenHarmony supporting devices with up to 4 GB RAM was made available in April 2021.[11]
OpenAtom Foundation added a UniProton kernel, a hardware-basedMicrokernelreal-time operating system, into its repo as part of the Kernel subsystem of the OpenHarmony operating system as an add-on on August 10, 2022.[12]
The primary IDE for building OpenHarmony applications is DevEco Studio, used with the OpenHarmony SDK, a full development kit that includes a comprehensive set of development tools: a debugger, a testing system (DevEco Testing), a repository with software libraries for software development, an embedded device emulator, a previewer, documentation, sample code, and tutorials.
Applications for OpenHarmony are mostly built using components ofArkUI, a Declarative User Interface framework. ArkUI elements are adaptable to various custom open-source hardware and industry hardware devices and include new interface rules with automatic updates along with HarmonyOS updates.[13]
Hardware development is carried out in DevEco Studio via the DevEco Device Tool, which supports building on OpenHarmony and creating distros, with toolchains provided for operating system development, verification and certification processes for the platform, and customisation of the operating system as an open-source variant, in contrast to the original closed distro variant HarmonyOS, which primarily focuses on HarmonyOS Connect partners with Huawei.[14]
OpenHarmony Application Binary Interface (ABI) ensures compatibility across various OpenHarmony powered devices with diverse set of chipset instruction set platforms.[15]
HDC (OpenHarmony Device Connector) is a command-line tool tailored for developers working with OpenHarmony devices. The BM command tool, a component of HDC, is used to facilitate debugging by developers. After entering the HDC shell, the BM tool can be used.[16][17]
LikeHarmonyOS, OpenHarmony usesApp Packfiles suffixed with .app, also known as APP files onAppGalleryand third party distribution application stores on OpenHarmony-based and non-OpenHarmony operating systems such as Linux-basedUnity Operating Systemwhich is beneficial for interoperability and compatibility. Each App Pack has one or moreHarmonyOS Ability Packages(HAP) containing code for their abilities, resources, libraries, and aJSONfile withconfigurationinformation.[18]
While incorporating the OpenHarmony layer for running the APP files developed based on HarmonyOS APIs, the operating system utilizes the mainLinux kernelfor bigger memory devices, as well as the RTOS-basedLiteOSkernel for smaller memory-constrained devices, as well as add-ons, custom kernels in distros in the Kernel Abstract Layer (KAL) subsystem that is not kernel dependent nor instruction set dependent. For webview applications, it incorporatesArkWebsoftware engine as of API 11 release at system level for security enhancingChromium Embedded Frameworknweb software engine that facilitatedBlink-basedChromiumin API 5.[19]
Unlike the open-source Android operating system, where countless third-party dependency packages are repeatedly built into apps, at a disadvantage when it comes to fragmentation, the OpenHarmony central repositories, governed by a Special Interest Group at OpenAtom, provide commonly used third-party public repositories for developers in the open-source environment, which brings greater interoperability and compatibility between OpenHarmony-based operating systems. Apps do not require repeatedly built-in third-party dependencies, such as Chromium, Unity and Unreal Engine. This can greatly reduce the system ROM volume.[20]
Harmony Distributed File System (HMDFS) is a distributed file system designed for large-scale data storage and processing that is also used in openEuler. It is inspired by the Hadoop Distributed File System (HDFS). The file system is suitable for scenarios where large-scale data storage and processing are essential, such as IoT applications, edge computing, and cloud services.[21] On Orange Pi OS (OHOS), the native file system shows LOCAL and shared_disk via OpenHarmony's Distributed File System (HMDFS). The file path/root folder for the file system uses ">" instead of the traditional "/" used in Unix, Linux and Unix-like systems and "\" on Windows with its DLL (Dynamic-link library) system.
Access token manager is an essential component in OpenHarmony-based distributed operating systems, responsible for unified app permission management based on access tokens. Access tokens serve as identifiers for apps, containing information such as app ID, user ID, app privilege level (APL), and app permissions. By default, apps can access limited system resources. ATM ensures controlled access to sensitive functionalities which combines bothRBACandCBACmodels as a hybridACLmodel.[22]
OpenHarmony kernel abstract layer employs the third-partymusl libclibrary and native APIs, providing support for the Portable Operating System Interface (POSIX) forLinuxsyscalls within theLinux kernelside and LiteOS kernel that is the inherent part of the originalLiteOSdesign in POSIX API compatibility within multi-kernel Kernel Abstract Layer architecture.[23]Developers and vendors can create components and applications that work on the kernel based onPOSIXstandards.[24]
OpenHarmony NDK is a toolset that enables developers to incorporate C and C++ code into their applications. Specifically, in the case of OpenHarmony, the NDK serves as a bridge between the native world (C/C++) and the OpenHarmony ecosystem.[25]
This NAPI method is of vital importance to the open-source community of individual developers, companies and non-profit organisations, as stakeholders and manufacturers create third-party libraries for interoperability and compatibility of native open-source and commercial applications on the operating system, spanning southbound and northbound interface development of richer APIs, e.g. third-party Node.js, Simple DirectMedia Layer, the Qt framework, the LLVM compiler, FFmpeg, etc.[26][27]
OpenHarmony can be deployed on various hardware devices ofARM,RISC-Vandx86architectures withmemoryvolumes ranging from as small as 128 KB up to more than 1 MB. It supports hardware devices with three types of system as follows:[28]
To ensure OpenHarmony-based devices are compatible and interoperable in the ecosystem, the OpenAtom Foundation has set up product compatibility specifications, with a Compatibility Working Group to evaluate and certify the products that are compatible with OpenHarmony.[29][30]
The following two types of certifications were published for the partners supporting the compatibility work, with the right to use the OpenHarmony Compatibility Logo on their certified products, packaging, and marketing materials.[31]
On April 25, 2022, 44 products have obtained the compatibility certificates, and more than 80 software and hardware products are in the process of evaluation for OpenHarmony compatibility.[citation needed]
From OpenHarmony being open-sourced in September 2020 through December 2021, more than 1,200 developers and 40 organizations participated in the open-source project and contributed code. At present, OpenHarmony has reached version 4.x.
Support for rich 3D applications, withOpenGL,OpenGL ESandWebGLtechnologies.[35]
Connection security and related improvements; richer media encoding support and more refined playback control capabilities. The ArkWeb software engine featured on HarmonyOS NEXT replaces the old nweb software engine, which took advantage of the Chromium web browser and the Blink browser engine.
Core File Kit API enhancedAccess token managerwith on-deviceAIandcapability-basedfeatures on OpenHarmony Distributed File System (HMDFS) system as well as Local file system with Application files, user files and system files taking advantage of TEE kernel hardware-level features interoperable with commercial HarmonyOS NEXT system cross-file sharing and accessing interactions.[38]
NFC provides HCE card emulation capabilities.
Public Basic Class Library supportsThread Pools, "workers" within HSP and HAR modules ofHAPapps.
ArkGraphics 2D, 2D Draw API supported.
[39][40][41][42]
OpenHarmony is the most activeopen sourceproject hosted on theGiteeplatform. As of September 2023, it has over 30 open-source software distributions compatible with OpenHarmony for various sectors such as education, finance, smart home, transportation, digital government and other industries.[47][48][49]
On 14 September 2021, Huawei announced the launch of the commercial proprietary MineHarmony OS, a customized operating system by Huawei based on its in-house HarmonyOS distro built on OpenHarmony, for industrial use. MineHarmony is compatible with about 400 types of underground coal mining equipment, providing the equipment with a single interface to transmit and collect data for analysis. Wang Chenglu, President of Huawei's consumer business AI and smart full-scenario business department, indicated that the launch of MineHarmony OS signified that the HarmonyOS ecosystem had taken a step further from B2C to B2B.[50][51][52]
Midea, a Chinese electrical appliance manufacturer, officially launched Midea IoT operating system 1.0, an IoT-centric operating system based on OpenHarmony 2.0, in October 2021. Previously, the company had used the HarmonyOS operating system in partnership with Huawei for its smart devices' compatibility since the June 2, 2021 launch of HarmonyOS 2.0.[53][54][55][56]
On January 6, 2022, OpenHarmony in Space (OHIS), by the OHIS Working Group and Dalian University of Technology led by Yu Xiaozhou, was reported to be a vital player in the future from a scientific and engineering point of view, expected to open up opportunities for development in China's satellite systems and to surpass SpaceX's Starlink plan with the idea of micro-nano satellite technology.[57]
Based on OpenHarmony, SwanLinkOS was released in June 2022 by Honghu Wanlian (Jiangsu) Technology Development, a subsidiary ofiSoftStone, for the transportation industry. The operating system supports mainstream chipsets, such asRockchipRK3399 and RK3568, and can be applied in transportation and shipping equipment for monitoring road conditions, big data analysis, maritime search and rescue.[58]
It was awarded the OpenHarmony Ecological Product Compatibility Certificate by the OpenAtom Foundation.[59]
On November 7, 2022, ArcherMind Cooperation that deals with operating systems, interconnection solutions, smart innovations, and R&D aspects launched the HongZOS system that supports OpenHarmony and HiSilicon chips, solution mainly focuses on AIoT in industrial sectors.[60]
On November 28, 2022, Orange Pi launched the Orange Pi OS based on the open-source OpenHarmony version.[61]In October 2023, they released the Orange Pi 3B board with the Orange Pi OHOS version for hobbyists and developers based on the OpenHarmony 4.0 Beta1 version.[62][63][64]
On December 23, 2022, an integrated software and hardware solution, together with the self-developed hardware products of Youbo Terminal, was launched running RobanTrust OS, an OpenHarmony-based system released as version 1.0 with OpenHarmony 3.1.1 compatibility.[65]
On January 14, 2023, the Red Flag smart supercharger was launched on OpenHarmony-based KaihongOS with OpenHarmony 3.1 support, which supports the distributed soft bus that allows interconnection with other electronic devices and electrical facilities.[66] On January 17, 2023, an electronic class card with a 21.5-inch screen developed by Chinasoft and New Cape Electronics followed.[67] On November 17, 2023, Kaihong Technology and Leju Robot collaborated to release the world's first humanoid robot powered by the open-source OpenHarmony distro KaihongOS, with Rockchip SoC hardware using RTOS kernel technology for industrial robotic machines requiring predictable, deterministic response times.[citation needed]
On April 15, 2023, Tongxin Software became OpenAtom's OpenHarmony ecological partner.[citation needed] An intelligent terminal operating system for enterprises in China by Tongxin Software passed compatibility certification on June 7, 2023. The Tongxin intelligent terminal operating system supports ARM, x86, and other architectures. Tongxin has established cooperative relations with major domestic mobile chip manufacturers and has completed adaptations using the Linux kernel. Together with the desktop operating system and the server operating system, it constitutes the Tongxin operating system family.[citation needed]
PolyOS Mobile is anAIIoTopen-source operating system tailored forRISC-Vintelligent terminal devices by the PolyOS Project based on OpenHarmony, which was released on August 30, 2023, and is available forQEMUvirtualisation on Windows 10 and 11 desktop machines.[68]
LightBeeOS launched on September 28, 2023, is an OpenHarmony-based distro that supports financial level security, with distribution bus by Shenzhen Zhengtong Company used for industrial public banking solutions of systems, tested on ATM machines with UnionPay in Chinese domestic market. The operating system has been launched with OpenHarmony 3.2 support and up.[69]
On September 28, 2021, the Eclipse Foundation and the OpenAtom Foundation announced their intention to form a partnership to collaborate on a European OpenHarmony distro, part of the global OpenHarmony family of operating systems. Like OpenHarmony, it follows a one-OS-kit-for-all paradigm and enables a collection of free software, which can be used as an operating system or in parts with other operating systems via Kernel Abstraction Layer subsystems on Oniro OS distros.[71]
Oniro OS, or simply Oniro, also known as the Eclipse Oniro Core Platform, is a distributed operating system for AIoT embedded systems launched on October 26, 2021, as Oniro OS 1.0. Implemented to be compatible with HarmonyOS and based on the OpenHarmony L0–L2 branch source code, it was launched by the Eclipse Foundation for the global market, with founding members including Huawei, Linaro and Seco, among others that joined later. Oniro is designed on the basis of open source and aims to be a transparent, vendor-neutral and independent system in the era of IoT, with globalisation and localisation strategies resolving a fragmented IoT and embedded-devices market.[72][73]
The operating system originally featured a Yocto-based Linux kernel system built with the OpenEmbedded build system, BitBake and Poky, now part of the Oniro blueprints, and aimed to be platform-agnostic; it is now aligned with OpenAtom's development of OpenHarmony.[74] The goal is to grow the distribution through partners that create their own OpenHarmony/Oniro-compatible distributions, increasing interoperability and reducing fragmentation across diverse platforms and hardware, with enhancements from derived projects contributed back upstream to the OpenHarmony source branch to improve compatibility with global industrial standards. Oniro is also used for downstream development, enhancing the OpenHarmony base for global and Western markets in compatibility and interoperability with connected IoT systems, as well as third-party on-device AI features on custom frameworks such as TensorFlow and CUDA, alongside Huawei's native MindSpore solutions across the OpenHarmony ecosystem. The Oniro platform is compatible both with OpenHarmony systems in China and with Huawei's own HarmonyOS platform globally, including Western markets, in connectivity and apps.[75][76]
Oniro uses Rust in a framework alongside the Data Plane Development Kit (DPDK) IP Pipeline and profiling tools; React Native and Kanto in the application development system on top of OpenHarmony; Servo and Linaro tools in system services; Matter, an open-source, royalty-free connectivity standard that aims to unify smart home devices and increase their compatibility across platforms, and OSGi in the driver subsystem; IoTex in swappable kernel development; and Eclipse Theia as the integrated development environment for building Oniro OS apps interoperable with OpenHarmony-based operating systems. Data can be transmitted directly rather than shared via the cloud, enabling low-latency architectures with stronger security and privacy, suitable for AIoT and smart home device integration.[77][78]
In September 2023, Open Mobile Hub (OMH), led by the Linux Foundation, was formed as an open-source platform ecosystem that aims to simplify and enhance the development of mobile applications across platforms, including iOS, Android, the OpenHarmony-based global Oniro OS and HarmonyOS (NEXT), with greater cross-platform and open interoperability through OMH plugins such as Google APIs, Google Drive, OpenStreetMap alongside Bing Maps, Mapbox, Microsoft, Facebook, Dropbox, LinkedIn, X and more. The Open Mobile Hub platform aims to provide a set of tools and resources to streamline the mobile app development process.[79]
The Oniro project is focused on being a horizontal platform for application processors and microcontrollers.[80] It is an embedded OS, using the Yocto build system, with a choice of either the Linux kernel, Zephyr, or FreeRTOS.[80] It includes an IP toolchain, maintenance, OTA updates, and OpenHarmony, and it provides example combinations of components for various use cases, called "Blueprints".[80] Oniro OS 2.0 was released in 2022 and Oniro OS 3.0, based on OpenHarmony 3.2 LTS, in October 2023, with the latest 4.0 version on the main branch as of December 6, 2023.[81][82][83]
Huawei officially announced the commercial distribution of the proprietary HarmonyOS NEXT, a microkernel-based core distributed operating system for HarmonyOS, at the Huawei Developer Conference 2023 (HDC) on August 4, 2023. It supports only native APP apps via the Ark Compiler, with Huawei Mobile Services (HMS) Core support. A proprietary system built on OpenHarmony, HarmonyOS NEXT has the HarmonyOS microkernel at its core, has no APK compatibility, and is built exclusively for the Huawei device ecosystem.[85]
With its customized architecture, HarmonyOS NEXT moves beyond OpenHarmony to support a broader range of applications and device ecosystems. It integrates a dual-frame design, optimizing compatibility with EMUI userland. The system is tailored for various hardware categories, including smartphones, tablets, cars, TVs, wearables, and IoT devices, utilizing either a Linux-based kernel or the lightweight LiteOS kernel for specific applications. On the same day at HDC 2023, the developer preview version of HarmonyOS NEXT was opened for cooperating enterprise developers to build and test native mobile apps. It will be open to all developers in the first quarter of 2024 according to the official announcement.[86][87][88]
On 18 January 2024, Huawei announced that the stable rollout of HarmonyOS NEXT Galaxy would begin in Q4 2024, based on OpenHarmony 5.0 (API 12), following an OpenHarmony 4.1 (API 11)–based Developer Beta in Q2 and the public developer release of HarmonyOS NEXT Developer Preview 1, which had been available to closed cooperative developer partners since its August 2023 debut. The new HarmonyOS 5 system replaces the previous HarmonyOS 4.2 system on commercial Huawei consumer devices and can only run native HarmonyOS apps built for HarmonyOS and OpenHarmony; Oniro OS is used for downstream localisation at the global level, customised to global markets and standards while enhancing OpenHarmony development.[89]
On June 21, 2024, Huawei announced at the HDC 2024 conference and released the Developer Beta milestone of HarmonyOS NEXT, based on OpenHarmony 5.0 Beta 1, for registered public developers, with the HMS Core library embedded in the native NEXT-specific API Developer Kit alongside supported compatible OpenHarmony APIs for native OpenHarmony-based HarmonyOS apps. The company officially confirmed that the operating system is OpenHarmony-compatible, with a new boot image system.[90]
On October 22, 2024, Huawei launched HarmonyOS 5.0.0 at its launch event, upgrading the HarmonyOS NEXT developer internal and public software versions and completing the transition from the dual framework of previous mainline HarmonyOS versions to a full OpenHarmony base with a custom HarmonyOS kernel on the original L0–L2 codebase branch. This officially marked it as a commercial operating system and ecosystem independent of Android fork dependencies, with more than 15,000 native apps launched on the platform. As a result, OpenHarmony-based systems, including Oniro-based systems, are intended to be compatible with native HarmonyOS HAP apps, the NearLink wireless connectivity stack, and cross-device connectivity with the upgraded DSoftBus.[91][92]
In terms of architecture, OpenHarmony, alongside HarmonyOS, has a close relationship with the server-oriented multi-kernel operating system openEuler, the community edition of EulerOS, as the two have implemented kernel-technology sharing, as revealed by Deng Taihua, President of Huawei's Computing Product Line.[93] This sharing is reportedly to be strengthened in the future in the areas of the distributed software bus, app framework, system security, device driver framework, and a new programming language on the server side.[94]
Harmony Distributed File System (HMDFS) is a distributed file system designed for large-scale data storage and processing that is also used in openEuler server operating system.
|
https://en.wikipedia.org/wiki/OpenHarmony
|
Ametamodelis a model of a model, andmetamodelingis the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction, and development of the frames, rules, constraints, models, and theories applicable and useful formodelinga predefined class of problems. As its name implies, this concept applies the notions ofmeta-and modeling insoftware engineeringandsystems engineering. Metamodels are of many types and have diverse applications.[2]
A metamodel, or surrogate model, is a model of a model, i.e. a simplified model of an actual model of a circuit, system, or software-like entity.[3][4] A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting the properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the output and input relationships and then fitting the right metamodels to represent that behavior.
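As an illustration only (the class and attribute names below are hypothetical and do not follow any particular standard such as MOF), the three levels can be sketched in a few lines of Python: a metamodel describing what a model may contain, a model expressed in its terms, and instance data checked for conformance.

```python
# Minimal, hypothetical illustration of metamodel / model / instance levels.

# Metamodel level: defines what a model element is allowed to look like.
class MetaClass:
    def __init__(self, name, attribute_names):
        self.name = name
        self.attribute_names = set(attribute_names)

    def conforms(self, instance_data):
        """An instance conforms if it supplies exactly the declared attributes."""
        return set(instance_data) == self.attribute_names

# Model level: a concrete model expressed in terms of the metamodel.
book_model = MetaClass("Book", ["title", "author", "year"])

# Instance level: data that should conform to the model.
print(book_model.conforms({"title": "Dune", "author": "Herbert", "year": 1965}))  # True
print(book_model.conforms({"title": "Dune"}))                                     # False
```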
Common uses for metamodels are:
Because of the "meta" character of metamodeling, both thepraxisand theory of metamodels are of relevance tometascience,metaphilosophy,metatheoriesandsystemics, and meta-consciousness. The concept can be useful inmathematics, and has practical applications incomputer scienceandcomputer engineering/software engineering. The latter are the main focus of this article.
Insoftware engineering, the use ofmodelsis an alternative to more common code-based development techniques. A model always conforms to a unique metamodel. One of the currently most active branches ofModel Driven Engineeringis the approach namedmodel-driven architectureproposed byOMG. This approach is embodied in theMeta Object Facility(MOF) specification.[citation needed]
Typical metamodelling specifications proposed byOMGareUML,SysML, SPEM or CWM.ISOhas also published the standard metamodelISO/IEC 24744.[6]All the languages presented below could be defined as MOF metamodels.
Metadata modelingis a type of metamodeling used insoftware engineeringandsystems engineeringfor the analysis and construction of models applicable and useful to some predefined class of problems. (see also:data modeling).
One important move inmodel-driven engineeringis the systematic use ofmodel transformation languages. The OMG has proposed a standard for this calledQVTfor Queries/Views/Transformations.QVTis based on the meta-object facility (MOF). Among many othermodel transformation languages(MTLs), some examples of implementations of this standard are AndroMDA,VIATRA,Tefkat,MT,ManyDesigns Portofino.
Meta-models are closely related toontologies. Both are often used to describe and analyze the relations between concepts:[7]
For software engineering, severaltypesof models (and their corresponding modeling activities) can be distinguished:
A library of similar metamodels has been called a Zoo of metamodels.[11]There are several types of meta-model zoos.[12]Some are expressed in ECore. Others are written inMOF1.4 –XMI1.2. The metamodels expressed inUML-XMI1.2 may be uploaded in Poseidon for UML, aUMLCASEtool.
|
https://en.wikipedia.org/wiki/Metamodeling
|
A retronym is a newer name for an existing thing that differentiates it from something else that is newer or similar, thus avoiding confusion between the two.[1][2]
The termretronym, aneologismcomposed of thecombining formsretro-(from Latinretro,[3]"before") +-nym(from Greekónoma, "name"), was coined byFrank Mankiewiczin 1980 and popularized byWilliam SafireinThe New York Times Magazine.[4][5]
In 2000,The American Heritage Dictionary(4th edition) became the first major dictionary to include the wordretronym.[6]
The global war from 1914 to 1918 was referred to at the time as theGreat War. However, after the subsequent global war erupted in 1939, the phraseGreat Warwas gradually deprecated. The first came to be known asWorld War Iand the second asWorld War II.
The first bicycles with two wheels of equal size were called "safety bicycles" because they were easier to handle than the then-dominant style that had one large wheel and one small wheel, which then became known as an "ordinary" bicycle.[7]Since the end of the 19th century, most bicycles have been expected to have two equal-sized wheels, and the other type has been renamed "penny-farthing" or "high-wheeler" bicycle.[8]
TheAtari Video Computer Systemplatform was rebranded the "Atari 2600" (after its product code, CX-2600) in 1982 following the launch of its successor, theAtari 5200, and all hardware and software related to the platform were released under this new branding from that point on. Prior to that time, Atari often used the initialism "VCS" in official literature and other media, but colloquially the Video Computer System was often simply called "the Atari."[9]
The first film in theStar Warsfranchise released in 1977 was simply titledStar Wars. It was given the subtitle "Episode IV: A New Hope" for its 1981 theatrical re-release, shortly after the release of its sequelThe Empire Strikes Backin 1980.[10]Initially, this subtitle was limited to the opening text crawl, as all three films in theoriginal Star Wars trilogy(Star Wars,The Empire Strikes Back, andReturn of the Jedi) were still sold under their original theatrical titles on home media formats (such as VHS and Laserdisc). It was not until their 2004 DVD releases that the titles of the individual three films were changed to follow the same titling pattern as theStar Wars prequel trilogy(e.g.Star Wars Episode IV - A New Hope).
In the 1990s, when the Internet became widely popular andemailaccounts' instant delivery common, mail carried by thepostal servicecame to be called "snail mail" for its slower delivery and email sometimes just "mail."[citation needed]
Advances in technology and science are often responsible for the coinage of retronyms. For example, the termacoustic guitarwas coined with the advent of theelectric guitar,[4]analog watchwas introduced to distinguish from thedigital watch,[5]push bikewas created to distinguish from themotorized bicycle, andfeature phonewas coined to distinguish from thesmartphone. Likewise,visible lightrefers toelectromagnetic radiationon the narrowvisible spectrum, andwater icewas coined to distinguish the solid state ofwater(including exotic forms) from the solid state of othervolatilessuch as carbon dioxide and argon.
|
https://en.wikipedia.org/wiki/Retronym
|
AQUA@homewas avolunteer computingproject operated byD-Wave Systemsthat ran on theBerkeley Open Infrastructure for Network Computing(BOINC)software platform. It ceased functioning in August 2011. Its goal was to predict the performance ofsuperconductingadiabatic quantum computerson a variety of problems arising in fields ranging frommaterials sciencetomachine learning. It designed and analyzedquantum computingalgorithms, usingQuantum Monte Carlotechniques.
AQUA@home was the first BOINC project to providemulti-threadedapplications.[1]It was also the first project to deploy anOpenCLtest application under BOINC.[2]
|
https://en.wikipedia.org/wiki/AQUA@home
|
Inmathematics, afunctionbetweentopological spacesis calledproperifinverse imagesofcompact subsetsare compact.[1]Inalgebraic geometry, theanalogousconcept is called aproper morphism.
There are several competing definitions of a "properfunction".
Some authors call a functionf:X→Y{\displaystyle f:X\to Y}between twotopological spacesproperif thepreimageof everycompactset inY{\displaystyle Y}is compact inX.{\displaystyle X.}Other authors call a mapf{\displaystyle f}properif it is continuous andclosed with compact fibers; that is if it is acontinuousclosed mapand the preimage of every point inY{\displaystyle Y}iscompact. The two definitions are equivalent ifY{\displaystyle Y}islocally compactandHausdorff.
Letf:X→Y{\displaystyle f:X\to Y}be a closed map, such thatf−1(y){\displaystyle f^{-1}(y)}is compact (inX{\displaystyle X}) for ally∈Y.{\displaystyle y\in Y.}LetK{\displaystyle K}be a compact subset ofY.{\displaystyle Y.}It remains to show thatf−1(K){\displaystyle f^{-1}(K)}is compact.
Let{Ua:a∈A}{\displaystyle \left\{U_{a}:a\in A\right\}}be an open cover off−1(K).{\displaystyle f^{-1}(K).}Then for allk∈K{\displaystyle k\in K}this is also an open cover off−1(k).{\displaystyle f^{-1}(k).}Since the latter is assumed to be compact, it has a finite subcover. In other words, for everyk∈K,{\displaystyle k\in K,}there exists a finite subsetγk⊆A{\displaystyle \gamma _{k}\subseteq A}such thatf−1(k)⊆∪a∈γkUa.{\displaystyle f^{-1}(k)\subseteq \cup _{a\in \gamma _{k}}U_{a}.}The setX∖∪a∈γkUa{\displaystyle X\setminus \cup _{a\in \gamma _{k}}U_{a}}is closed inX{\displaystyle X}and its image underf{\displaystyle f}is closed inY{\displaystyle Y}becausef{\displaystyle f}is a closed map. Hence the setVk=Y∖f(X∖∪a∈γkUa){\displaystyle V_{k}=Y\setminus f\left(X\setminus \cup _{a\in \gamma _{k}}U_{a}\right)}is open inY.{\displaystyle Y.}It follows thatVk{\displaystyle V_{k}}contains the pointk.{\displaystyle k.}NowK⊆∪k∈KVk{\displaystyle K\subseteq \cup _{k\in K}V_{k}}and becauseK{\displaystyle K}is assumed to be compact, there are finitely many pointsk1,…,ks{\displaystyle k_{1},\dots ,k_{s}}such thatK⊆∪i=1sVki.{\displaystyle K\subseteq \cup _{i=1}^{s}V_{k_{i}}.}Furthermore, the setΓ=∪i=1sγki{\displaystyle \Gamma =\cup _{i=1}^{s}\gamma _{k_{i}}}is a finite union of finite sets, which makesΓ{\displaystyle \Gamma }a finite set.
Now it follows thatf−1(K)⊆f−1(∪i=1sVki)⊆∪a∈ΓUa{\displaystyle f^{-1}(K)\subseteq f^{-1}\left(\cup _{i=1}^{s}V_{k_{i}}\right)\subseteq \cup _{a\in \Gamma }U_{a}}and we have found a finite subcover off−1(K),{\displaystyle f^{-1}(K),}which completes the proof.
IfX{\displaystyle X}is Hausdorff andY{\displaystyle Y}is locally compact Hausdorff then proper is equivalent touniversally closed. A map is universally closed if for any topological spaceZ{\displaystyle Z}the mapf×idZ:X×Z→Y×Z{\displaystyle f\times \operatorname {id} _{Z}:X\times Z\to Y\times Z}is closed. In the case thatY{\displaystyle Y}is Hausdorff, this is equivalent to requiring that for any mapZ→Y{\displaystyle Z\to Y}the pullbackX×YZ→Z{\displaystyle X\times _{Y}Z\to Z}be closed, as follows from the fact thatX×YZ{\displaystyle X\times _{Y}Z}is a closed subspace ofX×Z.{\displaystyle X\times Z.}
An equivalent, possibly more intuitive definition whenX{\displaystyle X}andY{\displaystyle Y}aremetric spacesis as follows: we say an infinite sequence of points{pi}{\displaystyle \{p_{i}\}}in a topological spaceX{\displaystyle X}escapes to infinityif, for every compact setS⊆X{\displaystyle S\subseteq X}only finitely many pointspi{\displaystyle p_{i}}are inS.{\displaystyle S.}Then a continuous mapf:X→Y{\displaystyle f:X\to Y}is proper if and only if for every sequence of points{pi}{\displaystyle \left\{p_{i}\right\}}that escapes to infinity inX,{\displaystyle X,}the sequence{f(pi)}{\displaystyle \left\{f\left(p_{i}\right)\right\}}escapes to infinity inY.{\displaystyle Y.}
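A short worked example, added here for illustration (it is not part of the original article), of a continuous map that fails both criteria:

```latex
% The arctangent, viewed as a map f : R -> R, is continuous but not proper.
% Preimage criterion: the compact interval [0, pi/2] pulls back to a non-compact set,
%   f^{-1}([0, pi/2]) = [0, infinity).
% Sequence criterion: the sequence p_i = i escapes to infinity in the domain,
%   but f(p_i) = arctan(i) stays inside the compact set [0, pi/2],
%   so the image sequence does not escape to infinity.
f(x) = \arctan(x), \qquad
f^{-1}\!\left([0,\tfrac{\pi}{2}]\right) = [0,\infty) \ \text{(not compact)}.
```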
It is possible to generalize the notion of proper maps of topological spaces to locales and topoi; see (Johnstone 2002).
|
https://en.wikipedia.org/wiki/Proper_map
|
The proposition inprobability theoryknown as thelaw of total expectation,[1]thelaw of iterated expectations[2](LIE),Adam's law,[3]thetower rule,[4]and thesmoothing property of conditional expectation,[5]among other names, states that ifX{\displaystyle X}is arandom variablewhose expected valueE(X){\displaystyle \operatorname {E} (X)}is defined, andY{\displaystyle Y}is any random variable on the sameprobability space, then
i.e., theexpected valueof theconditional expected valueofX{\displaystyle X}givenY{\displaystyle Y}is the same as the expected value ofX{\displaystyle X}.
Theconditional expected valueE(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}, withY{\displaystyle Y}a random variable, is not a simple number; it is a random variable whose value depends on the value ofY{\displaystyle Y}. That is, the conditional expected value ofX{\displaystyle X}given theeventY=y{\displaystyle Y=y}is a number and it is a function ofy{\displaystyle y}. If we writeg(y){\displaystyle g(y)}for the value ofE(X∣Y=y){\displaystyle \operatorname {E} (X\mid Y=y)}then the random variableE(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}isg(Y){\displaystyle g(Y)}.
One special case states that if{Ai}{\displaystyle {\left\{A_{i}\right\}}}is a finite orcountablepartitionof thesample space, then
Suppose that only two factories supplylight bulbsto the market. FactoryX{\displaystyle X}'s bulbs work for an average of 5000 hours, whereas factoryY{\displaystyle Y}'s bulbs work for an average of 4000 hours. It is known that factoryX{\displaystyle X}supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work for?
Applying the law of total expectation, we have:
where
Thus each purchased light bulb has an expected lifetime of 4600 hours.
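The arithmetic can be checked in a few lines; the sketch below simply restates the market shares and mean lifetimes given above.

```python
# Law of total expectation applied to the light-bulb example:
# E[X] = E[X | factory X] P(factory X) + E[X | factory Y] P(factory Y)
p_factory = {"X": 0.60, "Y": 0.40}          # market shares
mean_life = {"X": 5000.0, "Y": 4000.0}      # expected lifetime (hours) per factory

expected_lifetime = sum(mean_life[f] * p_factory[f] for f in p_factory)
print(expected_lifetime)  # 4600.0 hours
```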
When a jointprobability density functioniswell definedand the expectations areintegrable, we write for the general caseE(X)=∫xPr[X=x]dxE(X∣Y=y)=∫xPr[X=x∣Y=y]dxE(E(X∣Y))=∫(∫xPr[X=x∣Y=y]dx)Pr[Y=y]dy=∫∫xPr[X=x,Y=y]dxdy=∫x(∫Pr[X=x,Y=y]dy)dx=∫xPr[X=x]dx=E(X).{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\int x\Pr[X=x]~dx\\\operatorname {E} (X\mid Y=y)&=\int x\Pr[X=x\mid Y=y]~dx\\\operatorname {E} (\operatorname {E} (X\mid Y))&=\int \left(\int x\Pr[X=x\mid Y=y]~dx\right)\Pr[Y=y]~dy\\&=\int \int x\Pr[X=x,Y=y]~dx~dy\\&=\int x\left(\int \Pr[X=x,Y=y]~dy\right)~dx\\&=\int x\Pr[X=x]~dx\\&=\operatorname {E} (X)\,.\end{aligned}}}A similar derivation works for discrete distributions using summation instead of integration. For the specific case of a partition, give each cell of the partition a unique label and let the random variableYbe the function of the sample space that assigns a cell's label to each point in that cell.
Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}be a probability space on which two subσ-algebrasG1⊆G2⊆F{\displaystyle {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq {\mathcal {F}}}are defined. For a random variableX{\displaystyle X}on such a space, the smoothing law states that ifE[X]{\displaystyle \operatorname {E} [X]}is defined, i.e.min(E[X+],E[X−])<∞{\displaystyle \min(\operatorname {E} [X_{+}],\operatorname {E} [X_{-}])<\infty }, then
Proof. Since a conditional expectation is aRadon–Nikodym derivative, verifying the following two properties establishes the smoothing law:
The first of these properties holds by definition of the conditional expectation. To prove the second one,
so the integral∫G1XdP{\displaystyle \textstyle \int _{G_{1}}X\,d\operatorname {P} }is defined (not equal∞−∞{\displaystyle \infty -\infty }).
The second property thus holds sinceG1∈G1⊆G2{\displaystyle G_{1}\in {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}}implies
Corollary.In the special case whenG1={∅,Ω}{\displaystyle {\mathcal {G}}_{1}=\{\emptyset ,\Omega \}}andG2=σ(Y){\displaystyle {\mathcal {G}}_{2}=\sigma (Y)}, the smoothing law reduces to
Alternative proof forE[E[X∣Y]]=E[X].{\displaystyle \operatorname {E} [\operatorname {E} [X\mid Y]]=\operatorname {E} [X].}
This is a simple consequence of the measure-theoretic definition ofconditional expectation. By definition,E[X∣Y]:=E[X∣σ(Y)]{\displaystyle \operatorname {E} [X\mid Y]:=\operatorname {E} [X\mid \sigma (Y)]}is aσ(Y){\displaystyle \sigma (Y)}-measurable random variable that satisfies
for every measurable setA∈σ(Y){\displaystyle A\in \sigma (Y)}. TakingA=Ω{\displaystyle A=\Omega }proves the claim.
|
https://en.wikipedia.org/wiki/Law_of_total_expectation
|
File attributes are a type of metadata that describe and may modify how files and/or directories in a filesystem behave. Typical file attributes may, for example, indicate or specify whether a file is visible, modifiable, compressed, or encrypted. The availability of most file attributes depends on support by the underlying filesystem (such as FAT, NTFS, ext4), where attribute data must be stored along with other control structures. Each attribute can have one of two states: set and cleared. Attributes are considered distinct from other metadata, such as dates and times, filename extensions or file system permissions. In addition to files, folders, volumes and other file system objects may have attributes.
Traditionally, inDOSandMicrosoft Windows,filesandfoldersaccepted four attributes:[1][2][3]
As new versions of Windows came out, Microsoft has added to the inventory of available attributes on theNTFSfile system,[7]including but not limited to:[8]
Other attributes that are displayed in the "Attributes" column of Windows Explorer[7]include:
In DOS,OS/2and Windows, theattribcommand incmd.exeandcommand.comcan be used to change and display the four traditional file attributes.[3][9]File Explorer in Windows can show the seven mentioned attributes but cannot set or clear the System attribute.[5]Windows PowerShell, which has become a component ofWindows 7and later, features two commands that can read and write attributes:Get-ItemPropertyandSet-ItemProperty.[10]To change an attribute on a file onWindows NT, the user must have appropriatefile system permissionsknown asWrite AttributesandWrite Extended Attributes.[11]
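Programs can also read these attribute bits directly; a small Python sketch follows (Windows only; the file path is a placeholder), using os.stat, which exposes the bits in st_file_attributes.

```python
import os
import stat

# Read the traditional DOS/Windows attribute bits of a file (Windows only).
# The path below is just a placeholder.
info = os.stat(r"C:\example\notes.txt")
attrs = info.st_file_attributes  # available on Windows since Python 3.5

print("Read-only:", bool(attrs & stat.FILE_ATTRIBUTE_READONLY))
print("Hidden:   ", bool(attrs & stat.FILE_ATTRIBUTE_HIDDEN))
print("System:   ", bool(attrs & stat.FILE_ATTRIBUTE_SYSTEM))
print("Archive:  ", bool(attrs & stat.FILE_ATTRIBUTE_ARCHIVE))
```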
InUnixand Unix-like systems, includingPOSIX-conforming systems, each file has a 'mode' containing 9 bit flags controlling read, write and execute permission for each of the file's owner, group and all other users (seeFile-system permissions §Traditional Unix permissionsfor more details) plus thesetuidandsetgidbit flags and a'sticky' bit flag.
The mode also specifies thefile type(regular file, directory, or some other special kind).
In4.4BSDand4.4BSD-Lite, files and directories (folders) accepted four attributes that could be set by the owner of the file or thesuperuser(the "User" attributes) and two attributes that could only be set by the superuser (the "System" attributes):[12]
FreeBSDadded some additional attributes,[13]also supported byDragonFly BSD:[14]
FreeBSD also supports:[13]
whereas DragonFly BSD supports:[14]
NetBSDadded another attribute,[15]also supported byOpenBSD:[16]
macOSadded three attributes:
In these systems, thechflagsandlscommands can be used to change and display file attributes. To change a "user" attribute on a file in 4.4BSD-derived operating systems, the user must be the owner of the file or the superuser; to change a "system" attribute, the user must be the superuser.
TheLinuxoperating system can support awide range of file attributesthat can be listed by thelsattrcommand and modified, where possible, by thechattrcommand.
Programs can examine and alter attributes usingioctloperations.[18]
Many Linux file systems support only a limited set of attributes, and none of them support every attribute thatchattrcan change. File systems that support at least some attributes includeext4,XFSandbtrfs.
For example, the append-only attribute allows writing to the file only in append mode, while the immutable attribute prevents any change to the file's contents or metadata: the file or directory cannot be written to, deleted, renamed, or hard-linked.
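The same flags can be driven from a script; below is a small sketch (Linux only, chattr generally requires root privileges, and the path is a placeholder) that sets and clears the append-only flag using the chattr and lsattr commands mentioned above.

```python
import subprocess

path = "/tmp/example.log"  # placeholder path on a filesystem that supports chattr

# Make the file append-only (+a), then list its attribute flags with lsattr.
# chattr normally requires root privileges.
subprocess.run(["chattr", "+a", path], check=True)
print(subprocess.run(["lsattr", path], check=True,
                     capture_output=True, text=True).stdout)

# Clear the flag again.
subprocess.run(["chattr", "-a", path], check=True)
```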
Support for "system attributes" (in which the operating system defines the meaning, unlike generalextended file attributes) was added to OpenSolaris in 2007 in support of the CIFS server.[19]It has been carried forward from there into both theOracle Solaris11 releases and the open sourceillumosproject.
In this implementation, awide range of attributescan be set via thechmodcommand[20][21]and listed by thelscommand.[22][23]Programs can examine and alter attributes using thegetattratandsetattratfunctions.[24][25]
Currently theZFSfile system supports all defined attributes, and starting in Oracle Solaris 11.2, thetmpfsfile system supports a subset of attributes.[26]
For example, the appendonly attribute allows writing to the file only in append mode, while the immutable attribute prevents any change to the file's contents or metadata (except the access time): the file or directory cannot be written to, deleted, or renamed.
|
https://en.wikipedia.org/wiki/File_attribute
|
Inoptimization,line searchis a basiciterativeapproach to find alocal minimumx∗{\displaystyle \mathbf {x} ^{*}}of anobjective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }. It first finds adescent directionalong which the objective functionf{\displaystyle f}will be reduced, and then computes a step size that determines how farx{\displaystyle \mathbf {x} }should move along that direction. The descent direction can be computed by various methods, such asgradient descentorquasi-Newton method. The step size can be determined either exactly or inexactly.
Supposefis a one-dimensional function,f:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }, and assume that it isunimodal, that is, contains exactly one local minimumx* in a given interval [a,z]. This means thatfis strictly decreasing in [a,x*] and strictly increasing in [x*,z]. There are several ways to find an (approximate) minimum point in this case.[1]: sec.5
Zero-order methods use only function evaluations (i.e., avalue oracle) - not derivatives:[1]: sec.5
Zero-order methods are very general - they do not assume differentiability or even continuity.
First-order methods assume thatfis continuously differentiable, and that we can evaluate not onlyfbut also its derivative.[1]: sec.5
Curve-fitting methods try to attainsuperlinear convergenceby assuming thatfhas some analytic form, e.g. a polynomial of finite degree. At each iteration, there is a set of "working points" in which we know the value off(and possibly also its derivative). Based on these points, we can compute a polynomial that fits the known values, and find its minimum analytically. The minimum point becomes a new working point, and we proceed to the next iteration:[1]: sec.5
Curve-fitting methods have superlinear convergence when started close enough to the local minimum, but might diverge otherwise.Safeguarded curve-fitting methodssimultaneously execute a linear-convergence method in parallel to the curve-fitting method. They check in each iteration whether the point found by the curve-fitting method is close enough to the interval maintained by safeguard method; if it is not, then the safeguard method is used to compute the next iterate.[1]: 5.2.3.4
In general, we have a multi-dimensionalobjective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }. The line-search method first finds adescent directionalong which the objective functionf{\displaystyle f}will be reduced, and then computes a step size that determines how farx{\displaystyle \mathbf {x} }should move along that direction. The descent direction can be computed by various methods, such asgradient descentorquasi-Newton method. The step size can be determined either exactly or inexactly. Here is an example gradient method that uses a line search in step 5:
At the line search step (2.3), the algorithm may minimizehexactly, by solvingh′(αk)=0{\displaystyle h'(\alpha _{k})=0}, orapproximately, by using one of the one-dimensional line-search methods mentioned above. It can also be solvedloosely, by asking for a sufficient decrease inhthat does not necessarily approximate the optimum. One example of the former isconjugate gradient method. The latter is called inexact line search and may be performed in a number of ways, such as abacktracking line searchor using theWolfe conditions.
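The following is a minimal sketch (not the article's own pseudocode, which is not reproduced above) of gradient descent with an inexact backtracking (Armijo) line search.

```python
import numpy as np

def backtracking_line_search(f, grad, x, direction, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    fx, gx = f(x), grad(x)
    while f(x + alpha * direction) > fx + c * alpha * gx.dot(direction):
        alpha *= rho
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                    # descent direction
        alpha = backtracking_line_search(f, grad, x, d)
        x = x + alpha * d                         # step along the direction
    return x

# Example: minimize the quadratic f(x, y) = (x - 1)^2 + 10 y^2.
f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * v[1] ** 2
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * v[1]])
print(gradient_descent(f, grad, [5.0, 3.0]))      # approx. [1, 0]
```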
Like other optimization methods, line search may be combined withsimulated annealingto allow it to jump over somelocal minima.
|
https://en.wikipedia.org/wiki/Line_search
|
Principal variation search(sometimes equated with the practically identicalNegaScout) is anegamaxalgorithm that can be faster thanalpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing theminimaxvalue of a node in atree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage.
NegaScout works best when there is a good move ordering. In practice, the move ordering is often determined by previous shallower searches. It produces more cutoffs than alpha–beta by assuming that the first explored node is the best. In other words, it supposes the first node is in theprincipal variation. Then, it can check whether that is true by searching the remaining nodes with a null window (also known as a scout window; when alpha and beta are equal), which is faster than searching with the regular alpha–beta window. If the proof fails, then the first node was not in the principal variation, and the search continues as normal alpha–beta. Hence, NegaScout works best when the move ordering is good. With a random move ordering, NegaScout will take more time than regular alpha–beta; although it will not explore any nodes alpha–beta did not, it will have to re-search many nodes.
Alexander Reinefeldinvented NegaScout several decades after the invention of alpha–beta pruning. He gives a proof of correctness of NegaScout in his book.[1]
Another search algorithm calledSSS*can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage) and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored. This part of the tree has the same size as SSS*'s OPEN list would have.[2]A reformulation called MT-SSS* allowed it to be implemented as a series of null window calls to Alpha–Beta (or NegaScout) that use a transposition table, and direct comparisons using game playing programs could be made. It did not outperform NegaScout in practice. Yet another search algorithm, which does tend to do better than NegaScout in practice, is the best-first algorithm calledMTD(f), although neither algorithm dominates the other. There are trees in which NegaScout searches fewer nodes than SSS* or MTD(f) and vice versa.
NegaScout takes after SCOUT, invented byJudea Pearlin 1980, which was the first algorithm to outperform alpha–beta and to be proven asymptotically optimal.[3][4]Null windows, with β=α+1 in a negamax setting, were invented independently by J.P. Fishburn and used in an algorithm similar to SCOUT in an appendix to his Ph.D. thesis,[5]in a parallel alpha–beta algorithm,[6]and on the last subtree of a search tree root node.[7]
Most of the moves are not acceptable for both players, so we do not need to fully search every node to get the exact score. The exact score is only needed for nodes in theprincipal variation(an optimal sequence of moves for both players), where it will propagate up to the root. In iterative deepening search, the previous iteration has already established a candidate for such a sequence, which is also commonly called the principal variation. For any non-leaf in this principal variation, its children are reordered such that the next node from this principal variation is the first child. All other children are assumed to result in a worse or equal score for the current player (this assumption follows from the assumption that the current PV candidate is an actual PV). To test this, we search the first move with a full window to establish an upper bound on the score of the other children, for which we conduct a zero window search to test if a move can be better. Since a zero window search is much cheaper due to the higher frequency of beta cut-offs, this can save a lot of effort. If we find that a move can raise alpha, our assumption has been disproven for this move and we do a re-search with the full window to get the exact score.[8][9]
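A compact sketch of the search in negamax form follows; the game interface used here (moves, apply, is_terminal, evaluate, with evaluate scored from the first player's viewpoint) is a hypothetical placeholder, not taken from any particular engine.

```python
def pvs(state, depth, alpha, beta, color, game):
    """Principal variation search (NegaScout) in negamax form."""
    if depth == 0 or game.is_terminal(state):
        return color * game.evaluate(state)

    first_child = True
    for move in game.moves(state):           # good move ordering matters here
        child = game.apply(state, move)
        if first_child:
            # Full-window search on the presumed principal-variation move.
            score = -pvs(child, depth - 1, -beta, -alpha, -color, game)
            first_child = False
        else:
            # Null-window ("scout") search to test whether the move can raise alpha.
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color, game)
            if alpha < score < beta:
                # The test failed high: re-search with the full window.
                score = -pvs(child, depth - 1, -beta, -alpha, -color, game)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                             # beta cut-off
    return alpha
```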
|
https://en.wikipedia.org/wiki/Negascout
|
Incomputer science, thecontrolled NOT gate(alsoC-NOTorCNOT),controlled-Xgate,controlled-bit-flip gate,Feynman gateorcontrolled Pauli-Xis aquantum logic gatethat is an essential component in the construction of agate-basedquantum computer. It can be used toentangleand disentangleBell states. Any quantum circuit can be simulated to an arbitrary degree of accuracy using a combination of CNOT gates and singlequbitrotations.[1][2]The gate is sometimes named afterRichard Feynmanwho developed an early notation for quantum gate diagrams in 1986.[3][4][5]
The CNOT can be expressed in thePauli basisas:
Being bothunitaryandHermitian, CNOThas the propertyeiθU=(cosθ)I+(isinθ)U{\displaystyle e^{i\theta U}=(\cos \theta )I+(i\sin \theta )U}andU=eiπ2(I−U)=e−iπ2(I−U){\displaystyle U=e^{i{\frac {\pi }{2}}(I-U)}=e^{-i{\frac {\pi }{2}}(I-U)}}, and isinvolutory.
The CNOT gate can be further decomposed as products ofrotation operator gatesand exactly onetwo qubit interaction gate, for example
In general, any single qubitunitary gatecan be expressed asU=eiH{\displaystyle U=e^{iH}}, whereHis aHermitian matrix, and then the controlledUisCU=ei12(I1−Z1)H2{\displaystyle CU=e^{i{\frac {1}{2}}(I_{1}-Z_{1})H_{2}}}.
The CNOT gate is also used in classicalreversible computing.
The CNOT gate operates on aquantum registerconsisting of 2 qubits. The CNOT gate flips the second qubit (the target qubit) if and only if the first qubit (the control qubit) is|1⟩{\displaystyle |1\rangle }.
If{|0⟩,|1⟩}{\displaystyle \{|0\rangle ,|1\rangle \}}are the only allowed input values for both qubits, then the TARGET output of the CNOT gate corresponds to the result of a classicalXOR gate. Fixing CONTROL as|1⟩{\displaystyle |1\rangle }, the TARGET output of the CNOT gate yields the result of a classicalNOT gate.
More generally, the inputs are allowed to be a linear superposition of{|0⟩,|1⟩}{\displaystyle \{|0\rangle ,|1\rangle \}}. The CNOT gate transforms the quantum state:
a|00⟩+b|01⟩+c|10⟩+d|11⟩{\displaystyle a|00\rangle +b|01\rangle +c|10\rangle +d|11\rangle }
into:
a|00⟩+b|01⟩+c|11⟩+d|10⟩{\displaystyle a|00\rangle +b|01\rangle +c|11\rangle +d|10\rangle }
The action of the CNOT gate can be represented by the matrix (permutation matrixform):
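A small numerical illustration (a NumPy sketch; basis states ordered |00⟩, |01⟩, |10⟩, |11⟩, first qubit as control) of this permutation-matrix action:

```python
import numpy as np

# CNOT in the computational basis, ordered |00>, |01>, |10>, |11>
# (first qubit = control, second qubit = target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# A general two-qubit state a|00> + b|01> + c|10> + d|11>.
a, b, c, d = 0.1, 0.2, 0.3, 0.4
state = np.array([a, b, c, d], dtype=complex)
state /= np.linalg.norm(state)   # normalise the amplitudes

# CNOT swaps the amplitudes of |10> and |11>, leaving |00> and |01> untouched.
print(CNOT @ state)
```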
The first experimental realization of a CNOT gate was accomplished in 1995. Here, a singleBerylliumion in atrapwas used. The two qubits were encoded into an optical state and into the vibrational state of the ion within the trap. At the time of the experiment, the reliability of the CNOT-operation was measured to be on the order of 90%.[6]
In addition to a regular controlled NOT gate, one could construct a function-controlled NOT gate, which accepts an arbitrary numbern+1 of qubits as input, wheren+1 is greater than or equal to 2 (aquantum register). This gate flips the last qubit of the register if and only if a built-in function, with the firstnqubits as input, returns a 1.
The function-controlled NOT gate is an essential element of theDeutsch–Jozsa algorithm.
When viewed only in the computational basis{|0⟩,|1⟩}{\displaystyle \{|0\rangle ,|1\rangle \}}, the behaviour of the CNOTappears to be like the equivalent classical gate. However, the simplicity of labelling one qubit thecontroland the other thetargetdoes not reflect the complexity of what happens for most input values of both qubits.
Insight can be gained by expressing the CNOT gate with respect to a Hadamard transformed basis{|+⟩,|−⟩}{\displaystyle \{|+\rangle ,|-\rangle \}}. The Hadamard transformed basis[a]of a one-qubitregisteris given by
and the corresponding basis of a 2-qubit register is
etc. Viewing CNOT in this basis, the state of the second qubit remains unchanged, and the state of the first qubit is flipped, according to the state of the second bit. (For details see below.) "Thus, in this basis the sense of which bit is thecontrol bitand which thetarget bithas reversed. But we have not changed the transformation at all, only the way we are thinking about it."[7]
The "computational" basis{|0⟩,|1⟩}{\displaystyle \{|0\rangle ,|1\rangle \}}is the eigenbasis for the spin in the Z-direction, whereas the Hadamard basis{|+⟩,|−⟩}{\displaystyle \{|+\rangle ,|-\rangle \}}is the eigenbasis for spin in the X-direction. Switching X and Z and qubits 1 and 2, then, recovers the original transformation."[8]This expresses a fundamental symmetry of the CNOT gate.
The observation that both qubits are (equally) affected in a CNOTinteraction is of importance when considering information flow in entangled quantum systems.[9]
We now proceed to give the details of the computation. Working through each of the Hadamard basis states, the results on the right column show that the first qubit flips between|+⟩{\displaystyle |+\rangle }and|−⟩{\displaystyle |-\rangle }when the second qubit is|−⟩{\displaystyle |-\rangle }:
A quantum circuit that performs a Hadamard transform followed by CNOTthen another Hadamard transform, can be described as performing the CNOT gate in the Hadamard basis (i.e. achange of basis):
(H1⊗ H1)−1. CNOT. (H1⊗ H1)
The single-qubit Hadamard transform, H1, isHermitianand therefore its own inverse. The tensor product of two Hadamard transforms operating (independently) on two qubits is labelledH2. We can therefore write the matrices as:
H2. CNOT. H2
When multiplied out, this yields a matrix that swaps the|01⟩{\displaystyle |01\rangle }and|11⟩{\displaystyle |11\rangle }terms over, while leaving the|00⟩{\displaystyle |00\rangle }and|10⟩{\displaystyle |10\rangle }terms alone. This is equivalent to a CNOT gate where qubit 2 is the control qubit and qubit 1 is the target qubit:[b]
12[11111−11−111−1−11−1−11].[1000010000010010].12[11111−11−111−1−11−1−11]=[1000000100100100]{\displaystyle {\frac {1}{2}}{\begin{bmatrix}{\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\end{bmatrix}}.{\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}}.{\frac {1}{2}}{\begin{bmatrix}{\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&0&0&1\\0&0&1&0\\0&1&0&0\end{bmatrix}}}
A common application of the CNOTgate is to maximally entangle two qubits into the|Φ+⟩{\displaystyle |\Phi ^{+}\rangle }Bell state; this forms part of the setup of thesuperdense coding,quantum teleportation, and entangledquantum cryptographyalgorithms.
To construct|Φ+⟩{\displaystyle |\Phi ^{+}\rangle }, the inputs A (control) and B (target) to the CNOTgate are
After applying CNOT, the resulting Bell state12(|00⟩+|11⟩){\textstyle {\frac {1}{\sqrt {2}}}(|00\rangle +|11\rangle )}has the property that the individual qubits can be measured using any basis and will always present a 50/50 chance of resolving to each state. In effect, the individual qubits are in an undefined state. The correlation between the two qubits is the complete description of the state of the two qubits; if we both choose the same basis to measure both qubits and compare notes, the measurements will perfectly correlate.
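The preparation can likewise be checked numerically; the sketch below (same basis ordering as above) applies a Hadamard to the control qubit of |00⟩ and then CNOT.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, put the control qubit into |+> = (|0> + |1>)/sqrt(2), then apply CNOT.
state = np.array([1, 0, 0, 0], dtype=complex)                  # |00>
state = np.kron(H, I2) @ state                                 # (|00> + |10>)/sqrt(2)
bell = CNOT @ state                                            # (|00> + |11>)/sqrt(2)
print(bell)   # approx [0.707, 0, 0, 0.707]

# Each qubit individually measures 0 or 1 with probability 1/2,
# but the two measurement outcomes are perfectly correlated.
```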
When viewed in the computational basis, it appears that qubit A is affecting qubit B. Changing our viewpoint to the Hadamard basis demonstrates that, in a symmetrical way, qubit B is affecting qubit A.
The input state can alternately be viewed as
In the Hadamard view, the control and target qubits have conceptually swapped and qubit A is inverted when qubit B is|−⟩B{\displaystyle |-\rangle _{B}}. The output state after applying the CNOTgate is12(|++⟩+|−−⟩),{\displaystyle {\tfrac {1}{\sqrt {2}}}(|++\rangle +|--\rangle ),}which can be shown as follows:
The C-ROT gate (controlledRabi rotation) is equivalent to a C-NOT gate except for aπ/2{\displaystyle \pi /2}rotation of the nuclear spin around the z axis.[10][11]
Trapped ion quantum computers:
In May 2024, Canada implemented export restrictions on the sale of quantum computers containing more than 34 qubits and error rates below a certain CNOT error threshold, along with restrictions for quantum computers with more qubits and higher error rates.[12] The same restrictions quickly appeared in the UK, France, Spain and the Netherlands. Few explanations were offered for the action, but all of these are Wassenaar Arrangement states, and the restrictions appear related to national security concerns, potentially including quantum cryptography or protection from competition.[13][14]
|
https://en.wikipedia.org/wiki/Controlled_NOT_gate
|
AZadoff–Chu (ZC) sequence[1]: 152is acomplex-valuedmathematicalsequencewhich, when applied to asignal, gives rise to a new signal of constantamplitude. Whencyclically shiftedversions of a Zadoff–Chu sequence are imposed upon a signal the resulting set of signals detected at the receiver areuncorrelatedwith one another.
Zadoff–Chu sequences exhibit the useful property that cyclically shifted versions of themselves areorthogonalto one another.
A generated Zadoff–Chu sequence that has not been shifted is known as aroot sequence.
The complex value at each positionnof each root Zadoff–Chu sequence parametrised byuis given by
where
Zadoff–Chu sequences are CAZAC sequences (constant amplitude zero autocorrelation waveform).
Note that the special case q=0{\displaystyle q=0} results in a Chu sequence.[1]: 151 Setting q≠0{\displaystyle q\neq 0} produces a sequence equal to the Chu sequence cyclically shifted by q{\displaystyle q} and multiplied by a complex number of modulus 1, where "multiplied" means that each element is multiplied by the same number.
1. They areperiodicwith periodNZC{\displaystyle N_{\text{ZC}}}.
2. IfNZC{\displaystyle N_{\text{ZC}}}is prime, theDiscrete Fourier Transformof a Zadoff–Chu sequence is another Zadoff–Chu sequence conjugated, scaled and time scaled.
3. The auto correlation of a Zadoff–Chu sequence with a cyclically shifted version of itself is zero, i.e., it is non-zero only at one instant which corresponds to the cyclic shift.
4. Thecross-correlationbetween two prime length Zadoff–Chu sequences, i.e. different values ofu,u=u1,u=u2{\displaystyle u,u=u_{1},u=u_{2}}, is constant1/NZC{\displaystyle 1/{\sqrt {N_{\text{ZC}}}}}, provided thatu1−u2{\displaystyle u_{1}-u_{2}}is relatively prime toNZC{\displaystyle N_{\text{ZC}}}.[2]
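The displayed generation formula is not reproduced in the extract above; the sketch below uses the commonly cited form x_u(n) = exp(−jπ·u·n·(n + c_f + 2q)/N_ZC) with c_f = N_ZC mod 2 (stated here from general knowledge, not taken from the text), and then checks the constant-amplitude and zero cyclic autocorrelation properties numerically.

```python
import numpy as np

def zadoff_chu(u, n_zc, q=0):
    """Root Zadoff-Chu sequence of length n_zc with root index u (gcd(u, n_zc) = 1)."""
    n = np.arange(n_zc)
    cf = n_zc % 2
    return np.exp(-1j * np.pi * u * n * (n + cf + 2 * q) / n_zc)

n_zc, u = 139, 25
x = zadoff_chu(u, n_zc)

# Constant amplitude (CAZAC property).
print(np.allclose(np.abs(x), 1.0))                      # True

# Cyclic autocorrelation: (near) zero for every non-zero cyclic shift.
shift = 7
r = np.vdot(x, np.roll(x, shift)) / n_zc
print(abs(r) < 1e-9)                                    # True
```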
Zadoff–Chu sequences are used in the3GPPLong Term Evolution(LTE)air interfacein the Primary Synchronization Signal (PSS), random access preamble (PRACH), uplink control channel (PUCCH), uplink traffic channel (PUSCH) and sounding reference signals (SRS).
By assigningorthogonalZadoff–Chu sequences to each LTEeNodeBand multiplying their transmissions by their respective codes, thecross-correlationof simultaneous eNodeB transmissions is reduced, thus reducing inter-cell interference and uniquely identifying eNodeB transmissions.
Zadoff–Chu sequences are an improvement over theWalsh–Hadamard codesused inUMTSbecause they result in a constant-amplitude output signal, reducing the cost and complexity of theradio's power amplifier.[3]
|
https://en.wikipedia.org/wiki/Zadoff%E2%80%93Chu_sequence
|
ThePearson distributionis a family ofcontinuousprobability distributions. It was first published byKarl Pearsonin 1895 and subsequently extended by him in 1901 and 1916 in a series of articles onbiostatistics.
The Pearson system was originally devised in an effort to model visiblyskewedobservations. It was well known at the time how to adjust a theoretical model to fit the first twocumulantsormomentsof observed data: Anyprobability distributioncan be extended straightforwardly to form alocation-scale family. Except inpathologicalcases, a location-scale family can be made to fit the observedmean(first cumulant) andvariance(second cumulant) arbitrarily well. However, it was not known how to construct probability distributions in which theskewness(standardized third cumulant) andkurtosis(standardized fourth cumulant) could be adjusted equally freely. This need became apparent when trying to fit known theoretical models to observed data that exhibited skewness. Pearson's examples include survival data, which are usually asymmetric.
In his original paper, Pearson (1895, p. 360) identified four types of distributions (numbered I through IV) in addition to thenormal distribution(which was originally known as type V). The classification depended on whether the distributions weresupportedon a bounded interval, on a half-line, or on the wholereal line; and whether they were potentially skewed or necessarily symmetric. A second paper (Pearson 1901) fixed two omissions: it redefined the type V distribution (originally just thenormal distribution, but now theinverse-gamma distribution) and introduced the type VI distribution. Together the first two papers cover the five main types of the Pearson system (I, III, IV, V, and VI). In a third paper, Pearson (1916) introduced further special cases and subtypes (VII through XII).
Rhind (1909, pp. 430–432) devised a simple way of visualizing the parameter space of the Pearson system, which was subsequently adopted by Pearson (1916, plate 1 and pp. 430ff., 448ff.). The Pearson types are characterized by two quantities, commonly referred to as β1and β2. The first is the square of theskewness: β1= γ1where γ1is the skewness, or thirdstandardized moment. The second is the traditionalkurtosis, or fourth standardized moment: β2= γ2+ 3. (Modern treatments define kurtosis γ2in terms of cumulants instead of moments, so that for a normal distribution we have γ2= 0 and β2= 3. Here we follow the historical precedent and use β2.) The diagram shows which Pearson type a given concrete distribution (identified by a point (β1, β2)) belongs to.
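The two diagram coordinates β1 and β2 can be estimated directly from data; a sketch using SciPy follows (scipy.stats.kurtosis with fisher=False returns β2 rather than the excess kurtosis γ2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.gamma(shape=3.0, scale=2.0, size=100_000)   # a visibly skewed sample

gamma1 = stats.skew(sample)                   # third standardized moment
beta1 = gamma1 ** 2                           # beta_1 = gamma_1^2
beta2 = stats.kurtosis(sample, fisher=False)  # fourth standardized moment (beta_2 = gamma_2 + 3)

print(beta1, beta2)
# For a gamma distribution with shape k = 3: beta_1 = 4/k ~ 1.33 and beta_2 = 3 + 6/k = 5.
```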
Many of the skewed or non-mesokurticdistributions familiar to statisticians today were still unknown in the early 1890s. What is now known as thebeta distributionhad been used byThomas Bayesas aposterior distributionof the parameter of aBernoulli distributionin his 1763 work oninverse probability. The Beta distribution gained prominence due to its membership in Pearson's system and was known until the 1940s as the Pearson type I distribution.[1](Pearson's type II distribution is a special case of type I, but is usually no longer singled out.) Thegamma distributionoriginated from Pearson's work (Pearson 1893, p. 331; Pearson 1895, pp. 357, 360, 373–376) and was known as the Pearson type III distribution, before acquiring its modern name in the 1930s and 1940s.[2]Pearson's 1895 paper introduced the type IV distribution, which containsStudent'st-distributionas a special case, predatingWilliam Sealy Gosset's subsequent use by several years. His 1901 paper introduced theinverse-gamma distribution(type V) and thebeta prime distribution(type VI).
A Pearsondensitypis defined to be any valid solution to thedifferential equation(cf. Pearson 1895, p. 381)
with:
According to Ord,[3]Pearson devised the underlying form of Equation (1) on the basis of, firstly, the formula for the derivative of the logarithm of the density function of thenormal distribution(which gives a linear function) and, secondly, from a recurrence relation for values in theprobability mass functionof thehypergeometric distribution(which yields the linear-divided-by-quadratic structure).
In Equation (1), the parameteradetermines astationary point, and hence under some conditions amodeof the distribution, since
follows directly from the differential equation.
Since we are confronted with afirst-order linear differential equation with variable coefficients, its solution is straightforward:
The integral in this solution simplifies considerably when certain special cases of the integrand are considered. Pearson (1895, p. 367) distinguished two main cases, determined by the sign of thediscriminant(and hence the number of realroots) of thequadratic function
If the discriminant of the quadratic function (2) is negative (b12−4b2b0<0{\displaystyle b_{1}^{2}-4b_{2}b_{0}<0}), it has no real roots. Then define
Observe thatαis a well-defined real number andα≠ 0, because by assumption4b2b0−b12>0{\displaystyle 4b_{2}b_{0}-b_{1}^{2}>0}and thereforeb2≠ 0. Applying these substitutions, the quadratic function (2) is transformed into
The absence of real roots is obvious from this formulation, because α2is necessarily positive.
We now express the solution to the differential equation (1) as a function ofy:
Pearson (1895, p. 362) called this the "trigonometrical case", because the integral
involves theinversetrigonometricarctan function. Then
Finally, let
Applying these substitutions, we obtain the parametric function:
This unnormalized density hassupporton the entirereal line. It depends on ascale parameterα > 0 andshape parametersm> 1/2 andν. One parameter was lost when we chose to find the solution to the differential equation (1) as a function ofyrather thanx. We therefore reintroduce a fourth parameter, namely thelocation parameterλ. We have thus derived the density of thePearson type IV distribution:
Thenormalizing constantinvolves thecomplexGamma function(Γ) and theBeta function(B).
Notice that thelocation parameterλhere is not the same as the original location parameter introduced in the general formulation, but is related via
The shape parameterνof the Pearson type IV distribution controls itsskewness. If we fix its value at zero, we obtain a symmetric three-parameter family. This special case is known as thePearson type VII distribution(cf. Pearson 1916, p. 450). Its density is
where B is theBeta function.
An alternative parameterization (and slight specialization) of the type VII distribution is obtained by letting
which requiresm> 3/2. This entails a minor loss of generality but ensures that thevarianceof the distribution exists and is equal to σ2. Now the parametermonly controls thekurtosisof the distribution. Ifmapproaches infinity asλandσare held constant, thenormal distributionarises as a special case:
This is the density of a normal distribution with meanλand standard deviationσ.
It is convenient to require thatm> 5/2 and to let
This is another specialization, and it guarantees that the first four moments of the distribution exist. More specifically, the Pearson type VII distribution parameterized in terms of (λ, σ, γ2) has a mean ofλ,standard deviationofσ,skewnessof zero, and positiveexcess kurtosisof γ2.
The Pearson type VII distribution is equivalent to the non-standardizedStudent'st-distributionwith parameters ν > 0, μ, σ2by applying the following substitutions to its original parameterization:
Observe that the constraintm> 1/2is satisfied.
The resulting density is
which is easily recognized as the density of a Student'st-distribution.
This implies that the Pearson type VII distribution subsumes the standardStudent'st-distributionand also the standardCauchy distribution. In particular, the standard Student'st-distribution arises as a subcase, whenμ= 0 andσ2= 1, equivalent to the following substitutions:
The density of this restricted one-parameter family is a standard Student'st:
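The equivalence can be verified numerically; the sketch below uses the standard (λ, α, m) parameterization of the type VII density (the displayed density formula is not reproduced in the extract above) and compares it with scipy.stats.t.

```python
import numpy as np
from scipy import stats
from scipy.special import beta as beta_fn

def pearson7_pdf(x, lam, alpha, m):
    """Pearson type VII density with location lam, scale alpha and shape m > 1/2."""
    return (1.0 + ((x - lam) / alpha) ** 2) ** (-m) / (alpha * beta_fn(m - 0.5, 0.5))

nu = 5.0                                    # degrees of freedom
x = np.linspace(-4, 4, 9)

# Substitutions for the standard Student's t: lam = 0, alpha = sqrt(nu), m = (nu + 1)/2.
p7 = pearson7_pdf(x, lam=0.0, alpha=np.sqrt(nu), m=(nu + 1) / 2)
print(np.allclose(p7, stats.t.pdf(x, df=nu)))   # True
```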
If the quadratic function (2) has a non-negative discriminant (b12−4b2b0≥0{\displaystyle b_{1}^{2}-4b_{2}b_{0}\geq 0}), it has real rootsa1anda2(not necessarily distinct):
In the presence of real roots the quadratic function (2) can be written as
and the solution to the differential equation is therefore
Pearson (1895, p. 362) called this the "logarithmic case", because the integral
involves only thelogarithmfunction and not the arctan function as in the previous case.
Using the substitution
we obtain the following solution to the differential equation (1):
Since this density is only known up to a hidden constant of proportionality, that constant can be changed and the density written as follows:
ThePearson type I distribution(a generalization of thebeta distribution) arises when the roots of the quadratic equation (2) are of opposite sign, that is,a1<0<a2{\displaystyle a_{1}<0<a_{2}}. Then the solutionpis supported on the interval(a1,a2){\displaystyle (a_{1},a_{2})}. Apply the substitution
where0<y<1{\displaystyle 0<y<1}, which yields a solution in terms ofythat is supported on the interval (0, 1):
One may define:
Regrouping constants and parameters, this simplifies to:
Thusx−λ−a1a2−a1{\displaystyle {\frac {x-\lambda -a_{1}}{a_{2}-a_{1}}}}follows aB(m1+1,m2+1){\displaystyle \mathrm {B} (m_{1}+1,m_{2}+1)}withλ=μ1−(a2−a1)m1+1m1+m2+2−a1{\displaystyle \lambda =\mu _{1}-(a_{2}-a_{1}){\frac {m_{1}+1}{m_{1}+m_{2}+2}}-a_{1}}. It turns out thatm1,m2> −1 is necessary and sufficient forpto be a proper probability density function.
ThePearson type II distributionis a special case of the Pearson type I family restricted to symmetric distributions.
For the Pearson type II curve,[4]
where
The ordinate,y, is the frequency of∑d2{\displaystyle \sum d^{2}}. The Pearson type II distribution is used in computing the table of significant correlation coefficients forSpearman's rank correlation coefficientwhen the number of items in a series is less than 100 (or 30, depending on some sources). After that, the distribution mimics a standardStudent's t-distribution. For the table of values, certain values are used as the constants in the previous equation:
The moments ofxused are
Defining
b0+b1(x−λ){\displaystyle b_{0}+b_{1}(x-\lambda )}isGamma(m+1,b12){\displaystyle \operatorname {Gamma} (m+1,b_{1}^{2})}. The Pearson type III distribution is agamma distributionorchi-squared distribution.
Defining new parameters:
x−λ{\displaystyle x-\lambda }follows anInverseGamma(1b2−1,a−C1b2){\displaystyle \operatorname {InverseGamma} ({\frac {1}{b_{2}}}-1,{\frac {a-C_{1}}{b_{2}}})}. The Pearson type V distribution is aninverse-gamma distribution.
Defining
x−λ−a2a2−a1{\displaystyle {\frac {x-\lambda -a_{2}}{a_{2}-a_{1}}}}follows aβ′(m2+1,−m2−m1−1){\displaystyle \beta ^{\prime }(m_{2}+1,-m_{2}-m_{1}-1)}. The Pearson type VI distribution is abeta prime distributionorF-distribution.
The Pearson family subsumes the following distributions, among others:
Alternatives to the Pearson system of distributions for the purpose of fitting distributions to data are thequantile-parameterized distributions(QPDs) and themetalog distributions. QPDs and metalogs can provide greater shape and bounds flexibility than the Pearson system. Instead of fitting moments, QPDs are typically fit toempirical CDFor other data withlinear least squares.
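To illustrate what such a fit looks like in practice, here is a minimal sketch of a four-term metalog fitted to an empirical CDF by ordinary linear least squares. The basis terms follow the standard metalog quantile-function expansion; the sample data are synthetic, and no feasibility check on the resulting coefficients (i.e., that the fitted quantile function is increasing) is performed.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.sort(rng.gamma(shape=2.0, scale=1.5, size=500))    # synthetic sample to fit
n = data.size
y = (np.arange(1, n + 1) - 0.5) / n                          # empirical CDF probabilities

# Four-term metalog basis evaluated at the empirical probabilities
L = np.log(y / (1 - y))
basis = np.column_stack([np.ones(n), L, (y - 0.5) * L, y - 0.5])

coeffs, *_ = np.linalg.lstsq(basis, data, rcond=None)        # linear least squares fit
print("metalog coefficients:", coeffs)

def metalog_quantile(p, a):
    # Quantile function of the fitted four-term metalog (valid only if the fit is feasible)
    Lp = np.log(p / (1 - p))
    return a[0] + a[1] * Lp + a[2] * (p - 0.5) * Lp + a[3] * (p - 0.5)

print("fitted median:", metalog_quantile(0.5, coeffs))
```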
Examples of modern alternatives to the Pearson skewness-vs-kurtosis diagram are: (i) https://github.com/SchildCode/PearsonPlot and (ii) the "Cullen and Frey graph" in the statistical application R.
These models are used in financial markets, given their ability to be parametrized in a way that has intuitive meaning for market traders. A number of models are in current use that capture the stochastic nature of the volatility of rates, stocks, etc.,[which?][citation needed]and this family of distributions may prove to be one of the more important.
In the United States, the Log-Pearson III is the default distribution for flood frequency analysis.[5]
More recently, alternatives to the Pearson distributions have been developed that are more flexible and easier to fit to data; see the metalog distributions.
|
https://en.wikipedia.org/wiki/Pearson_distribution
|
In algebraic number theory, the Grunwald–Wang theorem is a local-global principle stating that—except in some precisely defined cases—an element x in a number field K is an nth power in K if it is an nth power in the completion K_𝔭 for all but finitely many primes 𝔭 of K. For example, a rational number is a square of a rational number if it is a square of a p-adic number for almost all prime numbers p.
It was introduced by Wilhelm Grunwald (1933), but there was a mistake in this original version that was found and corrected by Shianghao Wang (1948). The theorem considered by Grunwald and Wang was more general than the one stated above, as they discussed the existence of cyclic extensions with certain local properties, and the statement about nth powers is a consequence of this.
Some days later I was withArtinin his office when Wang appeared. He said he had a counterexample to a lemma which had been used in the proof. An hour or two later, he produced a counterexample to the theorem itself... Of course he [Artin] was astonished, as were all of us students, that a famous theorem with two published proofs, one of which we had all heard in the seminar without our noticing anything, could be wrong.
Grunwald (1933), a student of Helmut Hasse, gave an incorrect proof of the erroneous statement that an element in a number field is an nth power if it is an nth power locally almost everywhere. George Whaples (1942) gave another incorrect proof of this incorrect statement. However, Wang (1948) discovered the following counterexample: 16 is a p-adic 8th power for all odd primes p, but is not a rational or 2-adic 8th power. In his doctoral thesis, Wang (1950), written under Emil Artin, gave and proved the correct formulation of Grunwald's assertion by describing the rare cases when it fails. This result is what is now known as the Grunwald–Wang theorem. The history of Wang's counterexample is discussed by Peter Roquette (2005, section 5.3).
Grunwald's original claim that an element that is annth power almost everywhere locally is annth power globally can fail in two distinct ways: the element can be annth power almost everywhere locally but not everywhere locally, or it can be annth power everywhere locally but not globally.
The element 16 in the rationals is an 8th power at all places except 2, but is not an 8th power in the 2-adic numbers.
It is clear that 16 is not a 2-adic 8th power, and hence not a rational 8th power, since the2-adic valuationof 16 is 4 which is not divisible by 8.
Generally, 16 is an 8th power in a field K if and only if the polynomial X^8 − 16 has a root in K. Write
X^8 − 16 = (X^4 − 4)(X^4 + 4) = (X^2 − 2)(X^2 + 2)(X^2 − 2X + 2)(X^2 + 2X + 2).
Thus, 16 is an 8th power inKif and only if 2, −2 or −1 is a square inK. Letpbe any odd prime. It follows from the multiplicativity of theLegendre symbolthat 2, −2 or −1 is a square modulop. Hence, byHensel's lemma, 2, −2 or −1 is a square inQp{\displaystyle \mathbb {Q} _{p}}.
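A brute-force check of this is easy to run. The sketch below (using sympy's primerange only to enumerate odd primes) confirms that x^8 ≡ 16 (mod p) is solvable for every odd prime tested, while no solution exists modulo 32, consistent with the 2-adic valuation argument above.

```python
from sympy import primerange

def is_eighth_power(a, m):
    """Return True if x**8 ≡ a (mod m) has a solution, by exhaustive search."""
    return any(pow(x, 8, m) == a % m for x in range(m))

# 16 is an 8th power modulo every odd prime ...
assert all(is_eighth_power(16, p) for p in primerange(3, 500))

# ... but not modulo 32, so it cannot be an 8th power in the 2-adic integers
print(is_eighth_power(16, 32))   # False: the obstruction first shows up modulo 32
```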
16 is not an 8th power inQ(7){\displaystyle \mathbb {Q} ({\sqrt {7}}\,)}although it is an 8th power locally everywhere (i.e. inQp(7){\displaystyle \mathbb {Q} _{p}({\sqrt {7}}\,)}for allp). This follows from the above and the equalityQ2(7)=Q2(−1){\displaystyle \mathbb {Q} _{2}({\sqrt {7}}\,)=\mathbb {Q} _{2}({\sqrt {-1}}\,)}.
Wang's counterexample has the following interesting consequence showing that one cannot always find acyclicGalois extensionof a given degree of a number field in which finitely many given prime places split in a specified way:
There exists no cyclic degree-8 extensionK/Q{\displaystyle K/\mathbb {Q} }in which the prime 2 is totally inert (i.e., such thatK2/Q2{\displaystyle K_{2}/\mathbb {Q} _{2}}is unramified of degree 8).
For anys≥2{\displaystyle s\geq 2}let
Note that the2s{\displaystyle 2^{s}}thcyclotomic fieldis
A field is calleds-specialif it containsηs{\displaystyle \eta _{s}}, but neitheri{\displaystyle i},ηs+1{\displaystyle \eta _{s+1}}noriηs+1{\displaystyle i\eta _{s+1}}.
Consider a number fieldKand anatural numbern. LetSbe a finite (possibly empty) set of primes ofKand put
The Grunwald–Wang theorem says that
unless we are in thespecial casewhich occurs when the following two conditions both hold:
In the special case the failure of the Hasse principle is finite of order 2: the kernel of
is Z/2Z, generated by the element η_{s+1}^n.
The field of rational numbersK=Q{\displaystyle K=\mathbb {Q} }is 2-special since it containsη2=0{\displaystyle \eta _{2}=0}, but neitheri{\displaystyle i},η3=2{\displaystyle \eta _{3}={\sqrt {2}}}noriη3=−2{\displaystyle i\eta _{3}={\sqrt {-2}}}. The special set isS0={2}{\displaystyle S_{0}=\{2\}}. Thus, the special case in the Grunwald–Wang theorem occurs whennis divisible by 8, andScontains 2. This explains Wang's counterexample and shows that it isminimal. It is also seen that an element inQ{\displaystyle \mathbb {Q} }is annth power if it is ap-adicnth power for allp.
The fieldK=Q(7){\displaystyle K=\mathbb {Q} ({\sqrt {7}}\,)}is 2-special as well, but withS0=∅{\displaystyle S_{0}=\emptyset }. This explains the other counterexample above.[1]
|
https://en.wikipedia.org/wiki/Grunwald%E2%80%93Wang_theorem
|
Inquantum physics, aquantum stateis a mathematical entity that embodies the knowledge of a quantum system.Quantum mechanicsspecifies the construction, evolution, andmeasurementof a quantum state. The result is a prediction for the system represented by the state. Knowledge of the quantum state, and the rules for the system's evolution in time, exhausts all that can be known about a quantum system.
Quantum states may be defined differently for different kinds of systems or problems. Two broad categories are
Historical, educational, and application-focused problems typically feature wave functions; modern professional physics uses the abstract vector states. In both categories, quantum states divide into pure versus mixed states, or into coherent states and incoherent states. Categories with special properties include stationary states for time independence and quantum vacuum states in quantum field theory.
As a tool for physics, quantum states grew out of states inclassical mechanics. A classical dynamical state consists of a set of dynamical variables with well-definedrealvalues at each instant of time.[1]: 3For example, the state of a cannon ball would consist of its position and velocity. The state values evolve under equations of motion and thus remain strictly determined. If we know the position of a cannon and the exit velocity of its projectiles, then we can use equations containing the force of gravity to predict the trajectory of a cannon ball precisely.
Similarly, quantum states consist of sets of dynamical variables that evolve under equations of motion. However, the values derived from quantum states arecomplex numbers, quantized, limited byuncertainty relations,[1]: 159and only provide aprobability distributionfor the outcomes for a system. These constraints alter the nature of quantum dynamic variables. For example, the quantum state of an electron in adouble-slit experimentwould consist of complex values over the detection region and, when squared, only predict the probability distribution of electron counts across the detector.
The process of describing a quantum system with quantum mechanics begins with identifying a set of variables defining the quantum state of the system.[1]: 204The set will containcompatible and incompatible variables. Simultaneous measurement of acomplete set of compatible variablesprepares the system in a unique state. The state then evolves deterministically according to theequations of motion. Subsequent measurement of the state produces a sample from a probability distribution predicted by the quantum mechanicaloperatorcorresponding to the measurement.
The fundamentally statistical or probabilistic nature of quantum measurements changes the role of quantum states in quantum mechanics compared to classical states in classical mechanics. In classical mechanics, the initial state of one or more bodies is measured; the state evolves according to the equations of motion; measurements of the final state are compared to predictions. In quantum mechanics, ensembles of identically prepared quantum states evolve according to the equations of motion and many repeated measurements are compared to predicted probability distributions.[1]: 204
Measurements, macroscopic operations on quantum states, filter the state.[1]: 196Whatever the input quantum state might be, repeated identical measurements give consistent values. For this reason, measurements 'prepare' quantum states for experiments, placing the system in a partially defined state. Subsequent measurements may either further prepare the system – these are compatible measurements – or it may alter the state, redefining it – these are called incompatible or complementary measurements. For example, we may measure the momentum of a state along thex{\displaystyle x}axis any number of times and get the same result, but if we measure the position after once measuring the momentum, subsequent measurements of momentum are changed. The quantum state appears unavoidably altered by incompatible measurements. This is known as theuncertainty principle.
The quantum state after a measurement is in aneigenstatecorresponding to that measurement and the value measured.[1]: 202Other aspects of the state may be unknown. Repeating the measurement will not alter the state. In some cases, compatible measurements can further refine the state, causing it to be an eigenstate corresponding to all these measurements.[2]A full set of compatible measurements produces apure state. Any state that is not pure is called amixed stateas discussed in more depthbelow.[1]: 204[3]: 73
The eigenstate solutions to theSchrödinger equationcan be formed into pure states. Experiments rarely produce pure states. Therefore statistical mixtures of solutions must be compared to experiments.[1]: 204
The same physical quantum state can be expressed mathematically in different ways calledrepresentations.[1]The position wave function is one representation often seen first in introductions to quantum mechanics. The equivalent momentum wave function is another wave function based representation. Representations are analogous to coordinate systems[1]: 244or similar mathematical devices likeparametric equations. Selecting a representation will make some aspects of a problem easier at the cost of making other things difficult.
In formal quantum mechanics (see§ Formalism in quantum physicsbelow) the theory develops in terms of abstract 'vector space', avoiding any particular representation. This allows many elegant concepts of quantum mechanics to be expressed and to be applied even in cases where no classical analog exists.[1]: 244
Wave functionsrepresent quantum states, particularly when they are functions of position or ofmomentum. Historically, definitions of quantum states used wavefunctions before the more formal methods were developed.[4]: 268The wave function is a complex-valued function of any complete set of commuting or compatibledegrees of freedom. For example, one set could be thex,y,z{\displaystyle x,y,z}spatial coordinates of an electron.
Preparing a system by measuring the complete set of compatible observables produces apure quantum state. More common, incomplete preparation produces amixed quantum state. Wave function solutions ofSchrödinger's equations of motionfor operators corresponding to measurements can readily be expressed as pure states; they must be combined with statistical weights matching experimental preparation to compute the expected probability distribution.[1]: 205
Numerical or analytic solutions in quantum mechanics can be expressed aspure states. These solution states, calledeigenstates, are labeled with quantized values, typicallyquantum numbers.
For example, when dealing with theenergy spectrumof theelectronin ahydrogen atom, the relevant pure states are identified by theprincipal quantum numbern, theangular momentum quantum numberℓ, themagnetic quantum numberm, and thespinz-componentsz. For another example, if the spin of an electron is measured in any direction, e.g. with aStern–Gerlach experiment, there are two possible results: up or down. A pure state here is represented by a two-dimensionalcomplexvector(α,β){\displaystyle (\alpha ,\beta )}, with a length of one; that is, with|α|2+|β|2=1,{\displaystyle |\alpha |^{2}+|\beta |^{2}=1,}where|α|{\displaystyle |\alpha |}and|β|{\displaystyle |\beta |}are theabsolute valuesofα{\displaystyle \alpha }andβ{\displaystyle \beta }.
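For instance, a small sketch (using numpy; the particular amplitudes are arbitrary illustrative values) of normalizing such a two-component spin state and reading off the up/down probabilities:

```python
import numpy as np

# An arbitrary (unnormalized) spin-1/2 state (alpha, beta) in the up/down basis
alpha, beta = 1.0 + 2.0j, 3.0 - 1.0j
state = np.array([alpha, beta])

state = state / np.linalg.norm(state)      # enforce |alpha|^2 + |beta|^2 = 1

p_up, p_down = np.abs(state) ** 2          # Born-rule probabilities for up and down
print(p_up, p_down, p_up + p_down)         # the probabilities sum to 1
```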
Thepostulates of quantum mechanicsstate that pure states, at a given timet, correspond tovectorsin aseparablecomplexHilbert space, while each measurable physical quantity (such as the energy or momentum of aparticle) is associated with a mathematicaloperatorcalled theobservable. The operator serves as alinear functionthat acts on the states of the system. Theeigenvaluesof the operator correspond to the possible values of the observable. For example, it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s. The correspondingeigenvector(which physicists call aneigenstate) with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with noquantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s.
On the other hand, a pure state described as asuperpositionof multiple different eigenstatesdoesin general have quantum uncertainty for the given observable. Usingbra–ket notation, thislinear combinationof eigenstates can be represented as:[5]: 22, 171, 172|Ψ(t)⟩=∑nCn(t)|Φn⟩.{\displaystyle |\Psi (t)\rangle =\sum _{n}C_{n}(t)|\Phi _{n}\rangle .}The coefficient that corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum state changes in time is governed by thetime evolution operator.
A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. Amixtureof quantum states is again a quantum state.
A mixed state for electron spins, in the density-matrix formulation, has the structure of a2×2{\displaystyle 2\times 2}matrix that isHermitianand positive semi-definite, and hastrace1.[6]A more complicated case is given (inbra–ket notation) by thesinglet state, which exemplifiesquantum entanglement:|ψ⟩=12(|↑↓⟩−|↓↑⟩),{\displaystyle \left|\psi \right\rangle ={\frac {1}{\sqrt {2}}}{\bigl (}\left|\uparrow \downarrow \right\rangle -\left|\downarrow \uparrow \right\rangle {\bigr )},}which involvessuperpositionof joint spin states for two particles with spin 1/2. The singlet state satisfies the property that if the particles' spins are measured along the same direction then either the spin of the first particle is observed up and the spin of the second particle is observed down, or the first one is observed down and the second one is observed up, both possibilities occurring with equal probability.
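A small numerical sketch (numpy, with the two-particle basis ordered |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩; the ordering is a convention chosen here) makes the anticorrelation explicit: measuring both spins along the same axis gives ↑↓ or ↓↑, each with probability 1/2, and never ↑↑ or ↓↓.

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet state (|↑↓⟩ - |↓↑⟩)/sqrt(2) in the tensor-product basis |↑↑⟩,|↑↓⟩,|↓↑⟩,|↓↓⟩
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

probs = np.abs(singlet) ** 2
for label, p in zip(["up,up", "up,down", "down,up", "down,down"], probs):
    print(f"P({label}) = {p:.2f}")          # 0.00, 0.50, 0.50, 0.00
```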
A pure quantum state can be represented by arayin aprojective Hilbert spaceover thecomplex numbers, while mixed states are represented bydensity matrices, which arepositive semidefinite operatorsthat act on Hilbert spaces.[7][3]TheSchrödinger–HJW theoremclassifies the multitude of ways to write a given mixed state as aconvex combinationof pure states.[8]Before a particularmeasurementis performed on a quantum system, the theory gives only aprobability distributionfor the outcome, and the form that this distribution takes is completely determined by the quantum state and thelinear operatorsdescribing the measurement. Probability distributions for different measurements exhibit tradeoffs exemplified by theuncertainty principle: a state that implies a narrow spread of possible outcomes for one experiment necessarily implies a wide spread of possible outcomes for another.
Statistical mixtures of states are a different type of linear combination. A statistical mixture of states is a statistical ensemble of independent systems. Statistical mixtures represent the degree of knowledge about the preparation of the system, whereas the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states Φ_n. A number P_n represents the probability of a randomly selected system being in the state Φ_n. Unlike the linear combination case, each system is in a definite eigenstate.[9][10]
The expectation value⟨A⟩σ{\displaystyle {\langle A\rangle }_{\sigma }}of an observableAis a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories.
There is no state that is simultaneously an eigenstate forallobservables. For example, we cannot prepare a state such that both the position measurementQ(t)and the momentum measurementP(t)(at the same timet) are known exactly; at least one of them will have a range of possible values.[a]This is the content of theHeisenberg uncertainty relation.
Moreover, in contrast to classical mechanics, it is unavoidable thatperforming a measurement on the system generally changes its state.[11][12][13]: 4More precisely: After measuring an observableA, the system will be in an eigenstate ofA; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measureAtwice in the same run of the experiment, the measurements being directly consecutive in time,[b]then they will produce the same results. This has some strange consequences, however, as follows.
Consider twoincompatible observables,AandB, whereAcorresponds to a measurement earlier in time thanB.[c]Suppose that the system is in an eigenstate ofBat the experiment's beginning. If we measure onlyB, all runs of the experiment will yield the same result.
If we measure firstAand thenBin the same run of the experiment, the system will transfer to an eigenstate ofAafter the first measurement, and we will generally notice that the results ofBare statistical. Thus:Quantum mechanical measurements influence one another, and the order in which they are performed is important.
Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, calledentangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, seeQuantum entanglement. These entangled states lead to experimentally testable properties (Bell's theorem)
that allow us to distinguish between quantum theory and alternative classical (non-quantum) models.
One can take the observables to be dependent on time, while the stateσwas fixed once at the beginning of the experiment. This approach is called theHeisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observablesP(t),Q(t).) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as theSchrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state|Ψ(t)⟩=∑nCn(t)|Φn⟩{\textstyle |\Psi (t)\rangle =\sum _{n}C_{n}(t)|\Phi _{n}\rangle }.) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention.
Both viewpoints are used in quantum theory. While non-relativisticquantum mechanicsis usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, forquantum field theory. Compare withDirac picture.[14]:65
Quantum physics is most commonly formulated in terms oflinear algebra, as follows. Any given system is identified with some finite- or infinite-dimensionalHilbert space. The pure states correspond to vectors ofnorm1. Thus the set of all pure states corresponds to theunit spherein the Hilbert space, because the unit sphere is defined as the set of all vectors with norm 1.
Multiplying a pure state by ascalaris physically inconsequential (as long as the state is considered by itself). If a vector in a complex Hilbert spaceH{\displaystyle H}can be obtained from another vector by multiplying by some non-zero complex number, the two vectors inH{\displaystyle H}are said to correspond to the samerayin theprojective Hilbert spaceP(H){\displaystyle \mathbf {P} (H)}ofH{\displaystyle H}. Note that although the wordrayis used, properly speaking, a point in the projective Hilbert space corresponds to alinepassing through the origin of the Hilbert space, rather than ahalf-line, orrayin thegeometrical sense.
Theangular momentumhas the same dimension (M·L2·T−1) as thePlanck constantand, at quantum scale, behaves as adiscretedegree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described withspinors. In non-relativistic quantum mechanics thegroup representationsof theLie groupSU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative numberSthat, in units of thereduced Planck constantħ, is either aninteger(0, 1, 2, ...) or ahalf-integer(1/2, 3/2, 5/2, ...). For amassiveparticle with spinS, itsspin quantum numbermalways assumes one of the2S+ 1possible values in the set{−S,−S+1,…,S−1,S}{\displaystyle \{-S,-S+1,\ldots ,S-1,S\}}
As a consequence, the quantum state of a particle with spin is described by avector-valued wave function with values inC2S+1. Equivalently, it is represented by acomplex-valued functionof four variables: one discretequantum numbervariable (for the spin) is added to the usual three continuous variables (for the position in space).
The quantum state of a system ofNparticles, each potentially with spin, is described by a complex-valued function with four variables per particle, corresponding to 3spatial coordinatesandspin, e.g.|ψ(r1,m1;…;rN,mN)⟩.{\displaystyle |\psi (\mathbf {r} _{1},\,m_{1};\;\dots ;\;\mathbf {r} _{N},\,m_{N})\rangle .}
Here, the spin variablesmνassume values from the set{−Sν,−Sν+1,…,Sν−1,Sν}{\displaystyle \{-S_{\nu },\,-S_{\nu }+1,\,\ldots ,\,S_{\nu }-1,\,S_{\nu }\}}whereSν{\displaystyle S_{\nu }}is the spin ofνth particle.Sν=0{\displaystyle S_{\nu }=0}for a particle that does not exhibit spin.
The treatment ofidentical particlesis very different forbosons(particles with integer spin) versusfermions(particles with half-integer spin). The aboveN-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not allNparticles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical variables, according to its statistics (bosonic or fermionic).
Electrons are fermions with S = 1/2; photons (quanta of light) are bosons with S = 1 (although in the vacuum they are massless and cannot be described with Schrödinger mechanics).
When symmetrization or anti-symmetrization is unnecessary,N-particle spaces of states can be obtained simply bytensor productsof one-particle spaces, to which we will return later.
A state|ψ⟩{\displaystyle |\psi \rangle }belonging to aseparablecomplexHilbert spaceH{\displaystyle H}can always be expressed uniquely as alinear combinationof elements of anorthonormal basisofH{\displaystyle H}.
Usingbra–ket notation, this means any state|ψ⟩{\displaystyle |\psi \rangle }can be written as|ψ⟩=∑ici|ki⟩,=∑i|ki⟩⟨ki|ψ⟩,{\displaystyle {\begin{aligned}|\psi \rangle &=\sum _{i}c_{i}|{k_{i}}\rangle ,\\&=\sum _{i}|{k_{i}}\rangle \langle k_{i}|\psi \rangle ,\end{aligned}}}withcomplexcoefficientsci=⟨ki|ψ⟩{\displaystyle c_{i}=\langle {k_{i}}|\psi \rangle }and basis elements|ki⟩{\displaystyle |k_{i}\rangle }. In this case, thenormalization conditiontranslates to⟨ψ|ψ⟩=∑i⟨ψ|ki⟩⟨ki|ψ⟩=∑i|ci|2=1.{\displaystyle \langle \psi |\psi \rangle =\sum _{i}\langle \psi |{k_{i}}\rangle \langle k_{i}|\psi \rangle =\sum _{i}\left|c_{i}\right|^{2}=1.}In physical terms,|ψ⟩{\displaystyle |\psi \rangle }has been expressed as aquantum superpositionof the "basis states"|ki⟩{\displaystyle |{k_{i}}\rangle }, i.e., theeigenstatesof an observable. In particular, if said observable is measured on the normalized state|ψ⟩{\displaystyle |\psi \rangle }, then|ci|2=|⟨ki|ψ⟩|2,{\displaystyle |c_{i}|^{2}=|\langle {k_{i}}|\psi \rangle |^{2},}is the probability that the result of the measurement iski{\displaystyle k_{i}}.[5]: 22
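A short sketch of this expansion (numpy; the state and basis are arbitrary three-dimensional examples, with the standard basis standing in for the eigenstates |k_i⟩):

```python
import numpy as np

# An arbitrary normalized state in a 3-dimensional Hilbert space
psi = np.array([1.0, 1.0j, -2.0])
psi = psi / np.linalg.norm(psi)

# Orthonormal basis |k_i>: here simply the standard basis vectors
basis = np.eye(3, dtype=complex)

c = np.array([np.vdot(k, psi) for k in basis])   # c_i = <k_i|psi>
probs = np.abs(c) ** 2                           # Born-rule probabilities |c_i|^2

print(probs, probs.sum())                        # the probabilities sum to 1
```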
In general, the expression for probability always consists of a relation between the quantum state and a portion of the spectrum of the dynamical variable (i.e. random variable) being observed.[15]: 98[16]: 53 For example, the situation above describes the discrete case, as the eigenvalues k_i belong to the point spectrum. Likewise, the wave function is just the eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) E, the energy of the system.
An example of the continuous case is given by the position operator. The probability measure for a system in state ψ is given by:[17] Pr(x ∈ B|ψ) = ∫_{B⊂ℝ} |ψ(x)|² dx, where |ψ(x)|² is the probability density function for finding a particle at a given position. These examples emphasize the distinction in characteristics between the state and the observable. That is, whereas ψ is a pure state belonging to H, the (generalized) eigenvectors of the position operator do not.[18]
Though closely related, pure states are not the same as bound states belonging to thepure point spectrumof an observable with no quantum uncertainty. A particle is said to be in abound stateif it remains localized in a bounded region of space for all times. A pure state|ϕ⟩{\displaystyle |\phi \rangle }is called a bound stateif and only iffor everyε>0{\displaystyle \varepsilon >0}there is acompact setK⊂R3{\displaystyle K\subset \mathbb {R} ^{3}}such that∫K|ϕ(r,t)|2d3r≥1−ε{\displaystyle \int _{K}|\phi (\mathbf {r} ,t)|^{2}\,\mathrm {d} ^{3}\mathbf {r} \geq 1-\varepsilon }for allt∈R{\displaystyle t\in \mathbb {R} }.[19]The integral represents the probability that a particle is found in a bounded regionK{\displaystyle K}at any timet{\displaystyle t}. If the probability remains arbitrarily close to1{\displaystyle 1}then the particle is said to remain inK{\displaystyle K}.
For example,non-normalizablesolutions of thefree Schrödinger equationcan be expressed as functions that are normalizable, usingwave packets. These wave packets belong to the pure point spectrum of a correspondingprojection operatorwhich, mathematically speaking, constitutes an observable.[16]: 48However, they are not bound states.
As mentioned above, quantum states may besuperposed. If|α⟩{\displaystyle |\alpha \rangle }and|β⟩{\displaystyle |\beta \rangle }are two kets corresponding to quantum states, the ketcα|α⟩+cβ|β⟩{\displaystyle c_{\alpha }|\alpha \rangle +c_{\beta }|\beta \rangle }is also a quantum state of the same system. Bothcα{\displaystyle c_{\alpha }}andcβ{\displaystyle c_{\beta }}can be complex numbers; their relative amplitude and relative phase will influence the resulting quantum state.
Writing the superposed state usingcα=Aαeiθαcβ=Aβeiθβ{\displaystyle c_{\alpha }=A_{\alpha }e^{i\theta _{\alpha }}\ \ c_{\beta }=A_{\beta }e^{i\theta _{\beta }}}and defining the norm of the state as:|cα|2+|cβ|2=Aα2+Aβ2=1{\displaystyle |c_{\alpha }|^{2}+|c_{\beta }|^{2}=A_{\alpha }^{2}+A_{\beta }^{2}=1}and extracting the common factors gives:eiθα(Aα|α⟩+1−Aα2eiθβ−iθα|β⟩){\displaystyle e^{i\theta _{\alpha }}\left(A_{\alpha }|\alpha \rangle +{\sqrt {1-A_{\alpha }^{2}}}e^{i\theta _{\beta }-i\theta _{\alpha }}|\beta \rangle \right)}The overall phase factor in front has no physical effect.[20]: 108Only the relative phase affects the physical nature of the superposition.
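A short numerical check of this point (numpy; the two kets are the standard basis of C², and the measurement direction |+⟩ = (|0⟩ + |1⟩)/√2 is chosen here only to make the interference visible):

```python
import numpy as np

k0 = np.array([1.0, 0.0], dtype=complex)
k1 = np.array([0.0, 1.0], dtype=complex)
plus = (k0 + k1) / np.sqrt(2)              # measurement direction sensitive to relative phase

def prob_plus(theta_alpha, theta_beta):
    psi = (np.exp(1j * theta_alpha) * k0 + np.exp(1j * theta_beta) * k1) / np.sqrt(2)
    return abs(np.vdot(plus, psi)) ** 2

print(prob_plus(0.0, 0.0))      # 1.0
print(prob_plus(0.7, 0.7))      # 1.0  -> shifting the overall phase changes nothing
print(prob_plus(0.0, np.pi))    # 0.0  -> changing the relative phase changes the outcome
```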
One example of superposition is thedouble-slit experiment, in which superposition leads toquantum interference. Another example of the importance of relative phase isRabi oscillations, where the relative phase of two states varies in time due to theSchrödinger equation. The resulting superposition ends up oscillating back and forth between two different states.
Apure quantum stateis a state which can be described by a single ket vector, as described above. Amixed quantum stateis astatistical ensembleof pure states (seeQuantum statistical mechanics).[3]: 73
Mixed states arise in quantum mechanics in two different situations: first, when the preparation of the system is not fully known, and thus one must deal with astatistical ensembleof possible preparations; and second, when one wants to describe a physical system which isentangledwith another, as its state cannot be described by a pure state. In the first case, there could theoretically be another person who knows the full history of the system, and therefore describe the same system as a pure state; in this case, the density matrix is simply used to represent the limited knowledge of a quantum state. In the second case, however, the existence of quantum entanglement theoretically prevents the existence of complete knowledge about the subsystem, and it's impossible for any person to describe the subsystem of an entangled pair as a pure state.
Mixed states inevitably arise from pure states when, for a composite quantum systemH1⊗H2{\displaystyle H_{1}\otimes H_{2}}with anentangledstate on it, the partH2{\displaystyle H_{2}}is inaccessible to the observer.[3]: 121–122The state of the partH1{\displaystyle H_{1}}is expressed then as thepartial traceoverH2{\displaystyle H_{2}}.
A mixed statecannotbe described with a single ket vector.[21]: 691–692Instead, it is described by its associateddensity matrix(ordensity operator), usually denotedρ. Density matrices can describe both mixedandpure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert spaceH{\displaystyle H}can be always represented as the partial trace of a pure quantum state (called apurification) on a larger bipartite systemH⊗K{\displaystyle H\otimes K}for a sufficiently large Hilbert spaceK{\displaystyle K}.
The density matrix describing a mixed state is defined to be an operator of the formρ=∑sps|ψs⟩⟨ψs|{\displaystyle \rho =\sum _{s}p_{s}|\psi _{s}\rangle \langle \psi _{s}|}wherepsis the fraction of the ensemble in each pure state|ψs⟩.{\displaystyle |\psi _{s}\rangle .}The density matrix can be thought of as a way of using the one-particleformalismto describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in.
A simple criterion for checking whether a density matrix is describing a pure or mixed state is that thetraceofρ2is equal to 1 if the state is pure, and less than 1 if the state is mixed.[d][22]Another, equivalent, criterion is that thevon Neumann entropyis 0 for a pure state, and strictly positive for a mixed state.
The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observableAis given by⟨A⟩=∑sps⟨ψs|A|ψs⟩=∑s∑ipsai|⟨αi|ψs⟩|2=tr(ρA){\displaystyle \langle A\rangle =\sum _{s}p_{s}\langle \psi _{s}|A|\psi _{s}\rangle =\sum _{s}\sum _{i}p_{s}a_{i}|\langle \alpha _{i}|\psi _{s}\rangle |^{2}=\operatorname {tr} (\rho A)}where|αi⟩{\displaystyle |\alpha _{i}\rangle }andai{\displaystyle a_{i}}are eigenkets and eigenvalues, respectively, for the operatorA, and "tr" denotes trace.[3]: 73It is important to note that two types of averaging are occurring, one (overi{\displaystyle i}) being the usual expected value of the observable when the quantum is in state|ψs⟩{\displaystyle |\psi _{s}\rangle }, and the other (overs{\displaystyle s}) being a statistical (saidincoherent) average with the probabilitiespsthat the quantum is in those states.
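These rules are easy to check numerically. The minimal sketch below (numpy; the observable is the Pauli-z spin operator and the mixed state is an equal mixture of the σ_x eigenstates, chosen only as an example) computes the ensemble average via tr(ρA) and the purity tr(ρ²) for a pure and a mixed state.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)      # observable A (Pauli z)

# Two pure states: the sigma_x eigenstates |+> and |->
plus  = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Mixed state: 50/50 statistical ensemble of |+> and |->
rho_mixed = 0.5 * np.outer(plus, plus.conj()) + 0.5 * np.outer(minus, minus.conj())
rho_pure  = np.outer(plus, plus.conj())

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    expectation = np.trace(rho @ sz).real            # <A> = tr(rho A)
    purity = np.trace(rho @ rho).real                # tr(rho^2): 1 for pure, < 1 for mixed
    print(name, expectation, purity)
```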
States can be formulated in terms of observables, rather than as vectors in a vector space. These arepositive normalized linear functionalson aC*-algebra, or sometimes other classes of algebras of observables.
SeeState on a C*-algebraandGelfand–Naimark–Segal constructionfor more details.
The concept of quantum states, in particular the content of the sectionFormalism in quantum physicsabove, is covered in most standard textbooks on quantum mechanics.
For a discussion of conceptual aspects and a comparison with classical states, see:
For a more detailed coverage of mathematical aspects, see:
For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes forPhysics 219at Caltech.
For a discussion of geometric aspects see:
|
https://en.wikipedia.org/wiki/Quantum_states
|
Business intelligence softwareis a type ofapplication softwaredesigned to retrieve, analyze, transform and report data forbusiness intelligence(BI). The applications generally read data that has been previously stored, often - though not necessarily - in adata warehouseordata mart.
The first comprehensive business intelligence systems were developed by IBM and Siebel (later acquired by Oracle) in the period between 1970 and 1990.[1][2] At the same time, small developer teams were emerging with attractive ideas, and pushing out some of the products companies still use nowadays.[3]
In 1988, specialists and vendors organized a Multiway Data Analysis Consortium inRome, where they considered making data management and analytics more efficient, and foremost available to smaller and financially restricted businesses. By 2000, there were many professional reporting systems and analytic programs, some owned by top performing software producers in theUnited States of America.[4]
In the years after 2000, business intelligence software producers became interested in producing universally applicable BI systems that do not require expensive installation and could therefore be adopted by smaller and midmarket businesses that could not afford on-premises maintenance. These aspirations emerged in parallel with the cloud hosting trend, which is how most vendors came to develop independent systems with unrestricted access to information.[5]
From 2006 onwards, cloud-stored information and data management shifted toward mobile use, largely to the benefit of decentralized and remote teams looking to work with data or gain full visibility over it outside the office. In response to the success of fully optimized browser-based versions, vendors have more recently begun releasing mobile-specific product applications for both Android and iOS users.[6] Cloud-hosted data analytics made it possible for companies to categorize and process large volumes of data, which is why we can currently speak of unlimited visualization and intelligent decision making.
The key general categories of business intelligence applications are:
Except for spreadsheets, these tools are provided as standalone applications, suites of applications, components ofEnterprise resource planningsystems,application programming interfacesor as components of software targeted to a specific industry. The tools are sometimes packaged intodata warehouse appliances.
|
https://en.wikipedia.org/wiki/Business_intelligence_software
|
Books on cryptography have been published sporadically and with variable quality for a long time. This is despite the paradox that secrecy is of the essence in sending confidential messages – see Kerckhoffs' principle.
In contrast, the revolutions incryptographyand securecommunicationssince the 1970s are covered in the available literature.
An early example of a book about cryptography was a Roman work,[which?]now lost and known only by references. Many early cryptographic works were esoteric, mystical, and/or reputation-promoting; cryptography being mysterious, there was much opportunity for such things. At least one work byTrithemiuswas banned by the Catholic Church and put on theIndex Librorum Prohibitorumas being about black magic or witchcraft. Many writers claimed to have invented unbreakableciphers. None were, though it sometimes took a long while to establish this.
In the 19th century, the general standard improved somewhat (e.g., works byAuguste Kerckhoffs,Friedrich Kasiski, andÉtienne Bazeries). ColonelParker HittandWilliam Friedmanin the early 20th century also wrote books on cryptography. These authors, and others, mostly abandoned any mystical or magical tone.
With the invention of radio, much of military communications went wireless, allowing the possibility of enemy interception much more readily than tapping into a landline. This increased the need to protect communications. By the end ofWorld War I, cryptography and its literature began to be officially limited. One exception was the 1931 bookThe American Black ChamberbyHerbert Yardley, which gave some insight into American cryptologic success stories, including theZimmermann telegramand the breaking of Japanese codes during theWashington Naval Conference.
Significant books on cryptography include:
From the end of World War II until the early 1980s most aspects of modern cryptography were regarded as the special concern of governments and the military and were protected by custom and, in some cases, by statute. The most significant work to be published on cryptography in this period is undoubtedlyDavid Kahn'sThe Codebreakers,[7]which was published at a time (mid-1960s) when virtually no information on the modern practice of cryptography was available.[8]Kahn has said that over ninety percent of its content was previously unpublished.[9]
The book caused serious concern at theNSAdespite its lack of coverage of specific modern cryptographic practice, so much so that after failing to prevent the book being published, NSA staff were informed to not even acknowledge the existence of the book if asked. In the US military, mere possession of a copy by cryptographic personnel was grounds for some considerable suspicion[citation needed]. Perhaps the single greatest importance of the book was the impact it had on the next generation of cryptographers.Whitfield Diffiehas made comments in interviews about the effect it had on him.[10][failed verification]
|
https://en.wikipedia.org/wiki/Books_on_cryptography
|
Data processingis thecollectionand manipulation of digital data to produce meaningful information.[1]Data processing is a form ofinformation processing, which is the modification (processing) of information in any manner detectable by an observer.[note 1]
Data processing may involve various processes, including:
TheUnited States Census Bureauhistory illustrates the evolution of data processing from manual through electronic procedures.
Although widespread use of the termdata processingdates only from the 1950s,[2]data processing functions have been performed manually for millennia. For example,bookkeepinginvolves functions such as posting transactions and producing reports like thebalance sheetand thecash flow statement. Completely manual methods were augmented by the application ofmechanicalor electroniccalculators. A person whose job was to perform calculations manually or using a calculator was called a "computer."
The1890 United States censusschedule was the first to gather data by individual rather thanhousehold. A number of questions could be answered by making a check in the appropriate box on the form. From 1850 to 1880 the Census Bureau employed "a system of tallying, which, by reason of the increasing number of combinations of classifications required, became increasingly complex. Only a limited number of combinations could be recorded in one tally, so it was necessary to handle the schedules 5 or 6 times, for as many independent tallies."[3]"It took over 7 years to publish the results of the 1880 census"[4]using manual processing methods.
The termautomatic data processingwas applied to operations performed by means ofunit record equipment, such asHerman Hollerith's application ofpunched cardequipment for the1890 United States census. "Using Hollerith's punchcard equipment, the Census Office was able to complete tabulating most of the 1890 census data in 2 to 3 years, compared with 7 to 8 years for the 1880 census. It is estimated that using Hollerith's system saved some $5 million in processing costs"[4]in 1890 dollars even though there were twice as many questions as in 1880.
Computerized data processing, orelectronic data processingrepresents a later development, with a computer used instead of several independent pieces of equipment. The Census Bureau first made limited use ofelectronic computersfor the1950 United States census, using aUNIVAC Isystem,[3]delivered in 1952.
The termdata processinghas mostly been subsumed by the more general terminformation technology(IT).[5]The older term "data processing" is suggestive of older technologies. For example, in 1996 theData Processing Management Association(DPMA) changed its name to theAssociation of Information Technology Professionals. Nevertheless, the terms are approximately synonymous.
Commercial data processing involves a large volume of input data, relatively few computational operations, and a large volume of output. For example, an insurance company needs to keep records on tens or hundreds of thousands of policies, print and mail bills, and receive and post payments.
In science and engineering, the termsdata processingandinformation systemsare considered too broad, and the termdata processingis typically used for the initial stage followed by adata analysisin the second stage of the overall data handling.
Data analysis uses specialized algorithms and statistical calculations that are less often observed in a typical general business environment. For data analysis, software suites like SPSS or SAS, or their free counterparts such as DAP, gretl, or PSPP are often used. These tools are usually helpful for processing huge data sets, as they are able to handle enormous amounts of statistical analysis.[6]
Adata processing systemis a combination ofmachines, people, and processes that for a set ofinputsproduces a defined set ofoutputs. The inputs and outputs are interpreted asdata,facts,informationetc. depending on the interpreter's relation to the system.
A term commonly used synonymously withdata or storage (codes) processing systemisinformation system.[7]With regard particularly toelectronic data processing, the corresponding concept is referred to aselectronic data processing system.
A very simple example of a data processing system is the process of maintaining a check register. Transactions— checks and deposits— are recorded as they occur and the transactions are summarized to determine a current balance. Monthly the data recorded in the register is reconciled with a hopefully identical list of transactions processed by the bank.
A more sophisticated record keeping system might further identify the transactions— for example deposits by source or checks by type, such as charitable contributions. This information might be used to obtain information like the total of all contributions for the year.
The important thing about this example is that it is a system, in which all transactions are recorded consistently, and the same method of bank reconciliation is used each time.
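A toy sketch of such a system is shown below (Python; the transactions and the bank's list are made-up illustrative data). It records transactions as they occur, keeps a running balance, and performs a simple reconciliation against the bank's records.

```python
from decimal import Decimal

# Register transactions recorded as they occur: (description, amount)
register = [
    ("opening deposit", Decimal("500.00")),
    ("check #101", Decimal("-120.50")),
    ("deposit", Decimal("250.00")),
    ("check #102", Decimal("-75.25")),
]

balance = sum(amount for _, amount in register)
print("current balance:", balance)                        # 554.25

# Monthly reconciliation against the bank's (hopefully identical) list of transactions
bank_statement = [Decimal("500.00"), Decimal("-120.50"),
                  Decimal("250.00"), Decimal("-75.25")]

reconciled = sorted(a for _, a in register) == sorted(bank_statement)
print("reconciled:", reconciled)                          # True
```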
This is a flowchart of a data processing system combining manual and computerized processing to handle accounts receivable, billing, and general ledger.
|
https://en.wikipedia.org/wiki/Data_processing
|
A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed to operate over open 900 MHz, 2.4 GHz, 4.9, 5, 24, and 60 GHz bands or licensed frequencies in the UHF band (including the MMDS frequency band), LMDS, and other bands from 6 GHz to 80 GHz.
In the US, theFederal Communications Commission(FCC) released a Report and Order, FCC 05-56 in 2005 that revised the FCC’s rules to open the 3650 MHz band for terrestrial wireless broadband operations.[1]On November 14, 2007 the Commission released a Public Notice (DA 07-4605) in which the Wireless Telecommunications Bureau announced the start date for the licensing and registration process for the 3650-3700 MHz band.[2]
As of July 2015, over 2,000fixed wirelessbroadband providers operate in the US, servicing nearly 4 million customers.[3]
Initially, WISPs were only found in rural areas not covered by cable television or DSL.[4] There were 879 Wi-Fi based WISPs in the Czech Republic as of May 2008,[5][6] making it the country with the most Wi-Fi access points in the whole EU,[7][8] which was a consequence of the then de facto monopoly of the former telecom operator on fixed data networks. Providing wireless Internet has great potential to narrow the "digital gap" or "Internet gap" in developing countries. Geekcorps actively helps in Africa with, among other things, wireless network building. An example of a typical WISP system is the one deployed by Gaiacom Wireless Networks, which is based on Wi-Fi standards. The One Laptop per Child project strongly relies on good Internet connectivity, which can most likely be provided in rural areas only with satellite or wireless network Internet access. In high internet cost countries such as South Africa, prices have been drastically reduced by the government allocating spectrum to smaller WISPs, who are able to deliver high speed broadband at a much lower cost.[9]
Some WISP networks have been started in rural parts of theUnited Kingdom, to address issues with poor broadbandDSLservice (bandwidth) in rural areas ("notspots"), including slow rollout of fibre based services which could improve service (usuallyFibre to the cabinetto groups of rural buildings, potentiallyFibre to the premisesfor isolated buildings). A number of these WISPs[10][11]have been set up via theCommunity Broadband Network, using funds from theEuropean Agricultural Fund for Rural Development
WISPs often offer additional services like location-based content,Virtual Private Networking(VPN) andVoice over IP. Isolated municipal ISPs and larger statewide initiatives alike are tightly focused on wireless networking.[citation needed]
WISPs have a large market share in rural environments wherecableanddigital subscriber linesare not available; further, with technology available, they can meet or beat speeds of legacy cable and telephone systems.[12]In urban environments,gigabit wirelesslinks are common and provide levels of bandwidth previously only available through expensivefiber opticconnections.[13]
Typically, the way that a WISP operates is to order a fiber circuit to the center of the area they wish to serve. From there, the WISP builds backhauls (gigabit wireless or fiber) to elevated points in the region, such as radio towers, tall buildings, grain silos, or water towers. Those locations haveaccess pointsto provide service to individual customers, or backhauls to other towers where they have more equipment. The WISP may also use gigabit wireless links to connect a PoP (Point of Presence) to several towers, reducing the need to pay for fiber circuits to the tower. For fixed wireless connections, a smalldishor other antenna is mounted to the roof of the customer's building and aligned to the WISP's nearest antenna site. Where a WISP operates over the tightly limited range of the heavily populated2.4 GHz band, as nearly all802.11-based Wi‑Fi providers do, it is not uncommon to also see access points mounted on light posts and customer buildings.
Roamingbetween service providers is possible with the draft protocolWISPr, a set of recommendations which facilitate inter-network and inter-operator roaming of Wi-Fi users.
|
https://en.wikipedia.org/wiki/Wireless_Internet_service_provider
|
TheTonelli–Shanksalgorithm(referred to by Shanks as the RESSOL algorithm) is used inmodular arithmeticto solve forrin a congruence of the formr2≡n(modp), wherepis aprime: that is, to find a square root ofnmodulop.
Tonelli–Shanks cannot be used for composite moduli: finding square roots modulo composite numbers is a computational problem equivalent tointeger factorization.[1]
An equivalent, but slightly more redundant version of this algorithm was developed byAlberto Tonelli[2][3]in 1891. The version discussed here was developed independently byDaniel Shanksin 1973, who explained:
My tardiness in learning of these historical references was because I had lent Volume 1 ofDickson'sHistoryto a friend and it was never returned.[4]
According to Dickson,[3]Tonelli's algorithm can take square roots ofxmodulo prime powerspλapart from primes.
Given a non-zeron{\displaystyle n}and a primep>2{\displaystyle p>2}(which will always be odd),Euler's criteriontells us thatn{\displaystyle n}has a square root (i.e.,n{\displaystyle n}is aquadratic residue) if and only if:
In contrast, if a numberz{\displaystyle z}has no square root (is a non-residue), Euler's criterion tells us that:
It is not hard to find suchz{\displaystyle z}, because half of the integers between 1 andp−1{\displaystyle p-1}have this property. So we assume that we have access to such a non-residue.
By repeatedly dividing by 2, we can write p − 1 as Q·2^S, where Q is odd. Note that if we try
thenR2≡nQ+1=(n)(nQ)(modp){\displaystyle R^{2}\equiv n^{Q+1}=(n)(n^{Q}){\pmod {p}}}. Ift≡nQ≡1(modp){\displaystyle t\equiv n^{Q}\equiv 1{\pmod {p}}}, thenR{\displaystyle R}is a square root ofn{\displaystyle n}. Otherwise, forM=S{\displaystyle M=S}, we haveR{\displaystyle R}andt{\displaystyle t}satisfying:
If, given a choice ofR{\displaystyle R}andt{\displaystyle t}for a particularM{\displaystyle M}satisfying the above (whereR{\displaystyle R}is not a square root ofn{\displaystyle n}), we can easily calculate anotherR{\displaystyle R}andt{\displaystyle t}forM−1{\displaystyle M-1}such that the above relations hold, then we can repeat this untilt{\displaystyle t}becomes a20{\displaystyle 2^{0}}-th root of 1, i.e.,t=1{\displaystyle t=1}. At that pointR{\displaystyle R}is a square root ofn{\displaystyle n}.
We can check whethert{\displaystyle t}is a2M−2{\displaystyle 2^{M-2}}-th root of 1 by squaring itM−2{\displaystyle M-2}times and check whether it is 1. If it is, then we do not need to do anything, as the same choice ofR{\displaystyle R}andt{\displaystyle t}works. But if it is not,t2M−2{\displaystyle t^{2^{M-2}}}must be -1 (because squaring it gives 1, and there can only be two square roots 1 and -1 of 1 modulop{\displaystyle p}).
To find a new pair ofR{\displaystyle R}andt{\displaystyle t}, we can multiplyR{\displaystyle R}by a factorb{\displaystyle b}, to be determined. Thent{\displaystyle t}must be multiplied by a factorb2{\displaystyle b^{2}}to keepR2≡nt(modp){\displaystyle R^{2}\equiv nt{\pmod {p}}}. So, whent2M−2{\displaystyle t^{2^{M-2}}}is -1, we need to find a factorb2{\displaystyle b^{2}}so thattb2{\displaystyle tb^{2}}is a2M−2{\displaystyle 2^{M-2}}-th root of 1, or equivalentlyb2{\displaystyle b^{2}}is a2M−2{\displaystyle 2^{M-2}}-th root of -1.
The trick here is to make use ofz{\displaystyle z}, the known non-residue. The Euler's criterion applied toz{\displaystyle z}shown above says thatzQ{\displaystyle z^{Q}}is a2S−1{\displaystyle 2^{S-1}}-th root of -1. So by squaringzQ{\displaystyle z^{Q}}repeatedly, we have access to a sequence of2i{\displaystyle 2^{i}}-th root of -1. We can select the right one to serve asb{\displaystyle b}. With a little bit of variable maintenance and trivial case compression, the algorithm below emerges naturally.
Operations and comparisons on elements of themultiplicative group of integers modulo pZ/pZ{\displaystyle \mathbb {Z} /p\mathbb {Z} }are implicitly modp.
Inputs:
Outputs:
Algorithm:
Once you have solved the congruence with r, the second solution is −r (mod p). If the least i such that t^(2^i) = 1 is M, then no solution to the congruence exists, i.e. n is not a quadratic residue.
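The procedure described above is straightforward to render in Python. The sketch below is one such rendering; the trial search for a quadratic non-residue z via Euler's criterion and the ValueError raised for non-residues are implementation choices, not part of the original description.

```python
def tonelli_shanks(n, p):
    """Solve r^2 ≡ n (mod p) for an odd prime p; return one root r (the other is p - r)."""
    n %= p
    if n == 0:
        return 0
    if pow(n, (p - 1) // 2, p) != 1:            # Euler's criterion: n is not a residue
        raise ValueError("n is not a quadratic residue modulo p")

    # Write p - 1 = Q * 2^S with Q odd
    Q, S = p - 1, 0
    while Q % 2 == 0:
        Q //= 2
        S += 1

    # Find a quadratic non-residue z by trial
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1

    M, c, t, R = S, pow(z, Q, p), pow(n, Q, p), pow(n, (Q + 1) // 2, p)
    while t != 1:
        # Find the least 0 < i < M with t^(2^i) ≡ 1
        i, t2i = 0, t
        while t2i != 1:
            t2i = t2i * t2i % p
            i += 1
        b = pow(c, 1 << (M - i - 1), p)          # b^2 is a 2^i-th root of 1 times -1 correction
        M, c, t, R = i, b * b % p, t * b * b % p, R * b % p
    return R
```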
This is most useful whenp≡ 1 (mod 4).
For primes such that p ≡ 3 (mod 4), this problem has possible solutions r = ±n^((p+1)/4) (mod p). If these satisfy r² ≡ n (mod p), they are the only solutions. If not, r² ≡ −n (mod p), n is a quadratic non-residue, and there are no solutions.
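A quick sketch of this shortcut (pure Python; p = 23 and n = 13 are arbitrary small illustrative values):

```python
def sqrt_mod_p3mod4(n, p):
    """Candidate square root of n modulo a prime p ≡ 3 (mod 4)."""
    r = pow(n, (p + 1) // 4, p)
    if r * r % p != n % p:
        raise ValueError("n is not a quadratic residue modulo p")
    return r

print(sqrt_mod_p3mod4(13, 23))   # 6, and indeed 6^2 = 36 ≡ 13 (mod 23)
```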
We can show that at the start of each iteration of the loop the followingloop invariantshold:
Initially:
At each iteration, withM',c',t',R'the new values replacingM,c,t,R:
Fromt2M−1=1{\displaystyle t^{2^{M-1}}=1}and the test againstt= 1 at the start of the loop, we see that we will always find aniin 0 <i<Msuch thatt2i=1{\displaystyle t^{2^{i}}=1}.Mis strictly smaller on each iteration, and thus the algorithm is guaranteed to halt. When we hit the conditiont= 1 and halt, the last loop invariant implies thatR2=n.
We can alternately express the loop invariants using theorderof the elements:
Each step of the algorithm movestinto a smaller subgroup by measuring the exact order oftand multiplying it by an element of the same order.
Solving the congruence r² ≡ 5 (mod 41). 41 is prime as required and 41 ≡ 1 (mod 4). 5 is a quadratic residue by Euler's criterion: 5^((41−1)/2) = 5^20 = 1 (as before, operations in (ℤ/41ℤ)^× are implicitly mod 41).
Indeed, 28² ≡ 5 (mod 41) and (−28)² ≡ 13² ≡ 5 (mod 41). So the algorithm yields the two solutions to our congruence.
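Running the sketch implementation given earlier on this example reproduces the same pair of roots:

```python
r = tonelli_shanks(5, 41)   # the function from the sketch above
print(r, 41 - r)            # 28 and 13
print(r * r % 41)           # 5
```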
The Tonelli–Shanks algorithm requires (on average over all possible input (quadratic residues and quadratic nonresidues))
modular multiplications, wherem{\displaystyle m}is the number of digits in the binary representation ofp{\displaystyle p}andk{\displaystyle k}is the number of ones in the binary representation ofp{\displaystyle p}. If the required quadratic nonresiduez{\displaystyle z}is to be found by checking if a randomly taken numbery{\displaystyle y}is a quadratic nonresidue, it requires (on average)2{\displaystyle 2}computations of theLegendre symbol.[5]The average of two computations of theLegendre symbolare explained as follows:y{\displaystyle y}is a quadratic residue with chancep+12p=1+1p2{\displaystyle {\tfrac {\tfrac {p+1}{2}}{p}}={\tfrac {1+{\tfrac {1}{p}}}{2}}}, which is smaller than1{\displaystyle 1}but≥12{\displaystyle \geq {\tfrac {1}{2}}}, so we will on average need to check if ay{\displaystyle y}is a quadratic residue two times.
This shows essentially that the Tonelli–Shanks algorithm works very well if the modulusp{\displaystyle p}is random, that is, ifS{\displaystyle S}is not particularly large with respect to the number of digits in the binary representation ofp{\displaystyle p}. As written above,Cipolla's algorithmworks better than Tonelli–Shanks if (and only if)S(S−1)>8m+20{\displaystyle S(S-1)>8m+20}.
However, if one instead uses Sutherland's algorithm to perform the discrete logarithm computation in the 2-Sylow subgroup ofFp∗{\displaystyle \mathbb {F} _{p}^{\ast }}, one may replaceS(S−1){\displaystyle S(S-1)}with an expression that is asymptotically bounded byO(SlogS/loglogS){\displaystyle O(S\log S/\log \log S)}.[6]Explicitly, one computese{\displaystyle e}such thatce≡nQ{\displaystyle c^{e}\equiv n^{Q}}and thenR≡c−e/2n(Q+1)/2{\displaystyle R\equiv c^{-e/2}n^{(Q+1)/2}}satisfiesR2≡n{\displaystyle R^{2}\equiv n}(note thate{\displaystyle e}is a multiple of 2 becausen{\displaystyle n}is a quadratic residue).
The algorithm requires us to find a quadratic nonresidue z{\displaystyle z}. There is no known deterministic algorithm that runs in polynomial time for finding such a z{\displaystyle z}. However, if the generalized Riemann hypothesis is true, there exists a quadratic nonresidue z<2ln2p{\displaystyle z<2\ln ^{2}{p}},[7] making it possible to check every z{\displaystyle z} up to that limit and find a suitable z{\displaystyle z} within polynomial time. Keep in mind, however, that this is a worst-case bound; in general, a z{\displaystyle z} is found after about two random trials, as stated above.
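As an illustration of this step, here is a small sketch (not part of the original article) that uses Euler's criterion as the Legendre-symbol test and searches for a nonresidue by random trials; the function names and the use of Python's random module are illustrative choices.

```python
import random

def legendre(a, p):
    """Legendre symbol via Euler's criterion: result is 1, p-1 (i.e. -1), or 0."""
    return pow(a, (p - 1) // 2, p)

def random_nonresidue(p):
    """Pick random y until a quadratic nonresidue mod p is found.
    Roughly half of all candidates qualify, so about two trials are needed on average."""
    while True:
        y = random.randrange(2, p)
        if legendre(y, p) == p - 1:
            return y
```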
The Tonelli–Shanks algorithm can (naturally) be used for any process in which square roots modulo a prime are necessary. For example, it can be used for finding points onelliptic curves. It is also useful for the computations in theRabin cryptosystemand in the sieving step of thequadratic sieve.
Tonelli–Shanks can be generalized to any cyclic group (instead of(Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }}) and tokth roots for arbitrary integerk, in particular to taking thekth root of an element of afinite field.[8]
If many square-roots must be done in the same cyclic group and S is not too large, a table of square-roots of the elements of 2-power order can be prepared in advance and the algorithm simplified and sped up as follows.
According to Dickson's "Theory of Numbers"[3]
A. Tonelli[9]gave an explicit formula for the roots ofx2=c(modpλ){\displaystyle x^{2}=c{\pmod {p^{\lambda }}}}[3]
The Dickson reference shows the following formula for the square root ofx2modpλ{\displaystyle x^{2}{\bmod {p^{\lambda }}}}.
Noting that232mod293≡529{\displaystyle 23^{2}{\bmod {29^{3}}}\equiv 529}and noting thatβ=7⋅292{\displaystyle \beta =7\cdot 29^{2}}then
To take another example:23332mod293≡4142{\displaystyle 2333^{2}{\bmod {29^{3}}}\equiv 4142}and
Dickson also attributes the following equation to Tonelli:
Usingp=23{\displaystyle p=23}and using the modulus ofp3{\displaystyle p^{3}}the math follows:
First, find the modular square root modp{\displaystyle p}which can be done by the regular Tonelli algorithm for one or the other roots:
And applying Tonelli's equation (see above):
Dickson's reference[3]clearly shows that Tonelli's algorithm works on moduli ofpλ{\displaystyle p^{\lambda }}.
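Tonelli's explicit formula is not reproduced here, but the claim that a square root modulo p^λ can be obtained from a root modulo p can be illustrated with a simple Hensel-style lifting step, a related but different technique from the formula Dickson attributes to Tonelli. The sketch below is illustrative only (Python 3.8+ for the modular inverse via pow).

```python
def lift_sqrt(r, c, p, lam):
    """Given r with r*r ≡ c (mod p), p an odd prime not dividing c,
    lift r to a square root of c modulo p**lam by repeated refinement."""
    modulus = p
    for _ in range(lam - 1):
        modulus *= p
        # Newton-style update: r <- r - (r^2 - c) * (2r)^(-1)  (mod modulus)
        inv = pow(2 * r, -1, modulus)
        r = (r - (r * r - c) * inv) % modulus
    return r
```

For instance, lift_sqrt(13, 4142, 29, 3) returns 2333, consistent with 2333² ≡ 4142 (mod 29³) noted above (13 is the root of 4142 modulo 29).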
|
https://en.wikipedia.org/wiki/Shanks%E2%80%93Tonelli_algorithm
|
VUCAis an acronym based on the leadership theories ofWarren BennisandBurt Nanus, to describe or to reflect on thevolatility,uncertainty,complexityandambiguityof general conditions and situations.[1][2]TheU.S. Army War Collegeintroduced the concept of VUCA in 1987, to describe a more complex multilateral world perceived as resulting from the end of theCold War.[3]More frequent use and discussion of the term began from 2002.[4][need quotation to verify]It has subsequently spread tostrategic leadershipinorganizations, from for-profitcorporations[5][6]toeducation.[7][8][9]
The VUCA framework provides a lens through which organizations can interpret their challenges and opportunities. It emphasizes strategic foresight, insight, and the behavior of entities within organizations.[10]Furthermore, it highlights both systemic and behavioral failures[11]often associated with organizational missteps.
V =Volatility: Characterizes the rapid and unpredictable nature of change.
U =Uncertainty: Denotes theunpredictabilityof events and issues.
C =Complexity: Describes the intertwined forces and issues, making cause-and-effect relationships unclear.
A =Ambiguity: Points to the unclear realities and potential misunderstandings stemming from mixed messages.
These elements articulate how organizations perceive their current and potential challenges. They establish the parameters for planning and policy-making. Interacting in various ways, they can either complicate decision-making or enhance the ability to strategize, plan, and progress. Essentially, VUCA lays the groundwork for effective management and leadership.
The VUCA framework is a conceptual tool that underscores the conditions and challenges organizations face when making decisions, planning, managing risks, driving change, and solving problems. It primarily shapes an organization's ability to:
VUCA serves as a guideline for fostering awareness and preparedness in various sectors, including business, the military, education, and government. It provides a roadmap for organizations to develop strategies for readiness, foresight, adaptation, and proactive intervention.[12]
VUCA, as a system of thought, revolves around an idea expressed by Andrew Porteous: "Failure in itself may not be a catastrophe. Still, failure to learn from failure is." This perspective underlines the significance of resilience and adaptability in leadership. It suggests that beyond mere competencies, it is behavioural nuances, like the ability to learn from failures and adapt, that distinguish exceptional leaders from average ones. Leaders using VUCA as a guide often see change not just as inevitable but as something to anticipate.[11]
Within VUCA, several thematic areas of consideration emerge, providing a framework for introspection and evaluation:
Within the VUCA system of thought, an organization's ability to navigate these challenges is closely tied to its foundational beliefs, values, and aspirations. Those enterprises that consider themselves prepared and resolved align their strategic approach with VUCA's principles, signaling a holistic awareness.
The essence of VUCA philosophy also emphasizes the need for a deep-rooted understanding of one's environment, spanning technical, social, political, market, and economic realms.[13]
Psychometrics[14]which measure fluid intelligence by tracking information processing when faced with unfamiliar, dynamic, and vague data can predict cognitive performance in VUCA environments.
Volatilityrefers to the different situational social-categorizations of people due to specific traits or reactions that stand out in particular situations. When people act based on a specific situation, there is a possibility that the public categorizes them into a different group than they were in a previous situation. These people might respond differently to individual situations due to social or environmental cues. The idea that situational occurrences cause certain social categorization is known as volatility and is one of the main aspects ofself-categorization theory.[15]
Sociologistsuse volatility to better understand the impacts ofstereotypesand social categorization on the situation at hand and any external forces that may cause people to perceive others differently. Volatility is the changing dynamic of social categorization in environmental situations. The dynamic can change due to any shift in a situation, whether social, technical, biological, or anything else. Studies have been conducted, but finding the specific component that causes the change in situational social categorization has proven challenging.[16]
Two distinct components link individuals to their social identities. The first component is normative fit, which pertains to how a person aligns with the stereotypes and norms associated with their particular identity. For instance, when a Hispanic woman is cleaning the house, people often associate gender stereotypes with the situation, while her ethnicity is not a central concern. However, when this same woman eats an enchilada, ethnicity stereotypes come to the forefront, while her gender is not the focal point.[15]The second social cue is comparative fit. This is when a specific characteristic or trait of a person is prominent in certain situations compared to others. For example, as mentioned by Bodenhausen and Peery, when there is one woman in a room full of men.[15]She stands out, because she is the only one of her gender. However, all of the men are clumped together because they do not have any specific traits that stand out. Comparative fit shows that people categorize others based on the relative social context. In a particular situation, particular characteristics are made obvious because others around that individual do not possess that characteristic. However, in other cases, this characteristic may be the norm and would not be a key characteristic in the categorization process.[15]
People can be less critical of the same person in different scenarios. For example, when looking at anAfrican Americanman on the street in a low-income neighborhood and the same man inside a school in a high-income neighborhood, people will be less judgmental when seeing him in school. Nothing else has changed about this man, other than his location.[15]When individuals are spotted in certainsocial contexts, the basic-level categories are forgotten, and the more partial categories are brought to light. This helps to describe the problems of situational social-categorization.[15]This also illustrates how stereotypes can shift the perspectives of those around an individual.[15]
Uncertaintyin the VUCA framework occurs when the availability orpredictabilityof information in events is unknown. Uncertainty often occurs in volatile environments consisting of complex unanticipated interactions. Uncertainty may occur with the intention to implycausationorcorrelationbetween the events of a social perceiver and a target. Situations where there is either a lack of information to prove why perception is in occurrence or informational availability but lack of causation, are where uncertainty is salient.[15]
The uncertainty component of the framework serves as a grey area and is compensated by the use of social categorization and/or stereotypes. Social categorization can be described as a collection of people that have no interaction but tend to share similar characteristics. People tend to engage in social categorization, especially when there is a lack of information surrounding the event. Literature suggests that default categories tend to be assumed in the absence of any clear data when referring to someone's gender or race in the essence of a discussion.[15]
Individuals often associate general references (e.g. people, they, them, a group) with the male gender, meaning people = male. This usually occurs when there is insufficient information to distinguish someone's gender clearly. For example, when discussing a written piece of information, most assume the author is male. If an author's name is unavailable (due to lack of information), it is difficult to determine the gender of the author through the context of whatever was written. People automatically label the author as male without having any prior basis of gender, thus placing the author in a social category. This social categorization happens in this example, but people will also assume someone is male if the gender is not known in many other situations as well.[15]
Social categorization occurs in the realm of not only gender, but alsorace. Default assumptions may be made, like in gender, to the race of an individual or a group based on prior known stereotypes. For example, race-occupation combinations such as basketball or golf players usually receive race assumptions. Without any information on the individual's race, people usually assume a basketball player is black, and a golf player is white. This is based upon stereotypes because each sport tends to be dominated by a single race. In reality, there are other races within each sport.[15]
Complexityrefers to theinterconnectivityandinterdependenceof multiple parts in a system. When conducting research, complexity is a component that scholars have to keep in mind. The results of a deliberately controlled environment are unexpected because of thenon-linear interactionand interdependencies within different groups and categories.[16]
In a sociological aspect, the VUCA framework is utilized in research to understand social perception in the real world and how that plays into social categorization and stereotypes. Galen V. Bodenhausen and Destiny Peery's article,Social Categorization and Stereotyping In vivo: The VUCA Challenge, focused on researching how social categories impacted the process of social cognition and perception.[15]The strategy used to conduct the research is to manipulate or isolate a single identity of a target while keeping all other identities constant. This method clearly shows how a specific identity in a social category can change one's perception of other identities, thus creating stereotypes.[15]
There are problems with categorizing an individual's social identity due to the complexity of an individual's background. This research fails to address the complexity of the real world and the results from this highlighted an even greater picture of social categorization and stereotyping.[15]Complexity adds many layers of different components to an individual's identity and creates challenges for sociologists trying to examine social categories.[16]In the real world, people are far more complex than a modified social environment. Individuals identify with more than one social category, which opens the door to a more profound discovery about stereotyping. Results from research conducted by Bodenhausen reveal that specific identities are more dominant than others.[15]Perceivers who recognize these distinct identities latch on to them and associate their preconceived notion of such identity and make initial assumptions about the individuals and hence stereotypes are created.
Conversely, perceivers who share some identities with the target tend to be more open-minded. They consider multiple social identities simultaneously, a phenomenon known as cross-categorization effects.[17]Some social categories are nested within larger categorical structures, making subcategories more salient to perceivers. Cross-categorization can trigger both positive and negative effects. On the positive side, perceivers become more open-minded and motivated to delve deeper into their understanding of the target, moving beyond dominant social categories. However, cross-categorization can also result in social invisibility,[15]where some cross-over identities diminish the visibility of others, leading to "intersectional invisibility" where neither social identity stands out distinctly and is overlooked.[18]
Ambiguity refers to when the general meaning of something is unclear even when an appropriate amount of information is provided. Many get confused about the meaning of ambiguity. It is similar to the idea of uncertainty, but they have different factors. Uncertainty is when relevant information is unavailable and unknown, while ambiguity is when relevant information is available but the overall meaning is still unknown. Both uncertainty and ambiguity exist in our culture today. Sociologists use ambiguity to determine how and why an answer has been developed. Sociologists focus on details such as whether there was enough information present, whether the subject had the full knowledge necessary to make a decision, and why they came to their specific answer.[15]
Ambiguity is considered one of the leading causes of conflict within organizations.[19]
Ambiguity often prompts individuals to make assumptions, including those related to race, gender, sexual orientation, and even class stereotypes. When people possess some information but lack a complete answer, they tend to generate their own conclusions based on the available relevant information. For instance, as Bodenhausen notes, we may occasionally encounter individuals who possess a degree of androgyny, making it challenging to determine their gender. In such cases, brief exposure might lead to misclassifications based on gender-atypical features, such as very long hair on a man or very short hair on a woman. Ambiguity can result in premature categorizations, potentially leading to inaccurate conclusions due to the absence of crucial details.[15]
Sociologists suggest that ambiguity can fuel racial stereotypes and discrimination. In a South African study, white participants were shown images of racially mixed faces and asked to categorize them as European or African. Since all the participants were white, they struggled to classify these mixed-race faces as European and instead labeled them as African. This difficulty arose due to the ambiguity present in the images. The only information available to the participants was the subjects' skin tone and facial features. Despite having this information, the participants still couldn't confidently determine the ethnicity because the individuals didn't precisely resemble their own racial group.[15]
Levent Işıklıgöz has suggested that theCof VUCA be changed fromcomplexitytochaos, arguing that it is more suitable according to our era.[citation needed]
Bill George, a professor of management practice at Harvard Business School, argues that VUCA calls for a leadership response which he calls VUCA 2.0:Vision,understanding,courageandadaptability.[20]
George's response seems a minor adaptation of Bob Johansen's VUCA prime:Vision,understanding,clarityandagility.[21]
German academic Ali Aslan Gümüsay adds "paradox" to the acronym, calling it VUCA + paradox orVUCAP.[22]
Jamais Casciosuggested theBANIframework to highlight the environment as Brittle, Anxious, Nonlinear, and Incomprehensible.[23]
Ulrich Lichtenthalerdeveloped thePUMOframework, which describes the world as increasingly Polarized, Unthinkable, Metamorphic, and Overheated.[24]
|
https://en.wikipedia.org/wiki/Volatility,_uncertainty,_complexity_and_ambiguity
|
The following is alist of personal information managers(PIMs) and online organizers.
|
https://en.wikipedia.org/wiki/List_of_personal_information_managers
|
Gillick competenceis a term originating inEngland and Walesand is used inmedical lawto decide whether a child (a person under 16 years of age) is able to consent to their own medical treatment, without the need for parental permission or knowledge.
The standard is based on the 1985 judicial decision of theHouse of Lordswith respect to a case of thecontraceptionadvice given by anNHSdoctor inGillick v West Norfolk and Wisbech Area Health Authority.[1]The case is binding in England and Wales, and has been adopted to varying extents in Australia, Canada, and New Zealand.[2][3]Similar provision is made in Scotland by theAge of Legal Capacity (Scotland) Act 1991. In Northern Ireland, although separate legislation applies, the thenDepartment of Health and Social Servicesstated that there was no reason to suppose that the House of Lords' decision would not be followed by the Northern Ireland courts.
Gillick's case involved ahealth departmentalcircularadvising doctors oncontraceptionfor people under 16. The circular stated that the prescription of contraception was a matter for the doctor's discretion and that contraceptives could be prescribed to under-16s without parental consent. This matter was litigated becauseVictoria Gillickran an active campaign against the policy. Gillick sought a declaration that prescribing contraception was illegal because the doctor would commit an offence of encouraging sex with a minor and that it would be treatment without consent as consent vested in the parent; she was unsuccessful before theHigh Court of Justice, but succeeded in theCourt of Appeal.[4]
The issue before the House of Lords was only whether the minor involved could give consent. "Consent" here was considered in the broad sense of consent to battery or assault: in the absence of patient consent to treatment, a doctor, even if well-intentioned, might be sued/charged.
The House of Lords focused on the issue of consent rather than a notion of 'parental rights' or parental power. In fact, the court held that 'parental rights' did not exist, other than to safeguard the best interests of a minor. The majority held that in some circumstances a minor could consent to treatment, and that in these circumstances a parent had no power to veto treatment,[5]building on the judgement byLord DenninginHewer v Bryantthat parental rights were diminishing as the age of a child increases.[6][7][8]
Lord ScarmanandLord Fraserproposed slightly different tests (Lord Bridgeagreed with both). Lord Scarman's test is generally considered to be the test of 'Gillick competency'. He required that a child could consent if they fully understood the medical treatment that is proposed:
As a matter of law the parental right to determine whether or not their minor child below the age of sixteen will have medical treatment terminates if and when the child achieves sufficient understanding and intelligence to understand fully what is proposed.
The ruling holds particularly significant implications for the legal rights of minor children in England in that it is broader in scope than merely medical consent. It lays down that the authority of parents to make decisions for their minor children is not absolute, but diminishes with the child's evolving maturity. The result of Gillick is that in England and Wales today, except in situations which are regulated by statute, the legal right to make a decision on any particular matter concerning the child shifts from the parent to the child when the child reaches sufficient maturity to be capable of making up their own mind on the matter requiring decision.
A child who is deemed "Gillick competent" is able to prevent their parents viewing their medical records. Thus medical staff will not make a disclosure of medical records of a child who is deemed "Gillick competent" unlessconsentis manifest.[9]
In most jurisdictions the parent of anemancipated minordoes not have the ability to consent to therapy, regardless of the Gillick test. Typical positions of emancipation arise when the minor is married (R v D[1984] AC 778, 791) or in the military.[citation needed]
The nature of the standard remains uncertain. Thecourtshave so far declined invitations to define rigidly "Gillick competence" and the individual doctor is free to make a decision, consulting peers if this may be helpful, as to whether that child is "Gillick competent".[citation needed]
As of May 2016, it appeared to Funston and Howard—two researchers working on health education—that some recent legislation worked explicitly to restrict the ability of Gillick competent children to consent to medical treatment outside of clinical settings. For example, parental consent is required for the treatment of children withasthmausing standbysalbutamolinhalers in schools.[10]These restrictions have yet to be tested in court.
The decisionsIn re R(1991)[11]andRe W(1992)[12](especially Lord Donaldson) contradict theGillickdecision somewhat. From these, and subsequent cases, it is suggested that although the parental right to veto treatment ends, parental powers do not "terminate" as suggested by Lord Scarman inGillick. However, these are onlyobiterstatements and were made by a lower court; therefore, they are not legally binding. However, theparens patriaejurisdiction of the court remains available allowing a court order to force treatment against a child's (and parent's) wishes.[13]
In a 2006 judicial review,R (on the application of Axon) v Secretary of State for Health,[14]the High Court affirmedGillickin allowing for medical confidentiality for teenagers seeking anabortion. The court rejected a claim that not granting parents a "right to know" whether their child had sought an abortion, birth control or contraception breachedArticle 8 of the European Convention on Human Rights. TheAxoncase set out a list of criteria that a doctor must meet when deciding whether to provide treatment to an under-16 child without informing their parents: they must be convinced that they can understand all aspects of the advice, that the patient's physical or mental health is likely to suffer without medical advice, that it is in the best interests of the patient to provide medical advice, that (in provision of contraception) they are likely to have sex whether contraception is provided or not, and that they have made an effort to convince the young person to disclose the information to their parents.
In late 2020,Bell v Tavistockconsidered whetherunder-16s with gender dysphoriacould be Gillick competent to consent to receiving puberty blockers. Due to the unique specifics of that treatment, the High Court concluded that in such cases the answer will almost always be 'no',a priori.[15]In late 2021, the Court of Appeal overturnedBell v Tavistock, as the clinic's policies and practices had not been found to be unlawful.[16]
During the COVID-19 pandemic, government guidance was circulated stating that some older children in secondary school would be considered Gillick competent to decide to bevaccinated against COVID-19when a parent/guardian has not consented.[17]The Green Book, the UK's guidance on immunisation, states that under 16s "who understand fully what is involved in the proposed procedure" can consent "although ideally their parents will be involved".[18]
In 1992, theHigh Court of Australiagave specific and strong approval for the application of Gillick competence inSecretary of the Department of Health and Community Services v JWB (1992) 175 CLR 189, also known asMarrion's Case. This decision introduced Gillick competence as Australian common law, and has been applied in similar cases such asDepartment of Community Services v Y (1999)NSWSC644.
There is no express authority in Australia onIn re RandRe W, so whether or not a parent's right terminates when Gillick competence is applied is unclear. This lack of authority reflects that the reported cases have all involved minors who have been found to be incompetent, and that Australian courts will make decisions in theparens patriaejurisdiction regardless of Gillick competence.
Legislation in South Australia and New South Wales clarifies the common law, establishing a Gillick-esque standard of competence but preserving concurrent consent between parent and child for patients aged 14–16 years.
On 21 May 2009, confusion[whose?]arose between Gillick competence, which identifies under-16s with the capacity to consent to their own treatment, and theFraser guidelines, which are concerned only withcontraceptionand focus on the desirability of parental involvement and the risks of unprotected sex in that area.[citation needed]
A persistent rumour arose that Victoria Gillick disliked having her name associated with the assessment of children's capacity, but an editorial in the BMJ from 2006 claimed that Gillick said that she "has never suggested to anyone, publicly or privately, that [she] disliked being associated with the term 'Gillick competent'".[19]
It is lawful for doctors to provide contraceptive advice and treatment without parental consent providing certain criteria are met. These criteria, known as the Fraser guidelines, were laid down by Lord Fraser in the Gillick decision and require the professional to be satisfied that:[20]
Although these criteria specifically refer to contraception, the principles are deemed to apply to other treatments, including abortion.[21]Although the judgment in the House of Lords referred specifically to doctors, it is considered by theRoyal College of Obstetricians and Gynaecologists(RCOG) to apply to other health professionals, "including general practitioners, gynaecologists, nurses, and practitioners in community contraceptive clinics, sexual health clinics and hospital services".[22]It may also be interpreted as covering youth workers and health promotion workers who may be giving contraceptive advice and condoms to young people under 16, but this has not been tested in court.[citation needed]
If a person under the age of 18 refuses to consent to treatment, it is possible in some cases for their parents or the courts to overrule their decision. However, this right can be exercised only on the basis that the welfare of the young person is paramount. In this context, welfare does not simply mean their physical health. The psychological effect of having the decision overruled would have to be taken into account and would normally be an option only when the young person was thought likely to suffer "grave and irreversible mental or physical harm". Usually, when a parent wants to overrule a young person's decision to refuse treatment, health professionals will apply to the courts for a final decision.[22]
An interesting aside to the Fraser guidelines is that many[weasel words]regard Lord Scarman's judgment as the leading judgement in the case, but because Lord Fraser's judgement was shorter and set out in more specific terms – and in that sense more accessible to health and welfare professionals – it is his judgement that has been reproduced as containing the core principles,[citation needed]as for example cited in the RCOG circular.[22]
|
https://en.wikipedia.org/wiki/Gillick_competence
|
Geometric feature learningis a technique combiningmachine learningandcomputer visionto solve visual tasks. The main goal of this method is to find a set of representative features of geometric form to represent an object by collecting geometric features from images and learning them using efficientmachine learningmethods. Humans solve visual tasks and can give fast response to the environment by extracting perceptual information from what they see. Researchers simulate humans' ability of recognizing objects to solve computer vision problems. For example, M. Mata et al.(2002)[1]applied feature learning techniques to themobile robot navigationtasks in order to avoid obstacles. They usedgenetic algorithmsfor learning features andrecognizing objects(figures). Geometric feature learning methods can not only solve recognition problems but also predict subsequent actions by analyzing a set of sequential input sensory images, usually some extracting features of images. Through learning, some hypothesis of the next action are given and according to the probability of each hypothesis give a most probable action. This technique is widely used in the area ofartificial intelligence.
Geometric feature learning methods extract distinctive geometric features from images. Geometric features are features of objects constructed by a set of geometric elements like points, lines, curves or surfaces. These features can be corner features, edge features, Blobs, Ridges, salient points image texture and so on, which can be detected byfeature detectionmethods.
A geometric component feature is a combination of several primitive features, and it always consists of more than two primitive features such as edges, corners or blobs. The geometric feature vector at location x can be computed with respect to a reference point, as shown below:
Here x means the location of the features, θ{\displaystyle \textstyle \theta } means the orientation, and σ{\displaystyle \textstyle \sigma } means the intrinsic scale.
A Boolean compound feature consists of two sub-features, which can be primitive features or compound features. There are two types of Boolean features: conjunctive features, whose value is the product of the two sub-features, and disjunctive features, whose value is the maximum of the two sub-features.
Feature space was first considered in the computer vision area by Segen.[4] He used a multilevel graph to represent the geometric relations of local features.
There are many learning algorithms which can be applied to learn to finddistinctive featuresof objects in an image. Learning can be incremental, meaning that the object classes can be added at any time.
1.Acquire a new training image "I".
2.According to the recognition algorithm, evaluate the result. If the result is true, new object classes are recognised.
The key point of the recognition algorithm is to find the most distinctive features among all features of all classes, so the equation below is used to maximise the feature fmax{\displaystyle \textstyle \ f_{max}}:
Measure the value of a feature in images,fmax{\displaystyle \textstyle \ f_{max}}andffmax{\displaystyle \textstyle \ f_{f_{max}}}, and localise a feature:
where ff(p)(x){\displaystyle \textstyle f_{f_{(p)}}(x)} is defined as {\displaystyle \textstyle f_{f_{(p)}}(x)=\max \left\{0,{\frac {f(p)^{T}f(x)}{\left\|f(p)\right\|\left\|f(x)\right\|}}\right\}}
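This quantity is a cosine similarity between the feature vector at the reference point p and the feature vector at location x, clipped at zero. A minimal numpy sketch with illustrative names (not taken from the original work):

```python
import numpy as np

def feature_response(f_p, f_x):
    """Clipped cosine similarity between a reference feature vector f_p
    and the feature vector f_x extracted at a candidate location."""
    cos = float(np.dot(f_p, f_x)) / (np.linalg.norm(f_p) * np.linalg.norm(f_x))
    return max(0.0, cos)
```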
After recognising the features, the results should be evaluated to determine whether the classes can be recognised. There are five evaluation categories of recognition results: correct, wrong, ambiguous, confused and ignorant. When the evaluation is correct, add a new training image and train it. If the recognition failed, the feature nodes should maximise their distinctive power, which is defined by the Kolmogorov–Smirnov distance (KSD).
3.Feature learning algorithm
After a feature is recognised, it should be applied to a Bayesian network to recognise the image, using the feature learning algorithm for testing.
The probably approximately correct (PAC) model was applied by D. Roth (2002) to solve computer vision problems by developing a distribution-free learning theory based on this model.[5] This theory relied heavily on the development of a feature-efficient learning approach. The goal of this algorithm is to learn an object represented by some geometric features in an image. The input is a feature vector and the output is 1 if the object is successfully detected and 0 otherwise. The main point of this learning approach is to collect representative elements that can represent the object through a function, and to test by recognising an object from an image in order to find the representation with high probability.
The learning algorithm aims to predict whether the learned target concept fT(X){\displaystyle \textstyle f_{T}(X)} belongs to a class, where X is the instance space consisting of the parameters, and then to test whether the prediction is correct.
After features are learned, the learning algorithms themselves should be evaluated. D. Roth applied two learning algorithms:
The main purpose of SVM is to find ahyperplaneto separate the set of samples(xi,yi){\displaystyle \textstyle (x_{i},y_{i})}wherexi{\displaystyle \textstyle x_{i}}is an input vector which is a selection of featuresx∈RN{\displaystyle \textstyle x\in R^{N}}andyi{\displaystyle \textstyle y_{i}}is the label ofxi{\displaystyle \textstyle x_{i}}. The hyperplane has the following form:f(x)=sgn(∑i=1lyiαi⋅k(x,xi)+b)={1,positiveinputs−1,negativeinputs{\displaystyle \textstyle f(x)=sgn\left(\sum _{i=1}^{l}y_{i}\alpha _{i}\cdot k(x,x_{i})+b\right)=\left\{{\begin{matrix}1,positive\;inputs\\-1,negative\;inputs\end{matrix}}\right.}
k(x,xi)=ϕ(x)⋅ϕ(xi){\displaystyle \textstyle k(x,x_{i})=\phi (x)\cdot \phi (x_{i})}is a kernel function
Both algorithms separate training data by finding a linear function.
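As a hedged illustration of this idea (a linear separator over feature vectors), scikit-learn's SVC can be used; the toy feature vectors and labels below are invented for illustration and are not from the article:

```python
import numpy as np
from sklearn.svm import SVC

# Toy geometric feature vectors: label 1 = object present, -1 = absent.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear")   # linear kernel: k(x, x_i) = x . x_i
clf.fit(X, y)
print(clf.predict([[0.85, 0.75]]))   # expected: [1]
```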
|
https://en.wikipedia.org/wiki/Geometric_feature_learning
|
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2]
The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5]
Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, and therefore does not align completely with thesecurity convergenceschema.
A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9]
Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10]
In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10]
Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. These strategies mostly includephishing,ransomware,water holingand scanning.[12]
To secure a computer system, it is important to understand the attacks that can be made against it, and thesethreatscan typically be classified into one of the following categories:
Abackdoorin a computer system, acryptosystem, or analgorithmis any secret method of bypassing normalauthenticationor security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13]Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals.
Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons.Criminalsoften usemalwareto install backdoors, giving them remote administrative access to a system.[14]Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14]
Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware; detecting them typically requires access to the source code or intimate knowledge of theoperating systemof the computer.
Denial-of-service attacks(DoS) are designed to make a machine or network resource unavailable to its intended users.[15]Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a singleIP addresscan be blocked by adding a new firewall rule, many forms ofdistributed denial-of-service(DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from thezombie computersof abotnetor from a range of other possible techniques, includingdistributed reflective denial-of-service(DRDoS), where innocent systems are fooled into sending traffic to the victim.[15]With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section.
A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16]Attackers may also compromise security by making operating system modifications, installingsoftware worms,keyloggers,covert listening devicesor using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from aCD-ROMor other bootable media.Disk encryptionand theTrusted Platform Modulestandard are designed to prevent these attacks.
Direct service attackers are related in concept todirect memory attackswhich allow an attacker to gain direct access to a computer's memory.[17]The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17]
Eavesdroppingis the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18]Data transmitted across anopen networkallows an attacker to exploit a vulnerability and intercept it via various methods.
Unlikemalware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18]In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19]
Using avirtual private network(VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as usingHTTPSinstead of an unencryptedHTTP.[20]
Programs such asCarnivoreandNarusInSighthave been used by theFederal Bureau of Investigation(FBI) and NSA to eavesdrop on the systems ofinternet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faintelectromagnetictransmissions generated by the hardware.TEMPESTis a specification by the NSA referring to these attacks.
Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23]
Man-in-the-middle attacks(MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between.[24]Types of MITM attacks include:
Surfacing in 2017, a new class of multi-vector,[25]polymorphic[26]cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread.
Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27]Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications)." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27]The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27]
Phishingis the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users.[28]Phishing is typically carried out byemail spoofing,instant messaging,text message, or on aphonecall. They often direct users to enter details at a fake website whoselook and feelare almost identical to the legitimate one.[29]The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website.
Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31]
Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32]
Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation:
Any computational system affects its environment in some form. This effect can range from electromagnetic radiation, to residual effects on RAM cells which as a consequence make aCold boot attackpossible, to hardware implementation faults that allow access to, or guessing of, values that normally should be inaccessible. In side-channel attack scenarios, the attacker gathers such information about a system or network to infer its internal state and, as a result, access information which the victim assumes to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals.[33]
Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering arephishingattacks.
In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35]
In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36]
Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including:
In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40]
Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41]
HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43]
When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42]
Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44]
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47]
In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50]
Some common countermeasures are listed in the following sections:
Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature.
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51]
These design principles of security by design can include some of the following techniques:
Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54]
Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55]
Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization.
A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
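As a rough illustration of the filtering idea (not a description of any real firewall's configuration syntax), a minimal rule-based packet filter might look like the sketch below; the rule format and field names are invented for illustration:

```python
# Illustrative packet filter: allow listed services, drop everything else.
RULES = [
    {"action": "allow", "protocol": "tcp", "dst_port": 443},  # HTTPS
    {"action": "allow", "protocol": "tcp", "dst_port": 22},   # SSH
    {"action": "deny",  "protocol": "any", "dst_port": None}, # default deny
]

def filter_packet(packet):
    """Return True if the packet should be forwarded, False if dropped."""
    for rule in RULES:
        proto_match = rule["protocol"] in ("any", packet["protocol"])
        port_match = rule["dst_port"] in (None, packet["dst_port"])
        if proto_match and port_match:
            return rule["action"] == "allow"
    return False  # fail closed if no rule matches

print(filter_packet({"protocol": "tcp", "dst_port": 443}))  # True
print(filter_packet({"protocol": "udp", "dst_port": 53}))   # False
```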
Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58]
In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60]
Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security.
Vulnerabilities can be discovered with avulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such asopen ports, insecure software configuration, and susceptibility tomalware. In order for these tools to be effective, they must be kept up to date with every new update the vendor releases. Typically, these updates will scan for the new vulnerabilities that were introduced recently.
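One of the simplest checks such a scanner performs is probing for open TCP ports. A minimal sketch using only the Python standard library (the host and port list are placeholders for illustration):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports on which a TCP connection succeeds."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example (placeholder host): probe a few well-known ports on localhost.
print(open_ports("127.0.0.1", [22, 80, 443]))
```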
Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. In some sectors, this is a contractual requirement.[63]
The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69]
Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance.
Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access.
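The "something you have" factor is often a device that generates time-based one-time passwords (TOTP, RFC 6238). Below is a minimal sketch of how such a code is derived, using only the standard library; the shared secret is a made-up example, and a real deployment would involve more (rate limiting, clock-drift windows, secure secret storage).

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, at_time=None):
    """Compute a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((at_time or time.time()) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server accepts the code only if it matches the value derived from the
# shared secret ("something you have"), after the password ("something you
# know") has already been verified.
print(totp("JBSWY3DPEHPK3PXP"))
```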
Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72]
Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
One use of the term computer security refers to technology that is used to implement secure operating systems, which are a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization; the most widely used evaluation standard is the Common Criteria (CC).[86]
In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example.
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC).
Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC).
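A minimal sketch contrasting the two models follows: an ACL attaches permissions directly to an object, while RBAC grants permissions to roles and assigns roles to users. All names (users, roles, objects, permissions) are illustrative and not drawn from any particular product.

```python
# ACL: object -> {user: set of allowed operations}
acl = {"payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_allows(obj: str, user: str, op: str) -> bool:
    return op in acl.get(obj, {}).get(user, set())

# RBAC: user -> roles, role -> permissions
user_roles = {"alice": {"hr_admin"}, "bob": {"employee"}}
role_perms = {"hr_admin": {"payroll:read", "payroll:write"}, "employee": {"payroll:read"}}

def rbac_allows(user: str, perm: str) -> bool:
    return any(perm in role_perms.get(role, set()) for role in user_roles.get(user, set()))

print(acl_allows("payroll.xlsx", "bob", "write"))   # False: Bob may only read
print(rbac_allows("alice", "payroll:write"))        # True: granted via the hr_admin role
```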
A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language.
The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95]
As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.
The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks.
Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99]Cyber hygiene should also not be mistaken forproactive cyber defence, a military term.[100]
The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls.[101]As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline[102]or education.[103]It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts.
Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the termcomputer viruswas coined almost simultaneously with the creation of the first working computer viruses,[104]the termcyber hygieneis a much later invention, perhaps as late as 2000[105]by Internet pioneerVint Cerf. It has since been adopted by theCongress[106]andSenateof the United States,[107]the FBI,[108]EUinstitutions[99]and heads of state.[100]
Responding to attemptedsecurity breachesis often very difficult for a variety of reasons, including:
Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatorysecurity breach notification laws.
The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk.
The computer systems of financial regulators and financial institutions like theU.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets forcybercriminalsinterested in manipulating markets and making illicit gains.[109]Websites and apps that accept or storecredit card numbers, brokerage accounts, andbank accountinformation are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on theblack market.[110]In-store payment systems andATMshave also been tampered with in order to gather customer account data andPINs.
TheUCLAInternet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned aboutcredit cardsecurity.[111]
The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security); together with identity management and authentication services and domain name services, they allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code, and look for and report vulnerabilities.
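As a small illustration of how an application layers TLS on top of an ordinary socket, the sketch below uses Python's standard-library ssl module to open a certificate-verified connection; the host name is only an example.

```python
import socket, ssl

context = ssl.create_default_context()           # verifies the server certificate chain
with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls_sock:
        print(tls_sock.version())                # negotiated protocol, e.g. 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"]) # identity asserted by the certificate
```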
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers.
Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112]
Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117]
Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118]
In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks.
Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life.
Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124]
The increasing number ofhome automationdevices such as theNest thermostatare also potential targets.[124]
Today many healthcare providers andhealth insurancecompanies use the internet to provide enhanced products and services. Examples are the use oftele-healthto potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed]Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125]
Large corporations are common targets. In many cases attacks are aimed at financial gain throughidentity theftand involvedata breaches. Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129]
Medical records have been targeted in general identity theft, health insurance fraud, and impersonation of patients to obtain prescription drugs for recreational purposes or resale.[130] Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131]
Not all attacks are financially motivated, however: security firmHBGary Federalhad a serious series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135]
Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139]
Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142]
Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146]
Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147]
Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure,tracking device. Unlike abarcode, RFID can be read up to 20 feet away. RFID is used byFedEx[149]andUPS.[150]
Government and military computer systems are commonly attacked by activists[151][152][153] and foreign powers.[154][155][156][157] Local and regional government infrastructure, such as traffic light controls, police and intelligence agency communications, personnel records, and student records, is also at risk.[158]
TheFBI,CIA, andPentagon, all utilize secure controlled access technology for any of their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159]
TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162]
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166]
An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially.
Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182]
In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183]
Perhaps the most widely known digitally secure telecommunication device is theSIM(Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment.
The Smart Card Web Servers draft standard (SCWS) defines the interfaces to anHTTP serverin asmart card.[184]Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone.
Combination SIM/DVD devices are being developed through Smart Video Card technology which embeds aDVD-compliantoptical discinto the card body of a regular SIM card.
Other telecommunication developments involving digital security includemobile signatures, which use the embedded SIM card to generate a legally bindingelectronic signature.
Serious financial damage has been caused bysecurity breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable tovirusand worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185]
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classicGordon-Loeb Modelanalyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., theexpected valueof the loss resulting from a cyber/informationsecurity breach).[186]
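A hedged numerical illustration of the Gordon-Loeb result follows: the optimal amount to spend on protecting an information set should not exceed roughly 1/e (about 37%) of the expected loss from a breach. The dollar figure and probability below are made up for illustration.

```python
import math

loss_if_breached = 5_000_000   # hypothetical value at risk
breach_probability = 0.10      # hypothetical annual likelihood of a breach

expected_loss = breach_probability * loss_if_breached
upper_bound_on_spend = expected_loss / math.e

print(f"Expected loss:      ${expected_loss:,.0f}")
print(f"Spend no more than: ${upper_bound_on_spend:,.0f}  (1/e of expected loss)")
```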
As withphysical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers orvandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for theKGB, as recounted byClifford StollinThe Cuckoo's Egg.
Attackers' motivations can vary for all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by crashing its website.
High-capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. These attacks are more likely to be serious. An example of a more serious attack was the 2015 Ukraine power grid hack, which reportedly used spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188]
Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has increased capabilities but also the risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the hacker motivation and that of nation state actors seeking to attack based on an ideological preference.[190]
A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191]
Computer security incident managementis an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as adata breachor system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192]Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution.
There are four key components of a computer security incident response plan:
Some illustrative examples of different types of computer security breaches are given below.
In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internetcomputer worm.[194]The software was traced back to 23-year-oldCornell Universitygraduate studentRobert Tappan Morriswho said "he wanted to count how many machines were connected to the Internet".[194]
In 1994, over a hundred intrusions were made by unidentified crackers into theRome Laboratory, the US Air Force's main command and research facility. Usingtrojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks ofNational Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195]
In early 2007, American apparel and home goods companyTJXannounced that it was the victim of anunauthorized computer systems intrusion[196]and that the hackers had accessed a system that stored data oncredit card,debit card,check, and merchandise return transactions.[197]
In 2010, the computer worm known asStuxnetreportedly ruined almost one-fifth of Iran'snuclear centrifuges.[198]It did so by disrupting industrialprogrammable logic controllers(PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202]– although neither has publicly admitted this.
In early 2013, documents provided byEdward Snowdenwere published byThe Washington PostandThe Guardian[203][204]exposing the massive scale ofNSAglobal surveillance. There were also indications that the NSA may have inserted a backdoor in aNISTstandard for encryption.[205]This standard was later withdrawn due to widespread criticism.[206]The NSA additionally were revealed to have tapped the links betweenGoogle's data centers.[207]
A Ukrainian hacker known asRescatorbroke intoTarget Corporationcomputers in 2013, stealing roughly 40 million credit cards,[208]and thenHome Depotcomputers in 2014, stealing between 53 and 56 million credit card numbers.[209]Warnings were delivered at both corporations, but ignored; physical security breaches usingself checkout machinesare believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existingantivirus softwarehad administrators responded to the warnings. The size of the thefts has resulted in major attention from state and Federal United States authorities and the investigation is ongoing.
In April 2015, theOffice of Personnel Managementdiscovered it had been hackedmore than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210]The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211]Data targeted in the breach includedpersonally identifiable informationsuch asSocial Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213]It is believed the hack was perpetrated by Chinese hackers.[214]
In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functional.
In June 2021, a cyber attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216]
International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals - and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece ofmalwareor form ofcyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218]Provingattribution for cybercrimes and cyberattacksis also a major problem for all law enforcement agencies. "Computer virusesswitch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217]The use of techniques such asdynamic DNS,fast fluxandbullet proof serversadd to the difficulty of investigation and enforcement.
The role of the government is to makeregulationsto force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the nationalpower-grid.[219]
The government's regulatory role incyberspaceis complicated. For some, cyberspace was seen as avirtual spacethat was to remain free of government intervention, as can be seen in many of today's libertarianblockchainandbitcoindiscussions.[220]
Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve efficiently the cybersecurity problem.R. Clarkesaid during a panel discussion at theRSA Security ConferenceinSan Francisco, he believes that the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221]On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222]
On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges tointernational peace. According to UN Secretary-GeneralAntónio Guterres, new technologies are too often used to violate rights.[223]
Many different teams and organizations exist, including:
On 14 April 2016, theEuropean Parliamentand theCouncil of the European Unionadopted theGeneral Data Protection Regulation(GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to theprotection of personal data. The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO).
The IT security association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986.
Most countries have their own computer emergency response team to protect network security.
Since 2010, Canada has had a cybersecurity strategy.[229][230]This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231]The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231]There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233]
TheCanadian Cyber Incident Response Centre(CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond & recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors.[234]It posts regular cybersecurity bulletins[235]& operates an online reporting tool where individuals and organizations can report a cyber incident.[236]
To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237]and launched the Cyber Security Cooperation Program.[238][239]They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240]
Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231]
The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the soon-to-be-unveiled 2023-2030 Australian Cyber Security Strategy, slated for release within the current week. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, enabling businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity posture.
The health check serves as a diagnostic tool, enabling enterprises to gauge the robustness of their cybersecurity arrangements against Australia's cyber security regulations. It also gives them access to a repository of educational resources and materials, fostering the acquisition of skills necessary for an improved cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241]
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242]
TheNational Cyber Security Policy 2013is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data".CERT- Inis the nodal agency which monitors the cyber threats in the country. The post ofNational Cyber Security Coordinatorhas also been created in thePrime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000 Update in 2013.[243]
Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244]and 2012, but Pyongyang denies the accusations.[245]
With the release of its National Cyber Plan, the United States has its first fully formed cyber plan in 15 years.[246] In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American prosperity by building a strong digital economy and encouraging strong domestic innovation; preserve peace and safety by making it easier for the US to stop malicious uses of computer tools, working with friends and partners to do this; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247]
The new U.S. cyber strategy[248]seeks to allay some of those concerns by promoting responsible behavior incyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into theU.S. Office of Personnel Management(OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249]
The Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251] none have succeeded.
In 2013,executive order13636Improving Critical Infrastructure Cybersecuritywas signed, which prompted the creation of theNIST Cybersecurity Framework.
In response to theColonial Pipeline ransomware attack[252]PresidentJoe Bidensigned Executive Order 14028[253]on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response.
TheGeneral Services Administration(GSA) has[when?]standardized thepenetration testservice as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256]
The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258]
In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260]
TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262]
TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265]
The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266]
TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271]
The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[272]
Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together.
In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274]
TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276]
There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277]
This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces.
There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281]
Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches.[282] According to research from the Enterprise Strategy Group, 46% of organizations said that they had a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283] Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284] However, the use of the term cybersecurity is more prevalent in government job descriptions.[285]
Typical cybersecurity job titles and descriptions include:[286]
Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294]
In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government.
In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297]
The following terms used with regards to computer security are explained below:
Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited toacademiauntil the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such asdistributed denial of service.[301]This led to the formalization of cybersecurity as a professional discipline.[302]
TheApril 1967 sessionorganized byWillis Wareat theSpring Joint Computer Conference, and the later publication of theWare Report, were foundational moments in the history of the field of computer security.[303]Ware's work straddled the intersection of material, cultural, political, and social concerns.[303]
A 1977NISTpublication[304]introduced theCIA triadof confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305]While still relevant, many more elaborate frameworks have since been proposed.[306][307]
However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.[308]
One of the earliest examples of an attack on a computer network was thecomputer wormCreeperwritten by Bob Thomas atBBN, which propagated through theARPANETin 1971.[309]The program was purely experimental in nature and carried no malicious payload. A later program,Reaper, was created byRay Tomlinsonin 1972 and used to destroy Creeper.[citation needed]
Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310]The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led byMarkus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990.
In 1988, one of the first computer worms, called theMorris worm, was distributed via the Internet. It gained significant mainstream media attention.[311]
Netscapestarted developing the protocolSSL, shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first web browser, in 1993.[312][313]Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312]However, in 1995, Netscape launched Version 2.0.[314]
TheNational Security Agency(NSA) is responsible for theprotectionof U.S. information systems and also for collecting foreign intelligence.[315]The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316]
NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] The NSA's employees and contractors have been recruited at high salaries by adversaries, anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in their own cyberwarfare capability, which it began using against the United States.[316]
|
https://en.wikipedia.org/wiki/Computer_security
|
Gauss's lemmainnumber theorygives a condition for an integer to be aquadratic residue. Although it is not useful computationally, it has theoretical significance, being involved in someproofs of quadratic reciprocity.
It made its first appearance inCarl Friedrich Gauss's third proof (1808)[1]: 458–462ofquadratic reciprocityand he proved it again in his fifth proof (1818).[1]: 496–501
For any odd primepletabe an integer that iscoprimetop.
Consider the integers
\[ a,\; 2a,\; 3a,\; \dots,\; \frac{p-1}{2}a \]
and their least positive residues modulo p. These residues are all distinct, so there are (p − 1)/2 of them.
Let n be the number of these residues that are greater than p/2. Then
\[ \left(\frac{a}{p}\right) = (-1)^n, \]
where \(\left(\frac{a}{p}\right)\) is the Legendre symbol.
Taking p = 11 and a = 7, the relevant sequence of integers is
\[ 7,\; 14,\; 21,\; 28,\; 35. \]
After reduction modulo 11, this sequence becomes
\[ 7,\; 3,\; 10,\; 6,\; 2. \]
Three of these integers are larger than 11/2 (namely 6, 7 and 10), so n = 3. Correspondingly Gauss's lemma predicts that
\[ \left(\frac{7}{11}\right) = (-1)^3 = -1. \]
This is indeed correct, because 7 is not a quadratic residue modulo 11.
The above sequence of residues
\[ 7,\; 3,\; 10,\; 6,\; 2 \]
may also be written
\[ -4,\; 3,\; -1,\; -5,\; 2. \]
In this form, the integers larger than 11/2 appear as negative numbers. It is also apparent that the absolute values of the residues are a permutation of the residues
\[ 1,\; 2,\; 3,\; 4,\; 5. \]
A fairly simple proof,[1]: 458–462 reminiscent of one of the simplest proofs of Fermat's little theorem, can be obtained by evaluating the product
\[ Z = a \cdot 2a \cdot 3a \cdots \frac{p-1}{2}a \]
modulo p in two different ways. On one hand it is equal to
\[ Z = a^{(p-1)/2} \left(1 \cdot 2 \cdot 3 \cdots \frac{p-1}{2}\right). \]
The second evaluation takes more work. If x is a nonzero residue modulo p, let us define the "absolute value" of x to be
\[ |x| = \begin{cases} x & \text{if } 1 \leq x \leq \tfrac{p-1}{2}, \\ p - x & \text{if } \tfrac{p+1}{2} \leq x \leq p-1. \end{cases} \]
Since n counts those multiples ka which are in the latter range, and since for those multiples, −ka is in the first range, we have
\[ Z \equiv (-1)^n\, |a| \cdot |2a| \cdot |3a| \cdots \left|\frac{p-1}{2}a\right| \pmod{p}. \]
Now observe that the values |ra| are distinct for r = 1, 2, …, (p − 1)/2. Indeed, we have
\[ |ra| = |sa| \;\Longrightarrow\; ra \equiv \pm sa \pmod{p} \;\Longrightarrow\; r \equiv \pm s \pmod{p}, \]
because a is coprime to p.
This gives r = s, since r and s are positive least residues. But there are exactly (p − 1)/2 of them, so their values are a rearrangement of the integers 1, 2, …, (p − 1)/2. Therefore,
\[ Z \equiv (-1)^n \left(1 \cdot 2 \cdot 3 \cdots \frac{p-1}{2}\right) \pmod{p}. \]
Comparing with our first evaluation, we may cancel out the nonzero factor
\[ 1 \cdot 2 \cdot 3 \cdots \frac{p-1}{2} \]
and we are left with
\[ a^{(p-1)/2} \equiv (-1)^n \pmod{p}. \]
This is the desired result, because by Euler's criterion the left hand side is just an alternative expression for the Legendre symbol \(\left(\frac{a}{p}\right)\).
For any odd primepletabe an integer that iscoprimetop.
LetI⊂(Z/pZ)×{\displaystyle I\subset (\mathbb {Z} /p\mathbb {Z} )^{\times }}be a set such that(Z/pZ)×{\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }}is the disjoint union ofI{\displaystyle I}and−I={−i:i∈I}{\displaystyle -I=\{-i:i\in I\}}.
Then(ap)=(−1)t{\displaystyle \left({\frac {a}{p}}\right)=(-1)^{t}}, wheret=#{j∈I:aj∈−I}{\displaystyle t=\#\{j\in I:aj\in -I\}}.[2]
In the original statement,I={1,2,…,p−12}{\displaystyle I=\{1,2,\dots ,{\frac {p-1}{2}}\}}.
The proof is almost the same.
Gauss's lemma is used in many,[3]: Ch. 1[3]: 9but by no means all, of the known proofs of quadratic reciprocity.
For example,Gotthold Eisenstein[3]: 236used Gauss's lemma to prove that ifpis an odd prime then
and used this formula to prove quadratic reciprocity. By usingellipticrather thancircularfunctions, he proved thecubicandquartic reciprocitylaws.[3]: Ch. 8
Leopold Kronecker[3]: Ex. 1.34used the lemma to show that
Switchingpandqimmediately gives quadratic reciprocity.
It is also used in what are probably the simplest proofs of the "second supplementary law"
\[ \left(\frac{2}{p}\right) = (-1)^{(p^2-1)/8}. \]
Generalizations of Gauss's lemma can be used to compute higher power residue symbols. In his second monograph on biquadratic reciprocity,[4]: §§69–71Gauss used a fourth-power lemma to derive the formula for the biquadratic character of1 +iinZ[i], the ring ofGaussian integers. Subsequently, Eisenstein used third- and fourth-power versions to provecubicandquartic reciprocity.[3]: Ch. 8
Letkbe analgebraic number fieldwithring of integersOk,{\displaystyle {\mathcal {O}}_{k},}and letp⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}}be aprime ideal. Theideal normNp{\displaystyle \mathrm {N} {\mathfrak {p}}}ofp{\displaystyle {\mathfrak {p}}}is defined as the cardinality of the residue class ring. Sincep{\displaystyle {\mathfrak {p}}}is prime this is afinite fieldOk/p{\displaystyle {\mathcal {O}}_{k}/{\mathfrak {p}}}, so the ideal norm isNp=|Ok/p|{\displaystyle \mathrm {N} {\mathfrak {p}}=|{\mathcal {O}}_{k}/{\mathfrak {p}}|}.
Assume that a primitiventhroot of unityζn∈Ok,{\displaystyle \zeta _{n}\in {\mathcal {O}}_{k},}and thatnandp{\displaystyle {\mathfrak {p}}}arecoprime(i.e.n∉p{\displaystyle n\not \in {\mathfrak {p}}}). Then no two distinctnth roots of unity can be congruent modulop{\displaystyle {\mathfrak {p}}}.
This can be proved by contradiction, beginning by assuming that \(\zeta_n^r \equiv \zeta_n^s \pmod{\mathfrak{p}}\), 0 < r < s ≤ n. Let t = s − r, so that \(\zeta_n^t \equiv 1 \pmod{\mathfrak{p}}\) and 0 < t < n. From the definition of roots of unity,
\[ x^n - 1 = \prod_{k=0}^{n-1} \left(x - \zeta_n^k\right), \]
and dividing by x − 1 gives
\[ x^{n-1} + x^{n-2} + \dots + x + 1 = \prod_{k=1}^{n-1} \left(x - \zeta_n^k\right). \]
Letting x = 1 and taking residues mod \(\mathfrak{p}\),
\[ n \equiv \prod_{k=1}^{n-1} \left(1 - \zeta_n^k\right) \pmod{\mathfrak{p}}. \]
Sincenandp{\displaystyle {\mathfrak {p}}}are coprime,n≢0{\displaystyle n\not \equiv 0}modp,{\displaystyle {\mathfrak {p}},}but under the assumption, one of the factors on the right must be zero. Therefore, the assumption that two distinct roots are congruent is false.
Thus the residue classes of \(\mathcal{O}_k/\mathfrak{p}\) containing the powers of ζn form a subgroup of order n of its (multiplicative) group of units, \((\mathcal{O}_k/\mathfrak{p})^{\times} = \mathcal{O}_k/\mathfrak{p} - \{0\}\). Therefore, the order of \((\mathcal{O}_k/\mathfrak{p})^{\times}\) is a multiple of n, and
\[ \mathrm{N}\mathfrak{p} = \left|(\mathcal{O}_k/\mathfrak{p})^{\times}\right| + 1 \equiv 1 \pmod{n}. \]
There is an analogue of Fermat's theorem in \(\mathcal{O}_k\). If \(\alpha \in \mathcal{O}_k\) and \(\alpha \not\in \mathfrak{p}\), then[3]: Ch. 4.1
\[ \alpha^{\mathrm{N}\mathfrak{p} - 1} \equiv 1 \pmod{\mathfrak{p}}, \]
and since \(\mathrm{N}\mathfrak{p} \equiv 1 \pmod{n}\),
\[ \alpha^{(\mathrm{N}\mathfrak{p} - 1)/n} \pmod{\mathfrak{p}} \]
is well-defined and congruent to a unique nth root of unity ζns.
This root of unity is called the nth-power residue symbol for \(\mathcal{O}_k\), and is denoted by
\[ \left(\frac{\alpha}{\mathfrak{p}}\right)_n = \zeta_n^s \equiv \alpha^{(\mathrm{N}\mathfrak{p}-1)/n} \pmod{\mathfrak{p}}. \]
It can be proven that[3]: Prop. 4.1
\[ \left(\frac{\alpha}{\mathfrak{p}}\right)_n = 1 \]
if and only if there is an \(\eta \in \mathcal{O}_k\) such that \(\alpha \equiv \eta^n \pmod{\mathfrak{p}}\).
Letμn={1,ζn,ζn2,…,ζnn−1}{\displaystyle \mu _{n}=\{1,\zeta _{n},\zeta _{n}^{2},\dots ,\zeta _{n}^{n-1}\}}be the multiplicative group of thenth roots of unity, and letA={a1,a2,…,am}{\displaystyle A=\{a_{1},a_{2},\dots ,a_{m}\}}be representatives of the cosets of(Ok/p)×/μn.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }/\mu _{n}.}ThenAis called a1/nsystemmodp.{\displaystyle {\mathfrak {p}}.}[3]: Ch. 4.2
In other words, there aremn=Np−1{\displaystyle mn=\mathrm {N} {\mathfrak {p}}-1}numbers in the setAμ={aiζnj:1≤i≤m,0≤j≤n−1},{\displaystyle A\mu =\{a_{i}\zeta _{n}^{j}\;:\;1\leq i\leq m,\;\;\;0\leq j\leq n-1\},}and this set constitutes a representative set for(Ok/p)×.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.}
The numbers1, 2, … (p− 1)/2, used in the original version of the lemma, are a 1/2 system (modp).
Constructing a1/nsystem is straightforward: letMbe a representative set for(Ok/p)×.{\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.}Pick anya1∈M{\displaystyle a_{1}\in M}and remove the numbers congruent toa1,a1ζn,a1ζn2,…,a1ζnn−1{\displaystyle a_{1},a_{1}\zeta _{n},a_{1}\zeta _{n}^{2},\dots ,a_{1}\zeta _{n}^{n-1}}fromM. Picka2fromMand remove the numbers congruent toa2,a2ζn,a2ζn2,…,a2ζnn−1{\displaystyle a_{2},a_{2}\zeta _{n},a_{2}\zeta _{n}^{2},\dots ,a_{2}\zeta _{n}^{n-1}}Repeat untilMis exhausted. Then{a1,a2, …am}is a1/nsystem modp.{\displaystyle {\mathfrak {p}}.}
Gauss's lemma may be extended to thenth power residue symbol as follows.[3]: Prop. 4.3Letζn∈Ok{\displaystyle \zeta _{n}\in {\mathcal {O}}_{k}}be a primitiventh root of unity,p⊂Ok{\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}}a prime ideal,γ∈Ok,nγ∉p,{\displaystyle \gamma \in {\mathcal {O}}_{k},\;\;n\gamma \not \in {\mathfrak {p}},}(i.e.p{\displaystyle {\mathfrak {p}}}is coprime to bothγandn) and letA= {a1,a2, …,am}be a1/nsystem modp.{\displaystyle {\mathfrak {p}}.}
Then for eachi,1 ≤i≤m, there are integersπ(i), unique (modm), andb(i), unique (modn), such that
{\displaystyle \gamma a_{i}\equiv \zeta _{n}^{b(i)}a_{\pi (i)}{\pmod {\mathfrak {p}}},}
and thenth-power residue symbol is given by the formula
{\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}=\zeta _{n}^{b(1)+b(2)+\cdots +b(m)}.}
The classical lemma for the quadratic Legendre symbol is the special casen= 2,ζ2= −1,A= {1, 2, …, (p− 1)/2},b(k) = 1ifak>p/2,b(k) = 0ifak<p/2.
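For this classical special case the lemma is easy to turn into a short computation. The Python sketch below counts how many of the residues a·1, a·2, …, a·(p − 1)/2 exceed p/2 and returns (−1) raised to that count; the function name is made up for the example, and the final loop simply cross-checks the result against Euler's criterion.

def legendre_via_gauss_lemma(a, p):
    """Legendre symbol (a/p) for an odd prime p with gcd(a, p) = 1,
    computed by the classical form of Gauss's lemma."""
    assert p > 2 and a % p != 0
    n = sum(1 for k in range(1, (p - 1) // 2 + 1) if (a * k) % p > p // 2)
    return (-1) ** n

# Sanity check against Euler's criterion: a^((p-1)/2) mod p is 1 or p-1.
p = 23
for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)
    assert legendre_via_gauss_lemma(a, p) == (1 if euler == 1 else -1)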
The proof of thenth-power lemma uses the same ideas that were used in the proof of the quadratic lemma.
The existence of the integersπ(i)andb(i), and their uniqueness (modm) and (modn), respectively, come from the fact thatAμis a representative set.
Assume thatπ(i) =π(j) =p, i.e.
{\displaystyle \gamma a_{i}\equiv \zeta _{n}^{r}a_{p}{\pmod {\mathfrak {p}}}}
and
{\displaystyle \gamma a_{j}\equiv \zeta _{n}^{s}a_{p}{\pmod {\mathfrak {p}}}.}
Then
{\displaystyle \zeta _{n}^{s-r}\gamma a_{i}\equiv \zeta _{n}^{s}a_{p}\equiv \gamma a_{j}{\pmod {\mathfrak {p}}}.}
Becauseγandp{\displaystyle {\mathfrak {p}}}are coprime, both sides can be divided byγ, giving
{\displaystyle \zeta _{n}^{s-r}a_{i}\equiv a_{j}{\pmod {\mathfrak {p}}},}
which, sinceAis a1/nsystem, impliess=randi=j, showing thatπis a permutation of the set{1, 2, …,m}.
Then on the one hand, by the definition of the power residue symbol,
{\displaystyle \gamma ^{m}=\gamma ^{(\mathrm {N} {\mathfrak {p}}-1)/n}\equiv \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}{\pmod {\mathfrak {p}}},}
and on the other hand, sinceπis a permutation,
{\displaystyle \gamma ^{m}a_{1}a_{2}\cdots a_{m}=(\gamma a_{1})(\gamma a_{2})\cdots (\gamma a_{m})\equiv \zeta _{n}^{b(1)+b(2)+\cdots +b(m)}a_{\pi (1)}a_{\pi (2)}\cdots a_{\pi (m)}=\zeta _{n}^{b(1)+b(2)+\cdots +b(m)}a_{1}a_{2}\cdots a_{m}{\pmod {\mathfrak {p}}},}
so
{\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}a_{1}a_{2}\cdots a_{m}\equiv \zeta _{n}^{b(1)+b(2)+\cdots +b(m)}a_{1}a_{2}\cdots a_{m}{\pmod {\mathfrak {p}}},}
and since for all1 ≤i≤m,aiandp{\displaystyle {\mathfrak {p}}}are coprime,a1a2…amcan be cancelled from both sides of the congruence,
{\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}\equiv \zeta _{n}^{b(1)+b(2)+\cdots +b(m)}{\pmod {\mathfrak {p}}},}
and the theorem follows from the fact that no two distinctnth roots of unity can be congruent (modp{\displaystyle {\mathfrak {p}}}).
LetGbe the multiplicative group of nonzero residue classes inZ/pZ, and letHbe the subgroup {+1, −1}. Consider the following coset representatives ofHinG:1, 2, …, (p− 1)/2.
Applying the machinery of thetransferto this collection of coset representatives, we obtain the transfer homomorphism
which turns out to be the map that sendsato(−1)n, whereaandnare as in the statement of the lemma. Gauss's lemma may then be viewed as a computation that explicitly identifies this homomorphism as being the quadratic residue character.
|
https://en.wikipedia.org/wiki/Gauss%27s_lemma_(number_theory)
|
Peer-to-peer(P2P) computing or networking is adistributed applicationarchitecture that partitions tasks or workloads between peers. Peers are equally privileged,equipotentparticipants in the network, forming a peer-to-peer network ofnodes.[1]In addition, apersonal area network(PAN) is also in nature a type ofdecentralizedpeer-to-peer network typically between two devices.[2]
Peers make a portion of their resources, such as processing power, disk storage, ornetwork bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts.[3]Peers are both suppliers and consumers of resources, in contrast to the traditionalclient–server modelin which the consumption and supply of resources are divided.[4]
While P2P systems had previously been used in manyapplication domains,[5]the architecture was popularized by theInternetfile sharing systemNapster, originally released in 1999.[6]P2P is used in many protocols such asBitTorrentfile sharing over the Internet[7]and inpersonal networkslikeMiracastdisplaying andBluetoothradio.[8]The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts,peer-to-peer as a memerefers to theegalitariansocial networkingthat has emerged throughout society, enabled byInternettechnologies in general.
While P2P systems had previously been used in many application domains,[5]the concept was popularized byfile sharingsystems such as the music-sharing applicationNapster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems".[9]The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the firstRequest for Comments, RFC 1.[10]
Tim Berners-Lee's vision for theWorld Wide Webwas close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures.[11][9][page needed]This contrasts with thebroadcasting-like structure of the web as it has developed over the years.[12][13][14]As a precursor to the Internet,ARPANETwas a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing."[14]
Therefore,Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces adecentralized modelof control.[15]The basic model is aclient–servermodel from the user or client perspective that offers a self-organizing approach to newsgroup servers. However,news serverscommunicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies toSMTPemail in the sense that the core email-relaying network ofmail transfer agentshas a peer-to-peer character, while the periphery ofEmail clientsand their direct connections is strictly a client-server relationship.[16]
In May 1999, with millions more people on the Internet,Shawn Fanningintroduced the music and file-sharing application calledNapster.[14]Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".[14]
A peer-to-peer network is designed around the notion of equalpeernodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network.[17]This model of network arrangement differs from theclient–servermodel where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is theFile Transfer Protocol(FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.
Peer-to-peer networks generally implement some form of virtualoverlay networkon top of the physical network topology, where the nodes in the overlay form asubsetof the nodes in the physical network.[18]Data is still exchanged directly over the underlyingTCP/IPnetwork, but at theapplication layerpeers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks asunstructuredorstructured(or as a hybrid between the two).[19][20][21]
Unstructured peer-to-peer networksdo not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other.[22](Gnutella,Gossip, andKazaaare examples of unstructured P2P protocols).[23]
Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay.[24]Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network.[25][26]
However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses moreCPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.[27]
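As a rough illustration of the flooding behaviour described above, the toy Python sketch below forwards a query to every neighbour until a hop limit (TTL) runs out; the peer names, the random topology, and the TTL value are invented for the example and do not model any particular protocol.

import random

class Peer:
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.files = set()

    def search(self, query, ttl, seen=None):
        """Return the names of peers holding `query`, flooding up to `ttl` hops."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return set()
        seen.add(self.name)
        hits = {self.name} if query in self.files else set()
        for nb in self.neighbours:
            hits |= nb.search(query, ttl - 1, seen)   # forward to every neighbour
        return hits

# Build a random unstructured overlay of 20 peers, each with ~3 random links.
peers = [Peer("peer%d" % i) for i in range(20)]
for p in peers:
    for q in random.sample(peers, 3):
        if q is not p and q not in p.neighbours:
            p.neighbours.append(q)
            q.neighbours.append(p)
peers[7].files.add("rare-file")
# May print an empty set if peer7 happens to lie more than 4 hops away.
print(peers[0].search("rare-file", ttl=4))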
Instructured peer-to-peer networksthe overlay is organized into a specific topology, and the protocol ensures that any node can efficiently[28]search the network for a file/resource, even if the resource is extremely rare.[23]
The most common type of structured P2P networks implement adistributed hash table(DHT),[4][29]in which a variant ofconsistent hashingis used to assign ownership of each file to a particular peer.[30][31]This enables peers to search for resources on the network using ahash table: that is, (key,value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.[32][33]
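The key-ownership idea can be sketched in a few lines of Python: node identifiers and keys are hashed onto the same ring, and a key belongs to the first node encountered clockwise from the key's position. This is only the consistent-hashing core; real DHTs such as Chord or Kademlia add routing tables and replication on top, and the node names here are placeholders.

import hashlib
from bisect import bisect_right

def ring_position(item, bits=32):
    """Map a string onto the hash ring [0, 2**bits)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % (2 ** bits)

class ConsistentHashRing:
    def __init__(self, nodes):
        self.points = sorted((ring_position(n), n) for n in nodes)

    def owner(self, key):
        """Return the node responsible for storing `key`."""
        pos = ring_position(key)
        idx = bisect_right([point for point, _ in self.points], pos)
        return self.points[idx % len(self.points)][1]   # wrap around the ring

ring = ConsistentHashRing(["nodeA", "nodeB", "nodeC", "nodeD"])
print(ring.owner("some-file.iso"))   # the node that would hold this (key, value) pair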
However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors[34]that satisfy specific criteria. This makes them less robust in networks with a high rate ofchurn(i.e. with large numbers of nodes frequently joining and leaving the network).[26][35]More recent evaluation of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions such as high cost of advertising/discovering resources and static and dynamic load imbalance.[36]
Notable distributed networks that use DHTs includeTixati, an alternative toBitTorrent'sdistributed tracker, theKad network, theStorm botnet, andYaCy. Some prominent research projects include theChord project,Kademlia,PAST storage utility,P-Grid, a self-organized and emerging overlay network, andCoopNet content distribution system.[37]DHT-based networks have also been widely utilized for accomplishing efficient resource discovery[38][39]forgrid computingsystems, as they aid in resource management and scheduling of applications.
Hybrid models are a combination of peer-to-peer andclient–servermodels.[40]A common hybrid model is to have a central server that helps peers find each other.Spotifywas an example of a hybrid model [until 2014].[41]There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.[42]
CoopNet (Cooperative Networking)was a proposed system for off-loading serving to peers who have recentlydownloadedcontent, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working atMicrosoft ResearchandCarnegie Mellon University.[43][44]When aserverexperiences an increase in load, it redirects incoming peers to other peers who have agreed tomirrorthe content, thus off-loading work from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely to be in outgoing bandwidth than in theCPU, hence its server-centric design. It assigns peers to other peers that are 'close inIP' (in the same prefix range) in an attempt to exploit locality. If multiple peers holding the samefileare found, the node is directed to choose the fastest of its neighbours.Streaming mediais transmitted by having clientscachethe previous stream, and then transmit it piece-wise to new nodes.
Peer-to-peer systems pose unique challenges from acomputer securityperspective. Like any other form ofsoftware, P2P applications can containvulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable toremote exploits.[45]
Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", ordenial of serviceattacks. Examples of common routing attacks include "incorrect lookup routing" whereby malicious nodes deliberately forward requests incorrectly or return false results, "incorrect routing updates" where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information, and "incorrect routing network partition" where when new nodes are joining they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes.[45]
The prevalence ofmalwarevaries between different peer-to-peer protocols.[46]Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on thegnutellanetwork contained some form of malware, whereas only 3% of the content onOpenFTcontained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on theKazaanetwork found that 15% of the 500,000 file sample taken were infected by one or more of the 365 differentcomputer virusesthat were tested for.[47]
Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on theFastTracknetwork, theRIAAmanaged to introduce faked chunks into downloads and downloaded files (mostlyMP3files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing.[48]Consequently, the P2P networks of today have seen an enormous increase of their security and file verification mechanisms. Modernhashing,chunk verificationand different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.[49]
The decentralized nature of P2P networks increases robustness because it removes thesingle point of failurethat can be inherent in a client–server based system.[50]As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down.
There are both advantages and disadvantages in P2P networks related to the topic of databackup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces.
For example,YouTubehas been pressured by theRIAA,MPAA, and entertainment industry to filter out copyrighted content. Although server-client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.[51]
In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry,RIAA,MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.[52]
In P2P networks, clients both provide and use resources. This means that unlike client–server systems, the content-serving capacity of peer-to-peer networks can actuallyincreaseas more users begin to access the content (especially with protocols such asBitTorrentthat require users to share; see a performance measurement study[53]). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.[54][55]
Peer-to-peer file sharingnetworks such asGnutella,G2, and theeDonkey networkhave been useful in popularizing peer-to-peer technologies. These advancements have paved the way forpeer-to-peer content delivery networksand services, including distributed caching systems like Correli Caches to enhance performance.[56]Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing ofLinux distributionsand various games throughfile sharingnetworks.
Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts withcopyrightlaw.[57]Two major cases areGrokstervs RIAAandMGM Studios, Inc. v. Grokster, Ltd.[58]In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement.
TheP2PTVandPDTPprotocols are used in various peer-to-peer applications. Someproprietarymultimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients.Peercastingis employed for multicasting streams. Additionally, a project calledLionShare, undertaken byPennsylvania State University, MIT, andSimon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program,Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network.
Datis a distributed version-controlled publishing platform.I2Pis anoverlay networkused to browse the Internetanonymously. Unlike the related I2P, theTor networkis not itself peer-to-peer[dubious–discuss]; however, it can enable peer-to-peer applications to be built on top of it viaonion services. TheInterPlanetary File System(IPFS) is aprotocoland network designed to create acontent-addressable, peer-to-peer method of storing and sharinghypermedia, with nodes in the IPFS network forming adistributed file system.Jamiis a peer-to-peer chat andSIPapp.JXTAis a peer-to-peer protocol designed for theJava platform.Netsukukuis aWireless community networkdesigned to be independent from the Internet.Open Gardenis a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth.
Resilio Syncis a directory-syncing app. Research includes projects such as theChord project, thePAST storage utility, theP-Grid, and theCoopNet content distribution system.Secure Scuttlebuttis a peer-to-peergossip protocolcapable of supporting many different types of applications, primarilysocial networking.Syncthingis also a directory-syncing app.Tradepall andM-commerceapplications are designed to power real-time marketplaces. TheU.S. Department of Defenseis conducting research on P2P networks as part of its modern network warfare strategy.[59]In May 2003,Anthony Tether, then director ofDARPA, testified that the United States military uses P2P networks.WebTorrentis a P2Pstreamingtorrent clientinJavaScriptfor use inweb browsers, as well as in theWebTorrent Desktopstandalone version that bridges WebTorrent andBitTorrentserverless networks.Microsoft, inWindows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30%-50% reduction in Internet bandwidth usage.[60]Artisoft'sLANtasticwas built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously.Hotline CommunicationsHotline Client was built with decentralized servers and tracker software dedicated to any type of files and continues to operate today.Cryptocurrenciesare peer-to-peer-baseddigital currenciesthat useblockchains
Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem").
Freeloading can have a profound impact on the network and in some cases can cause the community to collapse.[61]In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance".[62]Studying the social attributes of P2P networks is challenging due to large populations of turnover, asymmetry of interest and zero-cost identity.[62]A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.[63][45]
Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered.[64]Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction.
Some peer-to-peer networks (e.g.Freenet) place a heavy emphasis onprivacyandanonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed.Public key cryptographycan be used to provideencryption,data validation, authorization, and authentication for data/messages.Onion routingand othermix networkprotocols (e.g. Tarzan) can be used to provide anonymity.[65]
Perpetrators oflive streaming sexual abuseand othercybercrimeshave used peer-to-peer platforms to carry out activities with anonymity.[66]
Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over the involvement with sharing copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surroundingcopyrightlaw.[57]Two major cases areGrokstervs RIAAandMGM Studios, Inc. v. Grokster, Ltd.[58]In both of the cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material.
To establish criminal liability for the copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage.[67]Fair useexceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders. These documents are usually news reporting or along the lines of research and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.[68]
A study ordered by theEuropean Unionfound that illegal downloadingmaylead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove effects of false and misremembered responses.[69][70][71]
Peer-to-peer applications present one of the core issues in thenetwork neutralitycontroversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidthusage.[72]Compared to Web browsing, e-mail or many other uses of the internet, where data is only transferred in short intervals and relative small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007,Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such asBitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic.
Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards aclient–server-based application architecture. The client–server model provides financial barriers-to-entry to small publishers and individuals, and can be less efficient for sharing large files. As a reaction to thisbandwidth throttling, several P2P applications started implementing protocol obfuscation, such as theBitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random.[73]The ISP's solution to the high bandwidth isP2P caching, where an ISP stores the part of files most accessed by P2P clients in order to save access to the Internet.
Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work."[74]If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."[74]
Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe.[75]
Besides the simulators listed above, work has also been done with the open-source ns-2 network simulator; for example, free-rider detection and punishment has been explored using ns-2.[76]
|
https://en.wikipedia.org/wiki/Social_peer-to-peer_processes
|
Anoblivious pseudorandom function(OPRF) is acryptographicfunction, similar to akeyed-hash function, but with the distinction that in an OPRFtwo partiescooperate tosecurely computeapseudorandom function(PRF).[1]
Specifically, an OPRF is apseudorandom functionwith the following properties: the two parties jointly compute the value of the PRF on the first party's input under the second party's secret key; the first party learns that output but not the key; and the second party learns neither the input nor the output.
The function is called anobliviouspseudorandom function, because the second party isobliviousto the function's output. This party learns no new information from participating in the calculation of the result.
However, because it is only the second party that holds the secret, the first party must involve the second party to calculate the output of thepseudorandom function(PRF). This requirement enables the second party to implementaccess controls,throttling,audit loggingand other security measures.
While conventionalpseudorandom functionscomputed by a single party were first formalized in 1986,[2]it was not until 1997 that thefirst two-party oblivious pseudorandom functionwas described in the literature,[3]but the term "oblivious pseudorandom function" was not coined until 2005 by some of the same authors.[4]
OPRFs have many useful applications incryptographyandinformation security.
These includepassword-based key derivation, password-basedkey agreement, password-hardening, untraceableCAPTCHAs,password management,homomorphickey management, andprivate set intersection.[1][5]
An OPRF can be viewed as a special case ofhomomorphic encryption, as it enables another party to compute a function over anencrypted inputand produce a result (which remains encrypted) and therefore it learns nothing about what it computed.
Most forms of password-based key derivation suffer from the fact that passwords usually contain asmall amount of randomness(or entropy) compared to full-length 128- or 256-bit encryption keys. This makes keys derived from passwords vulnerable tobrute-force attacks.
However, this threat can be mitigated by using the output of an OPRF that takes the password as input.
If the secret key used in the OPRF is high-entropy, then the output of the OPRF will also be high-entropy. This thereby solves the problem of the password being low-entropy, and therefore vulnerable tocrackingviabrute force.
This technique is calledpassword hardening.[6]It fills a similar purpose askey stretching, but password hardening adds significantly more entropy.
Further, since each attempt at guessing a password that is hardened in this way requires interaction with a server, it prevents anoffline attack, and thus enables the user or system administrator to be alerted to any password-cracking attempt.
The recovered key may then be used for authentication (e.g. performing aPKI-basedauthentication using adigital certificateandprivate key), or may be used to decrypt sensitive content, such as an encryptedfileorcrypto wallet.
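As a very small illustration of this flow (not a concrete protocol), the derivation can be sketched as below; run_oprf_with_server stands in for a complete OPRF protocol run, such as the one sketched later in this article, and the label string is an arbitrary choice for the example.

import hashlib

def derive_key(password, run_oprf_with_server):
    # The OPRF output mixes the server's high-entropy key into the result, so
    # guessing the password offline is no longer enough to recover the key.
    oprf_output = run_oprf_with_server(password)
    return hashlib.sha256(b"password-hardening-v1" + oprf_output).digest()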
Apasswordcan be used as the basis of akey agreementprotocol, to establish temporary session keys and mutually authenticate the client and server. This is known as apassword-authenticated key exchangeorPAKE.
Inbasic authentication, the server learns the user's password during the course of the authentication. If the server is compromised, this exposes the user's password which compromises the security of the user.
With PAKE, however, the user's password is not sent to the server, preventing it from falling into an eavesdropper's hands. It can be seen as an authentication via azero-knowledge password proof.
Various 'augmented forms' of PAKE incorporate an oblivious pseudorandom function so that the server never sees the user's password during the authentication, but nevertheless the server is able to verify that the client is in possession of the correct password. This is done by assuming that only a client that knows the correct password can use the OPRF to derive the correct key.
An example of an augmented PAKE that uses an OPRF in this way isOPAQUE.[7][8][9][10]
Recently, OPRFs have been applied to password-based key exchange toback upencrypted chat histories inWhatsApp[11]andFacebook Messenger.[12]A similar use case is planned to be added inSignal Messenger.[13]
A CAPTCHA or "Completely Automated PublicTuring testto tell Computers and Humans Apart"[14]is a mechanism to prevent automated robots (bots) from accessing websites. Lately, mechanisms for running CAPTCHA tests have been centralized to services such asGoogleandCloudFlare, but this can come at the expense of user privacy.
Recently, CloudFlare developed a privacy-preserving technology called "Privacy Pass".[15]This technology is based on OPRFs, and enables the client's browser to obtain passes from CloudFlare and then present them to bypass CAPTCHA tests. Due to the fact that the CloudFlare service is oblivious to which passes were provided to which users, there is no way it can correlate users with the websites they visit. This prevents tracking of the user, and thereby preserves the user's privacy.
Apassword manageris software or a service that holds potentially many different account credentials on behalf of the user. Access to the password manager is thus highly sensitive: an attack could expose many credentials to the attacker.
The first proposal for a password manager based on OPRFs was SPHINX.[16]It uses two devices (such as the user's laptop and phone) which collaborate to compute a password for a given account (as identified by the username and website's domain name). Because the user's two devices exchange values according to an OPRF protocol, intercepting the connection between them does not reveal anything about the password or the internal values each device used to compute it. Requiring two devices to compute any password also ensures that a compromise of either device does not allow the attacker to compute any of the passwords. A downside of this approach is that the user always needs access to both devices whenever they want to log in to any of their accounts.
An OPRF is used by the Password Monitor inMicrosoft Edgeto allow querying a server for whether a credential (which the user saved in the browser) is known to be compromised, without needing to reveal this credential to the server.[17]
Similarly to securing passwords managed by a password manager, an OPRF can be used to enhance the security of akey-management system.
For example, an OPRF enables a key-management system to issuecryptographic keysto authenticated and authorized users, without ever seeing, learning, or being in a position to learn, any of the keys it provides to users.[18]
Private set intersectionis a cryptographic technique that enables two or more parties to compare their private sets to determine which entries they share in common, but without disclosing any entries which they do not hold in common.
For example, private set intersection could be used by two users of a social network to determine which friends they have in common, without revealing the identities of friends they do not have in common. To do this, they could share the outputs of an OPRF applied to the friend's identity (e.g., the friend's phone number or e-mail address).
The output of the OPRF cannot be inverted to determine the identity of the user, and since the OPRF may berate-limited, it will prevent a brute-force attack (e.g., iterating over all possible phone numbers).[19]
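The comparison step can be sketched as follows; oprf_output_of stands in for a full OPRF evaluation of an identifier (run jointly with the key holder, so neither side sees raw identifiers it does not already hold), and the variable names are illustrative.

def shared_contacts(my_contacts, their_oprf_outputs, oprf_output_of):
    # Map each of our contacts to its OPRF output, then keep the contacts whose
    # output also appears in the other party's list of opaque outputs.
    mine = {oprf_output_of(c): c for c in my_contacts}
    others = set(their_oprf_outputs)
    return [contact for output, contact in mine.items() if output in others]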
There are various mathematical functions that can serve as the basis to implement an OPRF.
Examples include methods fromasymmetric cryptography:elliptic curvepoint multiplication,Diffie–Hellmanmodular exponentiation over a prime, and anRSAsignature calculation.
Elliptic curves and prime order fields can be used to implement an OPRF. The essential idea is that the first party (the client), must cryptographicallyblindthe input prior sending it to the second party.
This blinding can be viewed as a form ofencryptionthat survives the computation performed by the second party. Therefore, the first party candecryptwhat it receives from the second party to "unblind" it, and thereby receive the same result it would have received had the input not been blinded.
When the second party receives the blinded input, it performs a computation on it using asecret. The result of this computation must not reveal the secret.
For example, the second party may perform apoint multiplicationof a point on an elliptic curve. Or it may perform amodular exponentiationmodulo a largeprime.
The first party, upon receipt of the result, and with knowledge of the blinding-factor, computes a function that removes the blinding factor's influence on the result returned by the second party. This 'unblinds' the result, revealing the output of the OPRF (or an intermediate result which is then used by the client to compute the output of the OPRF, for example, by hashing this intermediate result).
The following ispseudocodefor the calculations performed by the client and server using an elliptic-curve–based OPRF.
The following code represents calculations performed by the client, or the first party.
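A minimal, self-contained Python sketch of the client's side is given below. It works in a prime-order subgroup of the integers modulo a safe prime rather than on an elliptic curve (the article notes either setting can be used); the toy parameters are far too small for real use, and hashing into the subgroup by squaring is one simple choice among several. All names are illustrative.

import hashlib
import secrets

# Toy safe-prime group: P = 2*Q + 1, work in the subgroup of order Q.
P = 1019
Q = (P - 1) // 2

def hash_to_group(data):
    """Hash the input onto the order-Q subgroup (squaring maps into it)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1) + 1
    return pow(h, 2, P)

def client_blind(password):
    """Step 1 (client): blind the input before sending it to the server."""
    r = secrets.randbelow(Q - 1) + 1          # blinding factor, kept secret
    blinded = pow(hash_to_group(password), r, P)
    return r, blinded

def client_finalize(password, r, evaluated):
    """Step 3 (client): unblind the server's answer and hash it to the OPRF output."""
    r_inv = pow(r, -1, Q)                      # multiplicative inverse of the blind
    unblinded = pow(evaluated, r_inv, P)       # equals H(password)^k, k = server key
    return hashlib.sha256(password + unblinded.to_bytes(2, "big")).digest()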
Notes:
The client computes themultiplicative inverseof the blinding factor. This enables it to reverse the effect of the blinding factor on the result, and obtain the result the server would have returned had the client not blinded the input.
As a final step, to complete the OPRF, the client performs aone-way hashon the result to ensure the OPRF output isuniform, completelypseudorandom, and non-invertible.
The following code represents calculations performed by the server, or the second party.
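A matching sketch of the server's side, under the same assumptions and toy parameters as the client sketch above; the server holds only the long-term secret and applies it to whatever blinded element it receives.

import secrets

P = 1019
Q = (P - 1) // 2
SERVER_KEY = secrets.randbelow(Q - 1) + 1      # long-term secret k

def server_evaluate(blinded):
    """Step 2 (server): apply the secret to the blinded input."""
    return pow(blinded, SERVER_KEY, P)

Running client_blind, then server_evaluate on the blinded value, and finally client_finalize on the server's answer reproduces the round trip described in the surrounding text; only the client ever sees the password or the final output, and only the server ever sees SERVER_KEY.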
The server receives theblinded inputvalue from the client, and may perform authentication, access control, request throttling, or other security measures before processing the request. It then uses its own secret to compute the blinded output, for example by a point multiplication of the blinded point by the secret (or, in a prime-order field, a modular exponentiation of the blinded value by the secret).
It then returns the response, which is the blinded output, to the client.
Notes:
Because the elliptic curve point multiplication is computationally difficult to invert (like thediscrete logarithmproblem), the client cannot feasibly learn the server's secret from the response it produces.
Note, however, that this function is vulnerable toattacksbyquantum computers. A client or third party in possession of a quantum computer could solve for the server's secret knowing the result it produced for a given input.
When the output of ablind signaturescheme is deterministic, it can be used as the basis of building an OPRF, e.g. simply by hashing the resulting signature.
This is because due to the blinding, the party computing the blind signature learns neither the input (what is being signed) nor the output (the resultingdigital signature).
The OPRF construction can be extended in various ways. These include: verifiable, partially oblivious, threshold-secure, and post-quantum–secure versions.
Many applications require the ability of the first party to verify the OPRF output was computed correctly. For example, when using the output as a key to encrypt data. If the wrong value is computed, that encrypted data may be lost forever.
Fortunately, most OPRFs support verifiability. For example, when usingRSAblind signatures as the underlying construction, the client can, with the public key, verify the correctness of the resultingdigital signature.
When using OPRFs based onelliptic curveorDiffie–Hellman, knowing the public keyy = gxit is possible to use a second request to the OPRF server to create azero-knowledge proofof correctness for the previous result.[20][21]
One modification to an OPRF is called a partially oblivious PRF, or P-OPRF.
Specifically, a P-OPRF is any function with the following properties: the PRF is computed over two inputs, one hidden and one exposed input (E); the second party, which holds the secret, learns the exposed input (E) but learns nothing about the hidden input or the output; and the first party learns the output but not the secret.
The use case for this is when the server needs to implement specific throttling or access controls on the exposed input (E), for example, (E) could be a file path, or user name, for which the server enforcesaccess controls, and only services requests when the requesting user is authorized.
A P-OPRF based onbilinear pairingswas used by the "Pythia PRF Service".[22]
Recently, versions of P-OPRFs not based on pairings have appeared, such as a version standardized in theIETFRFC9497,[21]as well in its more recent improvement.[23]
For even greater security, it is possible to"thresholdize" the server, such that the secret (S) is not held by any individual server, and so the compromise of any single server, or a set of servers numbering below some defined threshold, will not expose the secret.
This can be done by having each server be a shareholder in asecret-sharing scheme. Instead of using its secret to compute the result, each server uses itsshareof the secret to perform the computation.
The client then takes some subset of the server's computed results, and combines them, for example by computing a protocol known asinterpolation in the exponent. This recovers the same result as if the client had interacted with a single server which has the full secret.
This algorithm is used in various distributed cryptographic protocols.[24]
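The recombination step can be sketched as follows, reusing the toy prime-order group from the earlier sketches: each server returns the blinded element raised to its Shamir share, and the client multiplies the partial results together after raising each to the appropriate Lagrange coefficient. The parameter names are illustrative.

P = 1019
Q = (P - 1) // 2

def lagrange_at_zero(indices, i):
    """Lagrange coefficient lambda_i for interpolating at x = 0, modulo Q."""
    num, den = 1, 1
    for j in indices:
        if j != i:
            num = (num * (-j)) % Q
            den = (den * (i - j)) % Q
    return num * pow(den, -1, Q) % Q

def combine(partial_results):
    """partial_results: dict {server_index: blinded**k_i mod P} for one subset of servers."""
    indices = list(partial_results)
    out = 1
    for i, value in partial_results.items():
        out = out * pow(value, lagrange_at_zero(indices, i), P) % P
    return out   # equals blinded**k, as if a single server held the whole key k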
Finding efficientpost-quantum–secure implementations of OPRFs is an area of active research.[25]
"With the exception of OPRFs based on symmetric primitives, all known efficient OPRF
constructions rely on discrete-log- or factoring-type hardness assumptions. These assumptions are known to fall with the rise of quantum computers."[1]
Two possible exceptions arelattice-basedOPRFs[26]andisogeny-basedOPRFs,[27]but more research is required to improve their efficiency and establish their security. Recent attacks on isogenies raise doubts on the security of the algorithm.[28]
A more secure, but less efficient approach to realize a post-quantum–secure OPRF is to use asecure two-party computationprotocol to compute a PRF using asymmetric-keyconstruction, such asAESorHMAC.
|
https://en.wikipedia.org/wiki/Oblivious_Pseudorandom_Function
|
Intelecommunication, aconvolutional codeis a type oferror-correcting codethat generates parity symbols via the sliding application of aboolean polynomialfunction to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitatestrellisdecoding using a time-invariant trellis. Time invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity.
The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes. This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded. Convolutional codes are often characterized by the basecode rateand the depth (or memory) of the encoder[n,k,K]{\displaystyle [n,k,K]}. The base code rate is typically given asn/k{\displaystyle n/k}, wherenis the raw input data rate andkis the data rate of output channel encoded stream.nis less thankbecause channel coding inserts redundancy in the input bits. The memory is often called the "constraint length"K, where the output is a function of the current input as well as the previousK−1{\displaystyle K-1}inputs. The depth may also be given as the number of memory elementsvin the polynomial or the maximum possible number of states of the encoder (typically:2v{\displaystyle 2^{v}}).
Convolutional codes are often described as continuous. However, it may also be said that convolutional codes have arbitrary block length, rather than being continuous, since most real-world convolutional encoding is performed on blocks of data. Convolutionally encoded block codes typically employ termination. The arbitrary block length of convolutional codes can also be contrasted to classicblock codes, which generally have fixed block lengths that are determined by algebraic properties.
The code rate of a convolutional code is commonly modified viasymbol puncturing. For example, a convolutional code with a 'mother' code raten/k=1/2{\displaystyle n/k=1/2}may be punctured to a higher rate of, for example,7/8{\displaystyle 7/8}simply by not transmitting a portion of code symbols. The performance of a punctured convolutional code generally scales well with the amount of parity transmitted. The ability to perform economical soft decision decoding on convolutional codes, as well as the block length and code rate flexibility of convolutional codes, makes them very popular for digital communications.
Convolutional codes were introduced in 1955 byPeter Elias. It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay. In 1967,Andrew Viterbidetermined that convolutional codes could be maximum-likelihood decoded with reasonable complexity using time invariant trellis based decoders — theViterbi algorithm. Other trellis-based decoder algorithms were later developed, including theBCJRdecoding algorithm.
Recursive systematic convolutional codes were invented byClaude Berrouaround 1991. These codes proved especially useful for iterative processing including the processing of concatenated codes such asturbo codes.[1]
Using the "convolutional" terminology, a classic convolutional code might be considered aFinite impulse response(FIR) filter, while a recursive convolutional code might be considered anInfinite impulse response(IIR) filter.
Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such asdigital video, radio,mobile communications(e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)[3][4]) andsatellite communications.[5]These codes are often implemented inconcatenationwith a hard-decision code, particularlyReed–Solomon. Prior toturbo codessuch constructions were the most efficient, coming closest to theShannon limit.
To convolutionally encode data, start withkmemory registers, each holding one input bit. Unless otherwise specified, all memory registers start with a value of 0. The encoder hasnmodulo-2adders(a modulo 2 adder can be implemented with a singleBooleanXOR gate, where the logic is:0+0 = 0,0+1 = 1,1+0 = 1,1+1 = 0), andngenerator polynomials— one for each adder (see figure below). An input bitm1is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputsnsymbols. These symbols may be transmitted or punctured depending on the desired code rate. Nowbit shiftall register values to the right (m1moves tom0,m0moves tom−1) and wait for the next input bit. If there are no remaining input bits, the encoder continues shifting until all registers have returned to the zero state (flush bit termination).
The figure below is a rate1⁄3(m⁄n) encoder with constraint length (k) of 3. Generator polynomials areG1= (1,1,1),G2= (0,1,1), andG3= (1,0,1). Therefore, output bits are calculated (modulo 2) as follows:
n1=m1+m0+m−1
n2=m0+m−1
n3=m1+m−1.
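A small Python sketch of this encoder is given below; it shifts each input bit into the register, evaluates the three generator polynomials modulo 2, and appends two flush bits so the encoder returns to the all-zero state. The function name and sample input are made up for the example.

def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    k = len(generators[0])                 # constraint length
    state = [0] * k                        # state[0] = m1 (newest), state[2] = m-1
    out = []
    for bit in list(bits) + [0] * (k - 1): # data bits followed by flush bits
        state = [bit] + state[:-1]         # shift the new bit in
        for g in generators:               # one output symbol per generator
            out.append(sum(s * c for s, c in zip(state, g)) % 2)
    return out

print(conv_encode([1, 0, 1, 1]))           # 3 output bits per input (and flush) bit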
Convolutional codes can be systematic and non-systematic:
Non-systematic convolutional codes are more popular due to better noise immunity. It relates to the free distance of the convolutional code.[6]
The encoder on the picture above is anon-recursiveencoder. Here's an example of a recursive one and as such it admits a feedback structure:
The example encoder issystematicbecause the input data is also used in the output symbols (Output 2). Codes with output symbols that do not include the input data are callednon-systematic.
Recursive codes are typically systematic and, conversely, non-recursive codes are typically non-systematic. This is not a strict requirement, but a common practice.
The example encoder in Img. 2. is an 8-state encoder because the 3 registers will create 8 possible encoder states (23). A corresponding decoder trellis will typically use 8 states as well.
Recursive systematic convolutional (RSC) codes have become more popular due to their use in Turbo Codes. Recursive systematic codes are also referred to as pseudo-systematic codes.
Other RSC codes and example applications include:
Useful forLDPCcode implementation and as inner constituent code forserial concatenated convolutional codes(SCCC's).
Useful for SCCC's and multidimensional turbo codes.
Useful as constituent code in low error rate turbo codes for applications such as satellite links. Also suitable as SCCC outer code.
A convolutional encoder is called so because it performs aconvolutionof the input stream with the encoder'simpulse responses:
{\displaystyle y_{j}=h_{j}*x,}
wherexis an input sequence,yjis a sequence from outputj,hjis an impulse response for outputjand∗{\displaystyle {*}}denotes convolution.
A convolutional encoder is a discretelinear time-invariant system. Every output of an encoder can be described by its owntransfer function, which is closely related to the generator polynomial. An impulse response is connected with a transfer function throughZ-transform.
Transfer functions for the first (non-recursive) encoder are:
{\displaystyle H_{1}(z)=1+z^{-1}+z^{-2},}
{\displaystyle H_{2}(z)=z^{-1}+z^{-2},}
{\displaystyle H_{3}(z)=1+z^{-2}.}
Transfer functions for the second (recursive) encoder are:
Definemby
where, for anyrational functionf(z)=P(z)/Q(z){\displaystyle f(z)=P(z)/Q(z)\,},
Thenmis the maximum of thepolynomial degreesof the
Hi(1/z){\displaystyle H_{i}(1/z)\,}, and theconstraint lengthis defined asK=m+1{\displaystyle K=m+1\,}. For instance, in the first example the constraint length is 3, and in the second the constraint length is 4.
A convolutional encoder is afinite state machine. An encoder withnbinary cells will have 2nstates.
Imagine that the encoder (shown on Img.1, above) has '1' in the left memory cell (m0), and '0' in the right one (m−1). (m1is not really a memory cell because it represents a current value). We will designate such a state as "10". According to the input bit, the encoder at the next turn can convert either to the "01" state or the "11" state. One can see that not all transitions are possible (e.g., the encoder can't convert from the "10" state to the "00" state, or even stay in the "10" state).
All possible transitions can be shown as below:
An actual encoded sequence can be represented as a path on this graph. One valid path is shown in red as an example.
This diagram gives us an idea aboutdecoding: if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearestcorrect(fitting the graph) sequence. The real decoding algorithms exploit this idea.
Thefree distance[7](d) is the minimalHamming distancebetween different encoded sequences. Thecorrecting capability(t) of a convolutional code is the number of errors that can be corrected by the code. It can be calculated as
{\displaystyle t=\left\lfloor {\frac {d-1}{2}}\right\rfloor .}
Since a convolutional code doesn't use blocks, processing instead a continuous bitstream, the value oftapplies to a quantity of errors located relatively near to each other. That is, multiple groups ofterrors can usually be fixed when they are relatively far apart.
Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder. The fact that errors appear as "bursts" should be accounted for when designing aconcatenated codewith an inner convolutional code. The popular solution for this problem is tointerleavedata before convolutional encoding, so that the outer block (usuallyReed–Solomon) code can correct most of the errors.
Severalalgorithmsexist for decoding convolutional codes. For relatively small values ofk, theViterbi algorithmis universally used as it providesmaximum likelihoodperformance and is highly parallelizable. Viterbi decoders are thus easy to implement inVLSIhardware and in software on CPUs withSIMDinstruction sets.
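To make the trellis search concrete, below is a compact hard-decision Viterbi decoder in Python for the classic rate-1/2, K = 3 code with generators (1,1,1) and (1,0,1) (octal 7,5), together with a matching toy encoder. This is an illustrative sketch under those assumptions, not an optimized or soft-decision implementation.

GENS = ((1, 1, 1), (1, 0, 1))

def expected_output(bit, state):
    """Output pair produced when `bit` enters the encoder in `state` = (s1, s0)."""
    window = (bit,) + state
    return tuple(sum(w * g for w, g in zip(window, gen)) % 2 for gen in GENS)

def next_state(bit, state):
    return (bit, state[0])

def conv_encode_r12(bits):
    state = (0, 0)
    out = []
    for bit in list(bits) + [0, 0]:        # two flush bits terminate the trellis
        out.extend(expected_output(bit, state))
        state = next_state(bit, state)
    return out

def viterbi_decode(received):
    """received: flat list of hard bits, two per input bit (flush bits included)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):
        symbol = tuple(received[i:i + 2])
        new_metric = {s: float("inf") for s in states}
        new_paths = {}
        for state in states:
            if metric[state] == float("inf"):
                continue
            for bit in (0, 1):
                nxt = next_state(bit, state)
                dist = sum(a != b for a, b in zip(symbol, expected_output(bit, state)))
                cand = metric[state] + dist
                if cand < new_metric[nxt]:     # keep the survivor path into nxt
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best][:-2]                    # drop the two flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode_r12(msg)
coded[3] ^= 1                                  # introduce a single channel error
assert viterbi_decode(coded) == msg            # the error is corrected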
Longer constraint length codes are more practically decoded with any of severalsequential decodingalgorithms, of which theFanoalgorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in thePioneer programof the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with largeReed–Solomon error correctioncodes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.
Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of theSoft output Viterbi algorithm.Maximum a posteriori(MAP) soft decisions for each bit can be obtained by use of theBCJR algorithm.
In practice, predefined convolutional code structures obtained from published research are used in industry; this avoids the risk of selecting a catastrophic convolutional code (one that causes a larger number of errors).
An especially popular Viterbi-decoded convolutional code, used at least since theVoyager program, has a constraint lengthKof 7 and a raterof 1/2.[12]
Mars Pathfinder,Mars Exploration Roverand theCassini probeto Saturn use aKof 15 and a rate of 1/6; this code performs about 2 dB better than the simplerK=7{\displaystyle K=7}code at a cost of 256× in decoding complexity (compared to Voyager mission codes).
The convolutional code with a constraint length of 2 and a rate of 1/2 is used inGSMas an error correction technique.[13]
Convolutional code with any code rate can be designed based on polynomial selection;[15]however, in practice, a puncturing procedure is often used to achieve the required code rate.Puncturingis a technique used to make am/nrate code from a "basic" low-rate (e.g., 1/n) code. It is achieved by deleting of some bits in the encoder output. Bits are deleted according to apuncturing matrix. The following puncturing matrices are the most frequently used:
For example, if we want to make a code with rate 2/3 using the appropriate matrix from the above table, we take the basic (rate-1/2) encoder output and, in each period of two input bits, transmit only the first bit from the first branch but both bits from the second branch. The specific order of transmission is defined by the respective communication standard.
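A short Python sketch of that thinning step is given below; the pattern ((1, 0), (1, 1)) encodes "keep the first branch only at the first time instant of each period, keep the second branch at both", which is a common choice for rate 2/3, though actual standards fix their own matrices and transmission order.

def puncture(coded, pattern=((1, 0), (1, 1))):
    """Drop bits from an interleaved (branch1, branch2, branch1, ...) stream."""
    period = len(pattern[0])
    branches = len(pattern)
    out = []
    for i in range(0, len(coded), branches):
        t = (i // branches) % period                 # position within the period
        for b in range(branches):
            if i + b < len(coded) and pattern[b][t]:
                out.append(coded[i + b])
    return out

# Per 2 input bits the mother code emits 4 bits and puncturing keeps 3 -> rate 2/3.
print(puncture([1, 1, 0, 1, 1, 0, 0, 0]))   # keeps the bits at indices 0, 1, 3, 4, 5, 7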
Punctured convolutional codes are widely used in thesatellite communications, for example, inIntelsatsystems andDigital Video Broadcasting.
Punctured convolutional codes are also called "perforated".
Simple Viterbi-decoded convolutional codes are now giving way toturbo codes, a new class of iterated short convolutional codes that closely approach the theoretical limits imposed byShannon's theoremwith much less decoding complexity than the Viterbi algorithm on the long convolutional codes that would be required for the same performance.Concatenationwith an outer algebraic code (e.g.,Reed–Solomon) addresses the issue oferror floorsinherent to turbo code designs.
|
https://en.wikipedia.org/wiki/Convolutional_code
|
ThePhillips Machine, also known as theMONIAC(Monetary National Income Analogue Computer),Phillips Hydraulic Computerand theFinancephalograph, is ananalogue computerwhich usesfluidic logicto model the workings of an economy. The name "MONIAC" is suggested by associatingmoneyandENIAC, an early electronicdigital computer.
It was created in 1949 by theNew ZealandeconomistBill Phillipsto model the national economic processes of theUnited Kingdom, while Phillips was a student at theLondon School of Economics(LSE). While designed as a teaching tool, it was discovered to be quite accurate, and thus an effective economic simulator.
At least twelve machines were built, donated to or purchased by various organisations around the world. As of 2023[update], several are in working order.
Phillips scrounged materials to create his prototype computer, including bits and pieces of war surplus parts from oldLancaster bombers.[1]The first MONIAC was created in his landlady's garage inCroydonat a cost of£400 (equivalent to £18,000 in 2023).
According to Anna Corkhill:
Phillips discussed the idea with Walter Newlyn, a junior academic at Leeds University who had studied with Phillips at the LSE, and proceeded to build a prototype (with Newlyn’s assistance) over one summer in a garage in Croydon. Newlyn persuaded the head of department at Leeds to advance £100 towards building the prototype. Newlyn helped as a craftsman’s mate—sanding and gluing together pieces of acrylic and supplementing Phillips’ economic knowledge.[2]
Phillips first demonstrated the machine to leading economists at the LSE, where he was a student, in 1949. It was very well received and Phillips was soon offered a teaching position at the LSE.
The machine had been designed as a teaching aid but was also discovered to be an effective economic simulator.[3]When the machine was created, electronic digital computers that could run complex economic simulations were unavailable. In 1949, the few computers in existence were restricted to government and military use and their lack of adequate visual displays made them unable to illustrate the operation of complex models. Observing the machine in operation made it much easier for students to understand the interrelated processes of a national economy. The range of organisations that acquired a machine showed that it was used in both capacities.[original research?]
The machine was approximately 2 m (6 ft 7 in) high, 1.2 m (3 ft 11 in) wide and almost 1 m (3 ft 3 in) deep, and consisted of a series of transparent plastic tanks and pipes which were fastened to a wooden board. Each tank represented some aspect of the UK national economy and the flow of money around the economy was illustrated by coloured water. At the top of the board was a large tank called the treasury. Water (representing money) flowed from the treasury to other tanks representing the various ways in which a country could spend its money. For example, there were tanks for health and education. To increase spending on health care, a tap could be opened to drain water from the treasury to the tank which represented health spending. Water then ran further down the model to other tanks, representing other interactions in the economy. Water could be pumped back to the treasury from some of the tanks to represent taxation. Changes in tax rates were modeled by increasing or decreasing pumping speeds.
Savings reduce the funds available to consumers and investment income increases those funds.[citation needed]The machine showed this by draining water (savings) from the expenditure stream and by injecting water (investment income) into that stream. When the savings flow exceeded the investment flow, the level of water in the savings and investment tank (the surplus-balances tank) would rise to reflect the accumulated balance. When the investment flow exceeded the savings flow for any length of time, the surplus-balances tank would run dry. Imports and exports were represented by water draining from the model and by additional water being poured into the model.
The flow of the water was automatically controlled through a series of floats, counterweights, electrodes, and cords. When the water reached a certain level in a tank, pumps and drains would be activated. To their surprise, Phillips and his associate Walter Newlyn found that the machine could be calibrated to an accuracy of 2%.
The flow of water between the tanks was determined by economic principles and the settings for various parameters. Different economic parameters, such as tax rates and investment rates, could be entered by setting the valves which controlled the flow of water about the computer. Users could experiment with different settings and note their effects. The machine's ability to model the subtle interaction of a number of variables made it a powerful tool for its time.[citation needed]When a set of parameters resulted in a viable economy the model would stabilise and the results could be read from scales. The output from the computer could also be sent to a rudimentaryplotter.
It is thought that twelve to fourteen machines were built.
TheTerry PratchettnovelMaking Moneycontains a similar device as a major plot point. However, after the device is fully perfected, itmagically becomes directly coupledto the economy it was intended to simulate, with the result that the machine cannot then be adjusted without causing a change in the actual economy (in parodic resemblance toGoodhart's law).[improper synthesis?]
Economist Kate Raworth's book Doughnut Economics critiques the use of an electric pump as the power source, claiming that because its power consumption was not considered, it left an important component out of the economic model it was portraying:[11][12]
"This is where Bill Phillips’s MONIAC machine was fundamentally flawed. While brilliantly demonstrating the economy’s circular flow of income, it completely overlooked its throughflow of energy. To make his hydraulic computer start up, Phillips had to flip a switch on the back of it to turn on its electric pump. Like any real economy it relied upon an external source of energy to make it run, but neither Phillips nor his contemporaries spotted that the machine’s power source was a critical part of what made the model work. That lesson from the MONIAC applies to all of macroeconomics: the role of energy deserves a far more prominent place in economic theories that hope to explain what drives economic activity."
|
https://en.wikipedia.org/wiki/MONIAC
|
Thesandbox effect(orsandboxing) is a theory about the wayGoogleranks web pages in its index. It is the subject of much debate—its existence has been written about[1][2]since 2004,[3]but not confirmed, with several statements to the contrary.[4]
According to the theory of the sandbox effect, links that would normally be weighted by Google's ranking algorithm may be subjected to filtering that prevents them from having their full impact, so that they do not improve the position of a webpage in Google's index. Some observations have suggested that the two important factors causing this filter are the active age of a domain and the competitiveness of the keywords used in links.
Active age of a domain[5]should not be confused with the date of registration on a domain'sWHOISrecord, but instead refers to the time when Google first indexed pages on the domain. Keyword competitiveness refers to the search frequency of a word onGoogle search, with observation suggesting that the higher the search frequency of a word, the more likely the sandbox filter effect will come into play.
While the presence of the Google sandbox has long been debated, Google has made no direct disclosure. As the sandbox effect almost certainly refers to a set of filters used for anti-spam purposes, it is unlikely Google would ever provide details on the matter. However, in one instance, Google's John Mueller[6]did mention that "it can take a bit of time for search engines to catch up with your content, and to learn to treat it appropriately. It's one thing to have a fantastic website. Still, search engines generally need a bit more to be able to confirm that and to rank your site—your content—appropriately".[7]This could be understood as the cause of the sandbox effect.
Google has long been aware that its historical use of links as a "vote" for ranking web documents can be subject to manipulation and stated such in its original IPO documentation. Over the years, Google has filed a number of patents that seek to qualify or minimise the impact of such manipulation, which Google terms as "link spam".
Link spam is primarily driven bysearch engine optimizers, who attempt to manipulate Google's page ranking by creating many inbound links to a new website from other websites they own. Some SEO experts also claim that the sandbox only applies to highly competitive or broad keyword phrases and can be counteracted by targeting narrow or so-called long-tail phrases.[8]
Google has been updating its algorithm for as long as it has been fighting the manipulation of organic search results. However, until May 10, 2012, when Google launched the Google Penguin update, many people wrongly believed low-quality backlinks would not negatively affect a site's ranking; Google had been applying such link-based penalties[9]for many years but had not made public how the company approached and dealt with what it called "link spam". Since then, there has been a much wider acknowledgment of the dangers of bad SEO and of forensic analysis of backlinks to ensure there are no harmful links. The penalties have also hit Google's own products: a well-known example is Google Chrome, which was penalised for purchasing links to boost the web browser's results.
Penalties are generally caused by manipulative backlinks intended to favor particular companies in the search results. By adding such links, companies break Google's terms and conditions. When Google discovers such links, it imposes penalties to discourage other companies from following this practice and to remove any gains that may have been enjoyed from such links. Google also penalizes those who took part in the manipulation and helped other companies by linking to them. These types of companies are often low-quality directories that list a link to a company website with manipulativeanchor textfor a fee. Google argues that such pages offer no value to the Internet and are often deindexed. Such links are often referred to as paid links.
Paid links are links that people place on their site for a fee, believing that this will positively impact the search results. The practice of paid links was prevalent before the Penguin update, when companies believed they could add any type of link with impunity because Google had previously claimed that it ignored such links rather than penalizing websites. To comply with Google's current terms of service, applying the nofollow attribute to paid advertisement links is imperative. Businesses that buy backlinks from low-quality sites attract Google penalties.
Another form of link spam is links left in the comments of articles, which are impossible to remove. As this practice became widespread, Google launched a feature to help curb it: the nofollow tag tells search engines not to trust such links.
Blog networks can consist of thousands of blogs that appear unconnected but that link out to those prepared to pay for such links. Google has typically targeted blog networks and, once it detects them, has penalized thousands of sites that gained benefits from them.
Google has encouraged companies to reform their bad practices and, as a result, demands that efforts be made to remove manipulative links. Google launched the Disavow tool on 16 October 2012 so that people could report the bad links pointing to their sites. The Disavow tool was launched mainly in response to many reports of negative SEO, where companies were being targeted with manipulative links by competitors knowing full well that they would be penalized.[citation needed]There has been some controversy[10]over whether the Disavow tool has any effect when manipulation has taken place over many years. At the same time, some anecdotal case studies have been presented,[11]which suggest that the tool is effective and that former ranking positions can be restored.
Negative SEO started to occur following the Penguin update, when it became common knowledge that Google would apply penalties for manipulative links. Such practices led companies to diligently monitor their backlinks to ensure they are not being targeted by hostile competitors through negative SEO services.[12][13]
In the US and UK, these types of activities by competitors attempting to sabotage a website's rankings are considered to be illegal.
A "reverse sandbox" effect is also claimed to exist, whereby new pages with good content, butwithoutinbound links, are temporarilyincreasedin rank — much like the "New Releases" in a book store are displayed more prominently — to encourage the organic building of the World Wide Web.[4][25]
David George disputes the claim that Google applies sandboxing toallnew websites, saying that the claim "doesn't seem to be borne out by experience". He states that he created a new website in October 2004 and had it ranked in the top 20 Google results for a target keyword within one month. He asserts that "no one knows for sure if the Google sandbox exists", and comments that it "seems to fit the observations and experiments of many search engine optimizers". He theorizes that the sandbox "has introduced somehysteresisinto the system to restore a bit of sanity to Google's results".[4]
In an interview with the Search Engine Roundtable website,Matt Cuttsis reported to have said that some things in the algorithm may be perceived as a sandbox that does not apply to all industries.[26]Jaimie Sirovich and Cristian Darie, authors ofProfessional Search Engine Optimization with PHP, state that they believe that, while Google does not actuallyhavean explicit "sandbox", the effect itself (however caused) is real.[25]
|
https://en.wikipedia.org/wiki/Google_penalty
|
Delegated credentialis a short-livedTLScertificateused to improve security by faster recovery fromprivate keyleakage, without increasing thelatencyof theTLS handshake. It is currently anIETFInternet Draft,[1]and has been in use byCloudflare[2]andFacebook,[3]with browser support byFirefox.[4]
Modern websites and other services usecontent delivery networks(CDNs), which are servers potentially distributed all over the world, in order to respond to a user's request as fast as possible, alongside other services that CDNs provide such asDDoS mitigation. However, in order to establish asecureconnection, the server is required to prove possession of a private key associated with a certificate, which serves as achain of trustlinking the public key and a trusted party. The trusted party is normally acertificate authority(CA).
CAs issue these digital certificates with an expiration time, usually a few months up to a year. It is the server's responsibility to renew the certificate close to its expiration date. An attacker's knowledge of a private key associated with a valid certificate is devastating for the site's security, as it allows man-in-the-middle attacks, in which a malicious entity can impersonate the legitimate server to a user. Therefore, these private keys should be kept secure, preferably not distributed over every server in the CDN. Specifically, if a private key is compromised, the corresponding certificate should optimally be revoked, so that browsers will no longer accept it. Certificate revocation has two main drawbacks. Firstly, current revocation methods do not work well across all browsers, and put users at risk; and secondly, upon revocation, the server needs to quickly fetch a new valid certificate from the CA and deploy it across all mirrors.
A delegated credential is a short-lived key (from a few hours to a few days) that the certificate's owner delegates to the server for use in TLS. It is in fact asignature: the certificate's owner uses the certificate's private key to sign a delegated public key, and an expiration time.
Given this delegated credential, a browser that supports the mechanism can verify the server's authenticity by verifying the delegated credential and then verifying the certificate itself.
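A heavily simplified, library-style sketch of the idea follows; it is not the IETF wire format, and toy_sign/toy_verify are toy placeholders standing in for a real signature scheme such as ECDSA.

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

typedef struct { uint8_t bytes[32]; } Key;
typedef struct { uint8_t bytes[40]; } Sig;

/* A delegated credential: a short-lived public key plus an expiry time,
 * signed with the private key belonging to the server's certificate. */
typedef struct {
    Key    delegated_pub;   /* key the edge server will actually use in TLS */
    time_t expiry;          /* a few hours to a few days in the future      */
    Sig    signature;       /* made with the certificate's private key      */
} DelegatedCredential;

/* Toy "signature" (NOT cryptography): just enough structure to compile. */
static Sig toy_sign(const Key *priv, const Key *pub, time_t expiry) {
    Sig s;
    for (int i = 0; i < 32; ++i) s.bytes[i] = priv->bytes[i] ^ pub->bytes[i];
    memcpy(s.bytes + 32, &expiry, sizeof(time_t) < 8 ? sizeof(time_t) : 8);
    return s;
}
static int toy_verify(const Key *cert_pub, const DelegatedCredential *dc) {
    (void)cert_pub; (void)dc;  /* a real scheme checks the signature here */
    return 1;
}

/* Certificate owner: delegate a fresh key for a limited lifetime. */
DelegatedCredential issue_credential(const Key *cert_priv,
                                     const Key *delegated_pub,
                                     long lifetime_seconds) {
    DelegatedCredential dc;
    dc.delegated_pub = *delegated_pub;
    dc.expiry = time(NULL) + lifetime_seconds;
    dc.signature = toy_sign(cert_priv, &dc.delegated_pub, dc.expiry);
    return dc;
}

/* Browser: accept the delegated key only if it is unexpired and its
 * signature chains back to the (separately validated) certificate. */
int browser_accepts(const Key *cert_pub, const DelegatedCredential *dc) {
    return time(NULL) < dc->expiry && toy_verify(cert_pub, dc);
}
```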
This approach has several advantages over current solutions: the certificate's long-lived private key never needs to be distributed to every server in the CDN, and a leaked delegated key expires on its own within hours or days, rather than requiring certificate revocation.
|
https://en.wikipedia.org/wiki/Delegated_credential
|
Loop-level parallelismis a form ofparallelisminsoftware programmingthat is concerned with extracting parallel tasks fromloops. The opportunity for loop-level parallelism often arises in computing programs where data is stored inrandom accessdata structures. Where a sequential program will iterate over the data structure and operate on indices one at a time, a program exploiting loop-level parallelism will use multiplethreadsorprocesseswhich operate on some or all of the indices at the same time. Such parallelism provides aspeedupto overall execution time of the program, typically in line withAmdahl's law.
For simple loops, where each iteration is independent of the others, loop-level parallelism can beembarrassingly parallel, as parallelizing only requires assigning a process to handle each iteration. However, many algorithms are designed to run sequentially, and fail when parallel processesracedue to dependence within the code. Sequential algorithms are sometimes applicable to parallel contexts with slight modification. Usually, though, they requireprocess synchronization. Synchronization can be either implicit, viamessage passing, or explicit, via synchronization primitives likesemaphores.
Consider the following code operating on a listLof lengthn.
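The listing itself is not reproduced above; a minimal C fragment consistent with the description (the array name L and the label S1 are illustrative) would be:

```c
for (int i = 0; i < n; ++i) {
    S1: L[i] = L[i] + 10;   /* each iteration touches only index i */
}
```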
Each iteration of the loop takes the value from the current index ofL, and increments it by 10. If statementS1takesTtime to execute, then the loop takes timen * Tto execute sequentially, ignoring time taken by loop constructs. Now, consider a system withpprocessors wherep > n. Ifnthreads run in parallel, the time to execute allnsteps is reduced toT.
Less simple cases produce inconsistent, i.e.non-serializableoutcomes. Consider the following loop operating on the same listL.
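Again the listing is missing; a fragment matching the description (illustrative names) would be:

```c
for (int i = 1; i < n; ++i) {
    S1: L[i] = L[i - 1] + 10;   /* iteration i reads the result of iteration i-1 */
}
```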
Each iteration sets the current index to be the value of the previous plus ten. When run sequentially, each iteration is guaranteed that the previous iteration will already have the correct value. With multiple threads,process schedulingand other considerations prevent the execution order from guaranteeing an iteration will execute only after its dependence is met. It very well may happen before, leading to unexpected results. Serializability can be restored by adding synchronization to preserve the dependence on previous iterations.
There are several types of dependences that can be found within code.[1][2]
In order to preserve the sequential behaviour of a loop when run in parallel, True Dependence must be preserved. Anti-Dependence and Output Dependence can be dealt with by giving each process its own copy of variables (known as privatization).[1]
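The four dependence relations described next refer to short two-statement fragments; the following reconstructed sketches (one independent fragment per relation, with illustrative variable names) supply the missing listings:

```c
/* True (flow) dependence: S2 writes a, S3 then reads a. */
int a, b;        /* S1 */
a = 2;           /* S2 */
b = a + 40;      /* S3 */

/* Anti-dependence: S2 reads b before S3 writes b. */
int a, b = 40;   /* S1 */
a = b - 38;      /* S2 */
b = -1;          /* S3 */

/* Output dependence: S2 and S3 both write a. */
int a, b = 40;   /* S1 */
a = b - 38;      /* S2 */
a = 2;           /* S3 */

/* Input dependence: S2 and S3 both read c. */
int a, b, c = 2; /* S1 */
a = c - 1;       /* S2 */
b = c + 3;       /* S3 */
```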
S2 ->T S3, meaning that S2 has a true dependence on S3 because S2 writes to the variablea, which S3 reads from.
S2 ->A S3, meaning that S2 has an anti-dependence on S3 because S2 reads from the variablebbefore S3 writes to it.
S2 ->O S3, meaning that S2 has an output dependence on S3 because both write to the variablea.
S2 ->I S3, meaning that S2 has an input dependence on S3 because S2 and S3 both read from variablec.
Loops can have two types of dependence: loop-carried dependence and loop-independent dependence.
In loop-independent dependence, the dependence exists within a single iteration (intra-iteration), and there is no dependence between iterations. Each iteration may be treated as a block and performed in parallel without other synchronization efforts.
In the following example code used for swapping the values of two arrays of length n, there is a loop-independent dependence ofS1 ->T S3.
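A fragment consistent with that description (reconstructed; tmp, a and b are illustrative names) is:

```c
for (int i = 0; i < n; ++i) {
    S1: tmp  = a[i];     /* S1 writes tmp                                      */
    S2: a[i] = b[i];
    S3: b[i] = tmp;      /* S3 reads the tmp written by S1 in the SAME iteration */
}
```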
In loop-carried dependence, statements in an iteration of a loop depend on statements in another iteration of the loop. Loop-Carried Dependence uses a modified version of the dependence notation seen earlier.
Example of loop-carried dependence whereS1[i] ->T S1[i + 1], whereiindicates the current iteration, andi + 1indicates the next iteration.
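A minimal fragment exhibiting this dependence (reconstructed sketch) is:

```c
for (int i = 1; i < n; ++i) {
    S1: a[i] = a[i - 1] + 1;   /* S1 in iteration i needs S1's result from iteration i-1 */
}
```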
A Loop-carried dependence graph graphically shows the loop-carried dependencies between iterations. Each iteration is listed as a node on the graph, and directed edges show the true, anti, and output dependencies between each iteration.
There are a variety of methodologies for parallelizing loops.
Each implementation varies slightly in how threads synchronize, if at all. In addition, parallel tasks must somehow be mapped to a process. These tasks can either be allocated statically or dynamically. Research has shown that load-balancing can be better achieved through some dynamic allocation algorithms than when done statically.[4]
The process of parallelizing a sequential program can be broken down into the following discrete steps.[1]Each concrete loop-parallelization below implicitly performs them.
When a loop has a loop-carried dependence, one way to parallelize it is to distribute the loop into several different loops. Statements that are not dependent on each other are separated so that these distributed loops can be executed in parallel. For example, consider the following code.
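A fragment matching this situation (reconstructed sketch; array names are illustrative) is:

```c
for (int i = 1; i < n; ++i) {
    S1: a[i] = a[i - 1] + b[i];   /* loop-carried dependence through a[i-1] */
    S2: c[i] = c[i] + d[i];       /* independent of S1                      */
}
```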
The loop has a loop-carried dependenceS1[i] ->T S1[i+1], but S2 and S1 do not have a loop-independent dependence, so we can rewrite the code as follows.
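A distributed version of that sketch, with each statement in its own loop, is:

```c
loop1: for (int i = 1; i < n; ++i) {
    S1: a[i] = a[i - 1] + b[i];   /* still sequential within this loop */
}
loop2: for (int i = 1; i < n; ++i) {
    S2: c[i] = c[i] + d[i];       /* no dependence on loop1 */
}
```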
Note that loop1 and loop2 can now be executed in parallel. Instead of a single instruction being performed in parallel on different data, as in data-level parallelism, here different loops perform different tasks on different data. If the execution times of S1 and S2 are T_S1 and T_S2, then the sequential form of the above code takes time n * (T_S1 + T_S2). Because the two statements have been split into two independent loops that can run on separate processors, the execution time becomes n * max(T_S1, T_S2). We call this type of parallelism either function or task parallelism.
DOALL parallelism exists when statements within a loop can be executed independently (situations where there is no loop-carried dependence).[1]For example, the following code does not read from the arraya, and does not update the arraysb, c. No iterations have a dependence on any other iteration.
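A fragment with this property (reconstructed sketch) is:

```c
for (int i = 0; i < n; ++i) {
    S1: a[i] = b[i] + c[i];   /* no iteration reads or writes another iteration's data */
}
```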
If the time of one execution of S1 is T_S1, then the sequential form of the above code takes time n * T_S1. Because DOALL parallelism exists when all iterations are independent, speed-up may be achieved by executing all iterations in parallel, which gives an execution time of T_S1, the time taken for one iteration in sequential execution.
The following example, using a simplified pseudo code, shows how a loop might be parallelized to execute each iteration independently.
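The original pseudo code is not reproduced above; as a stand-in, an equivalent OpenMP sketch in C is shown here, in which the runtime forks a team of threads, assigns iterations to them, and joins them at the implicit barrier after the loop:

```c
#pragma omp parallel for        /* each iteration may run on its own thread */
for (int i = 0; i < n; ++i) {
    a[i] = b[i] + c[i];
}
/* implicit barrier: execution continues once every iteration has finished */
```

Iterations can be mapped to threads statically or dynamically, which connects to the load-balancing point made earlier.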
DOACROSS Parallelism exists where iterations of a loop are parallelized by extracting calculations that can be performed independently and running them simultaneously.[5]
Synchronization exists to enforce loop-carried dependence.
Consider the following, synchronous loop with dependenceS1[i] ->T S1[i+1].
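A fragment with this dependence (reconstructed from the description below) is:

```c
for (int i = 1; i < n; ++i) {
    a[i] = a[i - 1] + b[i] + 1;   /* needs a[i-1] from the previous iteration */
}
```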
Each loop iteration performs two actions: it calculates the valuea[i-1] + b[i] + 1, and then assigns that value toa[i].
Calculating the valuea[i-1] + b[i] + 1, and then performing the assignment, can be decomposed into two lines (statements S1 and S2):
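The decomposition (reconstructed; tmp is the temporary named in the next sentence) is:

```c
int tmp = b[i] + 1;        /* S1: no loop-carried dependence                 */
a[i]    = a[i - 1] + tmp;  /* S2: still depends on the previous iteration    */
```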
The first line,int tmp = b[i] + 1;, has no loop-carried dependence. The loop can then be parallelized by computing the temp value in parallel, and then synchronizing the assignment toa[i].
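A sketch of that idea, using hypothetical post(i)/wait(i) synchronization primitives (post signals that iteration i's assignment is complete; wait blocks until it is), could look like:

```c
post(0);                              /* a[0] is already available              */
for (int i = 1; i < n; ++i) {         /* iterations distributed across threads  */
    int tmp = b[i] + 1;               /* S1: computed in parallel               */
    wait(i - 1);                      /* block until a[i-1] has been written    */
    a[i] = a[i - 1] + tmp;            /* S2: serialized part                    */
    post(i);                          /* signal that iteration i is complete    */
}
```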
If the execution times of S1 and S2 are T_S1 and T_S2, then the sequential form of the above code takes time n * (T_S1 + T_S2). Because DOACROSS parallelism allows the iterations to be executed in a pipelined fashion, the execution time becomes T_S1 + n * T_S2.
DOPIPE Parallelism implements pipelined parallelism for loop-carried dependence where a loop iteration is distributed over multiple, synchronized loops.[1]The goal of DOPIPE is to act like an assembly line, where one stage is started as soon as there is sufficient data available for it from the previous stage.[6]
Consider the following, synchronous code with dependenceS1[i] ->T S1[i+1].
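A fragment matching this description (reconstructed sketch) is:

```c
for (int i = 1; i < n; ++i) {
    S1: a[i] = a[i - 1] + b[i];   /* loop-carried dependence                 */
    S2: c[i] = c[i] + a[i];       /* needs a[i] from the same iteration only */
}
```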
S1 must be executed sequentially, but S2 has no loop-carried dependence. S2 could be executed in parallel using DOALL Parallelism after performing all calculations needed by S1 in series. However, the speedup is limited if this is done. A better approach is to parallelize such that the S2 corresponding to each S1 executes when said S1 is finished.
Implementing pipelined parallelism results in the following set of loops, where the second loop may execute for an index as soon as the first loop has finished its corresponding index.
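A sketch of the resulting pair of loops, using the same hypothetical post/wait primitives as before and assuming the two loops run on different processors, is:

```c
/* Processor 1: produces a[] in order */
for (int i = 1; i < n; ++i) {
    S1: a[i] = a[i - 1] + b[i];
    post(i);                      /* announce that a[i] is ready        */
}

/* Processor 2: consumes a[] as it becomes available */
for (int i = 1; i < n; ++i) {
    wait(i);                      /* block until a[i] has been produced */
    S2: c[i] = c[i] + a[i];
}
```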
If the execution times of S1 and S2 are T_S1 and T_S2, then the sequential form of the above code takes time n * (T_S1 + T_S2). Because DOPIPE parallelism executes the iterations in a pipelined fashion, the execution time becomes n * T_S1 + (n/p) * T_S2, where p is the number of processors working in parallel.
|
https://en.wikipedia.org/wiki/Loop-level_parallelism
|
Intopologyand related fields ofmathematics, there are several restrictions that one often makes on the kinds oftopological spacesthat one wishes to consider. Some of these restrictions are given by theseparation axioms. These are sometimes calledTychonoff separation axioms, afterAndrey Tychonoff.
The separation axioms are not fundamentalaxiomslike those ofset theory, but rather defining properties which may be specified to distinguish certain types of topological spaces. The separation axioms are denoted with the letter "T" after theGermanTrennungsaxiom("separation axiom"), and increasing numerical subscripts denote stronger and stronger properties.
The precise definitions of the separation axioms have varied over time. Especially in older literature, different authors might have different definitions of each condition.
Before we define the separation axioms themselves, we give concrete meaning to the concept ofseparated sets(and points) intopological spaces. (Separated sets are not the same asseparated spaces, defined in the next section.)
The separation axioms are about the use of topological means to distinguishdisjoint setsanddistinctpoints. It's not enough for elements of a topological space to be distinct (that is,unequal); we may want them to betopologically distinguishable. Similarly, it's not enough forsubsetsof a topological space to be disjoint; we may want them to beseparated(in any of various ways). The separation axioms all say, in one way or another, that points or sets that are distinguishable or separated in some weak sense must also be distinguishable or separated in some stronger sense.
LetXbe a topological space. Then two pointsxandyinXaretopologically distinguishableif they do not have exactly the sameneighbourhoods(or equivalently the same open neighbourhoods); that is, at least one of them has a neighbourhood that is not a neighbourhood of the other (or equivalently there is anopen setthat one point belongs to but the other point does not). That is, at least one of the points does not belong to the other'sclosure.
Two pointsxandyareseparatedif each of them has a neighbourhood that is not a neighbourhood of the other; that is, neither belongs to the other'sclosure. More generally, two subsetsAandBofXareseparatedif each is disjoint from the other's closure, though the closures themselves do not have to be disjoint. Equivalently, each subset is included in an open set disjoint from the other subset. All of the remaining conditions for separation of sets may also be applied to points (or to a point and a set) by using singleton sets. Pointsxandywill be considered separated, by neighbourhoods, by closed neighbourhoods, by a continuous function, precisely by a function, if and only if their singleton sets {x} and {y} are separated according to the corresponding criterion.
SubsetsAandBareseparated by neighbourhoodsif they have disjoint neighbourhoods. They areseparated by closed neighbourhoodsif they have disjoint closed neighbourhoods. They areseparated by a continuous functionif there exists acontinuous functionffrom the spaceXto thereal lineRsuch that A is a subset of thepreimagef−1({0}) and B is a subset of the preimagef−1({1}). Finally, they areprecisely separated by a continuous functionif there exists a continuous functionffromXtoRsuch thatAequals the preimagef−1({0}) andBequalsf−1({1}).
These conditions are given in order of increasing strength: Any two topologically distinguishable points must be distinct, and any two separated points must be topologically distinguishable. Any two separated sets must be disjoint, any two sets separated by neighbourhoods must be separated, and so on.
These definitions all use essentially thepreliminary definitionsabove.
Many of these names have alternative meanings in some of the mathematical literature; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc. Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous.
Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles.
In all of the following definitions,Xis again atopological space.
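The article's list of individual definitions is not reproduced above; for reference, a few of the standard axioms, stated here in the terminology of the preceding section (this is a partial summary, not the original list):

```latex
\begin{itemize}
  \item $X$ is \emph{T$_0$} (Kolmogorov) if any two distinct points of $X$ are
        topologically distinguishable.
  \item $X$ is \emph{T$_1$} (Fr\'echet) if any two distinct points of $X$ are separated.
  \item $X$ is \emph{T$_2$} (Hausdorff) if any two distinct points of $X$ are
        separated by neighbourhoods.
  \item $X$ is \emph{regular} if a point and a closed set not containing it are
        separated by neighbourhoods; $X$ is T$_3$ if it is both regular and T$_0$.
  \item $X$ is \emph{normal} if any two disjoint closed sets are separated by
        neighbourhoods; $X$ is T$_4$ if it is both normal and T$_1$.
\end{itemize}
```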
The following table summarizes the separation axioms as well as the implications between them: cells which are merged represent equivalent properties, each axiom implies the ones in the cells to its left, and if we assume the T1axiom, then each axiom also implies the ones in the cells above it (for example, all normal T1spaces are also completely regular).
The T0axiom is special in that it can not only be added to a property (so that completely regular plus T0is Tychonoff) but also be subtracted from a property (so that Hausdorff minus T0is R1), in a fairly precise sense; seeKolmogorov quotientfor more information. When applied to the separation axioms, this leads to the relationships in the table to the left below. In this table, one goes from the right side to the left side by adding the requirement of T0, and one goes from the left side to the right side by removing that requirement, using the Kolmogorov quotient operation. (The names in parentheses given on the left side of this table are generally ambiguous or at least less well known; but they are used in the diagram below.)
Other than the inclusion or exclusion of T0, the relationships between the separation axioms are indicated in the diagram to the right. In this diagram, the non-T0version of a condition is on the left side of the slash, and the T0version is on the right side. Letters are used forabbreviationas follows:
"P" = "perfectly", "C" = "completely", "N" = "normal", and "R" (without a subscript) = "regular". A bullet indicates that there is no special name for a space at that spot. The dash at the bottom indicates no condition.
Two properties may be combined using this diagram by following the diagram upwards until both branches meet. For example, if a space is both completely normal ("CN") and completely Hausdorff ("CT2"), then following both branches up, one finds the spot "•/T5".
Since completely Hausdorff spaces are T0(even though completely normal spaces may not be), one takes the T0side of the slash, so a completely normal completely Hausdorff space is the same as a T5space (less ambiguously known as a completely normal Hausdorff space, as can be seen in the table above).
As can be seen from the diagram, normal and R0together imply a host of other properties, since combining the two properties leads through the many nodes on the right-side branch. Since regularity is the most well known of these, spaces that are both normal and R0are typically called "normal regular spaces". In a somewhat similar fashion, spaces that are both normal and T1are often called "normal Hausdorff spaces" by people that wish to avoid the ambiguous "T" notation. These conventions can be generalised to other regular spaces and Hausdorff spaces.
There are some other conditions on topological spaces that are sometimes classified with the separation axioms, but these don't fit in with the usual separation axioms as completely. Other than their definitions, they aren't discussed here; see their individual articles.
|
https://en.wikipedia.org/wiki/Separation_axiom
|
Information pollution(also referred to asinfo pollution) is the contamination of aninformationsupply with irrelevant, redundant, unsolicited, hampering, and low-value information.[1][2]Examples includemisinformation,junk e-mail, andmedia violence.
The spread of useless and undesirable information can have a detrimental effect on human activities. It is considered to be an adverse effect of theinformation revolution.[3]
Information pollution generally applies to digital communication, such ase-mail,instant messaging(IM), andsocial media. The term acquired particular relevance in 2003 when web usability expertJakob Nielsenpublished articles discussing the topic.[4]As early as 1971 researchers were expressing doubts about the negative effects of having to recover "valuable nodules from a slurry of garbage in which it is a randomly dispersed minor component."[5]People use information in order to make decisions and adapt to circumstances. Cognitive studies demonstrated human beings can process only limited information before the quality of their decisions begins to deteriorate.[6]Information overloadis a related concept that can also harm decision-making. It refers to an abundance of available information, without respect to its quality.[1][6]
Although technology is thought to have exacerbated the problem, it is not the only cause of information pollution. Anything that distracts attention from the essential facts required to perform a task or make a decision could be considered aninformation pollutant.
Information pollution is seen as the digital equivalent of theenvironmental pollutiongenerated by industrial processes.[3][7][8]Some authors claim that information overload is a crisis of global proportions, on the same scale as threats faced by environmental destruction. Others have expressed the need for the development of an information management paradigm that parallelsenvironmental managementpractices.[6]
The manifestations of information pollution can be classified into two groups: those that provoke disruption, and those that damage information quality.
Typical examples of disrupting information pollutants include unsolicited electronic messages (spam) and instant messages, particularly in the workplace.[9]Mobile phones (ring tones and content) are disruptive in many contexts. Disrupting information pollution is not always technology based. A common example is newspapers, where subscribers read less than half or even none of the articles provided.[10][clarification needed]Superfluous messages, such as unnecessary labels on a map, also distract.[9]
Alternatively, information may be polluted when its quality is reduced. This may be due to inaccurate or outdated information,[8]but it also happens when information is badly presented. For example, when content is unfocused or unclear, or when it appears in cluttered, wordy, or poorly organised documents, it is difficult for the reader to understand.[11]
Laws and regulations undergo changes and revisions. Handbooks and other sources used for interpreting these laws can fall years behind the changes, which can cause the public to be misinformed.
Traditionally,[vague]information has been seen positively. People are accustomed to statements like "you cannot have too much information", "the more information the better",[9]and "knowledge is power".[8]The publishing and marketing industries have become used to printing many copies of books, magazines, and brochures regardless of customerdemand, just in case they are needed.[10]
Democratised information sharing is an example of a new technology that has made it easier for information to reach everyone. Such technologies are perceived as a sign ofprogressand individual empowerment, as well as a positive step to bridge thedigital divide.[7][8]However, they also increase the volume of distracting information, making it more difficult to distinguish valuable information fromnoise. The continuous use ofadvertisingin websites, technologies, newspapers, and everyday life is known as "cultural pollution".[12]
Technological advances of the 20th century and, in particular, the internet play a key role in the increase of information pollution.Blogs,social networks,personal websites, andmobile technologyall contribute to increased "noise".[9]The level of pollution may depend on the context. For example, e-mail is likely to cause more information pollution in a corporate setting,[11]whereasmobile phonesare likely to be particularly disruptive in a confined space shared by multiple people, such as a train carriage.
The effects of information pollution can be seen at multiple levels.
At a personal level, information pollution affects individuals' capacity to evaluate options and find adequate solutions. This can lead toinformation overload,anxiety, decision paralysis, andstress.[11]It can disrupt the learning process.[13]
Some authors argue that information pollution and information overload can cause loss of perspective and moral values.[14]This argument may explain the indifferent attitude that society shows toward topics such as scientific discoveries, health warnings, or politics.[1]Pollution makes people less sensitive to headlines and more cynical toward new messages.
Information pollution contributes to information overload and stress, which can disrupt the kinds of information processing and decision-making needed to complete tasks at work. This leads to delayed or flawed decisions, which can translate into loss of productivity and revenue as well as an increased risk of critical errors.[1][11]
Proposed solutions includemanagementtechniques and refined technology.
The terminfollutionorinformatization pollutionwas coined by Dr. Paek-Jae Cho, former president & CEO ofKTC (Korean Telecommunication Corp.), in a 2002 speech at theInternational Telecommunications Society (ITS)14th biennial conference to describe any undesirable side effect brought about by information technology and its applications.[15]
|
https://en.wikipedia.org/wiki/Information_pollution
|
Thebase rate fallacy, also calledbase rate neglect[2]orbase rate bias, is a type offallacyin which people tend to ignore thebase rate(e.g., generalprevalence) in favor of the individuating information (i.e., information pertaining only to a specific case).[3]For example, if someone hears that a friend is very shy and quiet, they might think the friend is more likely to be a librarian than a salesperson. However, there are far more salespeople than librarians overall—hence making it more likely that their friend is actually a salesperson, even if a greater proportion of librarians fit the description of being shy and quiet. Base rate neglect is a specific form of the more generalextension neglect.
It is also called theprosecutor's fallacyordefense attorney's fallacywhen applied to the results of statistical tests (such as DNA tests) in the context of law proceedings. These terms were introduced by William C. Thompson and Edward Schumann in 1987,[4][5]although it has been argued that their definition of the prosecutor's fallacy extends to many additional invalid imputations of guilt or liability that are not analyzable as errors in base rates orBayes's theorem.[6]
An example of the base rate fallacy is thefalse positive paradox(also known asaccuracy paradox). This paradox describes situations where there are morefalse positivetest results than true positives (this means the classifier has a lowprecision). For example, if a facial recognition camera can identify wanted criminals 99% accurately, but analyzes 10,000 people a day, the high accuracy is outweighed by the number of tests; because of this, the program's list of criminals will likely have far more innocents (false positives) than criminals (true positives) because there are far more innocents than criminals overall. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population.[7]The fundamental issue is that the far higher prevalence of true negatives means that the pool of people testing positively will be dominated by false positives, given that even a small fraction of the much larger [negative] group will produce a larger number of indicated positives than the larger fraction of the much smaller [positive] group.
When the prevalence, the proportion of those who have a given condition, is lower than the test'sfalse positive rate, even tests that have a very low risk of giving a false positivein an individual casewill give more false than true positivesoverall.[8]
It is especially counter-intuitive when interpreting a positive result in a test on a low-prevalencepopulationafter having dealt with positive results drawn from a high-prevalence population.[8]If the false positive rate of the test is higher than the proportion of thenewpopulation with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population mayconclude from experiencethat a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred.
Imagine running an infectious disease test on a populationAof 1,000 persons, of which 40% are infected. The test has a false positive rate of 5% (0.05) and a false negative rate of zero. Theexpected outcomeof the 1,000 tests on populationAwould be: all 400 infected persons receive a true positive result, while of the 600 uninfected persons, 30 receive a false positive and 570 a true negative.
So, in populationA, a person receiving a positive test could be over 93% confident (400/(400 + 30) ≈ 0.93) that it correctly indicates infection.
Now consider the same test applied to populationB, of which only 2% are infected. The expected outcome of 1,000 tests on populationBwould be: the 20 infected persons all receive a true positive result, while of the 980 uninfected persons, 49 receive a false positive and 931 a true negative.
In populationB, only 20 of the 69 total people with a positive test result are actually infected. So, the probability of actually being infected after one is told that one is infected is only 29% (20/(20 + 49) ≈ 0.29) for a test that otherwise appears to be "95% accurate".
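The arithmetic behind both populations can be checked with a short C sketch; ppv below is the positive predictive value, i.e. the probability that a positive result reflects a true infection (the numbers are those given in the text):

```c
#include <stdio.h>

/* Positive predictive value for a test with no false negatives and a given
 * false positive rate, applied to a population of given size and prevalence. */
static double ppv(double population, double prevalence, double false_pos_rate)
{
    double true_pos  = population * prevalence;   /* every infected person tests positive */
    double false_pos = population * (1.0 - prevalence) * false_pos_rate;
    return true_pos / (true_pos + false_pos);
}

int main(void)
{
    printf("population A (40%% infected): %.3f\n", ppv(1000, 0.40, 0.05)); /* ~0.930 */
    printf("population B ( 2%% infected): %.3f\n", ppv(1000, 0.02, 0.05)); /* ~0.290 */
    return 0;
}
```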
A tester with experience of groupAmight find it a paradox that in groupB, a result that had usually correctly indicated infection is now usually a false positive. The confusion of theposterior probabilityof infection with theprior probabilityof receiving a false positive is a naturalerrorafter receiving a health-threatening test result.
Imagine that a group of police officers havebreathalyzersdisplaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. One in a thousand drivers is driving drunk. Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. No other information is known about them.
Many would estimate the probability that the driver is drunk as high as 95%, but the correct probability is about 2%.
An explanation for this is as follows: on average, for every 1,000 drivers tested, 1 driver is drunk, and that driver is certain to produce a true positive result; the other 999 drivers are sober, and among them there are on average 999 × 0.05 = 49.95 false positive results.
Therefore, the probability that any given driver among the 1 + 49.95 = 50.95 positive test results really is drunk is 1/50.95 ≈ 0.019627.
The validity of this result does, however, hinge on the validity of the initial assumption that the police officer stopped the driver truly at random, and not because of bad driving. If that or another non-arbitrary reason for stopping the driver was present, then the calculation also involves the probability of a drunk driver driving competently and a non-drunk driver driving (in-)competently.
More formally, the same probability of roughly 0.02 can be established usingBayes' theorem. The goal is to find the probability that the driver is drunk given that the breathalyzer indicated they are drunk, which can be represented as
p(drunk∣D){\displaystyle p(\mathrm {drunk} \mid D)}
whereDmeans that the breathalyzer indicates that the driver is drunk. Using Bayes's theorem,
p(drunk∣D)=p(D∣drunk)p(drunk)p(D).{\displaystyle p(\mathrm {drunk} \mid D)={\frac {p(D\mid \mathrm {drunk} )\,p(\mathrm {drunk} )}{p(D)}}.}
The following information is known in this scenario:
p(drunk)=0.001,p(sober)=0.999,p(D∣drunk)=1.00,p(D∣sober)=0.05.{\displaystyle {\begin{aligned}p(\mathrm {drunk} )&=0.001,\\p(\mathrm {sober} )&=0.999,\\p(D\mid \mathrm {drunk} )&=1.00,\\p(D\mid \mathrm {sober} )&=0.05.\end{aligned}}}
As can be seen from the formula, one needsp(D) for Bayes' theorem, which can be computed from the preceding values using thelaw of total probability:
p(D)=p(D∣drunk)p(drunk)+p(D∣sober)p(sober){\displaystyle p(D)=p(D\mid \mathrm {drunk} )\,p(\mathrm {drunk} )+p(D\mid \mathrm {sober} )\,p(\mathrm {sober} )}
which gives
p(D)=(1.00×0.001)+(0.05×0.999)=0.05095.{\displaystyle p(D)=(1.00\times 0.001)+(0.05\times 0.999)=0.05095.}
Plugging these numbers into Bayes' theorem, one finds that
p(drunk∣D)=1.00×0.0010.05095≈0.019627,{\displaystyle p(\mathrm {drunk} \mid D)={\frac {1.00\times 0.001}{0.05095}}\approx 0.019627,}
which is the precision of the test.
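The same calculation can be written as a short C sketch using the values given in this scenario:

```c
#include <stdio.h>

int main(void)
{
    double p_drunk        = 0.001;   /* prior: 1 in 1,000 drivers is drunk */
    double p_sober        = 0.999;
    double p_pos_if_drunk = 1.00;    /* the breathalyzer never misses      */
    double p_pos_if_sober = 0.05;    /* 5% false positive rate             */

    /* law of total probability, then Bayes' theorem */
    double p_pos          = p_pos_if_drunk * p_drunk + p_pos_if_sober * p_sober;
    double p_drunk_if_pos = p_pos_if_drunk * p_drunk / p_pos;

    printf("p(D) = %.5f, p(drunk | D) = %.6f\n", p_pos, p_drunk_if_pos);
    /* prints: p(D) = 0.05095, p(drunk | D) = 0.019627 */
    return 0;
}
```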
In a city of 1 million inhabitants, let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automaticfacial recognition software.
The software has two failure rates of 1%: if the camera scans a terrorist, the alarm fails to ring 1% of the time (a 1% false negative rate), and if the camera scans a non-terrorist, the alarm rings 1% of the time (a 1% false positive rate).
Suppose now that an inhabitant triggers the alarm. Someone making the base rate fallacy would infer that there is a 99% probability that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the probability of a terrorist is actually near 1%, not near 99%.
The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100 terrorists' (P(¬B | T), or the probability that the bell fails to ring given the inhabitant is a terrorist) and the 'number of non-terrorists per 100 bells' (P(¬T | B), or the probability that the inhabitant is a non-terrorist given the bell rings) are unrelated quantities; one is not necessarily equal—or even close—to the other. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The 'number of non-terrorists per 100 bells' in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.
Imagine that the first city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. The probability that a person triggering the alarm actually is a terrorist is only about 99 in 10,098, which is less than 1% and very, very far below the initial guess of 99%.
The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists, and the number of false positives (non-terrorists scanned as terrorists) is so much larger than the true positives (terrorists scanned as terrorists).
Multiple practitioners have argued that as the base rate of terrorism is extremely low, usingdata miningand predictive algorithms to identify terrorists cannot feasibly work due to the false positive paradox.[9][10][11][12]Estimates of the number of false positives for each accurate result vary from over ten thousand[12]to one billion;[10]consequently, investigating each lead would be cost- and time-prohibitive.[9][11]The level of accuracy required to make these models viable is likely unachievable. Foremost, the low base rate of terrorism also means there is a lack of data with which to make an accurate algorithm.[11]Further, in the context of detecting terrorism false negatives are highly undesirable and thus must be minimised as much as possible; however, this requiresincreasing sensitivity at the cost of specificity, increasing false positives.[12]It is also questionable whether the use of such models by law enforcement would meet the requisiteburden of proofgiven that over 99% of results would be false positives.[12]
A crime is committed. Forensic analysis determines that the perpetrator has a certain blood type shared by 10% of the population. A suspect is arrested, and found to have that same blood type.
A prosecutor might charge the suspect with the crime on that basis alone, and claim at trial that the probability that the defendant is guilty is 90%.
However, this conclusion is only close to correct if the defendant was selected as the main suspect based on robust evidence discovered prior to the blood test and unrelated to it. Otherwise, the reasoning presented is flawed, as it overlooks the highprior probability(that is, prior to the blood test) that he is a random innocent person. Assume, for instance, that 1000 people live in the town where the crime occurred. This means that 100 people live there who have the perpetrator's blood type, of whom only one is the true perpetrator; therefore, the true probability that the defendant is guilty – based only on the fact that his blood type matches that of the killer – is only 1%, far less than the 90% argued by the prosecutor.
The prosecutor's fallacy involves assuming that the prior probability of a random match is equal to the probability that the defendant is innocent. When using it, a prosecutor questioning an expert witness may ask: "The odds of finding this evidence on an innocent man are so small that the jury can safely disregard the possibility that this defendant is innocent, correct?"[13]The claim assumes that the probability that evidence is found on an innocent man is the same as the probability that a man is innocent given that evidence was found on him, which is not true. Whilst the former is usually small (10% in the previous example) due to goodforensic evidenceprocedures, the latter (99% in that example) does not directly relate to it and will often be much higher, since, in fact, it depends on the likely quite highprior oddsof the defendant being a random innocent person.
O. J. Simpsonwas tried and acquitted in 1995 for the murders of his ex-wife Nicole Brown Simpson and her friend Ronald Goldman.
Crime scene blood matched Simpson's with characteristics shared by 1 in 400 people. However, the defense argued that the number of people from Los Angeles matching the sample could fill a football stadium and that the figure of 1 in 400 was useless.[14][15]It would have been incorrect, and an example of prosecutor's fallacy, to rely solely on the "1 in 400" figure to deduce that a given person matching the sample would be likely to be the culprit.
In the same trial, the prosecution presented evidence that Simpson had been violent toward his wife. The defense argued that there was only one woman murdered for every 2500 women who were subjected to spousal abuse, and that any history of Simpson being violent toward his wife was irrelevant to the trial. However, the reasoning behind the defense's calculation was fallacious. According to authorGerd Gigerenzer, the correct probability requires additional context: Simpson's wife had not only been subjected to domestic violence, but rather subjected to domestic violence (by Simpson)andkilled (by someone). Gigerenzer writes "the chances that a batterer actually murdered his partner, given that she has been killed, is about 8 in 9 or approximately 90%".[16]While most cases of spousal abuse do not end in murder, most cases of murder where there is a history of spousal abuse were committed by the spouse.
Sally Clark, a British woman, was accused in 1998 of having killed her first child at 11 weeks of age and then her second child at 8 weeks of age. The prosecution hadexpert witnessSirRoy Meadow, a professor and consultant paediatrician,[17]testify that the probability of two children in the same family dying fromSIDSis about 1 in 73 million. That was much less frequent than the actual rate measured inhistorical data– Meadow estimated it from single-SIDS death data, and the assumption that the probability of such deaths should beuncorrelatedbetween infants.[18]
Meadow acknowledged that 1-in-73 million is not an impossibility, but argued that such accidents would happen "once every hundred years" and that, in a country of 15 million 2-child families, it is vastly more likely that the double-deaths are due toMünchausen syndrome by proxythan to such a rare accident. However, there is good reason to suppose that the likelihood of a death from SIDS in a family is significantly greater if a previous child has already died in these circumstances, (agenetic predispositionto SIDS is likely to invalidate that assumedstatistical independence[19]) making some families more susceptible to SIDS and the error an outcome of theecological fallacy.[20]The likelihood of two SIDS deaths in the same family cannot be soundlyestimatedby squaring the likelihood of a single such death in all otherwise similar families.[21]
The 1-in-73 million figure greatly underestimated the chance of two successive accidents, but even if that assessment were accurate, the court seems to have missed the fact that the 1-in-73 million number meant nothing on its own. As ana prioriprobability, it should have been weighed against thea prioriprobabilities of the alternatives. Given that two deaths had occurred, one of the following explanations must be true, and all of them area prioriextremely improbable: two successive deaths in the same family, both by SIDS; a double homicide (the prosecution's case); or other possibilities, including one homicide and one death by SIDS.
It is unclear whether an estimate of the probability for the second possibility was ever proposed during the trial, or whether the comparison of the first two probabilities was understood to be the key estimate to make in the statistical analysis assessing the prosecution's case against the case for innocence.
Clark was convicted in 1999, resulting in a press release by theRoyal Statistical Societywhich pointed out the mistakes.[22]
In 2002, Ray Hill (a mathematics professor atSalford) attempted to accurately compare the chances of these two possible explanations; he concluded that successive accidents are between 4.5 and 9 times more likely than are successive murders, so that thea priorioddsof Clark's guilt were between 4.5 to 1 and 9 to 1 against.[23]
After the court found that the forensic pathologist who had examined both babies had withheldexculpatory evidence, a higher court later quashed Clark's conviction, on 29 January 2003.[24]
In experiments, people have been found to prefer individuating information over general information when the former is available.[25][26][27]
In some experiments, students were asked to estimate thegrade point averages(GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student even if the new descriptive information was obviously of little or no relevance to school performance.[26]This finding has been used to argue that interviews are an unnecessary part of thecollege admissionsprocess because interviewers are unable to pick successful candidates better than basic statistics.
PsychologistsDaniel KahnemanandAmos Tverskyattempted to explain this finding in terms of asimple rule or "heuristic"calledrepresentativeness. They argued that many judgments relating to likelihood, or to cause and effect, are based on how representative one thing is of another, or of a category.[26]Kahneman considers base rate neglect to be a specific form ofextension neglect.[28]Richard Nisbetthas argued that someattributional biaseslike thefundamental attribution errorare instances of the base rate fallacy: people do not use the "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simplerdispositional attributions.[29]
There is considerable debate in psychology on the conditions under which people do or do not appreciate base rate information.[30][31]Researchers in the heuristics-and-biases program have stressed empirical findings showing that people tend to ignore base rates and make inferences that violate certain norms of probabilistic reasoning, such asBayes' theorem. The conclusion drawn from this line of research was that human probabilistic thinking is fundamentally flawed and error-prone.[32]Other researchers have emphasized the link between cognitive processes and information formats, arguing that such conclusions are not generally warranted.[33][34]
Consider again Example 2 from above. The required inference is to estimate the (posterior) probability that a (randomly picked) driver is drunk, given that the breathalyzer test is positive. Formally, this probability can be calculated using Bayes' theorem, as shown above. However, there are different ways of presenting the relevant information. Consider the following, formally equivalent variant of the problem:
In this case, the relevant numerical information—p(drunk),p(D| drunk),p(D| sober)—is presented in terms of natural frequencies with respect to a certain reference class (seereference class problem). Empirical studies show that people's inferences correspond more closely to Bayes' rule when information is presented this way, helping to overcome base-rate neglect in laypeople[34]and experts.[35]As a consequence, organizations like theCochrane Collaborationrecommend using this kind of format for communicating health statistics.[36]Teaching people to translate these kinds of Bayesian reasoning problems into natural frequency formats is more effective than merely teaching them to plug probabilities (or percentages) into Bayes' theorem.[37]It has also been shown that graphical representations of natural frequencies (e.g., icon arrays, hypothetical outcome plots) help people to make better inferences.[37][38][39][40]
One important reason why natural frequency formats are helpful is that this information format facilitates the required inference because it simplifies the necessary calculations. This can be seen when using an alternative way of computing the required probabilityp(drunk|D):p(drunk |D) =N(drunk ∩D) /N(D),
whereN(drunk ∩D) denotes the number of drivers that are drunk and get a positive breathalyzer result, andN(D) denotes the total number of cases with a positive breathalyzer result. The equivalence of this equation to the above one follows from the axioms of probability theory, according to whichN(drunk ∩D) =N×p(D| drunk) ×p(drunk). Importantly, although this equation is formally equivalent to Bayes' rule, it is not psychologically equivalent. Using natural frequencies simplifies the inference because the required mathematical operation can be performed on natural numbers, instead of normalized fractions (i.e., probabilities), because it makes the high number of false positives more transparent, and because natural frequencies exhibit a "nested-set structure".[41][42]
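The following is a minimal sketch in Python; the specific figures (1 drunk driver in 1,000, no missed detections, a 5% false-positive rate) are illustrative assumptions rather than data taken from the article. It computes the posterior once with normalized probabilities and once with natural frequencies, following the equation above.

# Illustrative base rate and test characteristics (assumed for this sketch).
p_drunk = 1 / 1000          # p(drunk)
p_D_given_drunk = 1.0       # p(D | drunk)
p_D_given_sober = 0.05      # p(D | sober)

# Bayes' theorem with normalized probabilities.
p_D = p_D_given_drunk * p_drunk + p_D_given_sober * (1 - p_drunk)
posterior = p_D_given_drunk * p_drunk / p_D

# The same inference with natural frequencies for a sample of N = 1,000 drivers.
N = 1000
n_drunk_positive = round(N * p_drunk * p_D_given_drunk)        # 1 drunk driver tests positive
n_sober_positive = round(N * (1 - p_drunk) * p_D_given_sober)  # about 50 sober drivers test positive
posterior_freq = n_drunk_positive / (n_drunk_positive + n_sober_positive)

print(posterior, posterior_freq)  # both roughly 0.02, i.e. about 2%

In the frequency version the calculation reduces to comparing two counts, which is the simplification described above.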
Not every frequency format facilitates Bayesian reasoning.[42][43]Natural frequencies refer to frequency information that results fromnatural sampling,[44]which preserves base rate information (e.g., number of drunken drivers when taking a random sample of drivers). This is different fromsystematic sampling, in which base rates are fixeda priori(e.g., in scientific experiments). In the latter case it is not possible to infer the posterior probabilityp(drunk | positive test) from comparing the number of drivers who are drunk and test positive compared to the total number of people who get a positive breathalyzer result, because base rate information is not preserved and must be explicitly re-introduced using Bayes' theorem.
|
https://en.wikipedia.org/wiki/Base_rate_fallacy
|
Asystem of units of measurement, also known as asystem of unitsorsystem of measurement, is a collection ofunits of measurementand rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science andcommerce. Instances in use include theInternational System of UnitsorSI(the modern form of themetric system), theBritish imperial system, and theUnited States customary system.
In antiquity,systems of measurementwere defined locally: the different units might be defined independently according to the length of a king's thumb or the size of his foot, the length of stride, the length of arm, or maybe the weight of water in a keg of specific size, perhaps itself defined inhandsandknuckles. The unifying characteristic is that there was some definition based on some standard. Eventuallycubitsandstridesgave way to "customary units" to meet the needs of merchants and scientists.
The preference for a more universal and consistent system only gradually spread with the growth of international trade and science. Changing a measurement system has costs in the near term, which often results in resistance to such a change. The substantial benefit of conversion to a more rational and internationally consistent system of measurement has been recognized and promoted by scientists, engineers, businesses and politicians, and has resulted in most of the world adopting a commonly agreed metric system.
TheFrench Revolutiongave rise to themetric system, and this has spread around the world, replacing most customary units of measure. In most systems,length(distance),mass, andtimearebase quantities.
Later scientific developments showed that an electromagnetic quantity such aselectric chargeor electric current could be added to extend the set of base quantities.Gaussian unitshave only length, mass, and time as base quantities, with no separate electromagnetic dimension. Other quantities, such aspowerandspeed, are derived from the base quantities: for example, speed is distance per unit time. Historically, a wide range of units was used for the same type of quantity. In different contexts length was measured ininches,feet,yards,fathoms,rods,chains,furlongs,miles,nautical miles,stadia, andleagues, with conversion factors that were not based on powers of ten.
In the metric system and other recent systems, underlying relationships between quantities, as expressed by formulae of physics such asNewton's laws of motion, are used to select a small number of base quantities, a unit is defined for each, and all other units may be derived from them. Secondary units (multiples and submultiples) are derived from these base and derived units by multiplying by powers of ten. For example, where the unit of length is themetre, a distance of 1 metre is 1,000 millimetres, or 0.001 kilometres.
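As a small illustration, the decimal relationship between metric prefixes can be expressed directly in code; the following Python sketch (with an illustrative helper function, not taken from the article) reproduces the conversions just mentioned.

# Metric prefixes are powers of ten relative to the base unit.
prefixes = {"milli": 1e-3, "": 1.0, "kilo": 1e3}

def convert(value, from_prefix, to_prefix):
    """Convert a quantity between prefixed metric units of the same base unit."""
    return value * prefixes[from_prefix] / prefixes[to_prefix]

print(convert(1, "", "milli"))  # 1 metre -> 1000.0 millimetres
print(convert(1, "", "kilo"))   # 1 metre -> 0.001 kilometres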
Metrication is complete or nearly complete in most countries.
However,US customary unitsremain heavily used in theUnited Statesand to some degree inLiberia. TraditionalBurmese units of measurementare used inBurma, with partial transition to the metric system. U.S. units are used in limited contexts in Canada due to the large volume of trade with the U.S. There is also considerable use of imperial weights and measures, despitede jureCanadian conversion to metric.
A number of other jurisdictions have laws mandating or permitting other systems of measurement in some or all contexts, such as the United Kingdom whoseroad signage legislation, for instance, only allows distance signs displayingimperial units(miles or yards)[1]or Hong Kong.[2]
In the United States, metric units are virtually always used in science, frequently in the military, and partially in industry. U.S. customary units are primarily used in U.S. households. At retail stores, the litre (spelled 'liter' in the U.S.) is a commonly used unit for volume, especially on bottles of beverages, and milligrams, rather thangrains, are used for medications.
Some other non-SIunits are still in international use, such asnautical milesandknotsin aviation and shipping, andfeetfor aircraft altitude.
Metric systemsof units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English speaking countries.
Multiples and submultiples of metric units are related by powers of ten and their names are formed withprefixes. This relationship is compatible with the decimal system of numbers and it contributes greatly to the convenience of metric units.
In the early metric system there were two base units, themetrefor length and thegramfor mass. The other units of length and mass, and all units of area, volume, and derived units such as density were derived from these two base units.
Mesures usuelles(Frenchforcustomary measures) were a system of measurement introduced as a compromise between the metric system and traditional measurements. It was used in France from 1812 to 1839.
A number of variations on the metric system have been in use. These includegravitational systems, thecentimetre–gram–second systems(cgs) useful in science, themetre–tonne–second system(mts) once used in the USSR and themetre–kilogram–second system(mks). In some engineering fields, likecomputer-aided design, millimetre–gram–second (mmgs) is also used.[3]
The current international standard for the metric system is theInternational System of Units(Système international d'unitésor SI). It is a system in which all units can be expressed in terms of seven units. The units that serve as theSI base unitsare themetre,kilogram,second,ampere,kelvin,mole, andcandela.
BothBritish imperial unitsandUS customary unitsderive from earlierEnglish units. Imperial units were mostly used in the formerBritish Empireand theBritish Commonwealth, but in all these countries they have been largely supplanted by the metric system. They are still used for some applications in the United Kingdom but have been mostly replaced by the metric system incommercial,scientific, andindustrialapplications. US customary units, however, are still the main system of measurement in theUnited States. While some steps towardsmetricationhave been made (mainly in the late 1960s and early 1970s), the customary units have a strong hold due to the vast industrial infrastructure and commercial development.
While British imperial and US customary systems are closely related, there are a number ofdifferences between them. Units of length and area (theinch,foot,yard,mile, etc.) have been identical since the adoption of theInternational Yard and Pound Agreement; however, the US and, formerly, India retained older definitions for surveying purposes. This gave rise to the USsurvey foot, for instance. Theavoirdupoisunits of mass and weight differ for units larger than apound(lb). The British imperial system uses a stone of 14 lb, along hundredweightof 112 lb and along tonof 2,240 lb. Thestoneis not a measurement of weight used in the US. The US customary system uses theshort hundredweightof 100 lb andshort tonof 2,000 lb.
Where these systems most notably differ is in their units of volume. An imperial fluid ounce of 28.4130625 ml is 3.924% smaller than the USfluid ounce(fl oz) of 29.5735295625millilitres(ml). However, as there are 16 US fl oz to a USpintand 20 imp fl oz to an imperial pint, the imperial pint is 20.095% larger than a US pint, and the same is true forgills,quarts, andgallons: six US gallons (22.712470704 L) is only 0.08% less than five imperial gallons (22.73045 L).
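The quoted percentages can be checked with a few lines of arithmetic; this Python sketch uses the exact litre definitions of the two gallons.

US_GALLON_L = 3.785411784   # litres per US gallon (231 cubic inches exactly)
IMP_GALLON_L = 4.54609      # litres per imperial gallon (exact by definition)

six_us_gallons = 6 * US_GALLON_L     # 22.712470704 L
five_imp_gallons = 5 * IMP_GALLON_L  # 22.73045 L
print(1 - six_us_gallons / five_imp_gallons)  # about 0.0008, i.e. roughly 0.08% less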
Theavoirdupoissystem served as the general system of mass and weight. In addition to this, there are thetroyand theapothecaries' systems. Troy weight was customarily used forprecious metals,black powder, andgemstones. The troy ounce is the only unit of the system in current use; it is used for precious metals. Although the troy ounce is larger than its avoirdupois equivalent, the pound is smaller. The obsolete troy pound was divided into 12 ounces, rather than the 16 ounces per pound of the avoirdupois system. The apothecaries' system was traditionally used inpharmacology, but has now been replaced by the metric system; it shared the same pound and ounce as the troy system but with different further subdivisions.
Natural unitsareunits of measurementdefined in terms of universalphysical constantsin such a manner that selected physical constants take on the numerical value of one when expressed in terms of those units. Natural units are so named because their definition relies on only properties ofnatureand not on any human construct. Varying systems of natural units are possible, depending on the choice of constants used.
Some examples are as follows:
Non-standard measurement unitsalso found in books, newspapers etc., include:
A unit of measurement that applies tomoneyis called aunit of accountin economics and unit of measure in accounting.[5]This is normally acurrencyissued by acountryor a fraction thereof; for instance, theUS dollarand US cent (1⁄100of a dollar), or theeuroand euro cent.
ISO 4217is the international standard describing three letter codes (also known as the currency code) to define the names of currencies established by the International Organization for Standardization (ISO).
Throughout history, many official systems of measurement have been used. While no longer in official use, some of thesecustomary systemsare occasionally used in day-to-day life, for instance incooking.
Still in use:
|
https://en.wikipedia.org/wiki/System_of_measurement
|
In the branch ofexperimental psychologyfocused onsense,sensation, andperception, which is calledpsychophysics, ajust-noticeable differenceorJNDis the amount something must be changed in order for a difference to be noticeable, detectable at least half the time.[1]Thislimenis also known as thedifference limen,difference threshold, orleast perceptible difference.[2]
For many sensory modalities, over a wide range of stimulus magnitudes sufficiently far from the upper and lower limits of perception, the 'JND' is a fixed proportion of the reference sensory level, and so the ratio of the JND/reference is roughly constant (that is the JND is a constant proportion/percentage of the reference level). Measured in physical units, we have:
ΔII=k,{\displaystyle {\frac {\Delta I}{I}}=k,}
whereI{\displaystyle I\!}is the original intensity of the particular stimulation,ΔI{\displaystyle \Delta I\!}is the addition to it required for the change to be perceived (theJND), andkis a constant. This rule was first discovered byErnst Heinrich Weber(1795–1878), an anatomist and physiologist, in experiments on the thresholds of perception of lifted weights. A theoretical rationale (not universally accepted) was subsequently provided byGustav Fechner, so the rule is therefore known either as the Weber Law or as theWeber–Fechner law; the constantkis called theWeber constant. It is true, at least to a good approximation, of many but not all sensory dimensions, for example the brightness of lights, and the intensity and thepitchof sounds. It is not true, however, for the wavelength of light.Stanley Smith Stevensargued that it would hold only for what he calledprotheticsensorycontinua, where change of input takes the form of increase in intensity or something obviously analogous; it would not hold formetatheticcontinua, where change of input produces a qualitative rather than a quantitative change of the percept. Stevens developed his own law, calledStevens' Power Law, that raises the stimulus to a constant power while, like Weber, also multiplying it by a constant factor in order to achieve the perceived stimulus.
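As a brief illustration of the Weber relation ΔI/I = k, the following Python sketch computes the just-noticeable increment for a few reference intensities; the Weber fraction used is an arbitrary assumed value, chosen purely for illustration rather than a measured constant.

def jnd(reference_intensity, k):
    """Smallest detectable increment (ΔI) predicted by Weber's law for intensity I."""
    return k * reference_intensity

k = 0.05  # hypothetical Weber fraction, for illustration only
for intensity in (10.0, 100.0, 1000.0):
    print(intensity, jnd(intensity, k))  # ΔI grows in proportion to I: 0.5, 5.0, 50.0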
The JND is a statistical, rather than an exact quantity: from trial to trial, the difference that a given person notices will vary somewhat, and it is therefore necessary to conduct many trials in order to determine the threshold. The JND usually reported is the difference that a person notices on 50% of trials. If a different proportion is used, this should be included in the description—for example one might report the value of the "75% JND".
Modern approaches to psychophysics, for examplesignal detection theory, imply that the observed JND, even in this statistical sense, is not an absolute quantity, but will depend on situational and motivational as well as perceptual factors. For example, when a researcher flashes a very dim light, a participant may report seeing it on some trials but not on others.
The JND formula has an objective interpretation (implied at the start of this entry) as the disparity between levels of the presented stimulus that is detected on 50% of occasions by a particular observed response,[3]rather than what is subjectively "noticed" or as a difference in magnitudes of consciously experienced 'sensations'. This 50%-discriminated disparity can be used as a universal unit of measurement of thepsychological distanceof the level of a feature in an object or situation and an internal standard of comparison in memory, such as the 'template' for a category or the 'norm' of recognition.[4]The JND-scaled distances from norm can be combined among observed and inferred psychophysical functions to generate diagnostics among hypothesised information-transforming (mental) processes mediating observed quantitative judgments.[5]
In music production, a single change in a property of sound which is below the JND does not affect perception of the sound. For amplitude, the JND for humans is around 1dB.[6][7]
The JND for tone is dependent on the tone's frequency content. Below 500 Hz, the JND is about 3 Hz for sine waves; above 1000 Hz, the JND for sine waves is about 0.6% (about 10cents).[8]
The JND is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches.[9]The JND becomes smaller if the two tones are playedsimultaneouslyas the listener is then able to discernbeat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.[9]
JND analysis occurs frequently in both music and speech, the two being related and overlapping in the analysis of speech prosody (i.e. speech melody). Although the JND varies as a function of the frequency band being tested, it has been shown that the JND for the best performers at around 1 kHz is well below 1 Hz (i.e. less than a tenth of a percent).[10][11][12]It is, however, important to be aware of the role played by critical bandwidth when performing this kind of analysis.[11]
When analysing speech melody, rather than musical tones, accuracy decreases. This is not surprising given that speech does not stay at fixed intervals in the way that tones in music do. Johan 't Hart (1981) found that JND for speech averaged between 1 and 2 STs but concluded that "only differences of more than 3 semitones play a part in communicative situations".[13]
Note that, given the logarithmic characteristics of Hz, for both music and speech perception results should not be reported in Hz but either as percentages or in STs (5 Hz between 20 and 25 Hz is very different from 5 Hz between 2000 and 2005 Hz, but an ~18.9% or 3 semitone increase is perceptually the same size difference, regardless of whether one starts at 20Hz or at 2000Hz).
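A minimal Python sketch of this point, converting frequency steps into equal-tempered semitones (the helper function is illustrative):

import math

def semitones(f1, f2):
    """Interval between two frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

print(semitones(20, 25))          # a 5 Hz step at 20 Hz is about 3.9 semitones
print(semitones(2000, 2005))      # a 5 Hz step at 2000 Hz is about 0.04 semitones
print(semitones(20, 20 * 1.189))  # an ~18.9% increase is about 3 semitones at any frequency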
Weber's law has important applications inmarketing. Manufacturers and marketers endeavor to determine the relevant JND for their products for two very different reasons:
When it comes to product improvements, marketers very much want to meet or exceed the consumer's differential threshold; that is, they want consumers to readily perceive any improvements made in the original products. Marketers use the JND to determine the amount of improvement they should make in their products. Less than the JND is wasted effort because the improvement will not be perceived; more than the JND is again wasteful because it reduces the level of repeat sales. On the other hand, when it comes to price increases, less than the JND is desirable because consumers are unlikely to notice it.
Weber's lawis used inhapticdevices and robotic applications. Exerting the proper amount of force on the human operator is a critical aspect of human–robot interaction and teleoperation scenarios, and it can greatly improve the user's performance in accomplishing a task.[14]
|
https://en.wikipedia.org/wiki/Just-noticeable_difference
|
Thecategorical abstract machine(CAM) is amodel of computationfor programs[1]that preserves the abilities of applicative, functional, or compositional style. It is based on the techniques ofapplicative computing.
The notion of the categorical abstract machine arose in the mid-1980s. It took its place in computer science as a kind oftheory of computationfor programmers, represented by aCartesian closed categoryand embedded intocombinatory logic. CAM is a transparent and sound mathematical representation for the languages of functional programming. The machine code can be optimized using the equational form of a theory of computation. Using CAM, the various mechanisms of computation such asrecursionorlazy evaluationcan be emulated, as well as parameter-passing mechanisms such ascall by name,call by value, and so on. In theory, CAM preserves[how?]all the advantages of the object approach to programming and computing.
The main current implementation is OCaml, which added class inheritance and dynamic method dispatch toCaml, the Categorical Abstract Machine Language. Both are variants of the metalanguageML, and all three languages implementtype inference.
One of the implementation approaches to functional languages is given by the machinery based onsupercombinators, or an SK-machine, due to D. Turner. The notion of CAM gives an alternative approach. The structure of CAM consists of syntactic, semantic, and computational constituents. Syntax is based onde Bruijn’snotation, which overcomes the difficulties of using bound variables. The evaluations are similar to those ofP. Landin’sSECD machine. With this coverage, CAM gives a sound basis for syntax, semantics, and thetheory of computation, an understanding shaped by the functional style of programming.
|
https://en.wikipedia.org/wiki/Categorical_abstract_machine
|
Incomputer architecturealocaleis an abstraction of the concept of a localized set of hardware resources which are close enough to enjoy uniform memory access.[1]
For instance, on acomputer clustereach node may be considered a locale given that there is one instance of the operating system and uniform access to memory for processes running on that node. Similarly, on anSMP system, each node may be defined as a locale. Parallel programming languages such asChapelhave specific constructs for declaring locales.[2]
|
https://en.wikipedia.org/wiki/Locale_(computer_hardware)
|
TheBahl-Cocke-Jelinek-Raviv (BCJR) algorithmis analgorithmformaximum a posterioridecoding oferror correcting codesdefined ontrellises(principallyconvolutional codes). The algorithm is named after its inventors: Bahl, Cocke,Jelinekand Raviv.[1]This algorithm is critical to modern iteratively-decoded error-correcting codes, includingturbo codesandlow-density parity-check codes.
Based on thetrellis, the algorithm computes forward probabilities, backward probabilities, and from these the smootheda posterioriprobabilities for each state transition.
A simplification of the algorithm due to Berrou, Glavieux and Thitimajshima is also in use.[2]
|
https://en.wikipedia.org/wiki/BCJR_algorithm
|
Inmathematics, particularly infunctional analysis, thespectrumof abounded linear operator(or, more generally, anunbounded linear operator) is a generalisation of the set ofeigenvaluesof amatrix. Specifically, acomplex numberλ{\displaystyle \lambda }is said to be in the spectrum of a bounded linear operatorT{\displaystyle T}ifT−λI{\displaystyle T-\lambda I}either has no set-theoretic inverse, or its set-theoretic inverse is unbounded or defined only on a non-dense subset.
Here,I{\displaystyle I}is theidentity operator.
By theclosed graph theorem,λ{\displaystyle \lambda }is in the spectrum if and only if the bounded operatorT−λI:V→V{\displaystyle T-\lambda I:V\to V}is non-bijective onV{\displaystyle V}.
The study of spectra and related properties is known asspectral theory, which has numerous applications, most notably themathematical formulation of quantum mechanics.
The spectrum of an operator on afinite-dimensionalvector spaceis precisely the set of eigenvalues. However an operator on an infinite-dimensional space may have additional elements in its spectrum, and may have no eigenvalues. For example, consider theright shiftoperatorRon theHilbert spaceℓ2, which maps a sequence (x1,x2,x3, …) to (0,x1,x2,x3, …).
This has no eigenvalues, since ifRx=λxthen by expanding this expression we see thatx1=0,x2=0, etc. On the other hand, 0 is in the spectrum because although the operatorR− 0 (i.e.Ritself) is invertible, the inverse is defined on a set which is not dense inℓ2. In facteverybounded linear operator on acomplexBanach spacemust have a non-empty spectrum.
The notion of spectrum extends tounbounded(i.e. not necessarily bounded) operators. Acomplex numberλis said to be in the spectrum of an unbounded operatorT:X→X{\displaystyle T:\,X\to X}defined on domainD(T)⊆X{\displaystyle D(T)\subseteq X}if there is no bounded inverse(T−λI)−1:X→D(T){\displaystyle (T-\lambda I)^{-1}:\,X\to D(T)}defined on the whole ofX.{\displaystyle X.}IfTisclosed(which includes the case whenTis bounded), boundedness of(T−λI)−1{\displaystyle (T-\lambda I)^{-1}}follows automatically from its existence.
The space of bounded linear operatorsB(X) on a Banach spaceXis an example of aunitalBanach algebra. Since the definition of the spectrum does not mention any properties ofB(X) except those that any such algebra has, the notion of a spectrum may be generalised to this context by using the same definition verbatim.
LetT{\displaystyle T}be abounded linear operatoracting on a Banach spaceX{\displaystyle X}over the complex scalar fieldC{\displaystyle \mathbb {C} }, andI{\displaystyle I}be theidentity operatoronX{\displaystyle X}. ThespectrumofT{\displaystyle T}is the set of allλ∈C{\displaystyle \lambda \in \mathbb {C} }for which the operatorT−λI{\displaystyle T-\lambda I}does not have an inverse that is a bounded linear operator.
SinceT−λI{\displaystyle T-\lambda I}is a linear operator, the inverse is linear if it exists; and, by thebounded inverse theorem, it is bounded. Therefore, the spectrum consists precisely of those scalarsλ{\displaystyle \lambda }for whichT−λI{\displaystyle T-\lambda I}is notbijective.
The spectrum of a given operatorT{\displaystyle T}is often denotedσ(T){\displaystyle \sigma (T)}, and its complement, theresolvent set, is denotedρ(T)=C∖σ(T){\displaystyle \rho (T)=\mathbb {C} \setminus \sigma (T)}. (ρ(T){\displaystyle \rho (T)}is sometimes used to denote the spectral radius ofT{\displaystyle T})
Ifλ{\displaystyle \lambda }is an eigenvalue ofT{\displaystyle T}, then the operatorT−λI{\displaystyle T-\lambda I}is not one-to-one, and therefore its inverse(T−λI)−1{\displaystyle (T-\lambda I)^{-1}}is not defined. However, the converse statement is not true: the operatorT−λI{\displaystyle T-\lambda I}may not have an inverse, even ifλ{\displaystyle \lambda }is not an eigenvalue. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
For example, consider the Hilbert spaceℓ2(Z){\displaystyle \ell ^{2}(\mathbb {Z} )}, which consists of allbi-infinite sequencesof real numbersv= (…,v−2,v−1,v0,v1,v2, …)
that have a finite sum of squares∑i=−∞+∞vi2{\textstyle \sum _{i=-\infty }^{+\infty }v_{i}^{2}}. Thebilateral shiftoperatorT{\displaystyle T}simply displaces every element of the sequence by one position; namely ifu=T(v){\displaystyle u=T(v)}thenui=vi−1{\displaystyle u_{i}=v_{i-1}}for every integeri{\displaystyle i}. The eigenvalue equationT(v)=λv{\displaystyle T(v)=\lambda v}has no nonzero solution in this space, since it implies that all the valuesvi{\displaystyle v_{i}}have the same absolute value (if|λ|=1{\displaystyle \vert \lambda \vert =1}) or are a geometric progression (if|λ|≠1{\displaystyle \vert \lambda \vert \neq 1}); either way, the sum of their squares would not be finite. However, the operatorT−λI{\displaystyle T-\lambda I}is not invertible if|λ|=1{\displaystyle |\lambda |=1}. For example, the sequenceu{\displaystyle u}such thatui=1/(|i|+1){\displaystyle u_{i}=1/(|i|+1)}is inℓ2(Z){\displaystyle \ell ^{2}(\mathbb {Z} )}; but there is no sequencev{\displaystyle v}inℓ2(Z){\displaystyle \ell ^{2}(\mathbb {Z} )}such that(T−I)v=u{\displaystyle (T-I)v=u}(that is,vi−1=ui+vi{\displaystyle v_{i-1}=u_{i}+v_{i}}for alli{\displaystyle i}).
The spectrum of a bounded operatorTis always aclosed,boundedsubset of thecomplex plane.
If the spectrum were empty, then theresolvent functionR(λ)=(T−λI)−1{\displaystyle R(\lambda )=(T-\lambda I)^{-1}}
would be defined everywhere on the complex plane and bounded. But it can be shown that the resolvent functionRisholomorphicon its domain. By the vector-valued version ofLiouville's theorem, this function is constant, thus everywhere zero as it is zero at infinity. This would be a contradiction.
The boundedness of the spectrum follows from theNeumann series expansioninλ; the spectrumσ(T) is bounded by ||T||. A similar result shows the closedness of the spectrum.
The bound ||T|| on the spectrum can be refined somewhat. Thespectral radius,r(T), ofTis the radius of the smallest circle in the complex plane which is centered at the origin and contains the spectrumσ(T) inside of it, i.e.r(T)=sup{|λ|:λ∈σ(T)}.{\displaystyle r(T)=\sup\{|\lambda |:\lambda \in \sigma (T)\}.}
Thespectral radius formulasays[2]that for any elementT{\displaystyle T}of aBanach algebra,r(T)=limn→∞‖Tn‖1/n.{\displaystyle r(T)=\lim _{n\to \infty }\|T^{n}\|^{1/n}.}
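A small numerical sketch (Python/NumPy) of this formula on a finite-dimensional operator, i.e. a matrix, where the spectral radius can also be read off from the eigenvalues; the matrix below is an arbitrary illustrative example.

import numpy as np

T = np.array([[0.0, 2.0],
              [0.5, 1.0]])

r_from_eigenvalues = max(abs(np.linalg.eigvals(T)))  # spectral radius via the eigenvalues
for n in (1, 5, 50, 500):
    T_n = np.linalg.matrix_power(T, n)
    estimate = np.linalg.norm(T_n, 2) ** (1.0 / n)   # ||T^n||^(1/n) with the operator 2-norm
    print(n, estimate)
print(r_from_eigenvalues)  # the estimates converge to this value as n grows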
One can extend the definition of spectrum tounbounded operatorson aBanach spaceX. These operators are no longer elements in the Banach algebraB(X).
LetXbe a Banach space andT:D(T)→X{\displaystyle T:\,D(T)\to X}be alinear operatordefined on domainD(T)⊆X{\displaystyle D(T)\subseteq X}.
A complex numberλis said to be in theresolvent set(also calledregular set) ofT{\displaystyle T}if the operatorT−λI:D(T)→X{\displaystyle T-\lambda I:\,D(T)\to X}
has a bounded everywhere-defined inverse, i.e. if there exists a bounded operatorS:X→D(T){\displaystyle S:\,X\to D(T)}
such thatS(T−λI)=ID(T),(T−λI)S=IX.{\displaystyle S(T-\lambda I)=I_{D(T)},\quad (T-\lambda I)S=I_{X}.}
A complex numberλis then in thespectrumifλis not in the resolvent set.
Forλto be in the resolvent (i.e. not in the spectrum), just like in the bounded case,T−λI{\displaystyle T-\lambda I}must be bijective, since it must have a two-sided inverse. As before, if an inverse exists, then its linearity is immediate, but in general it may not be bounded, so this condition must be checked separately.
By theclosed graph theorem, boundedness of(T−λI)−1{\displaystyle (T-\lambda I)^{-1}}doesfollow directly from its existence whenTisclosed. Then, just as in the bounded case, a complex numberλlies in the spectrum of a closed operatorTif and only ifT−λI{\displaystyle T-\lambda I}is not bijective. Note that the class of closed operators includes all bounded operators.
The spectrum of an unbounded operator is in general a closed, possibly empty, subset of the complex plane.
If the operatorTis notclosed, thenσ(T)=C{\displaystyle \sigma (T)=\mathbb {C} }.
The following example indicates that closed operators may have empty spectra. LetT{\displaystyle T}denote the differentiation operator onL2([0,1]){\displaystyle L^{2}([0,1])}, whose domain is defined to be the closure ofCc∞((0,1]){\displaystyle C_{c}^{\infty }((0,1])}with respect to theH1{\displaystyle H^{1}}-Sobolev spacenorm. This space can be characterized as all functions inH1([0,1]){\displaystyle H^{1}([0,1])}that are zero att=0{\displaystyle t=0}. Then,T−z{\displaystyle T-z}has trivial kernel on this domain, as anyH1([0,1]){\displaystyle H^{1}([0,1])}-function in its kernel is a constant multiple ofezt{\displaystyle e^{zt}}, which is zero att=0{\displaystyle t=0}if and only if it is identically zero. Therefore, the complement of the spectrum is all ofC.{\displaystyle \mathbb {C} .}
A bounded operatorTon a Banach space is invertible, i.e. has a bounded inverse, if and only ifTis bounded below, i.e.‖Tx‖≥c‖x‖,{\displaystyle \|Tx\|\geq c\|x\|,}for somec>0,{\displaystyle c>0,}and has dense range. Accordingly, the spectrum ofTcan be divided into the following parts:
Note that the approximate point spectrum and residual spectrum are not necessarily disjoint[3](however, the point spectrum and the residual spectrum are).
The following subsections provide more details on the three parts ofσ(T) sketched above.
If an operator is not injective (so there is some nonzeroxwithT(x) = 0), then it is clearly not invertible. So ifλis aneigenvalueofT, one necessarily hasλ∈σ(T). The set of eigenvalues ofTis also called thepoint spectrumofT, denoted byσp(T). Some authors refer to the closure of the point spectrum as thepure point spectrumσpp(T)=σp(T)¯{\displaystyle \sigma _{pp}(T)={\overline {\sigma _{p}(T)}}}while others simply considerσpp(T):=σp(T).{\displaystyle \sigma _{pp}(T):=\sigma _{p}(T).}[4][5]
More generally, by thebounded inverse theorem,Tis not invertible if it is not bounded below; that is, if there is noc> 0 such that ||Tx|| ≥c||x|| for allx∈X. So the spectrum includes the set ofapproximate eigenvalues, which are thoseλsuch thatT-λIis not bounded below; equivalently, it is the set ofλfor which there is a sequence of unit vectorsx1,x2, ... for which ||Txn−λxn|| → 0 asn→ ∞.
The set of approximate eigenvalues is known as theapproximate point spectrum, denoted byσap(T){\displaystyle \sigma _{\mathrm {ap} }(T)}.
It is easy to see that the eigenvalues lie in the approximate point spectrum.
For example, consider the right shiftRonl2(Z){\displaystyle l^{2}(\mathbb {Z} )}defined byRej=ej+1{\displaystyle Re_{j}=e_{j+1}},
where(ej)j∈N{\displaystyle {\big (}e_{j}{\big )}_{j\in \mathbb {N} }}is the standard orthonormal basis inl2(Z){\displaystyle l^{2}(\mathbb {Z} )}. Direct calculation showsRhas no eigenvalues, but everyλwith|λ|=1{\displaystyle |\lambda |=1}is an approximate eigenvalue; lettingxnbe the vectorxn=1n(λ−1e1+λ−2e2+⋯+λ−nen),{\displaystyle x_{n}={\frac {1}{\sqrt {n}}}\left(\lambda ^{-1}e_{1}+\lambda ^{-2}e_{2}+\cdots +\lambda ^{-n}e_{n}\right),}
one can see that ||xn|| = 1 for alln, but‖Rxn−λxn‖=2n→0.{\displaystyle \|Rx_{n}-\lambda x_{n}\|={\sqrt {\tfrac {2}{n}}}\to 0.}
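A quick numerical check of this construction (a purely illustrative Python/NumPy sketch) shows the norm shrinking like √(2/n):

import numpy as np

lam = np.exp(1j * 0.7)  # an arbitrary λ on the unit circle

for n in (10, 100, 1000):
    x = lam ** (-np.arange(1, n + 1)) / np.sqrt(n)   # coordinates of x_n on e_1, ..., e_n
    Rx = np.concatenate(([0.0], x))                  # the shift sends e_j to e_{j+1}
    lam_x = lam * np.concatenate((x, [0.0]))         # λ x_n, padded to the same length
    print(n, np.linalg.norm(Rx - lam_x))             # approximately sqrt(2 / n)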
SinceRis a unitary operator, its spectrum lies on the unit circle. Therefore, the approximate point spectrum ofRis its entire spectrum.
This conclusion is also true for a more general class of operators.
A unitary operator isnormal. By thespectral theorem, a bounded operator on a Hilbert space H is normal if and only if it is equivalent (after identification ofHwith anL2{\displaystyle L^{2}}space) to amultiplication operator. It can be shown that the approximate point spectrum of a bounded multiplication operator equals its spectrum.
Thediscrete spectrumis defined as the set ofnormal eigenvaluesor, equivalently, as the set of isolated points of the spectrum such that the correspondingRiesz projectoris of finite rank. As such, the discrete spectrum is a strict subset of the point spectrum, i.e.,σd(T)⊂σp(T).{\displaystyle \sigma _{d}(T)\subset \sigma _{p}(T).}
The set of allλfor whichT−λI{\displaystyle T-\lambda I}is injective and has dense range, but is not surjective, is called thecontinuous spectrumofT, denoted byσc(T){\displaystyle \sigma _{\mathbb {c} }(T)}. The continuous spectrum therefore consists of those approximate eigenvalues which are not eigenvalues and do not lie in the residual spectrum. That is,σc(T)=σap(T)∖(σr(T)∪σp(T)).{\displaystyle \sigma _{\mathrm {c} }(T)=\sigma _{\mathrm {ap} }(T)\setminus (\sigma _{\mathrm {r} }(T)\cup \sigma _{\mathrm {p} }(T)).}
For example,A:l2(N)→l2(N){\displaystyle A:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )},ej↦ej/j{\displaystyle e_{j}\mapsto e_{j}/j},j∈N{\displaystyle j\in \mathbb {N} }, is injective and has a dense range, yetRan(A)⊊l2(N){\displaystyle \mathrm {Ran} (A)\subsetneq l^{2}(\mathbb {N} )}.
Indeed, ifx=∑j∈Ncjej∈l2(N){\textstyle x=\sum _{j\in \mathbb {N} }c_{j}e_{j}\in l^{2}(\mathbb {N} )}withcj∈C{\displaystyle c_{j}\in \mathbb {C} }such that∑j∈N|cj|2<∞{\textstyle \sum _{j\in \mathbb {N} }|c_{j}|^{2}<\infty }, one does not necessarily have∑j∈N|jcj|2<∞{\textstyle \sum _{j\in \mathbb {N} }\left|jc_{j}\right|^{2}<\infty }, and then∑j∈Njcjej∉l2(N){\textstyle \sum _{j\in \mathbb {N} }jc_{j}e_{j}\notin l^{2}(\mathbb {N} )}.
The set ofλ∈C{\displaystyle \lambda \in \mathbb {C} }for whichT−λI{\displaystyle T-\lambda I}does not have dense range is known as thecompression spectrumofTand is denoted byσcp(T){\displaystyle \sigma _{\mathrm {cp} }(T)}.
The set ofλ∈C{\displaystyle \lambda \in \mathbb {C} }for whichT−λI{\displaystyle T-\lambda I}is injective but does not have dense range is known as theresidual spectrumofTand is denoted byσr(T){\displaystyle \sigma _{\mathrm {r} }(T)}:σr(T)=σcp(T)∖σp(T).{\displaystyle \sigma _{\mathrm {r} }(T)=\sigma _{\mathrm {cp} }(T)\setminus \sigma _{\mathrm {p} }(T).}
An operator may be injective, even bounded below, but still not invertible. The right shift onl2(N){\displaystyle l^{2}(\mathbb {N} )},R:l2(N)→l2(N){\displaystyle R:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )},R:ej↦ej+1,j∈N{\displaystyle R:\,e_{j}\mapsto e_{j+1},\,j\in \mathbb {N} }, is such an example. This shift operator is anisometry, therefore bounded below by 1. But it is not invertible as it is not surjective (e1∉Ran(R){\displaystyle e_{1}\not \in \mathrm {Ran} (R)}), and moreoverRan(R){\displaystyle \mathrm {Ran} (R)}is not dense inl2(N){\displaystyle l^{2}(\mathbb {N} )}(e1∉Ran(R)¯{\displaystyle e_{1}\notin {\overline {\mathrm {Ran} (R)}}}).
The peripheral spectrum of an operator is defined as the set of points in its spectrum which have modulus equal to its spectral radius.[6]
There are five similar definitions of theessential spectrumof a closed densely defined linear operatorA:X→X{\displaystyle A:\,X\to X}which satisfyσess,1(A)⊆σess,2(A)⊆σess,3(A)⊆σess,4(A)⊆σess,5(A).{\displaystyle \sigma _{\mathrm {ess} ,1}(A)\subseteq \sigma _{\mathrm {ess} ,2}(A)\subseteq \sigma _{\mathrm {ess} ,3}(A)\subseteq \sigma _{\mathrm {ess} ,4}(A)\subseteq \sigma _{\mathrm {ess} ,5}(A).}
All these spectraσess,k(A),1≤k≤5{\displaystyle \sigma _{\mathrm {ess} ,k}(A),\ 1\leq k\leq 5}, coincide in the case of self-adjoint operators.
Thehydrogen atomprovides an example of different types of the spectra. Thehydrogen atom Hamiltonian operatorH=−Δ−Z|x|{\displaystyle H=-\Delta -{\frac {Z}{|x|}}},Z>0{\displaystyle Z>0}, with domainD(H)=H1(R3){\displaystyle D(H)=H^{1}(\mathbb {R} ^{3})}has a discrete set of eigenvalues (the discrete spectrumσd(H){\displaystyle \sigma _{\mathrm {d} }(H)}, which in this case coincides with the point spectrumσp(H){\displaystyle \sigma _{\mathrm {p} }(H)}since there are no eigenvalues embedded into the continuous spectrum) that can be computed by theRydberg formula. Their correspondingeigenfunctionsare calledeigenstates, or thebound states. The result of theionizationprocess is described by the continuous part of the spectrum (the energy of the collision/ionization is not "quantized"), represented byσcont(H)=[0,+∞){\displaystyle \sigma _{\mathrm {cont} }(H)=[0,+\infty )}(it also coincides with the essential spectrum,σess(H)=[0,+∞){\displaystyle \sigma _{\mathrm {ess} }(H)=[0,+\infty )}).[citation needed][clarification needed]
LetXbe a Banach space andT:X→X{\displaystyle T:\,X\to X}aclosed linear operatorwith dense domainD(T)⊂X{\displaystyle D(T)\subset X}.
IfX*is the dual space ofX, andT∗:X∗→X∗{\displaystyle T^{*}:\,X^{*}\to X^{*}}is thehermitian adjointofT, then
Theorem—For a bounded (or, more generally, closed and densely defined) operatorT,
In particular,σr(T)⊂σp(T∗)¯⊂σr(T)∪σp(T){\displaystyle \sigma _{\mathrm {r} }(T)\subset {\overline {\sigma _{\mathrm {p} }(T^{*})}}\subset \sigma _{\mathrm {r} }(T)\cup \sigma _{\mathrm {p} }(T)}.
Suppose thatRan(T−λI){\displaystyle \mathrm {Ran} (T-\lambda I)}is not dense inX.
By theHahn–Banach theorem, there exists a non-zeroφ∈X∗{\displaystyle \varphi \in X^{*}}that vanishes onRan(T−λI){\displaystyle \mathrm {Ran} (T-\lambda I)}.
For allx∈X,φ((T−λI)x)=0.{\displaystyle \varphi ((T-\lambda I)x)=0.}
Therefore,(T∗−λ¯I)φ=0∈X∗{\displaystyle (T^{*}-{\bar {\lambda }}I)\varphi =0\in X^{*}}andλ¯{\displaystyle {\bar {\lambda }}}is an eigenvalue ofT*.
Conversely, suppose thatλ¯{\displaystyle {\bar {\lambda }}}is an eigenvalue ofT*. Then there exists a non-zeroφ∈X∗{\displaystyle \varphi \in X^{*}}such that(T∗−λ¯I)φ=0{\displaystyle (T^{*}-{\bar {\lambda }}I)\varphi =0}, i.e.φ((T−λI)x)=0{\displaystyle \varphi ((T-\lambda I)x)=0}for everyx∈X.
IfRan(T−λI){\displaystyle \mathrm {Ran} (T-\lambda I)}is dense inX, thenφmust be the zero functional, a contradiction.
The claim is proved.
We also getσp(T)⊂σr(T∗)∪σp(T∗)¯{\displaystyle \sigma _{\mathrm {p} }(T)\subset {\overline {\sigma _{\mathrm {r} }(T^{*})\cup \sigma _{\mathrm {p} }(T^{*})}}}by the following argument:Xembeds isometrically intoX**.
Therefore, for every non-zero element in the kernel ofT−λI{\displaystyle T-\lambda I}there exists a non-zero element inX**which vanishes onRan(T∗−λ¯I){\displaystyle \mathrm {Ran} (T^{*}-{\bar {\lambda }}I)}.
ThusRan(T∗−λ¯I){\displaystyle \mathrm {Ran} (T^{*}-{\bar {\lambda }}I)}can not be dense.
Furthermore, ifXis reflexive, we haveσr(T∗)¯⊂σp(T){\displaystyle {\overline {\sigma _{\mathrm {r} }(T^{*})}}\subset \sigma _{\mathrm {p} }(T)}.
IfTis acompact operator, or, more generally, aninessential operator, then it can be shown that the spectrum is countable, that zero is the only possibleaccumulation point, and that any nonzeroλin the spectrum is an eigenvalue.
A bounded operatorA:X→X{\displaystyle A:\,X\to X}isquasinilpotentif‖An‖1/n→0{\displaystyle \lVert A^{n}\rVert ^{1/n}\to 0}asn→∞{\displaystyle n\to \infty }(in other words, if the spectral radius ofAequals zero). Such operators could equivalently be characterized by the conditionσ(A)={0}.{\displaystyle \sigma (A)=\{0\}.}
An example of such an operator isA:l2(N)→l2(N){\displaystyle A:\,l^{2}(\mathbb {N} )\to l^{2}(\mathbb {N} )},ej↦ej+1/2j{\displaystyle e_{j}\mapsto e_{j+1}/2^{j}}forj∈N{\displaystyle j\in \mathbb {N} }.
IfXis aHilbert spaceandTis aself-adjoint operator(or, more generally, anormal operator), then a remarkable result known as thespectral theoremgives an analogue of the diagonalisation theorem for normal finite-dimensional operators (Hermitian matrices, for example).
For self-adjoint operators, one can usespectral measuresto define adecomposition of the spectruminto absolutely continuous, pure point, and singular parts.
The definitions of the resolvent and spectrum can be extended to any continuous linear operatorT{\displaystyle T}acting on a Banach spaceX{\displaystyle X}over the real fieldR{\displaystyle \mathbb {R} }(instead of the complex fieldC{\displaystyle \mathbb {C} }) via itscomplexificationTC{\displaystyle T_{\mathbb {C} }}. In this case we define the resolvent setρ(T){\displaystyle \rho (T)}as the set of allλ∈C{\displaystyle \lambda \in \mathbb {C} }such thatTC−λI{\displaystyle T_{\mathbb {C} }-\lambda I}is invertible as an operator acting on the complexified spaceXC{\displaystyle X_{\mathbb {C} }}; then we defineσ(T)=C∖ρ(T){\displaystyle \sigma (T)=\mathbb {C} \setminus \rho (T)}.
Thereal spectrumof a continuous linear operatorT{\displaystyle T}acting on a real Banach spaceX{\displaystyle X}, denotedσR(T){\displaystyle \sigma _{\mathbb {R} }(T)}, is defined as the set of allλ∈R{\displaystyle \lambda \in \mathbb {R} }for whichT−λI{\displaystyle T-\lambda I}fails to be invertible in the real algebra of bounded linear operators acting onX{\displaystyle X}. In this case we haveσ(T)∩R=σR(T){\displaystyle \sigma (T)\cap \mathbb {R} =\sigma _{\mathbb {R} }(T)}. Note that the real spectrum may or may not coincide with the complex spectrum. In particular, the real spectrum could be empty.
LetBbe a complexBanach algebracontaining aunite. Then we define the spectrumσ(x) (or more explicitlyσB(x)) of an elementxofBto be the set of thosecomplex numbersλfor whichλe−xis not invertible inB. This extends the definition for bounded linear operatorsB(X) on a Banach spaceX, sinceB(X) is a unital Banach algebra.
|
https://en.wikipedia.org/wiki/Spectrum_(functional_analysis)
|
TheAutomatic Certificate Management Environment(ACME) protocol is acommunications protocolfor automating interactions betweencertificate authoritiesand their users' servers, allowing the automated deployment ofpublic key infrastructureat very low cost.[1][2]It was designed by theInternet Security Research Group(ISRG) for theirLet's Encryptservice.[1]
The protocol, based on passingJSON-formatted messages overHTTPS,[2][3]has been published as an Internet Standard inRFC8555[4]by its own charteredIETFworking group.[5]
The ISRG providesfree and open-sourcereference implementations for ACME:certbotis aPython-based implementation of server certificate management software using the ACME protocol,[6][7][8]andboulderis acertificate authorityimplementation, written inGo.[9]
Since 2015 a large variety of client options have appeared for all operating systems.[10]
API v1 specification was published on April 12, 2016. It supports issuing certificates for fully-qualified domain names, such asexample.comorcluster.example.com, but not wildcards like*.example.com. Let's Encrypt turned off API v1 support on 1 June 2021.[11]
API v2 was released March 13, 2018 after being pushed back several times. ACME v2 is not backwards compatible with v1. Version 2 supports wildcard domains, such as*.example.com, allowing for many subdomains to have trustedTLS, e.g.https://cluster01.example.com,https://cluster02.example.com,https://example.com, on private networks under a single domain using a single shared "wildcard" certificate.[12]A major new requirement in v2 is that requests for wildcard certificates require the modification of a Domain Name System (DNS)TXT record, verifying control over the domain.
Changes to ACME v2 protocol since v1 include:[13]
|
https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment
|
System testing, a.k.a.end-to-end (E2E) testing, is testing conducted on a completesoftware system.
System testing describes testing at the system level to contrast to testing at thesystem integration,integrationorunitlevel.
System testing often serves the purpose of evaluating the system's compliance with its specifiedrequirements[citation needed]– often from afunctional requirement specification(FRS), asystem requirement specification(SRS), another type of specification, or several such documents.
System testing can detect defects in the system as a whole.[citation needed][1]
System testing can verify the design, the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds of specified software and hardware requirements.[citation needed]
|
https://en.wikipedia.org/wiki/System_testing
|
Inpublic key infrastructure, avalidation authority(VA) is an entity that provides a service used to verify the validity orrevocation statusof adigital certificateper the mechanisms described in theX.509standard andRFC5280(page 69).[1]
The dominant method used for this purpose is to host acertificate revocation list(CRL) for download via theHTTPorLDAPprotocols. To reduce the amount ofnetwork trafficrequired for certificate validation, theOCSPprotocol may be used instead.
While this is a potentially labor-intensive process, the use of a dedicated validation authority allows for dynamic validation of certificates issued by anoffline root certificate authority. While the root CA itself will be unavailable to network traffic, certificates issued by it can always be verified via the validation authority and the protocols mentioned above.
The ongoing administrative overhead of maintaining the CRLs hosted by the validation authority is typically minimal, as it is uncommon for root CAs to issue (or revoke) large numbers of certificates.
While a validation authority is capable of responding to a network-based request for a CRL, it lacks the ability to issue or revoke certificates. It must be continuously updated with current CRL information from acertificate authoritywhich issued the certificates contained within the CRL.
|
https://en.wikipedia.org/wiki/Validation_authority
|
Inmathematics, analgebra over a field(often simply called analgebra) is avector spaceequipped with abilinearproduct. Thus, an algebra is analgebraic structureconsisting of asettogether with operations of multiplication and addition andscalar multiplicationby elements of afieldand satisfying the axioms implied by "vector space" and "bilinear".[1]
The multiplication operation in an algebra may or may not beassociative, leading to the notions ofassociative algebraswhere associativity of multiplication is assumed, andnon-associative algebras, where associativity is not assumed (but not excluded, either). Given an integern, theringofrealsquare matricesof ordernis an example of an associative algebra over the field ofreal numbersundermatrix additionandmatrix multiplicationsince matrix multiplication is associative. Three-dimensionalEuclidean spacewith multiplication given by thevector cross productis an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying theJacobi identityinstead.
An algebra isunitalorunitaryif it has anidentity elementwith respect to the multiplication. The ring of real square matrices of ordernforms a unital algebra since theidentity matrixof ordernis the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a(unital) ringthat is also a vector space.
Many authors use the termalgebrato meanassociative algebra, orunital associative algebra, or in some subjects such asalgebraic geometry,unital associative commutative algebra.
Replacing the field of scalars by acommutative ringleads to the more general notion of analgebra over a ring. Algebras are not to be confused with vector spaces equipped with abilinear form, likeinner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
LetKbe afield, and letAbe avector spaceoverKequipped with an additionalbinary operationfromA×AtoA, denoted here by·(that is, ifxandyare any two elements ofA, thenx·yis an element ofAthat is called theproductofxandy). ThenAis analgebraoverKif the following identities hold for all elementsx,y,zinA, and all elements (often calledscalars)aandbinK: right distributivity, (x+y) ·z=x·z+y·z; left distributivity,z· (x+y) =z·x+z·y; and compatibility with scalars, (ax) · (by) = (ab) (x·y).
These three axioms are another way of saying that the binary operation isbilinear. An algebra overKis sometimes also called aK-algebra, andKis called thebase fieldofA. The binary operation is often referred to asmultiplicationinA. The convention adopted in this article is that multiplication of elements of an algebra is not necessarilyassociative, although some authors use the termalgebrato refer to anassociative algebra.
When a binary operation on a vector space iscommutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
GivenK-algebrasAandB, ahomomorphismofK-algebras orK-algebra homomorphismis aK-linear mapf:A→Bsuch thatf(xy) =f(x)f(y)for allx,yinA. IfAandBare unital, then a homomorphism satisfyingf(1A) = 1Bis said to be a unital homomorphism. The space of allK-algebra homomorphisms betweenAandBis frequently written as HomK-alg(A,B).
AK-algebraisomorphismis abijectiveK-algebra homomorphism.
Asubalgebraof an algebra over a fieldKis alinear subspacethat has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subsetLof aK-algebraAis asubalgebraif for everyx,yinLandcinK, we have thatx·y,x+y, andcxare all inL.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
Aleft idealof aK-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subsetLof aK-algebraAis a left ideal if for everyxandyinL,zinAandcinK, we have the following three statements: (1)x+yis inL; (2)cxis inL; and (3)z·xis inL.
If (3) were replaced withx·zis inL, then this would define aright ideal. Atwo-sided idealis a subset that is both a left and a right ideal. The termidealon its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent toLbeing a linear subspace ofA. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of anideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
If we have afield extensionF/K, which is to say a bigger fieldFthat containsK, then there is a natural way to construct an algebra overFfrom any algebra overK. It is the same construction one uses to make a vector space over a bigger field, namely the tensor productVF:=V⊗KF{\displaystyle V_{F}:=V\otimes _{K}F}. So ifAis an algebra overK, thenAF{\displaystyle A_{F}}is an algebra overF.
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such ascommutativityorassociativityof the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
An algebra isunitalorunitaryif it has aunitor identity elementIwithIx=x=xIfor allxin the algebra.
An algebra is called azero algebraifuv= 0for allu,vin the algebra,[2]not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
Aunital zero algebrais thedirect sumK⊕V{\displaystyle K\oplus V}of a fieldK{\displaystyle K}and aK{\displaystyle K}-vector spaceV{\displaystyle V}, equipped with the only multiplication that is zero on the vector space (or module) and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written ask+v{\displaystyle k+v}withk∈K{\displaystyle k\in K}andv∈V{\displaystyle v\in V}, and the product is the onlybilinear operationsuch thatvw=0{\displaystyle vw=0}for everyv{\displaystyle v}andw{\displaystyle w}inV{\displaystyle V}. So, ifk1,k2∈K{\displaystyle k_{1},k_{2}\in K}andv1,v2∈V{\displaystyle v_{1},v_{2}\in V}, one has(k1+v1)(k2+v2)=k1k2+(k1v2+k2v1).{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
A classical example of unital zero algebra is the algebra ofdual numbers, the unital zeroR-algebra built from a one dimensional real vector space.
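A minimal Python sketch of this multiplication rule for dual numbers follows; the class is illustrative, not a standard library type.

class Dual:
    """Dual number k + v·ε with ε² = 0, an element of a unital zero R-algebra."""
    def __init__(self, k, v):
        self.k, self.v = k, v

    def __mul__(self, other):
        # (k1 + v1 ε)(k2 + v2 ε) = k1 k2 + (k1 v2 + k2 v1) ε, since the ε² term vanishes
        return Dual(self.k * other.k, self.k * other.v + other.k * self.v)

    def __repr__(self):
        return f"{self.k} + {self.v}ε"

print(Dual(2, 3) * Dual(5, -1))  # 10 + 13ε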
This definition extends verbatim to the definition of aunital zero algebraover acommutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a moduleV{\displaystyle V}correspond exactly to the ideals ofK⊕V{\displaystyle K\oplus V}that are contained inV{\displaystyle V}.
For example, the theory ofGröbner baseswas introduced byBruno Buchbergerforidealsin a polynomial ringR=K[x1, ...,xn]over a field. The construction of the unital zero algebra over a freeR-module allows extending this theory as a Gröbner basis theory for submodules of a free module. This extension allows one, when computing a Gröbner basis of a submodule, to use, without any modification, any algorithm and any software for computing Gröbner bases of ideals.
Similarly, unital zero algebras make it possible to deduce straightforwardly theLasker–Noether theoremfor modules (over a commutative ring) from the original Lasker–Noether theorem for ideals.
Examples of associative algebras include
Anon-associative algebra[3](ordistributive algebra) over a fieldKis aK-vector spaceAequipped with aK-bilinear mapA×A→A{\displaystyle A\times A\rightarrow A}. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".
Examples detailed in the main article include:
The definition of an associativeK-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a fieldKis aringAtogether with aring homomorphismη:K→Z(A),{\displaystyle \eta \colon K\to Z(A),}
whereZ(A) is thecenterofA. Sinceηis a ring homomorphism, one must have either thatAis thezero ring, or thatηisinjective. This definition is equivalent to that above, with scalar multiplication
given byk⋅a=η(k)a{\displaystyle k\cdot a=\eta (k)a}fork∈Kanda∈A.
Given two such associative unitalK-algebrasAandB, a unitalK-algebra homomorphismf:A→Bis a ring homomorphism that commutes with the scalar multiplication defined byη, which one may write asf(k⋅a)=k⋅f(a){\displaystyle f(k\cdot a)=k\cdot f(a)}
for allk∈K{\displaystyle k\in K}anda∈A{\displaystyle a\in A}. In other words, the corresponding diagram of structure maps commutes.
For algebras over a field, the bilinear multiplication fromA×AtoAis completely determined by the multiplication ofbasiselements ofA.
Conversely, once a basis forAhas been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator onA, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the fieldK, any finite-dimensional algebra can be specifiedup toisomorphismby giving itsdimension(sayn), and specifyingn3structure coefficientsci,j,k, which arescalars.
These structure coefficients determine the multiplication inAvia the following rule:eiej=∑k=1nci,j,kek{\displaystyle e_{i}e_{j}=\sum _{k=1}^{n}c_{i,j,k}e_{k}}
wheree1,...,enform a basis ofA.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written c_{ij}^k, and their defining rule is written using the Einstein notation as
e_i e_j = c_{ij}^k e_k.
Applied to vectors written in index notation, this becomes
(xy)^k = c_{ij}^k x^i y^j.
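Concretely, multiplication from structure constants can be carried out coordinate-wise. The following sketch assumes the convention just described, with c[i][j][k] denoting the coefficient of e_k in the product e_i e_j; it is an illustration rather than a library routine:

```python
# Multiplication in a finite-dimensional algebra given by structure constants.
# c[i][j][k] is the coefficient of e_k in the product e_i * e_j.
def multiply(x, y, c):
    n = len(x)
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

# Example: the complex numbers as a 2-dimensional real algebra with basis (1, i):
# 1*1 = 1, 1*i = i*1 = i, i*i = -1.
c = [[[1, 0], [0, 1]],
     [[0, 1], [-1, 0]]]
print(multiply([1, 2], [3, 4], c))  # (1 + 2i)(3 + 4i) = -5 + 10i  ->  [-5, 10]
```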
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on any set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism byEduard Study.[4]
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) anda. According to the definition of an identity element,
It remains to specify
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element),aandb. Taking into account the definition of an identity element, it is sufficient to specify
The fourth of these algebras is non-commutative, and the others are commutative.
In some areas of mathematics, such ascommutative algebra, it is common to consider the more general concept of analgebra over a ring, where acommutative ringRreplaces the fieldK. The only part of the definition that changes is thatAis assumed to be anR-module(instead of aK-vector space).
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to ℍ × ℍ, the direct product of two quaternion algebras. The center of that ring is ℝ × ℝ, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional ℝ-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R → A defines an R-module structure on A, and this is what is known as the R-algebra structure.[5] So a ring comes with a natural ℤ-module structure, since one can take the unique homomorphism ℤ → A.[6] On the other hand, not all rings can be given the structure of an algebra over a field (for example, the integers). See Field with one element for a description of an attempt to give every ring a structure that behaves like an algebra over a field.
|
https://en.wikipedia.org/wiki/Algebra_over_a_commutative_ring
|
In mathematics, the abscissa (/æbˈsɪs.ə/; plural abscissae or abscissas) and the ordinate are respectively the first and second coordinate of a point in a Cartesian coordinate system:[1][2]
Together they form anordered pairwhich defines the location of a point in two-dimensionalrectangular space.
More technically, the abscissa of a point is the signed measure of its projection on the primary axis. Its absolute value is the distance between the projection and the origin of the axis, and its sign is given by the location of the projection relative to the origin (before: negative; after: positive). Similarly, the ordinate of a point is the signed measure of its projection on the secondary axis. In three dimensions, the third direction is sometimes referred to as the applicate.[3]
Though the word "abscissa" (fromLatinlinea abscissa'a line cut off') has been used at least sinceDe Practica Geometrie(1220) byFibonacci(Leonardo of Pisa), its use in its modern sense may be due to Venetian mathematicianStefano degli Angeliin his workMiscellaneum Hyperbolicum, et Parabolicum(1659).[4]Historically, the term was used in the more general sense of a 'distance'.[5]
In his 1892 workVorlesungen über die Geschichte der Mathematik("Lectures on history of mathematics"), volume 2, Germanhistorian of mathematicsMoritz Cantorwrites:
Gleichwohl ist durch [Stefano degli Angeli] vermuthlich ein Wort in den mathematischen Sprachschatz eingeführt worden, welches gerade in der analytischen Geometrie sich als zukunftsreich bewährt hat. […] Wir kennen keine ältere Benutzung des WortesAbscissein lateinischen Originalschriften. Vielleicht kommt das Wort in Uebersetzungen derApollonischen Kegelschnittevor, wo Buch I Satz 20 vonἀποτεμνομέναιςdie Rede ist, wofür es kaum ein entsprechenderes lateinisches Wort alsabscissageben möchte.[6]
At the same time it was presumably by [Stefano degli Angeli] that a word was introduced into the mathematical vocabulary for which especially in analytic geometry the future proved to have much in store. […] We know of no earlier use of the wordabscissain Latin original texts. Maybe the word appears in translations of theApollonian conics, where [in] Book I, Chapter 20 there is mention ofἀποτεμνομέναις,for which there would hardly be a more appropriate Latin word thanabscissa.
The use of the word ordinate is related to the Latin phrase linea ordinata applicata 'line applied parallel'.
In a somewhat obsolete variant usage, the abscissa of a point may also refer to any number that describes the point's location along some path, e.g. the parameter of aparametric equation.[1]Used in this way, the abscissa can be thought of as a coordinate-geometry analog to theindependent variablein amathematical modelor experiment (with any ordinates filling a role analogous todependent variables).
|
https://en.wikipedia.org/wiki/Abscissa_and_ordinate
|
In themathematicalstudy ofpartial differential equations, theBateman transformis a method for solving theLaplace equationin four dimensions andwave equationin three by using aline integralof aholomorphic functionin threecomplex variables. It is named after the mathematicianHarry Bateman, who first published the result in (Bateman 1904).
The formula asserts that ifƒis a holomorphic function of three complex variables, then
is a solution of the Laplace equation, which follows by differentiation under the integral. Furthermore, Bateman asserted that the most general solution of the Laplace equation arises in this way.
|
https://en.wikipedia.org/wiki/Bateman_transform
|
Inlinguistics,clipping, also calledtruncationorshortening,[1]isword formationby removing somesegmentsof an existing word to create adiminutiveword or aclipped compound. Clipping differs fromabbreviation, which is based on a shortening of the written, rather than the spoken, form of an existing word or phrase. Clipping is also different fromback-formation, which proceeds by (pseudo-)morphemerather than segment, and where the new word may differ in sense andword classfrom its source.[2]In English, clipping may extend tocontraction, which mostly involves theelisionof a vowel that is replaced by anapostrophein writing.
According to Hans Marchand, clippings are not coined as words belonging to the core lexicon of a language.[3] They typically originate as synonyms[3] within the jargon or slang of an in-group, such as schools, army, police, and the medical profession. For example, exam(ination), math(ematics), and lab(oratory) originated in school slang; spec(ulation) and tick(et = credit) in stock-exchange slang; and vet(eran) and cap(tain) in army slang. Clipped forms can pass into common usage when they are widely useful, becoming part of standard language, which most speakers would agree has happened with math/maths, lab, exam, phone (from telephone), fridge (from refrigerator), and various others. When their usefulness is limited to narrower contexts, they remain outside the standard register. Many, such as mani and pedi for manicure and pedicure, or mic/mike for microphone, occupy a middle ground in which their appropriate register is a subjective judgment, but succeeding decades tend to see them become more widely used.
According toIrina Arnold[ru], clipping mainly consists of the following types:[4]
Final and initial clipping may be combined into a sort of "bilateral clipping", and result in curtailed words with the middle part of the prototype retained, which usually includes the syllable withprimary stress. Examples:fridge(refrigerator),rizz(charisma),rona(coronavirus),shrink(head-shrinker),tec(detective); alsoflu(which omits the stressed syllable ofinfluenza),jams(retaining thebinary noun-s of pajamas/pyjamas) orjammies(adding diminutive-ie).
Another common shortening in English will clip a word and then add some sort of suffix. That suffix can be either neutral or casual in nature, as in the -o of combo (combination) and convo (conversation), or else diminutive and/or hypocoristic, as in the -y or -ie of Sammy (Samantha) and selfie (self portrait), and the -s of babes (baby, as a term of endearment) and Barbs (Barbara). Sometimes, the adding of this suffix can make the word which was originally shortened from a longer form end up with the same number of syllables as the original longer form, e.g. choccy (chocolate) or Davy (David).
In a final clipping, the most common type in English, the beginning of the prototype is retained. The unclipped original may be either a simple or a composite. Examples includeadandadvert(advertisement),cable(cablegram),doc(doctor),exam(examination),fax(facsimile),gas(gasoline),gym(gymnastics, gymnasium),memo(memorandum),mutt(muttonhead),pub(public house),pop(popular music), andclit(clitoris).[5]: 109An example of apocope in Israeli Hebrew is the wordlehit, which derives from להתראותlehitraot, meaning "see you, goodbye".[5]: 155
Because final clippings are most common in English, this often leads to clipped forms from different sources which end up looking identical. For example,appcan equally refer to anappetizeror anapplicationdepending on the context, whilevetcan be short for eitherveteranorveterinarian.
Initial (or fore) clipping retains the final part of the word. Examples:bot(robot),chute(parachute),roach(cockroach),gator(alligator),phone(telephone),pike(turnpike),varsity(university),net(Internet).
Words with the middle part of the word left out are few. They may be further subdivided into two groups: (a) words with a final-clipped stem retaining the functional morpheme:maths(mathematics),specs(spectacles); (b) contractions due to a gradual process of elision under the influence of rhythm and context. Thus,fancy(fantasy),ma'am(madam), andfo'c'slemay be regarded as accelerated forms.
Clipped forms are also used incompounds. One part of the original compound most often remains intact. Examples are:cablegram(cabletelegram),op art(opticalart),org-man(organizationman),linocut(linoleumcut). Sometimes both halves of a compound are clipped as innavicert(navigationcertificate). In these cases it is difficult to know whether the resultant formation should be treated as a clipping or as ablend, for the border between the two types is not always clear. According to Bauer (1983),[6]the easiest way to draw the distinction is to say that those forms which retain compound stress are clipped compounds, whereas those that take simple word stress are not. By this criterionbodbiz, Chicom, Comsymp, Intelsat, midcult, pro-am, photo op, sci-fi, andsitcomare all compounds made of clippings.
|
https://en.wikipedia.org/wiki/Clipping_(morphology)
|
Computer musicis the application ofcomputing technologyinmusic composition, to help human composers create new music or to have computers independently create music, such as withalgorithmic compositionprograms. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such assound synthesis,digital signal processing,sound design, sonic diffusion,acoustics,electrical engineering, andpsychoacoustics.[1]The field of computer music can trace its roots back to the origins ofelectronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.[2]
Much of the work on computer music has drawn on the relationship betweenmusic and mathematics, a relationship that has been noted since theAncient Greeksdescribed the "harmony of the spheres".
Musical melodies were first generated by the computer originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. There were newspaper reports from America and England (both early on and more recently) that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support the newspaper reports (some of which were speculative). Research has shown that people speculated about computers playing music, possibly because computers would make noises,[3] but there is no evidence that they actually did so.[4][5]
The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built byTrevor Pearceyand Maston Beard in the late 1940s. Mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies from the very early 1950s. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded, but it has been accurately reconstructed.[6][7]In 1951 it publicly played the "Colonel Bogey March"[8]of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire and was not used to extend musical thinking or composition practice, asMax Mathewsdid, which is current computer-music practice.
The first music to be performed in England was a performance of theBritish National Anthemthat was programmed byChristopher Stracheyon theFerranti Mark 1, late in 1951. Later that year, short extracts of three pieces were recorded there by aBBCoutside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognized as the earliest recording of a computer to play music as the CSIRAC music was never recorded. This recording can be heard at the Manchester University site.[9]Researchers at theUniversity of Canterbury, Christchurch declicked and restored this recording in 2016 and the results may be heard onSoundCloud.[10][11][6]
Two further major 1950s developments were the origins of digital sound synthesis by computer, and ofalgorithmic compositionprograms beyond rote playback. Amongst other pioneers, the musical chemistsLejaren Hillerand Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of theIlliac Suitefor string quartet.[12]Max Mathews at Bell Laboratories developed the influentialMUSIC Iprogram and its descendants, further popularising computer music through a 1963 article inScience.[13]The first professional composer to work with digital synthesis wasJames Tenney, who created a series of digitally synthesized and/or algorithmically composed pieces at Bell Labs using Mathews' MUSIC III system, beginning withAnalog #1 (Noise Study)(1961).[14][15]After Tenney left Bell Labs in 1964, he was replaced by composerJean-Claude Risset, who conducted research on the synthesis of instrumental timbres and composedComputer Suite from Little Boy(1968).
Early computer-music programs typically did not run inreal time, although the first experiments on CSIRAC and theFerranti Mark 1did operate inreal time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days, on multi million-dollar computers, to generate a few minutes of music.[16][17]One way around this was to use a 'hybrid system' of digital control of ananalog synthesiserand early examples of this were Max Mathews' GROOVE system (1969) and also MUSYS byPeter Zinovieff(1969).
Until now partial use has been exploited for musical research into the substance and form of sound (convincing examples are those of Hiller and Isaacson in Urbana, Illinois, US;Iannis Xenakisin Paris andPietro Grossiin Florence, Italy).[18]
In May 1967 the first experiments in computer music in Italy were carried out by the S 2F M studio in Florence[19] in collaboration with General Electric Information Systems Italy.[20] The Olivetti-General Electric GE 115 (Olivetti S.p.A.) was used by Grossi as a performer: three programmes were prepared for these experiments. The programmes were written by Ferruccio Zulian[21] and used by Pietro Grossi for playing Bach, Paganini, and Webern works and for studying new sound structures.[22]
John Chowning's work onFM synthesisfrom the 1960s to the 1970s allowed much more efficient digital synthesis,[23]eventually leading to the development of the affordable FM synthesis-basedYamaha DX7digital synthesizer, released in 1983.[24]
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.[25]
In Japan, experiments in computer music date back to 1962, whenKeio Universityprofessor Sekine andToshibaengineer Hayashi experimented with theTOSBAC[jp]computer. This resulted in a piece entitledTOSBAC Suite, influenced by theIlliac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented duringOsaka Expo '70and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes inpopular music, though some of the more serious Japanese musicians used large computer systems such as theFairlightin the 1970s.[26]
In the late 1970s these systems became commercialized, including systems like theRoland MC-8 Microcomposer, where amicroprocessor-based system controls ananalog synthesizer, released in 1978.[26]In addition to the Yamaha DX7, the advent of inexpensive digitalchipsandmicrocomputersopened the door to real-time generation of computer music.[24]In the 1980s, Japanese personal computers such as theNEC PC-88came installed with FM synthesissound chipsand featuredaudio programming languagessuch asMusic Macro Language(MML) andMIDIinterfaces, which were most often used to producevideo game music, orchiptunes.[26]By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.[27]
Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.[28]
There is considerable activity in the field of computer music as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer andelectronic musicstudy and research, including theCCRMA(Center of Computer Research in Music and Acoustic, Stanford, USA),ICMA(International Computer Music Association), C4DM (Centre for Digital Music),IRCAM, GRAME,SEAMUS(Society for Electro Acoustic Music in the United States),CEC(Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.
Later, composers such asGottfried Michael KoenigandIannis Xenakishad computers generate the sounds of the composition as well as the score. Koenig producedalgorithmic compositionprograms which were a generalization of his ownserial compositionpractice. This is not exactly similar to Xenakis' work as he used mathematical abstractions and examined how far he could explore these musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at theInstitute of SonologyinUtrechtin the 1970s.[29]In the 2000s,Andranik Tangiandeveloped a computer algorithm to determine the time event structures forrhythmic canonsand rhythmic fugues, which were then "manually" worked out into harmonic compositionsEine kleine Mathmusik IandEine kleine Mathmusik IIperformed by computer;[30][31]for scores and recordings see.[32]
Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present-day exponent of this technique is David Cope, whose computer programs analyse works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell.[33][34][35]
Melomics, a research project from theUniversity of Málaga(Spain), developed a computer composition cluster namedIamus, which composes complex, multi-instrument pieces for editing and performance. Since its inception,Iamushas composed a full album in 2012, also namedIamus, whichNew Scientistdescribed as "the first major work composed by a computer and performed by a full orchestra".[36]The group has also developed anAPIfor developers to utilize the technology, and makes its music available on its website.
Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use ofalgorithmic compositiontechniques in software. This label is derived from the combination of two labels, each too vague for continued use. The labelcomputer-aided compositionlacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The labelalgorithmic compositionis likewise too broad, particularly in that it does not specify the use of a computer. The termcomputer-aided, rather than computer-assisted, is used in the same manner ascomputer-aided design.[37]
Machine improvisation uses computer algorithms to createimprovisationon existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in particular style, machine improvisation usesmachine learningandpattern matchingalgorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection.
This is different from other improvisation methods with computers that usealgorithmic compositionto generate new music without performing analysis of existing music examples.[38]
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine Improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson'sIlliac Suite for String Quartet(1957) and Xenakis' uses ofMarkov chainsandstochastic processes. Modern methods include the use oflossless data compressionfor incremental parsing, predictionsuffix tree,string searchingand more.[39]Style mixing is possible by blending models derived from several musical sources, with the first style mixing done by S. Dubnov in a piece NTrope Suite using Jensen-Shannon joint source model.[40]Later the use offactor oraclealgorithm (basically afactor oracleis a finite state automaton constructed in linear time and space in an incremental fashion)[41]was adopted for music by Assayag and Dubnov[42]and became the basis for several systems that use stylistic re-injection.[43]
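As a toy illustration of the statistical modeling idea (not of any particular system named above), the following sketch learns a first-order Markov model from a short note sequence and resamples from it "in the style" of the input; actual machine improvisation systems use far richer models such as factor oracles and incremental parsing:

```python
# Toy first-order Markov "style model" over a note sequence.
import random
from collections import defaultdict

def learn(notes):
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)              # record every observed continuation
    return model

def improvise(model, start, length):
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:                 # dead end: jump to a random known state
            choices = list(model)
        out.append(random.choice(choices))
    return out

melody = ['C', 'D', 'E', 'C', 'D', 'G', 'E', 'C']
print(improvise(learn(melody), 'C', 12))
```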
The first implementation of statistical style modeling was the LZify method in Open Music,[44] followed by the Continuator system, developed by François Pachet at Sony CSL Paris in 2002,[46][47] which implemented interactive machine improvisation by interpreting LZ incremental parsing in terms of Markov models and using it for real-time style modeling.[45] A Matlab implementation of the Factor Oracle machine improvisation can be found as part of the Computer Audition toolbox. There is also an NTCC implementation of the Factor Oracle machine improvisation.[48]
OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the Ircam Music Representations group.[49] One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values to a discrete alphabet. This problem was solved in the Variable Markov Oracle (VMO), available as a python implementation,[50] using an information rate criterion for finding the optimal or most informative representation.[51]
The use ofartificial intelligenceto generate new melodies,[52]coverpre-existing music,[53]and clone artists' voices, is a recent phenomenon that has been reported to disrupt themusic industry.[54]
Live coding[55](sometimes known as 'interactive programming', 'on-the-fly programming',[56]'just in time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.[57]
|
https://en.wikipedia.org/wiki/Computer_music
|
InVapnik–Chervonenkis theory, theVapnik–Chervonenkis (VC) dimensionis a measure of the size (capacity, complexity, expressive power, richness, or flexibility) of a class of sets. The notion can be extended to classes of binary functions. It is defined as thecardinalityof the largest set of points that the algorithm canshatter, which means the algorithm can always learn a perfect classifier for any labeling of at least one configuration of those data points. It was originally defined byVladimir VapnikandAlexey Chervonenkis.[1]
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider thethresholdingof a high-degreepolynomial: if the polynomial evaluates above zero, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so that it can fit a given set of training points well. But one can expect that the classifier will make errors on other points, because it is too wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set well, because it has a low capacity. This notion of capacity is made rigorous below.
Let H be a set family (a set of sets) and C a set. Their intersection is defined as the following set family:
H ∩ C := {h ∩ C : h ∈ H}.
We say that a set C is shattered by H if H ∩ C contains all the subsets of C, i.e.:
|H ∩ C| = 2^|C|.
The VC dimension D of H is the cardinality of the largest set that is shattered by H. If arbitrarily large sets can be shattered, the VC dimension is ∞.
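For a finite set family, shattering can be checked by brute force directly from the definition. The following sketch is a naive illustration (exponential in the size of C) and is only practical for very small examples:

```python
# Brute-force check of shattering for a finite set family H.
from itertools import chain, combinations

def subsets(C):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(C, r) for r in range(len(C) + 1))]

def shatters(H, C):
    # H shatters C iff every subset of C arises as h ∩ C for some h in H.
    realized = {frozenset(h & set(C)) for h in H}
    return all(s in realized for s in subsets(C))

H = [set(), {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}]
print(shatters(H, {1, 2}))     # True: all 4 subsets of {1, 2} are realized
print(shatters(H, {1, 2, 3}))  # False: {1, 3} is never realized
```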
A binary classification model f with some parameter vector θ is said to shatter a set of generally positioned data points (x_1, x_2, …, x_n) if, for every assignment of labels to those points, there exists a θ such that the model f makes no errors when evaluating that set of data points.[citation needed]
The VC dimension of a model f is the maximum number of points that can be arranged so that f shatters them. More formally, it is the maximum cardinality D such that there exists a generally positioned data point set of cardinality D that can be shattered by f.
The VC dimension can predict aprobabilisticupper boundon the test error of a classification model. Vapnik[3]proved that the probability of the test error (i.e., risk with 0–1 loss function) distancing from an upper bound (on data that is drawni.i.d.from the same distribution as the training set) is given by:
where D is the VC dimension of the classification model, 0 < η ⩽ 1, and N is the size of the training set. (This formula is valid only when D ≪ N; when D is larger, the test error may be much higher than the training error, due to overfitting.)
The VC dimension also appears insample-complexity bounds. A space of binary functions with VC dimensionD{\displaystyle D}can be learned with:[4]: 73
samples, where ε is the learning error and δ is the failure probability. Thus, the sample complexity is a linear function of the VC dimension of the hypothesis space.
The VC dimension is one of the critical parameters in the size ofε-nets, which determines the complexity of approximation algorithms based on them; range sets without finite VC dimension may not have finite ε-nets at all.
Afinite projective planeof ordernis a collection ofn2+n+ 1 sets (called "lines") overn2+n+ 1 elements (called "points"), for which:
The VC dimension of a finite projective plane is 2.[5]
Proof: (a) For each pair of distinct points, there is one line that contains both of them, lines that contain only one of them, and lines that contain neither, so every set of size 2 is shattered. (b) For any triple of distinct points, if there is a line x that contains all three, then there is no line y that contains exactly two of them (since then x and y would intersect in two points, which is contrary to the definition of a projective plane). Hence, no set of size 3 is shattered.
Suppose we have a base class B of simple classifiers, whose VC dimension is D.
We can construct a more powerful classifier by combining several different classifiers from B; this technique is called boosting. Formally, given T classifiers h_1, …, h_T ∈ B and a weight vector w ∈ ℝ^T, we can define the following classifier:
The VC dimension of the set of all such classifiers (for all selections of T classifiers from B and a weight vector from ℝ^T), assuming T, D ≥ 3, is at most:[4]: 108–109
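Under the usual convention, the combined classifier is the sign of the weighted vote of the base classifiers; the sketch below assumes that convention (it is an illustration, not taken from the cited source):

```python
# Weighted-vote combination of base classifiers h_1, ..., h_T with weights w.
def combined_classifier(hs, w):
    def f(x):
        score = sum(wi * h(x) for wi, h in zip(w, hs))  # each h returns +1 or -1
        return 1 if score >= 0 else -1
    return f

# Example: three threshold classifiers on the real line.
hs = [lambda x: 1 if x > 0 else -1,
      lambda x: 1 if x > 2 else -1,
      lambda x: 1 if x < 5 else -1]
f = combined_classifier(hs, [0.5, 1.0, 0.8])
print([f(x) for x in (-1, 1, 3, 6)])
```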
A neural network is described by a directed acyclic graph G(V, E), where:
The VC dimension of a neural network is bounded as follows:[4]: 234–235
The VC dimension is defined for spaces of binary functions (functions to {0,1}). Several generalizations have been suggested for spaces of non-binary functions.
|
https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension
|
Software testingis the act of checking whethersoftwaresatisfies expectations.
Software testing can provide objective, independent information about thequalityof software and theriskof its failure to auseror sponsor.[1]
Software testing can determine thecorrectnessof software for specificscenariosbut cannot determine correctness for all scenarios.[2][3]It cannot find allbugs.
Based on the criteria for measuring correctness from anoracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles includespecifications,contracts,[4]comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
Software testing is often dynamic in nature: running the software to verify that actual output matches expected output. It can also be static in nature: reviewing code and its associated documentation.
Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do?
Information learned from software testing may be used to improve the process by which software is developed.[5]: 41–43
Software testing should follow a "pyramid" approach wherein most tests should be unit tests, followed by integration tests, with end-to-end (e2e) tests having the lowest proportion.[6][7][8]
A study conducted byNISTin 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[9][dubious–discuss]
Outsourcingsoftware testing because of costs is very common, with China, the Philippines, and India being preferred destinations.[citation needed]
Glenford J. Myersinitially introduced the separation ofdebuggingfrom testing in 1979.[10]Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."[10]: 16), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Software testing is typically goal driven.
Software testing typically includes handling software bugs – a defect in thecodethat causes an undesirable result.[11]: 31Bugs generally slow testing progress and involveprogrammerassistance todebugand fix.
Not all defects cause a failure. For example, a defect indead codewill not be considered a failure.
A defect that does not cause failure at one point in time may lead to failure later due to environmental changes. Examples of environment change include running on newcomputer hardware, changes indata, and interacting with different software.[12]
A single defect may result in multiple failure symptoms.
Software testing may involve a Requirements gap – omission from the design for a requirement.[5]: 426Requirement gaps can often benon-functional requirementssuch astestability,scalability,maintainability,performance, andsecurity.
A fundamental limitation of software testing is that testing underallcombinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[3]: 17–18[13]Defects that manifest in unusual conditions are difficult to find in testing. Also,non-functionaldimensions of quality (how it is supposed tobeversus what it is supposed todo) –usability,scalability,performance,compatibility, andreliability– can be subjective; something that constitutes sufficient value to one person may not to another.
Although testing for every possible input is not feasible, testing can usecombinatoricsto maximize coverage while minimizing tests.[14]
Testing can be categorized many ways.[15]
Software testing can be categorized into levels based on how much of thesoftware systemis the focus of a test.[18][19][20][21]
There are many approaches to software testing.Reviews,walkthroughs, orinspectionsare referred to as static testing, whereas executing programmed code with a given set oftest casesis referred to asdynamic testing.[23][24]
Static testing is often implicit, like proofreading; it also occurs when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules.[23][24] Typical techniques for these are either using stubs/drivers or execution from a debugger environment.[24]
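As a small illustration of the stub/driver technique mentioned above, the sketch below exercises a discrete function in isolation by replacing its collaborator with a hard-coded stub; the names are illustrative and not from any particular framework:

```python
# Dynamic test of a unit in isolation, using a stub for its collaborator.
def apply_discount(price, get_discount_rate):
    return price * (1 - get_discount_rate())

def stub_discount_rate():          # stub: returns a fixed, known value
    return 0.25

def test_apply_discount():         # driver: calls the unit and checks the result
    assert apply_discount(100.0, stub_discount_rate) == 75.0

test_apply_discount()
print("test passed")
```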
Static testing involvesverification, whereas dynamic testing also involvesvalidation.[24]
Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[25]This is related to offlineruntime verificationandlog analysis.
The type of testing strategy to be performed depends on whether the tests to be applied to the IUT should be decided before the testing plan starts to be executed (preset testing[28]) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing[29][30]).
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.[31][32]
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs.[31][32]This is analogous to testing nodes in a circuit, e.g.,in-circuit testing(ICT).
While white-box testing can be applied at theunit,integration, andsystemlevels of the software testing process, it is usually done at the unit level.[33]It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:[32][34]
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most importantfunction pointshave been tested.[35]Code coverage as asoftware metriccan be reported as a percentage for:[31][35][36]
100% statement coverage ensures that all code paths or branches (in terms ofcontrol flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.[37]
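A small, hypothetical example of this limitation: the test below reaches 100% statement coverage of the function, yet never challenges the questionable behaviour that only particular inputs expose:

```python
# 100% statement coverage does not imply correctness.
def classify(age):
    if age < 18:
        return "minor"
    return "adult"   # note: negative ages are also silently classified as "minor"

def test_classify():
    assert classify(10) == "minor"   # exercises the if-branch
    assert classify(30) == "adult"   # exercises the remaining statement

# Every statement has been executed, yet the handling of an invalid input such
# as classify(-5) is never examined by the expectations above.
test_classify()
```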
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it.[38]Black-box testing methods include:equivalence partitioning,boundary value analysis,all-pairs testing,state transition tables,decision tabletesting,fuzz testing,model-based testing,use casetesting,exploratory testing, and specification-based testing.[31][32][36]
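As a brief illustration of two of the techniques named above, the test cases below are derived from equivalence partitioning and boundary value analysis for a hypothetical function that accepts ages from 18 to 65 inclusive:

```python
# Black-box test cases from equivalence partitioning and boundary value analysis
# for a hypothetical is_eligible(age) that accepts ages 18..65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence classes: below the range, inside the range, above the range.
# Boundary values: 17/18 and 65/66.
cases = [(17, False), (18, True), (40, True), (65, True), (66, False)]

for age, expected in cases:
    assert is_eligible(age) == expected
print("all partition and boundary cases pass")
```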
Specification-based testing aims to test the functionality of software according to the applicable requirements.[39]This level of testing usually requires thoroughtest casesto be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can befunctionalornon-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[40]
Black-box testing can be used at any level of testing, although usually not at the unit level.[33]
Component interface testing
Component interface testing is a variation ofblack-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[41]The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[42][43]The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[42]Unusual data values in an interface can help explain unexpected performance in the next unit.
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[44][45]
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Ad hoc testingandexploratory testingare important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[46]In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[46]However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[46]
Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[47]Grey-box testing may also includereverse engineering(using dynamic code analysis) to determine, for instance, boundary values or error messages.[47]Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conductingintegration testingbetween two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding adatabase. The tester can observe the state of the product being tested after performing certain actions such as executingSQLstatements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling,exception handling, and so on.[48]
With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[33]
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known asinstallation testing.[49]: 139These procedures may involve full or partial upgrades, and install/uninstall processes.
A common cause of software failure (real or perceived) is a lack of itscompatibilitywith otherapplication software,operating systems(or operating systemversions, old or new), or target environments that differ greatly from the original (such as aterminalorGUIapplication intended to be run on thedesktopnow being required to become aWeb application, which must render in aWeb browser). For example, in the case of a lack ofbackward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactivelyabstractingoperating system functionality into a separate programmoduleorlibrary.
Sanity testingdetermines whether it is reasonable to proceed with further testing.
Smoke testingconsists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used asbuild verification test.
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncoversoftware regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as anunintended consequenceof program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development,[50]due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and theriskof the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
Acceptance testing is system-level testing to ensure the software meets customer expectations.[51][52][53][54]
Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]
Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.[54]
Sometimes, UAT is performed by the customer, in their environment and on their own hardware.
OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functional software testing, used mainly insoftware developmentandsoftware maintenanceprojects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) oroperations readiness and assurance(OR&A) testing.Functional testingwithin OAT is limited to those tests that are required to verify thenon-functionalaspects of the system.
In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[55]
Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the relevant regulations to the software product. Both of these two tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results.[54]
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.[56]
Beta testing comes after alpha testing and can be considered a form of externaluser acceptance testing. Versions of the software, known asbeta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults orbugs. Beta versions can be made available to the open public to increase thefeedbackfield to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[57]
Functional testingrefers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testingrefers to aspects of the software that may not be related to a specific function or user action, such asscalabilityor otherperformance, behavior under certainconstraints, orsecurity. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing is the process of executingautomated testsas part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[58][59]Continuous testing includes the validation of bothfunctional requirementsandnon-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[60][61]
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing therobustnessof input validation and error-management routines.[citation needed]Software fault injection, in the form offuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from thesoftware fault injectionpage; there are also numerous open-source and free software tools available that perform destructive testing.
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. When this load testing activity is performed as a non-functional activity, it is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continuously function well for an acceptable period or longer.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing,scalability testing, and volume testing, are often used interchangeably.
Real-time softwaresystems have strict timing constraints. To test if timing constraints are met,real-time testingis used.
Usability testingis to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, being monitored by skilledUI designers. Usability testing can use structured models to check how well an interface works. The Stanton, Theofanos, and Joshi (2015) model looks at user experience, and the Al-Sharafat and Qadoumi (2016) model is for expert evaluation, helping to assess usability in digital applications.[62]
Accessibilitytesting is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are
Security testingis essential for software that processes confidential data to preventsystem intrusionbyhackers.
The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[63]
Testing forinternationalization and localizationvalidates that the software can be used with different languages and geographic regions. The process ofpseudolocalizationis used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product.
Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.[64]
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, development testing might includestatic code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis,traceability, and other software testing practices.
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome.
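As a rough illustration of how such an experiment is often evaluated, the Python sketch below compares the conversion rates of the two versions with a two-proportion z-test; the function name and the numbers are hypothetical and not tied to any particular testing platform.

```python
import math

def ab_test_z_score(control_conversions, control_total, treatment_conversions, treatment_total):
    """Two-proportion z-test for an A/B experiment (illustrative helper, not a standard API)."""
    p_control = control_conversions / control_total
    p_treatment = treatment_conversions / treatment_total
    # Pooled conversion rate under the null hypothesis that both versions perform equally.
    p_pooled = (control_conversions + treatment_conversions) / (control_total + treatment_total)
    standard_error = math.sqrt(p_pooled * (1 - p_pooled) * (1 / control_total + 1 / treatment_total))
    return (p_treatment - p_control) / standard_error

# Example with made-up numbers: 1,000 users per arm; treatment converted 120 users, control 100.
z = ab_test_z_score(100, 1000, 120, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 is commonly read as significant at the 5% level
```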
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Creating a display of expected output, whether as a data comparison of text or screenshots of the UI,[3]: 195 is sometimes called snapshot testing or Golden Master Testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies.
Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input.
Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test.
Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck.[65]
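As a minimal illustration of the idea (without the input shrinking that libraries such as QuickCheck or Hypothesis provide), the following hand-rolled Python sketch generates random lists and checks the two sort properties described above; `my_sort` is a hypothetical implementation under test.

```python
import random
from collections import Counter

def my_sort(xs):
    # Implementation under test; here it simply delegates to the built-in for illustration.
    return sorted(xs)

def check_sort_property(trials=1000):
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = my_sort(xs)
        # Property 1: the output is monotonically non-decreasing.
        assert all(a <= b for a, b in zip(ys, ys[1:])), f"not sorted: {xs} -> {ys}"
        # Property 2: the output contains exactly the same elements as the input.
        assert Counter(ys) == Counter(xs), f"elements changed: {xs} -> {ys}"

check_sort_property()
print("all randomized trials passed")
```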
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and the test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or of determining whether the actual outputs agree with the expected outcomes.
VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test.
The technique was popularized in web development by the Ruby library vcr.
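The following Python sketch illustrates the record/replay idea with a simple JSON "cassette"; it is not the API of the vcr library, and `call_external_service` is a hypothetical stand-in for a slow or unreliable third-party call.

```python
import json
import os
from functools import wraps

def with_cassette(path):
    """Record/replay decorator: a minimal sketch of the VCR idea, not the vcr library's API."""
    def decorator(call_external):
        @wraps(call_external)
        def wrapper(*args):
            cassette = {}
            if os.path.exists(path):
                with open(path) as f:
                    cassette = json.load(f)
            key = json.dumps(args)
            if key not in cassette:                 # first run: talk to the real system and record
                cassette[key] = call_external(*args)
                with open(path, "w") as f:
                    json.dump(cassette, f)
            return cassette[key]                    # later runs: replay the recorded response
        return wrapper
    return decorator

@with_cassette("external_api.cassette.json")
def call_external_service(query):
    # Placeholder for a third-party call; in a real test this would perform an HTTP request.
    # Its result is recorded on the first run and replayed on subsequent runs.
    return {"query": query, "status": "ok"}
```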
In an organization, testers may be in a separate team from the rest of the software development team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
In the 1980s, the term software tester started to be used to denote a separate profession.
Notable software testing roles and titles include:[66] test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.[67]
Organizations that develop software perform testing differently, but there are common patterns.[2]
In waterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer.[68] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[10]: 145–146
Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.[69]
Agile software development commonly involves testing while the code is being written, organizing teams that include both programmers and testers, and having team members perform both programming and testing.
One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code.[70] Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. The goal of this continuous integration is to support development and reduce defects.[71][70]
Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing.[72]
The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently.
Software testing is used in association with verification and validation:[73]
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:[11]: 80–81
And, according to the ISO 9000 standard:
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And the products mentioned in the definition of verification are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code), must be validated dynamically with the stakeholders by executing the software and having them try it.
Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.
In some organizations, software testing is part of a software quality assurance (SQA) process.[3]: 347 In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed]
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers.
Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
A software testing process can produce several artifacts. The actual artifacts produced depend on the software development model used and on stakeholder and organisational needs.
A test plan is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans.[51] The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan).[51] A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result.[75] This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
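As a rough sketch of how such fields might be held in code (the field names are illustrative, not a standard), a test case could be modeled as a simple data structure:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Minimal sketch of the fields a written test case often carries (names are illustrative)."""
    identifier: str
    requirement_refs: list[str]
    preconditions: str
    steps: list[str]
    input_data: str
    expected_result: str
    actual_result: str = ""        # filled in when the test is executed
    automatable: bool = False

login_case = TestCase(
    identifier="TC-001",
    requirement_refs=["REQ-42"],
    preconditions="A registered user account exists",
    steps=["Open the login page", "Enter valid credentials", "Submit the form"],
    input_data="username=alice, password=<valid>",
    expected_result="The user lands on the dashboard",
)
```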
A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program.
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project. There are techniques to generate test data.
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report of all executed tests may be generated.
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in the controversy section.
Some of the major software testing controversies include:
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found.[85] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.[86]
|
https://en.wikipedia.org/wiki/Software_testing#Limitations_of_testing
|
Dynamics of Markovian particles (DMP) is the basis of a theory for kinetics of particles in open heterogeneous systems. It can be looked upon as an application of the notion of stochastic process conceived as a physical entity; e.g. the particle moves because there is a transition probability acting on it.
Two particular features of DMP might be noticed: (1) an ergodic-like relation between the motion of a particle and the corresponding steady state, and (2) the classic notion of geometric volume appears nowhere (e.g. a concept such as flow of "substance" is not expressed as liters per time unit but as number of particles per time unit).
Although primitive, DMP has been applied for solving a classic paradox of the absorption of mercury by fish and by mollusks. The theory has also been applied for a purely probabilistic derivation of the fundamental physical principle: conservation of mass; this might be looked upon as a contribution to the old and ongoing discussion of the relation between physics and probability theory.
|
https://en.wikipedia.org/wiki/Dynamics_of_Markovian_particles
|
Trusted timestamping is the process of securely keeping track of the creation and modification time of a document. Security here means that no one—not even the owner of the document—should be able to change it once it has been recorded, provided that the timestamper's integrity is never compromised.
The administrative aspect involves setting up a publicly available, trusted timestamp management infrastructure to collect, process and renew timestamps.
The idea of timestamping information is centuries old. For example, when Robert Hooke discovered Hooke's law in 1660, he did not want to publish it yet, but wanted to be able to claim priority. So he published the anagram ceiiinosssttuv and later published the translation ut tensio sic vis (Latin for "as is the extension, so is the force"). Similarly, Galileo first published his discovery of the phases of Venus in the anagram form.
Sir Isaac Newton, in responding to questions from Leibniz in a letter in 1677, concealed the details of his "fluxional technique" with an anagram:
Trusted digital timestamping was first discussed in the literature by Stuart Haber and W. Scott Stornetta.[1]
There are many timestamping schemes with different security goals:
Coverage in standards:
For systematic classification and evaluation of timestamping schemes see works by Masashi Une.[2]
According to the RFC 3161 standard, a trusted timestamp is a timestamp issued by a Trusted Third Party (TTP) acting as a Time Stamping Authority (TSA). It is used to prove the existence of certain data before a certain point (e.g. contracts, research data, medical records, ...) without the possibility that the owner can backdate the timestamps. Multiple TSAs can be used to increase reliability and reduce vulnerability.
The newer ANSI ASC X9.95 Standard for trusted timestamps augments the RFC 3161 standard with data-level security requirements to ensure data integrity against a reliable time source that is provable to any third party. This standard has been applied to authenticating digitally signed data for regulatory compliance, financial transactions, and legal evidence.
The technique is based on digital signatures and hash functions. First a hash is calculated from the data. A hash is a sort of digital fingerprint of the original data: a string of bits that is practically impossible to duplicate with any other set of data. If the original data is changed then this will result in a completely different hash. This hash is sent to the TSA. The TSA concatenates a timestamp to the hash and calculates the hash of this concatenation. This hash is in turn digitally signed with the private key of the TSA. The signed hash and the timestamp are sent back to the requester of the timestamp, who stores these with the original data.
Since the original data cannot be calculated from the hash (because the hash function is a one-way function), the TSA never gets to see the original data, which allows the use of this method for confidential data.
Anyone trusting the timestamper can then verify that the document was not created after the date that the timestamper vouches. It can also no longer be repudiated that the requester of the timestamp was in possession of the original data at the time given by the timestamp. To prove this, the hash of the original data is calculated, the timestamp given by the TSA is appended to it, and the hash of the result of this concatenation is calculated; call this hash A.
Then the digital signature of the TSA needs to be validated. This is done by decrypting the digital signature using the public key of the TSA, producing hash B. Hash A is then compared with hash B inside the signed TSA message to confirm they are equal, proving that the timestamp and message are unaltered and were issued by the TSA. If not, then either the timestamp was altered or the timestamp was not issued by the TSA.
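A minimal Python sketch of this issue-and-verify flow is shown below. It is only an illustration: a real TSA signs with an asymmetric private key per RFC 3161, whereas here an HMAC with a shared secret stands in for the digital signature.

```python
import hashlib
import hmac
import time

# Demo-only secret; in the real scheme the TSA holds a private signing key instead.
TSA_SECRET = b"demo-only-key"

def request_timestamp(document: bytes) -> dict:
    doc_hash = hashlib.sha256(document).hexdigest()        # the requester sends only the hash
    timestamp = int(time.time())
    token = hashlib.sha256(f"{doc_hash}|{timestamp}".encode()).hexdigest()
    signature = hmac.new(TSA_SECRET, token.encode(), hashlib.sha256).hexdigest()
    return {"timestamp": timestamp, "signature": signature}

def verify_timestamp(document: bytes, record: dict) -> bool:
    doc_hash = hashlib.sha256(document).hexdigest()         # recompute "hash A" from the original data
    token = hashlib.sha256(f"{doc_hash}|{record['timestamp']}".encode()).hexdigest()
    expected = hmac.new(TSA_SECRET, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = request_timestamp(b"important contract text")
print(verify_timestamp(b"important contract text", record))   # True
print(verify_timestamp(b"tampered contract text", record))    # False
```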
With the advent of cryptocurrencies like bitcoin, it has become possible to get some level of secure timestamp accuracy in a decentralized and tamper-proof manner. Digital data can be hashed and the hash can be incorporated into a transaction stored in the blockchain, which serves as evidence of the time at which that data existed.[3][4] For proof of work blockchains, the security derives from the tremendous amount of computational effort performed after the hash was submitted to the blockchain. Tampering with the timestamp would require more computational resources than the rest of the network combined, and cannot be done unnoticed in an actively defended blockchain.
However, the design and implementation of Bitcoin in particular makes its timestamps vulnerable to some degree of manipulation, allowing timestamps up to two hours in the future, and accepting new blocks with timestamps earlier than the previous block.[5]
The decentralized timestamping approach using the blockchain has also found applications in other areas, such as in dashboard cameras, to secure the integrity of video files at the time of their recording,[6] or to prove priority for creative content and ideas shared on social media platforms.[7]
|
https://en.wikipedia.org/wiki/Trusted_timestamping
|
In regression analysis, least squares is a parameter estimation method in which the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) is minimized.
The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.
Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the model functions are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases.
Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve.
When the observations come from an exponential family with identity as its natural sufficient statistics and mild conditions are satisfied (e.g. for normal, exponential, Poisson and binomial distributions), standardized least-squares estimates and maximum-likelihood estimates are identical.[1] The method of least squares can also be derived as a method of moments estimator.
The following discussion is mostly presented in terms of linear functions but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model.
The least-squares method was officially discovered and published by Adrien-Marie Legendre (1805),[2] though it is usually also co-credited to Carl Friedrich Gauss (1809),[3][4] who contributed significant theoretical advances to the method,[4] and may have also used it in his earlier work in 1794 and 1795.[5][4]
The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation.
The method was the culmination of several advances that took place during the course of the eighteenth century:[6]
The first clear and concise exposition of the method of least squares was published by Legendre in 1805.[10] The technique is described as an algebraic procedure for fitting linear equations to data, and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the Earth. Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique.[6]
In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795.[11] This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and with the normal distribution. He had managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as estimate of the location parameter. In this attempt, he invented the normal distribution.
An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis.
In 1810, after reading Gauss's work, Laplace, after proving the central limit theorem, used it to give a large sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, normally distributed, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. An extended version of this result is known as the Gauss–Markov theorem.
The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares.[12]
The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of $n$ points (data pairs) $(x_i, y_i)$, $i = 1, \ldots, n$, where $x_i$ is an independent variable and $y_i$ is a dependent variable whose value is found by observation. The model function has the form $f(x, \boldsymbol{\beta})$, where the $m$ adjustable parameters are held in the vector $\boldsymbol{\beta}$. The goal is to find the parameter values for the model that "best" fit the data. The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model:
$$r_i = y_i - f(x_i, \boldsymbol{\beta}).$$
The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, $S$:[13]
$$S = \sum_{i=1}^{n} r_i^2.$$
In the simplest case $f(x_i, \boldsymbol{\beta}) = \beta$, and the result of the least-squares method is the arithmetic mean of the input data.
An example of a model in two dimensions is that of the straight line. Denoting the y-intercept as $\beta_0$ and the slope as $\beta_1$, the model function is given by $f(x, \boldsymbol{\beta}) = \beta_0 + \beta_1 x$. See linear least squares for a fully worked out example of this model.
A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, $x$ and $z$, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point.
A residual plot illustrating random fluctuations about $r_i = 0$ indicates that a linear model $(Y_i = \beta_0 + \beta_1 x_i + U_i)$ is appropriate, where $U_i$ is an independent, random variable.[13]
If the residual points had some sort of shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape, a parabolic model $(Y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + U_i)$ would be appropriate for the data. The residuals for a parabolic model can be calculated via $r_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i - \hat{\beta}_2 x_i^2$.[13]
This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications:
The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains $m$ parameters, there are $m$ gradient equations:
$$\frac{\partial S}{\partial \beta_j} = 2 \sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0, \quad j = 1, \ldots, m,$$
and since $r_i = y_i - f(x_i, \boldsymbol{\beta})$, the gradient equations become
$$-2 \sum_i r_i \frac{\partial f(x_i, \boldsymbol{\beta})}{\partial \beta_j} = 0, \quad j = 1, \ldots, m.$$
The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives.[15]
A regression model is a linear one when the model comprises a linear combination of the parameters, i.e.,
$$f(x, \boldsymbol{\beta}) = \sum_{j=1}^{m} \beta_j \phi_j(x),$$
where the function $\phi_j$ is a function of $x$.[15]
Letting $X_{ij} = \phi_j(x_i)$ and putting the independent and dependent variables in matrices $X$ and $Y$, respectively, we can compute the least squares in the following way. Note that $D$ is the set of all data.[15][16]
$$L(D, \boldsymbol{\beta}) = \left\| Y - X\boldsymbol{\beta} \right\|^2 = (Y - X\boldsymbol{\beta})^{\mathsf{T}} (Y - X\boldsymbol{\beta}) = Y^{\mathsf{T}} Y - 2 Y^{\mathsf{T}} X \boldsymbol{\beta} + \boldsymbol{\beta}^{\mathsf{T}} X^{\mathsf{T}} X \boldsymbol{\beta}$$
The gradient of the loss is:
$$\frac{\partial L(D, \boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = \frac{\partial \left( Y^{\mathsf{T}} Y - 2 Y^{\mathsf{T}} X \boldsymbol{\beta} + \boldsymbol{\beta}^{\mathsf{T}} X^{\mathsf{T}} X \boldsymbol{\beta} \right)}{\partial \boldsymbol{\beta}} = -2 X^{\mathsf{T}} Y + 2 X^{\mathsf{T}} X \boldsymbol{\beta}$$
Setting the gradient of the loss to zero and solving for $\boldsymbol{\beta}$, we get:[16][15]
$$-2 X^{\mathsf{T}} Y + 2 X^{\mathsf{T}} X \boldsymbol{\beta} = 0 \;\Rightarrow\; X^{\mathsf{T}} Y = X^{\mathsf{T}} X \boldsymbol{\beta}$$
$$\hat{\boldsymbol{\beta}} = \left( X^{\mathsf{T}} X \right)^{-1} X^{\mathsf{T}} Y$$
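A minimal numpy sketch of this closed-form solution, fitting a straight line to made-up data, might look as follows.

```python
import numpy as np

# Fit a straight line y = b0 + b1*x by solving the normal equations derived above.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

X = np.column_stack([np.ones_like(x), x])        # design matrix: phi_0(x) = 1, phi_1(x) = x
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # solves (X^T X) beta = X^T y

residuals = y - X @ beta_hat
print("intercept, slope:", beta_hat)
print("sum of squared residuals:", residuals @ residuals)
```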
There is, in some cases, a closed-form solution to a non-linear least squares problem – but in general there is not. In the case of no closed-form solution, numerical algorithms are used to find the value of the parameters $\boldsymbol{\beta}$ that minimizes the objective. Most algorithms involve choosing initial values for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation:
$$\beta_j^{k+1} = \beta_j^{k} + \Delta\beta_j,$$
where a superscript $k$ is an iteration number, and the vector of increments $\Delta\beta_j$ is called the shift vector. In some commonly used algorithms, at each iteration the model may be linearized by approximation to a first-order Taylor series expansion about $\boldsymbol{\beta}^k$:
$$f(x_i, \boldsymbol{\beta}) = f^k(x_i, \boldsymbol{\beta}) + \sum_j \frac{\partial f(x_i, \boldsymbol{\beta})}{\partial \beta_j} \left( \beta_j - \beta_j^{k} \right) = f^k(x_i, \boldsymbol{\beta}) + \sum_j J_{ij}\, \Delta\beta_j.$$
The Jacobian $J$ is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. The residuals are given by
$$r_i = y_i - f^k(x_i, \boldsymbol{\beta}) - \sum_{k=1}^{m} J_{ik}\, \Delta\beta_k = \Delta y_i - \sum_{j=1}^{m} J_{ij}\, \Delta\beta_j.$$
To minimize the sum of squares of $r_i$, the gradient equation is set to zero and solved for $\Delta\beta_j$:
$$-2 \sum_{i=1}^{n} J_{ij} \left( \Delta y_i - \sum_{k=1}^{m} J_{ik}\, \Delta\beta_k \right) = 0,$$
which, on rearrangement, become $m$ simultaneous linear equations, the normal equations:
$$\sum_{i=1}^{n} \sum_{k=1}^{m} J_{ij} J_{ik}\, \Delta\beta_k = \sum_{i=1}^{n} J_{ij}\, \Delta y_i \qquad (j = 1, \ldots, m).$$
The normal equations are written in matrix notation as
$$\left( \mathbf{J}^{\mathsf{T}} \mathbf{J} \right) \Delta\boldsymbol{\beta} = \mathbf{J}^{\mathsf{T}} \Delta\mathbf{y}.$$
These are the defining equations of the Gauss–Newton algorithm.
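The following numpy sketch applies this iteration to a made-up exponential-decay model; the data, the starting guess, and the fixed iteration count are all illustrative assumptions.

```python
import numpy as np

# Gauss–Newton sketch for the nonlinear model f(x, beta) = a * exp(b * x) on synthetic data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.2, 0.8, 0.5, 0.3])

def model(x, beta):
    a, b = beta
    return a * np.exp(b * x)

def jacobian(x, beta):
    a, b = beta
    return np.column_stack([np.exp(b * x),             # df/da
                            a * x * np.exp(b * x)])     # df/db

beta = np.array([2.0, -0.3])                            # initial guess
for _ in range(20):
    r = y - model(x, beta)                              # residuals (Delta y)
    J = jacobian(x, beta)
    delta = np.linalg.solve(J.T @ J, J.T @ r)           # normal equations (J^T J) delta = J^T r
    beta = beta + delta
    if np.linalg.norm(delta) < 1e-10:
        break

print("estimated a, b:", beta)
```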
These differences must be considered whenever the solution to a nonlinear least squares problem is being sought.[15]
Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension of a spring $y$ is proportional to the force, $F$, applied to it. Thus
$$y = f(F, k) = kF$$
constitutes the model, where $F$ is the independent variable. In order to estimate the force constant, $k$, we conduct a series of $n$ measurements with different forces to produce a set of data, $(F_i, y_i),\ i = 1, \dots, n$, where $y_i$ is a measured spring extension.[17] Each experimental observation will contain some error, $\varepsilon$, and so we may specify an empirical model for our observations,
$$y_i = kF_i + \varepsilon_i.$$
There are many methods we might use to estimate the unknown parameter $k$. Since the $n$ equations in the $m$ variables in our data comprise an overdetermined system with one unknown and $n$ equations, we estimate $k$ using least squares. The sum of squares to be minimized is[15]
$$S = \sum_{i=1}^{n} \left( y_i - kF_i \right)^2.$$
The least squares estimate of the force constant, $k$, is given by
$$\hat{k} = \frac{\sum_i F_i y_i}{\sum_i F_i^2}.$$
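With made-up force/extension pairs, the estimate can be computed directly from this formula:

```python
# Least-squares estimate of the spring constant k from synthetic force/extension data.
forces = [1.0, 2.0, 3.0, 4.0, 5.0]           # applied force F_i
extensions = [0.52, 0.98, 1.55, 2.03, 2.49]  # measured extension y_i

k_hat = sum(F * y for F, y in zip(forces, extensions)) / sum(F * F for F in forces)
print(f"k_hat = {k_hat:.3f}")   # close to 0.5 for this made-up data
```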
We assume that applying force causes the spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law.
In a least squares calculation with unit weights, or in linear regression, the variance on the $j$th parameter, denoted $\operatorname{var}(\hat{\beta}_j)$, is usually estimated with
$$\operatorname{var}(\hat{\beta}_j) = \sigma^2 \left( \left[ X^{\mathsf{T}} X \right]^{-1} \right)_{jj} \approx \hat{\sigma}^2 C_{jj}, \qquad \hat{\sigma}^2 \approx \frac{S}{n - m}, \qquad C = \left( X^{\mathsf{T}} X \right)^{-1},$$
where the true error variance $\sigma^2$ is replaced by an estimate, the reduced chi-squared statistic, based on the minimized value of the residual sum of squares (objective function), $S$. The denominator, $n - m$, is the statistical degrees of freedom; see effective degrees of freedom for generalizations.[15] $C$ is the covariance matrix.
If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inferring is easy when assuming that the errors follow a normal distribution, consequently implying that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables.[15]
It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases.
However, suppose the errors are not normally distributed. In that case, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution.
A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms, heteroscedasticity is when the variance of $Y_i$ depends on the value of $x_i$, which causes the residual plot to create a "fanning out" effect towards larger $Y_i$ values. On the other hand, homoscedasticity is assuming that the variance of $Y_i$ and the variance of $U_i$ are equal.[13]
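Assuming the observation variances are known (they are invented here), a weighted least squares fit replaces the ordinary normal equations with $X^{\mathsf{T}} W X \boldsymbol{\beta} = X^{\mathsf{T}} W y$, as in the following sketch.

```python
import numpy as np

# Weighted least squares sketch: observations with larger variance get smaller weight.
# The weights are the reciprocals of assumed (made-up) observation variances.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 2.1, 2.9, 4.3, 4.8])
variances = np.array([0.1, 0.1, 0.2, 0.5, 1.0])   # heteroscedastic: variance grows with x

X = np.column_stack([np.ones_like(x), x])
W = np.diag(1.0 / variances)

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # (X^T W X) beta = X^T W y
print("weighted estimate (intercept, slope):", beta_wls)
```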
The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the $y$ direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally.
Notable statistician Sara van de Geer used empirical process theory and the Vapnik–Chervonenkis dimension to prove a least-squares estimator can be interpreted as a measure on the space of square-integrable functions.[19]
In some contexts, a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that $\|\beta\|_2^2$, the squared $\ell_2$-norm of the parameter vector, is not greater than a given value to the least squares formulation, leading to a constrained minimization problem. This is equivalent to the unconstrained minimization problem where the objective function is the residual sum of squares plus a penalty term $\alpha \|\beta\|_2^2$, where $\alpha$ is a tuning parameter (this is the Lagrangian form of the constrained minimization problem).[20]
In a Bayesian context, this is equivalent to placing a zero-mean normally distributed prior on the parameter vector.
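A small numpy sketch contrasting the ridge solution with ordinary least squares on synthetic data (the data and the value of $\alpha$ are made up) illustrates the shrinkage: the penalty turns the normal equations into $(X^{\mathsf{T}} X + \alpha I)\boldsymbol{\beta} = X^{\mathsf{T}} y$.

```python
import numpy as np

# Ridge regression sketch: the penalty alpha * ||beta||_2^2 adds alpha*I to the normal equations.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_beta = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=50)

alpha = 1.0
beta_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

print("OLS  :", np.round(beta_ols, 3))
print("ridge:", np.round(beta_ridge, 3))   # coefficients are shrunk towards zero but stay non-zero
```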
An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that $\|\beta\|_1$, the $L_1$-norm of the parameter vector, is no greater than a given value.[21][22][23] (One can show, as above, using Lagrange multipliers that this is equivalent to an unconstrained minimization of the least-squares penalty with $\alpha \|\beta\|_1$ added.) In a Bayesian context, this is equivalent to placing a zero-mean Laplace prior distribution on the parameter vector.[24] The optimization problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm.
One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas ridge regression never fully discards any features. Some feature selection techniques are developed based on the LASSO, including Bolasso, which bootstraps samples,[25] and FeaLect, which analyzes the regression coefficients corresponding to different values of $\alpha$ to score all the features.[26]
The $L_1$-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables.[21] For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization.
|
https://en.wikipedia.org/wiki/Least_squares
|
The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance.[1][2][3] In mathematics, association schemes belong to both algebra and combinatorics. In algebraic combinatorics, association schemes provide a unified approach to many topics, for example combinatorial designs and the theory of error-correcting codes.[4][5] In algebra, the theory of association schemes generalizes the character theory of linear representations of groups.[6][7][8]
An $n$-class association scheme consists of a set $X$ together with a partition $S$ of $X \times X$ into $n + 1$ binary relations, $R_0, R_1, \ldots, R_n$, which satisfy:
An association scheme is commutative if $p_{ij}^k = p_{ji}^k$ for all $i$, $j$ and $k$. Most authors assume this property.
Note, however, that while the notion of an association scheme generalizes the notion of a group, the notion of a commutative association scheme only generalizes the notion of acommutative group.
A symmetric association scheme is one in which each $R_i$ is a symmetric relation. That is:
Every symmetric association scheme is commutative.
Two points $x$ and $y$ are called $i$th associates if $(x, y) \in R_i$. The definition states that if $x$ and $y$ are $i$th associates then so are $y$ and $x$. Every pair of points are $i$th associates for exactly one $i$. Each point is its own zeroth associate while distinct points are never zeroth associates. If $x$ and $y$ are $k$th associates then the number of points $z$ which are both $i$th associates of $x$ and $j$th associates of $y$ is a constant $p_{ij}^k$.
A symmetric association scheme can be visualized as a complete graph with labeled edges. The graph has $v$ vertices, one for each point of $X$, and the edge joining vertices $x$ and $y$ is labeled $i$ if $x$ and $y$ are $i$th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled $k$ having the other edges labeled $i$ and $j$ is a constant $p_{ij}^k$, depending on $i, j, k$ but not on the choice of the base. In particular, each vertex is incident with exactly $p_{ii}^0 = v_i$ edges labeled $i$; $v_i$ is the valency of the relation $R_i$. There are also loops labeled $0$ at each vertex $x$, corresponding to $R_0$.
The relations are described by their adjacency matrices. $A_i$ is the adjacency matrix of $R_i$ for $i = 0, \ldots, n$ and is a $v \times v$ matrix with rows and columns labeled by the points of $X$.
The definition of a symmetric association scheme is equivalent to saying that the $A_i$ are $v \times v$ (0,1)-matrices which satisfy
I. $A_0 = I$, the identity matrix,
II. $\sum_{i=0}^{n} A_i = J$, the all-ones matrix,
III. $A_i = A_i^{\mathsf{T}}$ for all $i$,
IV. $A_i A_j = \sum_{k=0}^{n} p_{ij}^{k} A_k$ for all $i$ and $j$.
The $(x, y)$-th entry of the left side of (IV) is the number of paths of length two between $x$ and $y$ with labels $i$ and $j$ in the graph. Note that the rows and columns of $A_i$ each contain $v_i$ 1's: $A_i J = J A_i = v_i J$, where $J$ is the all-ones matrix.
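As a concrete check of these conditions, the following numpy sketch builds the adjacency matrices of the Hamming scheme H(3, 2) (binary words of length 3, with $R_i$ the pairs at Hamming distance $i$) and verifies condition (IV) numerically; the layout of the code is illustrative only.

```python
import itertools
import numpy as np

# Build the adjacency matrices A_0, ..., A_3 of the Hamming scheme H(3, 2).
points = list(itertools.product([0, 1], repeat=3))
v, n = len(points), 3
A = np.zeros((n + 1, v, v), dtype=int)
for a, x in enumerate(points):
    for b, y in enumerate(points):
        d = sum(xi != yi for xi, yi in zip(x, y))   # Hamming distance decides the relation
        A[d, a, b] = 1

assert np.array_equal(A.sum(axis=0), np.ones((v, v), dtype=int))   # relations partition X x X
assert np.array_equal(A[0], np.eye(v, dtype=int))                   # R_0 is the identity relation

for i in range(n + 1):
    for j in range(n + 1):
        P = A[i] @ A[j]
        # For each k, the entries of A_i A_j must be constant on the support of A_k;
        # that constant is the intersection number p_ij^k.
        constants = [set(P[A[k] == 1]) for k in range(n + 1)]
        assert all(len(s) == 1 for s in constants)
print("H(3,2) satisfies the association scheme axioms")
```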
The term association scheme is due to (Bose & Shimamoto 1952) but the concept is already inherent in (Bose & Nair 1939).[9] These authors were studying what statisticians have called partially balanced incomplete block designs (PBIBDs). The subject became an object of algebraic interest with the publication of (Bose & Mesner 1959) and the introduction of the Bose–Mesner algebra. The most important contribution to the theory was the thesis of Ph. Delsarte (Delsarte 1973) who recognized and fully used the connections with coding theory and design theory.[10]
A generalization called coherent configurations has been studied by D. G. Higman.
The adjacency matrices $A_i$ of the graphs $(X, R_i)$ generate a commutative and associative algebra $\mathcal{A}$ (over the real or complex numbers) both for the matrix product and the pointwise product. This associative, commutative algebra is called the Bose–Mesner algebra of the association scheme.
Since the matrices in $\mathcal{A}$ are symmetric and commute with each other, they can be diagonalized simultaneously. Therefore, $\mathcal{A}$ is semi-simple and has a unique basis of primitive idempotents $J_0, \ldots, J_n$.
There is another algebra of $(n+1) \times (n+1)$ matrices which is isomorphic to $\mathcal{A}$, and is often easier to work with.
The Hamming scheme and the Johnson scheme are of major significance in classical coding theory.
In coding theory, association scheme theory is mainly concerned with the distance of a code. The linear programming method produces upper bounds for the size of a code with given minimum distance, and lower bounds for the size of a design with a given strength. The most specific results are obtained in the case where the underlying association scheme satisfies certain polynomial properties; this leads one into the realm of orthogonal polynomials. In particular, some universal bounds are derived for codes and designs in polynomial-type association schemes.
In classical coding theory, dealing with codes in a Hamming scheme, the MacWilliams transform involves a family of orthogonal polynomials known as the Krawtchouk polynomials. These polynomials give the eigenvalues of the distance relation matrices of the Hamming scheme.
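As an illustrative numerical check (not part of the article), the binary Krawtchouk polynomial $K_k(x; n) = \sum_j (-1)^j \binom{x}{j} \binom{n-x}{k-j}$ can be evaluated directly and compared against the eigenvalues of the distance matrices of H(3, 2):

```python
import itertools
from math import comb
import numpy as np

def krawtchouk(k, x, n):
    # Binary (q = 2) Krawtchouk polynomial K_k(x; n).
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 3
points = list(itertools.product([0, 1], repeat=n))

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

for k in range(n + 1):
    A_k = np.array([[1 if dist(x, y) == k else 0 for y in points] for x in points])
    eigenvalues = sorted(np.round(np.linalg.eigvalsh(A_k)).astype(int).tolist())
    # K_k(j) occurs as an eigenvalue with multiplicity C(n, j).
    expected = sorted(krawtchouk(k, j, n) for j in range(n + 1) for _ in range(comb(n, j)))
    assert eigenvalues == expected, (k, eigenvalues, expected)
print("Krawtchouk values K_k(j) match the eigenvalues of the H(3,2) distance matrices")
```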
|
https://en.wikipedia.org/wiki/Association_scheme
|
The youth rights movement (also known as youth liberation) seeks to grant the rights to young people that are traditionally reserved for adults. This is closely akin to the notion of evolving capacities within the children's rights movement, but the youth rights movement differs from the children's rights movement in that the latter places emphasis on the welfare and protection of children through the actions and decisions of adults, while the youth rights movement seeks to grant youth the liberty to make their own decisions autonomously in the ways adults are permitted to, or to abolish the legal minimum ages at which such rights are acquired, such as the age of majority and the voting age.[1]
Codified youth rights constitute one aspect of how youth are treated in society. Other aspects include social questions of how adults see and treat youth, and how open a society is to youth participation.[2]
Of primary importance to advocates of youth rights are historical perceptions of young people, considered to be oppressive and informed by paternalism, adultism and ageism in general, as well as fears of children and youth. Several of these perceptions include the assumption that young people are incapable of making crucial decisions and need protecting from their tendency to act impulsively or irrationally.[3] Such perceptions can inform laws throughout society, including the voting age, child labor laws, the right to work, curfews, the drinking age, the smoking age, the gambling age, the age of consent, the driving age, emancipation, medical autonomy, closed adoption, corporal punishment, the age of majority, and military conscription. Restrictions on young people that are not applied to adults may be called status offenses and viewed as a form of unjustified discrimination.[4]
There are specific sets of issues addressing the rights of youth in schools, including zero tolerance, "gulag schools", in loco parentis, and student rights in general. Homeschooling, unschooling, and alternative schools are popular youth rights issues.
A long-standing effort within the youth rights movement has focused on civic engagement. Other issues include mandatory allowance[5] and non-authoritarian parenting.[6] There have been a number of historical campaigns to increase youth voting rights by lowering the voting age and the age of candidacy. There are also efforts to get young people elected to prominent positions in local communities, including as members of city councils and as mayors. For example, in the 2011 Raleigh mayoral election, 17-year-old Seth Keel launched a campaign for Mayor despite the age requirement of 21.[7] Strategies for gaining youth rights that are frequently utilized by their advocates include developing youth programs and organizations that promote youth activism, youth participation, youth empowerment, youth voice, youth/adult partnerships, intergenerational equity and civil disobedience between young people and adults.
First emerging as a distinct movement in the 1930s, youth rights have long been concerned with civil rights and intergenerational equity. Tracing its roots to youth activists during the Great Depression, youth rights has influenced the civil rights movement, opposition to the Vietnam War, and many other movements. Since the advent of the Internet, the youth rights movement has been gaining predominance again.[citation needed]
Some youth rights advocates use the argument of fallibility against the belief that others can know what is best or worst for an individual, and criticize the children's rights movement for assuming that exterior legislators, parents, authorities and so on can know what is for a minor's own good. These thinkers argue that the ability to correct what others think about one's own welfare in a falsificationist (as opposed to postmodernist) manner constitutes a non-arbitrary mental threshold at which an individual can speak for himself or herself independently of exterior assumptions, as opposed to arbitrary chronological minimum ages in legislation. They also criticize the carte blanche for arbitrary definitions of "maturity" implicit in children's rights laws such as "with rising age and maturity" for being part of the problem, and suggest the absolute threshold of conceptual after-correcture to remedy it.[8]
These views are often supported by people with experience of the belief in absolutely gradual mental development being abused as an argument for the "necessity" of arbitrary distinctions such as the age of majority, which they perceive as oppressive (either currently oppressing or having formerly oppressed them, depending on age and jurisdiction), and instead cite types of connectionism that allow for critical phenomena that encompass the entire brain. These thinkers tend to stress that different individuals reach the critical threshold at somewhat different ages, with no more than a one in 365 (one in 366 in the case of leap years) chance of coinciding with a birthday, and that the relevant difference that it is acceptable to base different treatment on is only between individuals and not between jurisdictions. Generally, the importance of judging each individual by observable relevant behaviors and not by birth date is stressed by advocates of these views.[9]
Children's rights cover all rights belonging to children. When individuals grow up, they are granted new rights (such as voting, consent, and driving) and duties (such as criminal responsibility and draft eligibility). There are different minimum limits of age at which youth are, situationally, not independent or deemed legally competent to make certain decisions or take certain actions. Some rights and responsibilities that legally come with age are:
After youth reach these limits they are free to vote, buy or consume alcoholic beverages, and drive cars, among other acts.
The "youth rights movement", also described as "youth liberation", is a nascentgrass-roots movementwhose aim is to fight againstageismand for thecivil rightsof young people – those "under the age of majority", which is 18 in most countries. Some groups combatpedophobiaandephebiphobiathroughout society by promotingyouth voice,youth empowermentand ultimately,intergenerational equitythroughyouth/adult partnerships.[10]Many advocates of youth rights distinguish their movement from thechildren's rightsmovement, which they argue advocates changes that are often restrictive towards children and youth.[11]
International Youth Rights (IYR) is a student-run youth rights organization in China, with regional chapters across the country and abroad. Its aim is to make the voices of youth heard across the world and to give opportunities for youths to carry out their own creative solutions to world issues in real life.
The European Youth Forum (YFJ, from Youth Forum Jeunesse) is the platform of the National Youth Council and International Non-Governmental Youth Organisations in Europe. It strives for youth rights in international institutions such as the European Union, the Council of Europe and the United Nations.
The European Youth Forum works in the fields of youth policy and youth work development. It focuses its work on European youth policy matters, whilst through engagement on the global level it is enhancing the capacities of its members and promoting global interdependence. In its daily work the European Youth Forum represents the views and opinions of youth organisations in all relevant policy areas and promotes the cross-sectoral nature of youth policy towards a variety of institutional actors. The principles of equality and sustainable development are mainstreamed in the work of the European Youth Forum.
Other international youth rights organizations include Article 12 in Scotland and K.R.A.T.Z.A. in Germany.
In Malta, the voting age was lowered to 16 in 2018 for national and European Parliament elections.[12]
The European Youth Portal is the starting place for the European Union's youth policy, with Erasmus+ as one of its key initiatives.
The National Youth Rights Association is the primary youth rights organization for youths in the United States, with local chapters across the country. Americans for a Society Free from Age Restrictions is another important organization. The Freechild Project has gained a reputation for interjecting youth rights issues into organizations historically focused on youth development and youth service through their consulting and training activities. The Global Youth Action Network engages young people around the world in advocating for youth rights, and Peacefire provides technology-specific support for youth rights activists.
Choose Responsibility and their successor organization, the Amethyst Initiative, founded by John McCardell, Jr., exist to promote the discussion of the drinking age, specifically. Choose Responsibility focuses on promoting a legal drinking age of 18, but includes provisions such as education and licensing. The Amethyst Initiative, a collaboration of college presidents and other educators, focuses on discussion and examination of the drinking age, with specific attention paid to the culture of alcohol as it exists on college campuses and the negative impact of the drinking age on alcohol education and responsible drinking.
Young India Foundation (YIF) is a youth-led youth rights organization in India, based in Gurgaon with regional chapters across the country. Its aim is to make the voices of youth heard across India and to seek representation for the 60% of India's demographic that is below the age of 25.[13] YIF is also the organization behind the age-of-candidacy campaign to lower the age at which a Member of Legislative Assembly or Member of Parliament can contest elections.[14]
Youth rights, as a philosophy and as a movement, has been informed and led by a variety of individuals and institutions across the United States and around the world. In the 1960s and 70s John Holt, Richard Farson, Paul Goodman and Neil Postman were well-regarded authors who spoke out about youth rights throughout society, including education, government, social services and popular citizenship. Shulamith Firestone also wrote about youth rights issues in the second-wave feminist classic The Dialectic of Sex. Alex Koroknay-Palicz has become a vocal youth rights proponent, making regular appearances on television and in newspapers. Mike A. Males is a prominent sociologist and researcher who has published several books regarding the rights of young people across the United States. Robert Epstein is another prominent author who has called for greater rights and responsibilities for youth. Several organizational leaders, including Sarah Fitz-Claridge of Taking Children Seriously, Bennett Haselton of Peacefire and Adam Fletcher of The Freechild Project, conduct local, national, and international outreach for youth and adults regarding youth rights. Giuseppe Porcaro, during his mandate as Secretary General of the European Youth Forum, edited the second edition of the volume "The International Law of Youth Rights", published by Brill Publishers.
|
https://en.wikipedia.org/wiki/Youth_rights
|
Inlinguistics,grammatical relations(also calledgrammatical functions,grammatical roles, orsyntactic functions) are functional relationships betweenconstituentsin aclause. The standard examples of grammatical functions from traditional grammar aresubject,direct object, andindirect object. In recent times, the syntactic functions (more generally referred to as grammatical relations), typified by the traditional categories of subject and object, have assumed an important role in linguistic theorizing, within a variety of approaches ranging fromgenerative grammartofunctionalandcognitive theories.[1]Many modern theories of grammar are likely to acknowledge numerous further types of grammatical relations (e.g.complement,specifier,predicative, etc.).
The role of grammatical relations in theories of grammar is greatest in dependency grammars, which tend to posit dozens of distinct grammatical relations. Every head-dependent dependency bears a grammatical function.
Grammatical categoriesare assigned to the words and phrases that have the relations. This includes traditionalparts of speechlikenouns,verbs,adjectives, etc., and features likenumberandtense.
The grammatical relations are exemplified in traditional grammar by the notions ofsubject,direct object, andindirect object:
The subjectFredperforms or is the source of the action. The direct objectthe bookis acted upon by the subject, and the indirect object Susan receives the direct object or otherwise benefits from the action. Traditional grammars often begin with these rather vague notions of the grammatical functions. When one begins to examine the distinctions more closely, it quickly becomes clear that these basic definitions do not provide much more than a loose orientation point.
What is indisputable about the grammatical relations is that they are relational. That is, subject and object can exist as such only by virtue of the context in which they appear. A noun such asFredor a noun phrase such asthe bookcannot qualify as subject and direct object, respectively, unless they appear in an environment, e.g. a clause, where they are related to each other and/or to an action or state. In this regard, the main verb in a clause is responsible for assigning grammatical relations to the clause "participants".
Most grammarians and students of language intuitively know in most cases what the subject and object in a given clause are. But when one attempts to produce theoretically satisfying definitions of these notions, the results are usually less clear and therefore controversial.[2]The contradictory impulses have resulted in a situation where most theories of grammar acknowledge the grammatical relations and rely on them heavily for describing phenomena of grammar but at the same time, avoid providing concrete definitions of them. Nevertheless, various principles can be acknowledged that attempts to define the grammatical relations are based on.
Thethematic relations(also known as thematic roles, and semantic roles, e.g.agent,patient, theme, goal) can provide semantic orientation for defining the grammatical relations. There is a tendency for subjects to be agents and objects to be patients or themes. However, the thematic relations cannot be substituted for the grammatical relations, nor vice versa. This point is evident with theactive-passive diathesisandergative verbs:
Margeis the agent in the first pair of sentences because she initiates and carries out the action of fixing, andthe coffee tableis the patient in both because it is acted upon in both sentences. In contrast, the subject and direct object are not consistent across the two sentences. The subject is the agentMargein the first sentence and the patientThe coffee tablein the second sentence. The direct object is the patientthe coffee tablein the first sentence, and there is no direct object in the second sentence. The situation is similar with the ergative verbsunk/sinkin the second pair of sentences. The noun phrasethe shipis the patient in both sentences, although it is the object in the first of the two and the subject in the second.
The grammatical relations belong to the level of surface syntax, whereas the thematic relations reside on a deeper semantic level. If, however, the correspondences across these levels are acknowledged, then the thematic relations can be seen as providing prototypical thematic traits for defining the grammatical relations.
Another prominent means used to define the syntactic relations is in terms of the syntactic configuration. The subject is defined as theverb argumentthat appears outside the canonicalfiniteverb phrase, whereas the object is taken to be the verb argument that appears inside the verb phrase.[3]This approach takes the configuration as primitive, whereby the grammatical relations are then derived from the configuration. This "configurational" understanding of the grammatical relations is associated with Chomskyanphrase structure grammars(Transformational grammar,Government and BindingandMinimalism).
The configurational approach is limited in what it can accomplish. It works best for the subject and object arguments. For other clause participants (e.g. attributes and modifiers of various sorts, prepositional arguments, etc.), it is less insightful, since it is often not clear how one might define these additional syntactic functions in terms of the configuration. Furthermore, even concerning the subject and object, it can run into difficulties, e.g.
The configurational approach has difficulty with such cases. The plural verbwereagrees with the post-verb noun phrasetwo lizards, which suggests thattwo lizardsis the subject. But sincetwo lizardsfollows the verb, one might view it as being located inside the verb phrase, which means it should count as the object. This second observation suggests that the expletivethereshould be granted subject status.
Many efforts to define the grammatical relations emphasize the roleinflectionalmorphology. In English, the subject can or must agree with the finite verb in person and number, and in languages that have morphologicalcase, the subject and object (and other verb arguments) are identified in terms of the case markers that they bear (e.g.nominative,accusative,dative,genitive,ergative,absolutive, etc.). Inflectional morphology may be a more reliable means for defining the grammatical relations than the configuration, but its utility can be very limited in many cases. For instance, inflectional morphology is not going to help in languages that lack inflectional morphology almost entirely such asMandarin, and even with English, inflectional morphology does not help much, since English largely lacks morphological case.
The difficulties facing attempts to define the grammatical relations in terms of thematic or configurational or morphological criteria can be overcome by an approach that posits prototypical traits. The prototypical subject has a cluster of thematic, configurational, and/or morphological traits, and the same is true of the prototypical object and other verb arguments. Across languages and across constructions within a language, there can be many cases where a given subject argument may not be a prototypical subject, but it has enough subject-like traits to be granted subject status. Similarly, a given object argument may not be prototypical in one way or another, but if it has enough object-like traits, then it can nevertheless receive the status of object.
This third strategy is tacitly preferred by most work in theoretical syntax. All those theories of syntax that avoid providing concrete definitions of the grammatical relations but yet reference them often are (perhaps unknowingly) pursuing an approach in terms of prototypical traits.[clarification needed]
In dependency grammar (DG) theories of syntax,[4] every head-dependent dependency bears a syntactic function.[5] The result is that an inventory consisting of dozens of distinct syntactic functions is needed for each language. For example, a determiner-noun dependency might be assumed to bear the DET (determiner) function, and an adjective-noun dependency is assumed to bear the ATTR (attribute) function. These functions are often produced as labels on the dependencies themselves in the syntactic tree, e.g.
The tree contains the following syntactic functions: ATTR (attribute), CCOMP (clause complement), DET (determiner), MOD (modifier), OBJ (object), SUBJ (subject), and VCOMP (verb complement). The actual inventories of syntactic functions will differ from the one suggested here in the number and types of functions that are assumed. In this regard, this tree is merely intended to be illustrative of the importance that the syntactic functions can take on in some theories of syntax and grammar.
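The labeled dependencies described above can also be represented directly as data. The following Python sketch is illustrative only (the sentence and the exact label inventory are assumptions, not taken from the article): each head-dependent pair is stored together with the syntactic function it bears.

```python
# Illustrative sketch: a dependency analysis of "The old dog chased the cat"
# represented as labeled head-dependent pairs, using function labels such as
# DET, ATTR, SUBJ and OBJ mentioned above.

# Each dependency is (head, dependent, syntactic_function).
dependencies = [
    ("chased", "dog", "SUBJ"),   # subject of the finite verb
    ("chased", "cat", "OBJ"),    # direct object
    ("dog", "The", "DET"),       # determiner-noun dependency
    ("dog", "old", "ATTR"),      # adjective-noun dependency
    ("cat", "the", "DET"),
]

def dependents_of(head):
    """Return the labeled dependents of a given head word."""
    return [(dep, func) for h, dep, func in dependencies if h == head]

if __name__ == "__main__":
    for head in ("chased", "dog", "cat"):
        print(head, "->", dependents_of(head))
```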
|
https://en.wikipedia.org/wiki/Grammatical_relation
|
Incryptography,black-bag cryptanalysisis aeuphemismfor the acquisition of cryptographic secrets viaburglary, or other covert means – rather than mathematical or technicalcryptanalytic attack. The term refers to the black bag of equipment that a burglar would carry or ablack bag operation.
As withrubber-hose cryptanalysis, this is technically not a form of cryptanalysis; the term is usedsardonically. However, given the free availability of very high strength cryptographic systems, this type of attack is a much more serious threat to most users than mathematical attacks because it is often much easier to attempt to circumvent cryptographic systems (e.g. steal the password) than to attack them directly.
Regardless of the technique used, such methods are intended to capture highly sensitive information e.g.cryptographic keys, key-rings,passwordsor unencrypted plaintext. The required information is usually copied without removing or destroying it, so capture often takes place without the victim realizing it has occurred.
In addition to burglary, the covert means might include the installation ofkeystroke logging[1]ortrojan horsesoftware or hardware installed on (or near to) target computers or ancillary devices. It is even possible tomonitor the electromagnetic emissions of computer displaysor keyboards[2][3]from a distance of 20 metres (or more), and thereby decode what has been typed. This could be done by surveillance technicians, or via some form ofbugconcealed somewhere in the room.[4]Although sophisticated technology is often used, black bag cryptanalysis can also be as simple as the process of copying a password which someone has unwisely written down on a piece of paper and left inside their desk drawer.
The case ofUnited States v. Scarfohighlighted one instance in which FBI agents using asneak and peek warrantplaced a keystroke logger on an alleged criminal gang leader.[5]
|
https://en.wikipedia.org/wiki/Black-bag_cryptanalysis
|
Tolerance analysisis the general term for activities related to the study of accumulated variation in mechanical parts and assemblies. Its methods may be used on other types of systems subject to accumulated variation, such as mechanical and electrical systems. Engineers analyze tolerances for the purpose of evaluatinggeometric dimensioning and tolerancing(GD&T). Methods include 2D tolerance stacks, 3DMonte Carlo simulations, and datum conversions.
Tolerance stackupsortolerance stacksare used to describe the problem-solving process inmechanical engineeringof calculating the effects of the accumulated variation that is allowed by specified dimensions andtolerances. Typically these dimensions and tolerances are specified on an engineering drawing. Arithmetic tolerance stackups use the worst-case maximum or minimum values of dimensions and tolerances to calculate the maximum and minimum distance (clearance or interference) between two features or parts. Statistical tolerance stackups evaluate the maximum and minimum values based on the absolute arithmetic calculation combined with some method for establishing likelihood of obtaining the maximum and minimum values, such as Root Sum Square (RSS) or Monte-Carlo methods.
In performing a tolerance analysis, there are two fundamentally different analysis tools for predicting stackup variation: worst-case analysis and statistical analysis.
Worst-case tolerance analysis is the traditional type of tolerance stackup calculation. The individual variables are placed at their tolerance limits in order to make the measurement as large or as small as possible. The worst-case model does not consider the distribution of the individual variables, but rather that those variables do not exceed their respective specified limits. This model predicts the maximum expected variation of the measurement. Designing to worst-case tolerance requirements guarantees 100 percent of the parts will assemble and function properly, regardless of the actual component variation. The major drawback is that the worst-case model often requires very tight individual component tolerances. The obvious result is expensive manufacturing and inspection processes and/or high scrap rates. Worst-case tolerancing is often required by the customer for critical mechanical interfaces and spare part replacement interfaces. When worst-case tolerancing is not a contract requirement, properly applied statistical tolerancing can ensure acceptable assembly yields with increased component tolerances and lower fabrication costs.
The statistical variation analysis model takes advantage of the principles of statistics to relax the component tolerances without sacrificing quality. Each component's variation is modeled as a statistical distribution and these distributions are summed to predict the distribution of the assembly measurement. Thus, statistical variation analysis predicts a distribution that describes the assembly variation, not the extreme values of that variation. This analysis model provides increased design flexibility by allowing the designer to design to any quality level, not just 100 percent.
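As a concrete illustration of the two analysis tools, the following Python sketch computes a one-dimensional gap stackup both ways: the worst-case (arithmetic) limits and a Root Sum Square (RSS) statistical estimate. The contributor dimensions and tolerances are hypothetical values chosen only for the example.

```python
import math

# Hypothetical one-dimensional stackup: each contributor is (sign, nominal, +/- tolerance),
# with the sign indicating whether the dimension adds to or subtracts from the gap.
contributors = [
    (+1, 25.0, 0.10),   # housing length
    (-1, 24.5, 0.08),   # shaft length
    (-1, 0.30, 0.05),   # washer thickness
]

nominal_gap = sum(sign * nom for sign, nom, tol in contributors)

# Worst-case: every tolerance is taken at its limit, so the bands add arithmetically.
worst_case = sum(tol for _, _, tol in contributors)

# RSS: tolerances combine statistically, assuming independent, centered distributions.
rss = math.sqrt(sum(tol ** 2 for _, _, tol in contributors))

print(f"nominal gap      : {nominal_gap:+.3f}")
print(f"worst-case range : {nominal_gap - worst_case:+.3f} .. {nominal_gap + worst_case:+.3f}")
print(f"RSS range        : {nominal_gap - rss:+.3f} .. {nominal_gap + rss:+.3f}")
```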
There are two chief methods for performing the statistical analysis. In one, the expected distributions are modified in accordance with the relevant geometric multipliers within tolerance limits and then combined using mathematical operations to provide a composite of the distributions. The geometric multipliers are generated by making small deltas to the nominal dimensions. The immediate value to this method is that the output is smooth, but it fails to account for geometric misalignment allowed for by the tolerances; if a size dimension is placed between two parallel surfaces, it is assumed the surfaces will remain parallel, even though the tolerance does not require this. Because the CAD engine performs the variation sensitivity analysis, there is no output available to drive secondary programs such as stress analysis.
In the other, the variations are simulated by allowing random changes to geometry, constrained by expected distributions within allowed tolerances; the resulting parts are assembled, and measurements of critical places are recorded as if in an actual manufacturing environment. The collected data is analyzed to find a fit with a known distribution, and the mean and standard deviation are derived from it. The immediate value of this method is that the output represents what is acceptable, even when that comes from imperfect geometry, and, because it uses recorded data to perform its analysis, it is possible to include actual factory inspection data in the analysis to see the effect of proposed changes on real data. In addition, because the engine for the analysis performs the variation internally, not based on CAD regeneration, it is possible to link the variation engine output to another program. For example, a rectangular bar may vary in width and thickness; the variation engine could output those numbers to a stress program, which passes back peak stress as a result, and the dimensional variation can then be used to determine likely stress variations. The disadvantage is that each run is unique, so there will be variation from analysis to analysis in the output distribution and mean, just as there would be from a factory.
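A minimal Monte Carlo sketch of this simulation-based approach is shown below, reusing the same hypothetical contributors as above and assuming each dimension varies normally with its tolerance band treated as a ±3σ spread (an assumption made for the example, not a rule from the text).

```python
import random
import statistics

# Same hypothetical contributors as before: (sign, nominal, +/- tolerance).
contributors = [
    (+1, 25.0, 0.10),
    (-1, 24.5, 0.08),
    (-1, 0.30, 0.05),
]

def simulate_gap():
    """Build one virtual assembly: draw each dimension from a normal
    distribution whose 3-sigma spread equals the tolerance band."""
    gap = 0.0
    for sign, nominal, tol in contributors:
        gap += sign * random.gauss(nominal, tol / 3.0)
    return gap

samples = [simulate_gap() for _ in range(100_000)]
print(f"simulated gap: mean={statistics.fmean(samples):.4f}, "
      f"sigma={statistics.stdev(samples):.4f}")
print(f"fraction of assemblies with interference (gap < 0): "
      f"{sum(g < 0 for g in samples) / len(samples):.4%}")
```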
While no official engineering standard covers the process or format of tolerance analysis and stackups, these are essential components of goodproduct design. Tolerance stackups should be used as part of the mechanical design process, both as a predictive and a problem-solving tool. The methods used to conduct a tolerance stackup depend somewhat upon the engineering dimensioning and tolerancing standards that are referenced in the engineering documentation, such asAmerican Society of Mechanical Engineers(ASME) Y14.5, ASME Y14.41, or the relevant ISO dimensioning and tolerancing standards. Understanding the tolerances, concepts and boundaries created by these standards is vital to performing accurate calculations.
Tolerance stackups serve engineers by:
The starting point for the tolerance loop; typically this is one side of an intended gap, after pushing the various parts in the assembly to one side or another of their loose range of motion. Vector loops define the assembly constraints that locate the parts of the assembly relative to each other. The vectors represent the dimensions that contribute to tolerance stackup in the assembly. The vectors are joined tip-to-tail, forming a chain, passing through each part in the assembly in succession. A vector loop must obey certain modeling rules as it passes through a part. It must:
Additional modeling rules for vector loops include:
The above rules will vary depending on whether 1D, 2D or 3D tolerance stackup method is used.
A safety factor is often included in designs because of concerns about:
|
https://en.wikipedia.org/wiki/Tolerance_stacks
|
Promova(in English: /prɔˈmɔvʌ/) is alanguage learning platformthat includes amobile app, website, personal and group lessons with tutors, and a conversation club.[1][2][3]Starting in 2024, language courses includeAI learningtools for conversational practice and pronunciation recognition.[4]
Promova was launched in 2019. Before that, the company was known as Ten Words; the app evolved into a comprehensive language-learning platform by 2022 and was rebranded as "Promova".[5]
In 2021, Andrew Skrypnyk, the company's CEO, was named in the 30 Under 30 list by Forbes Ukraine.[6][7][8][9]
In May 2023, Promova launched its Korean language course. The version was created by Elly Kim, a linguist with Korean roots living in Ukraine.[10]
On August 24, 2023, the Independence Day of Ukraine, Promova launched a Ukrainian language course, including 48 bite-sized lessons and flashcards with information about Ukrainian culture.[11][12][13]
In October 2023, Promova became the first language-learning platform to release aDyslexiamode, designed to make it easier for people with dyslexia to learn a new language.[14][15][16][17]The mode uses Dysfont, a specialized typeface created by dyslexic designer Martin Pysny.[18][19][20][21][22]
In November 2023, Promova provided all Ukrainians with three years of free access to its language courses as part of Ukraine's Future Perfect national program.[23]This initiative, launched by the Ukrainian government and the Ministry of Digital Transformation, supports President Zelensky's law recognizing English as the official language of international communication and aims to improve English proficiency among Ukrainians.[24][25][26][27][28][29]
In December 2023, Promova was recognized as one of the 25 most prominent Ukrainian startups by Forbes magazine.[30][31]
In April 2024, on National ASL Day in the US, Promova launched a free American Sign Language course. Part of the course covers communication in emergencies, such as asking for help, warning about fire, and expressing the need to call the police or a doctor.[32]
In June 2024, Promova won the 2024 EdTechX Awards in the Language learning category.[33][34]
As of 2025, Promova has 12 language learning courses:[35][36][37][38][39]
|
https://en.wikipedia.org/wiki/Promova
|
Inmathematics,random graphis the general term to refer toprobability distributionsovergraphs. Random graphs may be described simply by a probability distribution, or by arandom processwhich generates them.[1][2]The theory of random graphs lies at the intersection betweengraph theoryandprobability theory. From a mathematical perspective, random graphs are used to answer questions about the properties oftypicalgraphs. Its practical applications are found in all areas in whichcomplex networksneed to be modeled – many random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context,random graphrefers almost exclusively to theErdős–Rényi random graph model. In other contexts, any graph model may be referred to as arandom graph.
A random graph is obtained by starting with a set ofnisolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise.[3]Differentrandom graph modelsproduce differentprobability distributionson graphs. Most commonly studied is the one proposed byEdgar Gilbertbut often called theErdős–Rényi model, denotedG(n,p). In it, every possible edge occurs independently with probability 0 <p< 1. The probability of obtainingany one particularrandom graph withmedges ispm(1−p)N−m{\displaystyle p^{m}(1-p)^{N-m}}with the notationN=(n2){\displaystyle N={\tbinom {n}{2}}}.[4]
A closely related model, also called the Erdős–Rényi model and denotedG(n,M), assigns equal probability to all graphs with exactlyMedges. With 0 ≤M≤N,G(n,M) has(NM){\displaystyle {\tbinom {N}{M}}}elements and every element occurs with probability1/(NM){\displaystyle 1/{\tbinom {N}{M}}}.[3]TheG(n,M) model can be viewed as a snapshot at a particular time (M) of therandom graph processG~n{\displaystyle {\tilde {G}}_{n}}, astochastic processthat starts withnvertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges.
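A small Python sketch of the two models just described can make the distinction concrete: G(n, p) includes each of the N possible edges independently with probability p, while G(n, M) draws exactly M edges uniformly at random. The code is illustrative and not tied to any particular library.

```python
import random
from itertools import combinations

def sample_gnp(n, p):
    """Erdős–Rényi / Gilbert model G(n, p): each of the N = n(n-1)/2 possible
    edges is included independently with probability p."""
    return {e for e in combinations(range(n), 2) if random.random() < p}

def sample_gnm(n, m):
    """Erdős–Rényi model G(n, M): choose exactly M edges uniformly at random
    from the N possible edges."""
    return set(random.sample(list(combinations(range(n), 2)), m))

if __name__ == "__main__":
    g1 = sample_gnp(100, 0.05)
    g2 = sample_gnm(100, len(g1))
    print("G(n,p) edges:", len(g1), " G(n,M) edges:", len(g2))
```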
If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 <p< 1, then we get an objectGcalled aninfinite random graph. Except in the trivial cases whenpis 0 or 1, such aGalmost surelyhas the following property:
Given anyn+melementsa1,…,an,b1,…,bm∈V{\displaystyle a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{m}\in V}, there is a vertexcinVthat is adjacent to each ofa1,…,an{\displaystyle a_{1},\ldots ,a_{n}}and is not adjacent to any ofb1,…,bm{\displaystyle b_{1},\ldots ,b_{m}}.
It turns out that if the vertex set iscountablethen there is,up toisomorphism, only a single graph with this property, namely theRado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply therandom graph. However, the analogous result is not true for uncountable graphs, of which there are many (nonisomorphic) graphs satisfying the above property.
Another model, which generalizes Gilbert's random graph model, is therandom dot-product model. A random dot-product graph associates with each vertex areal vector. The probability of an edgeuvbetween any verticesuandvis some function of thedot productu•vof their respective vectors.
Thenetwork probability matrixmodels random graphs through edge probabilities, which represent the probabilitypi,j{\displaystyle p_{i,j}}that a given edgeei,j{\displaystyle e_{i,j}}exists for a specified time period. This model is extensible to directed and undirected; weighted and unweighted; and static or dynamic graphs structure.
ForM≃pN, whereNis the maximal number of edges possible, the two most widely used models,G(n,M) andG(n,p), are almost interchangeable.[5]
Random regular graphsform a special case, with properties that may differ from random graphs in general.
Once we have a model of random graphs, every function on graphs becomes a random variable. The study of this model is to determine whether, or at least to estimate the probability that, a property may occur.[4]
The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that theerror probabilitiestend to zero.[4]
The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value ofn{\displaystyle n}andp{\displaystyle p}what the probability is thatG(n,p){\displaystyle G(n,p)}isconnected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs—the values that various probabilities converge to asn{\displaystyle n}grows very large.Percolation theorycharacterizes the connectedness of random graphs, especially infinitely large ones.
Percolation is related to the robustness of the graph (also called the network). Given a random graph of n nodes with an average degree ⟨k⟩, suppose we randomly remove a fraction 1 − p of the nodes and leave only a fraction p. There exists a critical percolation threshold pc = 1/⟨k⟩ below which the network becomes fragmented, while above pc a giant connected component exists.[1][5][6][7][8]
Localized percolation refers to removing a node, its neighbors, next nearest neighbors, etc., until a fraction 1 − p of the nodes of the network is removed. It was shown that for a random graph with a Poisson distribution of degrees, pc = 1/⟨k⟩, exactly as for random removal.
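The percolation threshold can be checked empirically. The sketch below (plain Python, hypothetical parameters) builds G(n, p) with mean degree ⟨k⟩, keeps a random fraction of the nodes, and reports the relative size of the largest connected component; a giant component should emerge once the kept fraction exceeds roughly 1/⟨k⟩.

```python
import random
from itertools import combinations

def largest_component_fraction(n, avg_degree, keep_prob):
    """Generate G(n, p) with mean degree <k>, randomly keep a fraction
    `keep_prob` of the nodes, and return the size of the largest connected
    component divided by n."""
    p_edge = avg_degree / (n - 1)
    kept = [v for v in range(n) if random.random() < keep_prob]
    adj = {v: [] for v in kept}
    for u, v in combinations(kept, 2):
        if random.random() < p_edge:
            adj[u].append(v)
            adj[v].append(u)
    # Find the largest component with a simple depth-first search.
    seen, best = set(), 0
    for start in kept:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best / n

if __name__ == "__main__":
    k = 4.0           # average degree, so p_c = 1/<k> = 0.25
    for keep in (0.1, 0.2, 0.25, 0.3, 0.5, 0.8):
        print(keep, round(largest_component_fraction(2000, k, keep), 3))
```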
Random graphs are widely used in theprobabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via theSzemerédi regularity lemma, the existence of that property on almost all graphs.
Inrandom regular graphs,G(n,r−reg){\displaystyle G(n,r-reg)}are the set ofr{\displaystyle r}-regular graphs withr=r(n){\displaystyle r=r(n)}such thatn{\displaystyle n}andm{\displaystyle m}are the natural numbers,3≤r<n{\displaystyle 3\leq r<n}, andrn=2m{\displaystyle rn=2m}is even.[3]
The degree sequence of a graphG{\displaystyle G}inGn{\displaystyle G^{n}}depends only on the number of edges in the sets[3]
If the number of edges M in a random graph GM is large enough to ensure that almost every GM has minimum degree at least 1, then almost every GM is connected and, if n is even, almost every GM has a perfect matching. In particular, the moment the last isolated vertex vanishes in almost every random graph, the graph becomes connected.[3]
Almost every graph process on an even number of vertices with the edge raising the minimum degree to 1, or a random graph with slightly more than (n/4)log(n) edges and with probability close to 1, ensures that the graph has a complete matching, with the exception of at most one vertex.
For some constantc{\displaystyle c}, almost every labeled graph withn{\displaystyle n}vertices and at leastcnlog(n){\displaystyle cn\log(n)}edges isHamiltonian. With the probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian.
Properties of random graph may change or remain invariant under graph transformations.Mashaghi A.et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient.[9]
Given a random graphGof ordernwith the vertexV(G) = {1, ...,n}, by thegreedy algorithmon the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1, vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2, etc.).[3]The number of proper colorings of random graphs given a number ofqcolors, called itschromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parametersnand the number of edgesmor the connection probabilityphas been studied empirically using an algorithm based on symbolic pattern matching.[10]
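A sketch of the greedy coloring procedure just described, applied to a sampled G(n, p) graph, is given below; the parameters are illustrative.

```python
import random
from itertools import combinations

def greedy_coloring(n, p, seed=0):
    """Color the vertices of a random graph G(n, p) with the greedy algorithm
    described above: visit vertices in order and give each the smallest color
    not already used by one of its earlier neighbors."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    color = {}
    for v in range(n):                 # vertex 1 first, then vertex 2, ...
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

if __name__ == "__main__":
    print("colors used by greedy on G(1000, 0.5):", greedy_coloring(1000, 0.5))
```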
Arandom treeis atreeorarborescencethat is formed by astochastic process. In a large range of random graphs of ordernand sizeM(n) the distribution of the number of tree components of orderkis asymptoticallyPoisson. Types of random trees includeuniform spanning tree,random minimum spanning tree,random binary tree,treap,rapidly exploring random tree,Brownian tree, andrandom forest.
Consider a given random graph model defined on the probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}and letP(G):Ω→Rm{\displaystyle {\mathcal {P}}(G):\Omega \rightarrow R^{m}}be a real valued function which assigns to each graph inΩ{\displaystyle \Omega }a vector ofmproperties.
For a fixedp∈Rm{\displaystyle \mathbf {p} \in R^{m}},conditional random graphsare models in which the probability measureP{\displaystyle P}assigns zero probability to all graphs such that 'P(G)≠p{\displaystyle {\mathcal {P}}(G)\neq \mathbf {p} }.
Special cases areconditionally uniform random graphs, whereP{\displaystyle P}assigns equal probability to all the graphs having specified properties. They can be seen as a generalization of theErdős–Rényi modelG(n,M), when the conditioning information is not necessarily the number of edgesM, but whatever other arbitrary graph propertyP(G){\displaystyle {\mathcal {P}}(G)}. In this case very few analytical results are available and simulation is required to obtain empirical distributions of average properties.
The earliest use of a random graph model was by Helen Hall Jennings and Jacob Moreno in 1938, where a "chance sociogram" (a directed Erdős–Rényi model) was considered in comparing the fraction of reciprocated links in their network data with the random model.[11] Another use, under the name "random net", was by Ray Solomonoff and Anatol Rapoport in 1951, using a model of directed graphs with fixed out-degree and randomly chosen attachments to other vertices.[12]
TheErdős–Rényi modelof random graphs was first defined byPaul ErdősandAlfréd Rényiin their 1959 paper "On Random Graphs"[8]and independently by Gilbert in his paper "Random graphs".[6]
|
https://en.wikipedia.org/wiki/Random_graph
|
TheLittle Man Computer(LMC) is an instructionalmodelof acomputer, created by Dr.Stuart Madnickin 1965.[1]The LMC is generally used to teach students, because it models a simplevon Neumann architecturecomputer—which has all of the basic features of a modern computer. It can be programmed in machine code (albeit in decimal rather than binary) or assembly code.[2][3][4]
The LMC model is based on the concept of a little man shut in a closed mail room (analogous to a computer in this scenario). At one end of the room, there are 100 mailboxes (memory), numbered 0 to 99, that can each contain a 3 digit instruction or data (ranging from 000 to 999). Furthermore, there are two mailboxes at the other end labeledINBOXandOUTBOXwhich are used for receiving and outputting data. In the center of the room, there is a work area containing a simple two function (addition and subtraction) calculator known as theAccumulatorand a resettable counter known as the Program Counter. The Program Counter holds the address of the next instruction the Little Man will carry out. This Program Counter is normally incremented by 1 after each instruction is executed, allowing the Little Man to work through a program sequentially.Branchinstructions allow iteration (loops) andconditionalprogramming structures to be incorporated into a program. The latter is achieved by setting the Program Counter to a non-sequential memory address if a particular condition is met (typically the value stored in the accumulator being zero or positive).
As specified by thevon Neumann architecture, any mailbox (signifying a unique memory location) can contain either an instruction or data. Care therefore needs to be taken to stop the Program Counter from reaching a memory address containing data - or the Little Man will attempt to treat it as an instruction. One can take advantage of this by writing instructions into mailboxes that are meant to be interpreted as code, to create self-modifying code. To use the LMC, the user loads data into the mailboxes and then signals the Little Man to begin execution, starting with the instruction stored at memory address zero. Resetting the Program Counter to zero effectively restarts the program, albeit in a potentially different state.
To execute a program, the little man performs these steps:
While the LMC does reflect the actual workings ofbinaryprocessors, the simplicity ofdecimalnumbers was chosen to minimize the complexity for students who may not be comfortable working in binary/hexadecimal.
Some LMC simulators are programmed directly using 3-digit numeric instructions and some use 3-letter mnemonic codes and labels. In either case, the instruction set is deliberately very limited (typically about ten instructions) to simplify understanding. If the LMC uses mnemonic codes and labels then these are converted into 3-digit numeric instructions when the program is assembled.
The table below shows a typical numeric instruction set and the equivalent mnemonic codes.
This program (instruction901to instruction000) is written just using numeric codes. The program takes two numbers as input and outputs the difference. Notice that execution starts at Mailbox 00 and finishes at Mailbox 07. The disadvantages of programming the LMC using numeric instruction codes are discussed below.
One step of the program loads the first value back into the calculator (erasing whatever was there), so that the second value can then be subtracted from it.
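The original numeric listing appears as a table in the source article and is not reproduced here, so the sketch below uses a plausible reconstruction of the two-input subtraction program (INP, STA, INP, STA, LDA, SUB, OUT, HLT in mailboxes 00-07, data in 08-09) together with a minimal Python interpreter for the mail-room model. Both the mailbox layout and the interpreter are illustrative, not copied from the article.

```python
def run_lmc(mailboxes, inputs):
    """Minimal Little Man Computer interpreter: 3-digit decimal instructions,
    opcode = hundreds digit, operand = last two digits."""
    mem = mailboxes + [0] * (100 - len(mailboxes))
    acc, pc, inbox, outbox = 0, 0, list(inputs), []
    while True:
        instr = mem[pc]
        pc += 1
        op, addr = divmod(instr, 100)
        if instr == 0:            # 000 HLT/COB - stop
            break
        elif op == 1:             # 1xx ADD
            acc += mem[addr]
        elif op == 2:             # 2xx SUB
            acc -= mem[addr]
        elif op == 3:             # 3xx STA - store accumulator
            mem[addr] = acc
        elif op == 5:             # 5xx LDA - load into accumulator
            acc = mem[addr]
        elif op == 6:             # 6xx BRA - branch always
            pc = addr
        elif op == 7 and acc == 0:  # 7xx BRZ - branch if accumulator is zero
            pc = addr
        elif op == 8 and acc >= 0:  # 8xx BRP - branch if zero or positive
            pc = addr
        elif instr == 901:        # INP - read from the inbox
            acc = inbox.pop(0)
        elif instr == 902:        # OUT - write to the outbox
            outbox.append(acc)
    return outbox

# Reconstructed subtraction program (mailboxes 00-07 hold code, 08-09 hold data):
# INP, STA 08, INP, STA 09, LDA 08, SUB 09, OUT, HLT
program = [901, 308, 901, 309, 508, 209, 902, 0]
print(run_lmc(program, [7, 3]))   # expected output: [4]
```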
Assembly language is a low-level programming language that uses mnemonics and labels instead of numeric instruction codes. Although the LMC only uses a limited set of mnemonics, the convenience of using amnemonicfor each instruction is made apparent from the assembly language of the same program shown below - the programmer is no longer required to memorize a set of anonymous numeric codes and can now program with a set of more memorable mnemonic codes. If the mnemonic is an instruction that involves a memory address (either a branch instruction or loading/saving data) then a label is used to name the memory address.
Without labels the programmer is required to manually calculate mailbox (memory) addresses. In thenumeric code example, if a new instruction was to be inserted before the final HLT instruction then that HLT instruction would move from address 07 to address 08 (address labelling starts at address location 00). Suppose the user entered 600 as the first input. The instruction 308 would mean that this value would be stored at address location 08 and overwrite the 000 (HLT) instruction. Since 600 means "branch to mailbox address 00" the program, instead of halting, would get stuck in an endless loop.
To work around this difficulty, most assembly languages (including the LMC) combine the mnemonics withlabels. A label is simply a word that is used to either name a memory address where an instruction or data is stored, or to refer to that address in an instruction.
When a program is assembled:
In the assembly language example, which uses mnemonics and labels, if a new instruction were inserted before the final HLT instruction, then the address location labelled FIRST would now be at memory location 09 rather than 08, and the STA FIRST instruction would be converted to 309 (STA 09) rather than 308 (STA 08) when the program was assembled.
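The two steps of assembly described above (recording each label's mailbox address, then translating mnemonics and resolving labels) can be sketched as a small two-pass assembler. The labelled program below is the same reconstructed subtraction example used earlier, so the listing is illustrative rather than a copy of the article's table.

```python
OPCODES = {"ADD": 100, "SUB": 200, "STA": 300, "LDA": 500,
           "BRA": 600, "BRZ": 700, "BRP": 800,
           "INP": 901, "OUT": 902, "HLT": 0}

def assemble(lines):
    """Two-pass assembler sketch for LMC mnemonics with optional labels.
    Each line is (label, mnemonic, operand); label/operand may be None."""
    # Pass 1: map every label to the mailbox (line) address where it appears.
    symbols = {label: addr for addr, (label, _, _) in enumerate(lines) if label}
    # Pass 2: translate mnemonics, resolving label operands to addresses.
    code = []
    for label, mnemonic, operand in lines:
        if mnemonic == "DAT":                    # data location, default 0
            code.append(int(operand) if operand else 0)
            continue
        value = OPCODES[mnemonic]
        if operand is not None:
            value += symbols[operand]
        code.append(value)
    return code

# A labelled version of the reconstructed subtraction program.
source = [
    (None,     "INP", None),
    (None,     "STA", "FIRST"),
    (None,     "INP", None),
    (None,     "STA", "SECOND"),
    (None,     "LDA", "FIRST"),
    (None,     "SUB", "SECOND"),
    (None,     "OUT", None),
    (None,     "HLT", None),
    ("FIRST",  "DAT", None),
    ("SECOND", "DAT", None),
]
print(assemble(source))   # -> [901, 308, 901, 309, 508, 209, 902, 0, 0, 0]
```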
Labels are therefore used to:
The program below will take a user input, and count down to zero.
The program below will take a user input, square it, output the answer and then repeat. Entering a zero will end the program.(Note: an input that results in a value greater than 999 will have undefined behaviour due to the 3 digit number limit of the LMC).
Note: If there is no data after a DAT statement then the default value 0 is stored in the memory address.
In the example above, [BRZ ENDLOOP] depends on undefined behaviour, as COUNT-VALUE can be negative, after which the ACCUMULATOR value is undefined, resulting in BRZ either branching or not (ACCUMULATOR may be zero, or wrapped around). To make the code compatible with the specification, replace:
with the following version, which evaluates VALUE-COUNT instead of COUNT-VALUE, making sure the accumulator never underflows:
Another example is aquine, printing its own machine code (printing source is impossible because letters cannot be output):
This quine works usingself-modifying code. Position 0 is incremented by one in each iteration, outputting that line's code, until the code it is outputting is 1, at which point it branches to the ONE position. The value at the ONE position has 0 as opcode, so it is interpreted as a HALT/COB instruction.
|
https://en.wikipedia.org/wiki/Little_man_computer
|
Apersonal information manager(often referred to as aPIM toolor, more simply, aPIM) is a type of application software that functions as a personal organizer. The acronymPIMis now, more commonly, used in reference to personal information management as a field of study.[1]As an information management tool, a PIM tool's purpose is to facilitate the recording, tracking, and management of certain types of "personal information".
Personal information can include any of the following:[2]
Some PIM/PDM software products are capable of synchronizing data over a computer network, including mobile ad hoc networks (MANETs). This feature typically stores the personal data on cloud drives, allowing for continuous concurrent data updates/access on the user's computers, including desktop computers, laptop computers, and mobile devices such as personal digital assistants or smartphones.[3]
Prior to the introduction of the term "Personal digital assistant" ("PDA") by Apple in 1992, handheld personal organizers such as thePsion Organiserand theSharp Wizardwere also referred to as "PIMs".[4][5]
The time management and communications functions of PIMs largely migrated from PDAs to smartphones, with Apple, RIM (Research In Motion, nowBlackBerry), and others all manufacturing smartphones that offer most of the functions of earlier PDAs.
|
https://en.wikipedia.org/wiki/Personal_information_manager
|
Inmathematics– specifically, instochastic analysis– anItô diffusionis a solution to a specific type ofstochastic differential equation. That equation is similar to theLangevin equationused inphysicsto describe theBrownian motionof a particle subjected to a potential in aviscousfluid. Itô diffusions are named after theJapanesemathematicianKiyosi Itô.
A (time-homogeneous) Itô diffusion in n-dimensional Euclidean space Rn is a process X : [0, +∞) × Ω → Rn defined on a probability space (Ω, Σ, P) and satisfying a stochastic differential equation of the form dXt = b(Xt) dt + σ(Xt) dBt,
where B is an m-dimensional Brownian motion and b : Rn → Rn and σ : Rn → Rn×m satisfy the usual Lipschitz continuity condition |b(x) − b(y)| + |σ(x) − σ(y)| ≤ C|x − y|
for some constant C and all x, y ∈ Rn; this condition ensures the existence of a unique strong solution X to the stochastic differential equation given above. The vector field b is known as the drift coefficient of X; the matrix field σ is known as the diffusion coefficient of X. It is important to note that b and σ do not depend upon time; if they were to depend upon time, X would be referred to only as an Itô process, not a diffusion. Itô diffusions have a number of nice properties, which include sample and Feller continuity, the (strong) Markov property, and the martingale property discussed below.
In particular, an Itô diffusion is a continuous, strongly Markovian process such that the domain of its characteristic operator includes alltwice-continuously differentiablefunctions, so it is adiffusionin the sense defined by Dynkin (1965).
An Itô diffusionXis asample continuous process, i.e., foralmost allrealisationsBt(ω) of the noise,Xt(ω) is acontinuous functionof the time parameter,t. More accurately, there is a "continuous version" ofX, a continuous processYso that
This follows from the standard existence and uniqueness theory for strong solutions of stochastic differential equations.
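Although the article does not discuss numerical methods, sample paths of such an equation are commonly approximated with the Euler–Maruyama scheme. The Python sketch below is illustrative only: the one-dimensional drift b(x) = −x and diffusion σ(x) = 1 are hypothetical coefficients chosen to satisfy the Lipschitz condition above.

```python
import math
import random

def euler_maruyama(b, sigma, x0, t_max, n_steps, seed=None):
    """Approximate one sample path of dX_t = b(X_t) dt + sigma(X_t) dB_t
    on [0, t_max] using the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + b(x) * dt + sigma(x) * dB
        path.append(x)
    return path

drift     = lambda x: -x        # hypothetical b(x) = -x
diffusion = lambda x: 1.0       # hypothetical sigma(x) = 1

path = euler_maruyama(drift, diffusion, x0=2.0, t_max=5.0, n_steps=5000, seed=1)
print("X_0 =", path[0], " X_T ~", round(path[-1], 3))
```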
In addition to being (sample) continuous, an Itô diffusionXsatisfies the stronger requirement to be aFeller-continuous process.
For a pointx∈Rn, letPxdenote the law ofXgiven initial datumX0=x, and letExdenoteexpectationwith respect toPx.
Let f : Rn → R be a Borel-measurable function that is bounded below and define, for fixed t ≥ 0, u : Rn → R by u(x) = Ex[f(Xt)].
The behaviour of the functionuabove when the timetis varied is addressed by the Kolmogorov backward equation, the Fokker–Planck equation, etc. (See below.)
An Itô diffusionXhas the important property of beingMarkovian: the future behaviour ofX, given what has happened up to some timet, is the same as if the process had been started at the positionXtat time 0. The precise mathematical formulation of this statement requires some additional notation:
Let Σ∗denote thenaturalfiltrationof (Ω, Σ) generated by the Brownian motionB: fort≥ 0,
It is easy to show thatXisadaptedto Σ∗(i.e. eachXtis Σt-measurable), so the natural filtrationF∗=F∗Xof (Ω, Σ) generated byXhasFt⊆ Σtfor eacht≥ 0.
Letf:Rn→Rbe a bounded, Borel-measurable function. Then, for alltandh≥ 0, theconditional expectationconditioned on theσ-algebraΣtand the expectation of the process "restarted" fromXtsatisfy theMarkov property:
In fact,Xis also a Markov process with respect to the filtrationF∗, as the following shows:
The strong Markov property is a generalization of the Markov property above in whichtis replaced by a suitable random time τ : Ω → [0, +∞] known as astopping time. So, for example, rather than "restarting" the processXat timet= 1, one could "restart" wheneverXfirst reaches some specified pointpofRn.
As before, letf:Rn→Rbe a bounded, Borel-measurable function. Let τ be a stopping time with respect to the filtration Σ∗with τ < +∞almost surely. Then, for allh≥ 0,
Associated to each Itô diffusion, there is a second-order partial differential operator known as the generator of the diffusion. The generator is very useful in many applications and encodes a great deal of information about the process X. Formally, the infinitesimal generator of an Itô diffusion X is the operator A, which is defined to act on suitable functions f : Rn → R by Af(x) = limt↓0 (Ex[f(Xt)] − f(x))/t.
The set of all functions f for which this limit exists at a point x is denoted DA(x), while DA denotes the set of all f for which the limit exists for all x ∈ Rn. One can show that any compactly-supported C2 (twice differentiable with continuous second derivative) function f lies in DA and that Af(x) = Σi bi(x) ∂f/∂xi(x) + (1/2) Σi,j (σ(x)σ(x)⊤)i,j ∂2f/∂xi∂xj(x),
or, in terms of thegradientandscalarandFrobeniusinner products,
The generatorAfor standardn-dimensional Brownian motionB, which satisfies the stochastic differential equation dXt= dBt, is given by
i.e.,A= Δ/2, where Δ denotes theLaplace operator.
The generator is used in the formulation of Kolmogorov's backward equation. Intuitively, this equation tells us how the expected value of any suitably smooth statistic ofXevolves in time: it must solve a certainpartial differential equationin which timetand the initial positionxare the independent variables. More precisely, iff∈C2(Rn;R) has compact support andu: [0, +∞) ×Rn→Ris defined by
thenu(t,x) is differentiable with respect tot,u(t, ·) ∈DAfor allt, andusatisfies the followingpartial differential equation, known asKolmogorov's backward equation:
The Fokker–Planck equation (also known asKolmogorov's forward equation) is in some sense the "adjoint" to the backward equation, and tells us how theprobability density functionsofXtevolve with timet. Let ρ(t, ·) be the density ofXtwith respect toLebesgue measureonRn, i.e., for any Borel-measurable setS⊆Rn,
LetA∗denote theHermitian adjointofA(with respect to theL2inner product). Then, given that the initial positionX0has a prescribed density ρ0, ρ(t,x) is differentiable with respect tot, ρ(t, ·) ∈DA*for allt, and ρ satisfies the following partial differential equation, known as theFokker–Planck equation:
The Feynman–Kac formula is a useful generalization of Kolmogorov's backward equation. Again,fis inC2(Rn;R) and has compact support, andq:Rn→Ris taken to be acontinuous functionthat is bounded below. Define a functionv: [0, +∞) ×Rn→Rby
TheFeynman–Kac formulastates thatvsatisfies the partial differential equation
Moreover, ifw: [0, +∞) ×Rn→RisC1in time,C2in space, bounded onK×Rnfor all compactK, and satisfies the above partial differential equation, thenwmust bevas defined above.
Kolmogorov's backward equation is the special case of the Feynman–Kac formula in whichq(x) = 0 for allx∈Rn.
Thecharacteristic operatorof an Itô diffusionXis a partial differential operator closely related to the generator, but somewhat more general. It is more suited to certain problems, for example in the solution of theDirichlet problem.
Thecharacteristic operatorA{\displaystyle {\mathcal {A}}}of an Itô diffusionXis defined by
where the setsUform a sequence ofopen setsUkthat decrease to the pointxin the sense that
and
is the first exit time fromUforX.DA{\displaystyle D_{\mathcal {A}}}denotes the set of allffor which this limit exists for allx∈Rnand all sequences {Uk}. IfEx[τU] = +∞ for all open setsUcontainingx, define
The characteristic operator and infinitesimal generator are very closely related, and even agree for a large class of functions. One can show that
and that
In particular, the generator and characteristic operator agree for allC2functionsf, in which case
Above, the generator (and hence characteristic operator) of Brownian motion onRnwas calculated to be1/2Δ, where Δ denotes the Laplace operator. The characteristic operator is useful in defining Brownian motion on anm-dimensionalRiemannian manifold(M,g): aBrownian motion onMis defined to be a diffusion onMwhose characteristic operatorA{\displaystyle {\mathcal {A}}}in local coordinatesxi, 1 ≤i≤m, is given by1/2ΔLB, where ΔLBis theLaplace-Beltrami operatorgiven in local coordinates by
where [gij] = [gij]−1in the sense ofthe inverse of a square matrix.
In general, the generatorAof an Itô diffusionXis not abounded operator. However, if a positive multiple of the identity operatorIis subtracted fromAthen the resulting operator is invertible. The inverse of this operator can be expressed in terms ofXitself using theresolventoperator.
For α > 0, theresolvent operatorRα, acting on bounded, continuous functionsg:Rn→R, is defined by
It can be shown, using the Feller continuity of the diffusionX, thatRαgis itself a bounded, continuous function. Also,Rαand αI−Aare mutually inverse operators:
Sometimes it is necessary to find aninvariant measurefor an Itô diffusionX, i.e. a measure onRnthat does not change under the "flow" ofX: i.e., ifX0is distributed according to such an invariant measure μ∞, thenXtis also distributed according to μ∞for anyt≥ 0. The Fokker–Planck equation offers a way to find such a measure, at least if it has a probability density function ρ∞: ifX0is indeed distributed according to an invariant measure μ∞with density ρ∞, then the density ρ(t, ·) ofXtdoes not change witht, so ρ(t, ·) = ρ∞, and so ρ∞must solve the (time-independent) partial differential equation
This illustrates one of the connections between stochastic analysis and the study of partial differential equations. Conversely, a given second-order linear partial differential equation of the form Λf= 0 may be hard to solve directly, but if Λ =A∗for some Itô diffusionX, and an invariant measure forXis easy to compute, then that measure's density provides a solution to the partial differential equation.
An invariant measure is comparatively easy to compute when the process X is a stochastic gradient flow of the form dXt = −∇Ψ(Xt) dt + √(2/β) dBt,
where β > 0 plays the role of an inverse temperature and Ψ : Rn → R is a scalar potential satisfying suitable smoothness and growth conditions. In this case, the Fokker–Planck equation has a unique stationary solution ρ∞ (i.e. X has a unique invariant measure μ∞ with density ρ∞) and it is given by the Gibbs distribution ρ∞(x) = Z−1 exp(−βΨ(x)),
where the partition function Z is given by Z = ∫Rn exp(−βΨ(x)) dx.
Moreover, the density ρ∞satisfies avariational principle: it minimizes over all probability densities ρ onRnthefree energyfunctionalFgiven by
where
plays the role of an energy functional, and
is the negative of the Gibbs-Boltzmann entropy functional. Even when the potential Ψ is not well-behaved enough for the partition functionZand the Gibbs measure μ∞to be defined, the free energyF[ρ(t, ·)] still makes sense for each timet≥ 0, provided that the initial condition hasF[ρ(0, ·)] < +∞. The free energy functionalFis, in fact, aLyapunov functionfor the Fokker–Planck equation:F[ρ(t, ·)] must decrease astincreases. Thus,Fis anH-functionfor theX-dynamics.
Consider the Ornstein–Uhlenbeck process X on Rn satisfying the stochastic differential equation dXt = −κ(Xt − m) dt + √(2/β) dBt,
where m ∈ Rn and β, κ > 0 are given constants. In this case, the potential Ψ is given by Ψ(x) = (κ/2)|x − m|2,
and so the invariant measure for X is a Gaussian measure with density ρ∞ given by ρ∞(x) = (βκ/(2π))n/2 exp(−(βκ/2)|x − m|2).
Heuristically, for larget,Xtis approximatelynormally distributedwith meanmand variance (βκ)−1. The expression for the variance may be interpreted as follows: large values of κ mean that the potential well Ψ has "very steep sides", soXtis unlikely to move far from the minimum of Ψ atm; similarly, large values of β mean that the system is quite "cold" with little noise, so, again,Xtis unlikely to move far away fromm.
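A quick numerical check of this picture (not part of the article): simulate the one-dimensional version of the process with the Euler–Maruyama scheme and compare the long-run sample mean and variance with m and (βκ)−1. The parameter values below are hypothetical.

```python
import math
import random
import statistics

# Hypothetical parameters for the one-dimensional Ornstein-Uhlenbeck process
# dX_t = -kappa * (X_t - m) dt + sqrt(2 / beta) dB_t.
m, kappa, beta = 1.0, 2.0, 4.0
dt, n_steps, burn_in = 1e-3, 500_000, 50_000

rng = random.Random(0)
x, samples = 0.0, []
for step in range(n_steps):
    dB = rng.gauss(0.0, math.sqrt(dt))
    x += -kappa * (x - m) * dt + math.sqrt(2.0 / beta) * dB
    if step >= burn_in:
        samples.append(x)

print("sample mean     :", round(statistics.fmean(samples), 3),
      "(expected m =", m, ")")
print("sample variance :", round(statistics.pvariance(samples), 4),
      "(expected 1/(beta*kappa) =", round(1.0 / (beta * kappa), 4), ")")
```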
In general, an Itô diffusionXis not amartingale. However, for anyf∈C2(Rn;R) with compact support, the processM: [0, +∞) × Ω →Rdefined by
whereAis the generator ofX, is a martingale with respect to the natural filtrationF∗of (Ω, Σ) byX. The proof is quite simple: it follows from the usual expression of the action of the generator on smooth enough functionsfandItô's lemma(the stochasticchain rule) that
Since Itô integrals are martingales with respect to the natural filtration Σ∗of (Ω, Σ) byB, fort>s,
Hence, as required,
sinceMsisFs-measurable.
Dynkin's formula, named after Eugene Dynkin, gives the expected value of any suitably smooth statistic of an Itô diffusion X (with generator A) at a stopping time. Precisely, if τ is a stopping time with Ex[τ] < +∞, and f : Rn → R is C2 with compact support, then Ex[f(Xτ)] = f(x) + Ex[∫0τ Af(Xs) ds].
Dynkin's formula can be used to calculate many useful statistics of stopping times. For example, canonical Brownian motion on the real line starting at 0 exits the interval (−R, +R) at a random time τR with expected value E0[τR] = R2.
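This expected exit time can be checked with a crude Monte Carlo sketch (illustrative only; the finite time step introduces a small discretization bias).

```python
import math
import random

def mean_exit_time(R=1.0, dt=1e-3, trials=2000, seed=0):
    """Estimate E[tau_R], the expected first exit time of standard Brownian
    motion started at 0 from the interval (-R, R)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while -R < x < R:
            x += rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
            t += dt
        total += t
    return total / trials

R = 1.0
print("estimated E[tau_R]:", round(mean_exit_time(R), 3),
      " expected value R^2 =", R * R)
```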
Dynkin's formula provides information about the behaviour ofXat a fairly general stopping time. For more information on the distribution ofXat ahitting time, one can study theharmonic measureof the process.
In many situations, it is sufficient to know when an Itô diffusionXwill first leave ameasurable setH⊆Rn. That is, one wishes to study thefirst exit time
Sometimes, however, one also wishes to know the distribution of the points at whichXexits the set. For example, canonical Brownian motionBon the real line starting at 0 exits theinterval(−1, 1) at −1 with probability1/2and at 1 with probability1/2, soBτ(−1, 1)isuniformly distributedon the set {−1, 1}.
In general, ifGiscompactly embeddedwithinRn, then theharmonic measure(orhitting distribution) ofXon theboundary∂GofGis the measure μGxdefined by
forx∈GandF⊆ ∂G.
Returning to the earlier example of Brownian motion, one can show that ifBis a Brownian motion inRnstarting atx∈RnandD⊂Rnis anopen ballcentred onx, then the harmonic measure ofBon ∂Disinvariantunder allrotationsofDaboutxand coincides with the normalizedsurface measureon ∂D.
The harmonic measure satisfies an interestingmean value property: iff:Rn→Ris any bounded, Borel-measurable function and φ is given by
then, for all Borel setsG⊂⊂Hand allx∈G,
The mean value property is very useful in thesolution of partial differential equations using stochastic processes.
LetAbe a partial differential operator on a domainD⊆Rnand letXbe an Itô diffusion withAas its generator. Intuitively, the Green measure of a Borel setHis the expected length of time thatXstays inHbefore it leaves the domainD. That is, theGreen measureofXwith respect toDatx, denotedG(x, ·), is defined for Borel setsH⊆Rnby
or for bounded, continuous functionsf:D→Rby
The name "Green measure" comes from the fact that ifXis Brownian motion, then
whereG(x,y) isGreen's functionfor the operator1/2Δ on the domainD.
Suppose thatEx[τD] < +∞ for allx∈D. Then theGreen formulaholds for allf∈C2(Rn;R) with compact support:
In particular, if the support offiscompactly embeddedinD,
|
https://en.wikipedia.org/wiki/It%C3%B4_diffusion
|
Asupercomputeris a type ofcomputerwith a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured infloating-pointoperations per second (FLOPS) instead ofmillion instructions per second(MIPS). Since 2022, supercomputers have existed which can perform over 1018FLOPS, so calledexascale supercomputers.[3]For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (1011) to tens of teraFLOPS (1013).[4][5]Since November 2017, all of theworld's fastest 500 supercomputersrun onLinux-based operating systems.[6]Additional research is being conducted in the United States, theEuropean Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7]
Supercomputers play an important role in the field ofcomputational science, and are used for a wide range of computationally intensive tasks in various fields, includingquantum mechanics,weather forecasting,climate research,oil and gas exploration,molecular modeling(computing the structures and properties of chemical compounds, biologicalmacromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraftaerodynamics, the detonation ofnuclear weapons, andnuclear fusion). They have been essential in the field ofcryptanalysis.[8]
Supercomputers were introduced in the 1960s, and for several decades the fastest was made bySeymour CrayatControl Data Corporation(CDC),Cray Researchand subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts ofparallelismwere added, with one to fourprocessorsbeing typical. In the 1970s,vector processorsoperating on large arrays of data came to dominate. A notable example is the highly successfulCray-1of 1976. Vector computers remained the dominant design into the 1990s. From then until today,massively parallelsupercomputers with tens of thousands of off-the-shelf processors became the norm.[9][10]
The U.S. has long been a leader in the supercomputer field, initially through Cray's nearly uninterrupted dominance, and later through a variety of technology companies. Japan made significant advancements in the field during the 1980s and 1990s, while China has become increasingly active in supercomputing in recent years. As of November 2024, Lawrence Livermore National Laboratory's El Capitan is the world's fastest supercomputer.[11] The US has five of the top 10; Italy has two, and Japan, Finland, and Switzerland have one each.[12] In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.[13]
In 1960,UNIVACbuilt theLivermore Atomic Research Computer(LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speeddrum memory, rather than the newly emergingdisk drivetechnology.[14]Also, among the first supercomputers was theIBM 7030 Stretch. The IBM 7030 was built by IBM for theLos Alamos National Laboratory, which then in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 usedtransistors, magnetic core memory,pipelinedinstructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for theIBM 7950 Harvest, a supercomputer built forcryptanalysis.[15]
The third pioneering supercomputer project in the early 1960s was theAtlasat theUniversity of Manchester, built by a team led byTom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. TheAtlas Supervisorswappeddata in the form of pages between the magnetic core and the drum. The Atlas operating system also introducedtime-sharingto supercomputing, so that more than one program could be executed on the supercomputer at any one time.[16]Atlas was a joint venture betweenFerrantiandManchester Universityand was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[17]
TheCDC 6600, designed bySeymour Cray, was finished in 1964 and marked the transition fromgermaniumtosilicontransistors. Silicon transistors could run more quickly and the overheating problem was solved by introducing refrigeration to the supercomputer design.[18]Thus, the CDC6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed asupercomputerand defined the supercomputing market, when one hundred computers were sold at $8 million each.[19][20][21][22]
Cray left CDC in 1972 to form his own company,Cray Research.[20]Four years after leaving CDC, Cray delivered the 80 MHzCray-1in 1976, which became one of the most successful supercomputers in history.[23][24]TheCray-2was released in 1985. It had eightcentral processing units(CPUs),liquid coolingand the electronics coolant liquidFluorinertwas pumped through thesupercomputer architecture. It reached 1.9gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.[25]
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"[26]But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably theConnection Machine(CM) that developed from research atMIT. The CM-1 used as many as 65,536 simplified custommicroprocessorsconnected together in anetworkto share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.[27]
In 1982,Osaka University'sLINKS-1 Computer Graphics Systemused amassively parallelprocessing architecture, with 514microprocessors, including 257Zilog Z8001control processorsand 257iAPX86/20floating-point processors. It was mainly used for rendering realistic3D computer graphics.[28]Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors usedGaAs, a material normally reserved for microwave applications due to its toxicity.[29]Fujitsu'sNumerical Wind Tunnelsupercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7gigaFLOPS (GFLOPS)per processor.[30][31]TheHitachi SR2201obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensionalcrossbarnetwork.[32][33][34]TheIntel Paragoncould have 1000 to 4000Intel i860processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was aMIMDmachine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via theMessage Passing Interface.[35]
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics processing units to the mix.[9][10]
In 1998,David Baderdeveloped the firstLinuxsupercomputer using commodity parts.[36]While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously.[37]Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world.[37][38]Though Linux-based clusters using consumer-grade parts, such asBeowulf, existed prior to the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.[37]
Systems with a massive number of processors generally take one of two paths. In thegrid computingapproach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[39]In another approach, many processors are used in proximity to each other, e.g. in acomputer cluster. In such a centralizedmassively parallelsystem the speed and flexibility of theinterconnectbecomes very important and modern supercomputers have used various approaches ranging from enhancedInfinibandsystems to three-dimensionaltorus interconnects.[40][41]The use ofmulti-core processorscombined with centralization is an emerging direction, e.g. as in theCyclops64system.[42][43]
As the price, performance andenergy efficiencyofgeneral-purpose graphics processing units(GPGPUs) have improved, a number ofpetaFLOPSsupercomputers such asTianhe-IandNebulaehave started to rely on them.[44]However, other systems such as theK computercontinue to use conventional processors such asSPARC-based designs and the overall applicability ofGPGPUsin general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application to it.[45]However, GPUs are gaining ground, and in 2012 theJaguarsupercomputer was transformed intoTitanby retrofitting CPUs with GPUs.[46][47][48]
High-performance computers have an expected life cycle of about three years before requiring an upgrade.[49]TheGyoukousupercomputer is unique in that it uses both a massively parallel design andliquid immersion cooling.
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmedFPGAchips or even customASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers includeBelle,[50]Deep Blue,[51]andHydra[52]for playingchess,Gravity Pipefor astrophysics,[53]MDGRAPE-3for protein structure prediction and molecular dynamics,[54]andDeep Crackfor breaking theDEScipher.[55]
Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[58][59][60]The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[61]There have been diverse approaches to heat management, from pumpingFluorinertthrough the system, to a hybrid liquid-air cooling system or air cooling with normalair conditioningtemperatures.[62][63]A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example,Tianhe-1Aconsumes 4.04megawatts(MW) of electricity.[64]The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
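The cost figure above is straightforward unit arithmetic; a minimal Python sketch (the 4 MW draw and the $0.10/kWh rate are the figures quoted above, everything else is just conversion) reproduces it:

```python
# Operating-cost arithmetic for a ~4 MW supercomputer at $0.10 per kWh.
power_mw = 4.04                 # Tianhe-1A power draw, from the text above
price_per_kwh = 0.10            # electricity price assumed in the text, $/kWh

kw = power_mw * 1000                    # megawatts -> kilowatts
cost_per_hour = kw * price_per_kwh      # kWh consumed per hour times price
cost_per_year = cost_per_hour * 24 * 365

print(f"~${cost_per_hour:,.0f} per hour")              # ~ $404 per hour
print(f"~${cost_per_year / 1e6:.1f} million per year") # ~ $3.5 million per year
```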
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.[65]Thethermal design powerandCPU power dissipationissues in supercomputing surpass those of traditionalcomputer coolingtechnologies. The supercomputing awards forgreen computingreflect this issue.[66][67][68]
The packing of thousands of processors together inevitably generates significant amounts ofheat densitythat need to be dealt with. TheCray-2wasliquid cooled, and used aFluorinert"cooling waterfall" which was forced through the modules under pressure.[62]However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and inSystem Xa special cooling system that combined air conditioning with liquid cooling was developed in conjunction with theLiebert company.[63]
In theBlue Genesystem, IBM deliberately used low power processors to deal with heat density.[69]The IBMPower 775, released in 2011, has closely packed elements that require water cooling.[70]The IBMAquasarsystem uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[71][72]
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008,RoadrunnerbyIBMoperated at 376MFLOPS/W.[73][74]In November 2010, theBlue Gene/Qreached 1,684 MFLOPS/W[75][76]and in June 2011 the top two spots on theGreen 500list were occupied byBlue Genemachines in New York (one achieving 2097 MFLOPS/W) with theDEGIMA clusterin Nagasaki placing third with 1375 MFLOPS/W.[77]
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can removewaste heat,[78]the ability of the cooling systems to remove waste heat is a limiting factor.[79][80]As of 2015[update], many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited – thethermal design powerof the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[81]
Since the end of the 20th century,supercomputer operating systemshave undergone major transformations, based on the changes insupercomputer architecture.[82]While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such asLinux.[83]
Since modernmassively parallelsupercomputers typically separate computations from other services by using multiple types ofnodes, they usually run different operating systems on different nodes, e.g. using a small and efficientlightweight kernelsuch asCNKorCNLon compute nodes, but a larger system such as a fullLinux distributionon server andI/Onodes.[84][85][86]
While in a traditional multi-user computer systemjob schedulingis, in effect, ataskingproblem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.[87]
Although most modern supercomputers useLinux-based operating systems, each manufacturer has its own specific Linux distribution, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[82][88]
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standardAPIssuch asMPI[90]andPVM,VTL, andopen sourcesoftware such asBeowulf.
In the most common scenario, environments such asPVMandMPIfor loosely connected clusters andOpenMPfor tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.GPGPUshave hundreds of processor cores and are programmed using programming models such asCUDAorOpenCL.
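As a minimal illustration of the message-passing style that MPI supports (a sketch using the mpi4py bindings; the ring exchange and the sum-of-squares reduction are arbitrary examples chosen for brevity, not a standard supercomputing kernel):

```python
# Minimal MPI sketch: each rank passes a value to its neighbour in a ring,
# then all ranks combine partial results with a collective reduction.
# Run with, e.g.:  mpiexec -n 4 python ring_example.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Point-to-point communication: send to the next rank, receive from the previous one.
dest = (rank + 1) % size
source = (rank - 1) % size
received = comm.sendrecv(rank, dest=dest, source=source)

# Collective communication: sum a locally computed partial result across all ranks.
local_partial = rank * rank
total = comm.reduce(local_partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks; rank 0 received {received}; sum of squares = {total}")
```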
Moreover, it is quite difficult to debug and test parallel programs.Special techniquesneed to be used for testing and debugging such applications.
Opportunistic supercomputing is a form of networkedgrid computingwhereby a "super virtual computer" of manyloosely coupledvolunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scaleembarrassingly parallelproblems that require supercomputing performance scales. However, basic grid andcloud computingapproaches that rely onvolunteer computingcannot handle traditional supercomputing tasks such as fluid dynamic simulations.[91]
The fastest grid computing system is thevolunteer computing projectFolding@home(F@h). As of April 2020[update], F@h reported 2.5 exaFLOPS ofx86processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[92]
TheBerkeley Open Infrastructure for Network Computing(BOINC) platform hosts a number of volunteer computing projects. As of February 2017[update], BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand active Computers (Hosts) on the network.[93]
As of October 2016[update],Great Internet Mersenne Prime Search's (GIMPS) distributedMersenne Primesearch achieved about 0.313 PFLOPS through over 1.3 million computers.[94]The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since 1997.
Quasi-opportunistic supercomputing is a form ofdistributed computingwhereby the "super virtual computer" of many networked geographically disperse computers performs computing tasks that demand huge processing power.[95]Quasi-opportunistic supercomputing aims to provide a higher quality of service thanopportunistic grid computingby achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.[95]
Cloud computingwith its recent and rapid expansions and development have grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud such assoftware as a service,platform as a service, andinfrastructure as a service. HPC users may benefit from the cloud in different angles such as scalability, resources being on-demand, fast, and inexpensive. On the other hand, moving HPC applications have a set of challenges too. Good examples of such challenges arevirtualizationoverhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.[96][97][98][99]
In 2016, Penguin Computing, Parallel Works, R-HPC,Amazon Web Services,Univa,Silicon Graphics International,Rescale, Sabalcore, and Gomput started to offer HPCcloud computing. The Penguin On Demand (POD) cloud is abare-metalcompute model to execute code, but each user is givenvirtualizedlogin node. POD computing nodes are connected via non-virtualized10 Gbit/sEthernetor QDRInfiniBandnetworks. User connectivity to the PODdata centerranges from 50 Mbit/s to 1 Gbit/s.[100]Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues thatvirtualizationof compute nodes is not suitable for HPC. Penguin Computing has also criticized that HPC clouds may have allocated computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications.[101]
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complexweather simulationapplication.[102]
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[102]Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[102]
In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers.[103] These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). Petascale supercomputers can process one quadrillion (10^15, or 1000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range; an EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS). However, the performance of a supercomputer can be severely affected by fluctuations brought on by elements like system load, network traffic, and concurrent processes, as mentioned by Brehm and Bruhwiler (2015).[104]
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry.[105]The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from theLINPACK benchmarksand shown as "Rmax" in the TOP500 list.[106]The LINPACK benchmark typically performsLU decompositionof a large matrix.[107]The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.[105]
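A rough single-node analogue of an "Rmax"-style measurement can be sketched by timing a dense LU factorization and dividing the nominal (2/3)n^3 floating-point operation count by the elapsed time. The snippet below (assuming NumPy and SciPy are available) is only a toy illustration of the idea, not the HPL benchmark itself:

```python
# Toy LINPACK-style measurement: LU-factorize a random n x n matrix and
# report achieved GFLOPS, using the nominal ~(2/3) n^3 operation count of LU.
import time
import numpy as np
from scipy.linalg import lu_factor

n = 2000
a = np.random.rand(n, n)

start = time.perf_counter()
lu, piv = lu_factor(a)                   # LAPACK getrf under the hood
elapsed = time.perf_counter() - start

flop_count = (2.0 / 3.0) * n**3          # nominal LU flop count
gflops = flop_count / elapsed / 1e9
print(f"n={n}: {elapsed:.3f} s, ~{gflops:.1f} GFLOPS achieved")
```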
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to theirLINPACK benchmarkresults. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
This is a list of the computers which appeared at the top of theTOP500 listsince June 1993,[108]and the "Peak speed" is given as the "Rmax" rating. In 2018,Lenovobecame the world's largest provider for the TOP500 supercomputers with 117 units produced.[109]
The stages of supercomputer application are summarized in the following table:
The IBMBlue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[121]
Modern weather forecasting relies on supercomputers. TheNational Oceanic and Atmospheric Administrationuses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[122]
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored byIBM's abandonment of theBlue Waterspetascale project.[123]
TheAdvanced Simulation and Computing Programcurrently uses supercomputers to maintain and simulate the United States nuclear stockpile.[124]
In early 2020, during the COVID-19 pandemic, supercomputers ran a variety of simulations to search for compounds that could potentially stop the spread of the virus. Such runs last for tens of hours and use many CPUs working in parallel to model the different processes.[125][126][127]
In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOP (10^18, or one quintillion FLOPS) supercomputer.[128] Erik P. DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21, or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[129][130][131] Such systems might be built around 2030.[132]
ManyMonte Carlo simulationsuse the same algorithm to process a randomly generated data set; particularly,integro-differential equationsdescribingphysical transport processes, therandom paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc.The next step for microprocessors may be into thethird dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.[133]
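The embarrassingly parallel character of such Monte Carlo workloads is easy to see in miniature: every worker runs the same sampling loop on an independent random stream, and only small tallies are combined at the end. A toy sketch (estimating π rather than a transport problem):

```python
# Embarrassingly parallel Monte Carlo: every worker runs the same sampling
# loop on independent random data, and only the tallies are combined.
import random

def estimate_pi(samples: int, seed: int) -> float:
    rng = random.Random(seed)          # each worker gets its own stream
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:       # point falls inside the quarter circle
            hits += 1
    return 4.0 * hits / samples

# Pretend these are separate nodes: each gets its own seed.
partials = [estimate_pi(100_000, seed) for seed in range(8)]
print(sum(partials) / len(partials))   # ~3.14
```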
The cost of operating high performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top-10 supercomputer required on the order of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts.[134] A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing.[135] At the time, a year of consumption at one megawatt cost about US$1 million. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible.[136] CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.[137]
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched thePartnership for Advanced Computing in Europe(PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across theEuropean Unionin porting, scaling and optimizing supercomputing applications.[134]Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center inReykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.[138]
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros.[134]In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.[134]
Examples of supercomputers in fiction includeHAL 9000,Multivac,The Machine Stops,GLaDOS,The Evitable Conflict,Vulcan's Hammer,Colossus,WOPR,AM, andDeep Thought. A supercomputer fromThinking Machineswas mentioned as the supercomputer used to sequence theDNAextracted from preserved parasites in theJurassic Parkseries.
|
https://en.wikipedia.org/wiki/Supercomputer
|
Inmathematics, there are severalintegralsknown as theDirichlet integral, after the German mathematicianPeter Gustav Lejeune Dirichlet, one of which is theimproper integralof thesinc functionover the positive real number line.
∫0∞sinxxdx=π2.{\displaystyle \int _{0}^{\infty }{\frac {\sin x}{x}}\,dx={\frac {\pi }{2}}.}
This integral is notabsolutely convergent, meaning|sinxx|{\displaystyle \left|{\frac {\sin x}{x}}\right|}has infinite Lebesgue or Riemann improper integral over the positive real line, so the sinc function is notLebesgue integrableover the positive real line. The sinc function is, however, integrable in the sense of the improperRiemann integralor the generalized Riemann orHenstock–Kurzweil integral.[1][2]This can be seen by usingDirichlet's test for improper integrals.
It is a good illustration of special techniques for evaluating definite integrals, particularly when it is not useful to directly apply thefundamental theorem of calculusdue to the lack of an elementaryantiderivativefor the integrand, as thesine integral, an antiderivative of the sinc function, is not anelementary function. In this case, the improper definite integral can be determined in several ways: the Laplace transform, double integration, differentiating under the integral sign, contour integration, and the Dirichlet kernel. But since the integrand is an even function, the domain of integration can be extended to the negative real number line as well.
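Before turning to the analytic evaluations, the value π/2 ≈ 1.5708 can be checked numerically through the sine integral Si(x) = ∫_0^x (sin t)/t dt, which tends to π/2 as x grows (a quick sketch assuming SciPy is available):

```python
# Numerical check that the Dirichlet integral equals pi/2.
import numpy as np
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

for x in (10, 100, 1000, 100000):
    si, _ = sici(x)
    print(f"Si({x:>6}) = {si:.6f}")   # decaying oscillation toward pi/2

print("pi/2      =", np.pi / 2)       # 1.570796...
```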
Letf(t){\displaystyle f(t)}be a function defined whenevert≥0.{\displaystyle t\geq 0.}Then itsLaplace transformis given byL{f(t)}=F(s)=∫0∞e−stf(t)dt,{\displaystyle {\mathcal {L}}\{f(t)\}=F(s)=\int _{0}^{\infty }e^{-st}f(t)\,dt,}if the integral exists.[3]
A property of theLaplace transform useful for evaluating improper integralsisL[f(t)t]=∫s∞F(u)du,{\displaystyle {\mathcal {L}}\left[{\frac {f(t)}{t}}\right]=\int _{s}^{\infty }F(u)\,du,}providedlimt→0f(t)t{\displaystyle \lim _{t\to 0}{\frac {f(t)}{t}}}exists.
In what follows, one needs the resultL{sint}=1s2+1,{\displaystyle {\mathcal {L}}\{\sin t\}={\frac {1}{s^{2}+1}},}which is the Laplace transform of the functionsint{\displaystyle \sin t}(see the section 'Differentiating under the integral sign' for a derivation) as well as a version ofAbel's theorem(a consequence of thefinal value theorem for the Laplace transform).
Therefore,∫0∞sinttdt=lims→0∫0∞e−stsinttdt=lims→0L[sintt]=lims→0∫s∞duu2+1=lims→0arctanu|s∞=lims→0[π2−arctan(s)]=π2.{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt&=\lim _{s\to 0}\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt=\lim _{s\to 0}{\mathcal {L}}\left[{\frac {\sin t}{t}}\right]\\[6pt]&=\lim _{s\to 0}\int _{s}^{\infty }{\frac {du}{u^{2}+1}}=\lim _{s\to 0}\arctan u{\Biggr |}_{s}^{\infty }\\[6pt]&=\lim _{s\to 0}\left[{\frac {\pi }{2}}-\arctan(s)\right]={\frac {\pi }{2}}.\end{aligned}}}
Evaluating the Dirichlet integral using the Laplace transform is equivalent to calculating the same double definite integral by changing theorder of integration, namely,(I1=∫0∞∫0∞e−stsintdtds)=(I2=∫0∞∫0∞e−stsintdsdt),{\displaystyle \left(I_{1}=\int _{0}^{\infty }\int _{0}^{\infty }e^{-st}\sin t\,dt\,ds\right)=\left(I_{2}=\int _{0}^{\infty }\int _{0}^{\infty }e^{-st}\sin t\,ds\,dt\right),}(I1=∫0∞1s2+1ds=π2)=(I2=∫0∞sinttdt),provideds>0.{\displaystyle \left(I_{1}=\int _{0}^{\infty }{\frac {1}{s^{2}+1}}\,ds={\frac {\pi }{2}}\right)=\left(I_{2}=\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt\right),{\text{ provided }}s>0.}The change of order is justified by the fact that for alls>0{\displaystyle s>0}, the integral is absolutely convergent.
First rewrite the integral as a function of the additional variables,{\displaystyle s,}namely, the Laplace transform ofsintt.{\displaystyle {\frac {\sin t}{t}}.}So letf(s)=∫0∞e−stsinttdt.{\displaystyle f(s)=\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt.}
In order to evaluate the Dirichlet integral, we need to determinef(0).{\displaystyle f(0).}The continuity off{\displaystyle f}can be justified by applying thedominated convergence theoremafter integration by parts. Differentiate with respect tos>0{\displaystyle s>0}and apply theLeibniz rule for differentiating under the integral signto obtaindfds=dds∫0∞e−stsinttdt=∫0∞∂∂se−stsinttdt=−∫0∞e−stsintdt.{\displaystyle {\begin{aligned}{\frac {df}{ds}}&={\frac {d}{ds}}\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt=\int _{0}^{\infty }{\frac {\partial }{\partial s}}e^{-st}{\frac {\sin t}{t}}\,dt\\[6pt]&=-\int _{0}^{\infty }e^{-st}\sin t\,dt.\end{aligned}}}
Now, using Euler's formulaeit=cost+isint,{\displaystyle e^{it}=\cos t+i\sin t,}one can express the sine function in terms of complex exponentials:sint=12i(eit−e−it).{\displaystyle \sin t={\frac {1}{2i}}\left(e^{it}-e^{-it}\right).}
Therefore,dfds=−∫0∞e−stsintdt=−∫0∞e−steit−e−it2idt=−12i∫0∞[e−t(s−i)−e−t(s+i)]dt=−12i[−1s−ie−t(s−i)−−1s+ie−t(s+i)]0∞=−12i[0−(−1s−i+1s+i)]=−12i(1s−i−1s+i)=−12i(s+i−(s−i)s2+1)=−1s2+1.{\displaystyle {\begin{aligned}{\frac {df}{ds}}&=-\int _{0}^{\infty }e^{-st}\sin t\,dt=-\int _{0}^{\infty }e^{-st}{\frac {e^{it}-e^{-it}}{2i}}dt\\[6pt]&=-{\frac {1}{2i}}\int _{0}^{\infty }\left[e^{-t(s-i)}-e^{-t(s+i)}\right]dt\\[6pt]&=-{\frac {1}{2i}}\left[{\frac {-1}{s-i}}e^{-t(s-i)}-{\frac {-1}{s+i}}e^{-t(s+i)}\right]_{0}^{\infty }\\[6pt]&=-{\frac {1}{2i}}\left[0-\left({\frac {-1}{s-i}}+{\frac {1}{s+i}}\right)\right]=-{\frac {1}{2i}}\left({\frac {1}{s-i}}-{\frac {1}{s+i}}\right)\\[6pt]&=-{\frac {1}{2i}}\left({\frac {s+i-(s-i)}{s^{2}+1}}\right)=-{\frac {1}{s^{2}+1}}.\end{aligned}}}
Integrating with respect tos{\displaystyle s}givesf(s)=∫−dss2+1=A−arctans,{\displaystyle f(s)=\int {\frac {-ds}{s^{2}+1}}=A-\arctan s,}
whereA{\displaystyle A}is a constant of integration to be determined. Sincelims→∞f(s)=0,{\displaystyle \lim _{s\to \infty }f(s)=0,}A=lims→∞arctans=π2,{\displaystyle A=\lim _{s\to \infty }\arctan s={\frac {\pi }{2}},}using the principal value. This means that fors>0{\displaystyle s>0}f(s)=π2−arctans.{\displaystyle f(s)={\frac {\pi }{2}}-\arctan s.}
Finally, by continuity ats=0,{\displaystyle s=0,}we havef(0)=π2−arctan(0)=π2,{\displaystyle f(0)={\frac {\pi }{2}}-\arctan(0)={\frac {\pi }{2}},}as before.
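The closed form f(s) = π/2 − arctan s obtained above is easy to spot-check numerically by truncating the Laplace-damped integral at a large upper limit (a sketch assuming SciPy; the truncation point 200 is arbitrary but makes the neglected tail negligible for the values of s shown):

```python
# Check f(s) = integral_0^inf e^{-s t} sin(t)/t dt against pi/2 - arctan(s).
import numpy as np
from scipy.integrate import quad

def f(s: float) -> float:
    # Truncate the infinite integral at t = 200; the tail is ~e^{-200 s} and negligible here.
    val, _ = quad(lambda t: np.exp(-s * t) * np.sin(t) / t, 0, 200, limit=400)
    return val

for s in (0.1, 0.5, 1.0, 2.0):
    print(f"s={s}: integral={f(s):.6f}   pi/2 - arctan(s)={np.pi / 2 - np.arctan(s):.6f}")
```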
Considerf(z)=eizz.{\displaystyle f(z)={\frac {e^{iz}}{z}}.}
As a function of the complex variablez,{\displaystyle z,}it has a simple pole at the origin, which prevents the application ofJordan's lemma, whose other hypotheses are satisfied.
Define then a new function[4]g(z)=eizz+iε.{\displaystyle g(z)={\frac {e^{iz}}{z+i\varepsilon }}.}
The pole has been moved to the negative imaginary axis, sog(z){\displaystyle g(z)}can be integrated along the semicircleγ{\displaystyle \gamma }of radiusR{\displaystyle R}centered atz=0{\displaystyle z=0}extending in the positive imaginary direction, and closed along the real axis. One then takes the limitε→0.{\displaystyle \varepsilon \to 0.}
The complex integral is zero by theresidue theorem, as there are no poles inside the integration pathγ{\displaystyle \gamma }:0=∫γg(z)dz=∫−RReixx+iεdx+∫0πei(Reiθ+θ)Reiθ+iεiRdθ.{\displaystyle 0=\int _{\gamma }g(z)\,dz=\int _{-R}^{R}{\frac {e^{ix}}{x+i\varepsilon }}\,dx+\int _{0}^{\pi }{\frac {e^{i(Re^{i\theta }+\theta )}}{Re^{i\theta }+i\varepsilon }}iR\,d\theta .}
The second term vanishes asR{\displaystyle R}goes to infinity. As for the first integral, one can use one version of theSokhotski–Plemelj theoremfor integrals over the real line: for acomplex-valued functionfdefined and continuously differentiable on the real line and real constantsa{\displaystyle a}andb{\displaystyle b}witha<0<b{\displaystyle a<0<b}one findslimε→0+∫abf(x)x±iεdx=∓iπf(0)+P∫abf(x)xdx,{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+{\mathcal {P}}\int _{a}^{b}{\frac {f(x)}{x}}\,dx,}
whereP{\displaystyle {\mathcal {P}}}denotes theCauchy principal value. Back to the above original calculation, one can write0=P∫eixxdx−πi.{\displaystyle 0={\mathcal {P}}\int {\frac {e^{ix}}{x}}\,dx-\pi i.}
By taking the imaginary part on both sides and noting that the functionsin(x)/x{\displaystyle \sin(x)/x}is even, we get∫−∞+∞sin(x)xdx=2∫0+∞sin(x)xdx.{\displaystyle \int _{-\infty }^{+\infty }{\frac {\sin(x)}{x}}\,dx=2\int _{0}^{+\infty }{\frac {\sin(x)}{x}}\,dx.}
Finally,limε→0∫ε∞sin(x)xdx=∫0∞sin(x)xdx=π2.{\displaystyle \lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {\sin(x)}{x}}\,dx=\int _{0}^{\infty }{\frac {\sin(x)}{x}}\,dx={\frac {\pi }{2}}.}
Alternatively, choose as the integration contour for f the union of upper half-plane semicircles of radii ε and R together with two segments of the real line that connect them. On one hand the contour integral is zero, independently of ε and R; on the other hand, as ε → 0 and R → ∞ the imaginary part of the integral converges to{\displaystyle 2I+\Im {\big (}\ln \varepsilon -\ln(\varepsilon e^{i\pi }){\big )}=2I-\pi }(here ln z is any branch of the logarithm on the upper half-plane, and the second term is the contribution of the small semicircle around the pole), leading to I = π/2.
Consider the well-known formula for theDirichlet kernel:[5]Dn(x)=1+2∑k=1ncos(2kx)=sin[(2n+1)x]sin(x).{\displaystyle D_{n}(x)=1+2\sum _{k=1}^{n}\cos(2kx)={\frac {\sin[(2n+1)x]}{\sin(x)}}.}
It immediately follows that:∫0π2Dn(x)dx=π2.{\displaystyle \int _{0}^{\frac {\pi }{2}}D_{n}(x)\,dx={\frac {\pi }{2}}.}
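Both the closed form of the kernel and the constancy of its integral over [0, π/2] are easy to verify numerically (a small sketch; the choices of n and of the sample point are arbitrary):

```python
# Check D_n(x) = 1 + 2*sum_{k=1}^n cos(2kx) = sin((2n+1)x)/sin(x), and its integral pi/2.
import numpy as np
from scipy.integrate import quad

def dirichlet_kernel(x: float, n: int) -> float:
    return 1.0 + 2.0 * sum(np.cos(2 * k * x) for k in range(1, n + 1))

n = 7
x = 0.3
print(dirichlet_kernel(x, n), np.sin((2 * n + 1) * x) / np.sin(x))  # the two forms agree

integral, _ = quad(dirichlet_kernel, 0, np.pi / 2, args=(n,))
print(integral, np.pi / 2)    # both ~1.5708, independently of n
```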
Definef(x)={1x−1sin(x)x≠00x=0{\displaystyle f(x)={\begin{cases}{\frac {1}{x}}-{\frac {1}{\sin(x)}}&x\neq 0\\[6pt]0&x=0\end{cases}}}
Clearly,f{\displaystyle f}is continuous whenx∈(0,π/2];{\displaystyle x\in (0,\pi /2];}to see its continuity at 0 applyL'Hopital's Rule:limx→0sin(x)−xxsin(x)=limx→0cos(x)−1sin(x)+xcos(x)=limx→0−sin(x)2cos(x)−xsin(x)=0.{\displaystyle \lim _{x\to 0}{\frac {\sin(x)-x}{x\sin(x)}}=\lim _{x\to 0}{\frac {\cos(x)-1}{\sin(x)+x\cos(x)}}=\lim _{x\to 0}{\frac {-\sin(x)}{2\cos(x)-x\sin(x)}}=0.}
Hence,f{\displaystyle f}fulfills the requirements of theRiemann-Lebesgue Lemma. This means:limλ→∞∫0π/2f(x)sin(λx)dx=0⟹limλ→∞∫0π/2sin(λx)xdx=limλ→∞∫0π/2sin(λx)sin(x)dx.{\displaystyle \lim _{\lambda \to \infty }\int _{0}^{\pi /2}f(x)\sin(\lambda x)dx=0\quad \Longrightarrow \quad \lim _{\lambda \to \infty }\int _{0}^{\pi /2}{\frac {\sin(\lambda x)}{x}}dx=\lim _{\lambda \to \infty }\int _{0}^{\pi /2}{\frac {\sin(\lambda x)}{\sin(x)}}dx.}
(The form of the Riemann-Lebesgue Lemma used here is proven in the article cited.)
We would like to compute:∫0∞sin(t)tdt=limλ→∞∫0λπ2sin(t)tdt=limλ→∞∫0π2sin(λx)xdx=limλ→∞∫0π2sin(λx)sin(x)dx=limn→∞∫0π2sin((2n+1)x)sin(x)dx=limn→∞∫0π2Dn(x)dx=π2{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\sin(t)}{t}}dt=&\lim _{\lambda \to \infty }\int _{0}^{\lambda {\frac {\pi }{2}}}{\frac {\sin(t)}{t}}dt\\[6pt]=&\lim _{\lambda \to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin(\lambda x)}{x}}dx\\[6pt]=&\lim _{\lambda \to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin(\lambda x)}{\sin(x)}}dx\\[6pt]=&\lim _{n\to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin((2n+1)x)}{\sin(x)}}dx\\[6pt]=&\lim _{n\to \infty }\int _{0}^{\frac {\pi }{2}}D_{n}(x)dx={\frac {\pi }{2}}\end{aligned}}}
However, we must justify switching the real limit inλ{\displaystyle \lambda }to the integral limit inn,{\displaystyle n,}which will follow from showing that the limit does exist.
Usingintegration by parts, we have:∫absin(x)xdx=∫abd(1−cos(x))xdx=1−cos(x)x|ab+∫ab1−cos(x)x2dx{\displaystyle \int _{a}^{b}{\frac {\sin(x)}{x}}dx=\int _{a}^{b}{\frac {d(1-\cos(x))}{x}}dx=\left.{\frac {1-\cos(x)}{x}}\right|_{a}^{b}+\int _{a}^{b}{\frac {1-\cos(x)}{x^{2}}}dx}
Now, asa→0{\displaystyle a\to 0}andb→∞{\displaystyle b\to \infty }the term on the left converges with no problem. See thelist of limits of trigonometric functions. We now show that∫−∞∞1−cos(x)x2dx{\displaystyle \int _{-\infty }^{\infty }{\frac {1-\cos(x)}{x^{2}}}dx}is absolutely integrable, which implies that the limit exists.[6]
First, we seek to bound the integral near the origin. Using the Taylor-series expansion of the cosine about zero,{\displaystyle 1-\cos(x)=1-\sum _{k\geq 0}{\frac {(-1)^{k}x^{2k}}{(2k)!}}=\sum _{k\geq 1}{\frac {(-1)^{k+1}x^{2k}}{(2k)!}}.}
Therefore,{\displaystyle \left|{\frac {1-\cos(x)}{x^{2}}}\right|=\left|\sum _{k\geq 0}{\frac {(-1)^{k}x^{2k}}{(2(k+1))!}}\right|\leq \sum _{k\geq 0}{\frac {|x|^{2k}}{(2k)!}}\leq e^{|x|}.}
Splitting the integral into pieces, we have∫−∞∞|1−cos(x)x2|dx≤∫−∞−ε2x2dx+∫−εεe|x|dx+∫ε∞2x2dx≤K,{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {1-\cos(x)}{x^{2}}}\right|dx\leq \int _{-\infty }^{-\varepsilon }{\frac {2}{x^{2}}}dx+\int _{-\varepsilon }^{\varepsilon }e^{|x|}dx+\int _{\varepsilon }^{\infty }{\frac {2}{x^{2}}}dx\leq K,}
for some constantK>0.{\displaystyle K>0.}This shows that the integral is absolutely integrable, which implies the original integral exists, and switching fromλ{\displaystyle \lambda }ton{\displaystyle n}was in fact justified, and the proof is complete.
|
https://en.wikipedia.org/wiki/Dirichlet_integral
|
In grammar, a phrase—called expression in some contexts—is a group of words or a single word acting as a grammatical unit. For instance, the English expression "the very happy squirrel" is a noun phrase which contains the adjective phrase "very happy". Phrases can consist of a single word or a complete sentence. In theoretical linguistics, phrases are often analyzed as units of syntactic structure such as a constituent. There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as "all rights reserved", "economical with the truth", "kick the bucket", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In linguistics, these are known as phrasemes.
In theories ofsyntax, a phrase is any group of words, or sometimes a single word, which plays a particular role within the syntactic structure of asentence. It does not have to have any special meaning or significance, or even exist anywhere outside of the sentence being analyzed, but it must function there as a complete grammatical unit. For example, in the sentenceYesterday I saw an orange bird with a white neck, the wordsan orange bird with a white neckform anoun phrase, or adeterminer phrasein some theories, which functions as theobjectof the sentence.
Many theories of syntax and grammar illustrate sentence structure using phrase 'trees', which provide schematics of how the words in a sentence are grouped and relate to each other. A tree shows the words, phrases, and clauses that make up a sentence. Any word combination that corresponds to a complete subtree can be seen as a phrase.
There are two competing principles for constructing trees; they produce 'constituency' and 'dependency' trees and both are illustrated here using an example sentence. The constituency-based tree is on the left and the dependency-based tree is on the right (whereadjective(A),determiner(D),noun(N), sentence (S),verb(V),noun phrase(NP),prepositional phrase(PP),verb phrase(VP)):
The tree on the left is of the constituency-based,phrase structure grammar, and the tree on the right is of thedependency grammar. The node labels in the two trees mark thesyntactic categoryof the differentconstituents, or word elements, of the sentence.
In the constituency tree each phrase is marked by a phrasal node (NP, PP, VP); and there are eight phrases identified by phrase structure analysis in the example sentence. On the other hand, the dependency tree identifies a phrase by any node that exerts dependency upon, or dominates, another node. And, using dependency analysis, there are six phrases in the sentence.
The trees and phrase-counts demonstrate that different theories of syntax differ in the word combinations they qualify as a phrase. Here the constituency tree identifies three phrases that the dependency tree does not, namely: house at the end of the street, end of the street, and the end. More analysis, including about the plausibilities of both grammars, can be made empirically by applying constituency tests.
In grammatical analysis, most phrases contain ahead, which identifies the type and linguistic features of the phrase. Thesyntactic categoryof the head is used to name the category of the phrase;[1]for example, a phrase whose head is anounis called anoun phrase. The remaining words in a phrase are called the dependents of the head.
In the following phrases the head-word, or head, is bolded:
The above five examples are the most common of phrase types; but, by the logic of heads and dependents, others can be routinely produced. For instance, thesubordinatorphrase:
By linguistic analysis this is a group of words that qualifies as a phrase, and the head-word gives its syntactic name, "subordinator", to the grammatical category of the entire phrase. But this phrase, "beforethat happened", is more commonly classified in other grammars, including traditional English grammars, as asubordinate clause(ordependent clause); and it is then labellednotas a phrase, but as aclause.
Most theories of syntax view most phrases as having a head, but some non-headed phrases are acknowledged. A phrase lacking a head is known asexocentric, and phrases with heads areendocentric.
Some modern theories of syntax introducefunctional categoriesin which the head of a phrase is a functional lexical item. Some functional heads in some languages are not pronounced, but are rathercovert. For example, in order to explain certain syntactic patterns which correlate with thespeech acta sentence performs, some researchers have positedforce phrases(ForceP), whose heads are not pronounced in many languages including English. Similarly, many frameworks assume that covertdeterminersare present in bare noun phrases such asproper names.
Another type is theinflectional phrase, where (for example) afinite verbphrase is taken to be the complement of a functional, possibly covert head (denoted INFL) which is supposed to encode the requirements for the verb toinflect– foragreementwith its subject (which is thespecifierof INFL), fortenseandaspect, etc. If these factors are treated separately, then more specific categories may be considered:tense phrase(TP), where the verb phrase is the complement of an abstract "tense" element;aspect phrase;agreement phraseand so on.
Further examples of such proposed categories includetopic phraseandfocus phrase, which are argued to be headed by elements that encode the need for a constituent of the sentence to be marked as thetopicorfocus.
Theories of syntax differ in what they regard as a phrase. For instance, while most if not all theories of syntax acknowledge the existence ofverb phrases(VPs),Phrase structure grammarsacknowledge bothfinite verbphrases andnon-finite verbphrases whiledependency grammarsonly acknowledge non-finite verb phrases. The split between these views persists due to conflicting results from the standard empirical diagnostics of phrasehood such asconstituency tests.[2]
The distinction is illustrated with the following examples:
The syntax trees of this sentence are next:
The constituency tree on the left shows the finite verb string may nominate Newt as a constituent; it corresponds to VP1. In contrast, this same string is not shown as a phrase in the dependency tree on the right. However, both trees take the non-finite VP string nominate Newt to be a constituent.
|
https://en.wikipedia.org/wiki/Phrase
|
Duolingo, Inc.[b]is an Americaneducational technologycompany that produces learningappsand provideslanguage certification. Duolingo offers courses on 43 languages,[5]ranging fromEnglish,French, andSpanishto less commonly studied languages such asWelsh,Irish, andNavajo, and even constructed languages such asKlingon.[6]It also offers courses onmusic,[7]math, andchess.[8]The learning method incorporatesgamificationto motivate users with points, rewards and interactive lessons featuringspaced repetition.[9]The app promotes short, daily lessons for consistent-phased practice.
Duolingo also offers theDuolingo English Test, an onlinelanguage assessment, and Duolingo ABC, a literacy app designed for children. The company follows afreemiummodel, with optional premium services like Super Duolingo and Duolingo Max, which are ad-free and provide additional features. Additionally, Duolingo runsDuo's Taqueria, a Mexican taco restaurant in Pittsburgh.
With over 130 million monthly active users, Duolingo is the most populareducational appin the world.[10][11][12]Over 10 million people have a Duolingo streak longer than a year.[13]In total, learners on Duolingo complete more than 13 billion exercises per week.[14]Asystematic reviewofresearchon Duolingo from 2012 to 2020 found comparatively few studies on the platform's efficiency forlanguage learningbut identified several studies that reported relatively high user satisfaction, enjoyment, and positive perceptions of the app's effectiveness.[15]The company has also been recognized for its successful marketing tactics and strongbrand engagement.[16][17]
The idea of Duolingo was formulated in 2009 byCarnegie Mellon UniversityprofessorLuis von Ahnand his Swiss-born post-graduate studentSeverin Hacker.[18][19]Von Ahn had sold his second company,reCAPTCHA, toGoogleand, with Hacker, wanted to work on an education-related project.[20]Von Ahn stated that he saw how expensive it was for people in his community inGuatemalato learn English.[21][22]Hacker (co-founder and currentCTOof Duolingo) believed that "free education will really change the world"[23]and wanted to provide an accessible means for doing so. He was recognized by theNational Inventors Hall of Famefor his contributions to language learning and technological development.[24]The Duo mascot is a green owl because co-founder Severin Hacker hates the color green.[25]
The project was originally financed by von Ahn'sMacArthur fellowshipand aNational Science Foundationgrant.[26][27][28]The founders considered creating Duolingo as anonprofit organization, but von Ahn judged this model unsustainable.[23]Its early revenue stream, acrowdsourcedtranslation service, was replaced by aDuolingo English Testcertification program, advertising, and subscription.[29][30]
In October 2011, Duolingo announced that it had raised $3.3 million from aSeries A roundof funding, led byUnion Square Ventures, with participation from authorTim Ferrissand actorAshton Kutcher's investing firmA-Grade Investments.[31]Duolingo launched a private beta on November 30, 2011, and accumulated a waiting list of more than 100,000 people by December 13.[32][33]It launched to the general public on June 19, 2012, at which point the waiting list had grown to around 500,000.[34][35]
In September 2012, Duolingo announced that it had raised a further $15 million from a Series B funding round led byNew Enterprise Associates, with participation from Union Square Ventures.[36]In November 2012, Duolingo released aniPhoneapp,[37]followed by anAndroidapp in May 2013, at which time Duolingo had around 3 million users.[38]By July 2013, it had grown to 5 million users and was rated the No. 1 free education app in theGoogle Play Store.[39]
In February 2014, Duolingo announced that it had raised $20 million from a Series C funding round led by Kleiner Perkins Caufield & Byers, with prior investors also participating.[40] At this time, it had 34 employees, and reported about 25 million registered users and 12.5 million active users;[40] it later reported a figure closer to 60 million users.[41]
In June 2015, Duolingo announced that it had raised $45 million from a Series D funding round led byGoogle Capital, bringing its total funding to $83.3 million. The round valued the company at around $470 million, with 100 million registered users globally.[29][41]In April 2016, it was reported that Duolingo had more than 18 million monthly users.[42][43]
In July 2017, Duolingo announced that it had raised $25 million in a Series E funding round led byDrive Capital, bringing its total funding to $108.3 million. The round valued Duolingo at $700 million, and the company reported passing 200 million registered users, with 25 million active users.[44]It was reported that Duolingo had 95 employees.[45]Funds from the Series E round would be directed toward creating initiatives such as a related educational flashcard app, TinyCards, and testbeds for initiatives related to reading and listening comprehension.[46]On August 1, 2018, Duolingo surpassed 300 million registered users.[47]
In December 2019, it was announced that Duolingo raised $30 million in a Series F funding round fromAlphabet's investment company,CapitalG.[22]The round valued Duolingo at $1.5 billion. Duolingo reported 30 million active users at this time. The headcount at the company had increased to around two hundred, and new offices had been opened inSeattle,New York, andBeijing.[48]Duolingo planned to use the funds to develop new products and further expand its team in sectors like engineering, business development, design, curriculum and content creators, community outreach, and marketing.[49]
In October 2013, Duolingo launched acrowdsourcedlanguage incubator.[50]In March 2021, it announced that it would be ending its volunteer contributor program. The company said that language courses would instead be maintained and developed by professional linguists aligning withCEFR standards.[51][non-primary source needed]On June 28, 2021, Duolingo filed for aninitial public offeringonNASDAQunder the ticker DUOL.[52]From August 2021 to June 2022, the Duolingo language learning app was removed from some app stores in China.[53]
In August 2022, Duolingo overhauled itsinterface, changing its course structure from a tree-like design, where users could choose from a range of lessons after completing previous ones, to a linear progression. This update has been criticized by users across social media outlets, such asRedditandTwitter.[21]CEO Luis von Ahn stated that there were no plans to reverse the changes.[54]In October 2022, Duolingo acquired Detroit-based animation studio Gunner; it is the studio that produces art assets and animation for Duolingo and Duolingo ABC and its marketing campaigns.[citation needed]
In March 2023, Duolingo officially announced the planned Duolingo Max, a subscription tier above Super Duolingo, in their blog.[55]In October 2023, Duolingo released math and music courses in English and Spanish foriOSusers.[56][57]
In January 2024, Duolingo fired some contractors and announced plans to replace them withAI.[58][59]The company acquired Detroit-based design studio Hobbes in March.[60]
CEFRbased language courses for learners ofEnglish,Spanish,French,Italian,Chinese (Mandarin),Japanese,Korean,Portuguese, andGermanare available for all users.[61]Additional courses are also available for speakers of English (Arabic,Czech,Danish,Dutch,Esperanto,Finnish,Greek,Haitian Creole,Hawaiian,Hebrew,High Valyrian,Hindi,Hungarian,Indonesian,Irish,Klingon,Latin,Navajo,Norwegian,Polish,Romanian,Russian,Scottish Gaelic,Swahili,Swedish,Turkish,Ukrainian,Vietnamese,Welsh,Yiddish,Zulu), Chinese (Chinese (Cantonese)), Arabic (Swedish), and Spanish (Catalan,Russian,Swedish).[needs update]
As of 2014, most of Duolingo's language learning features are free with advertising. Users can remove advertising by paying a subscription fee or promoting referral links.[62]The paid user program, Super Duolingo (formerly known as Duolingo Plus), offers unlimited retries and access to some additional types of lessons. It is otherwise identical to Duolingo for Schools.[63][64][non-primary source needed]
Duolingo Max is a subscription tier above Super Duolingo that adds additional functions using generative AI: Roleplay, an AI conversation partner; Explain My Answer, which breaks down the rules with a modified GPT-4 when the user makes a mistake; and Video Call, where users can have a video chat with one of the characters (currently only Lily). These features are intended to provide immersion through conversation.[65][non-primary source needed]
Duolingo for Schools is designed to help teachers use Duolingo in their classrooms. It allows teachers to create classrooms, assign lessons, track student progress, and personalize learning.[66][non-primary source needed]
The Duolingo English Test (DET) is an online English proficiency test that measures proficiency in reading, writing, speaking, and listening in English. It is a computer-based test scored on a scale of 10–160, with scores above 120 considered English proficiency. The test's questions algorithmically adjust to the test-takers' ability level. The test's certificate is reportedly accepted by over 5,500 programs internationally,[67]albeit with exceptions.[68]
Duolingo Math is an app course for learningelementary mathematics. It was announced on YouTube on August 27, 2022.[69]
On October 11, 2023, the company released Duolingo Music,[57]a new platform within the existing app that provides basic music learning throughpianoandsheet musiclessons.[70][71]
Duolingo introducedchesslessons in beta in April 2025, with an initial rollout planned for iOS in English by May. The lessons are structured around theElo rating system, gradually increasing in difficulty to match the user's skill level. Learners can play mini-matches or full games against Duolingo’s virtual chess coach, Oscar, whose difficulty also scales with progress. At launch, users are not able to play against each other.[72][73]
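For context, the standard Elo expected-score and update formulas referred to above look as follows (this is the generic Elo scheme; the ratings and K-factor below are illustrative, and nothing here describes Duolingo's internal implementation):

```python
# Generic Elo rating update: an expected score followed by a K-factor adjustment.
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, expected: float, actual: float, k: float = 32) -> float:
    return rating + k * (actual - expected)

learner, coach = 1200.0, 1300.0
e = expected_score(learner, coach)          # ~0.36 chance the learner wins
learner_after_win = update(learner, e, 1.0) # actual = 1.0 for a win
print(round(e, 2), round(learner_after_win, 1))   # 0.36  1220.5
```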
Duolingo ABC is a free app designed for young children to learn letters, their sounds, phonics, and other early reading concepts. Released in 2020, it does not contain ads or in-app purchases. As of April 2024, iOS and Android versions are available, but only in English.[8][74]
On Duolingo, learnerslearn by doing, engaging with the course material.[75]Lessons are designed to be brief, allowing users to learn in manageable chunks.[76][77]Duolingo uses agamified approachto learning, with lessons that incorporate translating, interactive exercises, quizzes, and stories.[78]It also uses an algorithm that adapts to each learner and can provide personalized feedback and recommendations.
Duolingo provides competitive features,[79]such as leagues, in which users compete in randomly selected worldwide groupings of up to 30 players. The leagues are Bronze, Silver, Gold, Sapphire, Ruby, Emerald, Amethyst, Pearl, Obsidian, and Diamond. Rankings in leagues are determined by the number of experience points earned in a week. Badges in Duolingo represent achievements earned from completing specific objectives.[80]Users can also create their own avatars.[81][82]
Any lesson completed in Duolingo counts towards the user's daily streak,[83]whose visual symbol in the app is a flame. Duolingo's "Friend Streak" lets users maintain streaks with up to five friends.[84]Streaks encourage consistent daily practice and help build a habit of regular learning.
The app uses a personalized bandit algorithm (later an A/B-tested variant, the "recovering difference softmax" algorithm) to determine which daily notification is sent to the user.[85]
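A minimal sketch of how a softmax bandit of this general kind might pick among notification templates is shown below. It is a generic illustration rather than Duolingo's actual system; the template names, the reward signal (whether the user opened the app), and the temperature value are all assumptions.

```python
import math
import random

def softmax_choice(values, temperature=1.0):
    """Pick an arm index with probability proportional to exp(value / temperature)."""
    weights = [math.exp(v / temperature) for v in values]
    total = sum(weights)
    r = random.random() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(weights) - 1

# Hypothetical notification templates (the arms of the bandit).
templates = ["streak_reminder", "friendly_nudge", "progress_update"]
estimates = [0.0] * len(templates)   # running estimate of each template's open rate
counts = [0] * len(templates)

def send_and_learn(observe_reward):
    """Choose a template, observe whether the user opened the app, update the estimate."""
    arm = softmax_choice(estimates, temperature=0.2)
    reward = observe_reward(templates[arm])   # 1.0 if the app was opened, 0.0 otherwise
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean
    return templates[arm], reward

# Simulated usage: pretend "streak_reminder" gets opened 40% of the time, the others 10%.
def simulated_user(template):
    return 1.0 if random.random() < (0.4 if template == "streak_reminder" else 0.1) else 0.0

for _ in range(1000):
    send_and_learn(simulated_user)
print(max(zip(estimates, templates)))   # the best-performing template rises to the top
```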
The Duolingo Score is an estimate of a user's proficiency in the language they are learning in CEFR-aligned courses, providing a granular assessment of what a student has learned and what they can do with the language.[86]The DET uses a similar scoring system. The most developed CEFR-aligned courses (French, English, and Spanish) cover Duolingo Scores from 0 to 130.
Duolingo operates on a freemium business model, offering free access to its learning platforms with ads. Revenue is primarily generated through subscriptions, which remove ads, and provide other perks like unlimited hearts and generative AI. The app also generates income from in-app purchases of virtual currency (Gems) and power-ups that enhance the learning experience. Another key revenue stream is the Duolingo English Test (DET), a low-cost English proficiency test.[87]
In April 2020, it passed one million paid subscribers;[88]it reached 2.9 million in March 2022,[89]and 4.8 million at the end of March 2023.[90]As of June 2024, Duolingo had 8 million paying subscribers.
Duolingo had revenue of $531 million in 2023, compared to $250.77 million in 2021,[91]$36 million in 2018,[92]$13 million in 2017,[47]and $1 million in 2016. In May 2022, it was reported that 6.8% of its monthly active users paid for the ad-free version of the app.[93]
A 2017 study found no significant difference between elementary students learning Spanish through the "gamification" of the Duolingo app and those learning in classroom environments, with both groups demonstrating a similar increase in achievements andself-efficacy.[94]
Duolingo's occasional use of 'erratic' phrases—such as "The bride is a woman and the groom is a hedgehog" or "The man eats ice cream with mustard"[95]—is reportedly derived from research published in 2018 by psychologists atGhent Universityin Belgium,[96]which concluded that such "semantically unpredictable sentences" were more effective for language learning than conventional and predictable phrases, based on the concept of "reward prediction errors", in which unexpected or surprising outcomes are more rewarding and thus encourage further learning.[97][95]
A 2022 study on adults using Duolingo as their only language learning tool, published in the journalForeign Language Annals, found that participants who completed a course had similar reading and listening proficiency to university students after four semesters of study, concluding that Duolingo could be an effective tool for language learning.[98]Another 2022 study of Malaysian students learning French, published by theNational University of MalaysiaPress, found that the app facilitated the acquisition of vocabulary and concluded that it was "well suited" for beginners in this regard.[99]
According to Duolingo's own 2021 study, five sections of the app are roughly equivalent to five semesters of university instruction, and Duolingo is an "effective tool [...] at an intermediate level".[100][101]A 2023 study funded by Duolingo concluded that Duolingo English learners did not make significant gains in grammar.[102]Duolingo English learners in Colombia and Spain were found to gain significantly more proficiency than students in a classroom, except in listening.[103]
Some language professionals have criticized the app for its limitations and gamified design.[104]Players have also reported that "gamification" has led to cheating, hacking, and incentivized game strategies that conflict with actual learning.[105]
In March 2022, Duolingo forums were discontinued,[106]and sentence discussions became read-only.[107]The change has been criticized on some social media sites.
In January 2023, Duolingo data covering over 2.6 million users' usernames, names, and phone numbers was sold on a hacker forum. Duolingo later stated that it would investigate the "dark web post".[108]The company concluded that the data had been obtained by scraping publicly available information through an exposed application programming interface (API).[109][110]A Duolingo spokesperson stated that the API is intentionally publicly visible.
Since the end of October 2023, Duolingo has stopped updating its Welsh course to "focus on languages in higher demand". Some users criticized this decision because it came at the expense of learners of a language with limited resources on the market and the potential halting of theWelsh Government's"Cymraeg 2050" strategy to promote Welsh language learning.[111][112]
Duolingo courses vary greatly in quality. While most popular language courses like Spanish or French are well developed, other courses for less studied languages like Ukrainian cover very little grammar and vocabulary.[113][114]
In 2025, CEO Luis von Ahn announced Duolingo would become an "AI-first" company and would replace contracted workers with artificial intelligence through automation.[115][116]This decision was met with public outcry, with many users declaring that they had ended their learning streaks in protest.[117]
In 2013,Applechose Duolingo as its iPhone App of the Year, a first for an educational application.[118]That year, Duolingo ranked No.7 onFast Company's"The World's Most Innovative Companies: Education Honorees" list "for crowdsourcing web translation by turning it into a free language-learning program".[119][120][121]Duolingo won Best Education Startup at the 2014Crunchies,[122][123]and was the most downloaded 'education app' in Google Play in 2013 and 2014.[124]In July 2020,PCMagnamed it "The Best Free Language Learning App".[125]
As a company, Duolingo has likewise won several awards and recognitions. In 2015, it was announced as that year's Index Award winner in the Play & Learning category byThe Index Project.[126]It wonInc.magazine's Best Workplaces 2018,[127]madeEntrepreneurmagazine's Top Company Culture List 2018,[128]was amongCNBC's "Disruptor 50" lists for 2018 and 2019,[129][130][131]and was ranked as one ofTIMEmagazine's 50 Genius Companies.[132]Duolingo was named one ofForbes's"Next Billion-Dollar Startups 2019".[133]In 2023, Duolingo won a Design Award during the 2023 edition of theApple Design Awards.[134]
Duolingo hasbrand charactersthat are used forengagementand creating storylines.[135][136]The main characters include:[137]
All characters mentioned above are human, with the exceptions of Duo, who is an owl, and Falstaff, who is a bear.
Due to the app's frequent reminder notifications, Duolingo's mascot, a green cartoonowlnamed Duo, has been the subject ofInternet memesantagonising him, with the character often depicted stalking or threatening users if they do not continue using the app.[151][152]
Duolingo has leaned into its online reputation and has adjusted its social media and marketing strategies accordingly.[153]Acknowledging the meme, Duolingo released a video onApril Fools' Day2019, depicting a facetious new feature called "Duolingo Push". In the video, users of "Duolingo Push" are reminded to use the app by Duo himself (depicted by apersonwearing a Duolingomascot costume), who stares at and follows them until they comply.[154][155]It was also acknowledged during Duolingo's 2022 April Fools' Day video, "Lawyer Fights Duolingo Owl for $2,700,000", where a fictitious law firm fights for those that have been harmed by Duolingo's owl mascot.[156]This was further referenced by the company in its 2024 April Fools' Day skit "Duo on Ice", in which the owl, in a mix of Spanish and English, admitted to having an appetite for human flesh, and if the user failed to continue their streak, they would "eat their head like apraying mantis."[157]In February 2020, as part of the company's partnership with the developers of the video gameAngry Birds 2, a skit depicting Duo and the red Angry Bird attacking a crowd was uploaded.[158]
Duolingo has effectively engaged with Generation Alpha through its YouTube shorts, which feature global meme trends and content such as songs, workplace insights, and humor, including dark comedy and jokes about "kidnapping children".[159]
In November 2019,Saturday Night Liveparodied Duolingo in a sketch where adults learned to communicate with children by using a fictitious course called "Duolingo for Talking to Children".[160]
The 2023 filmBarbiecontains a running gag where the husband of disgruntledMattelemployee Gloria uses Duolingo to learn Spanish, Gloria's native language.
Duo's Taqueria is ataqueria(a Mexican taco restaurant) in Pittsburgh, Pennsylvania, operated by Duolingo. The taqueria offers a variety of authentic Mexican tacos and other traditional dishes.[161]The restaurant encourages patrons to order in Spanish, aligning with Duolingo's mission of making language learning fun and accessible. Duolingo's taco shop brought in $700,000 in 2023.[162]
Duolingo is headquartered in Pittsburgh, Pennsylvania, and has offices inSeattle,New York,[163]Detroit,[164]Beijing, andBerlin.[165]
In 2024, Duolingo opened a new office in New York City, featuring an art gallery in which the company's characters are depicted in the style of famous historical paintings. The gallery showcases moving images of Duo and other characters in a range of artistic styles.[166][167]
Duolingo employs around 830 people.[168][169]
|
https://en.wikipedia.org/wiki/Duolingo
|
A word list is a list of words in a lexicon, generally sorted by frequency of occurrence (either by graded levels, or as a ranked list). A word list is compiled by lexical frequency analysis within a given text corpus, and is used in corpus linguistics to investigate genealogies and evolution of languages and texts. A word which appears only once in the corpus is called a hapax legomenon. In pedagogy, word lists are used in curriculum design for vocabulary acquisition. A lexicon sorted by frequency "provides a rational basis for making sure that learners get the best return for their vocabulary learning effort" (Nation 1997), but is mainly intended for course writers, not directly for learners. Frequency lists are also made for lexicographical purposes, serving as a sort of checklist to ensure that common words are not left out. Some major pitfalls are the corpus content, the corpus register, and the definition of "word". Although word counting is a thousand years old, and gigantic analyses were still done by hand in the mid-20th century, electronic natural language processing of large corpora such as movie subtitles (the SUBTLEX megastudy) has accelerated the research field.
Incomputational linguistics, afrequency listis a sorted list ofwords(word types) together with theirfrequency, where frequency here usually means the number of occurrences in a givencorpus, from which the rank can be derived as the position in the list.
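A minimal Python sketch of deriving such a ranked frequency list from a raw corpus is shown below. The tokenization rule (lowercasing and keeping internal apostrophes, as in "can't" or "aujourd'hui") is a simplifying assumption, not a standard, and the toy corpus is purely illustrative.

```python
import re
from collections import Counter

def frequency_list(text):
    """Return (word, count, rank) triples sorted by descending frequency."""
    # Simplistic tokenization: lowercase, keep internal apostrophes.
    tokens = re.findall(r"[a-zàâçéèêëîïôûùüÿœ]+(?:'[a-zàâçéèêëîïôûùüÿœ]+)*", text.lower())
    counts = Counter(tokens)
    ranked = counts.most_common()            # sorted by frequency; ties in arbitrary order
    return [(word, n, rank) for rank, (word, n) in enumerate(ranked, start=1)]

corpus = "the cat sat on the mat and the dog sat on the log"
for word, n, rank in frequency_list(corpus)[:5]:
    print(rank, word, n)
# Words with count 1 are the hapax legomena of this tiny corpus.
```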
Nation (Nation 1997) noted the enormous help provided by computing capabilities, which make corpus analysis much easier. He cited several key issues that influence the construction of frequency lists.
Most currently available studies are based on written text corpora, which are more easily available and easier to process.
However, New et al. 2007 proposed to tap into the large number of subtitles available online to analyse large amounts of speech. Brysbaert & New 2009 made a long critical evaluation of the traditional textual analysis approach, and supported a move toward speech analysis and analysis of film subtitles available online. The initial research saw a handful of follow-up studies,[1]providing valuable frequency count analyses for various languages. In-depth SUBTLEX studies over cleaned-up open subtitles were produced for French (New et al. 2007), American English (Brysbaert & New 2009; Brysbaert, New & Keuleers 2012), Dutch (Keuleers & New 2010), Chinese (Cai & Brysbaert 2010), Spanish (Cuetos et al. 2011), Greek (Dimitropoulou et al. 2010), Vietnamese (Pham, Bolger & Baayen 2011), Brazilian Portuguese (Tang 2012) and European Portuguese (Soares et al. 2015), Albanian (Avdyli & Cuetos 2013), Polish (Mandera et al. 2014), Catalan (2019[2]), and Welsh (Van Veuhen et al. 2024[3]). SUBTLEX-IT (2015) provides raw data only.[4]
In any case, the basic "word" unit should be defined. For Latin scripts, words are usually one or several characters separated either by spaces or punctuation. But exceptions can arise : English "can't" and French "aujourd'hui" include punctuations while French "chateau d'eau" designs a concept different from the simple addition of its components while including a space. It may also be preferable to group words of aword familyunder the representation of itsbase word. Thus,possible, impossible, possibilityare words of the same word family, represented by the base word*possib*. For statistical purpose, all these words are summed up under the base word form *possib*, allowing the ranking of a concept and form occurrence. Moreover, other languages may present specific difficulties. Such is the case of Chinese, which does not use spaces between words, and where a specified chain of several characters can be interpreted as either a phrase of unique-character words, or as a multi-character word.
It seems thatZipf's lawholds for frequency lists drawn from longer texts of any natural language. Frequency lists are a useful tool when building an electronic dictionary, which is a prerequisite for a wide range of applications incomputational linguistics.
German linguists define theHäufigkeitsklasse(frequency class)N{\displaystyle N}of an item in the list using thebase 2 logarithmof the ratio between its frequency and the frequency of the most frequent item. The most common item belongs to frequency class 0 (zero) and any item that is approximately half as frequent belongs in class 1. In the example list above, the misspelled wordoutragioushas a ratio of 76/3789654 and belongs in class 16.
The class can be computed as

N = ⌊0.5 − log₂(frequency of this item / frequency of the most frequent item)⌋,

where ⌊…⌋ is the floor function.
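A short Python sketch of this formula, checked against the figures given in the text (76 occurrences versus 3,789,654 for the most frequent item), might look as follows.

```python
import math

def frequency_class(count, max_count):
    """Häufigkeitsklasse: 0 for the most frequent item, +1 for each halving of frequency."""
    return math.floor(0.5 - math.log2(count / max_count))

print(frequency_class(3789654, 3789654))  # 0  (the most frequent item)
print(frequency_class(1894827, 3789654))  # 1  (roughly half as frequent)
print(frequency_class(76, 3789654))       # 16 (the misspelling "outragious" from the example)
```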
Frequency lists, together withsemantic networks, are used to identify the least common, specialized terms to be replaced by theirhypernymsin a process ofsemantic compression.
Those lists are not intended to be given directly to students, but rather to serve as a guideline for teachers and textbook authors (Nation 1997).Paul Nation's modern language teaching summary encourages first to "move from high frequency vocabulary and special purposes [thematic] vocabulary to low frequency vocabulary, then to teach learners strategies to sustain autonomous vocabulary expansion" (Nation 2006).
Word frequency is known to have various effects (Brysbaert et al. 2011;Rudell 1993). Memorization is positively affected by higher word frequency, likely because the learner is subject to more exposures (Laufer 1997). Lexical access is positively influenced by high word frequency, a phenomenon calledword frequency effect(Segui et al.). The effect of word frequency is related to the effect ofage-of-acquisition, the age at which the word was learned.
Below is a review of available resources.
Word counting is an ancient field,[5]with known discussions dating back to Hellenistic times. In 1944, Edward Thorndike, Irving Lorge and colleagues[6]hand-counted 18,000,000 running words to provide the first large-scale English language frequency list, before modern computers made such projects far easier (Nation 1997). Works from the 20th century all suffer from their age. In particular, words relating to technology, such as "blog" (ranked #7665 by frequency[7]in the Corpus of Contemporary American English in 2014,[8]and first attested in 1999[9][10][11]), do not appear in any of these three lists.
The Teacher Word Book contains 30,000 lemmas or ~13,000 word families (Goulden, Nation and Read, 1990). A corpus of 18 million written words was hand analysed. The size of its source corpus increased its usefulness, but its age, and language changes, have reduced its applicability (Nation 1997).
The General Service List contains 2,000 headwords divided into two sets of 1,000 words. A corpus of 5 million written words was analyzed in the 1940s. The rate of occurrence (%) for different meanings, and parts of speech, of each headword is provided. Various criteria, other than frequency and range, were carefully applied to the corpus. Thus, despite its age, some errors, and its corpus being entirely written text, it is still an excellent database of word frequency, frequency of meanings, and reduction of noise (Nation 1997). This list was updated in 2013 by Dr. Charles Browne, Dr. Brent Culligan and Joseph Phillips as the New General Service List.
A corpus of 5 million running words, from written texts used in United States schools (various grades, various subject areas). Its value is in its focus on school teaching materials, and its tagging of words by the frequency of each word, in each of the school grade, and in each of the subject areas (Nation 1997).
These now contain 1 million words from a written corpus representing different dialects of English. These sources are used to produce frequency lists (Nation 1997).
A review has been made byNew & Pallier.
An attempt was made in the 1950s–60s with the Français fondamental. It includes the F.F.1 list with 1,500 high-frequency words, completed by a later F.F.2 list with 1,700 mid-frequency words, and the most used syntax rules.[12]It is claimed that 70 grammatical words constitute 50% of a communicative sentence,[13][14]while 3,680 words provide about 95–98% coverage.[15]A list of 3,000 frequent words is available.[16]
The French Ministry of Education also provides a ranked list of the 1,500 most frequent word families, provided by the lexicologist Étienne Brunet.[17]Jean Baudot made a study on the model of the American Brown study, entitled "Fréquences d'utilisation des mots en français écrit contemporain".[18]
More recently, the project Lexique3 provides 142,000 French words, with orthography, phonetics, syllabification, part of speech, gender, number of occurrences in the source corpus, frequency rank, associated lexemes, etc., available under the open licence CC BY-SA 4.0.[19]
Lexique3 is an ongoing study, from which the SUBTLEX movement cited above originated. New et al. 2007 made a completely new count based on online film subtitles.
There have been several studies of Spanish word frequency (Cuetos et al. 2011).[20]
Chinese corpora have long been studied from the perspective of frequency lists. The historical way to learn Chinese vocabulary is based on character frequency (Allanic 2003). American sinologist John DeFrancis mentioned its importance for learning and teaching Chinese as a foreign language in Why Johnny Can't Read Chinese (DeFrancis 1966). As a frequency toolkit, Da (Da 1998) and the Taiwanese Ministry of Education (TME 1997) provided large databases with frequency ranks for characters and words. The HSK list of 8,848 high- and medium-frequency words in the People's Republic of China, and the Republic of China (Taiwan)'s TOP list of about 8,600 common traditional Chinese words, are two other lists displaying common Chinese words and characters. Following the SUBTLEX movement, Cai & Brysbaert 2010 recently made a rich study of Chinese word and character frequencies.
Wiktionarycontains frequency lists in more languages.[21]
Most frequently used words in different languages based on Wikipedia or combined corpora.[22]
|
https://en.wikipedia.org/wiki/Word_lists_by_frequency
|
Insemiotics, asignis anything thatcommunicatesameaningthat is not the sign itself to the interpreter of the sign. The meaning can be intentional, as when a word is uttered with a specific meaning, or unintentional, as when asymptomis taken as a sign of a particular medical condition. Signs can communicate through any of thesenses, visual, auditory, tactile, olfactory, or taste.
Two major theories describe the way signs acquire the ability to transfer information. Both theories understand the defining property of the sign as a relation between a number of elements. In semiology, the tradition of semiotics developed byFerdinand de Saussure(1857–1913), the sign relation is dyadic, consisting only of a form of the sign (the signifier) and its meaning (the signified). Saussure saw this relation as being essentially arbitrary (the principle ofsemiotic arbitrariness), motivated only bysocial convention. Saussure's theory has been particularly influential in the study of linguistic signs. The other majorsemiotic theory, developed byCharles Sanders Peirce(1839–1914), defines the sign as a triadic relation as "something that stands for something, to someone in some capacity".[1]This means that a sign is a relation between the sign vehicle (the specific physical form of the sign), a sign object (the aspect of the world that the sign carries meaning about) and an interpretant (the meaning of the sign as understood by an interpreter). According to Peirce, signs can be divided by the type of relation that holds the sign relation together as eithericons, indices orsymbols. Icons are those signs that signify by means ofsimilaritybetween sign vehicle and sign object (e.g. a portrait or map), indices are those that signify by means of a direct relation of contiguity or causality between sign vehicle and sign object (e.g. a symptom), and symbols are those that signify through a law or arbitrary social convention.
According toFerdinand de Saussure(1857–1913), a sign is composed of thesignifier[2](signifiant), and thesignified(signifié). These cannot be conceptualized as separate entities but rather as a mapping from significant differences in sound to potential (correct) differential denotation. The Saussurean sign exists only at the level of thesynchronicsystem, in which signs are defined by their relative and hierarchical privileges of co-occurrence. It is thus a common misreading of Saussure to take signifiers to be anything one could speak, and signifieds as things in the world. In fact, the relationship of language toparole(or speech-in-context) is and always has been a theoretical problem for linguistics (cf. Roman Jakobson's famous essay "Closing Statement: Linguistics and Poetics" et al.).
A famous thesis by Saussure states that the relationship between a sign and the real-world thing it denotes is an arbitrary one. There is not a natural relationship between a word and the object it refers to, nor is there a causal relationship between the inherent properties of the object and the nature of the sign used to denote it. For example, there is nothing about the physical quality of paper that requires denotation by the phonological sequence 'paper'. There is, however, what Saussure called 'relative motivation': the possibilities of signification of a signifier are constrained by thecompositionalityof elements in the linguistic system (cf.Émile Benveniste's paper on the arbitrariness of the sign in the first volume of his papers on general linguistics). In other words, a word is only available to acquire a new meaning if it is identifiablydifferentfrom all the other words in the language and it has no existing meaning.Structuralismwas later based on this idea that it is only within a given system that one can define the distinction between the levels of system and use, or the semantic "value" of a sign.
Charles Sanders Peirce(1839–1914) proposed a different theory. Unlike Saussure who approached the conceptual question from a study oflinguisticsandphonology, Peirce, considered the father ofPragmaticism, extended the concept of sign to embrace many other forms. He considered "word" to be only one particular kind of sign, and characterized sign as any mediational means tounderstanding. He covered not only artificial, linguistic and symbolic signs, but also all semblances (such as kindred sensible qualities), and all indicators (such as mechanical reactions). He counted as symbols all terms, propositions and arguments whose interpretation is based upon convention or habit, even apart from their expression in particular languages. He held that "all this universe is perfused with signs, if it is not composed exclusively of signs".[3]The setting of Peirce's study of signs is philosophical logic, which he defined as formal semiotic,[4]and characterized as a normative field following esthetics and ethics, as more basic than metaphysics,[5]and as the art of devising methods of research.[6]He argued that, since all thought takes time, all thought is in signs,[7]that all thought has the form of inference (even when not conscious and deliberate),[7]and that, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited.[8]The result is a theory not of language in particular, but rather of the production of meaning, and it rejects the idea of a static relationship between a sign and what it represents: itsobject. Peirce believed that signs are meaningful through recursive relationships that arise in sets of three.
Even when a sign represents by a resemblance or factual connection independent of interpretation, the sign is a sign only insofar as it is at least potentially interpretable by a mind and insofar as the sign is a determination of a mind or at least aquasi-mind, that functions as if it were a mind, for example in crystals and the work of bees[9]—the focus here is on sign action in general, not on psychology, linguistics, or social studies (fields Peirce also pursued).
A sign depends on an object in a way that enables (and, in a sense, determines) an interpretation, aninterpretant, to depend on the objectas the sign depends on the object. The interpretant, then, is a further sign of the object, and thus enables and determines still further interpretations, further interpretant signs. The process, calledsemiosis, is irreducibly triadic, Peirce held, and is logically structured to perpetuate itself. It is what defines sign, object and interpretant in general.[10]AsJean-Jacques Nattiezput it, "the process of referring effected by the sign isinfinite." (Peirce used the word "determine" in the sense not of strict determinism, but of effectiveness that can vary like an influence.[11][12])
Peirce further characterized the threesemiotic elementsas follows:[13]
Peirce explained that signs mediate between their objects and their interpretants in semiosis, the triadic process of determination. In semiosis afirstis determined or influenced to be a sign by asecond, as its object. The object determines the sign to determine athirdas an interpretant.Firstnessitself is one of Peirce'sthree categoriesof all phenomena, and is quality of feeling. Firstness is associated with a vague state of mind as feeling and a sense of the possibilities, with neither compulsion nor reflection. In semiosis the mind discerns an appearance or phenomenon, a potential sign.Secondnessis reaction or resistance, a category associated with moving from possibility to determinate actuality. Here, through experience outside of and collateral to the given sign or sign system, one recalls or discovers the object the sign refers to, for example when a sign consists in a chance semblance of an absent but remembered object. It is through one's collateral experience[15]that the object determines the sign to determine an interpretant.Thirdnessis representation or mediation, the category associated with signs, generality, rule, continuity, habit-taking and purpose. Here one forms an interpretant expressing a meaning or ramification of the sign about the object. When a second sign is considered, the initial interpretant may be confirmed, or new possible meanings may be identified. As each new sign is addressed, more interpretants, themselves signs, emerge. It can involve a mind's reading of nature, people, mathematics, anything.
Peirce generalized the communicational idea of utterance and interpretation of a sign, to cover all signs:[16]
Admitting that connected Signs must have a Quasi-mind, it may further be declared that there can be no isolated sign. Moreover, signs require at least two Quasi-minds; aQuasi-uttererand aQuasi-interpreter; and although these two are at one (i.e., are one mind) in the sign itself, they must nevertheless be distinct. In the Sign they are, so to say,welded. Accordingly, it is not merely a fact of human Psychology, but a necessity of Logic, that every logical evolution of thought should be dialogic.
According to Nattiez, writing withJean Molino, the tripartite definition of sign, object and interpretant is based on the "trace" orneutral level, Saussure's "sound-image" (or "signified", thus Peirce's "representamen"). Thus, "a symbolic form...is not some 'intermediary' in a process of 'communication' that transmits the meaning intended by the author to the audience; it is instead the result of a complexprocessof creation (thepoieticprocess) that has to do with the form as well as the content of the work; it is also the point of departure for a complex process of reception (theesthesicprocess thatreconstructsa 'message'").[17]
Molino's and Nattiez's diagram:
Peirce's theory of the sign therefore offered a powerful analysis of the signification system, its codes, and its processes of inference and learning—because the focus was often on natural or cultural context rather than linguistics, which only analyses usage in slow time whereas human semiotic interaction in the real world often has a chaotic blur of language and signal exchange. Nevertheless, the implication that triadic relations are structured to perpetuate themselves leads to a level of complexity not usually experienced in the routine of message creation and interpretation. Hence, different ways of expressing the idea have developed.
By 1903,[18]Peirce came toclassify signsby three universal trichotomies dependent on his three categories (quality, fact, habit). He classified any sign:[19]
Because of those classificatory interdependences, the three trichotomies intersect to form ten (rather than 27) classes of signs. There are also various kinds of meaningful combination. Signs can be attached to one another. A photograph is an index with a meaningfully attached icon. Arguments are composed of dicisigns, and dicisigns are composed of rhemes. In order to be embodied, legisigns (types) need sinsigns (tokens) as their individual replicas or instances. A symbol depends as a sign on how itwillbe interpreted, regardless of resemblance or factual connection to its object; but the symbol's individual embodiment is an index to your experience of the object. A symbol is instanced by a specialized indexical sinsign. A symbol such as a sentence in a language prescribes qualities of appearance for its instances, and is itself a replica of a symbol such as a proposition apart from expression in a particular language. Peirce covered both semantic and syntactical issues in his theoretical grammar, as he sometimes called it. He regarded formal semiotic, as logic, as furthermore encompassing study of arguments (hypothetical,deductiveandinductive) and inquiry's methods includingpragmatism; and as allied to but distinct from logic's pure mathematics.
Peirce sometimes referred to thegroundof a sign. The ground is the pure abstraction of a quality.[22]A sign's ground is therespectin which the sign represents its object, e.g. as inliteral and figurative language. For example, an iconpresentsa characteristic or quality attributed to an object, while a symbolimputesto an object a quality either presented by an icon or symbolized so as to evoke a mental icon.
Peirce called an icon apart from a label, legend, or other index attached to it, a "hypoicon", and divided the hypoicon into three classes: (a) theimage, which depends on a simple quality; (b) thediagram, whose internal relations, mainly dyadic or so taken, represent by analogy the relations in something; and (c) themetaphor, which represents the representative character of a sign by representing a parallelism in something else.[23]A diagram can be geometric, or can consist in an array of algebraic expressions, or even in the common form "All __ is ___" which is subjectable, like any diagram, to logical or mathematical transformations. Peirce held that mathematics is done by diagrammatic thinking—observation of, and experimentation on, diagrams. Peirce developed for deductive logic a system of visualexistential graphs, which continue to be researched today.
It is now agreed that the effectiveness of the acts that may convert the message into text (including speaking, writing, drawing, music and physical movements) depends uponthe knowledge of the sender. If the sender is not familiar with the current language, its codes and its culture, then he or she will not be able to say anything at all, whether as a visitor in a different language area or because of a medical condition such asaphasia.
Modern theories deny the Saussurian distinction between signifier and signified, and look for meaning not in the individual signs, but in their context and the framework of potential meanings that could be applied. Such theories assert that language is a collective memory or cultural history of all the different ways in which meaning has been communicated, and may to that extent, constitute all life's experiences (seeLouis Hjelmslev). Hjelmslev did not consider the sign to be the smallestsemioticunit, as he believed it possible to decompose it further; instead, he considered the "internal structure of language" to be a system offigurae, a concept somewhat related to that offigure of speech, which he considered to be the ultimate semiotic unit.[24][25][26]
This position implies that speaking is simply one more form of behaviour and changes the focus of attention from the text as language, to the text as arepresentationof purpose, a functional version ofauthorial intent. But, once the message has been transmitted, the text exists independently.[citation needed]
Hence, although the writers who co-operated to produce this page exist, they can only be represented by the signs actually selected and presented here. The interpretation process in the receiver's mind may attribute meanings completely different from those intended by the senders. But, why might this happen? Neither the sender nor the receiver of a text has a perfect grasp of all language. Each individual's relatively smallstockof knowledge is the product of personal experience and their attitude to learning. When theaudiencereceives the message, there will always be an excess of connotations available to be applied to the particular signs in their context (no matter how relatively complete or incomplete their knowledge, thecognitiveprocess is the same).[citation needed]
The first stage in understanding the message is therefore, to suspend or defer judgement until more information becomes available. At some point, the individual receiver decides which of all possible meanings represents the best possible fit. Sometimes, uncertainty may not be resolved, so meaning is indefinitely deferred, or a provisional or approximate meaning is allocated. More often, the receiver's desire forclosure(seeGestalt psychology) leads to simple meanings being attributed out of prejudices and without reference to the sender's intentions.[citation needed]
Incritical theory, the notion of sign is used variously. AsDaniel Chandlerhas said:
Many postmodernist theorists postulate a complete disconnection of the signifier and the signified. An 'empty' or 'floating signifier' is variously defined as a signifier with a vague, highly variable, unspecifiable or non-existent signified. Such signifiers mean different things to different people: they may stand for many or even any signifieds; they may mean whatever their interpreters want them to mean.[27]
In the semiotic theory ofFélix Guattari, semioticblack holesare the "a-temporal" destruction ofsigns.[28][further explanation needed]
|
https://en.wikipedia.org/wiki/Sign_(semiotics)
|
Many countries around the world maintain military units that are specifically trained to operate in acyberwarfareenvironment. In several cases these units act also as the nationalcomputer emergency response teamfor civiliancybersecuritythreats.
The units are listed country by country, typically grouped by service branch: inter-service commands and army, navy, air force, marine, coast guard, and space force units. Named entries include North Korea's General Staff Department of the Korean People's Army[80][81][82][83][84]and Reconnaissance General Bureau,[e]as well as cyber units of the Mexican Army and the Mexican Navy.
|
https://en.wikipedia.org/wiki/List_of_cyber_warfare_forces
|
Symmetryoccurs not only ingeometry, but also in other branches of mathematics. Symmetry is a type ofinvariance: the property that a mathematical object remains unchanged under a set ofoperationsortransformations.[1]
Given a structured objectXof any sort, asymmetryis amappingof the object onto itself which preserves the structure. This can occur in many ways; for example, ifXis a set with no additional structure, a symmetry is abijectivemap from the set to itself, giving rise topermutation groups. If the objectXis a set of points in the plane with itsmetricstructure or any othermetric space, a symmetry is abijectionof the set to itself which preserves the distance between each pair of points (i.e., anisometry).
In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are discussed below.
The types of symmetry considered in basic geometry includereflectional symmetry,rotational symmetry,translational symmetryandglide reflection symmetry, which are described more fully in the main articleSymmetry (geometry).
Let f(x) be a real-valued function of a real variable; then f is even if the following equation holds for all x and −x in the domain of f: f(x) = f(−x).
Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include |x|, x², x⁴, cos(x), and cosh(x).
Again, let f be a real-valued function of a real variable; then f is odd if the following equation holds for all x and −x in the domain of f: −f(x) = f(−x). That is, f(−x) = −f(x).
Geometrically, the graph of an odd function has rotational symmetry with respect to theorigin, meaning that itsgraphremains unchanged afterrotationof 180degreesabout the origin. Examples of odd functions arex,x3,sin(x),sinh(x), anderf(x).
Theintegralof an odd function from −Ato +Ais zero, provided thatAis finite and that the function is integrable (e.g., has no vertical asymptotes between −AandA).[3]
The integral of an even function from −Ato +Ais twice the integral from 0 to +A, provided thatAis finite and the function is integrable (e.g., has no vertical asymptotes between −AandA).[3]This also holds true whenAis infinite, but only if the integral converges.
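A quick numerical illustration of the two integral identities above, using a simple midpoint rule, follows; the particular functions and the interval A = 2 are arbitrary choices made for the sketch.

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

A = 2.0
odd_part = integrate(lambda x: x**3 + math.sin(x), -A, A)
even_full = integrate(lambda x: math.cos(x) + x**2, -A, A)
even_half = integrate(lambda x: math.cos(x) + x**2, 0.0, A)

print(abs(odd_part) < 1e-6)                   # odd function over [-A, A] integrates to ~0
print(abs(even_full - 2 * even_half) < 1e-6)  # even: integral over [-A, A] is twice that over [0, A]
```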
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if A = Aᵀ.
By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to themain diagonal. So if the entries are written asA= (aij), thenaij= aji, for all indicesiandj.
For example, the following 3×3 matrix is symmetric: \begin{bmatrix}1&7&3\\7&4&5\\3&5&6\end{bmatrix}.
Every squarediagonal matrixis symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of askew-symmetric matrixmust be zero, since each is its own negative.
In linear algebra, arealsymmetric matrix represents aself-adjoint operatorover arealinner product space. The corresponding object for acomplexinner product space is aHermitian matrixwith complex-valued entries, which is equal to itsconjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
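A small NumPy sketch of both points, the defining identity A = Aᵀ and the dedicated symmetric (Hermitian) eigensolver that numerical libraries provide, is given below; the matrix is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2             # symmetrize: A is now equal to its transpose

assert np.allclose(A, A.T)    # the defining property of a symmetric matrix

# NumPy provides a routine specialized for symmetric/Hermitian matrices; it is
# cheaper than the general solver and returns guaranteed-real eigenvalues.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)
```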
Thesymmetric groupSn(on afinite setofnsymbols) is thegroupwhose elements are all thepermutationsof thensymbols, and whosegroup operationis thecompositionof such permutations, which are treated asbijective functionsfrom the set of symbols to itself.[4]Since there aren! (nfactorial) possible permutations of a set ofnsymbols, it follows that theorder(i.e., the number of elements) of the symmetric groupSnisn!.
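A minimal sketch, using Python's itertools, of the order |Sₙ| = n! and of closure under composition; the convention p[q[i]] for composing permutations is one of the two common choices.

```python
from itertools import permutations
from math import factorial

n = 4
S_n = list(permutations(range(n)))   # all permutations of n symbols
assert len(S_n) == factorial(n)      # |S_n| = n!

def compose(p, q):
    """Composition (p after q): i -> p[q[i]], itself a permutation."""
    return tuple(p[q[i]] for i in range(len(p)))

p, q = S_n[5], S_n[17]
assert compose(p, q) in S_n          # the set is closed under composition
```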
Asymmetric polynomialis apolynomialP(X1,X2, ...,Xn) innvariables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally,Pis asymmetric polynomialif for anypermutationσ of the subscripts 1, 2, ...,n, one hasP(Xσ(1),Xσ(2), ...,Xσ(n)) =P(X1,X2, ...,Xn).
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, theelementary symmetric polynomialsare the most fundamental symmetric polynomials. Atheoremstates that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that everysymmetricpolynomial expressionin the roots of amonic polynomialcan alternatively be given as a polynomial expression in the coefficients of the polynomial.
In two variables X1 and X2, one has symmetric polynomials such as X1 + X2 and X1X2, and in three variables X1, X2 and X3 one has, for example, the symmetric polynomial X1X2X3.
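A short SymPy sketch (assuming SymPy is available) that checks permutation invariance directly by substitution; the polynomials tested are illustrative and include the elementary symmetric polynomial e₂ in three variables.

```python
from itertools import permutations
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
variables = (x1, x2, x3)

# Elementary symmetric polynomial e2 = x1*x2 + x1*x3 + x2*x3
P = x1*x2 + x1*x3 + x2*x3

def is_symmetric(poly, vars_):
    """Return True if poly is unchanged by every permutation of vars_."""
    for perm in permutations(vars_):
        substitution = dict(zip(vars_, perm))
        if sp.simplify(poly.subs(substitution, simultaneous=True) - poly) != 0:
            return False
    return True

print(is_symmetric(P, variables))            # True
print(is_symmetric(x1**2 + x2, variables))   # False: swapping x1 and x2 changes it
```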
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments:

T(v₁, v₂, …, v_r) = T(v_σ(1), v_σ(2), …, v_σ(r))

for every permutation σ of the symbols {1, 2, ..., r}. Alternatively, an rth-order symmetric tensor represented in coordinates as a quantity with r indices satisfies

T_{i₁ i₂ ⋯ i_r} = T_{i_σ(1) i_σ(2) ⋯ i_σ(r)}.
The space of symmetric tensors of rankron a finite-dimensionalvector spaceisnaturally isomorphicto the dual of the space ofhomogeneous polynomialsof degreeronV. Overfieldsofcharacteristic zero, thegraded vector spaceof all symmetric tensors can be naturally identified with thesymmetric algebraonV. A related concept is that of theantisymmetric tensororalternating form. Symmetric tensors occur widely inengineering,physicsandmathematics.
Given a polynomial, it may be that some of the roots are connected by variousalgebraic equations. For example, it may be that for two of the roots, sayAandB, thatA2+ 5B3= 7. The central idea of Galois theory is to consider thosepermutations(or rearrangements) of the roots having the property thatanyalgebraic equation satisfied by the roots isstill satisfiedafter the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients arerational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations.
Inabstract algebra, anautomorphismis anisomorphismfrom amathematical objectto itself. It is, in some sense, asymmetryof the object, and a way ofmappingthe object to itself while preserving all of its structure. The set of all automorphisms of an object forms agroup, called theautomorphism group. It is, loosely speaking, thesymmetry groupof the object.
In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives.
This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x⟩ and the other in state |y⟩: |ψ⟩ = Σ_{x,y} A(x, y) |x, y⟩,
and antisymmetry under exchange means thatA(x,y) = −A(y,x). This implies thatA(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantityA(x,y)is not a matrix but an antisymmetric rank-twotensor.
Conversely, if the diagonal quantities A(x, x) are zero in every basis, then the wavefunction component

A(x, y) = ⟨ψ|x, y⟩

is necessarily antisymmetric. To prove it, consider the matrix element

⟨ψ| (|x⟩ + |y⟩)(|x⟩ + |y⟩).

This is zero, because the two particles have zero probability to both be in the superposition state |x⟩ + |y⟩. But this is equal to

⟨ψ|x, x⟩ + ⟨ψ|x, y⟩ + ⟨ψ|y, x⟩ + ⟨ψ|y, y⟩.
The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey

⟨ψ|x, y⟩ + ⟨ψ|y, x⟩ = 0,

or

A(x, y) = −A(y, x).
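A numerical sketch of the same idea for a real-valued amplitude: antisymmetrizing a matrix forces the diagonal to vanish, and a change of orthonormal basis preserves antisymmetry. The 3×3 size and the random entries are arbitrary, and the real orthogonal transform stands in for a general unitary one.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M - M.T                        # antisymmetrized amplitude: A(x, y) = -A(y, x)

assert np.allclose(np.diag(A), 0.0)   # A(x, x) = 0: the Pauli-exclusion property

# An orthogonal change of basis keeps the amplitude antisymmetric.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A_new = U @ A @ U.T
assert np.allclose(A_new, -A_new.T)
```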
A relation is called symmetric if, whenever the relation holds from A to B, it also holds from B to A; formally, a binary relation R on a set is symmetric when aRb implies bRa for all a and b.
Note that symmetry is not the exact opposite ofantisymmetry.
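A tiny sketch that tests a finite relation, given as a set of ordered pairs, for symmetry; the sample pairs are arbitrary.

```python
def is_symmetric_relation(pairs):
    """A relation R is symmetric if (a, b) in R implies (b, a) in R."""
    relation = set(pairs)
    return all((b, a) in relation for (a, b) in relation)

print(is_symmetric_relation({(1, 2), (2, 1), (3, 3)}))  # True
print(is_symmetric_relation({(1, 2)}))                  # False: (2, 1) is missing
```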
Anisometryis adistance-preserving map betweenmetric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures arecongruentif they are related by an isometry: related by either arigid motion, or acompositionof a rigid motion and areflection. Up to a relation by a rigid motion, they are equal if related by adirect isometry.
Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc.[7]
A symmetry of adifferential equationis a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation.
A Lie symmetry of a system of differential equations is a continuous symmetry of the system. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order.[8]
Forordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration.
Symmetries may be found by solving a related set of ordinary differential equations.[8]Solving these equations is often much simpler than solving the original differential equations.
In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies adiscrete uniform distribution.
In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to acontinuous uniform distribution.
In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry.
There is one type ofisometry in one dimensionthat may leave the probability distribution unchanged, that is reflection in a point, for example zero.
A possible symmetry for randomness with positive outcomes is that reflection symmetry applies to the logarithm, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely.
For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.
|
https://en.wikipedia.org/wiki/Symmetry_in_mathematics
|
Apositioning systemis a system for determining thepositionof an object inspace.[1]Positioning system technologies exist ranging from interplanetary coverage with meter accuracy to workspace and laboratory coverage with sub-millimeter accuracy. A major subclass is made ofgeopositioningsystems, used for determining an object's position with respect to Earth, i.e., itsgeographical position; one of the most well-known and commonly used geopositioning systems is theGlobal Positioning System(GPS) and similarglobal navigation satellite systems(GNSS).
Interplanetary-radio communication systems not only communicate with spacecraft, but they are also used to determine their position.Radarcan track targets near the Earth, but spacecraft in deep space must have a workingtransponderon board to echo a radio signal back. Orientation information can be obtained usingstar trackers.
Global navigation satellite systems(GNSS) allow specialized radio receivers to determine their 3-D space position, as well as time, with an accuracy of 2–20 metres or tens of nanoseconds. Currently deployed systems use microwave signals that can only be received reliably outdoors and that cover most of Earth's surface, as well as near-Earth space.
The existing and planned systems include the United States' GPS, Russia's GLONASS, the European Union's Galileo, and China's BeiDou, along with regional systems such as Japan's QZSS and India's NavIC.
Networks of land-based positioning transmitters allow specializedradio receiversto determine their 2-D position on the surface of the Earth. They are generally less accurate than GNSS because their signals are not entirely restricted toline-of-sight propagation, and they have only regional coverage. However, they remain useful for special purposes and as a backup where their signals are more reliably received, including underground and indoors, and receivers can be built that consume very low battery power.LORANis an example of such a system.
Alocal positioning system(LPS) is a navigation system that provides location information in all weather, anywhere within the coverage of the network, where there is an unobstructedline of sightto three or more signalingbeaconsof which the exact position on Earth is known.[2][3][4][5]
UnlikeGPSor otherglobal navigation satellite systems,local positioning systemsdon't provide global coverage. Instead, they use beacons, which have a limited range, hence requiring the user to be near these. Beacons includecellularbase stations,Wi-FiandLiFiaccess points, and radiobroadcast towers.
In the past, long-range LPS's have been used for navigation of ships and aircraft. Examples are theDecca Navigator SystemandLORAN.
Nowadays, local positioning systems are often used as complementary (and in some cases alternative) positioning technology to GPS, especially in areas where GPS does not reach or is weak, for example,inside buildings, orurban canyons. Local positioning using cellular andbroadcast towerscan be used on cell phones that do not have a GPS receiver. Even if the phone has a GPS receiver, battery life will be extended if cell tower location accuracy is sufficient.
They are also used in trackless amusement rides likePooh's Hunny HuntandMystic Manor.
Examples of existing systems include
Indoor positioning systems are optimized for use within individual rooms, buildings, or construction sites. They typically offer centimeter-accuracy. Some provide6-Dlocation and orientation information.
Examples of existing systems include
These are designed to cover only a restricted workspace, typically a few cubic meters, but can offer accuracy in the millimeter-range or better. They typically provide 6-D position and orientation. Example applications includevirtual realityenvironments, alignment tools forcomputer-assisted surgeryor radiology, and cinematography (motion capture,match moving).
Examples:Wii Remotewith Sensor Bar, Polhemus Tracker, Precision Motion Tracking Solutions InterSense.[6]
A high-performance positioning system is used in manufacturing processes to move an object (tool or part) smoothly and accurately in six degrees of freedom, along a desired path, at a desired orientation, with high acceleration, high deceleration, high velocity and low settling time. It is designed to stop its motion quickly and place the moving object accurately at its desired final position and orientation with minimal jitter.
Examples: high velocitymachine tools,laser scanning,wire bonding,printed circuit boardinspection,lab automationassaying,flight simulators
Multiple technologies exist to determine the position and orientation of an object or person in a room, building or in the world.
Time of flightsystems determine the distance by measuring the time of propagation of pulsed signals between a transmitter and receiver. When distances of at least three locations are known, a fourth position can be determined usingtrilateration.Global Positioning Systemis an example.
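A minimal 2-D sketch of trilateration by linearized least squares is given below. The beacon coordinates, receiver position, and noise-free ranges are hypothetical; real systems must also handle receiver clock bias and measurement noise.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Estimate a 2-D position from distances to known beacons (linearized least squares)."""
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = beacons[0], ranges[0]
    # Subtracting the first equation |x - p_0|^2 = r_0^2 from the others
    # yields a linear system 2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2.
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    return solution

# Hypothetical beacon layout and a receiver actually located at (2, 3).
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_position = np.array([2.0, 3.0])
ranges = [np.linalg.norm(true_position - np.array(p)) for p in beacons]
print(trilaterate(beacons, ranges))   # approximately [2. 3.]
```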
Optical trackers, such aslaser ranging trackerssuffer fromline of sightproblems and their performance is adversely affected by ambient light and infrared radiation. On the other hand, they do not suffer from distortion effects in the presence of metals and can have high update rates because of the speed of light.[7]
Ultrasonic trackershave a more limited range because of the loss of energy with the distance traveled. Also they are sensitive to ultrasonic ambient noise and have a low update rate. But the main advantage is that they do not need line of sight.
Systems using radio waves, such as global navigation satellite systems, do not suffer from ambient light, but still need line of sight.
A spatial scan system uses (optical) beacons and sensors. Two categories can be distinguished:
By aiming the sensor at the beacon the angle between them can be measured. Withtriangulationthe position of the object can be determined.
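A minimal 2-D sketch of triangulation from two measured bearings, computed as the intersection of the two rays; the beacon positions and target location are hypothetical and the bearings are assumed noise-free.

```python
import math

def triangulate(p1, angle1, p2, angle2):
    """Intersect two bearing lines: each point p_i sees the target at angle_i (radians, from the +x axis)."""
    # Parametrize: target = p1 + t1*(cos a1, sin a1) = p2 + t2*(cos a2, sin a2); solve for t1.
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]     # zero if the two bearings are parallel
    t1 = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical setup: observation points at (0, 0) and (10, 0), target actually at (4, 3).
a1 = math.atan2(3 - 0, 4 - 0)    # bearing from the first point to the target
a2 = math.atan2(3 - 0, 4 - 10)   # bearing from the second point to the target
print(triangulate((0, 0), a1, (10, 0), a2))   # approximately (4.0, 3.0)
```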
The main advantage of aninertial sensingis that it does not require an external reference. Instead it measures rotation with agyroscopeor position with anaccelerometerwith respect to a known starting position and orientation. Because these systems measure relative positions instead of absolute positions they can suffer from accumulated errors and therefore are subject to drift. A periodic re-calibration of the system will provide more accuracy.
This type of tracking system uses mechanical linkages between the reference and the target. Two types of linkages have been used. One is an assembly of mechanical parts that can each rotate, providing the user with multiple rotation capabilities. The orientation of the linkages is computed from the various linkage angles measured with incremental encoders or potentiometers. Other types of mechanical linkages are wires that are rolled in coils. A spring system ensures that the wires are tensed in order to measure the distance accurately. The degrees of freedom sensed by mechanical linkage trackers are dependent upon the constitution of the tracker's mechanical structure. While six degrees of freedom are most often provided, typically only a limited range of motions is possible because of the kinematics of the joints and the length of each link. Also, the weight and the deformation of the structure increase with the distance of the target from the reference and impose a limit on the working volume.[8]
Phase difference systems measure the shift in phase of an incoming signal from an emitter on a moving target compared to the phase of an incoming signal from a reference emitter. With this, the relative motion of the emitter with respect to the receiver can be calculated.
Like inertial sensing systems, phase-difference systems can suffer from accumulated errors and are therefore subject to drift, but because the phase can be measured continuously they are able to generate high data rates. Omega (navigation system) is an example.
Direct field sensing systems use a known field to derive orientation or position: a simple compass uses the Earth's magnetic field to know its orientation in two directions.[8] An inclinometer uses the Earth's gravitational field to know its orientation in the remaining third direction. The field used for positioning does not need to originate from nature, however. A system of three electromagnets placed perpendicular to each other can define a spatial reference. On the receiver, three sensors measure the components of the field's flux received as a consequence of magnetic coupling. Based on these measurements, the system determines the position and orientation of the receiver with respect to the emitters' reference.
Optical positioning systems are based on optics components, such as in total stations.[9]
Magnetic positioning is an indoor positioning system (IPS) solution that takes advantage of the magnetic field anomalies typical of indoor settings by using them as distinctive place recognition signatures. The first citation of positioning based on magnetic anomaly can be traced back to military applications in 1970.[10] The use of magnetic field anomalies for indoor positioning was first claimed in 1999,[11] with later publications related to robotics in the early 2000s.[12][13]
Most recent applications can employ magnetic sensor data from a smartphone used to wirelessly locate objects or people inside a building.[14]
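A minimal sketch of the fingerprinting idea, assuming a pre-surveyed map from hypothetical locations to magnetic-field vectors and a simple nearest-neighbour match; production systems match sequences of measurements probabilistically rather than a single sample.

```python
def nearest_fingerprint(measurement, fingerprint_map):
    """Match a measured magnetic-field vector (Bx, By, Bz in microtesla)
    against a survey map of location -> field vector, returning the
    location whose stored signature is closest in Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(fingerprint_map, key=lambda loc: dist2(measurement, fingerprint_map[loc]))

# Hypothetical survey of a corridor, values in microtesla.
survey = {
    "room 101": (21.0, -3.5, 44.2),
    "room 102": (18.4, 0.8, 47.9),
    "stairwell": (25.6, -7.1, 39.3),
}
print(nearest_fingerprint((18.9, 0.5, 47.1), survey))  # -> "room 102"
```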
Because every technology has its pros and cons, most systems use more than one technology. A system based on relative position changes like the inertial system needs periodic calibration against a system with absolute position measurement. Systems combining two or more technologies are called hybrid positioning systems.[16]
Hybrid positioning systems are systems for finding the location of a mobile device using several different positioning technologies. Usually GPS (Global Positioning System) is one major component of such systems, combined with cell tower signals, wireless internet signals, Bluetooth sensors, IP addresses and network environment data.[17]
These systems are specifically designed to overcome the limitations of GPS, which is very exact in open areas but works poorly indoors or between tall buildings (the urban canyon effect). By comparison, cell tower signals are not hindered by buildings or bad weather, but usually provide less precise positioning. Wi-Fi positioning systems may give very exact positioning in urban areas with high Wi-Fi density, but depend on a comprehensive database of Wi-Fi access points.
Hybrid positioning systems are increasingly being explored for certain civilian and commercial location-based services and location-based media, which need to work well in urban areas in order to be commercially and practically viable.
Early works in this area include the Place Lab project, which started in 2003 and went inactive in 2006. Later methods let smartphones combine the accuracy of GPS with the low power consumption of cell-ID transition point finding.[18] In 2022, SuperGPS, a satellite-free positioning system with higher resolution than GPS that uses existing telecommunications networks, was demonstrated.[19][20]
|
https://en.wikipedia.org/wiki/Local_positioning_system
|
Automatic image annotation (also known as automatic image tagging or linguistic indexing) is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database.
This method can be regarded as a type of multi-class image classification with a very large number of classes - as large as the vocabulary size. Typically, image analysis in the form of extracted feature vectors and the training annotation words are used by machine learning techniques to attempt to automatically apply annotations to new images.[1] The first methods learned the correlations between image features and training annotations. Subsequently, techniques were developed using machine translation to attempt to translate the textual vocabulary into the 'visual vocabulary,' represented by clustered regions known as blobs. Subsequent work has included classification approaches, relevance models, and other related methods.
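As a rough sketch of the "very large multi-class (multi-label) classification" view, the snippet below trains one binary classifier per annotation word on toy feature vectors using scikit-learn; the feature values, tags and model choice are illustrative assumptions, not a method prescribed by the literature cited here.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer

# Toy feature vectors standing in for extracted image features
# (e.g. colour histograms); real systems use far richer descriptors.
X_train = np.array([[0.9, 0.1, 0.2],
                    [0.8, 0.2, 0.1],
                    [0.1, 0.9, 0.7],
                    [0.2, 0.8, 0.9]])
train_tags = [["sky", "outdoor"], ["sky"], ["grass", "outdoor"], ["grass"]]

mlb = MultiLabelBinarizer()
Y_train = mlb.fit_transform(train_tags)      # tags -> binary indicator matrix

# One binary classifier per annotation word.
model = OneVsRestClassifier(LogisticRegression()).fit(X_train, Y_train)

X_new = np.array([[0.85, 0.15, 0.15]])
predicted = mlb.inverse_transform(model.predict(X_new))
print(predicted)   # e.g. [('sky',)]
```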
The advantages of automatic image annotation versus content-based image retrieval (CBIR) are that queries can be more naturally specified by the user.[2] At present, CBIR generally requires users to search by image concepts such as color and texture or by finding example queries. However, certain image features in example images may override the concept that the user is truly focusing on. Traditional methods of image retrieval, such as those used by libraries, have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly growing image databases in existence.
|
https://en.wikipedia.org/wiki/Automatic_image_annotation
|
The XOP (eXtended Operations[1]) instruction set, announced by AMD on May 1, 2009, is an extension to the 128-bit SSE core instructions in the x86 and AMD64 instruction sets for the Bulldozer processor core, which was released on October 12, 2011.[2] However, AMD removed support for XOP from the Zen microarchitecture onward.[3]
The XOP instruction set contains several different types of vector instructions since it was originally intended as a major upgrade to SSE. Most of the instructions are integer instructions, but it also contains floating point permutation and floating point fraction extraction instructions. See the index for a list of instruction types.
XOP is a revised subset of what was originally intended as SSE5. It was changed to be similar to, but not overlapping with, AVX; parts that overlapped with AVX were removed or moved to separate standards such as FMA4 (floating-point vector multiply–accumulate) and CVT16 (half-precision floating-point conversion, implemented as F16C by Intel).[1]
All SSE5 instructions that were equivalent or similar to instructions in the AVX and FMA4 instruction sets announced by Intel have been changed to use the coding proposed by Intel. Integer instructions without equivalents in AVX were classified as the XOP extension.[1] The XOP instructions have an opcode byte 8F (hexadecimal), but otherwise use an almost identical coding scheme to AVX with the 3-byte VEX prefix.
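The following sketch splits the three XOP prefix bytes into their fields under the assumption that the layout mirrors the 3-byte VEX prefix (escape byte 8F instead of C4); the example byte values are invented and do not correspond to a particular instruction.

```python
def decode_xop_prefix(b0, b1, b2):
    """Split the three XOP prefix bytes into their fields. The layout is
    assumed to mirror the 3-byte VEX prefix; only the escape byte (8F vs C4)
    and the allowed map_select values differ."""
    assert b0 == 0x8F, "XOP prefix must start with the 8F escape byte"
    return {
        "R": (b1 >> 7) & 1,            # inverted extension of ModRM.reg
        "X": (b1 >> 6) & 1,            # inverted extension of SIB.index
        "B": (b1 >> 5) & 1,            # inverted extension of ModRM.rm / base
        "map_select": b1 & 0x1F,       # must be >= 8 for XOP to avoid overlap
        "W": (b2 >> 7) & 1,
        "vvvv": (~(b2 >> 3)) & 0xF,    # inverted second source register
        "L": (b2 >> 2) & 1,            # 0 = 128-bit, 1 = 256-bit
        "pp": b2 & 0x3,                # implied legacy prefix (00 for XOP)
    }

# Example bytes chosen purely for illustration, not a real instruction dump.
print(decode_xop_prefix(0x8F, 0xE9, 0x78))
```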
Commentators[4] have seen this as evidence that Intel has not allowed AMD to use any part of the large VEX coding space. AMD has been forced to use different codes in order to avoid using any code combination that Intel might possibly be using in its development pipeline for something else. The XOP coding scheme is as close to the VEX scheme as technically possible without risking that the AMD codes overlap with future Intel codes. This inference is speculative, since no public information is available about negotiations between the two companies on this issue.
The use of the 8F byte requires that the m-bits (see VEX coding scheme) have a value larger than or equal to 8 in order to avoid overlap with existing instructions.[Note 1] The C4 byte used in the VEX scheme has no such restriction. This may prevent the use of the m-bits for other purposes in the future in the XOP scheme, but not in the VEX scheme. Another possible problem is that the pp bits have the value 00 in the XOP scheme, while they have the value 01 in the VEX scheme for instructions that have no legacy equivalent. This may complicate the use of the pp bits for other purposes in the future.
A similar compatibility issue is the difference between the FMA3 and FMA4 instruction sets. Intel initially proposed FMA4 in AVX/FMA specification version 3 to supersede the 3-operand FMA proposed by AMD in SSE5. After AMD adopted FMA4, Intel canceled FMA4 support and reverted to FMA3 in the AVX/FMA specification version 5 (see FMA history).[1][5][6]
In March 2015, AMD explicitly revealed in the description of the patch for the GNU Binutils package that Zen, its third-generation x86-64 architecture in its first iteration (znver1 – Zen, version 1), will not support the TBM, FMA4, XOP and LWP instructions developed specifically for the "Bulldozer" family of micro-architectures.[7][8]
These are integer versions of the FMA instruction set. They are all four-operand instructions similar to FMA4, and they all operate on signed integers; a sketch of the general multiply–accumulate pattern follows the examples below.
r0 = a0 * b0 + c0, r1 = a1 * b1 + c1, ...
r0 = a0 * b0 + c0, r1 = a2 * b2 + c1, ...[2]
r0 = a0 * b0 + c0, r1 = a1 * b1 + c1, ...
r0 = a0 * b0 + c0, r1 = a2 * b2 + c1
r0 = a1 * b1 + c0, r1 = a3 * b3 + c1
r0 = a0 * b0 + a1 * b1 + c0, r1 = a2 * b2 + a3 * b3 + c1, ...
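A pure-Python model of the basic packed multiply–accumulate pattern shared by these instructions; lane widths, saturation and the horizontal variants are deliberately omitted.

```python
def packed_mac(a, b, c):
    """Element-wise multiply-accumulate over equally long integer lanes:
    r[i] = a[i]*b[i] + c[i]. Real XOP instructions additionally define
    lane widths and saturating variants, which this toy model omits."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

print(packed_mac([1, 2, 3, 4], [5, 6, 7, 8], [10, 10, 10, 10]))
# -> [15, 22, 31, 42]
```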
Horizontal addition instructions add adjacent values in the input vector to each other. The output size in the instructions below describes how wide the performed horizontal addition is. For instance, horizontal byte-to-word addition adds two bytes at a time and returns the result as a vector of words, while byte-to-quadword addition adds eight bytes at a time and returns the result as a vector of quadwords. Six additional horizontal addition and subtraction instructions can be found in SSSE3, but they operate on two input vectors and only add or subtract pairs of elements; a sketch of the widening horizontal add follows the examples below.
r0 = a0 + a1, r1 = a2 + a3, r2 = a4 + a5, ...
r0 = a0 + a1 + a2 + a3, r1 = a4 + a5 + a6 + a7, ...
r0 = a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7, ...
r0 = a0 + a1, r1 = a2 + a3, r2 = a4 + a5, ...
r0 = a0 + a1 + a2 + a3, r1 = a4 + a5 + a6 + a7
r0 = a0 + a1, r1 = a2 + a3
r0 = a0 - a1, r1 = a2 - a3, r2 = a4 - a5, ...
r0 = a0 - a1, r1 = a2 - a3, r2 = a4 - a5, ...
r0 = a0 - a1, r1 = a2 - a3
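A pure-Python model of the widening horizontal add described above; the group size stands in for the ratio between output and input lane widths.

```python
def horizontal_add(lanes, group):
    """Sum each consecutive group of `group` input lanes into one wider
    output lane, e.g. group=2 models byte->word and group=8 models
    byte->quadword horizontal addition."""
    return [sum(lanes[i:i + group]) for i in range(0, len(lanes), group)]

data = [1, 2, 3, 4, 5, 6, 7, 8]
print(horizontal_add(data, 2))  # -> [3, 7, 11, 15]
print(horizontal_add(data, 8))  # -> [36]
```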
This set of vector compare instructions all take an immediate as an extra argument. The immediate controls what kind of comparison is performed; there are eight possible comparisons for each instruction. The vectors are compared, and comparisons that evaluate to true set all corresponding bits in the destination to 1, while false comparisons set those bits to 0. This result can be used directly in the VPCMOV instruction for a vectorized conditional move.
VPCMOV works as a bitwise variant of the blend instructions in SSE4. Like the AVX instruction VPBLENDVB, it is a four-operand instruction with three source operands and a destination. For each bit in the third operand (which acts as a selector), 1 selects the corresponding bit in the first source and 0 selects the corresponding bit in the second source. When used together with the XOP vector comparison instructions above, this can be used to implement a vectorized ternary move or, if the second input is the same as the destination, a conditional move (CMOV).
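A sketch of the bitwise-select semantics, modelling the 128-bit operands as Python integers; combined with an all-ones/all-zeros mask such as a compare result, it behaves as a per-lane conditional move.

```python
def vpcmov(src1, src2, selector, width=128):
    """Bit-wise select: where a selector bit is 1, take the bit from src1;
    where it is 0, take the bit from src2 (operands modelled as integers)."""
    mask = (1 << width) - 1
    return (src1 & selector) | (src2 & ~selector & mask)

# All-ones / all-zeros lanes such as those produced by the XOP compare
# instructions turn this into a per-lane conditional move.
a, b = 0xAAAA, 0x5555
print(hex(vpcmov(a, b, 0xFF00, width=16)))  # -> 0xaa55
```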
The shift instructions here differ from those in SSE2 in that they can shift each unit by a different amount using a vector register interpreted as packed signed integers. The sign indicates the direction of the shift or rotate, with positive values causing a left shift and negative values a right shift.[10] Intel has specified a different, incompatible set of variable vector shift instructions in AVX2.[11]
VPPERM is a single instruction that combines the SSSE3 instructions PALIGNR and PSHUFB and adds more to both. Some compare it to the AltiVec instruction VPERM.[12] It takes three registers as input: the first two are source registers and the third is the selector register. Each byte in the selector selects one of the bytes in one of the two input registers for the output. The selector can also apply effects to the selected byte, such as setting it to 0, reversing the bit order, or repeating the most significant bit. In addition, any of the effects or inputs can be inverted.
The VPERMIL2PD and VPERMIL2PS instructions are two-source versions of the VPERMILPD and VPERMILPS instructions in AVX, which means that, like VPPERM, they can select output from any of the fields in the two inputs.
These instructions extract the fractional part of floating point values, that is, the part that would be lost in conversion to an integer.
|
https://en.wikipedia.org/wiki/XOP_instruction_set
|
ITIL security management describes the structured fitting of security into an organization. ITIL security management is based on the ISO/IEC 27001 standard. "ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for-profit organizations).[1] ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties."
A basic concept of security management is information security. The primary goal of information security is to control access to information. The value of the information is what must be protected. These values include confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability.
The goal of security management comes in two parts:
Security requirements are defined in SLAs, along with legislation (if applicable) and other contracts. These requirements can act as key performance indicators (KPIs) that can be used for process management and for interpreting the results of the security management process.
The security management process relates to other ITIL processes. However, in this particular section the most obvious relations are those to the service level management, incident management and change management processes.
Security management is a continuous process that can be compared to W. Edwards Deming's Quality Circle (Plan, Do, Check, Act).
The inputs are requirements from clients. The requirements are translated into security services and security metrics. Both the client and the plan sub-process affect the SLA. The SLA is an input for both the client and the process. The provider develops security plans for the organization. These plans contain policies and operational level agreements. The security plans (Plan) are then implemented (Do) and the implementation is then evaluated (Check). After the evaluation, the plans and the plan implementation are maintained (Act).
The activities, results/products and the process are documented. External reports are written and sent to the clients. The clients are then able to adapt their requirements based on the information received through the reports. Furthermore, the service provider can adjust their plan or the implementation based on their findings in order to satisfy all the requirements stated in the SLA (including new requirements).
The first activity in the security management process is the “Control” sub-process. The Control sub-process organizes and manages the security management process. The Control sub-process defines the processes, the allocation of responsibility for the policy statements and the management framework.
The security management framework defines the sub-processes for development, implementation and evaluations into action plans. Furthermore, the management framework defines how results should be reported to clients.
The meta-process model of the control sub-process is based on a UML activity diagram and gives an overview of the activities of the Control sub-process. The grey rectangle represents the control sub-process and the smaller beam shapes inside it represent activities that take place inside it.
The meta-data model of the control sub-process is based on a UML class diagram. Figure 2.1.2 shows the metamodel of the control sub-process.
Figure 2.1.2: Meta-process model control sub-process
The CONTROL rectangle with a white shadow is an open complex concept. This means that the Control rectangle consists of a collection of (sub) concepts.
Figure 2.1.3 is the process data model of the control sub-process. It shows the integration of the two models. The dotted arrows indicate the concepts that are created or adjusted in the corresponding activities.
Figure 2.1.3: Process-data model control sub-process
The Plan sub-process contains activities that, in cooperation with service level management, lead to the (information) security section in the SLA. Furthermore, the Plan sub-process contains activities that are related to the underpinning contracts which are specific to (information) security.
In the Plan sub-process the goals formulated in the SLA are specified in the form of operational level agreements (OLAs). These OLAs can be defined as security plans for a specific internal organizational entity of the service provider.
Besides the input of the SLA, the Plan sub-process also works with the policy statements of the service provider itself. As said earlier these policy statements are defined in the control sub-process.
The operational level agreements for information security are set up and implemented based on the ITIL process. This requires cooperation with other ITIL processes. For example, if security management wishes to change the IT infrastructure in order to enhance security, these changes will be done through the change management process. Security management delivers the input (request for change) for this change. The Change Manager is responsible for the change management process.
Plan consists of a combination of unordered and ordered (sub) activities. The sub-process contains three complex activities that are all closed activities and one standard activity.
Just as with the Control sub-process, the Plan sub-process is modeled using the meta-modeling technique. The left side of figure 2.2.1 is the meta-data model of the Plan sub-process.
The Plan rectangle is an open (complex) concept which has an aggregation type of relationship with two closed (complex) concepts and one standard concept. The two closed concepts are not expanded in this particular context.
The following picture (figure 2.2.1) is the process-data diagram of the Plan sub-process. This picture shows the integration of the two models. The dotted arrows indicate which concepts are created or adjusted in the corresponding activities of the Plan sub-process.
Figure 2.2.1: Process-data model Plan sub-process
The Implementation sub-process makes sure that all measures, as specified in the plans, are properly implemented. During the Implementation sub-process no measures are defined nor changed. The definition or change of measures takes place in the Plan sub-process in cooperation with the Change Management Process.
Changes are formally identified by type, e.g., project scope change request, validation change request, infrastructure change request; this process leads to the asset classification and control documents.
The left side of figure 2.3.1 is the meta-process model of the Implementation phase. The four labels with a black shadow mean that these activities are closed concepts and they are not expanded in this context. No arrows connect these four activities, meaning that these activities are unordered and the reporting will be carried out after the completion of all four activities.
During the implementation phase, concepts are created and/or adjusted.
The concepts created and/or adjusted are modeled using the meta-modeling technique. The right side of figure 2.3.1 is the meta-data model of the implementation sub-process.
The implementation documents concept is an open concept and is expanded upon in this context. It consists of four closed concepts that are not expanded because they are irrelevant in this particular context.
In order to make the relations between the two models clearer, their integration is illustrated in figure 2.3.1. The dotted arrows running from the activities to the concepts illustrate which concepts are created or adjusted in the corresponding activities.
Figure 2.3.1: Process-data model Implementation sub-process
Evaluation is necessary to measure the success of the implementation and security plans. The evaluation is important for clients (and possibly third parties). The results of the Evaluation sub-process are used to maintain the agreed measures and the implementation. Evaluation results can lead to new requirements and a correspondingRequest for Change. The request for change is then defined and sent to Change Management.
The three sorts of evaluation are self-assessment, internal audit and external audit.
The self-assessment is mainly carried out in the organization of the processes. Internal audits are carried out by internal IT auditors. External audits are carried out by external, independent IT auditors. Besides those already mentioned, an evaluation based on communicated security incidents occurs. The most important activities for this evaluation are the security monitoring of IT systems, verifying the implementation of security legislation and the security plan, and tracing and reacting to undesirable use of IT supplies.
Figure 2.4.1: Process-data model Evaluation sub-process
The process-data diagram illustrated in the figure 2.4.1 consists of a meta-process model and a meta-data model. The Evaluation sub-process was modeled using the meta-modeling technique.
The dotted arrows running from the meta-process diagram (left) to the meta-data diagram (right) indicate which concepts are created/ adjusted in the corresponding activities. All of the activities in the evaluation phase are standard activities. For a short description of the Evaluation phase concepts see Table 2.4.2 where the concepts are listed and defined.
Table 2.4.2: Concept and definition evaluation sub-process Security management
Because of organizational and IT-infrastructure changes, security risks change over time, requiring revisions to the security section of service level agreements and security plans.
Maintenance is based on the results of the Evaluation sub-process and insight into the changing risks. These activities produce proposals. The proposals either serve as inputs for the Plan sub-process and travel through the cycle, or can be adopted as part of maintaining the service level agreements. In both cases the proposals could lead to activities in the action plan. The actual changes are made by the Change Management process.
Figure 2.5.1 is the process-data diagram of the maintenance sub-process. This picture shows the integration of the meta-process model (left) and the meta-data model (right). The dotted arrows indicate which concepts are created or adjusted in the activities of the maintenance phase.
Figure 2.5.1: Process-data model Maintenance sub-process
The maintenance sub-process starts with the maintenance of the service level agreements and the maintenance of the operational level agreements. After these activities take place (in no particular order), if there is a request for a change, the request for change activity takes place, and after it is concluded the reporting activity starts. If there is no request for a change, the reporting activity starts directly after the first two activities. The concepts in the meta-data model are created or adjusted during the maintenance phase. For a list of the concepts and their definitions, see table 2.5.2.
Table 2.5.2: Concept and definition Plan sub-process Security management
Figure 2.6.1: Complete process-data model Security Management process
The Security Management Process, as stated in the introduction, has relations with almost all other ITIL-processes. These processes are:
Within these processes, activities concerning security are required. The relevant process and its process manager are responsible for these activities. However, Security Management gives indications to the relevant process on how to structure these activities.
Internal e-mail is subject to multiple security risks, requiring a corresponding security plan and policies. In this example the ITIL security management approach is used to implement e-mail policies.
The Security management team is formed and process guidelines are formulated and communicated to all employees and providers. These actions are carried out in the Control phase.
In the subsequent Planning phase, policies are formulated. Policies specific to e-mail security are formulated and added to service level agreements. At the end of this phase the entire plan is ready to be implemented.
Implementation is done according to the plan.
After implementation the policies are evaluated, either as self-assessments, or via internal or external auditors.
In the maintenance phase the e-policies are adjusted based on the evaluation. Needed changes are processed via Requests for Change.
|
https://en.wikipedia.org/wiki/ITIL_security_management
|
In mathematics, the Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)} are certain constant-recursive integer sequences that satisfy the recurrence relation
xn=Pxn−1−Qxn−2{\displaystyle x_{n}=Px_{n-1}-Qx_{n-2}}
where P{\displaystyle P} and Q{\displaystyle Q} are fixed integers. Any sequence satisfying this recurrence relation can be represented as a linear combination of the Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)}.
More generally, Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)} represent sequences of polynomials in P{\displaystyle P} and Q{\displaystyle Q} with integer coefficients.
Famous examples of Lucas sequences include the Fibonacci numbers, Mersenne numbers, Pell numbers, Lucas numbers, Jacobsthal numbers, and a superset of Fermat numbers (see below). Lucas sequences are named after the French mathematician Édouard Lucas.
Given two integer parameters P{\displaystyle P} and Q{\displaystyle Q}, the Lucas sequences of the first kind Un(P,Q){\displaystyle U_{n}(P,Q)} and of the second kind Vn(P,Q){\displaystyle V_{n}(P,Q)} are defined by the recurrence relations:
U0(P,Q)=0, U1(P,Q)=1, Un(P,Q)=PUn−1(P,Q)−QUn−2(P,Q) for n>1{\displaystyle U_{0}(P,Q)=0,\quad U_{1}(P,Q)=1,\quad U_{n}(P,Q)=PU_{n-1}(P,Q)-QU_{n-2}(P,Q){\text{ for }}n>1}
and
V0(P,Q)=2, V1(P,Q)=P, Vn(P,Q)=PVn−1(P,Q)−QVn−2(P,Q) for n>1{\displaystyle V_{0}(P,Q)=2,\quad V_{1}(P,Q)=P,\quad V_{n}(P,Q)=PV_{n-1}(P,Q)-QV_{n-2}(P,Q){\text{ for }}n>1}
It is not hard to show that for n>0{\displaystyle n>0},
The above relations can be stated in matrix form as follows:
Initial terms of Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)} are given in the table:
The characteristic equation of the recurrence relation for Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)} is:
x2−Px+Q=0{\displaystyle x^{2}-Px+Q=0}
It has the discriminant D=P2−4Q{\displaystyle D=P^{2}-4Q} and the roots:
a=(P+√D)/2 and b=(P−√D)/2{\displaystyle a={\frac {P+{\sqrt {D}}}{2}}\quad {\text{and}}\quad b={\frac {P-{\sqrt {D}}}{2}}.}
Thus:
Note that the sequence an{\displaystyle a^{n}} and the sequence bn{\displaystyle b^{n}} also satisfy the recurrence relation. However, these might not be integer sequences.
When D≠0{\displaystyle D\neq 0}, a and b are distinct and one quickly verifies that
It follows that the terms of Lucas sequences can be expressed in terms of a and b as follows:
Un=(an−bn)/(a−b) and Vn=an+bn{\displaystyle U_{n}={\frac {a^{n}-b^{n}}{a-b}}\quad {\text{and}}\quad V_{n}=a^{n}+b^{n}.}
The case D=0{\displaystyle D=0} occurs exactly when P=2S and Q=S2{\displaystyle P=2S{\text{ and }}Q=S^{2}} for some integer S so that a=b=S{\displaystyle a=b=S}. In this case one easily finds that
Un(P,Q)=nSn−1 and Vn(P,Q)=2Sn{\displaystyle U_{n}(P,Q)=nS^{n-1}\quad {\text{and}}\quad V_{n}(P,Q)=2S^{n}.}
The ordinary generating functions are
When Q=±1{\displaystyle Q=\pm 1}, the Lucas sequences Un(P,Q){\displaystyle U_{n}(P,Q)} and Vn(P,Q){\displaystyle V_{n}(P,Q)} satisfy certain Pell equations:
The terms of Lucas sequences satisfy relations that are generalizations of those between Fibonacci numbers Fn=Un(1,−1){\displaystyle F_{n}=U_{n}(1,-1)} and Lucas numbers Ln=Vn(1,−1){\displaystyle L_{n}=V_{n}(1,-1)}. For example:
Among the consequences is that Ukm(P,Q){\displaystyle U_{km}(P,Q)} is a multiple of Um(P,Q){\displaystyle U_{m}(P,Q)}, i.e., the sequence (Um(P,Q))m≥1{\displaystyle (U_{m}(P,Q))_{m\geq 1}} is a divisibility sequence. This implies, in particular, that Un(P,Q){\displaystyle U_{n}(P,Q)} can be prime only when n is prime.
Another consequence is an analog of exponentiation by squaring that allows fast computation of Un(P,Q){\displaystyle U_{n}(P,Q)} for large values of n.
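One straightforward way to realize such a fast computation is exponentiation by squaring of the companion matrix of the recurrence (the identities above yield an equivalent doubling scheme); a minimal sketch computing U_n and V_n in O(log n) matrix products:

```python
def lucas_uv(n, P, Q):
    """Compute (U_n, V_n) for the Lucas sequences with parameters P, Q by
    raising the companion matrix [[P, -Q], [1, 0]] to the n-th power using
    exponentiation by squaring."""
    def mat_mul(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    result = [[1, 0], [0, 1]]          # identity matrix
    base = [[P, -Q], [1, 0]]
    e = n
    while e:
        if e & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        e >>= 1
    # [U_{n+1}, U_n] = M^n [U_1, U_0] = M^n [1, 0]
    # [V_{n+1}, V_n] = M^n [V_1, V_0] = M^n [P, 2]
    U_n = result[1][0] * 1 + result[1][1] * 0
    V_n = result[1][0] * P + result[1][1] * 2
    return U_n, V_n

print(lucas_uv(10, 1, -1))  # Fibonacci / Lucas numbers: (55, 123)
```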
Moreover, if gcd(P,Q)=1{\displaystyle \gcd(P,Q)=1}, then (Um(P,Q))m≥1{\displaystyle (U_{m}(P,Q))_{m\geq 1}} is a strong divisibility sequence.
Other divisibility properties are as follows:[1]
The last fact generalizes Fermat's little theorem. These facts are used in the Lucas–Lehmer primality test.
Like Fermat's little theorem, the converse of the last fact holds often, but not always; there exist composite numbers n relatively prime to D and dividing Ul{\displaystyle U_{l}}, where l=n−(Dn){\displaystyle l=n-\left({\tfrac {D}{n}}\right)}. Such composite numbers are called Lucas pseudoprimes.
A prime factor of a term in a Lucas sequence which does not divide any earlier term in the sequence is called primitive. Carmichael's theorem states that all but finitely many of the terms in a Lucas sequence have a primitive prime factor.[2] Indeed, Carmichael (1913) showed that if D is positive and n is not 1, 2 or 6, then Un{\displaystyle U_{n}} has a primitive prime factor. In the case where D is negative, a deep result of Bilu, Hanrot, Voutier and Mignotte[3] shows that if n > 30, then Un{\displaystyle U_{n}} has a primitive prime factor and determines all cases where Un{\displaystyle U_{n}} has no primitive prime factor.
The Lucas sequences for some values of P and Q have specific names:
Some Lucas sequences have entries in the On-Line Encyclopedia of Integer Sequences:
Sagemath implements Un{\displaystyle U_{n}} and Vn{\displaystyle V_{n}} as lucas_number1() and lucas_number2(), respectively.[7]
|
https://en.wikipedia.org/wiki/Lucas_sequence
|