JAB Code (Just Another Barcode) is a colour 2D matrix symbology made of colour squares arranged in either square or rectangle grids. It was developed by the Fraunhofer Institute for Secure Information Technology SIT.[1] The code contains one primary symbol and optionally multiple secondary symbols. The primary symbol contains four finder patterns located at the corners of the symbol.[2] The code uses either four or eight colours.[3] The four basic colours (cyan, magenta, yellow, and black) are the four primary colours of the subtractive CMYK colour model, which is the most widely used system in the industry for colour printing on a white base such as paper. The other four colours (blue, red, green, and white) are secondary colours of the CMYK model and each originates as an equal mixture of a pair of basic colours. The barcode is not subject to licensing and was submitted to ISO/IEC standardization as ISO/IEC 23634, expected to be approved at the beginning of 2021[4] and finalized in 2022.[3] The software is open source and published under the LGPLv2.1 license.[5] The specification is freely available.[2]

Because the colour adds a third dimension to the two-dimensional matrix, a JAB Code can contain more information in the same area than two-colour (black and white) codes; a four-colour code doubles the amount of data that can be stored, and an eight-colour code triples it. This increases the chances the barcode can store an entire message, rather than just partial data with a reference to a full message somewhere else (such as a link to a website), which would eliminate the need for additional always-available infrastructure beyond the printed barcode itself. It may be used to digitally sign encrypted digital versions of printed legal documents, contracts, certificates (e.g., diplomas, training), and medical prescriptions, or to provide product authenticity assurance, increasing protection against counterfeits.[3]
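The doubling and tripling follow from the logarithm of the palette size: each module carries log2(colours) bits. A rough OCaml sketch of this arithmetic (the function name and program below are illustrative only, not part of any JAB Code library):

    (* Each module (coloured square) encodes log2(number of colours) bits:
       2 colours -> 1 bit, 4 colours -> 2 bits (double), 8 colours -> 3 bits (triple). *)
    let bits_per_module colours =
      log (float_of_int colours) /. log 2.

    let () =
      List.iter
        (fun c -> Printf.printf "%d colours: %.0f bits per module\n" c (bits_per_module c))
        [2; 4; 8]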
https://en.wikipedia.org/wiki/JAB_Code
A filename extension, file name extension or file extension is a suffix to the name of a computer file (for example, .txt, .mp3, .exe) that indicates a characteristic of the file contents or its intended use. A filename extension is typically delimited from the rest of the filename with a full stop (period), but in some systems[1] it is separated with spaces.

Some file systems implement filename extensions as a feature of the file system itself and may limit the length and format of the extension (as seen in DOS), while others treat filename extensions as part of the filename without special distinction (as seen in Unix) and instead prefer to use file signatures. The Multics file system stores the file name as a single string, not split into base name and extension components, allowing the "." to be just another character allowed in file names. It allows for variable-length filenames, permitting more than one dot, and hence multiple suffixes, as well as no dot, and hence no suffix. Some components of Multics, and applications running on it, use suffixes to indicate file types, but not all files are required to have a suffix — for example, executables and ordinary text files usually have no suffixes in their names. File systems for UNIX-like operating systems also store the file name as a single string, with "." as just another character in the file name. A file with more than one suffix is sometimes said to have more than one extension, although terminology varies in this regard, and most authors define extension in a way that does not allow more than one in the same file name.[citation needed] More than one extension usually represents nested transformations, such as files.tar.gz (the .tar indicates that the file is a tar archive of one or more files, and the .gz indicates that the tar archive file is compressed with gzip). Programs transforming or creating files may add the appropriate extension to names inferred from input file names (unless explicitly given an output file name), but programs reading files usually ignore the information; it is mostly intended for the human user. It is more common, especially in binary files, for the file to contain internal or external metadata describing its contents. This model generally requires the full filename to be provided in commands, whereas the metadata approach often allows the extension to be omitted.

CTSS was an early operating system in which the filename and file type were separately stored. Continuing this practice, and also using a dot as a separator for display and input purposes (while not storing the dot), were various DEC operating systems (such as RT-11), followed by CP/M and subsequently DOS. In DOS and 16-bit Windows, file names have a maximum of 8 characters, a period, and an extension of up to three letters. The FAT file system for DOS and Windows stores file names as an 8-character name and a three-character extension. The period character is not stored. The High Performance File System (HPFS), used in Microsoft and IBM's OS/2, stores the file name as a single string, with the "." character as just another character in the file name. The convention of using suffixes continued, even though HPFS supports extended attributes for files, allowing a file's type to be stored in the file as an extended attribute. Microsoft's Windows NT's native file system, NTFS, and the later ReFS, also store the file name as a single string; again, the convention of using suffixes to simulate extensions continued, for compatibility with existing versions of Windows.
In Windows NT 3.5, a variant of the FAT file system, called VFAT, appeared; it supports longer file names, with the file name being treated as a single string. Windows 95, with VFAT, introduced support for long file names, and removed the 8.3 name/extension split in file names from non-NT Windows. The classic Mac OS disposed of filename-based extension metadata entirely; it used, instead, a distinct file type code to identify the file format. Additionally, a creator code was specified to determine which application would be launched when the file's icon was double-clicked.[2] macOS, however, uses filename suffixes as a consequence of being derived from the UNIX-like NeXTSTEP operating system, in addition to using type and creator codes. In Commodore systems, files can only have four extensions: PRG, SEQ, USR, REL. However, these are used to separate data types used by a program and are irrelevant for identifying their contents.

With the advent of graphical user interfaces, the issue of file management and interface behavior arose. Microsoft Windows allowed multiple applications to be associated with a given extension, and different actions were available for selecting the required application, such as a context menu offering a choice between viewing, editing or printing the file. The assumption was still that any extension represented a single file type; there was an unambiguous mapping between extension and icon. When the Internet age first arrived, those using Windows systems that were still restricted to 8.3 filename formats had to create web pages with names ending in .HTM, while those using Macintosh or UNIX computers could use the recommended .html filename extension. This also became a problem for programmers experimenting with the Java programming language, since it requires the four-letter suffix .java for source code files and the five-letter suffix .class for Java compiler object code output files.[3]

Filename extensions may be considered a type of metadata.[4] They are commonly used to imply information about the way data might be stored in the file. The exact definition, giving the criteria for deciding what part of the file name is its extension, belongs to the rules of the specific file system used; usually the extension is the substring which follows the last occurrence, if any, of the dot character (example: txt is the extension of the filename readme.txt, and html the extension of index.html). On file systems of some mainframe systems such as CMS in VM, VMS, and of PC systems such as CP/M and derivative systems such as MS-DOS, the extension is a separate namespace from the filename. Under Microsoft's DOS and Windows, extensions such as EXE, COM or BAT indicate that a file is a program executable. In OS/360 and successors, the part of the dataset name following the last period, called the low level qualifier, is treated as an extension by some software, e.g., TSO EDIT, but it has no special significance to the operating system itself; the same applies to Unix files in MVS.
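To illustrate the "last dot" rule just described, here is a small OCaml sketch; OCaml's standard Filename module happens to follow the same convention (the file names are arbitrary examples):

    (* The extension is the suffix starting at the last dot, if any.
       Note that files.tar.gz reports only the final suffix ".gz",
       and a name with no dot (Makefile) reports an empty extension. *)
    let () =
      List.iter
        (fun name ->
           Printf.printf "%-14s base=%-10s ext=%s\n"
             name
             (Filename.remove_extension name)
             (Filename.extension name))
        ["readme.txt"; "index.html"; "files.tar.gz"; "Makefile"]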
The filename extension was originally used to determine the file's generic type.[citation needed] The need to condense a file's type into three characters frequently led to abbreviated extensions. Examples include using .GFX for graphics files, .TXT for plain text, and .MUS for music. However, because many different software programs have been made that all handle these data types (and others) in a variety of ways, filename extensions started to become closely associated with certain products—even specific product versions. For example, early WordStar files used .WS or .WSn, where n was the program's version number. Also, conflicting uses of some filename extensions developed. One example is .rpm, used for both RPM Package Manager packages and RealPlayer Media files.[5] Others are .qif, shared by DESQview fonts, Quicken financial ledgers, and QuickTime pictures;[6] .gba, shared by GrabIt scripts and Game Boy Advance ROM images;[7] .sb, used for SmallBasic and Scratch; and .dts, being used for Dynamix Three Space and DTS.

In many Internet protocols, such as HTTP and MIME email, the type of a bitstream is stated as the media type, or MIME type, of the stream, rather than a filename extension. This is given in a line of text preceding the stream, such as Content-type: text/plain. There is no standard mapping between filename extensions and media types, resulting in possible mismatches in interpretation between authors, web servers, and client software when transferring files over the Internet. For instance, a content author may specify the extension svgz for a compressed Scalable Vector Graphics file, but a web server that does not recognize this extension may not send the proper content type application/svg+xml and its required compression header, leaving web browsers unable to correctly interpret and display the image. BeOS, whose BFS file system supports extended attributes, would tag a file with its media type as an extended attribute. Some desktop environments, such as KDE Plasma and GNOME, associate a media type with a file by examining both the filename suffix and the contents of the file, in the fashion of the file command, as a heuristic. They choose the application to launch when a file is opened based on that media type, reducing the dependency on filename extensions. macOS uses both filename extensions and media types, as well as file type codes, to select a Uniform Type Identifier by which to identify the file type internally.

The use of a filename extension in a command name appears occasionally, usually as a side effect of the command having been implemented as a script, e.g., for the Bourne shell or for Python, and the interpreter name being suffixed to the command name, a practice common on systems that rely on associations between filename extension and interpreter, but sharply deprecated[8] in Unix-like systems, such as Linux, Oracle Solaris, BSD-based systems, and Apple's macOS, where the interpreter is normally specified as a header in the script ("shebang"). On association-based systems, the filename extension is generally mapped to a single, system-wide selection of interpreter for that extension (such as ".py" meaning to use Python), and the command itself is runnable from the command line even if the extension is omitted (assuming appropriate setup is done). If the implementation language is changed, the command name extension is changed as well, and the OS provides a consistent API by allowing the same extensionless version of the command to be used in both cases. This method suffers somewhat from the essentially global nature of the association mapping, as well as from developers' incomplete avoidance of extensions when calling programs, and that developers can not force that avoidance. Windows is the only remaining widespread employer of this mechanism. On systems with interpreter directives, including virtually all versions of Unix, command name extensions have no special significance, and are by standard practice not used, since the primary method to set interpreters for scripts is to start them with a single line specifying the interpreter to use.
In these environments, including the extension in a command name unnecessarily exposes an implementation detail which puts all references to the commands from other programs at future risk if the implementation changes. For example, it would be perfectly normal for a shell script to be reimplemented in Python or Ruby, and later in C or C++, all of which would change the name of the command were extensions used. Without extensions, a program always has the same extension-less name, with only the interpreter directive or magic number changing, and references to the program from other programs remain valid.

File extensions alone are not a reliable indicator of a file's type, as the extension can be modified without changing the file's contents, such as to disguise malicious content. Therefore, especially in the context of cybersecurity, a file's true nature should be examined for its signature, which is a distinctive sequence of bytes affixed to a file's header. This is accomplished using file identification software or a hex editor, which provides a hex dump of a file's contents.[9] For example, on UNIX-like systems, it is not uncommon to find files with no extensions at all,[10] as commands such as file are meant to be used instead, and will read the file's header to determine its content.[citation needed]

Malware such as Trojan horses typically takes the form of an executable, but any file type that performs input/output operations may contain malicious code. A few data file types such as PDFs have been found to be vulnerable to exploits that cause buffer overflows.[11] There have been instances of malware crafted to exploit such vulnerabilities in some Windows applications when opening a file with an overly long, unhandled filename extension. File managers may have an option to hide filename extensions. This is the case for File Explorer, the file browser provided with Microsoft Windows, which by default does not display extensions. Malicious users have tried to spread computer viruses and computer worms by using file names formed like LOVE-LETTER-FOR-YOU.TXT.vbs. The idea is that this will appear as LOVE-LETTER-FOR-YOU.TXT, a harmless text file, without alerting the user to the fact that it is a harmful computer program, in this case written in VBScript.[11] The default behavior for ReactOS is to display filename extensions in ReactOS Explorer. Later Windows versions (starting with Windows XP Service Pack 2 and Windows Server 2003) included customizable lists of filename extensions that should be considered "dangerous" in certain "zones" of operation, such as when downloaded from the web or received as an e-mail attachment. Modern antivirus software systems also help to defend users against such attempted attacks where possible.[citation needed]

A virus may couple itself with an executable without actually modifying the executable. These viruses, known as companion viruses, attach themselves in such a way that they are executed when the original file is requested. One way such a virus does this involves giving the virus the same name as the target file, but with a different extension to which the operating system gives priority, and often assigning the former a "hidden" attribute to conceal the malware's existence. The efficacy of this approach depends on whether the user attempts to open the intended file by entering a command and whether the user includes the extension. Later versions of DOS and Windows check for and attempt to run .COM files first by default, followed by .EXE and finally .BAT files.
In this case, the infected file is the one with the .COM extension, which the user unwittingly executes.[10][11] Some viruses take advantage of the similarity between the ".com" top-level domain and the .COM filename extension by emailing malicious, executable command-file attachments under names superficially similar to URLs (e.g., "myparty.yahoo.com"), with the effect that unaware users click on email-embedded links that they think lead to websites but actually download and execute the malicious attachments.[citation needed]
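As noted above, a reliable check of a file's type reads its signature (magic number) rather than trusting the name suffix. A minimal OCaml sketch of such a check follows; the signature table lists only a few well-known formats and is illustrative, not exhaustive:

    (* Read the first few bytes of a file and compare them with known
       signatures ("magic numbers"). Usage: pass a file path as the
       first command-line argument. *)
    let signatures = [
      ("\x89PNG\r\n\x1a\n", "PNG image");
      ("%PDF",              "PDF document");
      ("PK\x03\x04",        "ZIP archive");
    ]

    let identify path =
      let ic = open_in_bin path in
      let len = min 8 (in_channel_length ic) in
      let header = really_input_string ic len in
      close_in ic;
      match
        List.find_opt
          (fun (magic, _) ->
             String.length header >= String.length magic
             && String.sub header 0 (String.length magic) = magic)
          signatures
      with
      | Some (_, kind) -> kind
      | None -> "unknown (the extension alone proves nothing)"

    let () = print_endline (identify Sys.argv.(1))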
https://en.wikipedia.org/wiki/Filename_extension
Distributed Proofreaders (commonly abbreviated as DP or PGDP) is a web-based project that supports the development of e-texts for Project Gutenberg by allowing many people to work together in proofreading drafts of e-texts for errors. As of July 2024, the site had digitized 48,000 titles.[2][3][4][5]

Distributed Proofreaders was founded by Charles Franks in 2000 as an independent site to assist Project Gutenberg.[6] Distributed Proofreaders became an official Project Gutenberg site in 2002. On 8 November 2002, Distributed Proofreaders was slashdotted,[7][8] and more than 4,000 new members joined in one day, causing an influx of new proofreaders and software developers, which helped to increase the quantity and quality of e-text production. In July 2015, the 30,000th Distributed Proofreaders produced e-text was posted to Project Gutenberg. DP-contributed e-texts comprised more than half of the works in Project Gutenberg, as of July 2015. On 31 July 2006, the Distributed Proofreaders Foundation was formed to provide Distributed Proofreaders with its own legal entity and not-for-profit status. IRS approval of section 501(c)(3) status was granted retroactive to 7 April 2006.

Public domain works, typically books with expired copyright, are scanned by volunteers or sourced from digitization projects, and the images are run through optical character recognition (OCR) software. Since OCR software is far from perfect, many errors often appear in the resulting text. To correct them, pages are made available to volunteers via the Internet; the original page image and the recognized text appear side by side.[9] This process thereby distributes the time-consuming error-correction process, akin to distributed computing. Each page is proofread and formatted several times, and then a post-processor combines the pages and prepares the text for uploading to Project Gutenberg. Besides custom software created to support the project, DP also runs a forum and a wiki for project coordinators and participants.

In January 2004, Distributed Proofreaders Europe started, hosted by Project Rastko, Serbia.[10] This site had the ability to process text in Unicode UTF-8 encoding. Books proofread centered on European culture, with a considerable proportion of non-English texts including Hebrew, Arabic, Urdu, and many others. As of October 2013, DP Europe had produced 787 e-texts, the last of these in November 2011. The original DP is sometimes referred to as "DP International" by members of DP Europe. However, DP servers are located in the United States, and therefore works must be cleared by Project Gutenberg as being in the public domain according to U.S. copyright law before they can be proofread and eventually published at DP.

In December 2007, Distributed Proofreaders Canada launched to support the production of e-books for Project Gutenberg Canada and take advantage of shorter Canadian copyright terms. Although it was established by members of the original Distributed Proofreaders site, it is a separate entity. All its projects are posted to Faded Page, their book archive website. In addition, it supplies books to Project Gutenberg Canada (which launched on Canada Day 2007) and (where copyright laws are compatible) to the original Project Gutenberg. In addition to preserving Canadiana, DP Canada is notable because it is the first major effort to take advantage of Canada's copyright laws, which may allow more works to be preserved. Unlike copyright law in some other countries, Canada has a "life plus 50" copyright term.
This means that works by authors who died more than fifty years ago may be preserved in Canada, whereas in other parts of the world those works may not be distributed because they are still under copyright. Notable authors whose works may be preserved in Canada but not in other parts of the world include Clark Ashton Smith, Dashiell Hammett, Ernest Hemingway, Carl Jung, A. A. Milne, Dorothy Sayers, Nevil Shute, Walter de la Mare, Sheila Kaye-Smith and Amy Carmichael.

On 9 March 2007, Distributed Proofreaders announced the completion of more than 10,000 titles; in celebration, a collection of fifteen titles was published. On April 10, 2011, the 20,000th book milestone was celebrated as a group release of bilingual books.[18] On 7 July 2015, the 30,000th book milestone was celebrated with a group of thirty texts, one of which was numbered 30,000.[19]
https://en.wikipedia.org/wiki/Distributed_Proofreaders
Lambda lifting is a meta-process that restructures a computer program so that functions are defined independently of each other in a global scope. An individual lift transforms a local function (subroutine) into a global function. It is a two-step process, consisting of eliminating free variables in the function by adding parameters, and moving the function from its restricted scope to a broader or global scope.

The term "lambda lifting" was first introduced by Thomas Johnsson around 1982 and was historically considered as a mechanism for implementing programming languages based on functional programming. It is used in conjunction with other techniques in some modern compilers. Lambda lifting is not the same as closure conversion. It requires all call sites to be adjusted (adding extra arguments (parameters) to calls) and does not introduce a closure for the lifted lambda expression. In contrast, closure conversion does not require call sites to be adjusted but does introduce a closure for the lambda expression, mapping free variables to values.

The technique may be used on individual functions, in code refactoring, to make a function usable outside the scope in which it was written. Lambda lifts may also be repeated, to transform the program. Repeated lifts may be used to convert a program written in lambda calculus into a set of recursive functions, without lambdas. This demonstrates the equivalence of programs written in lambda calculus and programs written as functions.[1] However it does not demonstrate the soundness of lambda calculus for deduction, as the eta reduction used in lambda lifting is the step that introduces cardinality problems into the lambda calculus, because it removes the value from the variable, without first checking that there is only one value that satisfies the conditions on the variable (see Curry's paradox).

Lambda lifting is expensive in processing time for the compiler. An efficient implementation of lambda lifting is $O(n^2)$ in processing time for the compiler.[2]

In the untyped lambda calculus, where the basic types are functions, lifting may change the result of beta reduction of a lambda expression. The resulting functions will have the same meaning, in a mathematical sense, but are not regarded as the same function in the untyped lambda calculus. See also intensional versus extensional equality.

The reverse operation to lambda lifting is lambda dropping.[3] Lambda dropping may make the compilation of programs quicker for the compiler, and may also increase the efficiency of the resulting program, by reducing the number of parameters, and reducing the size of stack frames. However it makes a function harder to re-use. A dropped function is tied to its context, and can only be used in a different context if it is first lifted.

One algorithm lambda-lifts an arbitrary program in a language which doesn't support closures as first-class objects. If the language has closures as first-class objects that can be passed as arguments or returned from other functions, the closure will need to be represented by a data structure that captures the bindings of the free variables.

The following OCaml program computes the sum of the integers from 1 to 100. (The let rec declares sum as a function that may call itself.) The function f, which adds sum's argument to the sum of the numbers less than the argument, is a local function. Within the definition of f, n is a free variable.
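A minimal OCaml sketch of this computation and of the two lifting steps described next (the staged names sum1 and sum2 are only for illustration):

    (* Stage 0: the sum of the integers from 1 to 100.
       f is a local function; within its body, n is a free variable. *)
    let rec sum n =
      if n = 1 then 1
      else
        let f x = n + x in
        f (sum (n - 1))

    (* Stage 1: convert the free variable n into an extra parameter w of f. *)
    let rec sum1 n =
      if n = 1 then 1
      else
        let f w x = w + x in
        f n (sum1 (n - 1))

    (* Stage 2: f no longer depends on its surrounding scope,
       so it can be lifted to the global scope. *)
    let f w x = w + x

    let rec sum2 n =
      if n = 1 then 1
      else f n (sum2 (n - 1))

    let () = Printf.printf "%d %d %d\n" (sum 100) (sum1 100) (sum2 100)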
Start by converting the free variable n to a parameter of f, then lift f into a global function, as illustrated in the stages of the sketch above. (The same example can also be written in JavaScript.)

Lambda lifting and closure are both methods for implementing block structured programs. Lambda lifting implements block structure by eliminating it: all functions are lifted to the global level. Closure conversion provides a "closure" which links the current frame to other frames. Closure conversion takes less compile time.

Recursive functions, and block structured programs, with or without lifting, may be implemented using a stack based implementation, which is simple and efficient. However a stack frame based implementation must be strict (eager). The stack frame based implementation requires that the life of functions be last-in, first-out (LIFO). That is, the most recent function to start its calculation must be the first to end.

Some functional languages (such as Haskell) are implemented using lazy evaluation, which delays calculation until the value is needed. The lazy implementation strategy gives flexibility to the programmer. Lazy evaluation requires delaying the call to a function until a request is made for the value calculated by the function. One implementation is to record a reference to a "frame" of data describing the calculation, in place of the value. Later, when the value is required, the frame is used to calculate the value, just in time for when it is needed. The calculated value then replaces the reference. The "frame" is similar to a stack frame, the difference being that it is not stored on the stack. Lazy evaluation requires that all the data required for the calculation be saved in the frame. If the function is "lifted", then the frame needs only record the function pointer and the parameters to the function.

Some modern languages use garbage collection in place of stack based allocation to manage the life of variables. In a managed, garbage collected environment, a closure records references to the frames from which values may be obtained. In contrast, a lifted function has parameters for each value needed in the calculation.

The Let expression is useful in describing lifting, dropping, and the relationship between recursive equations and lambda expressions. Most functional languages have let expressions. Also, block structured programming languages like ALGOL and Pascal are similar in that they too allow the local definition of a function for use in a restricted scope. The let expression used here is a fully mutually recursive version of let rec, as implemented in many functional languages.

Let expressions are related to lambda calculus. Lambda calculus has a simple syntax and semantics, and is good for describing lambda lifting. It is convenient to describe lambda lifting as a translation from lambda to a let expression, and lambda dropping as the reverse. This is because let expressions allow mutual recursion, which is, in a sense, more lifted than is supported in lambda calculus. Lambda calculus does not support mutual recursion, and only one function may be defined at the outermost global scope. Conversion rules which describe translation without lifting are given in the Let expression article. The following rules describe the equivalence of lambda and let expressions.

Meta-functions will be given that describe lambda lifting and dropping. A meta-function is a function that takes a program as a parameter. The program is data for the meta-program. The program and the meta-program are at different meta-levels.
The following conventions will be used to distinguish the program from the meta-program. For simplicity, the first rule given that matches will be applied. The rules also assume that the lambda expressions have been pre-processed so that each lambda abstraction has a unique name. The substitution operator is used extensively: the expression $L[G:=S]$ means substitute every occurrence of G in L by S and return the expression. The definition used is extended to cover the substitution of expressions, from the definition given on the Lambda calculus page. The matching of expressions should compare expressions for alpha equivalence (renaming of variables).

Each lambda lift takes a lambda abstraction which is a sub-expression of a lambda expression and replaces it by a function call (application) to a function that it creates. The free variables in the sub-expression are the parameters to the function call. Lambda lifts may be used on individual functions, in code refactoring, to make a function usable outside the scope in which it was written. Such lifts may also be repeated, until the expression has no lambda abstractions, to transform the program.

A lift is given a sub-expression within an expression to lift to the top of that expression. The expression may be part of a larger program. This allows control of where the sub-expression is lifted to. A lambda lift operation is used to perform a lift within a program. The sub-expression may be either a lambda abstraction, or a lambda abstraction applied to a parameter. Two types of lift are possible. An anonymous lift has a lift expression which is a lambda abstraction only; it is regarded as defining an anonymous function, and a name must be created for the function. A named lift expression has a lambda abstraction applied to an expression; this lift is regarded as a named definition of a function.

An anonymous lift takes a lambda abstraction (called S). The lambda lift is the substitution of the lambda abstraction S by a function application, along with the addition of a definition for the function. The new lambda expression is $L[S:=G]$, in which every occurrence of S is replaced by G, and the function definitions have the definition G = S added. In the above rule, G is the function application that is substituted for the expression S. It is built from a function name V, which must be a new variable, i.e. a name not already used in the lambda expression; $\operatorname{vars}[E]$ is a meta-function that returns the set of variables used in E. (See de-lambda in Conversion from lambda to let expressions.) The function call G is constructed by adding parameters, one for each variable in the free variable set (represented by V), to the function H.

The named lift is similar to the anonymous lift, except that the function name V is provided. As for the anonymous lift, the expression G is constructed from V by applying the free variables of S. (See de-lambda in Conversion from lambda to let expressions.)

A lambda lift transformation takes a lambda expression and lifts all lambda abstractions to the top of the expression. The abstractions are then translated into recursive functions, which eliminates the lambda abstractions. The result is a functional program consisting of a series of function definitions M together with an expression N representing the value returned. The de-let meta-function may then be used to convert the result back into lambda calculus.
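A toy OCaml model may make a single anonymous lift concrete. The AST, the helper names, and the choice of fresh name below are assumptions made purely for illustration; the sketch only builds the new global definition and the replacement call, and does not perform the surrounding substitution:

    (* One anonymous lift: given a lambda abstraction S occurring in a program,
       build the global definition  g = \v1. ... \vk. S   (v1..vk free in S)
       and the call  g v1 ... vk  that replaces S at its original site. *)
    type expr =
      | Var of string
      | Lam of string * expr            (* \x. body *)
      | App of expr * expr

    module SS = Set.Make (String)

    let rec free_vars = function
      | Var x -> SS.singleton x
      | Lam (x, b) -> SS.remove x (free_vars b)
      | App (f, a) -> SS.union (free_vars f) (free_vars a)

    let rec show = function
      | Var x -> x
      | Lam (x, b) -> "\\" ^ x ^ "." ^ show b
      | App (f, a) -> "(" ^ show f ^ " " ^ show a ^ ")"

    (* Returns (definition of the new global function, call replacing S). *)
    let anonymous_lift fresh_name s =
      let vs = SS.elements (free_vars s) in
      let definition = List.fold_right (fun v body -> Lam (v, body)) vs s in
      let call = List.fold_left (fun f v -> App (f, Var v)) (Var fresh_name) vs in
      (definition, call)

    (* Lifting  \x. f (x x)  (free variable f) gives  p = \f.\x. f (x x)
       and the call  p f  at the original site. *)
    let () =
      let s = Lam ("x", App (Var "f", App (Var "x", Var "x"))) in
      let definition, call = anonymous_lift "p" s in
      Printf.printf "p = %s\ncall = %s\n" (show definition) (show call)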
The processing of transforming the lambda expression is a series of lifts. After the lifts are applied, the lets are combined together into a single let. Then parameter dropping is applied to remove parameters that are not necessary in the "let" expression. The let expression allows the function definitions to refer to each other directly, whereas lambda abstractions are strictly hierarchical, and a function may not directly refer to itself.

There are two different ways that an expression may be selected for lifting. The first treats all lambda abstractions as defining anonymous functions. The second treats lambda abstractions which are applied to a parameter as defining a function. Lambda abstractions applied to a parameter have a dual interpretation, as either a let expression defining a function, or as defining an anonymous function. Both interpretations are valid. Two predicates are needed for both definitions: lambda-free, an expression containing no lambda abstractions; and lambda-anon, an anonymous function, i.e. an expression like $\lambda x_{1}.\ \ldots\ \lambda x_{n}.X$ where X is lambda-free.

For anonymous lifting, search for the deepest anonymous abstraction, so that when the lift is applied the function lifted will become a simple equation. This definition does not recognize a lambda abstraction with a parameter as defining a function; all lambda abstractions are regarded as defining anonymous functions. lift-choice is the first anonymous function found in traversing the expression, or none if there is no function. For named lifting, search for the deepest named or anonymous function definition, so that when the lift is applied the function lifted will become a simple equation. This definition recognizes a lambda abstraction with an actual parameter as defining a function. Only lambda abstractions without an application are treated as anonymous functions.

For example, the Y combinator is lifted using named lifting and, after parameter dropping, corresponds (see Conversion from let to lambda expressions) to the lambda expression $(\lambda f.(\lambda x.f\ (x\ x))\ (\lambda x.f\ (x\ x)))$. If lifting anonymous functions only, the Y combinator is lifted differently. The first sub-expression to be chosen for lifting is $\lambda x.f\ (x\ x)$. This transforms the lambda expression into $\lambda f.(p\ f)\ (p\ f)$ and creates the equation $p\ f\ x = f\ (x\ x)$. The second sub-expression to be chosen for lifting is $\lambda f.(p\ f)\ (p\ f)$. This transforms the lambda expression into $q\ p$ and creates the equation $q\ p\ f = (p\ f)\ (p\ f)$. Surprisingly, this result is simpler than the one obtained from lifting named functions. Applying the result to a function K shows that the Y combinator calls its parameter (function) repeatedly on itself; the value is defined if the function has a fixed point, but the calculation will never terminate.

Lambda dropping[4] is making the scope of functions smaller and using the context from the reduced scope to reduce the number of parameters to functions. Reducing the number of parameters makes functions easier to comprehend. In the Lambda lifting section, a meta-function for first lifting and then converting the resulting lambda expression into recursive equations was described.
The Lambda Drop meta-function performs the reverse: it first converts recursive equations to lambda abstractions, and then drops the resulting lambda expression into the smallest scope which covers all references to the lambda abstraction. Lambda dropping is performed in two steps: sinking the abstractions, and then dropping parameters that are no longer needed.

A lambda drop is applied to an expression which is part of a program. Dropping is controlled by a set of expressions from which the drop will be excluded. The lambda drop transformation sinks all abstractions in an expression; sinking is excluded from expressions in the given set. sink-tran sinks each abstraction, starting from the innermost. Sinking is moving a lambda abstraction inwards as far as possible, such that it is still outside all references to the variable. For applications there are four cases and for variables two cases; for abstractions, renaming is used to ensure that the variable names are all distinct. The sink test excludes expressions from dropping.

Parameter dropping is optimizing a function for its position in the function. Lambda lifting added parameters that were necessary so that a function can be moved out of its context. In dropping, this process is reversed, and extra parameters that contain variables that are free may be removed. Dropping a parameter is removing an unnecessary parameter from a function, where the actual parameter being passed in is always the same expression. The free variables of the expression must also be free where the function is defined. In this case the parameter that is dropped is replaced by the expression in the body of the function definition. This makes the parameter unnecessary.

For example, in the main example used below, the actual parameter for the formal parameter o is always p. As p is a free variable in the whole expression, the parameter may be dropped. The actual parameter for the formal parameter y is always n; however, n is bound in a lambda abstraction, so this parameter may not be dropped.

The definition of drop-params-tran builds, for each abstraction that defines a function, the information required to make decisions on dropping names. This information describes each parameter: the parameter name, the expression for the actual value, and an indication that all the expressions have the same value. Each abstraction is renamed with a unique name, and the parameter list is associated with the name of the abstraction; for example, for g there is a parameter list.

build-param-lists builds all the lists for an expression by traversing the expression. It has four parameters: the expression being traversed, the table D of parameter lists, the table V of variable values, and the parameter list being built.

Abstraction: a lambda expression of the form $(\lambda N.S)\ L$ is analyzed to extract the names of parameters for the function. Locate the name and start building the parameter list for the name, filling in the formal parameter names. Also receive any actual parameter list from the body of the expression, and return it as the actual parameter list from this expression.

Variable: a call to a function. For a function name or parameter, start populating the actual parameter list by outputting the parameter list for this name.

Application: an application (function call) is processed to extract actual parameter details. Retrieve the parameter lists for the expression and the parameter. Retrieve a parameter record from the parameter list from the expression, and check that the current parameter value matches this parameter.
Record the value for the parameter name for use later in checking. The above logic is quite subtle in the way that it works. The same-value indicator is never set to true; it is only set to false if all the values cannot be matched. The value is retrieved by using S to build a set of the Boolean values allowed for S. If true is a member, then all the values for this parameter are equal, and the parameter may be dropped. Similarly, def uses set theory to query if a variable has been given a value. Let: let expression. And: for use in "let".

For example, building the parameter lists for the main example proceeds as follows (the parameter o is then dropped):

$\operatorname{build-list}[\lambda x.\lambda o.\lambda y.o\ x\ y,\ D,\ V,\ D[g]] \land D[g]=L_{1}$
$\operatorname{build-list}[\lambda o.\lambda y.o\ x\ y,\ D,\ V,\ L_{1}] \land D[g]=[x,\_,\_]::L_{1}$
$\operatorname{build-list}[\lambda y.o\ x\ y,\ D,\ V,\ L_{2}] \land D[g]=[x,\_,\_]::[o,\_,\_]::L_{2}$
$\operatorname{build-param-lists}[n\ (g\ m\ p\ n),\ D,\ V,\ T_{1}] \land \operatorname{build-param-lists}[g\ q\ p\ n,\ D,\ V,\ K_{1}]$
$\operatorname{build-param-lists}[n,\ D,\ V,\ T_{2}] \land \operatorname{build-param-lists}[g\ m\ p\ n,\ D,\ V,\ K_{2}] \land \operatorname{build-param-lists}[g\ q\ p\ n,\ D,\ V,\ K_{1}]$
$\operatorname{build-param-lists}[g\ m\ p\ n,\ D,\ V,\ K_{2}] \land \operatorname{build-param-lists}[g\ q\ p\ n,\ D,\ V,\ K_{1}]$

This gives,

$\operatorname{build-param-lists}[g\ m\ p\ n,\ D,\ V,\ K_{2}]$
$\operatorname{build-param-lists}[g\ m\ p,\ D,\ V,\ T_{3}] \land \operatorname{build-param-lists}[n,\ D,\ V,\ K_{3}]$
$\operatorname{build-param-lists}[g\ m,\ D,\ V,\ T_{4}] \land \operatorname{build-param-lists}[p,\ D,\ V,\ K_{4}]$
$\operatorname{build-param-lists}[g,\ D,\ V,\ T_{5}] \land \operatorname{build-param-lists}[m,\ D,\ V,\ K_{5}]$
$D[g]=[x,S_{5},A_{5}]::[o,S_{4},A_{4}]::[y,S_{3},A_{3}]::K_{2}$

$\operatorname{build-param-lists}[g\ q\ p\ n,\ D,\ V,\ K_{1}]$
$\operatorname{build-param-lists}[g\ q\ p,\ D,\ V,\ T_{6}] \land \operatorname{build-param-lists}[n,\ D,\ V,\ K_{6}]$
$\operatorname{build-param-lists}[g\ q,\ D,\ V,\ T_{7}] \land \operatorname{build-param-lists}[p,\ D,\ V,\ K_{7}]$
$\operatorname{build-param-lists}[g,\ D,\ V,\ T_{8}] \land \operatorname{build-param-lists}[m,\ D,\ V,\ K_{8}]$
$D[g]=[x,S_{8},A_{8}]::[o,S_{6},A_{7}]::[y,S_{6},A_{6}]::K_{1}$

As there are no definitions for $V[n], V[p], V[q], V[m]$, equate can be simplified by removing expressions that are not needed:

$D[g]=[x,S_{5},A_{5}]::[o,S_{4},A_{4}]::[y,S_{3},A_{3}]::K_{2}$
$D[g]=[x,S_{8},A_{8}]::[o,S_{6},A_{7}]::[y,S_{6},A_{6}]::K_{1}$

Comparing the two expressions for $D[g]$: if $S_{3}$ is false there is no implication, so $S_{3}=\_$, which means it may be true or false; considering the cases where $S_{4}$ and $S_{5}$ are true shows that $S_{5}$ is false. The result is

$D[g]=[x,\operatorname{false},\_]::[o,\_,p]::[y,\_,n]::\_$

$\operatorname{build-param-lists}[o\ x,\ D,\ V,\ T_{9}] \land \operatorname{build-param-lists}[y,\ D,\ V,\ K_{9}]$
$\operatorname{build-param-lists}[o,\ D,\ V,\ T_{10}] \land \operatorname{build-param-lists}[x,\ D,\ V,\ K_{10}] \land \operatorname{build-param-lists}[y,\ D,\ V,\ K_{10}]$

By similar arguments as used above, and from the results obtained previously, the remaining lists follow. Another example is one in which x is equal to f; the parameter list mapping is built, and the parameter x is dropped. The logic in equate is used in this more difficult example:

$\land\ D[f]=[F_{2},S_{2},A_{2}]::[F_{1},S_{1},A_{1}]::\_$

After collecting the results together, from the two definitions for $D[p]$, and using $D[q]=D[p]$ and comparing with the above, the expressions reduce, so the parameter list for p is effectively determined.

Use the information obtained by build-param-lists to drop actual parameters that are no longer required; this is done by drop-params. Abstraction, Variable (for a function name or parameter, start populating the actual parameter list by outputting the parameter list for this name), Application (an application, i.e. a function call, is processed to extract parameter details), Let (let expression) and And (for use in "let") are handled as before. From the results of building parameter lists,

$D[g]=[[x,\operatorname{false},\_],[o,\_,p],[y,\_,n]]=[F_{3},S_{3},A_{3}]::[F_{2},S_{2},A_{2}]::[F_{1},S_{1},A_{1}]::\_$

so $F_{3}=x,\ S_{3}=\operatorname{false},\ A_{3}=\_$; $F_{2}=o,\ S_{2}=\_,\ A_{2}=p$; $F_{1}=y,\ S_{1}=\_,\ A_{1}=n$; and the call becomes $g\ m\ n$. Likewise

$D[g]=[[x,\operatorname{false},\_],[o,\_,p],[y,\_,n]]=[F_{6},S_{6},A_{6}]::[F_{5},S_{5},A_{5}]::[F_{4},S_{4},A_{4}]::\_$

so $F_{6}=x,\ S_{6}=\operatorname{false},\ A_{6}=\_$; $F_{5}=o,\ S_{5}=\_,\ A_{5}=p$; $F_{4}=y,\ S_{4}=\_,\ A_{4}=n$; and the call becomes $g\ q\ n$.

drop-formal removes formal parameters, based on the contents of the drop lists. Starting with the function definition of the Y combinator, it gives back the Y combinator.
https://en.wikipedia.org/wiki/Lambda_lifting
In folklore, fairy-locks (or elflocks) are the result of fairies tangling and knotting the hairs of sleeping children and the manes of beasts as the fairies play in and out of their hair at night.[1]

The concept is first attested in English in Shakespeare's Romeo and Juliet, in Mercutio's speech of the many exploits of Queen Mab, where he seems to imply the locks are only unlucky if combed out. Therefore, the appellation of elf lock or fairy lock could be attributed to any various tangles and knots of unknown origins appearing in the manes of beasts or hair of sleeping children. It can also refer to tangles of elflocks or fairy-locks in human hair. In King Lear, when Edgar impersonates a madman, he vows to "elf all my hair in knots."[2] (Lear, ii. 3.) What Edgar has done, simply put, is made a mess of his hair. See also Jane Eyre, Ch. XIX; Jane's description of Rochester disguised as a gypsy: "... elf-locks bristled out from beneath a white band ..."

German counterparts of the "elf-lock" are Alpzopf, Drutenzopf, Wichtelzopf, Weichelzopf, Mahrenlocke, Elfklatte, etc. (where alp, drude, mare, and wight are given as the beings responsible). Grimm, who compiled the list, also remarked on the similarity to Frau Holle, who entangled people's hair and herself had matted hair.[3] The use of the word elf seems to have declined steadily in English, becoming a rural dialect term, before being revived by translations of fairy tales in the nineteenth century and fantasy fiction in the twentieth. Fairy-locks are ascribed in French traditions to the lutin.[4] In Poland and nearby countries, witches and evil spirits were often blamed for Polish plait. This can be, however, a serious medical condition or an intentional hairstyle.[citation needed]
https://en.wikipedia.org/wiki/Elflock
Integrity is the quality of being honest and having a consistent and uncompromising adherence to strong moral and ethical principles and values.[1][2] In ethics, integrity is regarded as the honesty and truthfulness or earnestness of one's actions. Integrity can stand in opposition to hypocrisy.[3] It regards internal consistency as a virtue, and suggests that people who hold apparently conflicting values should account for the discrepancy or alter those values.

The word integrity evolved from the Latin adjective integer, meaning whole or complete.[1] In this context, integrity is the inner sense of "wholeness" deriving from qualities such as honesty and consistency of character.[4]

In ethics, a person is said to possess the virtue of integrity if the person's actions are based upon an internally consistent framework of principles.[5] These principles should uniformly adhere to sound logical axioms or postulates. A person has ethical integrity to the extent that the person's actions, beliefs, methods, measures, and principles align with a well-integrated core group of values. A person must, therefore, be flexible and willing to adjust these values to maintain consistency when these values are challenged—such as when observed results are incongruous with expected outcomes. Because such flexibility is a form of accountability, it is regarded as a moral responsibility as well as a virtue.

A person's value system provides a framework within which the person acts in ways that are consistent and expected. Integrity can be seen as the state of having such a framework and acting congruently within it. One essential aspect of a consistent framework is its avoidance of any unwarranted (arbitrary) exceptions for a particular person or group—especially the person or group that holds the framework. In law, this principle of universal application requires that even those in positions of official power can be subjected to the same laws as pertain to their fellow citizens. In personal ethics, this principle requires that one should not act according to any rule that one would not wish to see universally followed. For example, one should not steal unless one would want to live in a world in which everyone was a thief. The philosopher Immanuel Kant formally described the principle of universality of application for one's motives in his categorical imperative.

The concept of integrity implies a wholeness—a comprehensive corpus of beliefs often referred to as a worldview. This concept of wholeness emphasizes honesty and authenticity, requiring that one act at all times in accordance with one's worldview. Ethical integrity is not synonymous with the good, as Zuckert and Zuckert show about Ted Bundy: When caught, he defended his actions in terms of the fact-value distinction. He scoffed at those, like the professors from whom he learned the fact-value distinction, who still lived their lives as if there were truth-value to value claims. He thought they were fools and that he was one of the few who had the courage and integrity to live a consistent life in light of the truth that value judgments, including the command "Thou shalt not kill," are merely subjective assertions.

Politicians are given power to make, execute, or control policy, which can have important consequences. They typically promise to exercise this power in a way that serves society, but may not do so, which opposes the notion of integrity.
Aristotle said that because rulers have power they will be tempted to use it for personal gain.[7] In the book The Servant of the People, Muel Kaptein says integrity should start with politicians knowing what their position[ambiguous] entails, because the consistency required by integrity applies also to the consequences of one's position. Integrity also demands knowledge of and compliance with both the letter and the spirit of the written and unwritten rules. Integrity is also acting consistently not only with what is generally accepted as moral, what others think, but primarily with what is ethical, what politicians should do based on reasonable arguments.[8]

Important[to whom?] virtues of politicians are faithfulness, humility,[8] and accountability. Furthermore, they should[according to whom?] be authentic and a role model. Aristotle identified dignity (megalopsychia, variously translated as proper pride, greatness of soul, and magnanimity)[9] as the crown of the virtues, distinguishing it from vanity, temperance, and humility.

"Integrity tests" or (more confrontationally) "honesty tests"[10] aim to identify prospective employees who may hide perceived negative or derogatory aspects of their past, such as a criminal conviction or drug abuse. Identifying unsuitable candidates can save the employer from problems that might otherwise arise during their term of employment. Integrity tests make certain assumptions.[11] The claim that such tests can detect "fake" answers plays a crucial role in detecting people who have low integrity. Naive respondents really believe this pretense and behave accordingly, reporting some of their past deviance and their thoughts about the deviance of others, fearing that if they do not answer truthfully their untrue answers will reveal their "low integrity". These respondents believe that the more candid they are in their answers, the higher their "integrity score" will be.[12][clarification needed]

Disciplines and fields with an interest in integrity include philosophy of action, philosophy of medicine, mathematics, the mind, cognition, consciousness, materials science, structural engineering, and politics. Popular psychology identifies personal integrity, professional integrity, artistic integrity, and intellectual integrity. For example, to behave with scientific integrity, a scientific investigation shouldn't determine the outcome in advance of the actual results. As an example of a breach of this principle, Public Health England, a UK Government agency, stated that they upheld a line of government policy in advance of the outcome of a study that they had commissioned.[13]

The concept of integrity may also feature in business contexts that go beyond the issues of employee/employer honesty and ethical behavior, notably in marketing or branding contexts. Brand "integrity" gives a company's brand a consistent, unambiguous position in the mind of their audience. This is established, for example, via consistent messaging and a set of graphics standards to maintain visual integrity in marketing communications. Kaptein and Wempe developed a theory of corporate integrity that includes criteria for businesses dealing with moral dilemmas.[14]

Another use of the term "integrity" appears in Michael Jensen's and Werner Erhard's paper, "Integrity: A Positive Model that Incorporates the Normative Phenomenon of Morality, Ethics, and Legality". The authors model integrity as the state of being whole and complete, unbroken, unimpaired, sound, and in perfect condition.
They posit a model of integrity that provides access to increased performance for individuals, groups, organizations, and societies. Their model "reveals the causal link between integrity and increased performance, quality of life, and value-creation for all entities, and provides access to that causal link."[15] According to Muel Kaptein, integrity is not a one-dimensional concept. In his book he presents a multifaceted perspective of integrity. Integrity relates, for example, to compliance to the rules as well as to social expectations, to morality as well as to ethics, and to actions as well as to attitude.[8]

Electronic signals are said to have integrity when there is no corruption of information between one domain and another, such as from a disk drive to a computer display. Such integrity is a fundamental principle of information assurance. Corrupted information is untrustworthy; uncorrupted information is of value.
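A common way to detect such corruption in practice is to compare checksums of the data in the two domains. A minimal OCaml sketch of this idea (MD5 via the standard Digest module, chosen only for brevity; the data strings are made up):

    (* If the digest computed after transfer differs from the digest computed
       before, the data has lost integrity. *)
    let fingerprint data = Digest.to_hex (Digest.string data)

    let () =
      let sent     = "pay 100 to alice" in
      let received = "pay 900 to alice" in   (* corrupted in transit *)
      Printf.printf "sent:     %s\nreceived: %s\nintact:   %b\n"
        (fingerprint sent) (fingerprint received)
        (fingerprint sent = fingerprint received)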
https://en.wikipedia.org/wiki/Integrity
Affirmed, United States v. Carpenter, 819 F.3d 880 (6th Cir. 2016). Remanded for resentencing, 788 Fed. Appx. 364 (6th Cir. 2019). Affirmed, No. 22-1198 (6th Cir. 2023). Rehearing en banc denied (6th Cir. 2023).

Carpenter v. United States, 585 U.S. 296 (2018), is a landmark United States Supreme Court case concerning the privacy of historical cell site location information (CSLI). The Court held that government entities violate the Fourth Amendment to the United States Constitution when accessing historical CSLI records containing the physical locations of cellphones without a search warrant.[1]

Prior to Carpenter, government entities could obtain cellphone location records from service providers by claiming the information was required as part of an investigation, without a warrant, but the ruling changed this procedure. Recognizing the influence of new consumer communications devices in the 2010s, the Court expanded its conceptions of constitutional rights toward the privacy of this type of data. However, the Court emphasized that the Carpenter ruling was narrowly restricted to the precise types of information and search procedures that were relevant to this case.[2][3]

Cellular telephone service providers are able to find the location of cell phones through either global positioning system (GPS) data or cell site location information (CSLI), in the process of connecting calls and data transmissions. CSLI is captured by nearby cell towers, and this information is used to triangulate the location of phones.[4] Service providers capture and store this data for business purposes, such as troubleshooting, maximizing network efficiencies, and determining whether to charge customers roaming fees for particular calls.[5] The data can also illustrate the historical movements of a cellphone. Thus, anyone with access to this data has the ability to know where the phone has been and what other cell phones were in the same area at a given time. When users travel with their cellphones, this data can theoretically illustrate every place a person has traveled, and possibly the locations of other people encountered via their corresponding data.[6]

Prior to Carpenter, the Supreme Court consistently held that a person had no reasonable expectation of privacy in regard to information voluntarily turned over to third parties such as telephone companies, and therefore a search warrant is not required when government officials seek this information.[7] This legal theory is known as the third-party doctrine, established by the Supreme Court in Smith v. Maryland (1979), in which the Court determined that government can obtain a list of phone numbers dialed from a suspect's phone.[8]

By the 2010s, cellphones and particularly smartphones had become important tools for nearly every person in the United States.[9] Many applications, such as GPS navigation and location tools, require a phone to send and receive information constantly, including the exact location of the phone, often without an affirmative action on the part of its owner. As technology advanced in the 2010s, the Supreme Court began to modify its precedents on government searches of personal communications devices, given new consumer behaviors that may transcend the third-party doctrine.[10]

Between December 2010 and March 2011, several individuals in the Detroit, Michigan area conspired and participated in armed robberies at RadioShack and T-Mobile stores across the region.[11] In April 2011, four of the robbers were captured and arrested.
The petitioner, Timothy Carpenter, was not among the initial group of arrestees. One of those arrested confessed and turned over his phone so thatFBIagents could review the calls made from his phone around the time of the robberies.[1]The agents obtained a search warrant to inspect the information in that arrestee's phone, in order to find additional contacts of the arrestee and compile more evidence about the crime ring.[12][13] From the historical cell site records on the arrestee's phone, the agents confirmed that Timothy Carpenter was also part of the crime ring, and proceeded to compile information about the location of his phone over 127 days. In turn, this information revealed that Carpenter had been within a two-mile radius of four robberies at the times they were perpetrated.[1]This evidence was used to support Carpenter's arrest. At criminal court, Carpenter was found guilty of several counts of aiding and abetting robberies that affected interstate commerce, and another count of using a firearm during a violent crime. He was sentenced to 116 years in prison.[14] Carpenter appealed his conviction and sentence to theUnited States Court of Appeals for the Sixth Circuit, arguing that the CSLI evidence used against him should be suppressed because the police had not obtained a warrant pertaining tohisCSLI records before searching through them. In 2015, the Circuit Court upheld Carpenter's conviction.[15]This ruling was largely based on theSmith v. Marylandprecedent, stating that Carpenter used cellular telephone networks voluntarily, and per thethird-party doctrinehe had noreasonable expectationthat the data should be private. Thus, review of that information by the police did not constitute a "search" and did not require a warrant under theFourth Amendment.[16] Carpenter appealed this ruling to the U.S. Supreme Court, which grantedcertiorariin 2016.[17][18] Twentyamicus curiaebriefs were filed by interested organizations, scholars, and corporations for Carpenter's case.[19]Some considered the case to be the most important Fourth Amendment dispute to come before the Supreme Court in a generation.[20][21]The Court issued its decision in 2018, with the majority opinion written by Chief JusticeJohn Roberts. The Court's ruling recognized that theCarpentercase revealed a contradiction between two lines of Supreme Court rulings on the matter of police searches of personal communications information.[1]InUnited States v. Jones(2012) the Court had ruled thatGPS trackingcould constitute a search under theFourth Amendmentas a violation of a person'sreasonable expectation of privacy.[22]Meanwhile, the Court had held inSmith v. Maryland(1979) that thethird-party doctrineabsolved the government from warrant requirements when searching through telephone records.[23] Ultimately, inCarpenterthe court determined that the third-party doctrine could not be extended to historicalcell site location information(CSLI). Instead, the Court compared "detailed, encyclopedic, and effortlessly compiled" CSLI records to theGPSinformation at issue inUnited States v. Jones, recognizing that both forms of data accord the government the ability to track individuals' past movements.[24]Furthermore, the Court noted that CSLI could pose even greater privacy risks thanGPSdata, as the prevalence ofcellphonescould accord the government "near perfect surveillance" of an individual's movements. 
Accordingly, the Court ruled that, under theFourth Amendment, the government must obtain asearch warrantin order to access historical CSLI records.[1] Roberts argued that technology "has afforded law enforcement a powerful new tool to carry out its important responsibilities. At the same time, this tool risks Government encroachment of the sort the Framers [of the U.S. Constitution], after consulting the lessons of history, drafted the Fourth Amendment to prevent."[25]As stated in the opinion, "Unlike the nosy neighbor who keeps an eye on comings and goings, they [new technologies] are ever alert, and their memory is nearly infallible. There is a world of difference between the limited types of personal information addressed inSmith[...] and the exhaustive chronicle of location information casually collected by wireless carriers today."[26] However, Roberts stressed that theCarpenterdecision was a very narrow one and did not affect other uses of thethird-party doctrine, such as searches of banking records. Similarly, he noted that the decision did not prevent the collection of CSLI without a warrant in cases of emergency or for issues of national security.[27] JusticeAnthony Kennedy, in a dissenting opinion joined by Thomas and Alito, cautioned against the limitations on law enforcement inherent in the majority opinion. According to Kennedy, the ruling "places undue restrictions on the lawful and necessary enforcement powers exercised not only by the Federal Government, but also by law enforcement in every State and locality throughout the Nation. Adherence to this Court's longstanding precedents and analytic framework would have been the proper and prudent way to resolve this case."[28] In another dissent, JusticeSamuel Alitowrote: "I fear that today's decision will do far more harm than good. The Court's reasoning fractures two fundamental pillars of Fourth Amendment law, and in doing so, it guarantees a blizzard of litigation while threatening many legitimate and valuable investigative practices upon which law enforcement has rightfully come to rely."[29] In his dissent, JusticeNeil Gorsuchargued that the Fourth Amendment had lost its original meaning based on property rights, stating that it "grants you the right to invoke its guarantees whenever one of your protected things (your person, your house, your papers, or your effects) is unreasonably searched or seized. Period."[30]Gorsuch further recommended that thethird-party doctrine, as well asKatz v. United States, be overturned as inconsistent with the original meaning and application of the Fourth Amendment.[31]On the facts of the case, Gorsuch stressed that CSLI data is personal property, and its storage by telephone companies should be immaterial as the company is serving, in effect, as abailee.[32] JusticeThomasargued, in his dissent, that what mattered was not if there was a search, but rather on whose property was searched.[33]Thomas points that the Fourth Amendment guarantees each person to be secure from unreasonable searches intheir own property.He argues that the cell phone records were the property of the phone service provider, since Carpenter did not keep the records, maintain the records, nor could he destroy them. Therefore, Carpenter could not bring suit under a Fourth Amendment violation, since it was not his property which was searched. Additionally, Thomas criticizes theKatzdecision's framework, on which this case, and other cases relied on by the majority such asSmith v. MarylandandUnited States v. 
Jones,draw from heavily. Thomas says that theKatztest has, "no basis in the text or history of the Fourth Amendment. And, it invites courts to make judgements about policy, not law."[34]Thomas goes on further to write that, "The Fourth Amendment, as relevant here, protects '[t]he right of people to be secure in their persons, houses, papers, and effects, against unreasonable searches.' By defining 'search' to mean 'any violation of reasonable expectations of privacy,' theKatztest misconstrues virtually every one of those words."[35]Thomas concludes by saying theKatztest is a failed experiment, and that the court should reconsider it. After the Supreme Court ruling, Carpenter's criminal conviction wasremandedto theSixth Circuitto determine if it could stand without the CSLI data that required a warrant per the Supreme Court. Carpenter's lawyers argued that the data should have been subject to theexclusionary ruleand thrown out as material collected without a proper warrant under the Supreme Court's ruling. However, the Circuit Court judges concluded that the FBI was acting ingood faithwith respect to collecting the data based on the law at the time the crimes were committed.[36]This type of good faith exemption is permitted per another Supreme Court precedent,Davis v. United States(2011).[37]The evidence was allowed to stand, and the Sixth Circuit again upheld Carpenter's criminal conviction and prison sentence. His arguments concerning sentencing procedures under the recently enactedFirst Step Actwere rejected.[36] The Supreme Court's ruling inCarpenterwas narrow and did not otherwise change thethird-party doctrinerelated to other business records that might incidentally reveal location information, nor did it overrule prior decisions concerning conventional surveillance techniques and tools such assecurity cameras. The Court did not extend its ruling to other matters related to cellphones not presented inCarpenter, including real-time CSLI or "tower dumps" (the downloading of information about all the devices that were connected to a particular cell site during a particular interval). The opinion also did not consider other data collection goals involving foreign affairs or national security.[2][3]
https://en.wikipedia.org/wiki/Carpenter_v._United_States
Social media measurement, also called social media controlling, is the management practice of evaluating successful social media communications of brands, companies, or other organizations.[1] Key performance indicators may be measured by extracting information from social media channels,[2] such as blogs, wikis, micro-blogs such as Twitter, social networking sites, video/photo sharing websites and, from time to time, forums. It is also used by companies to gauge current trends in the industry.[3] The process first gathers data from different websites and then performs analysis based on metrics such as time spent on the page, click-through rate, content shares, comments, and text analytics that identify positive or negative emotions about the brand.[4][5] Other social media metrics include share of voice, owned mentions, and earned mentions. The social media measurement process starts with defining a goal that needs to be achieved and the expected outcome of the process. The expected outcome varies with the goal and is usually measured by a variety of metrics. This is followed by defining possible social strategies to achieve the goal. The next step is designing the strategies to be used and setting up configuration tools that ease the collection of data. The strategies and tools are then deployed in real time; this step involves conducting quality-assurance tests of the methods used to collect the data. In the final step, the data collected from the system is analysed and, if the need arises, the methodology is refined at run time. The last step ensures that the result obtained is more closely aligned with the goal defined in the first step.[6] Acquiring data from social media requires exploring user participation and the user population in order to retrieve and collect many kinds of data (e.g. comments, downloads).[7] There are several prevalent techniques to acquire such data, including network traffic analysis, ad-hoc applications and crawling.[8] Network traffic analysis - Network traffic analysis is the process of capturing network traffic and observing it closely to determine what is happening in the network. It is primarily done to improve the performance, security and general management of the network.[9] However, because of concerns about privacy violations on the Internet, network traffic analysis is often restricted by governments. Furthermore, high-speed links are poorly suited to traffic analysis because the packet-sniffing mechanism can be overloaded at such rates.[10] Ad-hoc application - An ad-hoc application is an application that provides services and games to social network users by building on the APIs offered by social network companies (such as the Facebook Developer Platform). The infrastructure of an ad-hoc application lets the user interact with the interface layer instead of the application servers. The API provides a path for the application to access information after the user logs in.[8] Moreover, the size of the data set collected varies with the popularity of the social media platform: platforms with a large user base yield more data than platforms with a smaller one.[8] Scraping is a process in which APIs collect online data from social media; the data collected by scraping is in raw format.
However, gaining access to these types of data can be difficult because of their commercial value.[11] Crawling - Crawling is a process in which a web crawler creates an index of all the words on a web page, stores them, then follows all the hyperlinks on that page and indexes and stores those pages in turn.[12] It is the most popular technique for data acquisition and is also well known for being easy to implement in prevalent object-oriented programming languages such as Java or Python. Most importantly, social network companies (YouTube, Flickr, Facebook, Instagram, etc.) are friendly to crawling by providing public APIs.[13] Monitoring social media allows researchers to gain insight into a brand's overall visibility on social media, to measure the impact of campaigns, to identify opportunities for engagement, to assess competitor activity and share of voice, and to detect impending crises. It can also provide valuable information about emerging trends and what consumers and clients think about specific topics, brands or products.[14] This is the work of a cross-section of groups that include market researchers, PR staff, marketing teams, social-engagement and community staff, agencies and sales teams. Several providers have developed tools to facilitate the monitoring of a variety of social media channels - from blogging to internet video to internet forums. This allows companies to track what consumers say about their brands and actions, and then react to these conversations and interact with consumers through social media platforms.[2] Apart from commercial applications, social media monitoring has become a pervasive technique applied by public organizations and governments. Monitoring is a tradition within the public sector, and social-media monitoring provides a real-time approach to detecting and responding to social developments. Governments have come to realize the need for strategies to cope with surprises arising from the rapid expansion of public issues. Sobkowicz[15] introduced a framework with three blocks: social-media opinion tracking, simulation and forecasting. Bekkers introduced the application of social media monitoring in the Netherlands.[16][need quotation to verify] Public organizations in the Netherlands (such as the Tax Agency and the Education Ministry) have started to use social media monitoring to obtain better insight into the sentiments of target groups. On the one hand, social media monitoring enables the public sector to provide timely and efficient answers to the public; on the other hand, it raises ethical concerns such as transparency and privacy. Social media management software (SMMS) is software that facilitates an organization's ability to engage successfully in social media across different communication channels. SMMS is used to monitor inbound and outbound conversations, support customer interaction, audit or document social marketing initiatives and evaluate the usefulness of a social media presence.[17] It can be difficult to measure all social media conversations. Due to privacy settings and other issues, not all social media conversations can be found and reported by monitoring tools. However, whilst social media monitoring cannot give absolute figures, it can be extremely useful for identifying trends and for benchmarking, in addition to the uses mentioned above. These findings can, in turn, influence and shape future business decisions.
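The crawling technique described above can be sketched with a few standard-library calls. This is a minimal illustration, not any monitoring product's implementation; the seed URL, the page limit and the decision to follow every link are assumptions made for the example:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url: str, max_pages: int = 5):
    """Fetch pages starting from seed_url, store their HTML, follow links."""
    queue, seen, index = [seed_url], set(), {}
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages
        parser = LinkExtractor()
        parser.feed(html)
        index[url] = html  # store the raw page for later text analytics
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

# pages = crawl("https://example.org")  # crawl a handful of pages politely
```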
In order to access social media data (posts, tweets and metadata) and to analyze and monitor social media, many companies use software technologies built for business. Most social media networks allow users to add a location to their posts. The location can be classified as either 'at-the-location' or 'about-the-location'. "'At-the-location' services can be defined as services where location-based content is created at the geographic location. 'About-the-location' services can be defined as services which are referring to a particular location but the content is not necessarily created in this particular physical place."[18] The added information available from geotagged posts means that they can be displayed on a map, so a location can be used as the starting point of a social media search rather than a keyword or hashtag. This has major implications for disaster relief, event monitoring, and safety and security professionals, since a large portion of their job is related to tracking and monitoring specific locations. Various monitoring platforms use different technologies for social media monitoring and measurement. These technology providers may connect to the APIs that social platforms create for third-party developers to build their own applications and services that access data. Facebook's Graph API is one such API to which social media monitoring products connect to pull data.[19] Some social media monitoring and analytics companies make calls to data providers each time an end user issues a query. Others also store and index social posts to offer historical data to their customers. Other monitoring companies use crawlers and spidering technology to find keyword references. (See also: Semantic analysis, Natural language processing.) Basic implementation involves curating data from social media on a large scale and analyzing the results to make sense of it. Examples of these platforms include Hootsuite, Sprout Social, and Google Analytics.[20]
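Since geotagged posts carry coordinates, a location-first search of the kind described above usually reduces to filtering posts by distance from a point of interest. Below is a minimal sketch using the haversine formula; the post structure, coordinates and radius are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical geotagged posts: (text, latitude, longitude).
posts = [
    ("Road closed near the stadium", 48.8584, 2.2945),
    ("Great coffee downtown", 48.8606, 2.3376),
    ("Flooding reported on Main St", 40.7128, -74.0060),
]

# Start the search from a location rather than a keyword or hashtag.
centre = (48.8566, 2.3522)   # point of interest
radius_km = 10.0

nearby = [p for p in posts if haversine_km(centre[0], centre[1], p[1], p[2]) <= radius_km]
for text, lat, lon in nearby:
    print(text)
```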
https://en.wikipedia.org/wiki/Social_media_measurement
Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Schank developed the model to represent knowledge for natural language input into computers. Partly influenced by the work of Sydney Lamb, his goal was to make the meaning independent of the words used in the input, i.e. two sentences identical in meaning would have a single representation. The system was also intended to draw logical inferences.[2] The model uses a set of basic representational tokens.[3] A set of conceptual transitions then acts on this representation; e.g. an ATRANS is used to represent a transfer such as "give" or "take", a PTRANS is used to act on locations such as "move" or "go", and an MTRANS represents mental acts such as "tell". A sentence such as "John gave a book to Mary" is then represented as the action of an ATRANS on two real-world objects, John and Mary.
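A minimal sketch of how the "John gave a book to Mary" example might be encoded as an ATRANS acting on real-world objects follows; the class names and slot names are illustrative assumptions, not Schank's exact notation:

```python
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    name: str

@dataclass
class ConceptualAct:
    primitive: str            # e.g. "ATRANS", "PTRANS", "MTRANS"
    actor: PhysicalObject
    obj: PhysicalObject
    source: PhysicalObject
    recipient: PhysicalObject

john = PhysicalObject("John")
mary = PhysicalObject("Mary")
book = PhysicalObject("book")

# "John gave a book to Mary" and "Mary received a book from John" would both
# map to the same transfer-of-possession representation, independent of wording.
gave = ConceptualAct("ATRANS", actor=john, obj=book, source=john, recipient=mary)
print(gave)
```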
https://en.wikipedia.org/wiki/Conceptual_dependency_theory
Financial analysis (also known as financial statement analysis, accounting analysis, or analysis of finance) refers to an assessment of the viability, stability, and profitability of a business, sub-business, project or investment. It is performed by professionals who prepare reports using ratios and other techniques that make use of information taken from financial statements and other reports. These reports are usually presented to top management as one of their bases for making business decisions. Financial analysis may be used to inform a variety of such decisions. Financial analysts often assess a firm's profitability, solvency, liquidity, and stability. Solvency and liquidity are both assessed from the company's balance sheet, which indicates the financial condition of a business as of a given point in time. Financial analysts often compare financial ratios (of solvency, profitability, growth, etc.). Comparing financial ratios is merely one way of conducting financial analysis. Financial analysts can also use percentage analysis, which involves expressing a series of figures as a percentage of some base amount.[1] For example, a group of items can be expressed as a percentage of net income. When proportionate changes in the same figure over a given time period are expressed as percentages, this is known as horizontal analysis.[2] Vertical or common-size analysis reduces all items on a statement to a "common size" as a percentage of some base value, which assists comparability with other companies of different sizes.[3] Typically, all income statement items are divided by sales, and all balance sheet items are divided by total assets.[4] Another method is comparative analysis, which provides a better way to determine trends: the same information for two or more time periods is presented side by side to allow for easy analysis.[5] Financial ratios also face several theoretical challenges.
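As an illustration of the vertical (common-size) and horizontal analyses described above, the sketch below expresses income-statement items as a percentage of sales and computes year-over-year growth; all figures are made up for the example:

```python
# Hypothetical income statement figures (in thousands).
income_statement = {
    "Sales": 1_000,
    "Cost of goods sold": 600,
    "Operating expenses": 250,
    "Net income": 150,
}

# Vertical (common-size) analysis: express every line as a percentage of sales.
base = income_statement["Sales"]
common_size = {item: value / base for item, value in income_statement.items()}
for item, share in common_size.items():
    print(f"{item:<20} {share:6.1%}")

# Horizontal analysis: proportionate change of the same figure over time.
sales_prior, sales_current = 900, 1_000
growth = (sales_current - sales_prior) / sales_prior
print(f"Sales growth: {growth:.1%}")
```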
https://en.wikipedia.org/wiki/Financial_analysis
Testability is a primary aspect of science[1] and the scientific method; it has two components. In short, a hypothesis is testable if there is a possibility of deciding whether it is true or false based on experimentation by anyone. This allows anyone to decide whether a theory can be supported or refuted by data. However, the interpretation of experimental data may also be inconclusive or uncertain. Karl Popper introduced the concept that scientific knowledge has the property of falsifiability, as published in The Logic of Scientific Discovery.[2]
https://en.wikipedia.org/wiki/Testability
"The Magic Words are Squeamish Ossifrage" was the solution to a challengeciphertextposed by the inventors of theRSAcipherin 1977. The problem appeared inMartin Gardner'sMathematical Games columnin the August 1977 issue ofScientific American.[1]It was solved in 1993–94 by a large, joint computer project co-ordinated byDerek Atkins,Michael Graff,Arjen LenstraandPaul Leyland.[2][3][4][5]More than 600 volunteers contributedCPUtime from about 1,600 machines (two of which werefaxmachines) over six months. The coordination was done via theInternetand was one of the first such projects. Ossifrage('bone-breaker', from Latin) is an older name for thebearded vulture, a scavenger famous for dropping animal bones and live tortoises on top of rocks to crack them open. The 1993–94 effort began the tradition of using the words "squeamish ossifrage" incryptanalyticchallenges. The difficulty ofbreaking the RSA cipher—recovering aplaintextmessage given a ciphertext and the public key—is connected to the difficulty offactoringlarge numbers. While it is not known whether the two problems are mathematically equivalent, factoring is currently the only publicly known method of directly breaking RSA. Thedecryptionof the 1977 ciphertext involved the factoring of a 129-digit (426 bit) number,RSA-129, in order to recover the plaintext. Ron Rivestestimated in 1977 that factoring a 125-digit semiprime would require 40quadrillionyears, using the bestalgorithmknown and the fastest computers of the day.[6]In their original paper they recommended using 200-digit (663 bit) primes to provide a margin of safety against future developments,[7]though it may have only delayed the solution as a 200-digit semiprime was factored in 2005.[8][9]However, efficient factoring algorithms had not been studied much at the time, and a lot of progress was made in the following decades. Atkins et al. used thequadratic sievealgorithm invented byCarl Pomerancein 1981. While the asymptotically fasternumber field sievehad just been invented, it was not clear at the time that it would be better than the quadratic sieve for 129-digit numbers. The memory requirements of the newer algorithm were also a concern.[10] There was a US$100 prize associated with the challenge, which the winners donated to theFree Software Foundation. In 2015, the same RSA-129 number was factored in about one day, with the CADO-NFS open source implementation of number field sieve, using a commercial cloud computing service for about $30.[11]
https://en.wikipedia.org/wiki/The_Magic_Words_are_Squeamish_Ossifrage
Markov renewal processes are a class of random processes in probability and statistics that generalize the class of Markov jump processes. Other classes of random processes, such as Markov chains and Poisson processes, can be derived as special cases among the class of Markov renewal processes, while Markov renewal processes are special cases among the more general class of renewal processes. In the context of a jump process that takes states in a state space $\mathrm{S}$, consider the set of random variables $(X_n, T_n)$, where $T_n$ represents the jump times and $X_n$ represents the associated states in the sequence of states. Let the sequence of inter-arrival times be $\tau_n = T_n - T_{n-1}$. In order for the sequence $(X_n, T_n)$ to be considered a Markov renewal process, the following condition should hold:

$$\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n = i) \qquad \forall n \ge 1,\ t \ge 0,\ i, j \in \mathrm{S}$$
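Under the defining property above, the process can be simulated by drawing, at each jump, the holding time and the next state from a kernel that depends only on the current state. In the minimal sketch below, the two-state kernel and the uniform (rather than exponential) holding times are illustrative assumptions; the non-exponential holding times are what make this a Markov renewal process rather than a simple Markov jump process:

```python
import random

# Next-state probabilities and holding-time ranges, conditional only on X_n.
P = {"A": {"A": 0.3, "B": 0.7}, "B": {"A": 0.6, "B": 0.4}}
hold_range = {"A": (0.5, 1.5), "B": (1.0, 4.0)}   # uniform tau_n, state-dependent

def simulate(x0="A", jumps=10, seed=1):
    """Generate (state, jump time) pairs (X_n, T_n) of a Markov renewal process."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(x0, 0.0)]
    for _ in range(jumps):
        # Inter-arrival time tau_{n+1} and next state X_{n+1} depend only on X_n = x.
        tau = rng.uniform(*hold_range[x])
        x = rng.choices(list(P[x]), weights=list(P[x].values()))[0]
        t += tau
        path.append((x, t))
    return path

for state, time in simulate():
    print(f"T = {time:6.2f}  state = {state}")
```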
https://en.wikipedia.org/wiki/Semi-Markov_process
The usage share of an operating system is the percentage of computers running that operating system (OS). These statistics are estimates, as wide-scale OS usage data is difficult to obtain and measure. Reliable primary sources are limited, and data collection methodology is not formally agreed upon. Currently, devices connected to the internet allow web data collection to approximately measure OS usage. As of March 2025, Android, which uses the Linux kernel, is the world's most popular operating system with 46% of the global market, followed by Windows with 25%, iOS with 18%, macOS with 6%, and other operating systems with 5%.[1] This is for all device types excluding embedded devices. Linux is also the most used OS for web servers, and the most common Linux distribution is Ubuntu, followed by Debian. Linux has almost caught up with the second-most popular desktop OS, macOS, in some regions, such as South America,[7] and in Asia it is at 6.4% (7% with ChromeOS) versus 9.7% for macOS.[8] In the US, ChromeOS is third at 5.5%, followed by desktop Linux at 4.3%, which can arguably be combined into a single figure of 9.8%.[9][10] The most numerous type of device with an operating system is the embedded system. Not all embedded systems have operating systems, instead running their application code on the "bare metal"; of those that do have operating systems, a high percentage are standalone or do not have a web browser, which makes their usage share difficult to measure. Some operating systems used in embedded systems are more widely used than some of those mentioned above; for example, modern Intel microprocessors contain an embedded management processor running a version of the Minix operating system.[11] According to Gartner, worldwide device shipments (referring to wholesale) by operating system cover smartphones, tablets, laptops and PCs together. Shipments (to stores) do not necessarily translate into sales to consumers, so suggesting that the numbers indicate popularity or usage could be misleading. Not only do smartphones sell in higher numbers than PCs, but they also account for much more by dollar value, with the gap projected to widen to well over double.[19] On 27 January 2016, Paul Thurrott summarized the operating system market, the day after Apple announced "one billion devices": Apple's "active installed base" is now one billion devices. [..] Granted, some of those Apple devices were probably sold into the marketplace years ago. But that 1 billion figure can and should be compared to the numbers Microsoft touts for Windows 10 (200 million, most recently) or Windows more generally (1.5 billion active users, a number that hasn't moved, magically, in years), and that Google touts for Android (over 1.4 billion, as of September). My understanding of iOS is that the user base was previously thought to be around 800 million strong, and when you factor out Macs and other non-iOS Apple devices, that's probably about right. But as you can see, there are three big personal computing platforms.
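The web-based data collection mentioned in the opening paragraph (and the browser detection used by services such as StatCounter, discussed later in this article) generally infers the operating system from the browser's User-Agent string. The sketch below is a rough illustration of that classification step; the substring rules and log lines are simplified assumptions, not any measurement service's actual method:

```python
from collections import Counter

def classify_os(user_agent: str) -> str:
    """Very rough OS bucket from a User-Agent string."""
    ua = user_agent.lower()
    if "android" in ua:
        return "Android"
    if "iphone" in ua or "ipad" in ua:
        return "iOS/iPadOS"
    if "windows" in ua:
        return "Windows"
    if "cros" in ua:
        return "ChromeOS"
    if "mac os x" in ua:
        return "macOS"
    if "linux" in ua:
        return "Linux"
    return "Unknown"

# Hypothetical log lines; real surveys aggregate millions of such hits.
hits = [
    "Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
]

shares = Counter(classify_os(h) for h in hits)
total = sum(shares.values())
for os_name, count in shares.items():
    print(f"{os_name:10} {count / total:.0%}")
```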
For 2015 (and earlier), Gartner reports for "the year, worldwide PC shipments declined for the fourth consecutive year, which started in 2012 with the launch of tablets" with an 8% decline in PC sales for 2015 (not including cumulative decline in sales over the previous years).[21] Microsoft backed away from their goal of one billion Windows 10 devices in three years (or "by the middle of 2018")[22]and reported on 26 September 2016 that Windows 10 was running on over 400 million devices,[23]and in March 2019 on more than 800 million.[24] In May 2020,Gartnerpredicted further decline in all market segments for 2020 due toCOVID-19, predicting a decline of 13.6% for all devices. while the "Work from Home Trend Saved PC Market from Collapse", with only a decline of 10.5% predicted for PCs. However, in the end, according to Gartner, PC shipments grew 10.7% in the fourth quarter of 2020 and reached 275 million units in 2020, a 4.8% increase from 2019 and the highest growth in ten years." Apple in 4th place for PCs had the largest growth in shipments for a company in Q4 of 31.3%, while "the fourth quarter of 2020 was another remarkable period of growth for Chromebooks, with shipments increasing around 200% year over year to reach 11.7 million units. In 2020, Chromebook shipments increased over 80% to total nearly 30 million units, largely due to demand from the North American education market." Chromebooks sold more (30 million) than Apple's Macs worldwide (22.5 million) in pandemic year 2020.[25] According to the Catalyst group, the year 2021 had record high PC shipments with total shipments of 341 million units (including Chromebooks), 15% higher than 2020 and 27% higher than 2019, while being the largest shipment total since 2012.[26] According to Gartner, worldwide PC shipments declined by 16.2% in 2022, the largest annual decrease since the mid-1990s, due to geopolitical, economic, and supply chain challenges.[27] In 2015,eMarketerestimated at the beginning of the year that the tabletinstalled basewould hit one billion[28]for the first time (with China's use at 328 million, whichGoogle Playdoesn't serve or track, and the United States's use second at 156 million). At the end of the year, because of cheap tablets – not counted by all analysts – that goal was met (even excluding cumulative sales of previous years) as: Sales quintupled to an expected 1 billion units worldwide this year, from 216 million units in 2014, according to projections from the Envisioneering Group. While that number is far higher than the 200-plus million units globally projected by research firms IDC, Gartner and Forrester, Envisioneering analyst Richard Doherty says the rival estimates miss all the cheap Asian knockoff tablets that have been churning off assembly lines.[..] Forrester says its definition of tablets "is relatively narrow" while IDC says it includes some tablets by Amazon — but not all.[..] The top tech purchase of the year continued to be the smartphone, with an expected 1.5 billion sold worldwide, according to projections from researcher IDC. Last year saw some 1.2 billion sold.[..] Computers didn’t fare as well, despite the introduction of Microsoft's latest software upgrade, Windows 10, and the expected but not realized bump it would provide for consumers looking to skip the upgrade and just get a new computer instead. Some 281 million PCs were expected to be sold, according to IDC, down from 308 million in 2014. 
Folks tend to be happy with the older computers and keep them for longer, as more of our daily computing activities have moved to the smartphone.[..] While Windows 10 got good reviews from tech critics, only 11% of the 1-billion-plus Windows user base opted to do the upgrade, according to Microsoft. This suggests Microsoft has a ways to go before the software gets "hit" status. Apple's new operating system El Capitan has been downloaded by 25% of Apple's user base, according to Apple. This conflicts with statistics from IDC that say the tablet market contracted by 10% in 2015 with onlyHuawei, ranked fifth, with big gains, more than doubling their share; for fourth quarter 2015, the five biggest vendors were the same except thatAmazon Firetablets ranked third worldwide, new on the list, enabled by its not quite tripling of market share to 7.9%, with itsFire OSAndroid-derivative.[30] Gartnerexcludes some devices from their tablet shipment statistic and includes them in a different category called "premium ultramobiles" with screen sizes of more than 10" inches.[35] There are more mobile phone owners than toothbrush owners,[36]with mobile phones the fastest growing technology in history.[citation needed]There are a billion more active mobile phones in the world than people (and many more than 10 billion sold so far with less than half still in use), explained by the fact that some people have more than one, such as an extra for work.[37]All the phones have an operating system, but only a fraction of them are smartphones with an OS capable of running modern applications. In 2018, 3.1 billion smartphones and tablets were in use across the world (with tablets, a small fraction of the total, generally running the same operating systems, Android or iOS, the latter being more popular on tablets. In 2019, a variant of iOS callediPadOSbuilt for iPad tablets was released). On 28 May 2015, Google announced that there were 1.4 billion Android users and 1 billion Google play users active during that month.[38][39]This changed to 2 billion monthly active users in May 2017.[40][41] By late 2016, Android had been said to be "killing" Apple's iOS market share (i.e. its declining sales of smartphones, not just relatively but also by number of units, when the whole market was increasing).[42]Gartner's press release stated: "Apple continued its downward trend with a decline of 7.7 percent in the second quarter of 2016",[43]which is their decline, based on absolute number of units, that underestimates the relative decline (with the market increasing), along with the misleading "1.7percent [point]" decline. That point decline means an 11.6% relative decline (from 14.6% down to 12.9%). Although by units sold Apple was declining in the late 2010s, the company was almost the only vendor making any profit in the smartphone sector from hardware sales alone. In Q3 2016 for example, they captured 103.6% of the market profits.[44] In May 2019 the biggest smartphone companies (by market share) were Samsung, Huawei and Apple, respectively.[45] In November 2024, a new competitor to Android and iOS emerged, when sales of the HuaweiMate 70started with the all-new operating systemHarmonyOS NEXTinstalled[46]on the flagship device. 
Future Huawei devices are to be sold mainly with this operating system, creating a third player on the market for smartphone operating systems.[47] The following table shows worldwide smartphone sales to end users by operating systems, as measured byGartner,International Data Corporation (IDG)and others: Data from various sources published over the 2021/2022 period is summarized in the table below. All of these sources monitor a substantial number of websites, any statistics that relate to only one web site have been excluded. Android currently ranks highest,[67]above Windows (incl. Xbox console) systems.Windows Phoneaccounted for 0.51% of the web usage, before it was discontinued.[68] Considering all personal computers,Microsoft Windowsis well below 50% usage share on every continent, and at 30% in the US (24% single-day low) and in many countries lower, e.g. China, and in India at 19% (12% some days) and Windows' lowest share globally was 29% in May 2022 (25% some days), and 29% in the US.[69] For a short time, iOS was slightly more popular than Windows in the US, but this is no longer the case. Worldwide, Android holds 45.49%, more than Windows at 25.35%, and iOS third at 18.26%. In Africa, Android is at 66.07%, Windows is 13.46 (and iOS third at 10.24%).[70] Before iOS became the most popular operating system in any independent country, it was most popular in Guam, anunincorporated territory of the United States, for four consecutive quarters in 2017–18,[71][72]although Android is now the most popular there.[73]iOS has been the highest ranked OS inJersey(a BritishCrown dependencyin Europe) for years, by a wide margin, and iOS was also highest ranked in Falkland Islands, aBritish Overseas Territory, for one quarter in 2019, before being overtaken by Android in the following quarter.[74][75]iOS is competitive with Windows in Sweden, where some days it is more used.[76] The designation of an "Unknown" operating system is strangely high in a few countries such asMadagascarwhere it was at 32.44% (no longer near as high).[77]This may be due to the fact that StatCounter usesbrowser detectionto get OS statistics, and there the most common browsers are not often used. The version breakdown for browsers in Madagascar shows "Other" at 34.9%,[78]and Opera Mini 4.4 is the most popular known browser at 22.1% (plus e.g. 3.34% for Opera 7.6). However browser statistics without version-breakdown has Opera at 48.11% with the "Other" category very small.[79][clarification needed] In China, Android became the highest ranked operating system in July 2016 (Windows has occasionally topped it since then, while since April 2016 it or all non-mobile operating systems haven't outranked mobile operating systems, meaning Android plus iOS).[80]In the Asian continent as a whole, Android has been ranked highest since February 2016 and Android alone has the majority share,[81]because of a large majority in all the most populous countries of the continent, up to 84% in Bangladesh, where it has had over 70% share for over four years.[82]Since August 2015, Android is ranked first, at 48.36% in May 2016, in the African continent – when it took a big jump ahead of Windows 7,[83]and thereby Africa joined Asia as a mobile-majority continent. China is no longer a desktop-majority country,[84]joining India, which has a mobile-majority of 71%, confirming Asia's significant mobile-majority. Online usage ofLinux kernelderivatives (Android+ChromeOS+ otherLinux) exceeds that of Windows. 
This has been true since some time between January and April 2016, according to W3Counter[85] and StatCounter.[86] However, even before that, the figure for all Unix-like OSes, including those from Apple, was higher than that for Windows. Windows is still the dominant desktop OS, but its dominance varies by region, and it has gradually lost market share to other desktop operating systems (not just to mobile), with the slide very noticeable in the US, where macOS usage more than quadrupled from January 2009 to December 2020 to reach 30.62% (i.e. in the Christmas month; it was 34.72% in April 2020 in the middle of COVID-19, and iOS was more popular overall that year;[97] globally, Windows lost to Android that year,[98] as in the two years prior), with Windows down to 61.136% and ChromeOS at 5.46%, plus traditional Linux at 1.73%.[99] There is little openly published information on the device shipments of desktop and laptop computers. Gartner publishes estimates, but the way the estimates are calculated is not openly published. Another source of market share of various operating systems is StatCounter,[100] which bases its estimate on web use (although this may not be very accurate). Also, sales may overstate usage. Most computers are sold with a pre-installed operating system, with some users replacing that OS with a different one due to personal preference, or installing another OS alongside it and using both. Conversely, sales underestimate usage by not counting unauthorized copies. For example, in 2009, approximately 80% of software sold in China consisted of illegitimate copies.[101] In 2007, the statistics from an automated update of IE7 for registered Windows computers differed from the observed web browser share, leading one writer to estimate that 25–35% of all Windows XP installations were unlicensed.[102] The usage share of Microsoft's then-latest operating system version, Windows 10, slowly increased from July/August 2016, reaching around 27.15% (of all Windows versions, not all desktop or all operating systems) in December 2016. It eventually reached 79.79% on 5 October 2021, the same day on which its successor Windows 11 was released. In the United States, usage of Windows XP has dropped to 0.38% (of all Windows versions), and its global average to 0.59%, while in Africa it is still at 2.71%, and in Armenia it was more than 70% as of 2017.[103] StatCounter web usage data for desktop or laptop operating systems varies significantly by country. For example, in 2017, macOS usage in North America was 16.82%[104] (17.52% in the US[105]), whereas in Asia it was only 4.4%.[106] As of July 2023, macOS usage had increased to 30.81% in North America[107] (31.77% in the US)[108] and to 9.64% in Asia.[109] The 2023 Stack Overflow developer survey counted 87,222 responses, although usage of a particular system as a desktop or as a server was not differentiated; it reports the operating system share among those identifying as professional developers.[126] In June 2016, Microsoft claimed Windows 10 had half the market share of all Windows installations in the US and UK, as quoted by BetaNews: Microsoft's Windows trends page [shows] Windows 10 hit 50 percent in the US (51 percent in the UK, 39 percent globally), while ... Windows 7 was on 38 percent (36 percent in the UK, 46 percent globally). A big reason for the difference in numbers comes down to how they are recorded. ...
actual OS usage (based on web browsing), while Microsoft records the number of devices Windows 10 is installed on. ... Microsoft also only records Windows 7, Windows 8, Windows 8.1 and Windows 10, while NetMarketShare includes both XP and Vista. The digital video game distribution platformSteampublishes a monthly "Hardware & Software Survey", with the statistics below: ^†These figures, as reported by Steam, do not includeSteamOSstatistics.[137] By Q1 2018,mobile operating systemsonsmartphonesincludedGoogle's dominantAndroid(and variants) andApple'siOSwhich combined had an almost 100% market share.[138] Smartphone penetration vs. desktop use differs substantially by country. Some countries, like Russia, still have smartphone use as low as 22.35% (as a fraction of all web use),[139]but in most western countries, smartphone use is close to 50% of all web use. This doesn't mean that only half of the population has a smartphone, could mean almost all have, just that other platforms have about equal use. Smartphone usage share in developing countries is much higher – in Bangladesh, for example, Android smartphones had up to 84% and currently 70% share,[82]and in Mali smartphones had over 90% (up to 95%) share for almost two years.[140][141](A section below has more information on regional trends on the move to smartphones.) There is a clear correlation between the GDP per capita of a country and that country's respective smartphone OS market share, with users in the richest countries being much more likely to choose Apple's iPhone, with Google's Android being predominant elsewhere.[142][143][144] Tablet computers, or simplytablets, became a significant OS market share category starting with Apple'siPad. In Q1 2018, iOS had 65.03% market share and Android had 34.58% market share.[155]Windows tablets may not get classified as such by some analysts, and thus barely register; e.g.2-in-1 PCsmay get classified as "desktops", not tablets. Since 2016, in South America (and Cuba[156]in North America), Android tablets have gained majority,[157]and in Asia in 2017 Android was slightly more popular than the iPad, which was at 49.05% usage share in October 2015.[158][159][160]In Africa, Android tablets are much more popular while elsewhere the iPad has a safe margin. As of March 2015[update], Android has made steady gains to becoming the most popular tablet operating system:[161]that is the trend in many countries, having already gained the majority in large countries (India at 63.25%,[162]and in Indonesia at 62.22%[163]) and in the African continent with Android at 62.22% (first to gain Android majority in late 2014),[164]with steady gains from 20.98% in August 2012[165](Egypt at 62.37%,[166]Zimbabwe at 62.04%[166]), and South America at 51.09% in July 2015.[167](Peru at 52.96%[168]). Asia is at 46%.[169]In Nepal, Android gained majority lead in November 2014 but lost it down to 41.35% with iOS at 56.51%.[170]In Taiwan, as of October 2016, Android after having gained a confident majority, has been on a losing streak.[171]China is a major exception to Android gaining market share in Asia (there Androidphabletsare much more popular than Android tablets, while similar devices get classified as smartphones) where the iPad/iOS is at 82.84% in March 2015.[172] According toStatCounterweb use statistics (a proxy for all use), smartphones are more popular than desktop computers globally (and Android in particular more popular than Windows). 
Including tablets with mobiles/smartphones, as they also run so-calledmobile operating systems, even in the United States (and most countries) are mobiles including tablets more popular than other (older originally made for desktops) operating systems (such as Windows and macOS). Windows in the US (at 33.42%) has only 8% head-start (2.55-percentage points) over iOS only; with Android, that mobile operating system and iOS have 52.14% majority.[180]Alternatively, Apple, with iOS plus their non-mobile macOS (9.33%) has 20% more share (6.7-percentage points more) than Microsoft's Windows in the country where both companies were built. Although desktop computers are still popular in many countries (while overall down to 44.9% in the first quarter of 2017[181]), smartphones are more popular even in many developed countries. A few countries on all continents are desktop-minority with Android more popular than Windows; many, e.g. Poland in Europe, and about half of the countries in South America, and many in North America, e.g. Guatemala, Honduras, Haiti; up to most countries in Asia and Africa[182]with smartphone-majority because of Android, Poland and Turkey in Europe highest with 57.68% and 62.33%, respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use and mobile as a whole gains majority when including the tablet share at 9.12%.[183][184]Spain was also slightly desktop-minority. As of July 2019, Sweden had been desktop-minority for eight weeks in a row.[185] The range of measured mobile web use varies a lot by country, and a StatCounter press release recognizes "India amongst world leaders in use of mobile to surf the internet"[186](of the big countries) where the share is around (or over) 80%[187]and desktop is at 19.56%, with Russia trailing with 17.8% mobile use (and desktop the rest). Smartphones (discounting tablets), first gained majority in December 2016 (desktop-majority was lost the month before),[where?]and it wasn't a Christmas-time fluke, as while close to majority after smartphone majority happened again in March 2017.[188][clarification needed] In the week of 7–13 November 2016, smartphones alone (without tablets) overtook desktop for the first time, albeit for a short period.[189]Examples of mobile-majority countries include Paraguay in South America, Poland in Europe and Turkey and most of Asia and Africa. Some of the world is still desktop-majority, with for example the United States at 54.89% (but not on all days).[190]However, in someterritories of the United States, such asPuerto Rico,[191]desktop is significantly under majority, with Windows just under 25%, overtaken by Android. On 22 October 2016 (and subsequent weekends), mobile showed majority.[192]Since 27 October, the desktop hasn't had a majority, including on weekdays. Smartphones alone have shown majority since 23 December to the end of the year, with the share topping at 58.22% on Christmas Day.[193]To the "mobile"-majority share of smartphones, tablets could be added giving a 63.22% majority. While an unusually high top, a similar high also occurred on Monday 17 April 2017, with the smartphone share slightly lower and tablet share slightly higher, combining to 62.88%. Formerly, according to a StatCounter press release, the world has turned desktop-minority;[194]as of October 2016[update], at about 49% desktop use for that month, but mobile wasn't ranked higher, tablet share had to be added to it to exceed desktop share. For the Christmas season (i.e. 
temporarily, while desktop-minority remains and smartphone-majority on weekends[195][196]), the last two weeks in December 2016, Australia (and Oceania in general)[197]was desktop-minority for the first time for an extended period, i.e. every day from 23 December.[198] In South America, smartphones alone took majority from desktops on Christmas Day,[196]but for a full-week-average, desktop is still at least at 58%.[199] The UK desktop-minority dropped down to 44.02% on Christmas Day and for the eight days to the end of the year.[200]Ireland joined some other European countries with smartphone-majority, for three days after Christmas, topping that day at 55.39%.[201][202] In the US, desktop-minority happened for three days on and around Christmas (while a longer four-day stretch happened in November, and happens frequently on weekends).[203] According to StatCounter web use statistics (a proxy for all use), in the week from 7–13 November 2016, "mobile" (meaning smartphones) alone (without tablets) overtook desktop, for the first time, with them highest ranked at 52.13% (on 27 November 2016)[204]or up to 49.02% for a full week.[205][206]Mobile-majority applies to countries such as Paraguay in South America, Poland in Europe and Turkey; and the continents Asia and Africa. Large regions of the rest of the world are still desktop-majority, while on some days, the United States,[207](and North America as a whole)[208]isn't; the US is desktop-minority up to four days in a row,[209]and up to a five-day average.[210]Other examples, of desktop-minority on some days, include the UK,[208]Ireland,[211]Australia[212](andOceaniaas a whole); in fact, at least one country on every continent[213][214][215]has turned desktop-minority (for at least a month). On 22 October 2016 (and subsequent weekends), mobile has shown majority.[216] Previously, according to a StatCounter press release, the world has turned desktop-minority;[217]as of October 2016[update], at about 49% desktop use for that month,[218][219]with desktop-minority stretching up to an 18-weeks/4-months period from 28 June to 31 October 2016,[220][221]while whole of July, August or September 2016, showed desktop-majority (and many other long sub-periods in the long stretch showed desktop-minority; similarly only Fridays, Saturdays and Sundays are desktop-minority). The biggest continents, Asia and Africa, have shown vast mobile-majority for long time (any day of the week), as well as several individual countries elsewhere have also turned mobile-majority: Poland, Albania (andTurkey)[222]in Europe and Paraguay and Bolivia[223]in South America.[224] According to StatCounter's web use statistics, Saturday 28 May 2016, was the day when smartphones ("mobile" at StatCounter, that now counts tablets separately) became a most used platform, ranking first, at 47.27%, above desktops.[225][226]The next day, desktops slightly outnumbered "mobile" (unless counting tablets: some analysts count tablets with smartphones or separately while others with desktops – even when most tablets are iPad or Android, not Windows devices).[227] Since Sunday 27 March 2016, the first day the world dipped to desktop-minority,[228]it has happened almost every week, and by week of 11–17 July 2016, the world was desktop-minority,[229]followed by the next week, and thereon also for a three-week period.[230]The trend is still stronger on weekends, with e.g. 
17 July 2016 showed desktop at 44.67%, "mobile" at 49.5% plus tablets at 5.7%.[231]Recent weekly data shows a downward trend for desktops.[232][233] According to StatCounter web use statistics (a proxy for overall use), on weekends desktops worldwide lose about 5 percent points, e.g. down to 51.46% on 15 August 2015, with the loss in (relative) web use going to mobile (and also a minuscule increase for tablets),[234]mostly becauseWindows 7, ranked 1st on workdays, declines in web use, with it shifting to Android and lesser degree to iOS.[235] Two continents have already crossed over to mobile-majority (because of Android), based on StatCounters web use statistics. In June 2015,Asiabecame the first continent where mobile overtook desktop[236](followed byAfricain August;[237]whileNigeriahad mobile majority in October 2011,[238][239]because ofSymbian– that later had 51% share, thenSeries 40dominating, followed by Android as dominating operating system[240]) and as far back as October 2014, they had reported this trend on a large scale in a press release: "Mobile usage has already overtaken desktop in several countries includingIndia,South AfricaandSaudi Arabia".[241]In India, desktop went from majority, in July 2012, down to 32%.[242]In Bangladesh desktop went from majority, in May 2013, down to 17%, with Android alone now accounting for majority web use.[243]Only a few African countries were still desktop-majority[244]and many have a large mobile majority includingEthiopiaandKenya, where mobile usage is over 72%.[245] The popularity of mobile use worldwide has been driven by the huge popularity increase of Android in Asian countries, where Android is the highest ranked operating system statistically in virtually every south-east Asian country,[246]while it also ranks most popular in almost every African country. Poland has been desktop-minority since April 2015,[247]because of Android being vastly more popular there,[248]and other European countries, such as Albania (andTurkey), have also crossed over. The South America continent is somewhat far from losing desktop-majority, but Paraguay had lost it as of March 2015[update].[249]Android and mobile browsing in general has also become hugely popular in all other continents where desktop has a large desktop base and the trend to mobile is not as clear as a fraction of the total web use. While some analysts count tablets with desktops (as some of them run Windows), others count them with mobile phones (as the vast majority of tablets run so-calledmobile operating systems, such asAndroidoriOSon theiPad). iPad has a clear lead globally, but has clearly lost the majority to Android in South America,[250]and a number of Eastern European countries such as Poland; lost virtually all African countries and has lost the majority twice in Asia, but gained the majority back (while many individual countries, e.g. 
India and most of the middle East have clear Android majority on tablets).[251]Android on tablets is thus second most popular after the iPad.[252] In March 2015, for the first time in the US the number of mobile-only adult internet users exceeded the number of desktop-only internet users with 11.6% of the digital population only using mobile compared to 10.6% only using desktop; this also means the majority, 78%, use both desktop and mobile to access the internet.[253]A few smaller countries in North America, such as Haiti (because of Android) have gone mobile majority (mobile went to up to 72.35%, and is at 64.43% in February 2016).[254] The region with the largest Android usage[67]also has the largest mobile revenue.[255] Internet based servers'market sharecan be measured with statistical surveys of publicly accessible servers, such asweb servers,mail servers[257]orDNS serverson the Internet: the operating systems powering such servers are found by inspecting raw response messages. This method gives insight only into market share of operating systems that are publicly accessible on the Internet. There will be differences in the result depending on how the sample is done and observations weighted. Usually the surveys are not based on a random sample of all IP addresses, domain names, hosts or organisations, but on servers found by some other method.[citation needed]Additionally, many domains and IP addresses may be served by one host and some domains may be served by several hosts or by one host with several IP addresses. Mainframesare larger and more powerful than most servers, but notsupercomputers. They are used to process large sets of data, for exampleenterprise resource planningorcredit card transactions. The most common operating system for mainframes is IBM'sz/OS.[265][citation needed]Operating systems forIBM Zgeneration hardware include IBM's proprietary z/OS,[266]Linux on IBM Z,z/TPF,z/VSEandz/VM. Gartnerreported on 23 December 2008 that Linux on System z was used on approximately 28% of the "customer z base" and that they expected this to increase to over 50% in the following five years.[267]Of Linux on IBM Z,Red HatandMicro Focuscompete to sellRHELandSLESrespectively: Like today's trend of mobile devices from personal computers,[253]in 1984 for the first time estimated sales of desktop computers ($11.6 billion) exceeded mainframe computers ($11.4 billion). IBM received the vast majority of mainframe revenue.[269] From 1991 to 1996, AT&T Corporation briefly owned NCR, one of themajor original mainframe producers. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. 
In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks.[270] In 2012, NASA powered down its last mainframe, an IBM System z9.[271] However, IBM's successor to the z9, the z10, led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for IBM, and mainframes are still the back-office engines behind the world's financial markets and much of global commerce".[272] As of 2010[update], while mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".[273] The TOP500 project lists and ranks the 500 fastest supercomputers for which benchmark results are submitted. Since the early 1990s, the field of supercomputers has been dominated by Unix or Unix-like operating systems, and since 2017 every one of the 500 fastest supercomputers has run Linux as its supercomputer operating system. The last supercomputer to rank #1 while using an operating system other than Linux was ASCI White, which ran AIX. It held the title from November 2000 to November 2001,[274] and was decommissioned in 2006. Then in June 2017, two AIX computers held ranks 493 and 494,[275] the last non-Linux systems before they dropped off the list. Historically, all kinds of Unix operating systems dominated; ultimately, only Linux remains.
https://en.wikipedia.org/wiki/Usage_share_of_operating_systems
Inpacket switchingnetworks,traffic flow,packet flowornetwork flowis a sequence ofpacketsfrom a sourcecomputerto a destination, which may be another host, amulticastgroup, or abroadcastdomain. RFC 2722 defines traffic flow as "an artificial logical equivalent to a call or connection."[1]RFC 3697 defines traffic flow as "a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection."[2]Flow is also defined in RFC 3917 as "a set of IP packets passing an observation point in the network during a certain time interval."[3]Packet flow temporal efficiency can be affected byone-way delay (OWD)that is described as a combination of the following components: Packets from one flow need to be handled differently from others, by means of separate queues inswitches,routersandnetwork adapters, to achievetraffic shaping,policing,fair queueingorquality of service. It is also a concept used in Queueing Network Analyzers (QNAs) or in packet tracing. Applied to Internetrouters, a flow may be a host-to-host communication path, or asocket-to-socketcommunication identified by a unique combination of source and destination addresses and port numbers, together with transport protocol (for example,UDPorTCP). In the TCP case, a flow may be avirtual circuit, also known as avirtual connectionor abyte stream.[4] In packet switches, the flow may be identified byIEEE 802.1QVirtual LAN tagging in Ethernet networks, or by alabel-switched pathinMPLStag switching. Packet flow can be represented as apathin a network to model network performance. For example, a waterflow networkcan be used to conceptualize packet flow.Communication channelscan be thought of as pipes, with the pipe capacity corresponding to bandwidth and flows corresponding to data throughput. This visualization can help to understand bottlenecks, queuing, and the unique requirements of tailored systems.
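As a rough illustration of how a flow can be identified in practice, the following Python sketch groups packets by the usual 5-tuple of source and destination address, source and destination port, and transport protocol. The packet fields and sample values are illustrative assumptions, not taken from any cited specification.

from collections import namedtuple

# A flow is commonly identified by the 5-tuple: source/destination address,
# source/destination port, and transport protocol (e.g. TCP or UDP).
FlowKey = namedtuple("FlowKey", "src_addr src_port dst_addr dst_port protocol")

def flow_key(packet):
    # packet is a plain dict here, purely for illustration
    return FlowKey(packet["src_addr"], packet["src_port"],
                   packet["dst_addr"], packet["dst_port"], packet["protocol"])

def group_by_flow(packets):
    # Separate queues per flow are the basis for traffic shaping, policing,
    # fair queueing and quality of service.
    flows = {}
    for pkt in packets:
        flows.setdefault(flow_key(pkt), []).append(pkt)
    return flows

packets = [
    {"src_addr": "10.0.0.1", "src_port": 40000, "dst_addr": "192.0.2.7",
     "dst_port": 80, "protocol": "TCP"},
    {"src_addr": "10.0.0.1", "src_port": 40000, "dst_addr": "192.0.2.7",
     "dst_port": 80, "protocol": "TCP"},
]
for key, flow in group_by_flow(packets).items():
    print(key, "->", len(flow), "packets")

In the TCP case, all packets sharing one such key belong to the same connection, so the key effectively names the socket-to-socket flow described above.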
https://en.wikipedia.org/wiki/Traffic_flow_(computer_networking)
A command-line interface (CLI) is a means of interacting with software via commands – each formatted as a line of text. Command-line interfaces emerged in the mid-1960s, on computer terminals, as an interactive and more user-friendly alternative to the non-interactive mode available with punched cards.[1] For a long time, the CLI was the most common interface for software, but today the graphical user interface (GUI) is more common. Nonetheless, many programs, such as operating system and software development utilities, still provide a CLI. A CLI enables automation, since commands can be stored in a script file that can be used repeatedly. A script allows its contained commands to be executed as a group, effectively acting as a program or a single command. CLIs are made possible by command-line interpreters or command-line processors, which are programs that execute input commands. Alternatives to the CLI include the GUI (including the desktop metaphor such as Windows), text-based menuing (including DOS Shell and IBM AIX SMIT), and keyboard shortcuts. Compared with a graphical user interface, a command-line interface requires fewer system resources to implement. Since options to commands are given in a few characters in each command line, an experienced user often finds the options easier to access. Automation of repetitive tasks is simplified by line editing and history mechanisms for storing frequently used sequences; this may extend to a scripting language that can take parameters and variable options. A command-line history can be kept, allowing review or repetition of commands. A command-line system may require paper or online manuals for the user's reference, although often a help option provides a concise review of the options of a command. The command-line environment may not provide graphical enhancements such as different fonts or extended edit windows found in a GUI. It may be difficult for a new user to become familiar with all the commands and options available, compared with the icons and drop-down menus of a graphical user interface, without reference to manuals. Operating system (OS) command-line interfaces are usually distinct programs supplied with the operating system. A program that implements such a text interface is often called a command-line interpreter, command processor or shell. Examples of command-line interpreters include Nushell, DEC's DIGITAL Command Language (DCL) in OpenVMS and RSX-11, the various Unix shells (sh, ksh, csh, tcsh, zsh, Bash, etc.), CP/M's CCP, DOS' COMMAND.COM, as well as the OS/2 and Windows CMD.EXE programs, the latter groups being based heavily on DEC's RSX-11 and RSTS CLIs. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS, 4OS2 for OS/2, and 4NT / Take Command for Windows. Although the term shell is often used to describe a command-line interpreter, strictly speaking, a shell can be any program that constitutes the user interface, including fully graphically oriented ones. For example, the default Windows GUI is a shell program named EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration file. These programs are shells, but not CLIs. Application programs (as opposed to operating systems) may also have command-line interfaces. An application program may support none, any, or all of these three major types of command-line interface mechanisms: Some applications support a CLI, presenting their own prompt to the user and accepting command lines. Other programs support both a CLI and a GUI.
In some cases, a GUI is simply awrapperaround a separate CLIexecutable file. In other cases, a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support different functionality. For example, all features ofMATLAB, anumerical analysiscomputer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. InColossal Cave Adventurefrom 1975, the user uses a CLI to enter one or two words to explore a cave system. The command-line interface evolved from a form of communication conducted by people overteleprinter(TTY) machines. Sometimes these involved sending an order or a confirmation usingtelex. Early computer systems often used teleprinter as the means of interaction with an operator. The mechanical teleprinter was replaced by a"glass tty", a keyboard and screen emulating the teleprinter."Smart" terminalspermitted additional functions, such as cursor movement over the entire screen, or local editing of data on the terminal for transmission to the computer. As themicrocomputer revolutionreplaced the traditional – minicomputer + terminals –time sharingarchitecture, hardware terminals were replaced byterminal emulators— PC software that interpreted terminal signals sent through the PC'sserial ports. These were typically used to interface an organization's new PC's with their existing mini- or mainframe computers, or to connect PC to PC. Some of these PCs were runningBulletin Board Systemsoftware. Early operating system CLIs were implemented as part ofresident monitorprograms, and could not easily be replaced. The first implementation of the shell as a replaceable component was part of theMulticstime-sharingoperating system.[2]In 1964,MIT Computation Centerstaff memberLouis Pouzindeveloped theRUNCOMtool for executing command scripts while allowing argument substitution.[3]Pouzin coined the termshellto describe the technique of using commands like a programming language, and wrote a paper about how to implement the idea in theMulticsoperating system.[4]Pouzin returned to his native France in 1965, and the first Multics shell was developed byGlenda Schroeder.[3] The firstUnix shell, theV6 shell, was developed byKen Thompsonin 1971 atBell Labsand was modeled after Schroeder's Multics shell.[5][6]TheBourne shellwas introduced in 1977 as a replacement for the V6 shell. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs. The Bourne shell led to the development of theKornShell(ksh),Almquist shell(ash), and the popularBourne-again shell(or Bash).[6] Early microcomputers themselves were based on a command-line interface such asCP/M,DOSorAppleSoft BASIC. During the 1980s and 1990s, the introduction of theApple Macintoshand ofMicrosoft Windowson PCs saw the command line interface as the primary user interface replaced by theGraphical User Interface.[7]The command line remained available as an alternative user interface, often used bysystem administratorsand other advanced users for system administration,computer programmingandbatch processing. In November 2006,Microsoftreleased version 1.0 ofWindows PowerShell(formerly codenamedMonad), which combined features of traditional Unix shells with their proprietary object-oriented.NET Framework.MinGWandCygwinareopen-sourcepackages for Windows that offer a Unix-like CLI. 
Microsoft providesMKS Inc.'skshimplementationMKS Korn shellfor Windows through theirServices for UNIXadd-on. Since 2001, theMacintoshoperating systemmacOShas been based on aUnix-likeoperating system calledDarwin.[8]On these computers, users can access a Unix-like command-line interface by running theterminal emulatorprogram calledTerminal, which is found in the Utilities sub-folder of the Applications folder, or by remotely logging into the machine usingssh.Z shellis the default shell for macOS; Bash,tcsh, and theKornShellare also provided. BeforemacOS Catalina, Bash was the default. A CLI is used whenever a large vocabulary of commands or queries, coupled with a wide (or arbitrary) range of options, can be entered more rapidly as text than with a pure GUI. This is typically the case withoperating system command shells. CLIs are also used by systems with insufficient resources to support a graphical user interface. Some computer language systems (such asPython,[9]Forth,LISP,Rexx, and many dialects ofBASIC) provide an interactive command-line mode to allow for rapid evaluation of code. CLIs are often used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.[10]CLIs are also popular among people with visual disabilities since the commands and responses can be displayed usingrefreshable Braille displays. The general pattern of a command line is:[11][12] In this format, the delimiters between command-line elements arewhitespace charactersand the end-of-line delimiter is thenewlinedelimiter. This is a widely used (but not universal) convention. A CLI can generally be considered as consisting ofsyntaxandsemantics. Thesyntaxis the grammar that all commands must follow. In the case ofoperating systems,DOSandUnixeach define their own set of rules that all commands must follow. In the case ofembedded systems, each vendor, such asNortel,Juniper NetworksorCisco Systems, defines their own proprietary set of rules. These rules also dictate how a user navigates through the system of commands. Thesemanticsdefine what sort of operations are possible, on what sort of data these operations can be performed, and how the grammar represents these operations and data—the symbolic meaning in the syntax. Two different CLIs may agree on either syntax or semantics, but it is only when they agree on both that they can be considered sufficiently similar to allow users to use both CLIs without needing to learn anything, as well as to enable re-use of scripts. A simple CLI will display a prompt, accept acommand linetyped by the user terminated by theEnter key, then execute the specified command and provide textual display of results or error messages. Advanced CLIs will validate, interpret and parameter-expand the command line before executing the specified command, and optionally capture or redirect its output. Unlike a button or menu item in a GUI, a command line is typically self-documenting,[16]stating exactly what the user wants done. In addition, command lines usually include manydefaultsthat can be changed to customize the results. Useful command lines can be saved by assigning acharacter stringoraliasto represent the full command, or several commands can be grouped to perform a more complex sequence – for instance, compile the program, install it, and run it — creating a single entity, called a command procedure or script which itself can be treated as a command. 
These advantages mean that a user must figure out a complex command or series of commands only once, because they can be saved, to be used again. The commands given to a CLI shell are often in one of the following forms: wheredoSomethingis, in effect, averb,howanadverb(for example, should the command be executedverboselyorquietly) andtoFilesan object or objects (typically one or more files) on which the command should act. The>in the third example is aredirection operator, telling the command-line interpreter to send the output of the command not to its own standard output (the screen) but to the named file. This will overwrite the file. Using>>will redirect the output and append it to the file. Another redirection operator is thevertical bar(|), which creates apipelinewhere the output of one command becomes the input to the next command.[17] One can modify the set of available commands by modifying which paths appear in thePATHenvironment variable. Under Unix, commands also need be marked asexecutablefiles. The directories in the path variable are searched in the order they are given. By re-ordering the path, one can run e.g. \OS2\MDOS\E.EXE instead of \OS2\E.EXE, when the default is the opposite. Renaming of the executables also works: people often rename their favourite editor to EDIT, for example. The command line allows one to restrict available commands, such as access to advanced internal commands. The WindowsCMD.EXEdoes this. Often, shareware programs will limit the range of commands, including printing a command 'your administrator has disabled running batch files' from the prompt.[clarification needed] Some CLIs, such as those innetwork routers, have a hierarchy ofmodes, with a different set of commands supported in each mode. The set of commands are grouped by association with security, system, interface, etc. In these systems the user might traverse through a series of sub-modes. For example, if the CLI had two modes calledinterfaceandsystem, the user might use the commandinterfaceto enter the interface mode. At this point, commands from the system mode may not be accessible until the user exits the interface mode and enters the system mode. A command prompt (or justprompt) is a sequence of (one or more) characters used in a command-line interface to indicate readiness to accept commands. It literallypromptsthe user to take action. A prompt usually ends with one of the characters$,%,#,[18][19]:,>or-[20]and often includes other information, such as the path of the currentworking directoryand thehostname. On manyUnixandderivative systems, the prompt commonly ends in$or%if the user is a normal user, but in#if the user is asuperuser("root" in Unix terminology). End-users can often modify prompts. Depending on the environment, they may include colors, special characters, and other elements (like variables and functions for the current time, user, shell number or working directory) in order, for instance, to make the prompt more informative or visually pleasing, to distinguish sessions on various machines, or to indicate the current level of nesting of commands. On some systems, special tokens in the definition of the prompt can be used to cause external programs to be called by the command-line interpreter while displaying the prompt. In DOS' COMMAND.COM and in Windows NT'scmd.exeusers can modify the prompt by issuing aPROMPTcommand or by directly changing the value of the corresponding%PROMPT%environment variable. 
The default of most modern systems, theC:\>style is obtained, for instance, withPROMPT $P$G. The default of older DOS systems,C>is obtained by justPROMPT, although on some systems this produces the newerC:\>style, unless used on floppy drives A: or B:; on those systemsPROMPT $N$Gcan be used to override the automatic default and explicitly switch to the older style. Many Unix systems feature the$PS1variable (Prompt String 1),[21]although other variables also may affect the prompt (depending on theshellused). In the Bash shell, a prompt of the form: could be set by issuing the command Inzshthe$RPROMPTvariable controls an optionalprompton the right-hand side of the display. It is not a real prompt in that the location of text entry does not change. It is used to display information on the same line as the prompt, but right-justified. InRISC OSthe command prompt is a*symbol, and thus (OS) CLI commands are often referred to asstar commands.[22]One can also access the same commands from other command lines (such as theBBC BASICcommand line), by preceding the command with a*. Acommand-line argumentorparameteris an item of information provided to a program when it is started.[23]A program can have many command-line arguments that identify sources or destinations of information, or that alter the operation of the program. When a command processor is active a program is typically invoked by typing its name followed by command-line arguments (if any). For example, inUnixandUnix-likeenvironments, an example of a command-line argument is: file.sis a command-line argument which tells the programrmto remove the file namedfile.s. Some programming languages, such asC,C++andJava, allow a program to interpret the command-line arguments by handling them as string parameters in themain function.[24][25]Other languages, such asPython, expose operating system specificAPI(functionality) throughsysmodule, and in particularsys.argvforcommand-line arguments. InUnix-like operating systems, a single hyphen used in place of a file name is a special value specifying that a program should handle data coming from thestandard inputor send data to thestandard output. Acommand-line optionor simplyoption(also known as aflagorswitch) modifies the operation of a command; the effect is determined by the command's program. Options follow the command name on the command line, separated by spaces. A space before the first option is not always required, such asDir/?andDIR /?in DOS, which have the same effect[20]of listing the DIR command's available options, whereasdir --help(in many versions of Unix)doesrequire the option to be preceded by at least one space (and is case-sensitive). The format of options varies widely between operating systems. In most cases, the syntax is by convention rather than an operating system requirement; the entire command line is simply a string passed to a program, which can process it in any way the programmer wants, so long as the interpreter can tell where the command name ends and its arguments and options begin. A few representative samples of command-line options, all relating to listing files in a directory, to illustrate some conventions: InMultics, command-line options and subsystem keywords may be abbreviated. This idea appears to derive from thePL/I programming language, with its shortened keywords (e.g., STRG for STRINGRANGE and DCL for DECLARE). For example, in the Multicsforumsubsystem, the-long_subjectparameter can be abbreviated-lgsj. 
It is also common for Multics commands to be abbreviated, typically corresponding to the initial letters of the words that are strung together with underscores to form command names, such as the use ofdidfordelete_iacl_dir. In some other systems abbreviations are automatic, such as permitting enough of the first characters of a command name to uniquely identify it (such asSUas an abbreviation forSUPERUSER) while others may have some specific abbreviations pre-programmed (e.g.MDforMKDIRin COMMAND.COM) or user-defined via batch scripts andaliases(e.g.alias md mkdirintcsh). On DOS, OS/2 and Windows, different programs called from their COMMAND.COM or CMD.EXE (or internal their commands) may use different syntax within the same operating system. For example: InDOS,OS/2andWindows, the forward slash (/) is most prevalent, although the hyphen-minus is also sometimes used. In many versions of DOS (MS-DOS/PC DOS 2.xx and higher, all versions ofDR-DOSsince 5.0, as well asPTS-DOS,Embedded DOS,FreeDOSandRxDOS) theswitch character(sometimes abbreviatedswitcharorswitchchar) to be used is defined by a value returned from asystem call(INT 21h/AX=3700h). The default character returned by this API is/, but can be changed to a hyphen-minus on the above-mentioned systems, except for under Datalight ROM-DOS and MS-DOS/PC DOS 5.0 and higher, which always return/from this call (unless one of many availableTSRsto reenable the SwitChar feature is loaded). In some of these systems (MS-DOS/PC DOS 2.xx, DOS Plus 2.1, DR-DOS 7.02 and higher, PTS-DOS, Embedded DOS, FreeDOS and RxDOS), the setting can also be pre-configured by aSWITCHARdirective inCONFIG.SYS. General Software's Embedded DOS provides a SWITCH command for the same purpose, whereas4DOSallows the setting to be changed viaSETDOS /W:n.[26]Under DR-DOS, if the setting has been changed from/, the first directory separator\in the display of thePROMPTparameter$Gwill change to a forward slash/(which is also a valid directory separator in DOS, FlexOS, 4680 OS, 4690 OS, OS/2 and Windows) thereby serving as a visual clue to indicate the change.[20]Also, the current setting is reflected also in the built-in help screens.[20]Some versions of DR-DOSCOMMAND.COMalso support a PROMPT token$/to display the current setting. COMMAND.COM since DR-DOS 7.02 also provides apseudo-environment variablenamed%/%to allow portable batchjobs to be written.[27][28]Several external DR-DOS commands additionally support anenvironment variable%SWITCHAR%to override the system setting. However, many programs are hardwired to use/only, rather than retrieving the switch setting before parsing command-line arguments. A very small number, mainly ports from Unix-like systems, are programmed to accept-even if the switch character is not set to it (for examplenetstatandping, supplied withMicrosoft Windows, will accept the /? option to list available options, and yet the list will specify the-convention). 
InUnix-likesystems, the ASCIIhyphen-minusbegins options; the new (andGNU) convention is to usetwohyphens then a word (e.g.--create) to identify the option's use while the old convention (and still available as an option for frequently-used options) is to use one hyphen then one letter (e.g.,-c); if one hyphen is followed by two or more letters it may mean two options are being specified, or it may mean the second and subsequent letters are a parameter (such as filename or date) for the first option.[29] Two hyphen-minus characters without following letters (--) may indicate that the remaining arguments should not be treated as options, which is useful for example if a file name itself begins with a hyphen, or if further arguments are meant for an inner command (e.g.,sudo). Double hyphen-minuses are also sometimes used to prefixlong optionswhere more descriptive option names are used. This is a common feature ofGNUsoftware. Thegetoptfunction and program, and thegetoptscommand are usually used for parsing command-line options. Unix command names, arguments and options are case-sensitive (except in a few examples, mainly where popular commands from other operating systems have been ported to Unix). FlexOS,4680 OSand4690 OSuse-. CP/Mtypically used[. Conversational Monitor System(CMS) uses a singleleft parenthesisto separate options at the end of the command from the other arguments. For example, in the following command the options indicate that the target file should be replaced if it exists, and the date and time of the source file should be retained on the copy:COPY source file a target file b (REPLACE OLDDATE) Data General's CLI under theirRDOS,AOS, etc. operating systems, as well as the version of CLI that came with theirBusiness Basic, uses only/as the switch character, is case-insensitive, and allowslocal switcheson some arguments to control the way they are interpreted, such asMAC/U LIB/S A B C $LPT/Lhas the global optionUto the macro assembler command to append user symbols, but two local switches, one to specify LIB should be skipped on pass 2 and the other to direct listing to the printer, $LPT. One of the criticisms of a CLI is the lack of cues to the user as to the available actions.[citation needed]In contrast, GUIs usually inform the user of available actions with menus, icons, or other visual cues.[citation needed]To overcome this limitation, many CLI programs display ausage message, typically when invoked with no arguments or one of?,-?,-h,-H,/?,/h,/H,/Help,-help, or--help.[20][30][31] However, entering a program name without parameters in the hope that it will display usage help can be hazardous, as programs and scripts for which command line arguments are optional will execute without further notice. Although desirable at least for the help parameter, programs may not support all option lead-in characters exemplified above. Under DOS, where the defaultcommand-line option charactercan be changed from/to-, programs may query theSwitCharAPI in order to determine the current setting. So, if a program is not hardwired to support them all, a user may need to know the current setting even to be able to reliably request help. 
If the SwitChar has been changed to - and therefore the / character is accepted as an alternative path delimiter also at the DOS command line, programs may misinterpret options like /h or /H as paths rather than help parameters.[20] However, if given as the first or only parameter, most DOS programs will, by convention, accept it as a request for help regardless of the current SwitChar setting.[20][26] In some cases, different levels of help can be selected for a program. Some programs supporting this allow the user to give a verbosity level as an optional argument to the help parameter (as in /H:1, /H:2, etc.), or they give just short help on the help parameters invoked with a question mark and a longer help screen for the other help options.[32] Depending on the program, additional or more specific help on accepted parameters is sometimes available by either providing the parameter in question as an argument to the help parameter or vice versa (as in /H:W or in /W:? (assuming /W would be another parameter supported by the program)).[33][34][31][30][32][nb 1] In a similar fashion to the help parameter, but much less common, some programs provide additional information about themselves (like mode, status, version, author, license or contact information) when invoked with an about parameter like -!, /!, -about, or --about.[30] Since the ? and ! characters typically also serve other purposes at the command line, they may not be available in all scenarios; therefore, they should not be the only options to access the corresponding help information. If more detailed help is necessary than provided by a program's built-in internal help, many systems support a dedicated external help command (or similar), which accepts a command name as a calling parameter and will invoke an external help system. In the DR-DOS family, typing /? or /H at the COMMAND.COM prompt instead of a command itself will display a dynamically generated list of available internal commands;[20] 4DOS and NDOS support the same feature by typing ? at the prompt[26] (which is also accepted by newer versions of DR-DOS COMMAND.COM); internal commands can be individually disabled or reenabled via SETDOS /I.[26] In addition to this, some newer versions of DR-DOS COMMAND.COM also accept a ?% command to display a list of available built-in pseudo-environment variables. Besides their purpose as a quick help reference, this can be used in batch jobs to query the facilities of the underlying command-line processor.[20] Built-in usage help and man pages commonly employ a small syntax to describe the valid command form:[35][36][37][nb 2] Notice that these characters have different meanings than when used directly in the shell. Angle brackets may be omitted when confusing the parameter name with a literal string is not likely. In many areas of computing, but particularly on the command line, the space character can cause problems as it has two distinct and incompatible functions: as part of a command or parameter, or as a parameter or name separator. Ambiguity can be prevented either by prohibiting embedded spaces in file and directory names in the first place (for example, by substituting them with underscores _), or by enclosing a name with embedded spaces between quote characters or using an escape character before the space, usually a backslash (\). For example is ambiguous (is program name part of the program name, or two parameters?); however and are not ambiguous. Unix-based operating systems minimize the use of embedded spaces to minimize the need for quotes.
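The two roles of the space character can be made concrete with Python's shlex module, which applies Unix-style quoting rules when splitting a command line; the copy command used here is purely illustrative.

import shlex

# Without quoting, the embedded space splits the file name into two arguments.
print(shlex.split('copy my file.txt backup'))    # ['copy', 'my', 'file.txt', 'backup']

# Quoting or escaping the space keeps the name together as a single parameter.
print(shlex.split('copy "my file.txt" backup'))  # ['copy', 'my file.txt', 'backup']
print(shlex.split(r'copy my\ file.txt backup'))  # ['copy', 'my file.txt', 'backup']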
InMicrosoft Windows, one often has to use quotes because embedded spaces (such as in directory names) are common. Although most users think of the shell as an interactive command interpreter, it is really a programming language in which each statement runs a command. Because it must satisfy both the interactive and programming aspects of command execution, it is a strange language, shaped as much by history as by design. The termcommand-line interpreteris applied tocomputer programsdesigned tointerpreta sequence of lines of text which may be entered by a user, read from a file or another kind ofdata stream. The context of interpretation is usually one of a givenoperating systemorprogramming language. Command-line interpreters allow users to issue various commands in a very efficient (and often terse) way. This requires the user to know the names of the commands and their parameters, and the syntax of thelanguagethat is interpreted. The Unix#!mechanism and OS/2 EXTPROC command facilitate the passing of batch files to external processors. One can use these mechanisms to write specific command processors for dedicated uses, and process external data files which reside in batch files. Many graphical interfaces, such as the OS/2Presentation Managerand early versions of Microsoft Windows use command lines to call helper programs to open documents and programs. The commands are stored in the graphical shell[clarification needed]or in files like the registry or theOS/2OS2USER.INIfile. The earliest computers did not support interactive input/output devices, often relying onsense switchesand lights to communicate with thecomputer operator. This was adequate forbatchsystems that ran one program at a time, often with the programmer acting as operator. This also had the advantage of low overhead, since lights and switches could be tested and set with one machine instruction. Later a singlesystem consolewas added to allow the operator to communicate with the system. From the 1960s onwards, user interaction with computers was primarily by means of command-line interfaces, initially on machines like theTeletype Model 33ASR, but then on earlyCRT-basedcomputer terminalssuch as theVT52. All of these devices were purely text based, with no ability to display graphic or pictures.[nb 3]For businessapplication programs, text-basedmenuswere used, but for more general interaction the command line was the interface. Around 1964Louis Pouzinintroduced the concept and the nameshellinMultics, building on earlier, simpler facilities in theCompatible Time-Sharing System(CTSS).[39][better source needed] From the early 1970s theUnixoperating system adapted the concept of a powerful command-line environment, and introduced the ability topipethe output of one command in as input to another. Unix also had the capability to save and re-run strings of commands asshell scriptswhich acted like custom commands. The command line was also the main interface for the early home computers such as theCommodore PET,Apple IIandBBC Micro– almost always in the form of aBASICinterpreter. When more powerful business-oriented microcomputers arrived withCP/Mand laterDOScomputers such as theIBM PC, the command line began to borrow some of the syntax and features of the Unix shells such asglobbingandpipingof output. The command line was first seriously challenged by thePARCGUIapproach used in the 1983Apple Lisaand the 1984Apple Macintosh. 
A few computer users used GUIs such asGEOSandWindows 3.1but the majority ofIBM PCusers did not replace theirCOMMAND.COMshell with a GUI untilWindows 95was released in 1995.[40][41] While most non-expert computer users now use a GUI almost exclusively, more advanced users have access to powerful command-line environments: Most command-line interpreters supportscripting, to various extents. (They are, after all, interpreters of aninterpreted programming language, albeit in many cases the language is unique to the particular command-line interpreter.) They will interpret scripts (variously termedshell scriptsorbatch files) written in thelanguagethat they interpret. Some command-line interpreters also incorporate the interpreter engines of other languages, such asREXX, in addition to their own, allowing the executing of scripts, in those languages, directly within the command-line interpreter itself. Conversely,scripting programming languages, in particular those with anevalfunction(such as REXX,Perl,Python,RubyorJython), can be used to implement command-line interpreters and filters. For a fewoperating systems, most notablyDOS, such a command interpreter provides a more flexible command-line interface than the one supplied. In other cases, such a command interpreter can present a highly customised user interface employing the user interface and input/output facilities of the language. The command line provides an interface between programs as well as the user. In this sense, a command line is an alternative to adialog box. Editors and databases present a command line, in which alternate command processors might run. On the other hand, one might have options on the command line, which opens a dialog box. The latest version of 'Take Command' has this feature. DBase used a dialog box to construct command lines, which could be further edited before use. Programs like BASIC,diskpart,Edlin, and QBASIC all provide command-line interfaces, some of which use the system shell. Basic is modeled on the default interface for 8-bit Intel computers. Calculators can be run as command-line or dialog interfaces. Emacsprovides a command-line interface in the form of its minibuffer. Commands and arguments can be entered using Emacs standard text editing support, and output is displayed in another buffer. There are a number of text mode games, likeAdventureorKing's Quest 1-3, which relied on the user typing commands at the bottom of the screen. One controls the character by typing commands like 'get ring' or 'look'. The program returns a text which describes how the character sees it, or makes the action happen. Thetext adventureThe Hitchhiker's Guide to the Galaxy, a piece ofinteractive fictionbased onDouglas Adam'sbook of the same name, is a teletype-style command-line game. The most notable of these interfaces is thestandard streamsinterface, which allows the output of one command to be passed to the input of another. Text files can serve either purpose as well. This provides the interfaces of piping, filters and redirection. Under Unix,devices are filestoo, so the normal type of file for the shell used for stdin, stdout and stderr is attydevice file. Another command-line interface allows a shell program to launch helper programs, either to launch documents or start a program. The command is processed internally by the shell, and then passed on to another program to launch the document. 
The graphical interfaces of Windows and OS/2 rely heavily on command lines passed through to other programs – console or graphical – which then usually process the command line without presenting a user console. Programs like the OS/2 E editor and some other IBM editors can process command lines normally meant for the shell, the output being placed directly in the document window. A web browser's URL input field can be used as a command line. It can be used to launch web apps, access browser configuration, as well as perform a search. Google, which has been called "the command line of the internet", will perform a domain-specific search when it detects search parameters in a known format.[51] This functionality is present whether the search is triggered from a browser field or on Google's website. There are JavaScript libraries that allow writing command-line applications in the browser, either as standalone web apps or as part of a bigger application.[52] An example of such a website is the CLI interface to DuckDuckGo.[53] There are also web-based SSH applications that allow access to a server's command-line interface from a browser. Many PC video games feature a command-line interface, often referred to as a console. It is typically used by the game developers during development and by mod developers for debugging purposes, as well as for cheating or skipping parts of the game.
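As a minimal sketch of the command-line interpreter loop described in this article, the following Python program uses the standard library's cmd module to display a prompt, read command lines, and dispatch them; the greet and quit commands are invented for the example and are not taken from any of the programs cited above.

import cmd

class DemoShell(cmd.Cmd):
    intro = "Type help or ? to list commands."
    prompt = "demo> "                      # the command prompt indicating readiness

    def do_greet(self, arg):
        "greet [name] -- print a greeting."
        print("Hello,", arg or "world")

    def do_quit(self, arg):
        "quit -- leave the interpreter."
        return True                        # returning True ends the command loop

if __name__ == "__main__":
    DemoShell().cmdloop()

The cmd module also generates a built-in help command from the docstrings, mirroring the usage messages discussed earlier.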
https://en.wikipedia.org/wiki/Command_line_interface
Language teaching, like other educational activities, may employ specializedvocabularyandword use. This list is aglossaryforEnglish language learning and teachingusing thecommunicative approach.
https://en.wikipedia.org/wiki/Glossary_of_language_education_terms
An XML log, or XML logging, is used by many computer programs to log the program's operations. An XML logfile records a description of the operations done by a program during its session. The log normally includes: a timestamp, the program's settings during the operation, what was completed during the session, the files or directories used, and any errors that may have occurred. In computing, a logfile records either events that occur in an operating system or in other software as it runs. It may also log messages between different users of communication software. The XML standard is maintained by the World Wide Web Consortium and serves as the basis for many other data standards; see List of XML markup languages. XML is short for eXtensible Markup Language.[1][2][3]
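A minimal sketch of what one such log entry might look like is shown below in Python, using the standard xml.etree.ElementTree module. The element and attribute names are assumptions chosen for illustration; there is no single standard XML log schema.

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def log_event(path, message, level="INFO", settings=None):
    # One <event> element per operation: timestamp, severity, message and settings.
    entry = ET.Element("event", {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
    })
    ET.SubElement(entry, "message").text = message
    for name, value in (settings or {}).items():
        ET.SubElement(entry, "setting", {"name": name, "value": str(value)})
    with open(path, "ab") as f:            # append one entry per call
        f.write(ET.tostring(entry) + b"\n")

log_event("session.xml", "directory copied", settings={"verbose": True})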
https://en.wikipedia.org/wiki/XML_log
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent.[1] Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated. A pair of random variables X and Y are independent if and only if the random vector (X, Y) with joint cumulative distribution function (CDF) F_{X,Y}(x,y) satisfies F_{X,Y}(x,y) = F_X(x) F_Y(y), or equivalently, their joint density f_{X,Y}(x,y) satisfies f_{X,Y}(x,y) = f_X(x) f_Y(y). That is, the joint distribution is equal to the product of the marginal distributions.[2] Unless it is unclear in context, in practice the modifier "mutual" is usually dropped, so that independence means mutual independence. A statement such as "X, Y, Z are independent random variables" means that X, Y, Z are mutually independent. Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein.[3] Suppose X and Y are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable Z be equal to 1 if exactly one of those coin tosses resulted in "heads", and 0 otherwise (i.e., Z = X ⊕ Y). Then jointly the triple (X, Y, Z) takes each of the values (0, 0, 0), (0, 1, 1), (1, 0, 1) and (1, 1, 0) with probability 1/4. Here the marginal probability distributions are identical: f_X(0) = f_Y(0) = f_Z(0) = 1/2, and f_X(1) = f_Y(1) = f_Z(1) = 1/2. The bivariate distributions also agree: f_{X,Y} = f_{X,Z} = f_{Y,Z}, where f_{X,Y}(0,0) = f_{X,Y}(0,1) = f_{X,Y}(1,0) = f_{X,Y}(1,1) = 1/4. Since each of the pairwise joint distributions equals the product of their respective marginal distributions, the variables are pairwise independent: f_{X,Y} = f_X f_Y, f_{X,Z} = f_X f_Z and f_{Y,Z} = f_Y f_Z. However, X, Y, and Z are not mutually independent, since f_{X,Y,Z}(x,y,z) ≠ f_X(x) f_Y(y) f_Z(z); the left side equals, for example, 1/4 for (x, y, z) = (0, 0, 0) while the right side equals 1/8 for (x, y, z) = (0, 0, 0). In fact, any of {X, Y, Z} is completely determined by the other two (any of X, Y, Z is the sum (modulo 2) of the others). That is as far from independence as random variables can get. Bounds on the probability that the sum of Bernoulli random variables is at least one, commonly known as the union bound, are provided by the Boole–Fréchet[4][5] inequalities. While these bounds assume only univariate information, several bounds that use knowledge of general bivariate probabilities have been proposed too. Denote by {A_i, i ∈ {1, 2, ..., n}} a set of n Bernoulli events with probability of occurrence P(A_i) = p_i for each i. Suppose the bivariate probabilities are given by P(A_i ∩ A_j) = p_{ij} for every pair of indices (i, j).
Kounias[6] derived the following upper bound: P(∪_i A_i) ≤ ∑_{i=1}^{n} p_i − max_j ∑_{i ≠ j} p_{ij}, which subtracts the maximum weight of a star spanning tree on a complete graph with n nodes (where the edge weights are given by p_{ij}) from the sum of the marginal probabilities ∑_i p_i. Hunter-Worsley[7][8] tightened this upper bound by optimizing over τ ∈ T as follows: P(∪_i A_i) ≤ ∑_{i=1}^{n} p_i − max_{τ∈T} ∑_{(i,j)∈τ} p_{ij}, where T is the set of all spanning trees on the graph. These bounds are not the tightest possible with general bivariates p_{ij}, even when feasibility is guaranteed, as shown in Boros et al.[9] However, when the variables are pairwise independent (p_{ij} = p_i p_j), Ramachandra-Natarajan[10] showed that the Kounias-Hunter-Worsley[6][7][8] bound is tight by proving that the maximum probability of the union of events admits a closed-form expression, given as (Eq. 1): max P(∪_i A_i) = min{ p_n + (1 − p_n) ∑_{i=1}^{n−1} p_i , 1 }, where the probabilities are sorted in increasing order as 0 ≤ p_1 ≤ p_2 ≤ … ≤ p_n ≤ 1. The tight bound in Eq. 1 depends only on the sum of the smallest n−1 probabilities ∑_{i=1}^{n−1} p_i and the largest probability p_n. Thus, while ordering of the probabilities plays a role in the derivation of the bound, the ordering among the smallest n−1 probabilities {p_1, p_2, ..., p_{n−1}} is inconsequential since only their sum is used. It is useful to compare the smallest bounds on the probability of the union with arbitrary dependence and pairwise independence respectively. The tightest Boole–Fréchet upper union bound (assuming only univariate information) is given as (Eq. 2): P(∪_i A_i) ≤ min{ ∑_{i=1}^{n} p_i , 1 }. As shown in Ramachandra-Natarajan,[10] it can be easily verified that the ratio of the two tight bounds in Eq. 2 and Eq. 1 is upper bounded by 4/3, where the maximum value of 4/3 is attained when ∑_{i=1}^{n−1} p_i = p_n = 1/2, with the probabilities sorted in increasing order as 0 ≤ p_1 ≤ p_2 ≤ … ≤ p_n ≤ 1. In other words, in the best-case scenario, the pairwise independence bound in Eq. 1 provides an improvement of 25% over the univariate bound in Eq. 2. More generally, we can talk about k-wise independence, for any k ≥ 2. The idea is similar: a set of random variables is k-wise independent if every subset of size k of those variables is independent. k-wise independence has been used in theoretical computer science, where it was used to prove a theorem about the problem MAXEkSAT. k-wise independence is used in the proof that k-independent hashing functions are secure unforgeable message authentication codes.
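The connection between pairwise independence and hashing can be illustrated with the textbook family h(x) = (a·x + b) mod p over a prime field, with a and b drawn uniformly at random. The short Python check below verifies exhaustively, for one small prime, that the hash values at two distinct points are pairwise independent; it is a standard example, not taken from the references cited above.

from collections import Counter

p = 5                       # a small prime, so the check can be exhaustive
x1, x2 = 1, 3               # two distinct inputs in Z_p

counts = Counter()
for a in range(p):
    for b in range(p):
        h = lambda x: (a * x + b) % p
        counts[(h(x1), h(x2))] += 1

# Each pair of values (u, v) occurs for exactly one choice of (a, b), so
# P[h(x1)=u, h(x2)=v] = 1/p^2 = P[h(x1)=u] * P[h(x2)=v]: pairwise independence.
print(len(counts) == p * p and all(c == 1 for c in counts.values()))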
https://en.wikipedia.org/wiki/Pairwise_independence
A floating tone is a morpheme[1] or element of a morpheme that contains neither consonants nor vowels, but only tone. It cannot be pronounced by itself but affects the tones of neighboring morphemes.[2][3] An example occurs in Bambara, a Mande language of Mali that has two phonemic tones,[4] high and low. The definite article is a floating low tone, and with a noun in isolation, it is associated with the preceding vowel and turns a high tone into a falling tone: [bá] river; [bâ] the river. When it occurs between two high tones, it downsteps the following tone. Also common are floating tones associated with a segmental morpheme such as an affix.[5] For example, in Okphela, an Edoid language of Nigeria,[6] the main negative morpheme is distinguished from the present tense morpheme by tone; the present tense morpheme (á-) carries a high tone, whereas the negative past morpheme (´a-) imposes a high tone on the syllable which precedes it. Floating tones derive historically from morphemes which assimilate[7] or lenite[8] to the point that only their tone remains.[9]
https://en.wikipedia.org/wiki/Floating_tone
Inphilosophy, asupertaskis acountably infinitesequence of operations that occur sequentially within a finite interval of time.[1]Supertasks are calledhypertaskswhen the number of operations becomesuncountably infinite. A hypertask that includes one task for eachordinal numberis called anultratask.[2]The term "supertask" was coined by the philosopherJames F. Thomson, who devisedThomson's lamp. The term "hypertask" derives from Clark and Read in their paper of that name.[3] The origin of the interest in supertasks is normally attributed toZeno of Elea. Zeno claimed thatmotion was impossible. He argued as follows: suppose our burgeoning "mover", Achilles say, wishes to move from A to B. To achieve this he must traverse half the distance from A to B. To get from the midpoint of AB to B, Achilles must traverse halfthisdistance, and so on and so forth. However many times he performs one of these "traversing" tasks, there is another one left for him to do before he arrives at B. Thus it follows, according to Zeno, that motion (travelling a non-zero distance in finite time) is a supertask. Zeno further argues that supertasks are not possible (how can this sequence be completed if for each traversing there is another one to come?). It follows that motion is impossible. Zeno's argument takes the following form: Most subsequent philosophers reject Zeno's bold conclusion in favor of common sense. Instead, they reverse the argument and take it as aproof by contradictionwhere the possibility of motion is taken for granted. They accept the possibility of motion and applymodus tollens(contrapositive) to Zeno's argument to reach the conclusion that either motion is not a supertask or not all supertasks are impossible.[citation needed] Zeno himself also discusses the notion of what he calls "Achillesand the tortoise". Suppose that Achilles is the fastest runner, and moves at a speed of 1 m/s. Achilles chases a tortoise, an animal renowned for being slow, that moves at 0.1 m/s. However, the tortoise starts 0.9 metres ahead. Common sense seems to decree that Achilles will catch up with the tortoise after exactly 1 second, but Zeno argues that this is not the case. He instead suggests that Achilles must inevitably come up to the point where the tortoise has started from, but by the time he has accomplished this, the tortoise will already have moved on to another point. This continues, and every time Achilles reaches the mark where the tortoise was, the tortoise will have reached a new point that Achilles will have to catch up with; while it begins with 0.9 metres, it becomes an additional 0.09 metres, then 0.009 metres, and so on, infinitely. While these distances will grow very small, they will remain finite, while Achilles' chasing of the tortoise will become an unending supertask. Much commentary has been made on this particular paradox; many assert that it finds a loophole in common sense.[4] James F. Thomsonbelieved that motion was not a supertask, and he emphatically denied that supertasks are possible. He considered a lamp that may either be on or off. At timet= 0the lamp is off, and the switch is flipped on att= 1/2; after that, the switch is flipped after waiting for half the time as before. Thomson asks what is the state att= 1, when the switch has been flipped infinitely many times. He reasons that it cannot be on because there was never a time when it was not subsequently turned off, and vice versa, and reaches a contradiction. 
He concludes that supertasks are impossible.[5] Paul Benacerrafbelieves that supertasks are at least logically possible despite Thomson's apparent contradiction. Benacerraf agrees with Thomson insofar as that the experiment he outlined does not determine the state of the lamp at t = 1. However he disagrees with Thomson that he can derive a contradiction from this, since the state of the lamp at t = 1 cannot be logically determined by the preceding states.[citation needed] Most of the modern literature comes from the descendants of Benacerraf, those who tacitly accept the possibility of supertasks. Philosophers who reject their possibility tend not to reject them on grounds such as Thomson's but because they have qualms with the notion of infinity itself. Of course there are exceptions. For example, McLaughlin claims that Thomson's lamp is inconsistent if it is analyzed withinternal set theory, a variant ofreal analysis. If supertasks are possible, then the truth or falsehood of unknown propositions of number theory, such asGoldbach's conjecture, or evenundecidablepropositions could be determined in a finite amount of time by a brute-force search of the set of all natural numbers. This would, however, be in contradiction with theChurch–Turing thesis. Some have argued this poses a problem forintuitionism, since the intuitionist must distinguish between things that cannot in fact be proven (because they are too long or complicated; for exampleBoolos's "Curious Inference"[6]) but nonetheless are considered "provable", and those whichareprovable by infinite brute force in the above sense. Some have claimed, Thomson's lamp is physically impossible since it must have parts moving at speeds faster than thespeed of light(e.g., the lamp switch).Adolf Grünbaumsuggests that the lamp could have a strip of wire which, when lifted, disrupts the circuit and turns off the lamp; this strip could then be lifted by a smaller distance each time the lamp is to be turned off, maintaining a constant velocity. However, such a design would ultimately fail, as eventually the distance between the contacts would be so small as to allow electrons to jump the gap, preventing the circuit from being broken at all. Still, for either a human or any device, to perceive or act upon the state of the lamp some measurement has to be done, for example the light from the lamp would have to reach an eye or a sensor. Any such measurement will take a fixed frame of time, no matter how small and, therefore, at some point measurement of the state will be impossible. Since the state at t=1 cannot be determined even in principle, it is not meaningful to speak of the lamp being either on or off. Other physically possible supertasks have been suggested. In one proposal, one person (or entity) counts upward from 1, taking an infinite amount of time, while another person observes this from a frame of reference where this occurs in a finite space of time. For the counter, this is not a supertask, but for the observer, it is. (This could theoretically occur due totime dilation, for example if the observer were falling into ablack holewhile observing a counter whose position is fixed relative to the singularity.) Gustavo E. Romeroin the paper 'The collapse of supertasks'[7]maintains that any attempt to carry out a supertask will result in the formation of ablack hole, making supertasks physically impossible. 
The impact of supertasks on theoretical computer science has triggered some new and interesting work, for example Hamkins and Lewis – "Infinite Time Turing Machine".[8] Suppose there is a jar capable of containing infinitely many marbles and an infinite collection of marbles labelled 1, 2, 3, and so on. At time t = 0, marbles 1 through 10 are placed in the jar and marble 1 is taken out. At t = 0.5, marbles 11 through 20 are placed in the jar and marble 2 is taken out; at t = 0.75, marbles 21 through 30 are put in the jar and marble 3 is taken out; and in general at time t = 1 − 0.5^n, marbles 10n + 1 through 10n + 10 are placed in the jar and marble n + 1 is taken out. How many marbles are in the jar at time t = 1? One argument states that there should be infinitely many marbles in the jar, because at each step before t = 1 the number of marbles increases from the previous step and does so unboundedly. A second argument, however, shows that the jar is empty. Consider the following argument: if the jar is non-empty, then there must be a marble in the jar. Let us say that that marble is labeled with the number n. But at time t = 1 − 0.5^(n−1), the nth marble has been taken out, so marble n cannot be in the jar. This is a contradiction, so the jar must be empty. The Ross–Littlewood paradox is that here we have two seemingly perfectly good arguments with completely opposite conclusions. There has been considerable interest in J. A. Benardete's "Paradox of the Gods":[9] A man walks a mile from a point α. But there is an infinity of gods each of whom, unknown to the others, intends to obstruct him. One of them will raise a barrier to stop his further advance if he reaches the half-mile point, a second if he reaches the quarter-mile point, a third if he goes one-eighth of a mile, and so on ad infinitum. So he cannot even get started, because however short a distance he travels he will already have been stopped by a barrier. But in that case no barrier will rise, so that there is nothing to stop him setting off. He has been forced to stay where he is by the mere unfulfilled intentions of the gods.[10] Inspired by J. A. Benardete's paradox regarding an infinite series of assassins,[11] David Chalmers describes the paradox as follows: There are countably many grim reapers, one for every positive integer. Grim reaper 1 is disposed to kill you with a scythe at 1pm, if and only if you are still alive then (otherwise his scythe remains immobile throughout), taking 30 minutes about it. Grim reaper 2 is disposed to kill you with a scythe at 12:30 pm, if and only if you are still alive then, taking 15 minutes about it. Grim reaper 3 is disposed to kill you with a scythe at 12:15 pm, and so on. You are still alive just before 12pm, you can only die through the motion of a grim reaper's scythe, and once dead you stay dead. On the face of it, this situation seems conceivable — each reaper seems conceivable individually and intrinsically, and it seems reasonable to combine distinct individuals with distinct intrinsic properties into one situation. But a little reflection reveals that the situation as described is contradictory. I cannot survive to any moment past 12pm (a grim reaper would get me first), but I cannot be killed (for grim reaper n to kill me, I must have survived grim reaper n+1, which is impossible).[12] It has gained significance in philosophy via its use in arguing for a finite past, thereby bearing relevance to the Kalam cosmological argument.[13][14][15][16] Proposed by E.
Brian Davies,[17] this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. However, Davies also points out that – due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory – his machine cannot actually be built.
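The one-hour total is a short check, added here for clarity: the first replication takes 30 minutes and each successor works twice as fast, so the total construction time is a geometric series,

\[
\sum_{k=0}^{\infty}\frac{30\ \text{min}}{2^{k}} \;=\; 30\ \text{min}\cdot\frac{1}{1-\tfrac{1}{2}} \;=\; 60\ \text{min}.
\]

Infinitely many replication steps are therefore completed within a single hour, which is what makes the cascade a supertask.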
https://en.wikipedia.org/wiki/Supertask
The Matilda effect is a bias against acknowledging the achievements of women scientists whose work is attributed to their male colleagues. This phenomenon was first described by suffragist and abolitionist Matilda Joslyn Gage (1826–1898) in her essay, "Woman as Inventor" (first published as a tract in 1870 and in the North American Review in 1883). The term Matilda effect was coined in 1993 by science historian Margaret W. Rossiter.[1][2] Rossiter provides several examples of this effect. Trotula (Trota of Salerno), a 12th-century Italian woman physician, wrote books which, after her death, were attributed to male authors. Nineteenth- and twentieth-century cases illustrating the Matilda effect include those of Nettie Stevens,[3] Lise Meitner, Marietta Blau, Rosalind Franklin, and Jocelyn Bell Burnell. The Matilda effect has been compared to the Matthew effect, whereby an eminent scientist often gets more credit than a comparatively unknown researcher, even if their work is shared or similar.[4][5] In 2012, Marieke van den Brink and Yvonne Benschop from Radboud University Nijmegen showed that in the Netherlands the sex of professorship candidates influences the evaluation made of them.[6] Similar cases are described by Andrea Cerroni and Zenia Simonella in a study[7] corroborated further by a Spanish study.[8] On the other hand, several studies found no difference between the citations and impact of publications by male authors and those by female authors.[9][10][11] Swiss researchers have indicated that the mass media ask male scientists to contribute to shows more often than they ask their female fellow scientists.[12] According to one U.S. study, "although overt gender discrimination generally continues to decline in American society," "women continue to be disadvantaged with respect to the receipt of scientific awards and prizes, particularly for research."[13] Documented examples include women subjected to the Matilda effect as well as men scientists favored over women scientists for Nobel Prizes. The Spanish Association of Women Researchers and Technologists (AMIT) has created a movement called "No more Matildas" that honours Matilda Joslyn Gage.[33] The campaign's goal is to increase the number of women in science from an early age, eliminating stereotypes. Ben Barres (1954–2017) was a neurobiologist at Stanford University Medical School who transitioned from female to male. He spoke of his scientific achievements having been perceived differently, depending on what sex others thought he was at the time.[34] Prior to his transition, Barres' scientific achievements were ascribed to men or devalued, but after transitioning to male, his achievements were credited to him and lauded.
https://en.wikipedia.org/wiki/Matilda_effect
MyriaNed is a wireless sensor network (WSN) platform developed by DevLab. It uses an epidemic communication style based on standard radio broadcasting. This approach reflects the way humans interact, which is called gossiping.[1] Messages are sent periodically and received by adjoining neighbours. Each message is repeated and duplicated towards all nodes that span the network; it spreads like a virus (hence the term epidemic communication). This is a very efficient and robust[2][3] protocol, mainly for two reasons: nodes can be added, removed or may be physically moving without the need to reconfigure the network, and the GOSSIP protocol is a self-configuring network solution. The network may even be heterogeneous, where several types of nodes communicate different pieces of information with each other at the same time. This is possible due to the fact that no interpretation of the message content is required in order to be able to forward it to other nodes. Message communication is fully transparent, providing a seamless communication platform, where new functionality can be added later, without the need to change the installed base. Furthermore, MyriaNed is able to update the wireless sensor nodes' software by means of "over the air" programming of a deployed network. Traditionally, radio communication is organized according to the master-slave philosophy. The way two nodes communicate is point-to-point. A command is sent top-down and a confirmation is sent bottom-up between two hierarchical levels. In biology, however, this is organized differently. For instance, adrenaline in the human body works quite differently. This message (a hormone and neurotransmitter) is sent to different types of cells. Every cell knows what to do with this message (increase heart rate, constrict blood vessels, dilate air passages) and does not send a confirmation. This is the inspiration for MyriaNed in a nutshell. Another inspiration is the basic radio broadcasting principle. A radio with an antenna is made to send and receive a message to and from every direction. Implicitly it is not optimized to perform point-to-point communication. Wires are ideally suitable for that because they always link two devices. Looking at wireless communication, it should be structured in such a way that it uses the potential of radio transmission. The third inspiration is that of human gossiping. The term is sometimes associated with spreading misinformation of a trivial nature, but the way information is disseminated is one of the oldest and most common in nature. Information is generated by a source and gossiped to its neighbours. They spread the message to their neighbours, thereby exponentially increasing the number of people familiar with the information. Together these three inspirations led to the development of the MyriaNed platform. There is no master-slave structure in the network; rather, each node is hierarchically equal. MyriaNed uses biological routing, which is random and independent of the function of the node. Each node decides what to do with a message. Furthermore, it sends the message to all its neighbours, thereby using the basic radio communication characteristics. Potentially, the complete set of information (e.g. sensor values, control data) is available to every node in the network. By using an intelligent strategy, called shared state, this information is stored as a distributed database in the network. Nodes that are newly added to the network can utilize this shared state to instantaneously adapt and contribute to the network functionality.
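The epidemic spreading described above is easy to visualize with a toy simulation. The sketch below is illustrative only; it is not DevLab code, and the topology, round structure and number of nodes are invented for the example. It floods a single message through a random neighbourhood graph, one communication round at a time, and prints how quickly every node becomes aware of it.

```python
import random

random.seed(1)

# Build a random "neighbourhood" graph: 30 nodes, each wired to a few other
# nodes assumed to be within radio range (purely hypothetical topology).
NUM_NODES = 30
neighbours = {n: set() for n in range(NUM_NODES)}
for n in range(NUM_NODES):
    for m in random.sample([x for x in range(NUM_NODES) if x != n], 3):
        neighbours[n].add(m)
        neighbours[m].add(n)  # radio links are symmetric

# Gossip dissemination: in every round, each node that already knows the
# message rebroadcasts it to all of its neighbours.
informed = {0}            # the message originates at node 0
round_no = 0
while len(informed) < NUM_NODES and round_no < 20:   # guard in case the random graph is disconnected
    round_no += 1
    newly_informed = set()
    for node in informed:
        newly_informed |= neighbours[node]
    informed |= newly_informed
    print(f"round {round_no}: {len(informed)} of {NUM_NODES} nodes informed")
```

With a connected graph the message typically covers the whole network in a handful of rounds, which is the exponential spread the gossiping analogy refers to.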
When it comes to caching the messages there are two scenarios. In the first scenario, if a message is new to the receiving node (meaning the data was not received in previous communication rounds), the node will store the message in cache and transmit this message to its own neighbours. In the second, if the message is old (meaning the data was already received before, i.e. through another neighbour), the message is discarded. If the cache is full, different strategies can be employed in order to make room for new messages. Since there is no top-down structure imposed on the network and data dissemination is transparent, the network is naturally scalable. On the communication level no identification administration is necessary and messages have a standard structure. This makes it possible for a MyriaNed network to scale far beyond the limits of currently available WSN technologies. Also, different functionality can be integrated and executed on a single network. In order to reduce the energy consumption of the nodes in the network, duty cycling is used. This means that nodes communicate periodically, and go to standby mode for a large part of the period in order to preserve energy. In order to communicate, the nodes need to wake up at the same time; therefore they have a built-in synchronization mechanism. During radio communication a TDMA (time-division multiple access)[4] scheme is used to overcome collisions during broadcast communication. Current implementations run on 2.4 GHz and 868 MHz radios. The concept of MyriaNed is, however, not restricted to these frequencies. From the previous characteristics of MyriaNed it can be derived that it uses a true mesh topology. The advantages of such a topology are reliability and coping with mobility, because of the redundant communication paths in the network. Setup and configuration are kept to a bare minimum because of the bottom-up approach utilized in the self-organizing network. There is no notion of a coordinator or network manager entity, in contrast with technologies such as Zigbee or WirelessHART. This reduces the effort spent on setup and maintenance. When MyriaNed is used for specific applications, the ultimate implementation is based on a large set of autonomous devices which make their own autonomous decisions (e.g. controlling actuators) based on the available information that travels through the network by gossiping dissemination. The sum of all individual behaviors of the network nodes reflects the emergent behavior of the system as a whole, which is the system's application. MyriaNed has an extremely small stack, uses little calculation power and does not need a large amount of energy. Therefore, it can be run on a simple microcontroller and a small battery. This makes the cost of a single node very low. DevLab members work with a single-chip solution in which the radio and microcontroller are integrated. This chip with an attached battery is smaller than a 2 euro coin. Installation and expansion of networks using the MyriaNed protocol is very cost efficient as well. There is no need for addressing, and the information in the network is synchronized over time with added nodes. Therefore, no additional costs (such as gateways, setup or bridges) have to be incurred in order to install or expand the network. Because of the structure of MyriaNed there is no need for different profiles for market applications. Different applications can run next to each other without interfering. Instead they will only help each other by increasing the density of the network.
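The two caching scenarios described at the start of this section amount to a very small receive handler. The sketch below is again purely illustrative: MyriaNed runs on microcontrollers and its real data structures are not described in this article, so the cache size, eviction rule, message fields and callback interface are all assumptions made for the example.

```python
from collections import OrderedDict

CACHE_SIZE = 16  # assumed limit; real firmware would be constrained by node RAM

class GossipNode:
    def __init__(self, node_id, radio_send):
        self.node_id = node_id
        self.radio_send = radio_send   # callback that broadcasts a message to all neighbours
        self.cache = OrderedDict()     # message id -> payload, oldest entry first

    def on_receive(self, msg_id, payload):
        # Scenario 2: the message was already seen in an earlier round -> discard it.
        if msg_id in self.cache:
            return
        # Scenario 1: a new message -> cache it and rebroadcast it to the neighbours.
        if len(self.cache) >= CACHE_SIZE:
            self.cache.popitem(last=False)   # one possible strategy: evict the oldest entry
        self.cache[msg_id] = payload
        self.radio_send(msg_id, payload)
```

In a deployment this handler would run on the node's radio receive path; here `radio_send` can simply be any function that delivers the message to neighbouring `GossipNode` instances.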
Every DevLab member is free to use MyriaNed in whatever market they want. This has resulted in many interoperable devices in completely different applications. Chess Wise, one of the companies behind DevLab, used the MyriaNed technology as an early base for Mymesh, their network protocol. This technology is used to connect, control and analyze thousands of devices simultaneously within demanding environments.[5] EP application 2301302, van der Wateren, Frits, "Broadcast-only distributed wireless network", published 2009-06-22, assigned to CHESS
https://en.wikipedia.org/wiki/MyriaNed
Athesaurus(pl.:thesauriorthesauruses), sometimes called asynonym dictionaryordictionary of synonyms, is areference workwhich arranges words by their meanings (or in simpler terms, a book where one can find different words with similar meanings to other words),[1][2]sometimes as a hierarchy ofbroader and narrower terms, sometimes simply as lists ofsynonymsandantonyms. They are often used by writers to help find the best word to express an idea: ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed Synonym dictionaries have a long history. The word 'thesaurus' was used in 1852 byPeter Mark Rogetfor hisRoget's Thesaurus. While some works called "thesauri", such asRoget's Thesaurus, group words in ahierarchicalhypernymictaxonomyof concepts, others are organised alphabetically[4][2]or in some other way. Most thesauri do not include definitions, but many dictionaries include listings of synonyms. Some thesauri and dictionary synonym notes characterise the distinctions between similar words, with notes on their "connotations and varying shades of meaning".[5]Some synonym dictionaries are primarily concerned with differentiating synonyms by meaning and usage.Usage manualssuch as Fowler'sDictionary of Modern English UsageorGarner's Modern English Usageoftenprescribeappropriate usage of synonyms. Writers sometimes use thesauri to avoid repetition of words –elegant variation– which is often criticised by usage manuals: "Writers sometimes use them not just to vary their vocabularies but to dress them up too much".[6] The word "thesaurus" comes fromLatinthēsaurus, which in turn comes fromGreekθησαυρός(thēsauros) 'treasure, treasury, storehouse'.[7]The wordthēsaurosis of uncertain etymology.[7][8][9] Until the 19th century, a thesaurus was anydictionaryorencyclopedia,[9]as in theThesaurus Linguae Latinae(Dictionary of the Latin Language, 1532), and theThesaurus Linguae Graecae(Dictionary of the Greek Language, 1572). It was Roget who introduced the meaning "collection of words arranged according to sense", in 1852.[7] In antiquity,Philo of Byblosauthored the first text that could now be called a thesaurus. InSanskrit, theAmarakoshais a thesaurus in verse form, written in the 4th century. The study of synonyms became an important theme in 18th-century philosophy, andCondillacwrote, but never published, a dictionary of synonyms.[10][11] Some early synonym dictionaries include: Roget's Thesaurus, first compiled in 1805 by Peter Mark Roget, and published in 1852, followsJohn Wilkins' semantic arrangement of 1668. Unlike earlier synonym dictionaries, it does not include definitions or aim to help the user choose among synonyms. It has been continuously in print since 1852 and remains widely used across the English-speaking world.[20]Roget described his thesaurus in the foreword to the first edition:[21] It is now nearly fifty years since I first projected a system of verbal classification similar to that on which the present work is founded. Conceiving that such a compilation might help to supply my deficiencies, I had, in the year 1805, completed a classed catalogue of words on a small scale, but on the same principle, and nearly in the same form, as the Thesaurus now published. Roget's original thesaurus was organized into 1000 conceptual Heads (e.g., 806 Debt) organized into a four-leveltaxonomy. 
For example, debt is classed under V.ii.iv:[22] Each head includes direct synonyms: Debt, obligation, liability, ...; related concepts: interest, usance, usury; related persons: debtor, debitor, ... defaulter (808); verbs: to be in debt, to owe, ... see Borrow (788); phrases: to run up a bill or score, ...; and adjectives: in debt, indebted, owing, .... Numbers in parentheses are cross-references to other Heads. The book starts with a Tabular Synopsis of Categories laying out the hierarchy,[23] then the main body of the thesaurus listed by Head, and then an alphabetical index listing the different Heads under which a word may be found: Liable, subject to, 177; debt, 806; duty, 926.[24] Some recent versions have kept the same organization, though often with more detail under each Head.[25] Others have made modest changes such as eliminating the four-level taxonomy and adding new heads: one has 1075 Heads in fifteen Classes.[26] Some non-English thesauri have also adopted this model.[27] In addition to its taxonomic organization, the Historical Thesaurus of English (2009) includes the date when each word came to have a given meaning. It has the novel and unique goal of "charting the semantic development of the huge and varied vocabulary of English". Different senses of a word are listed separately. For example, three different senses of "debt" are listed in three different places in the taxonomy:[28] (1) a sum of money that is owed or due; a liability or obligation to pay; (2) an immaterial debt, an obligation to do something; and (3) an offence requiring expiation (figurative, Biblical). Other thesauri and synonym dictionaries are organized alphabetically. Most repeat the list of synonyms under each word.[29][30][31][32] Some designate a principal entry for each concept and cross-reference it.[33][34][35] A third system interfiles words and conceptual headings. Francis March's Thesaurus Dictionary gives for liability: CONTINGENCY, CREDIT–DEBT, DUTY–DERELICTION, LIBERTY–SUBJECTION, MONEY, each of which is a conceptual heading.[36] The CREDIT–DEBT article has multiple subheadings, including Nouns of Agent, Verbs, Verbal Expressions, etc. Under each are listed synonyms with brief definitions, e.g. "Credit. Transference of property on promise of future payment." The conceptual headings are not organized into a taxonomy. Benjamin Lafaye's Synonymes français (1841) is organized around morphologically related families of synonyms (e.g. logis, logement),[37] and his Dictionnaire des synonymes de la langue française (1858) is mostly alphabetical, but also includes a section on morphologically related synonyms, which is organized by prefix, suffix, or construction.[11] Before Roget, most thesauri and dictionary synonym notes included discussions of the differences among near-synonyms, as do some modern ones.[32][31][30][5] Merriam-Webster's Dictionary of Synonyms is a stand-alone modern English synonym dictionary that does discuss differences.[33] In addition, many general English dictionaries include synonym notes. Several modern synonym dictionaries in French are primarily devoted to discussing the precise demarcations among synonyms.[38][11] Some include short definitions.[36] Some give illustrative phrases.[32] Some include lists of objects within the category (hyponyms), e.g. breeds of dogs.[32] Bilingual synonym dictionaries are designed for language learners.
One such dictionary gives various French words listed alphabetically, with an English translation and an example of use.[39] Another is organized taxonomically with examples, translations, and some usage notes.[40] In library and information science, a thesaurus is a kind of controlled vocabulary. A thesaurus can form part of an ontology and be represented in the Simple Knowledge Organization System (SKOS).[41] Thesauri are used in natural language processing for word-sense disambiguation[42] and text simplification for machine translation systems.[43]
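In computational settings such as the controlled-vocabulary and word-sense-disambiguation uses just mentioned, a Roget-style thesaurus is essentially two linked structures: a set of numbered conceptual Heads holding synonym lists and cross-references, plus an alphabetical index from words to Heads. The toy sketch below is not a real thesaurus or library API; the Head numbers are borrowed from the Debt example above and everything else is invented for illustration.

```python
# A toy Roget-style thesaurus: numbered Heads with synonyms and
# cross-references, plus an alphabetical word -> Heads index.
heads = {
    806: {"name": "Debt",
          "synonyms": ["debt", "obligation", "liability"],
          "cross_refs": [808, 788]},      # e.g. Defaulter (808), Borrow (788)
    808: {"name": "Defaulter", "synonyms": ["defaulter"], "cross_refs": []},
    788: {"name": "Borrow", "synonyms": ["borrow"], "cross_refs": []},
}

# Alphabetical index, as at the back of Roget's: a word may appear under several Heads.
index = {}
for number, head in heads.items():
    for word in head["synonyms"]:
        index.setdefault(word, []).append(number)

def lookup(word):
    """Return the synonym list of every Head under which a word is indexed."""
    return {n: heads[n]["synonyms"] for n in index.get(word, [])}

print(lookup("debt"))   # {806: ['debt', 'obligation', 'liability']}
```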
https://en.wikipedia.org/wiki/Thesaurus
Inmathematics,Euler's identity[note 1](also known asEuler's equation) is theequalityeiπ+1=0{\displaystyle e^{i\pi }+1=0}where Euler's identity is named after the Swiss mathematicianLeonhard Euler. It is a special case ofEuler's formulaeix=cos⁡x+isin⁡x{\displaystyle e^{ix}=\cos x+i\sin x}when evaluated forx=π{\displaystyle x=\pi }. Euler's identity is considered an exemplar ofmathematical beauty, as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used ina proof[3][4]thatπistranscendental, which implies the impossibility ofsquaring the circle. Euler's identity is often cited as an example of deepmathematical beauty.[5]Three of the basicarithmeticoperations occur exactly once each:addition,multiplication, andexponentiation. The identity also links five fundamentalmathematical constants:[6] The equation is often given in the form of an expression set equal to zero, which is common practice in several areas of mathematics. Stanford Universitymathematics professorKeith Devlinhas said, "like a Shakespeareansonnetthat captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence".[7]Paul Nahin, a professor emeritus at theUniversity of New Hampshirewho wrote a book dedicated toEuler's formulaand its applications inFourier analysis, said Euler's identity is "of exquisite beauty".[8] Mathematics writerConstance Reidhas said that Euler's identity is "the most famous formula in all mathematics".[9]Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor atHarvard University, after proving Euler's identity during a lecture, said that it "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".[10] A 1990 poll of readers byThe Mathematical Intelligencernamed Euler's identity the "most beautiful theorem in mathematics".[11]In a 2004 poll of readers byPhysics World, Euler's identity tied withMaxwell's equations(ofelectromagnetism) as the "greatest equation ever".[12] At least three books inpopular mathematicshave been published about Euler's identity: Euler's identity asserts thateiπ{\displaystyle e^{i\pi }}is equal to −1. The expressioneiπ{\displaystyle e^{i\pi }}is a special case of the expressionez{\displaystyle e^{z}}, wherezis anycomplex number. In general,ez{\displaystyle e^{z}}is defined for complexzby extending one of thedefinitions of the exponential functionfrom real exponents to complex exponents. For example, one common definition is: Euler's identity therefore states that the limit, asnapproaches infinity, of(1+iπn)n{\displaystyle (1+{\tfrac {i\pi }{n}})^{n}}is equal to −1. This limit is illustrated in the animation to the right. Euler's identity is aspecial caseofEuler's formula, which states that for anyreal numberx, where the inputs of thetrigonometric functionssine and cosine are given inradians. In particular, whenx=π, Since and it follows that which yields Euler's identity: Any complex numberz=x+iy{\displaystyle z=x+iy}can be represented by the point(x,y){\displaystyle (x,y)}on thecomplex plane. This point can also be represented inpolar coordinatesas(r,θ){\displaystyle (r,\theta )}, whereris the absolute value ofz(distance from the origin), andθ{\displaystyle \theta }is the argument ofz(angle counterclockwise from the positivex-axis). 
By the definitions of sine and cosine, this point has cartesian coordinates of(rcos⁡θ,rsin⁡θ){\displaystyle (r\cos \theta ,r\sin \theta )}, implying thatz=r(cos⁡θ+isin⁡θ){\displaystyle z=r(\cos \theta +i\sin \theta )}. According to Euler's formula, this is equivalent to sayingz=reiθ{\displaystyle z=re^{i\theta }}. Euler's identity says that−1=eiπ{\displaystyle -1=e^{i\pi }}. Sinceeiπ{\displaystyle e^{i\pi }}isreiθ{\displaystyle re^{i\theta }}forr= 1 andθ=π{\displaystyle \theta =\pi }, this can be interpreted as a fact about the number −1 on the complex plane: its distance from the origin is 1, and its angle from the positivex-axis isπ{\displaystyle \pi }radians. Additionally, when any complex numberzismultipliedbyeiθ{\displaystyle e^{i\theta }}, it has the effect of rotatingz{\displaystyle z}counterclockwise by an angle ofθ{\displaystyle \theta }on the complex plane. Since multiplication by −1 reflects a point across the origin, Euler's identity can be interpreted as saying that rotating any pointπ{\displaystyle \pi }radians around the origin has the same effect as reflecting the point across the origin. Similarly, settingθ{\displaystyle \theta }equal to2π{\displaystyle 2\pi }yields the related equatione2πi=1,{\displaystyle e^{2\pi i}=1,}which can be interpreted as saying that rotating any point by oneturnaround the origin returns it to its original position. Euler's identity is also a special case of the more general identity that thenthroots of unity, forn> 1, add up to 0: Euler's identity is the case wheren= 2. A similar identity also applies toquaternion exponential: let{i,j,k}be the basisquaternions; then, More generally, letqbe a quaternion with a zero real part and a norm equal to 1; that is,q=ai+bj+ck,{\displaystyle q=ai+bj+ck,}witha2+b2+c2=1.{\displaystyle a^{2}+b^{2}+c^{2}=1.}Then one has The same formula applies tooctonions, with a zero real part and a norm equal to 1. These formulas are a direct generalization of Euler's identity, sincei{\displaystyle i}and−i{\displaystyle -i}are the only complex numbers with a zero real part and a norm (absolute value) equal to 1. Euler's identity is a direct result ofEuler's formula, published in his monumental 1748 work of mathematical analysis,Introductio in analysin infinitorum,[16]but it is questionable whether the particular concept of linking five fundamental constants in a compact form can be attributed to Euler himself, as he may never have expressed it.[17] Robin Wilsonwrites:[18] We've seen how [Euler's identity] can easily be deduced from results ofJohann BernoulliandRoger Cotes, but that neither of them seem to have done so. Even Euler does not seem to have written it down explicitly—and certainly it doesn't appear in any of his publications—though he must surely have realized that it follows immediately from his identity [i.e.Euler's formula],eix= cosx+isinx. Moreover, it seems to be unknown who first stated the result explicitly
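Both the identity itself and the limit definition quoted above are easy to check numerically. The snippet below is an illustrative check added here, not part of the article; it evaluates e^{iπ} + 1 with Python's standard complex-math module and shows (1 + iπ/n)^n approaching −1 as n grows.

```python
import cmath

# Euler's identity: e^{i*pi} + 1 should be (numerically) zero.
print(cmath.exp(1j * cmath.pi) + 1)          # ~1.2e-16j, i.e. zero up to rounding error

# The limit definition: (1 + i*pi/n)^n -> -1 as n -> infinity.
for n in (10, 100, 1000, 10_000):
    print(n, (1 + 1j * cmath.pi / n) ** n)

# The related identity e^{2*pi*i} = 1 (a full turn around the origin).
print(cmath.exp(2j * cmath.pi))              # ~(1+0j)
```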
https://en.wikipedia.org/wiki/Euler%27s_identity
Project Enterprise was an American microfinance nonprofit organization in New York City providing entrepreneurs from underserved areas with loans, business training and networking opportunities. Operating on the Grameen Bank model of microlending, as of 2008 Project Enterprise (PE) had served more than 2,500 entrepreneurs in New York City, and provided microloans from $1,500 to $12,000.[2][3] The organization's web site was closed in 2017. Project Enterprise was started in 1997 as the only provider of business microloans in New York City that did not require prior business experience, credit history or collateral to provide market-rate financing for small businesses.[4][5] PE had been a certified Community development financial institution since 1998. Founding Executive Director Vanessa Rudin was replaced by Arva Rice in November 2003. From 2004 to 2006 PE saw substantial growth, with increasing numbers of loans and total amounts lent. After focus groups were conducted, new loan products, events and resources for entrepreneurs were developed. PE launched a networking event programme, Big Connections, and an Access to Markets program addressing bringing products and services to the marketplace.[6] During the economic downturn, Project Enterprise saw an increase in demand and in 2008 had its best year since inception.[7] Mel Washington became the Executive Director on 1 September 2009. In 2006, PE won the Association of Enterprise Opportunity's Innovation in Program Design Award for the Access to Markets Initiative. In 2007, PE staff member Althea Burton was named the New York Small Business Administration Home-Based Business Champion of the Year.[citation needed] The organization's web site was closed and it appeared to stop operating in 2017.
https://en.wikipedia.org/wiki/Project_Enterprise
Inmachine learning,backpropagationis agradientestimation method commonly used for training aneural networkto compute its parameter updates. It is an efficient application of thechain ruleto neural networks. Backpropagation computes thegradientof aloss functionwith respect to theweightsof the network for a single input–output example, and does soefficiently, computing the gradient one layer at a time,iteratingbackward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived throughdynamic programming.[1][2][3] Strictly speaking, the termbackpropagationrefers only to an algorithm for efficiently computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm – including how the gradient is used, such as bystochastic gradient descent, or as an intermediate step in a more complicated optimizer, such asAdaptive Moment Estimation.[4]The local minimum convergence, exploding gradient, vanishing gradient, and weak control of learning rate are main disadvantages of these optimization algorithms. TheHessianand quasi-Hessian optimizers solve only local minimum convergence problem, and the backpropagation works longer. These problems caused researchers to develop hybrid[5]and fractional[6]optimization algorithms. Backpropagation had multiple discoveries and partial discoveries, with a tangled history and terminology. See thehistorysection for details. Some other names for the technique include "reverse mode ofautomatic differentiation" or "reverse accumulation".[7] Backpropagation computes the gradient inweight spaceof a feedforward neural network, with respect to aloss function. Denote: In the derivation of backpropagation, other intermediate quantities are used by introducing them as needed below. Bias terms are not treated specially since they correspond to a weight with a fixed input of 1. For backpropagation the specific loss function and activation functions do not matter as long as they and their derivatives can be evaluated efficiently. Traditional activation functions include sigmoid,tanh, andReLU.Swish,[8]mish,[9]and other activation functions have since been proposed as well. The overall network is a combination offunction compositionandmatrix multiplication: For a training set there will be a set of input–output pairs,{(xi,yi)}{\displaystyle \left\{(x_{i},y_{i})\right\}}. For each input–output pair(xi,yi){\displaystyle (x_{i},y_{i})}in the training set, the loss of the model on that pair is the cost of the difference between the predicted outputg(xi){\displaystyle g(x_{i})}and the target outputyi{\displaystyle y_{i}}: Note the distinction: during model evaluation the weights are fixed while the inputs vary (and the target output may be unknown), and the network ends with the output layer (it does not include the loss function). During model training the input–output pair is fixed while the weights vary, and the network ends with the loss function. Backpropagation computes the gradient for afixedinput–output pair(xi,yi){\displaystyle (x_{i},y_{i})}, where the weightswjkl{\displaystyle w_{jk}^{l}}can vary. Each individual component of the gradient,∂C/∂wjkl,{\displaystyle \partial C/\partial w_{jk}^{l},}can be computed by the chain rule; but doing this separately for each weight is inefficient. 
Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values, by computing the gradient of each layer – specifically the gradient of the weightedinputof each layer, denoted byδl{\displaystyle \delta ^{l}}– from back to front. Informally, the key point is that since the only way a weight inWl{\displaystyle W^{l}}affects the loss is through its effect on thenextlayer, and it does solinearly,δl{\displaystyle \delta ^{l}}are the only data you need to compute the gradients of the weights at layerl{\displaystyle l}, and then the gradients of weights of previous layer can be computed byδl−1{\displaystyle \delta ^{l-1}}and repeated recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layerl{\displaystyle l}, it is unnecessary to recompute all derivatives on later layersl+1,l+2,…{\displaystyle l+1,l+2,\ldots }each time. Second, it avoids unnecessary intermediate calculations, because at each stage it directly computes the gradient of the weights with respect to the ultimate output (the loss), rather than unnecessarily computing the derivatives of the values of hidden layers with respect to changes in weights∂aj′l′/∂wjkl{\displaystyle \partial a_{j'}^{l'}/\partial w_{jk}^{l}}. Backpropagation can be expressed for simple feedforward networks in terms ofmatrix multiplication, or more generally in terms of theadjoint graph. For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication.[c]Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layerfrom right to left– "backwards" – with the gradient of the weights between each layer being a simple modification of the partial products (the "backwards propagated error"). Given an input–output pair(x,y){\displaystyle (x,y)}, the loss is: To compute this, one starts with the inputx{\displaystyle x}and works forward; denote the weighted input of each hidden layer aszl{\displaystyle z^{l}}and the output of hidden layerl{\displaystyle l}as the activational{\displaystyle a^{l}}. For backpropagation, the activational{\displaystyle a^{l}}as well as the derivatives(fl)′{\displaystyle (f^{l})'}(evaluated atzl{\displaystyle z^{l}}) must be cached for use during the backwards pass. The derivative of the loss in terms of the inputs is given by the chain rule; note that each term is atotal derivative, evaluated at the value of the network (at each node) on the inputx{\displaystyle x}: wheredaLdzL{\displaystyle {\frac {da^{L}}{dz^{L}}}}is a diagonal matrix. 
These terms are: the derivative of the loss function;[d]the derivatives of the activation functions;[e]and the matrices of weights:[f] The gradient∇{\displaystyle \nabla }is thetransposeof the derivative of the output in terms of the input, so the matrices are transposed and the order of multiplication is reversed, but the entries are the same: Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights is not just a subexpression: there's an extra multiplication. Introducing the auxiliary quantityδl{\displaystyle \delta ^{l}}for the partial products (multiplying from right to left), interpreted as the "error at levell{\displaystyle l}" and defined as the gradient of the input values at levell{\displaystyle l}: Note thatδl{\displaystyle \delta ^{l}}is a vector, of length equal to the number of nodes in levell{\displaystyle l}; each component is interpreted as the "cost attributable to (the value of) that node". The gradient of the weights in layerl{\displaystyle l}is then: The factor ofal−1{\displaystyle a^{l-1}}is because the weightsWl{\displaystyle W^{l}}between levell−1{\displaystyle l-1}andl{\displaystyle l}affect levell{\displaystyle l}proportionally to the inputs (activations): the inputs are fixed, the weights vary. Theδl{\displaystyle \delta ^{l}}can easily be computed recursively, going from right to left, as: The gradients of the weights can thus be computed using a few matrix multiplications for each level; this is backpropagation. Compared with naively computing forwards (using theδl{\displaystyle \delta ^{l}}for illustration): There are two key differences with backpropagation: For more general graphs, and other advanced variations, backpropagation can be understood in terms ofautomatic differentiation, where backpropagation is a special case ofreverse accumulation(or "reverse mode").[7] The goal of anysupervised learningalgorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output.[10] To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuition about the relationship between the actual output of a neuron and the correct output for a particular training example. Consider a simple neural network with two input units, one output unit and no hidden units, and in which each neuron uses alinear output(unlike most work on neural networks, in which mapping from inputs to outputs is non-linear)[g]that is the weighted sum of its input. Initially, before training, the weights will be set randomly. Then the neuron learns fromtraining examples, which in this case consist of a set oftuples(x1,x2,t){\displaystyle (x_{1},x_{2},t)}wherex1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}are the inputs to the network andtis the correct output (the output the network should produce given those inputs, when it has been trained). The initial network, givenx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}, will compute an outputythat likely differs fromt(given random weights). 
Aloss functionL(t,y){\displaystyle L(t,y)}is used for measuring the discrepancy between the target outputtand the computed outputy. Forregression analysisproblems the squared error can be used as a loss function, forclassificationthecategorical cross-entropycan be used. As an example consider a regression problem using the square error as a loss: whereEis the discrepancy or error. Consider the network on a single training case:(1,1,0){\displaystyle (1,1,0)}. Thus, the inputx1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}are 1 and 1 respectively and the correct output,tis 0. Now if the relation is plotted between the network's outputyon the horizontal axis and the errorEon the vertical axis, the result is a parabola. Theminimumof theparabolacorresponds to the outputywhich minimizes the errorE. For a single training case, the minimum also touches the horizontal axis, which means the error will be zero and the network can produce an outputythat exactly matches the target outputt. Therefore, the problem of mapping inputs to outputs can be reduced to anoptimization problemof finding a function that will produce the minimal error. However, the output of a neuron depends on the weighted sum of all its inputs: wherew1{\displaystyle w_{1}}andw2{\displaystyle w_{2}}are the weights on the connection from the input units to the output unit. Therefore, the error also depends on the incoming weights to the neuron, which is ultimately what needs to be changed in the network to enable learning. In this example, upon injecting the training data(1,1,0){\displaystyle (1,1,0)}, the loss function becomes E=(t−y)2=y2=(x1w1+x2w2)2=(w1+w2)2.{\displaystyle E=(t-y)^{2}=y^{2}=(x_{1}w_{1}+x_{2}w_{2})^{2}=(w_{1}+w_{2})^{2}.} Then, the loss functionE{\displaystyle E}takes the form of a parabolic cylinder with its base directed alongw1=−w2{\displaystyle w_{1}=-w_{2}}. Since all sets of weights that satisfyw1=−w2{\displaystyle w_{1}=-w_{2}}minimize the loss function, in this case additional constraints are required to converge to a unique solution. Additional constraints could either be generated by setting specific conditions to the weights, or by injecting additional training data. One commonly used algorithm to find the set of weights that minimizes the error isgradient descent. By backpropagation, the steepest descent direction is calculated of the loss function versus the present synaptic weights. Then, the weights can be modified along the steepest descent direction, and the error is minimized in an efficient way. The gradient descent method involves calculating the derivative of the loss function with respect to the weights of the network. This is normally done using backpropagation. Assuming one output neuron,[h]the squared error function is where For each neuronj{\displaystyle j}, its outputoj{\displaystyle o_{j}}is defined as where theactivation functionφ{\displaystyle \varphi }isnon-linearanddifferentiableover the activation region (the ReLU is not differentiable at one point). A historically used activation function is thelogistic function: which has aconvenientderivative of: The inputnetj{\displaystyle {\text{net}}_{j}}to a neuron is the weighted sum of outputsok{\displaystyle o_{k}}of previous neurons. If the neuron is in the first layer after the input layer, theok{\displaystyle o_{k}}of the input layer are simply the inputsxk{\displaystyle x_{k}}to the network. The number of input units to the neuron isn{\displaystyle n}. 
The variablewkj{\displaystyle w_{kj}}denotes the weight between neuronk{\displaystyle k}of the previous layer and neuronj{\displaystyle j}of the current layer. Calculating thepartial derivativeof the error with respect to a weightwij{\displaystyle w_{ij}}is done using thechain ruletwice: In the last factor of the right-hand side of the above, only one term in the sumnetj{\displaystyle {\text{net}}_{j}}depends onwij{\displaystyle w_{ij}}, so that If the neuron is in the first layer after the input layer,oi{\displaystyle o_{i}}is justxi{\displaystyle x_{i}}. The derivative of the output of neuronj{\displaystyle j}with respect to its input is simply the partial derivative of the activation function: which for thelogistic activation function This is the reason why backpropagation requires that the activation function bedifferentiable. (Nevertheless, theReLUactivation function, which is non-differentiable at 0, has become quite popular, e.g. inAlexNet) The first factor is straightforward to evaluate if the neuron is in the output layer, because thenoj=y{\displaystyle o_{j}=y}and If half of the square error is used as loss function we can rewrite it as However, ifj{\displaystyle j}is in an arbitrary inner layer of the network, finding the derivativeE{\displaystyle E}with respect tooj{\displaystyle o_{j}}is less obvious. ConsideringE{\displaystyle E}as a function with the inputs being all neuronsL={u,v,…,w}{\displaystyle L=\{u,v,\dots ,w\}}receiving input from neuronj{\displaystyle j}, and taking thetotal derivativewith respect tooj{\displaystyle o_{j}}, a recursive expression for the derivative is obtained: Therefore, the derivative with respect tooj{\displaystyle o_{j}}can be calculated if all the derivatives with respect to the outputsoℓ{\displaystyle o_{\ell }}of the next layer – the ones closer to the output neuron – are known. [Note, if any of the neurons in setL{\displaystyle L}were not connected to neuronj{\displaystyle j}, they would be independent ofwij{\displaystyle w_{ij}}and the corresponding partial derivative under the summation would vanish to 0.] SubstitutingEq. 2,Eq. 3Eq.4andEq. 5inEq. 1we obtain: with ifφ{\displaystyle \varphi }is the logistic function, and the error is the square error: To update the weightwij{\displaystyle w_{ij}}using gradient descent, one must choose a learning rate,η>0{\displaystyle \eta >0}. The change in weight needs to reflect the impact onE{\displaystyle E}of an increase or decrease inwij{\displaystyle w_{ij}}. If∂E∂wij>0{\displaystyle {\frac {\partial E}{\partial w_{ij}}}>0}, an increase inwij{\displaystyle w_{ij}}increasesE{\displaystyle E}; conversely, if∂E∂wij<0{\displaystyle {\frac {\partial E}{\partial w_{ij}}}<0}, an increase inwij{\displaystyle w_{ij}}decreasesE{\displaystyle E}. The newΔwij{\displaystyle \Delta w_{ij}}is added to the old weight, and the product of the learning rate and the gradient, multiplied by−1{\displaystyle -1}guarantees thatwij{\displaystyle w_{ij}}changes in a way that always decreasesE{\displaystyle E}. 
In other words, in the equation immediately below,−η∂E∂wij{\displaystyle -\eta {\frac {\partial E}{\partial w_{ij}}}}always changeswij{\displaystyle w_{ij}}in such a way thatE{\displaystyle E}is decreased: Using aHessian matrixof second-order derivatives of the error function, theLevenberg–Marquardt algorithmoften converges faster than first-order gradient descent, especially when the topology of the error function is complicated.[11][12]It may also find solutions in smaller node counts for which other methods might not converge.[12]The Hessian can be approximated by theFisher informationmatrix.[13] As an example, consider a simple feedforward network. At thel{\displaystyle l}-th layer, we havexi(l),ai(l)=f(xi(l)),xi(l+1)=∑jWijaj(l){\displaystyle x_{i}^{(l)},\quad a_{i}^{(l)}=f(x_{i}^{(l)}),\quad x_{i}^{(l+1)}=\sum _{j}W_{ij}a_{j}^{(l)}}wherex{\displaystyle x}are the pre-activations,a{\displaystyle a}are the activations, andW{\displaystyle W}is the weight matrix. Given a loss functionL{\displaystyle L}, the first-order backpropagation states that∂L∂aj(l)=∑jWij∂L∂xi(l+1),∂L∂xj(l)=f′(xj(l))∂L∂aj(l){\displaystyle {\frac {\partial L}{\partial a_{j}^{(l)}}}=\sum _{j}W_{ij}{\frac {\partial L}{\partial x_{i}^{(l+1)}}},\quad {\frac {\partial L}{\partial x_{j}^{(l)}}}=f'(x_{j}^{(l)}){\frac {\partial L}{\partial a_{j}^{(l)}}}}and the second-order backpropagation states that∂2L∂aj1(l)∂aj2(l)=∑j1j2Wi1j1Wi2j2∂2L∂xi1(l+1)∂xi2(l+1),∂2L∂xj1(l)∂xj2(l)=f′(xj1(l))f′(xj2(l))∂2L∂aj1(l)∂aj2(l)+δj1j2f″(xj1(l))∂L∂aj1(l){\displaystyle {\frac {\partial ^{2}L}{\partial a_{j_{1}}^{(l)}\partial a_{j_{2}}^{(l)}}}=\sum _{j_{1}j_{2}}W_{i_{1}j_{1}}W_{i_{2}j_{2}}{\frac {\partial ^{2}L}{\partial x_{i_{1}}^{(l+1)}\partial x_{i_{2}}^{(l+1)}}},\quad {\frac {\partial ^{2}L}{\partial x_{j_{1}}^{(l)}\partial x_{j_{2}}^{(l)}}}=f'(x_{j_{1}}^{(l)})f'(x_{j_{2}}^{(l)}){\frac {\partial ^{2}L}{\partial a_{j_{1}}^{(l)}\partial a_{j_{2}}^{(l)}}}+\delta _{j_{1}j_{2}}f''(x_{j_{1}}^{(l)}){\frac {\partial L}{\partial a_{j_{1}}^{(l)}}}}whereδ{\displaystyle \delta }is theDirac delta symbol. Arbitrary-order derivatives in arbitrary computational graphs can be computed with backpropagation, but with more complex expressions for higher orders. The loss function is a function that maps values of one or more variables onto areal numberintuitively representing some "cost" associated with those values. For backpropagation, the loss function calculates the difference between the network output and its expected output, after a training example has propagated through the network. The mathematical expression of the loss function must fulfill two conditions in order for it to be possibly used in backpropagation.[14]The first is that it can be written as an averageE=1n∑xEx{\textstyle E={\frac {1}{n}}\sum _{x}E_{x}}over error functionsEx{\textstyle E_{x}}, forn{\textstyle n}individual training examples,x{\textstyle x}. The reason for this assumption is that the backpropagation algorithm calculates the gradient of the error function for a single training example, which needs to be generalized to the overall error function. The second assumption is that it can be written as a function of the outputs from the neural network. Lety,y′{\displaystyle y,y'}be vectors inRn{\displaystyle \mathbb {R} ^{n}}. Select an error functionE(y,y′){\displaystyle E(y,y')}measuring the difference between two outputs. 
The standard choice is the square of theEuclidean distancebetween the vectorsy{\displaystyle y}andy′{\displaystyle y'}:E(y,y′)=12‖y−y′‖2{\displaystyle E(y,y')={\tfrac {1}{2}}\lVert y-y'\rVert ^{2}}The error function overn{\textstyle n}training examples can then be written as an average of losses over individual examples:E=12n∑x‖(y(x)−y′(x))‖2{\displaystyle E={\frac {1}{2n}}\sum _{x}\lVert (y(x)-y'(x))\rVert ^{2}} Backpropagation had been derived repeatedly, as it is essentially an efficient application of thechain rule(first written down byGottfried Wilhelm Leibnizin 1676)[17][18]to neural networks. The terminology "back-propagating error correction" was introduced in 1962 byFrank Rosenblatt, but he did not know how to implement this.[19]In any case, he only studied neurons whose outputs were discrete levels, which only had zero derivatives, making backpropagation impossible. Precursors to backpropagation appeared inoptimal control theorysince 1950s.Yann LeCunet al credits 1950s work byPontryaginand others in optimal control theory, especially theadjoint state method, for being a continuous-time version of backpropagation.[20]Hecht-Nielsen[21]credits theRobbins–Monro algorithm(1951)[22]andArthur BrysonandYu-Chi Ho'sApplied Optimal Control(1969) as presages of backpropagation. Other precursors wereHenry J. Kelley1960,[1]andArthur E. Bryson(1961).[2]In 1962,Stuart Dreyfuspublished a simpler derivation based only on thechain rule.[23][24][25]In 1973, he adaptedparametersof controllers in proportion to error gradients.[26]Unlike modern backpropagation, these precursors used standard Jacobian matrix calculations from one stage to the previous one, neither addressing direct links across several stages nor potential additional efficiency gains due to network sparsity.[27] TheADALINE(1960) learning algorithm was gradient descent with a squared error loss for a single layer. The firstmultilayer perceptron(MLP) with more than one layer trained bystochastic gradient descent[22]was published in 1967 byShun'ichi Amari.[28]The MLP had 5 layers, with 2 learnable layers, and it learned to classify patterns not linearly separable.[27] Modern backpropagation was first published bySeppo Linnainmaaas "reverse mode ofautomatic differentiation" (1970)[29]for discrete connected networks of nesteddifferentiablefunctions.[30][31][32] In 1982,Paul Werbosapplied backpropagation to MLPs in the way that has become standard.[33][34]Werbos described how he developed backpropagation in an interview. In 1971, during his PhD work, he developed backpropagation to mathematicizeFreud's "flow of psychic energy". He faced repeated difficulty in publishing the work, only managing in 1981.[35]He also claimed that "the first practical application of back-propagation was for estimating a dynamic model to predict nationalism and social communications in 1974" by him.[36] Around 1982,[35]: 376David E. Rumelhartindependently developed[37]: 252backpropagation and taught the algorithm to others in his research circle. He did not cite previous work as he was unaware of them. 
He published the algorithm first in a 1985 paper, then in a 1986Naturepaper an experimental analysis of the technique.[38]These papers became highly cited, contributed to the popularization of backpropagation, and coincided with the resurging research interest in neural networks during the 1980s.[10][39][40] In 1985, the method was also described by David Parker.[41][42]Yann LeCunproposed an alternative form of backpropagation for neural networks in his PhD thesis in 1987.[43] Gradient descent took a considerable amount of time to reach acceptance. Some early objections were: there were no guarantees that gradient descent could reach a global minimum, only local minimum; neurons were "known" by physiologists as making discrete signals (0/1), not continuous ones, and with discrete signals, there is no gradient to take. See the interview withGeoffrey Hinton,[35]who was awarded the 2024Nobel Prize in Physicsfor his contributions to the field.[44] Contributing to the acceptance were several applications in training neural networks via backpropagation, sometimes achieving popularity outside the research circles. In 1987,NETtalklearned to convert English text into pronunciation. Sejnowski tried training it with both backpropagation and Boltzmann machine, but found the backpropagation significantly faster, so he used it for the final NETtalk.[35]: 324The NETtalk program became a popular success, appearing on theTodayshow.[45] In 1989, Dean A. Pomerleau published ALVINN, a neural network trained todrive autonomouslyusing backpropagation.[46] TheLeNetwas published in 1989 to recognize handwritten zip codes. In 1992,TD-Gammonachieved top human level play in backgammon. It was a reinforcement learning agent with a neural network with two layers, trained by backpropagation.[47] In 1993, Eric Wan won an international pattern recognition contest through backpropagation.[48][49] During the 2000s it fell out of favour[citation needed], but returned in the 2010s, benefiting from cheap, powerfulGPU-based computing systems. This has been especially so inspeech recognition,machine vision,natural language processing, and language structure learning research (in which it has been used to explain a variety of phenomena related to first[50]and second language learning.[51])[52] Error backpropagation has been suggested to explain human brainevent-related potential(ERP) components like theN400andP600.[53] In 2023, a backpropagation algorithm was implemented on aphotonic processorby a team atStanford University.[54]
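The derivation above (a forward pass that caches the activations, a δ recursion running from the output layer backwards, and weight gradients of the form δ^l (a^{l−1})^T) is compact enough to implement directly. The sketch below is an illustrative from-scratch implementation for a small fully connected network with logistic activations and half squared-error loss, following the article's formulas; the layer sizes, initialization, learning rate and XOR-style training data are arbitrary choices for the example, not anything prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network: 2 inputs -> 3 hidden units -> 1 output, logistic activation throughout.
sizes = [2, 3, 1]
W = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((m, 1)) for m in sizes[1:]]

def forward(x):
    """Forward pass; cache the activations a and the derivatives f'(z) for the backward pass."""
    a, activations, derivs = x, [x], []
    for Wl, bl in zip(W, b):
        z = Wl @ a + bl
        a = sigmoid(z)
        activations.append(a)
        derivs.append(a * (1 - a))        # the "convenient derivative" of the logistic function
    return activations, derivs

def backprop(x, y):
    """Return gradients of the half squared-error loss with respect to every W and b."""
    activations, derivs = forward(x)
    # Output layer: delta^L = (a^L - y) * f'(z^L), since dE/da^L = a^L - y for E = 1/2 ||a^L - y||^2.
    delta = (activations[-1] - y) * derivs[-1]
    grads_W, grads_b = [], []
    for l in range(len(W) - 1, -1, -1):
        grads_W.insert(0, delta @ activations[l].T)   # dE/dW^l = delta^l (a^{l-1})^T
        grads_b.insert(0, delta)
        if l > 0:
            # Propagate the error one layer back: delta^{l-1} = ((W^l)^T delta^l) * f'(z^{l-1})
            delta = (W[l].T @ delta) * derivs[l - 1]
    return grads_W, grads_b

# Train on XOR-like data with plain per-example gradient descent (arbitrary example data).
X = np.array([[0, 0, 1, 1], [0, 1, 0, 1]], dtype=float)
Y = np.array([[0, 1, 1, 0]], dtype=float)
eta = 1.0
for step in range(5000):
    for i in range(X.shape[1]):
        gW, gb = backprop(X[:, i:i+1], Y[:, i:i+1])
        W = [Wl - eta * g for Wl, g in zip(W, gW)]
        b = [bl - eta * g for bl, g in zip(b, gb)]

# With these settings the outputs typically approach [[0., 1., 1., 0.]] as training converges.
print(np.round(forward(X)[0][-1], 2))
```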
https://en.wikipedia.org/wiki/Backpropagation
Insoftware engineering, asoftware development processorsoftware development life cycle(SDLC) is a process of planning and managingsoftware development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improvedesignand/orproduct management. The methodology may include the pre-definition of specificdeliverablesand artifacts that are created and completed by a project team to develop or maintain an application.[1] Most modern development processes can be vaguely described asagile. Other methodologies includewaterfall,prototyping,iterative and incremental development,spiral development,rapid application development, andextreme programming. A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" is a particular instance as adopted by a specific organization.[citation needed]For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of thesystems development life cycle. The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), thesystems development life cyclecan be considered to be the oldest formalized methodology framework for buildinginformation systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"[2]within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functionalbusiness systemsin an age of large scale business conglomerates. Information systems activities revolved around heavydata processingandnumber crunchingroutines."[2] Requirements gathering and analysis:The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones. Planning and design:Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal. Development:With the planning and design in place, the development team begins the coding process. This phase involveswriting, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments. Testing and quality assurance:To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. 
Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended. Deployment and implementation:Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential. Maintenance and support:After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns. Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include: Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process andsoftware qualityare closely interrelated; some unexpected facets and effects have been observed in practice.[3] Among these, another software development process has been established inopen source. The adoption of these best practices known and established processes within the confines of a company is calledinner source. Software prototypingis about creating prototypes, i.e. incomplete versions of the software program being developed. The basic principles are:[1] A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies. "Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when theAgile Manifestowas formulated. Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system. The Agile model also includes the following software development processes: Continuous integrationis the practice of merging all developer working copies to a sharedmainlineseveral times a day.[4]Grady Boochfirst named and proposed CI inhis 1991 method,[5]although he did not advocate integrating several times a day.Extreme programming(XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day. 
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. There are three main variants of incremental development:[1] Rapid application development(RAD) is a software development methodology, which favorsiterative developmentand the rapid construction ofprototypesinstead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements. The rapid development process starts with the development of preliminarydata modelsandbusiness process modelsusingstructured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[6] The term was first used to describe a software development process introduced byJames Martinin 1991. According to Whitten (2003), it is a merger of variousstructured techniques, especially data-driveninformation technology engineering, with prototyping techniques to accelerate software systems development.[6] The basic principles of rapid application development are:[1] The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically: The first formal description of the method is often cited as an article published byWinston W. Royce[7]in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[8] The basic principles are:[1] The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete.[according to whom?]This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to thebig design up frontapproach.[according to whom?]Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development.[according to whom?]SeeCriticism of waterfall model. In 1988,Barry Boehmpublished a formal software system development "spiral model," which combines some key aspects of thewaterfall modelandrapid prototypingmethodologies, in an effort to combine advantages oftop-down and bottom-upconcepts. It provided emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems. The basic principles are:[1] Shape Up is a software development approach introduced byBasecampin 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. 
Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlike waterfall, agile, or scrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.[12][13] Other high-level software project methodologies include: Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
https://en.wikipedia.org/wiki/Software_development_methodologies
In mathematics, the concept of graph dynamical systems can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result. The work on GDSs considers finite graphs and finite state spaces. As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata or probabilistic cellular automata over Z^k, or interacting particle systems when some randomness is included), as well as GDSs with infinite state space (e.g. R, as in coupled map lattices); see, for example, Wu.[1] In the following, everything is implicitly assumed to be finite unless stated otherwise. A graph dynamical system is constructed from the following components: a finite graph Y with vertex set v[Y] = {1, 2, ..., n}; a state x_v taken from a finite set K for each vertex v; a vertex function f_v for each vertex v, which maps the states of v and its neighbours to a new state for v; and an update scheme specifying how the vertex functions are combined into a global map F : K^n → K^n. The phase space associated to a dynamical system with map F : K^n → K^n is the finite directed graph with vertex set K^n and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (f_i)_i, and the update scheme. The research in this area seeks to infer phase space properties based on the structure of the system constituents. The analysis has a local-to-global character. If, for example, the update scheme consists of applying the vertex functions synchronously, one obtains the class of generalized cellular automata (CA). In this case, the global map F : K^n → K^n is given by F(x)_v = f_v(x[v]), where x[v] denotes the states of v and its neighbours. This class is referred to as generalized cellular automata since the classical or standard cellular automata are typically defined and studied over regular graphs or grids, and the vertex functions are typically assumed to be identical. Example: Let Y be the circle graph on vertices {1, 2, 3, 4} with edges {1, 2}, {2, 3}, {3, 4} and {1, 4}, denoted Circ4. Let K = {0, 1} be the state space for each vertex and use the function nor3 : K^3 → K defined by nor3(x, y, z) = (1 + x)(1 + y)(1 + z), with arithmetic modulo 2, for all vertex functions. Then, for example, the system state (0, 1, 0, 0) is mapped to (0, 0, 0, 1) using a synchronous update. All the transitions are shown in the phase space below. If the vertex functions are applied asynchronously in the sequence specified by a word w = (w_1, w_2, ..., w_m) or permutation π = (π_1, π_2, ..., π_n) of v[Y], one obtains the class of sequential dynamical systems (SDS).[2] In this case it is convenient to introduce the Y-local maps F_i constructed from the vertex functions by F_i(x_1, ..., x_n) = (x_1, ..., x_{i-1}, f_i(x[i]), x_{i+1}, ..., x_n); that is, F_i updates only the state of vertex i and leaves all other coordinates unchanged. The SDS map F = [F_Y, w] : K^n → K^n is the function composition [F_Y, w] = F_{w(m)} ∘ F_{w(m-1)} ∘ ... ∘ F_{w(1)}. If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point. Example: Let Y be the circle graph on vertices {1, 2, 3, 4} with edges {1, 2}, {2, 3}, {3, 4} and {1, 4}, denoted Circ4. Let K = {0, 1} be the state space for each vertex and use the function nor3 : K^3 → K defined by nor3(x, y, z) = (1 + x)(1 + y)(1 + z), with arithmetic modulo 2, for all vertex functions. Using the update sequence (1, 2, 3, 4), the system state (0, 1, 0, 0) is mapped to (0, 0, 1, 0). All the system state transitions for this sequential dynamical system are shown in the phase space below.
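To make the two worked examples above concrete, the following short Python sketch (not part of the original article; vertex indices are shifted to 0-based and all names are ours) applies both the synchronous and the sequential update to the Circ4 graph with nor3 vertex functions and reproduces the stated transitions:

```python
# Minimal sketch (not from the article): the Circ4 / nor3 example above,
# with both a synchronous (generalized CA) and a sequential (SDS) update.

def nor3(x, y, z):
    # nor3(x, y, z) = (1 + x)(1 + y)(1 + z) mod 2: equals 1 only when all inputs are 0
    return ((1 + x) * (1 + y) * (1 + z)) % 2

# Circ4: closed neighbourhoods of vertices 1..4 (written 0-indexed as 0..3)
neighbourhoods = [(0, 1, 3), (0, 1, 2), (1, 2, 3), (0, 2, 3)]

def synchronous_step(state):
    """Apply every vertex function at once (generalized cellular automaton)."""
    return tuple(nor3(*(state[i] for i in nbr)) for nbr in neighbourhoods)

def sequential_step(state, order):
    """Apply the Y-local maps one vertex at a time, in the given order (SDS)."""
    s = list(state)
    for v in order:
        s[v] = nor3(*(s[i] for i in neighbourhoods[v]))
    return tuple(s)

print(synchronous_step((0, 1, 0, 0)))                # (0, 0, 0, 1), as stated above
print(sequential_step((0, 1, 0, 0), (0, 1, 2, 3)))   # (0, 0, 1, 0) for update sequence (1,2,3,4)
```

Iterating either map over all 16 states would enumerate the directed edges (x, F(x)) of the phase space referred to above.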
From, e.g., the point of view of applications it is interesting to consider the case where one or more of the components of a GDS contains stochastic elements. Motivating applications could include processes that are not fully understood (e.g. dynamics within a cell) and where certain aspects for all practical purposes seem to behave according to some probability distribution. There are also applications governed by deterministic principles whose description is so complex or unwieldy that it makes sense to consider probabilistic approximations. Every element of a graph dynamical system can be made stochastic in several ways. For example, in a sequential dynamical system the update sequence can be made stochastic. At each iteration step one may choose the update sequencewat random from a given distribution of update sequences with corresponding probabilities. The matching probability space of update sequences induces a probability space of SDS maps. A natural object to study in this regard is theMarkov chainon state space induced by this collection of SDS maps. This case is referred to asupdate sequence stochastic GDSand is motivated by, e.g., processes where "events" occur at random according to certain rates (e.g. chemical reactions), synchronization in parallel computation/discrete event simulations, and in computational paradigms described later. This specific example with stochastic update sequence illustrates two general facts for such systems: when passing to a stochastic graph dynamical system one is generally led to (1) a study of Markov chains (with specific structure governed by the constituents of the GDS), and (2) the resulting Markov chains tend to be large having an exponential number of states. A central goal in the study of stochastic GDS is to be able to derive reduced models. One may also consider the case where the vertex functions are stochastic, i.e.,function stochastic GDS. For example, RandomBoolean networksare examples of function stochastic GDS using a synchronous update scheme and where the state space isK= {0, 1}. Finiteprobabilistic cellular automata(PCA) is another example of function stochastic GDS. In principle the class of Interacting particle systems (IPS) covers finite and infinitePCA, but in practice the work on IPS is largely concerned with the infinite case since this allows one to introduce more interesting topologies on state space. Graph dynamical systems constitute a natural framework for capturing distributed systems such as biological networks and epidemics over social networks, many of which are frequently referred to as complex systems.
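As a minimal illustration of an update sequence stochastic GDS (a sketch under our own naming, not taken from the source), the following Python snippet builds on the Circ4/nor3 system above: enumerating all 24 permutation update orders for one starting state gives the exact transition probabilities of the induced Markov chain out of that state, assuming the order is drawn uniformly at random.

```python
# Minimal sketch (not from the article): an update-sequence stochastic GDS on Circ4.
# Drawing the permutation update order uniformly at random induces a Markov chain on
# the state space {0,1}^4; here we tabulate the exact successor distribution of one state.
from itertools import permutations
from collections import Counter
from fractions import Fraction

def nor3(x, y, z):
    return ((1 + x) * (1 + y) * (1 + z)) % 2

neighbourhoods = [(0, 1, 3), (0, 1, 2), (1, 2, 3), (0, 2, 3)]  # Circ4, 0-indexed

def sequential_step(state, order):
    s = list(state)
    for v in order:
        s[v] = nor3(*(s[i] for i in neighbourhoods[v]))
    return tuple(s)

start = (0, 1, 0, 0)
counts = Counter(sequential_step(start, order) for order in permutations(range(4)))
for successor, count in sorted(counts.items()):
    print(successor, Fraction(count, 24))  # transition probabilities out of `start`
```

Repeating this for every state would assemble one row of the (here 16-by-16) transition matrix at a time, which illustrates why such chains quickly become large and why reduced models are sought.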
https://en.wikipedia.org/wiki/Graph_dynamical_system
Language analysis for the determination of origin(LADO) is an instrument used inasylumcases todetermine the national or ethnic originof the asylum seeker, through an evaluation of their language profile.[why?]To this end, an interview with the asylum seeker is recorded and analysed. The analysis consists of an examination of thedialectologicallyrelevant features (e.g.accent,grammar,vocabularyandloanwords) in the speech of theasylum seeker. LADO is considered a type ofspeaker identificationbyforensic linguists.[1]LADO analyses are usually made at the request of government immigration/asylum bureaux attempting to verify asylum claims[how?], but may also be performed as part of the appeals process for claims which have been denied[why?]; they have frequently been the subject of appeals and litigation in several countries, e.g. Australia, the Netherlands and the UK.[why?] A number of established linguistic approaches are considered to be valid methods of conducting LADO, including language variation and change,[2][3]forensic phonetics,[4]dialectology, and language assessment.[5] The underlying assumption leading to government immigration and asylum bureaux's use of LADO is that a link exists between a person's nationality and the way they speak.[why?]To linguists, this assumption is flawed: instead, research supports links between thefamily and community in which a person learns their native language, and enduring features of their way of speaking it. The notion thatlinguistic socializationinto aspeech communitylies at the heart of LADO has been argued for by linguists since 2004,[6]and is now accepted by a range of government agencies (e.g. Switzerland,[7]Norway[8]), academic researchers (e.g. Eades 2009,[9]Fraser 2011,[10]Maryns 2006,[11]and Patrick 2013[12]), as well as some commercial agencies, e.g. De Taalstudio, according to Verrips 2010[13]). Since the mid-1990s, language analysis has been used to help determine the geographical origin ofasylum seekersby the governments of a growing number of countries (Reath, 2004),[14]now includingAustralia,Austria,Belgium,Canada,Finland,Germany, theNetherlands,New Zealand,Norway,Sweden,Switzerlandand theUnited Kingdom. Pilots have been conducted by the UK,Ireland, and Norway.[15]The UK legitimised the process in 2003; it has subsequently been criticised by immigration lawyers (see response by the Immigration Law Practitioners' Association[16]), and also Craig 2012[17]); and social scientists (e.g. Campbell 2013[18]), as well as linguists (e.g. Patrick 2011[19]). In theNetherlandsLADO is commissioned by the Dutch Immigration Service (IND).[20]Language analysis is used by the IND in cases where asylum seekers cannot produce valid identification documents, and, in addition, the IND sees reason to doubt the claimed origin of the asylum seeker. The IND has a specialised unit (Bureau Land en Taal, or BLT; in English, the Office for Country Information and Language Analysis, or OCILA) that carries out these analyses. Challenges to BLT analyses are provided by De Taalstudio,[21]a private company that provides language analysis and contra-expertise in LADO cases. Claims and criticisms regarding the Dutch LADO processes are discussed by Cambier-Langeveld (2010),[22]the senior linguist for BLT/OCILA, and by Verrips 2010,[23]the founder of De Taalstudio. Zwaan (2008,[24]2010[25]) reviews the legal situation. 
LADO reports are provided to governments in a number of ways: by their own regularly-employed linguists and/or freelance analysts; by independent academic experts; by commercial firms; or by a mixture of the above. In Switzerland language analysis is carried out by LINGUA, a specialized unit of the Federal Office for Migration, which both employs linguists and retains independent experts from around the world.[7]The German and Austrian bureaux commission reports primarily from experts within their own countries. The UK and a number of other countries have commercial contracts with providers such as the Swedish firms Sprakab[26]and Verified[27]both of which have carried out language analyses for UK Visas and Immigration (formerly UK Border Agency) and for the Dutch Immigration Service, as well as other countries around the world. It is widely agreed that language analysis should be done by language experts. Two basic types of practitioners commonly involved in LADO can be distinguished: trained native speakers of the language under analysis, and professional linguists specialized in the language under analysis. Usually native speaker analysts are free-lance employees who are said to be under the supervision of a qualified linguist. When such analysts lack academic training in linguistics, it has been questioned whether they should be accorded the status of 'experts' by asylum tribunals, e.g. by Patrick (2012),[28]who refers to them instead as "non-expert native speakers (NENSs)". Eades et al. (2003) note that "people who have studied linguistics to professional levels [...] have particular knowledge which is not available to either ordinary speakers or specialists in other disciplines".[29]Likewise Dikker and Verrips (2004)[30]conclude that native speakers who lack training in linguistics are not able to formulate reliable conclusions regarding the origin of other speakers of their language. The nature of the training which commercial firms and government bureaux provide to their analysts has been questioned in academic and legal arenas, but few specifics have been provided to date; see however accounts by the Swiss agency Lingua[31]and Cambier-Langeveld of BLT/OCILA,[32]as well as responses to the latter by Fraser[33]and Verrips.[34] Claims for and against the use of such native-speaker analysts, and their ability to conduct LADO satisfactorily vis-a-vis the ability of academically trained linguists, have only recently begun to be the subject of research (e.g. Wilson 2009),[35]and no consensus yet exists among linguists. While much linguistic research exists on the ability of people, including trained linguists and phoneticians and untrained native speakers, to correctly perceive, identify or label recorded speech that is played to them, almost none of the research has yet been framed in such a way that it can give clear answers to questions about the LADO context. The matter of native-speaker analysts and many other issues are subjects of ongoing litigation in asylum tribunals and appeals courts in several countries. Vedsted Hansen (2010[36]) describes the Danish situation, Noll (2010[37]) comments on Sweden, and Zwaan (2010) reviews the Dutch situation. In the UK, a 2010 Upper Tribunal (asylum) case known as 'RB',[38]supported by a 2012 Court of Appeal decision,[39]argue for giving considerable weight to LADO reports carried out by the methodology of native-speaker analyst plus supervising linguist. 
In contrast, a 2013 Scottish Court of Session decision known as M.Ab.N+K.A.S.Y.[40] found that all such reports must be weighed against the standard Practice Directions for expert reports. Lawyers in the latter case have argued that "What matters is the lack of qualification",[41] and since the Scottish court has equal standing to the Court of Appeal of England and Wales, the UK Supreme Court was petitioned to address the issues. On 5–6 March 2014, the UK Supreme Court heard an appeal[42] brought by the Home Office concerning the nature of expert linguistic evidence provided to the Home Office in asylum cases, whether expert witnesses should be granted anonymity, the weight that should be given to reports by the Swedish firm Sprakab, and related matters. Some methods of language analysis in asylum procedures have been heavily criticized by many linguists (e.g., Eades et al. 2003;[43] Arends, 2003). Proponents of the use of native-speaker analysts concede that "[earlier] LADO reports were not very satisfactory from a linguistic point of view... [while even] today's reports are still not likely to satisfy the average academic linguist".[44] Following an item on the Dutch public radio programme Argos, member of parliament De Wit of the Socialist Party presented a number of questions to the State Secretary of the Ministry of Justice regarding the reliability of LADO. The questions and the responses by the State Secretary are available online.[45]
https://en.wikipedia.org/wiki/Language_analysis_for_the_determination_of_origin
Counter-mappingis creating maps that challenge "dominant power structures, to further seemingly progressive goals".[1]Counter-mapping is used in multiple disciplines to reclaim colonized territory. Counter-maps are prolific in indigenous cultures, "counter-mapping may reify, reinforce, and extend settler boundaries even as it seeks to challenge dominant mapping practices; and still, counter-mapping may simultaneously create conditions of possibility for decolonial ways of representing space and place."[2]The term came into use in the United States whenNancy Pelusoused it in 1995 to describe the commissioning of maps by forest users inKalimantan,Indonesia, to contest government maps of forest areas that underminedindigenousinterests.[3]The resultant counter-hegemonic maps strengthen forest users' resource claims.[3]There are numerous expressions closely related to counter-mapping: ethnocartography, alternative cartography, mapping-back, counter-hegemonic mapping,deep mapping[4]and public participatory mapping.[5]Moreover, the termscritical cartography,subversive cartography,bio-regional mapping, andremappingare sometimes used interchangeably withcounter-mapping, but in practice encompass much more.[5] Whilst counter-mapping still refers to indigenous mapping, it is increasingly being applied to non-indigenous mapping in economically developed countries.[5]Such counter-mapping has been facilitated by processes ofneoliberalism,[6]and technologicaldemocratisation.[3]Examples of counter-mapping include attempts to demarcate and protect traditional territories, community mapping,public participation geographic information systems, and mapping by a relatively weak state to counter the resource claims of a stronger state.[7]The power of counter-maps to advocate policy change in abottom-upmanner led commentators to affirm that counter-mapping should be viewed as a tool ofgovernance.[8] Despite its emancipatory potential, counter-mapping has not gone without criticism. There is a tendency for counter-mapping efforts to overlook the knowledge of women, minorities, and other vulnerable, disenfranchised groups.[9]From this perspective, counter-mapping is only empowering for a small subset of society, whilst others become further marginalised.[10] Nancy Peluso, professor of forest policy, coined the term 'counter-mapping' in 1995, having examined the implementation of two forest mapping strategies inKalimantan. One set of maps belonged to state forest managers, and theinternational financial institutionsthat supported them, such as theWorld Bank. 
This strategy recognised mapping as a means of protecting local claims to territory and resources to a government that had previously ignored them.[3]The other set of maps had been created by IndonesianNGOs, who often contract international experts to assist with mapping village territories.[3]The goal of the second set of maps was to co-opt the cartographic conventions of the Indonesian state, to legitimise the claims by theDayakpeople, indigenous to Kalimantan, to the rights to forest use.[5]Counter-mappers in Kalimantan have acquiredGIStechnologies, satellite technology, and computerisedresource managementtools, consequently making the Indonesian state vulnerable to counter-maps.[3]As such, counter-mapping strategies in Kalimantan have led to successful community action to block, and protest against, oil palm plantations and logging concessions imposed by the central government.[3] It must, however, be recognised that counter-mapping projects existed long before coinage of the term.[5]Counter-maps are rooted in map art practices that date to the early 20th century; in themental mapsmovement of the 1960s; in indigenous and bioregional mapping; and parish mapping.[11] In 1985, the charityCommon Groundlaunched theParish Maps Project, abottom-upinitiative encouraging local people to map elements of the environment valued by their parish.[12]Since then, more than 2,500 English parishes have made such maps.[11]Parish mapping projects aim to put every local person in an 'expert' role.[13]Clifford[14]exemplifies this notion, affirming: "making a parish map is about creating a community expression of values, and about beginning to assert ideas for involvement. It is about taking the place in your own hands". The final map product is typically an artistic artefact, usually painted, and often displayed in village halls or schools.[15]By questioning the biases of cartographic conventions and challenging predominant power effects of mapping,[16]The Parish Maps Project is an early example of what Peluso[3]went on to term 'counter-mapping' The development of counter-mapping can be situated within theneoliberalpolitical-economic restructuring of the state.[17]Prior to the 1960s, equipping a map-making enterprise was chiefly the duty of a single agency, funded by the national government.[18]In this sense, maps have conventionally been the products of privileged knowledges.[19]However, processes ofneoliberalism, predominantly since the late 1970s, have reconfigured the state's role in the cartographic project.[6]Neoliberalism denotes an emphasis on markets and minimal states, whereby individual choice is perceived to have replaced the mass-production of commodities.[20]The fact that citizens are now performing cartographic functions that were once exclusively state-controlled can be partially explained through a shift from "roll-back neoliberalism", in which the state dismantled some of its functions, to "roll-out neoliberalism", in which new modes of operating have been constructed.[21]In brief, the state can be seen to have "hollowed out" and delegated some of its mapping power to citizens.[22] Governmentalityrefers to a particular form of state power that is exercised when citizens self-discipline by acquiescing to state knowledge.[23]Historically,cartographyhas been a fundamental governmentality strategy,[24]a technology of power, used for surveillance and control.[25]Competing claimants and boundaries made no appearance on state-led maps.[25]This links toFoucault's[26]notion of "subjugated 
knowledges" - ones that did not rise to the top, or were disqualified.[24]However, through neoliberalising processes, the state has retracted from performing some of its cartographic functions.[17]Consequently, rather than being passive recipients of top-down map distribution, people now have the opportunity to claim sovereignty over the mapping process.[27]In this new regime of neoliberal cartographic governmentality the "insurrection of subjugated knowledges" occurs,[26]as counter-mapping initiatives incorporate previously marginalised voices. In response to technological change, predominantly since the 1980s, cartography has increasingly been democratised.[28]The wide availability of high-quality location information has enabled mass-market cartography based onGlobal Positioning Systemreceivers, home computers, and the internet.[29]The fact that civilians are using technologies which were once elitist led Brosiuset al..[30]to assert that counter-mapping involves "stealing the master's tools". Nevertheless, numerous early counter-mapping projects successfully utilised manual techniques, and many still use them. For instance, in recent years, the use of simple sketch mapping approaches has been revitalised, whereby maps are made on the ground, using natural materials.[31]Similarly, the use of scale model constructions and felt boards, as means of representing cartographic claims of different groups, have become increasingly popular.[9]Consequently,Woodet al.[11]assert that counter-mappers can "make gateau out of technological crumbs". In recent years,Public Participation Geographical Information Systems(PPGIS) have attempted to take the power of the map out of the hands of the cartographic elite, putting it into the hands of the people. For instance, Kyem[32]designed a PPGIS method termed Exploratory Strategy for Collaboration, Management, Allocation, and Planning (ESCMAP). The method sought to integrate the concerns and experiences of three rural communities in the Ashanti Region of Ghana into officialforest managementpractices.[32]Kyem[32]concluded that, notwithstanding the potential of PPGIS, it is possible that the majority of the rich and powerful people in the area would object to some of the participatory uses ofGIS. For example, loggers in Ghana affirmed that the PPGIS procedures were too open and democratic.[32]Thus, despite its democratising potential, there are barriers to its implementation. More recently,Woodet al..[11]disputed the notion of PPGIS entirely, affirming that it is "scarcelyGIS, intensely hegemonic, hardly public, and anything but participatory". Governancemakes problematic state-centric notions of regulation, recognising that there has been a shift to power operating across severalspatial scales.[33]Similarly, counter-mapping complicates state distribution ofcartography, advocatingbottom-upparticipatory mapping projects (seeGIS and environmental governance). Counter-mapping initiatives, often without state assistance, attempt to exert power. As such, counter-mapping conforms toJessop's[22]notion of "governance without government". Another characteristic of governance is its "purposeful effort to steer, control or manage sectors or facets of society" towards a common goal.[34]Likewise, as maps exude power and authority,[35]they are a trusted medium[36]with the ability to 'steer' society in a particular direction. 
In brief,cartography, once the tool of kings and governments,[37]is now being used as a tool of governance - to advocate policy change from thegrassroots.[8]The environmental sphere is one context in which counter-mapping has been utilised as a governance tool.[8] In contrast to expert knowledges, lay knowledges are increasingly valuable to decision-makers, in part due to the scientific uncertainty surrounding environmental issues.[38]Participatorycounter-mapping projects are an effective means of incorporating lay knowledges[39]into issues surroundingenvironmental governance. For instance, counter-maps depicting traditional use of areas now protected for biodiversity have been used to allow resource use, or to promote public debate about the issue, rather than forcing relocation.[8]For example, theWorld Wide Fund for Natureused the results of counter-mapping to advocate for the reclassification of several strictly protected areas into Indonesian national parks, including Kayan Mentarang and Gunung Lorentz.[8]The success of such counter-mapping efforts led Alcorn[8]to affirm thatgovernance(grassrootsmapping projects), rather than government (top-downmap distribution), offers the best hope for goodnatural resource management. In short, it can be seen that "maps are powerful political tools in ecological and governance discussions".[8] Numerous counter-mapping types exist, for instance: protest maps, map art, counter-mapping for conservation, andPPGIS. In order to emphasise the wide scope of what has come to be known as counter-mapping, three contrasting counter-mapping examples are elucidated in this section: indigenous counter-mapping, community mapping, and state counter-mapping, respectively. Counter-mapping has been undertaken predominantly in under-represented communities.[15]Indigenous peoplesare increasingly turning toparticipatorymapping, appropriating both the state's techniques and manner of representation.[40]Counter-mapping is a tool for indigenous identity-building,[41]and for bolstering the legitimacy of customary resource claims.[3]The success of counter-mapping in realising indigenous claims can be seen through Nietschmann's[42]assertion: More indigenous territory has been claimed by maps than by guns. And more Indigenous territory can be reclaimed and defended by maps than by guns. The power of indigenous counter-mapping can be exemplified through the creation ofNunavut. In 1967,Frank Arthur Calderand the Nisaga'a Nation Tribal Council brought an action against theProvince of British Columbiafor a declaration thataboriginaltitle to specified land had not been lawfully extinguished. In 1973, theCanadian Supreme Courtfound that there was, in fact, an aboriginal title. The Canadian government attempted to extinguish such titles by negotiatingtreatieswith the people who had not signed them.[11]As a first step, theInuit Tapirisat of CanadastudiedInuitland occupancy in the Arctic, resulting in the publication of theInuit Land Use and Occupancy Project.[43]Diverse interests, such as those of hunters, trappers, fishermen and berry-pickers mapped out the land they had used during their lives.[11]As Usher[44]noted: We were no longer mapping the 'territories' of Aboriginal people based on the cumulative observations of others of where they were…but instead, mapping the Aboriginal peoples’ own recollections of their own activities. 
These maps played a fundamental role in the negotiations that enabled the Inuit to assert an aboriginal title to the 2 million km2in Canada, today known as Nunavut.[11]Counter-mapping is a tool by which indigenous groups can re-present the world in ways which destabilise dominant representations.[45] Indigenous peoples have begun remapping areas of the world that were once occupied by their ancestors as an act of reclamation of land stolen from them by country governments. Indigenous peoples have begun this process all over the world from the Indigenous peoples from the United States, Aboriginal peoples from Australia, and Amazonian people from Brazil. The people of the lands have begun creating their own maps of the land in terms of the borders of the territory and pathways around the territory. When Native peoples first began this process it was done by hand, but presently GPS systems and other technological mapping devices are used.[46]Indigenous maps are reconceptualizing the "average" map and creatively representing space as well as the culture of those who live in the space. Indigenous people are creating maps that are for their power and social benefit instead of the ones forced on them through different titling, and description. Indigenous peoples are also creating maps to adjust to the contamination and pollution that is present In their land. Specifically in Peru, Indigenous peoples are using mapping to identify problem areas and innovating and creating strategies to combat these risks for the future.[47] White colonists saw land as property and a commodity to be possessed. As a result, as settlers grew in numbers and journeyed west, land was claimed and sold for profit. White colonists would “develop” the land and take ownership of it, believing the land was theirs to own. Indigenous peoples, on the other hand, saw themselves as connected with the land spiritually and that the land, instead owned them. Land to Aboriginal people is a major part of their identity and spirituality. They saw the land as being sacred and needing to be protected. Indigenous peoples believe it is their responsibility to take care of the land. As Marion Kickett states in her research, “Land is very important to Aboriginal people with the common belief of 'we don't own the land, the land owns us'. Aboriginal people have always had a spiritual connection to their land...[48]" These differing perspectives on land caused many disputes during the era of Manifest Destiny and as white settler populations began to increase and move into Indigenous peoples’ territory. The Indigenous people believed they were to serve the land while white colonists believed the land should serve them. As a result, when the two sides came in contact, they disputed over how to "claim" land. The height of this conflict began to occur duringManifest Destinyas the white colonist population began to grow and move westward into more parts of Indigenous lands and communities. Maps represent and reflect how an individual or society names and projects themselves onto nature, literally and symbolically. Mapping, while seemingly objective, is political and a method of control on territory.[46]Mapmaking has thus both socio-cultural (myth-making) and technical (utilitarian and economic) functions and traditions.[49]The difference between boundaries and territories made by the White colonists and Indigenous people were vastly different, and expressed their views on the land and nature. 
Indigenous peoples' territory often ended at rivers, mountains, and hills or were defined by relationships between different tribes, resources, and trade networks. The relationships between tribes would determine the access to the land and its resources. Instead of the borders being hard edges like the United States’, border on Indigenous peoples’ lands were more fluid and would change based on marriages between chiefs and their family members, hunting clans, and heredity. In Indigenous maps the landmarks would be drawn on paper and in some cases described. Detailed knowledge of the thickness of ice, places of shelter and predators were placed in maps to inform the user for what to look for when in the territory. Maps made by White colonists in America were first based on populations, created territories based on the edges of civilization. After the creation of the United States government, state land was designated by Congress and intended to be given equally by latitude and longitudinal coordinates. The ending of railroad tracks and crossings also designated the ending of one state to another, creating a fence-like boundary. In a special case, after the acquisition of theLouisiana Purchase, the United States had to decide between the territory where slavery was legal and where it was not. TheMissouri Compromisewas birthed as a result and a boundary line was created at the longitude and latitude lines of 36’30”. The states were documented by their coordinates and borders were made at the numbered locations. These numbered locations would stretch for miles and encompass all in that territory even if it belonged to Indigenous peoples’. That is often how land would be stolen from Indigenous peoples. The land that would be "claimed" by the United States Government would stretch across Indigenous lands without consideration of their borders. Indigenous peoples' lands were absorbed by the borders of America's newly mapped states and were forced out as a result. Their livelihoods and mythology tied to the land was also destroyed. White colonists claimed the land for their own and Indigenous peoples were no longer allowed to occupy the space. Another way was the differences in the way each group mapped the land. The United States Government would not recognize a Tribes territory without a map and most tribes did not have maps that were in the style of European maps, therefore they were ignored. Community mapping can be defined as "local mapping, produced collaboratively, by local people and often incorporating alternative local knowledge".[15]OpenStreetMap is an example of a community mapping initiative, with the potential to counter the hegemony of state-dominated map distribution.[50] OpenStreetMap(OSM), a citizen-led spatial data collection website, was founded bySteve Coastin 2004. Data are collected from diverse public domain sources; of whichGPStracks are the most important, collected by volunteers with GPS receivers.[15]As of 10 January 2011[update]there were 340,522 registered OSM users, who had uploaded 2.121 billion GPS points onto the website.[51]The process of map creation explicitly relies upon sharing and participation; consequently, every registered OSM user can edit any part of the map. 
Moreover, 'map parties' – social events which aim to fill gaps in coverage – help foster a community ethos.[52]In short, thegrassrootsOSM project can be seen to represent aparadigm shiftin who creates and shares geographic information - from the state, to society.[53]However, rather than countering the state-dominated cartographic project, some commentators have affirmed that OSM merely replicates the 'old' socio-economic order.[54]For instance, Haklay[54]affirmed that OSM users in the United Kingdom tend not to map council estates; consequently, middle-class areas are disproportionately mapped. Thus, in opposition to notions that OSM is a radical cartographic counter-culture,[55]are contentions that OSM "simply recreates a mirror copy of existingtopographic mapping".[56] What has come to be known as counter-mapping is not limited to the activities ofnon-stateactors within a particular nation-state; relatively weak states also engage in counter-mapping in an attempt to challenge other states.[57] East Timor's ongoing effort to gain control of gas and oil resources from Australia, which it perceives at its own, is a form of counter-mapping. This dispute involves a cartographic contestation of Australia's mapping of the seabed resources between the two countries.[57]As Nevins contends: whilst Australia's map is based on the status quo – a legacy of a 1989 agreement between Australia and the Indonesian occupier of East Timor at that time, East Timor's map represents an enlarged notion of what its sea boundaries should be, thereby entailing a redrawing of the map.[57]This form of counter-mapping thus represents a claim by a relatively weak state, East Timor, to territory and resources that are controlled by a stronger state, Australia.[5]However, Nevins notes that there is limited potential of realising a claim through East Timor's counter-map: counter-mapping is an effective strategy only when combined with broader legal and political strategies.[57] Counter-mapping's claim to incorporate counter-knowledges, and thereby empower traditionally disempowered people, has not gone uncontested.[58]A sample of criticisms leveled at counter-mapping: To summarise, whilst counter-mapping has the potential to transform map-making from "a science of princes",[63]the investment required to create a map with the ability to challenge state-produced cartography means that counter-mapping is unlikely to become a "science of the masses".[3]
https://en.wikipedia.org/wiki/Counter-mapping
Ininformation science, anontologyencompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or alldomains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to asapplied ontology.[1] Everyacademic disciplineor field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain,interoperabilityof data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain acontrolled vocabularyofjargonbetween each of their languages.[2]For instance, thedefinition and ontology of economicsis a primary concern inMarxist economics,[3]but also in othersubfields of economics.[4]An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining whatcapital assetsare at risk and by how much (seerisk management). What ontologies in bothinformation scienceandphilosophyhave in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems ofontology engineering(e.g.,QuineandKripkein philosophy,SowaandGuarinoin information science),[5]and debates concerning to what extentnormativeontology is possible (e.g.,foundationalismandcoherentismin philosophy,BFOandCycin artificial intelligence). Applied ontologyis considered by some as a successor to prior work in philosophy. However many current efforts are more concerned with establishingcontrolled vocabulariesof narrow domains than with philosophicalfirst principles, or with questions such as the mode of existence offixed essencesor whether enduring objects (e.g.,perdurantismandendurantism) may be ontologically more primary thanprocesses.Artificial intelligencehas retained considerable attention regardingapplied ontologyin subfields likenatural language processingwithinmachine translationandknowledge representation, but ontology editors are being used often in a range of fields, including biomedical informatics,[6]industry.[7]Such efforts often use ontology editing tools such asProtégé.[8] Ontologyis a branch ofphilosophyand intersects areas such asmetaphysics,epistemology, andphilosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality.Metaphysicsdeals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those betweenparticularsanduniversals,intrinsic and extrinsic properties, oressenceandexistence. Metaphysics has been an ongoing topic of discussion since recorded history. Thecompoundwordontologycombinesonto-, from theGreekὄν,on(gen.ὄντος,ontos), i.e. "being; that which is", which is thepresentparticipleof theverbεἰμί,eimí, i.e. "to be, I am", and-λογία,-logia, i.e. 
"logical discourse", seeclassical compoundsfor this type of word formation.[9][10] While theetymologyis Greek, the oldest extant record of the word itself, theNeo-Latinformontologia, appeared in 1606 in the workOgdoas ScholasticabyJacob Lorhard(Lorhardus) and in 1613 in theLexicon philosophicumbyRudolf Göckel(Goclenius).[11] The first occurrence in English ofontologyas recorded by theOED(Oxford English Dictionary, online edition, 2008) came inArcheologia Philosophica NovaorNew Principles of PhilosophybyGideon Harvey. Since the mid-1970s, researchers in the field ofartificial intelligence(AI) have recognized thatknowledge engineeringis the key to building large and powerful AI systems[citation needed]. AI researchers argued that they could create new ontologies ascomputational modelsthat enable certain kinds ofautomated reasoning, which was onlymarginally successful. In the 1980s, the AI community began to use the termontologyto refer to both a theory of a modeled world and a component ofknowledge-based systems. In particular, David Powers introduced the wordontologyto AI to refer to real world or robotic grounding,[12][13]publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14]Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15] In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" byTom Gruber[16]usedontologyas a technical term incomputer scienceclosely related to earlier idea ofsemantic networksandtaxonomies. Gruber introduced the term asa specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17] Attempting to distance ontologies from taxonomies and similar efforts inknowledge modelingthat rely onclassesandinheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited toconservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms thatdoconstrain the possible interpretations for the defined terms.[16] Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence.[18] As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19] Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. 
Most ontologies describe individuals (instances), classes (concepts), attributes and relations. A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the wordcardhas many different meanings. An ontology about the domain ofpokerwould model the "playing card" meaning of the word, while an ontology about the domain ofcomputer hardwarewould model the "punched card" and "video card" meanings. Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed]. At present, merging ontologies that are not developed from a commonupper ontologyis a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20]but this area of research is still ongoing, and it is a recent event to see the issue sidestepped by having multiple domain ontologies using the same upper ontology like theOBO Foundry. An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs acore glossarythat overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use includeBFO,BORO method,Dublin Core,GFO,Cyc,SUMO,UMBEL, andDOLCE.[21][22]WordNethas been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23] TheGellishontology is an example of a combination of an upper and a domain ontology. A survey of ontology visualization methods is presented by Katifori et al.[24]An updated survey of ontology visualization methods and tools was published by Dudás et al.[25]The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al.[26]A visual language for ontologies represented inOWLis specified by theVisual Notation for OWL Ontologies (VOWL).[27] Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28]It is a subfield ofknowledge engineeringthat studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30] Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. 
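As a toy illustration of the structures just described (classes, relations, and the way a word such as "card" acquires different meanings in different domain ontologies), the following plain-Python sketch expresses two small domain hierarchies that share an upper class. It is not written in a real ontology language such as OWL, and all class names and the shared upper class are illustrative assumptions.

```python
# Minimal sketch (not a real ontology language such as OWL): two domain ontologies
# that both use the term "card" with different meanings, expressed as simple
# is-a hierarchies, plus a transitive subsumption check. All names are illustrative.

poker_ontology = {           # domain: poker
    "PlayingCard": "GameComponent",
    "GameComponent": "Artifact",
}
hardware_ontology = {        # domain: computer hardware
    "VideoCard": "ExpansionCard",
    "PunchedCard": "StorageMedium",
    "ExpansionCard": "HardwareComponent",
    "StorageMedium": "HardwareComponent",
    "HardwareComponent": "Artifact",   # shared upper-ontology class
}

def is_a(ontology, cls, ancestor):
    """Transitive subsumption: walk the is-a chain from cls upwards."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ontology.get(cls)
    return False

print(is_a(hardware_ontology, "VideoCard", "Artifact"))       # True
print(is_a(poker_ontology, "PlayingCard", "Artifact"))        # True
print(is_a(hardware_ontology, "VideoCard", "GameComponent"))  # False: different domain senses
```

The shared "Artifact" class plays the role of the common upper ontology mentioned above: it is what allows the two otherwise incompatible domain ontologies to be merged with less effort.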
Known challenges with ontology engineering include: Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages. Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.[31] Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32] Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come to accept certain truths, individuals conducting academic research must understand what allows them to begin theory building. Put simply, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed] An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web. The development of ontologies has led to the emergence of services providing lists or directories of ontologies, called ontology libraries. The following are libraries of human-selected ontologies. The following are both directories and search engines. In general, ontologies can be used beneficially in several fields.
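As a very rough sketch of the simplest step in ontology learning mentioned above, extracting candidate domain terms from text, the following Python snippet ranks word frequencies after removing a few stop words. The stop-word list and the sample sentence are placeholders, and real systems use far more sophisticated linguistic processing.

```python
# Toy illustration (not a production method): the simplest form of ontology
# learning is harvesting candidate domain terms from text, e.g. by ranking
# word frequencies after removing common stop words.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "that", "with", "other"}

def candidate_terms(text: str, top_n: int = 5):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(top_n)

sample = ("The ontology describes genes and proteins. Each gene encodes a protein, "
          "and proteins interact with other proteins in pathways.")
print(candidate_terms(sample))  # e.g. [('proteins', 3), ('genes', 1), ...]
```

A candidate list like this would then be reviewed by a domain expert and organised into classes and relations with an ontology editor.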
https://en.wikipedia.org/wiki/Domain_ontology
Mill's methods are five methods of induction described by philosopher John Stuart Mill in his 1843 book A System of Logic.[1][2] They are intended to establish a causal relationship between two or more groups of data by analyzing their respective differences and similarities. If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree is the cause (or effect) of the given phenomenon. For a property to be a necessary condition, it must always be present if the effect is present. Since this is so, we are interested in looking at cases where the effect is present and taking note of which properties, among those considered to be 'possible necessary conditions', are present and which are absent. Obviously, any properties which are absent when the effect is present cannot be necessary conditions for the effect. This method is also referred to more generally within comparative politics as the most different systems design. Symbolically, the method of agreement can be represented as: To further illustrate this concept, consider two structurally different countries. Country A is a former colony, has a centre-left government, and has a federal system with two levels of government. Country B has never been a colony, has a centre-left government and is a unitary state. One factor that both countries have in common, the dependent variable in this case, is that they have a system of universal health care. Comparing the factors known about the countries above, a comparative political scientist would conclude that the government sitting on the centre-left of the spectrum is the independent variable which causes a system of universal health care, since it is the only one of the factors examined which holds constant between the two countries, and the theoretical backing for that relationship is sound; social democratic (centre-left) policies often include universal health care. If an instance in which the phenomenon under investigation occurs, and an instance in which it does not occur, have every circumstance save one in common, that one occurring only in the former, then the circumstance in which alone the two instances differ is the effect, or cause, or an indispensable part of the cause, of the phenomenon. This method is also known more generally as the most similar systems design within comparative politics. As an example of the method of difference, consider two similar countries. Country A has a centre-right government, a unitary system and was a former colony. Country B has a centre-right government, a unitary system but was never a colony. The difference between the countries is that Country A readily supports anti-colonial initiatives, whereas Country B does not. The method of difference would identify the independent variable as the status of each country as a former colony or not, with the dependent variable being support for anti-colonial initiatives. This is because, out of the two similar countries compared, the difference between the two is whether or not they were formerly a colony. This then explains the difference in the values of the dependent variable, with the former colony being more likely to support decolonization than the country with no history of being a colony.
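The symbolic tables that the phrases "can be represented as" refer to did not survive extraction. A standard schematic reconstruction of the first two canons, using capital letters for antecedent circumstances and lower-case letters for observed phenomena, is sketched below (LaTeX, for illustration only):

```latex
% A reconstruction of the standard schematic presentation; not the original tables.
\[
\begin{array}{ll}
\textbf{Method of agreement:} & \\
A\;B\;C\;D & \text{occur together with } w\;x\;y\;z \\
A\;E\;F\;G & \text{occur together with } w\;t\;u\;v \\
\hline
\multicolumn{2}{l}{\text{Therefore, } A \text{ is the cause (or the effect) of } w.} \\[1ex]
\textbf{Method of difference:} & \\
A\;B\;C\;D & \text{occur together with } w\;x\;y\;z \\
\phantom{A\;}B\;C\;D & \text{occur together with } \phantom{w\;}x\;y\;z \\
\hline
\multicolumn{2}{l}{\text{Therefore, } A \text{ is the cause, the effect, or an indispensable part of the cause, of } w.}
\end{array}
\]
```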
If two or more instances in which the phenomenon occurs have only one circumstance in common, while two or more instances in which it does not occur have nothing in common save the absence of that circumstance; the circumstance in which alone the two sets of instances differ, is the effect, or cause, or a necessary part of the cause, of the phenomenon. Also called the "Joint Method of Agreement and Difference", this principle is a combination of two methods of agreement. Despite the name, it is weaker than the direct method of difference and does not include it. Symbolically, the Joint method of agreement and difference can be represented as: Subduct[3]from any phenomenon such part as is known by previous inductions to be the effect of certain antecedents, and the residue of the phenomenon is the effect of the remaining antecedents. If a range of factors are believed to cause a range of phenomena, and we have matched all the factors, except one, with all the phenomena, except one, then the remaining phenomenon can be attributed to the remaining factor. Symbolically, the Method of Residue can be represented as: Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation. If across a range of circumstances leading to a phenomenon, some property of the phenomenon varies in tandem with some factor existing in the circumstances, then the phenomenon can be associated with that factor. For instance, suppose that various samples of water, each containing bothsaltandlead, were found to be toxic. If the level of toxicity varied in tandem with the level of lead, one could attribute the toxicity to the presence of lead. Symbolically, the method of concomitant variation can be represented as (with ± representing a shift): Unlike the preceding four inductive methods, the method of concomitant variation doesn't involve theelimination of any circumstance. Changing the magnitude of one factor results in the change in the magnitude of another factor.
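The remaining canons can be reconstructed in the same schematic style (again a reconstruction, not the original tables; capital letters are antecedents, lower-case letters phenomena, and ± marks a shift in magnitude as noted above):

```latex
% A reconstruction of the standard schematics for the remaining canons.
\[
\begin{array}{ll}
\textbf{Joint method of agreement and difference:} & \\
A\;B\;C & \text{occur together with } x\;y\;z \\
A\;D\;E & \text{occur together with } x\;v\;w \\
\phantom{A\;}B\;C & \text{occur without } x \\
\hline
\multicolumn{2}{l}{\text{Therefore, } A \text{ is the cause, the effect, or a necessary part of the cause, of } x.} \\[1ex]
\textbf{Method of residue:} & \\
A\;B\;C & \text{occur together with } x\;y\;z \\
B & \text{is known to be the cause of } y \\
C & \text{is known to be the cause of } z \\
\hline
\multicolumn{2}{l}{\text{Therefore, } A \text{ is the cause of } x.} \\[1ex]
\textbf{Method of concomitant variation:} & \\
A\;B\;C & \text{occur together with } x\;y\;z \\
A^{\pm}\;B\;C & \text{occur together with } x^{\pm}\;y\;z \\
\hline
\multicolumn{2}{l}{\text{Therefore, } A \text{ and } x \text{ are causally connected.}}
\end{array}
\]
```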
https://en.wikipedia.org/wiki/Mill%27s_methods
A program transformation is any operation that takes a computer program and generates another program. In many cases the transformed program is required to be semantically equivalent to the original, relative to a particular formal semantics; in fewer cases the transformations produce programs that differ semantically from the original in predictable ways.[1] While transformations can be performed manually, it is often more practical to use a program transformation system that applies specifications of the required transformations. Program transformations may be specified as automated procedures that modify compiler data structures (e.g. abstract syntax trees) representing the program text, or they may be specified more conveniently using patterns or templates representing parameterized source code fragments. A practical requirement for source code transformation systems is that they be able to process programs written in a programming language effectively. This usually requires integrating a full front end for the programming language of interest, including source code parsing, building internal program representations of code structures and the meaning of program symbols, useful static analyses, and regeneration of valid source code from transformed program representations. Building and integrating adequate front ends for conventional languages (Java, C++, PHP, etc.) may be as difficult as building the program transformation system itself, because of the complexity of such languages. To be widely useful, a transformation system must be able to handle many target programming languages and must provide some means of specifying such front ends. A generalisation of semantic equivalence is the notion of program refinement: one program is a refinement of another if it terminates on all the initial states for which the original program terminates, and for each such state it is guaranteed to terminate in a possible final state for the original program. In other words, a refinement of a program is more defined and more deterministic than the original program. If two programs are refinements of each other, then the programs are equivalent.
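As a minimal illustration of a semantics-preserving source-to-source transformation, the sketch below folds constant integer arithmetic in a Python abstract syntax tree using the standard ast module. It is a toy rewrite for illustration, not a model of any particular transformation system.

```python
# Toy semantics-preserving program transformation: fold constant
# integer additions and multiplications in a Python AST.
import ast

class ConstantFolder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and isinstance(node.left.value, int) and isinstance(node.right.value, int)):
            if isinstance(node.op, ast.Add):
                return ast.copy_location(ast.Constant(node.left.value + node.right.value), node)
            if isinstance(node.op, ast.Mult):
                return ast.copy_location(ast.Constant(node.left.value * node.right.value), node)
        return node

tree = ast.parse("x = (2 + 3) * 4 + y")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))  # x = 20 + y
```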
https://en.wikipedia.org/wiki/Program_transformation
Basic English(abackronymforBritish American Scientific International and Commercial English)[1]is acontrolled languagebased on standardEnglish, but with a greatly simplifiedvocabularyandgrammar. It was created by the linguist and philosopherCharles Kay Ogdenas aninternational auxiliary language, and as an aid for teachingEnglish as a second language. It was presented in Ogden's 1930 bookBasic English: A General Introduction with Rules and Grammar. The first work on Basic English was written by two Englishmen,Ivor Richardsof Harvard University andCharles Kay Ogdenof the University of Cambridge in England. The design of Basic English drew heavily on the semiotic theory put forward by Ogden and Richards in their 1923 bookThe Meaning of Meaning.[2] Ogden's Basic, and the concept of a simplified English, gained its greatest publicity just after theAlliedvictory in World War II as a means for world peace. He was convinced that the world needed to gradually eradicateminority languagesand use as much as possible only one: English, in either a simple or complete form.[3] Although Basic English was not built into a program, similar simplifications have been devised for various international uses. Richards promoted its use in schools in China.[4]It has influenced the creation ofVoice of America'sLearning Englishfor news broadcasting, andSimplified Technical English, another English-based controlled language designed to write technical manuals. What survives of Ogden's Basic English is the basic 850-word list used as the beginner's vocabulary of the English language taught worldwide, especially in Asia.[5] Ogden tried to simplify English while keeping it normal for native speakers, by specifying grammar restrictions and acontrolled small vocabularywhich makes an extensive use ofparaphrasing. Most notably, Ogden allowed only 18 verbs, which he called "operators". His "General Introduction" says, "There are no 'verbs' in Basic English",[verify]with the underlying assumption that, as noun use in English is very straightforward but verb use/conjugation is not, the elimination of verbs would be a welcome simplification.[note 1] What the World needs most is about 1,000 more dead languages—and one more alive. Ogden's word lists include onlyword roots, which in practice are extended with the defined set of affixes and the full set of forms allowed for any available word (noun, pronoun, or the limited set of verbs).[note 2]The 850 core words of Basic English are found in Wiktionary'sBasic English word list. This core is theoretically enough for everyday life. However, Ogden prescribed that any student should learn an additional 150-word list for everyday work in some particular field, by adding a list of 100 words particularly useful in a general field (e.g., science, verse, business), along with a 50-word list from a more specialised subset of that general field, to make abasic 1000-wordvocabulary for everyday work and life. Moreover, Ogden assumed that any student should already be familiar with (and thus may only review) a core subset of around 200 "international" words.[6]Therefore, a first-level student should graduate with a core vocabulary of around 1200 words. A realistic general core vocabulary could contain around 2000 words (the core 850 words, plus 200 international words, and 1000 words for the general fields of trade, economics, and science). It is enough for a "standard" English level.[7][8]This 2000 word vocabulary represents "what any learner should know". 
At this level students could start to move on their own. Ogden'sBasic English 2000 word listand Voice of America'sSpecial English 1500 word listserve as dictionaries for theSimple English Wikipedia. Basic English includes a simple grammar for modifying or combining its 850 words to talk about additional meanings (morphological derivationorinflection). The grammar is based on English, but simplified.[9] Like allinternational auxiliary languages(or IALs), Basic English may be criticised as inevitably based on personal preferences, and is thus, paradoxically, inherently divisive.[10]Moreover, like all natural-language-based IALs, Basic is subject to criticism as unfairly biased towards the native speaker community.[note 3] As a teaching aid forEnglish as a second language, Basic English has been criticised for the choice of the core vocabulary and for its grammatical constraints.[note 4] In 1944,readabilityexpertRudolf Fleschpublished an article inHarper's Magazine, "How Basic is Basic English?" in which he said, "It's not basic, and it's not English." The essence of his complaint is that the vocabulary is too restricted, and, as a result, the text ends up being awkward and more difficult than necessary. He also argues that the words in the Basic vocabulary were arbitrarily selected, and notes that there had been no empirical studies showing that it made language simpler.[11] In his 1948 paper "A Mathematical Theory of Communication",Claude Shannoncontrasted the limited vocabulary of Basic English withJames Joyce'sFinnegans Wake, a work noted for a wide vocabulary. Shannon notes that the lack of vocabulary in Basic English leads to a very high level ofredundancy, whereas Joyce's large vocabulary "is alleged to achieve a compression of semantic content".[12] In the novelThe Shape of Things to Come, published in 1933,H. G. Wellsdepicted Basic English as thelingua francaof a new elite that after a prolonged struggle succeeds in uniting the world and establishing atotalitarianworld government. In the future world of Wells' vision, virtually all members of humanity know this language. From 1942 to 1944,George Orwellwas a proponent of Basic English, but in 1945, he became critical ofuniversal languages. Basic English later inspired his use ofNewspeakinNineteen Eighty-Four.[13] Evelyn Waughcriticized his own 1945 novelBrideshead Revisited, which he had previously called his magnum opus, in the preface of the 1959 reprint: "It [World War II] was a bleak period of present privation and threatening disaster—the period ofsoya beansand Basic English—and in consequence the book is infused with a kind of gluttony, for food and wine, for the splendours of the recent past, and for rhetorical and ornamental language that now, with a full stomach, I find distasteful."[14] In his story "Gulf", science fiction writerRobert A. Heinleinused aconstructed languagecalledSpeedtalk, in which every Basic English word is replaced with a singlephoneme, as an appropriate means of communication for a race of genius supermen.[15] TheLord's Prayerhas been often used for an impressionistic language comparison: Our Father in heaven,may your name be kept holy.Let your kingdom come.Let your pleasure be done,as in heaven, so on earth.Give us this day bread for our needs.And make us free of our debts,as we have made free those who are in debt to us.And let us not be put to the test,but keep us safe from the Evil One. 
Our Father in heaven,hallowed be your name.Your kingdom come.Your will be done,on earth as it is in heaven.Give us this day our daily bread.And forgive us our debts,as we also have forgiven our debtors.And do not bring us to the time of trial,but rescue us from the evil one.
https://en.wikipedia.org/wiki/Basic_English
Incomputational complexity theory, a computational problemHis calledNP-hardif, for every problemLwhich can be solved innon-deterministic polynomial-time, there is apolynomial-time reductionfromLtoH. That is, assuming a solution forHtakes 1 unit time,H's solution can be used to solveLin polynomial time.[1][2]As a consequence, finding a polynomial time algorithm to solve a single NP-hard problem would give polynomial time algorithms for all the problems in the complexity classNP. As it is suspected, but unproven, thatP≠NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist.[3][4] A simple example of an NP-hard problem is thesubset sum problem. Informally, ifHis NP-hard, then it is at least as difficult to solve as the problems inNP. However, the opposite direction is not true: some problems areundecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P=NP).[5] Adecision problemHis NP-hard when for every problemLin NP, there is apolynomial-time many-one reductionfromLtoH.[1]: 80 Another definition is to require that there be a polynomial-time reduction from anNP-completeproblemGtoH.[1]: 91As any problemLin NP reduces in polynomial time toG,Lreduces in turn toHin polynomial time so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includessearch problemsoroptimization problems. If P ≠ NP, then NP-hard problems could not be solved in polynomial time. Some NP-hard optimization problems can be polynomial-timeapproximatedup to some constant approximation ratio (in particular, those inAPX) or even up to any approximation ratio (those inPTASorFPTAS). There are many classes of approximability, each one enabling approximation up to a different level.[6] AllNP-completeproblems are also NP-hard (seeList of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as thetravelling salesman problem—is NP-hard.[7]Thesubset sum problemis another example: given a set of integers, does any non-empty subset of them add up to zero? That is adecision problemand happens to be NP-complete. There are decision problems that areNP-hardbut notNP-completesuch as thehalting problem. That is the problem which asks "given a program and its input, will it run forever?" That is ayes/noquestion and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, theBoolean satisfiability problemcan be reduced to the halting problem by transforming it to the description of aTuring machinethat tries alltruth valueassignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not inNPsince all problems in NP are decidable in a finite number of operations, but the halting problem, in general, isundecidable. There are also NP-hard problems that are neitherNP-completenorUndecidable. For instance, the language oftrue quantified Boolean formulasis decidable inpolynomial space, but not in non-deterministic polynomial time (unless NP =PSPACE).[8] NP-hard problems do not have to be elements of the complexity class NP. 
As NP plays a central role in computational complexity, it is used as the basis of several related complexity classes. NP-hard problems are often tackled with rules-based languages in a range of application areas. Problems that are decidable but not NP-complete are often optimization problems.
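The subset sum problem mentioned above is easy to state in code; the difficulty lies in the running time, since the straightforward exact approach enumerates subsets. The brute-force sketch below, exponential in the number of integers, is intended only to make the decision problem concrete.

```python
# Brute-force decision procedure for subset sum: does any non-empty
# subset of the integers sum to zero?  Runs in O(2^n) time.
from itertools import combinations

def has_zero_subset(numbers):
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([-7, -3, -2, 5, 8]))   # True: (-3) + (-2) + 5 == 0
print(has_zero_subset([1, 2, 4, 8]))         # False
```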
https://en.wikipedia.org/wiki/NP-hardness
A system on a chip (SoC) is an integrated circuit that combines most or all key components of a computer or electronic system onto a single microchip.[1] Typically, an SoC includes a central processing unit (CPU) with memory, input/output, and data storage control functions, along with optional features such as a graphics processing unit (GPU), Wi-Fi connectivity, and radio-frequency processing. This high level of integration minimizes the need for separate, discrete components, thereby enhancing power efficiency and simplifying device design. High-performance SoCs are often paired with dedicated memory, such as LPDDR, and flash storage chips, such as eUFS or eMMC, which may be stacked directly on top of the SoC in a package-on-package (PoP) configuration or placed nearby on the motherboard. Some SoCs also operate alongside specialized chips, such as cellular modems.[2] Fundamentally, SoCs integrate one or more processor cores with critical peripherals. This comprehensive integration is conceptually similar to the design of a microcontroller, but provides far greater computational power. The unified design delivers lower power consumption and a reduced semiconductor die area compared with traditional multi-chip architectures, at the cost of reduced modularity and component replaceability. SoCs are ubiquitous in mobile computing, where compact, energy-efficient designs are critical. They power smartphones, tablets, and smartwatches, and are increasingly important in edge computing, where real-time data processing occurs close to the data source. By driving the trend toward tighter integration, SoCs have reshaped the design landscape for modern computing devices.[3][4] In general, there are three distinguishable types of SoCs. SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks, as well as in embedded systems and in applications where previously microcontrollers would be used. SoCs are rising to prominence in the embedded systems market, where previously only microcontrollers could be used. Tighter system integration offers better reliability and mean time between failures, and SoCs offer more advanced functionality and computing power than microcontrollers.[5] Applications include AI acceleration, embedded machine vision,[6] data collection, telemetry, vector processing and ambient intelligence. Embedded SoCs often target the internet of things, multimedia, networking, telecommunications and edge computing markets. Some examples of SoCs for embedded applications are the STMicroelectronics STM32, the Raspberry Pi Ltd RP2040, and the AMD Zynq 7000. Mobile computing SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. With increasing memory sizes, high-end SoCs will often have no memory and flash storage on chip; instead, the memory and flash memory will be placed right next to, or above (package on package), the SoC.[7] In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips.
The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighterintegrationof hardware andfirmwaremodules, andLTEand otherwireless networkcommunications integrated on chip (integratednetwork interface controllers).[10] On modern laptops and mini PCs, the low-power variants ofAMD RyzenandIntel Coreprocessors use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips. An SoC consists of hardwarefunctional units, includingmicroprocessorsthat runsoftware code, as well as acommunications subsystemto connect, control, direct and interface between these functional modules. An SoC must have at least oneprocessor core, but typically an SoC has more than one core. Processor cores can be amicrocontroller,microprocessor(μP),[11]digital signal processor(DSP) orapplication-specific instruction set processor(ASIP) core.[12]ASIPs haveinstruction setsthat are customized for anapplication domainand designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. TheARM architectureis a common choice for SoC processor cores because some ARM-architecture cores aresoft processorsspecified asIP cores.[11] SoCs must havesemiconductor memoryblocks to perform their computation, as domicrocontrollersand otherembedded systems. Depending on the application, SoC memory may form amemory hierarchyandcache hierarchy. In the mobile computing market, this is common, but in manylow-powerembedded microcontrollers, this is not necessary. Memory technologies for SoCs includeread-only memory(ROM),random-access memory(RAM), Electrically Erasable Programmable ROM (EEPROM) andflash memory.[11]As in other computer systems, RAM can be subdivided into relatively faster but more expensivestatic RAM(SRAM) and the slower but cheaperdynamic RAM(DRAM). When an SoC has acachehierarchy, SRAM will usually be used to implementprocessor registersand cores'built-in cacheswhereas DRAM will be used formain memory. "Main memory" may be specific to a single processor (which can bemulti-core) when the SoChas multiple processors, in this case it isdistributed memoryand must be sent via§ Intermodule communicationon-chip to be accessed by a different processor.[12]For further discussion of multi-processing memory issues, seecache coherenceandmemory latency. SoCs include externalinterfaces, typically forcommunication protocols. These are often based upon industry standards such asUSB,Ethernet,USART,SPI,HDMI,I²C,CSI, etc. These interfaces will differ according to the intended application.Wireless networkingprotocols such asWi-Fi,Bluetooth,6LoWPANandnear-field communicationmay also be supported. When needed, SoCs includeanaloginterfaces includinganalog-to-digitalanddigital-to-analog converters, often forsignal processing. These may be able to interface with different types ofsensorsoractuators, includingsmart transducers. 
They may interface with application-specificmodulesor shields.[nb 1]Or they may be internal to the SoC, such as if an analog sensor is built in to the SoC and its readings must be converted to digital signals for mathematical processing. Digital signal processor(DSP) cores are often included on SoCs. They performsignal processingoperations in SoCs forsensors,actuators,data collection,data analysisand multimedia processing. DSP cores typically featurevery long instruction word(VLIW) andsingle instruction, multiple data(SIMD)instruction set architectures, and are therefore highly amenable to exploitinginstruction-level parallelismthroughparallel processingandsuperscalar execution.[12]: 4SP cores most often feature application-specific instructions, and as such are typicallyapplication-specific instruction set processors(ASIP). Such application-specific instructions correspond to dedicated hardwarefunctional unitsthat compute those instructions. Typical DSP instructions includemultiply-accumulate,Fast Fourier transform,fused multiply-add, andconvolutions. As with other computer systems, SoCs requiretiming sourcesto generateclock signals, control execution of SoC functions and provide time context tosignal processingapplications of the SoC, if needed. Popular time sources arecrystal oscillatorsandphase-locked loops. SoCperipheralsincludingcounter-timers, real-timetimersandpower-on resetgenerators. SoCs also includevoltage regulatorsandpower managementcircuits. SoCs comprise manyexecution units. These units must often send data andinstructionsback and forth. Because of this, all but the most trivial SoCs requirecommunications subsystems. Originally, as with othermicrocomputertechnologies,data busarchitectures were used, but recently designs based on sparse intercommunication networks known asnetworks-on-chip(NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.[13] Historically, a shared globalcomputer bustypically connected the different components, also called "blocks" of the SoC.[13]A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory accesscontrollers route data directly between external interfaces and SoC memory, bypassing the CPU orcontrol unit, thereby increasing the datathroughputof the SoC. This is similar to somedevice driversof peripherals on component-basedmulti-chip modulePC architectures. Wire delay is not scalable due to continuedminiaturization,system performancedoes not scale with the number of cores attached, the SoC'soperating frequencymust decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supportingmanycoresystems on chip.[13]: xiii In the late 2010s, a trend of SoCs implementingcommunications subsystemsin terms of a network-like topology instead ofbus-basedprotocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost.[13]: xiiiThis has led to the emergence of interconnection networks withrouter-basedpacket switchingknown as "networks on chip" (NoCs) to overcome thebottlenecksof bus-based networks.[13]: xiii Networks-on-chip have advantages including destination- and application-specificrouting, greater power efficiency and reduced possibility ofbus contention. 
Network-on-chip architectures take inspiration fromcommunication protocolslikeTCPand theInternet protocol suitefor on-chip communication,[13]although they typically have fewernetwork layers. Optimal network-on-chipnetwork architecturesare an ongoing area of much research interest. NoC architectures range from traditional distributed computingnetwork topologiessuch astorus,hypercube,meshesandtree networkstogenetic algorithm schedulingtorandomized algorithmssuch asrandom walks with branchingand randomizedtime to live(TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limitedfloorplanningchoices as the number of cores in SoCs increase, so asthree-dimensional integrated circuits(3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.[13] A system on a chip consists of both thehardware, described in§ Structure, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. Thedesign flowfor an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations (§ Optimization goals) and constraints. Most SoCs are developed from pre-qualified hardware componentIP core specificationsfor the hardware elements andexecution units, collectively "blocks", described above, together with softwaredevice driversthat may control their operation. Of particular importance are theprotocol stacksthat drive industry-standard interfaces likeUSB. The hardware blocks are put together usingcomputer-aided designtools, specificallyelectronic design automationtools; thesoftware modulesare integrated using a softwareintegrated development environment. SoCs components are also often designed inhigh-level programming languagessuch asC++,MATLABorSystemCand converted toRTLdesigns throughhigh-level synthesis(HLS) tools such asC to HDLorflow to HDL.[14]HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known tocomputer engineersin a manner independent of time scales, which are typically specified in HDL.[15]Other components can remain software and be compiled and embedded ontosoft-core processorsincluded in the SoC as modules in HDL asIP cores. Once thearchitectureof the SoC has been defined, any new hardware elements are written in an abstracthardware description languagetermedregister transfer level(RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is calledglue logic. Chips are verified for validation correctness before being sent to asemiconductor foundry. 
This process is calledfunctional verificationand it accounts for a significant portion of the time and energy expended in thechip design life cycle, often quoted as 70%.[16][17]With the growing complexity of chips,hardware verification languageslikeSystemVerilog,SystemC,e, and OpenVera are being used.Bugsfound in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration,emulationor prototyping onreprogrammable hardwareto verify and debug hardware and software for SoC designs prior to the finalization of the design, known astape-out.Field-programmable gate arrays(FPGAs) are favored for prototyping SoCs becauseFPGA prototypesare reprogrammable, allowdebuggingand are more flexible thanapplication-specific integrated circuits(ASICs).[18][19] With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.[citation needed] FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus[20]are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process oflogic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as anetlistdescribing the design as a physical circuit and its interconnections. These netlists are combined with theglue logicconnecting the components to produce the schematic description of the SoC as a circuit which can beprintedonto a chip. This process is known asplace and routeand precedestape-outin the event that the SoCs are produced asapplication-specific integrated circuits(ASIC). SoCs must optimizepower use, area ondie, communication, positioning forlocalitybetween modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use amulti-chip modulearchitecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hardcombinatorial optimizationproblem, and can indeed beNP-hardfairly easily. Therefore, sophisticatedoptimization algorithmsare often required and it may be practical to useapproximation algorithmsorheuristicsin some cases. Additionally, most SoC designs containmultiple variables to optimize simultaneously, soPareto efficientsolutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducingtrade-offsin system design. For broader coverage of trade-offs andrequirements analysis, seerequirements engineering. SoCs are optimized to minimize theelectrical powerused to perform the SoC's functions. Most SoCs must use low power. 
SoC systems often require longbattery life(such assmartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number ofembeddedSoCs beingnetworked togetherin an area. Additionally, energy costs can be high and conserving energy will reduce thetotal cost of ownershipof the SoC. Finally,waste heatfrom high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is theintegralofpowerconsumed with respect to time, and theaverage rateof power consumption is the product ofcurrentbyvoltage. Equivalently, byOhm's law, power is current squared times resistance or voltage squared divided byresistance: P=IV=V2R=I2R{\displaystyle P=IV={\frac {V^{2}}{R}}={I^{2}}{R}}SoCs are frequently embedded inportable devicessuch assmartphones,GPS navigation devices, digitalwatches(includingsmartwatches) andnetbooks. Customers want long battery lives formobile computingdevices, another reason that power consumption must be minimized in SoCs.Multimedia applicationsare often executed on these devices, including video games,video streaming,image processing; all of which have grown incomputational complexityin recent years with user demands and expectations for higher-qualitymultimedia. Computation is more demanding as expectations move towards3D videoathigh resolutionwithmultiple standards, so SoCs performing multimedia tasks must be computationally capable platform while being low power to run off a standard mobile battery.[12]: 3 SoCs are optimized to maximizepower efficiencyin performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such asedge computing,distributed processingandambient intelligencerequire a certain level ofcomputational performance, but power is limited in most SoC environments. SoC designs are optimized to minimizewaste heatoutputon the chip. As with otherintegrated circuits, heat generated due to highpower densityare thebottleneckto furtherminiaturizationof components.[21]: 1The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erodereliabilityof the circuit over time. High temperatures and thermal stress negatively impact reliability,stress migration, decreasedmean time between failures,electromigration,wire bonding,metastabilityand other performance degradation of the SoC over time.[21]: 2–9 In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of hightransistor countson modern devices, oftentimes a layout of sufficient throughput and hightransistor densityis physically realizable fromfabrication processesbut would result in unacceptably high amounts of heat in the circuit's volume.[21]: 1 These thermal effects force SoC and other chip designers to apply conservativedesign margins, creating less performant devices to mitigate the risk ofcatastrophic failure. Due to increasedtransistor densitiesas length scales get smaller, eachprocess generationproduces more heat output than the last. 
Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneousheat fluxes, which cannot be effectively mitigated by uniformpassive cooling.[21]: 1 SoCs are optimized to maximize computational and communicationsthroughput. SoCs are optimized to minimizelatencyfor some or all of their functions. This can be accomplished bylaying outelements with proper proximity andlocalityto each-other to minimize the interconnection delays and maximize the speed at which data is communicated between modules,functional unitsand memories. In general, optimizing to minimize latency is anNP-completeproblem equivalent to theBoolean satisfiability problem. Fortasksrunning on processor cores, latency and throughput can be improved withtask scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints. Systems on chip are modeled with standard hardwareverification and validationtechniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect tomultiple-criteria decision analysison the above optimization targets. Task schedulingis an important activity in any computer system with multipleprocessesorthreadssharing a single processor core. It is important to reduce§ Latencyand increase§ Throughputforembedded softwarerunning on an SoC's§ Processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involvingshared resources. Software running on SoCs often schedules tasks according tonetwork schedulingandrandomized schedulingalgorithms. Hardware and software tasks are often pipelined inprocessor design. Pipelining is an important principle forspeedupincomputer architecture. They are frequently used inGPUs(graphics pipeline) and RISC processors (evolutions of theclassic RISC pipeline), but are also applied to application-specific tasks such asdigital signal processingand multimedia manipulations in the context of SoCs.[12] SoCs are often analyzed thoughprobabilistic models,queueing networks, andMarkov chains. For instance,Little's lawallows SoC states and NoC buffers to be modeled as arrival processes and analyzed throughPoisson random variablesandPoisson processes. SoCs are often modeled withMarkov chains, bothdiscrete timeandcontinuous timevariants. Markov chain modeling allowsasymptotic analysisof the SoC'ssteady state distributionof power, heat, latency and other factors to allow design decisions to be optimized for the common case. SoC chips are typicallyfabricatedusingmetal–oxide–semiconductor(MOS) technology.[22]The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. 
When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership.[23] SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like mostvery-large-scale integration(VLSI) designs, the total cost[clarification needed]is higher for one large chip than for the same functionality distributed over several smaller chips, because oflower yields[clarification needed]and highernon-recurring engineeringcosts. When it is not feasible to construct an SoC for a particular application, an alternative is asystem in package(SiP) comprising a number of chips in a singlepackage. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler.[24]Another reason SiP may be preferred iswaste heatmay be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart. Some examples of systems on a chip are: SoCresearch and developmentoften compares many options. Benchmarks, such as COSMIC,[25]are developed to help such evaluations.
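To make the earlier discussion of multi-objective optimization and Pareto-efficient design points concrete, the sketch below filters a set of hypothetical SoC design candidates down to those not dominated in every objective by another candidate; the power, latency and area figures are invented for illustration.

```python
# Pareto filtering of hypothetical SoC design points; all figures are invented.
# Each candidate is (name, power_mW, latency_ms, area_mm2) and lower is better.
designs = [
    ("A", 250, 1.2, 60),
    ("B", 300, 0.8, 70),
    ("C", 260, 1.3, 65),   # dominated by A: worse in every objective
    ("D", 180, 2.5, 40),
]

def dominates(x, y):
    """x dominates y if x is no worse in every objective and better in at least one."""
    return (all(a <= b for a, b in zip(x[1:], y[1:]))
            and any(a < b for a, b in zip(x[1:], y[1:])))

pareto_front = [d for d in designs
                if not any(dominates(other, d) for other in designs)]
print([name for name, *_ in pareto_front])   # ['A', 'B', 'D']
```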
https://en.wikipedia.org/wiki/MPSoC
Inprobability theoryandstatistics,diffusion processesare a class of continuous-timeMarkov processwithalmost surelycontinuoussample paths. Diffusion process isstochasticin nature and hence is used to model many real-life stochastic systems.Brownian motion,reflected Brownian motionandOrnstein–Uhlenbeck processesare examples of diffusion processes. It is used heavily instatistical physics,statistical analysis,information theory,data science,neural networks,financeandmarketing. A sample path of a diffusion process models the trajectory of a particle embedded in a flowing fluid and subjected to random displacements due to collisions with other particles, which is calledBrownian motion. The position of the particle is then random; itsprobability density functionas afunction of space and timeis governed by aconvection–diffusion equation. Adiffusion processis aMarkov processwithcontinuous sample pathsfor which theKolmogorov forward equationis theFokker–Planck equation.[1] A diffusion process is defined by the following properties. Letaij(x,t){\displaystyle a^{ij}(x,t)}be uniformly continuous coefficients andbi(x,t){\displaystyle b^{i}(x,t)}be bounded, Borel measurable drift terms. There is a unique family of probability measuresPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}(forτ≥0{\displaystyle \tau \geq 0},ξ∈Rd{\displaystyle \xi \in \mathbb {R} ^{d}}) on the canonical spaceΩ=C([0,∞),Rd){\displaystyle \Omega =C([0,\infty ),\mathbb {R} ^{d})}, with its Borelσ{\displaystyle \sigma }-algebra, such that: 1. (Initial Condition) The process starts atξ{\displaystyle \xi }at timeτ{\displaystyle \tau }:Pa;bξ,τ[ψ∈Ω:ψ(t)=ξfor0≤t≤τ]=1.{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }[\psi \in \Omega :\psi (t)=\xi {\text{ for }}0\leq t\leq \tau ]=1.} 2. (Local Martingale Property) For everyf∈C2,1(Rd×[τ,∞)){\displaystyle f\in C^{2,1}(\mathbb {R} ^{d}\times [\tau ,\infty ))}, the processMt[f]=f(ψ(t),t)−f(ψ(τ),τ)−∫τt(La;b+∂∂s)f(ψ(s),s)ds{\displaystyle M_{t}^{[f]}=f(\psi (t),t)-f(\psi (\tau ),\tau )-\int _{\tau }^{t}{\bigl (}L_{a;b}+{\tfrac {\partial }{\partial s}}{\bigr )}f(\psi (s),s)\,ds}is a local martingale underPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}fort≥τ{\displaystyle t\geq \tau }, withMt[f]=0{\displaystyle M_{t}^{[f]}=0}fort≤τ{\displaystyle t\leq \tau }. This familyPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}is called theLa;b{\displaystyle {\mathcal {L}}_{a;b}}-diffusion. It is clear that if we have anLa;b{\displaystyle {\mathcal {L}}_{a;b}}-diffusion, i.e.(Xt)t≥0{\displaystyle (X_{t})_{t\geq 0}}on(Ω,F,Ft,Pa;bξ,τ){\displaystyle (\Omega ,{\mathcal {F}},{\mathcal {F}}_{t},\mathbb {P} _{a;b}^{\xi ,\tau })}, thenXt{\displaystyle X_{t}}satisfies the SDEdXti=12∑k=1dσki(Xt)dBtk+bi(Xt)dt{\displaystyle dX_{t}^{i}={\frac {1}{2}}\,\sum _{k=1}^{d}\sigma _{k}^{i}(X_{t})\,dB_{t}^{k}+b^{i}(X_{t})\,dt}. In contrast, one can construct this diffusion from that SDE ifaij(x,t)=∑kσik(x,t)σjk(x,t){\displaystyle a^{ij}(x,t)=\sum _{k}\sigma _{i}^{k}(x,t)\,\sigma _{j}^{k}(x,t)}andσij(x,t){\displaystyle \sigma ^{ij}(x,t)},bi(x,t){\displaystyle b^{i}(x,t)}are Lipschitz continuous. To see this, letXt{\displaystyle X_{t}}solve the SDE starting atXτ=ξ{\displaystyle X_{\tau }=\xi }. 
Forf∈C2,1(Rd×[τ,∞)){\displaystyle f\in C^{2,1}(\mathbb {R} ^{d}\times [\tau ,\infty ))}, apply Itô's formula:df(Xt,t)=(∂f∂t+∑i=1dbi∂f∂xi+v∑i,j=1daij∂2f∂xi∂xj)dt+∑i,k=1d∂f∂xiσkidBtk.{\displaystyle df(X_{t},t)={\bigl (}{\frac {\partial f}{\partial t}}+\sum _{i=1}^{d}b^{i}{\frac {\partial f}{\partial x_{i}}}+v\sum _{i,j=1}^{d}a^{ij}\,{\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}{\bigr )}\,dt+\sum _{i,k=1}^{d}{\frac {\partial f}{\partial x_{i}}}\,\sigma _{k}^{i}\,dB_{t}^{k}.}Rearranging givesf(Xt,t)−f(Xτ,τ)−∫τt(∂f∂s+La;bf)ds=∫τt∑i,k=1d∂f∂xiσkidBsk,{\displaystyle f(X_{t},t)-f(X_{\tau },\tau )-\int _{\tau }^{t}{\bigl (}{\frac {\partial f}{\partial s}}+L_{a;b}f{\bigr )}\,ds=\int _{\tau }^{t}\sum _{i,k=1}^{d}{\frac {\partial f}{\partial x_{i}}}\,\sigma _{k}^{i}\,dB_{s}^{k},}whose right‐hand side is a local martingale, matching the local‐martingale property in the diffusion definition. The law ofXt{\displaystyle X_{t}}definesPa;bξ,τ{\displaystyle \mathbb {P} _{a;b}^{\xi ,\tau }}onΩ=C([0,∞),Rd){\displaystyle \Omega =C([0,\infty ),\mathbb {R} ^{d})}with the correct initial condition and local martingale property. Uniqueness follows from the Lipschitz continuity ofσ,b{\displaystyle \sigma \!,\!b}. In fact,La;b+∂∂s{\displaystyle L_{a;b}+{\tfrac {\partial }{\partial s}}}coincides with the infinitesimal generatorA{\displaystyle {\mathcal {A}}}of this process. IfXt{\displaystyle X_{t}}solves the SDE, then forf(x,t)∈C2(Rd×R+){\displaystyle f(\mathbf {x} ,t)\in C^{2}(\mathbb {R} ^{d}\times \mathbb {R} ^{+})}, the generatorA{\displaystyle {\mathcal {A}}}isAf(x,t)=∑i=1dbi(x,t)∂f∂xi+v∑i,j=1daij(x,t)∂2f∂xi∂xj+∂f∂t.{\displaystyle {\mathcal {A}}f(\mathbf {x} ,t)=\sum _{i=1}^{d}b_{i}(\mathbf {x} ,t)\,{\frac {\partial f}{\partial x_{i}}}+v\sum _{i,j=1}^{d}a_{ij}(\mathbf {x} ,t)\,{\frac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}+{\frac {\partial f}{\partial t}}.} Thisprobability-related article is astub. You can help Wikipedia byexpanding it.
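For intuition about sample paths, a diffusion with given drift and diffusion coefficients can be simulated approximately with the Euler–Maruyama scheme. The sketch below simulates an Ornstein–Uhlenbeck-type process dX_t = −θX_t dt + σ dB_t with arbitrary parameter values; it is a numerical approximation for illustration, not the measure-theoretic construction described above.

```python
# Euler-Maruyama simulation of an Ornstein-Uhlenbeck-type diffusion
# dX_t = -theta * X_t dt + sigma dB_t (parameter values are arbitrary).
import math
import random

def simulate_ou(x0=1.0, theta=0.7, sigma=0.3, dt=0.01, steps=1000, seed=42):
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dB = random.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        x = x + (-theta * x) * dt + sigma * dB  # Euler-Maruyama update
        path.append(x)
    return path

path = simulate_ou()
print(path[0], path[-1])  # starts at 1.0, decays toward 0 with random fluctuations
```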
https://en.wikipedia.org/wiki/Diffusion_process
Theparallel operator‖{\displaystyle \|}(pronounced "parallel",[1]following theparallel lines notation from geometry;[2][3]also known asreduced sum,parallel sumorparallel addition) is abinary operationwhich is used as a shorthand inelectrical engineering,[4][5][6][nb 1]but is also used inkinetics,fluid mechanicsandfinancial mathematics.[7][8]The nameparallelcomes from the use of the operator computing the combined resistance ofresistors in parallel. The parallel operator represents thereciprocalvalue of a sum of reciprocal values (sometimes also referred to as the "reciprocal formula" or "harmonicsum") and is defined by:[9][6][10][11] wherea,b, anda∥b{\displaystyle a\parallel b}are elements of theextended complex numbersC¯=C∪{∞}.{\displaystyle {\overline {\mathbb {C} }}=\mathbb {C} \cup \{\infty \}.}[12][13] The operator gives half of theharmonic meanof two numbersaandb.[7][8] As a special case, for any numbera∈C¯{\displaystyle a\in {\overline {\mathbb {C} }}}: Further, for all distinct numbersa≠b{\displaystyle a\neq b}: with|a∥b|{\displaystyle {\big |}\,a\parallel b\,{\big |}}representing theabsolute valueofa∥b{\displaystyle a\parallel b}, andmin(x,y){\displaystyle \min(x,y)}meaning theminimum(least element) amongxandy. Ifa{\displaystyle a}andb{\displaystyle b}are distinct positive real numbers then12min(a,b)<|a∥b|<min(a,b).{\displaystyle {\tfrac {1}{2}}\min(a,b)<{\big |}\,a\parallel b\,{\big |}<\min(a,b).} The concept has been extended from ascalaroperation tomatrices[14][15][16][17][18]and furthergeneralized.[19] The operator was originally introduced asreduced sumby Sundaram Seshu in 1956,[20][21][14]studied as operator∗by Kent E. Erickson in 1959,[22][23][14]and popularized byRichard James Duffinand William Niles Anderson, Jr. asparallel additionorparallel sumoperator:inmathematicsandnetwork theorysince 1966.[15][16][1]While some authors continue to use this symbol up to the present,[7][8]for example, Sujit Kumar Mitra used∙as a symbol in 1970.[14]Inapplied electronics, a∥sign became more common as the operator's symbol around 1974.[24][25][26][27][28][nb 1][nb 2]This was often written as doubled vertical line (||) available in mostcharacter sets(sometimes italicized as//[29][30]), but now can be represented usingUnicodecharacter U+2225 ( ∥ ) for "parallel to". InLaTeXand related markup languages, the macros\|and\parallelare often used (and rarely\smallparallelis used) to denote the operator's symbol. LetC~{\displaystyle {\widetilde {\mathbb {C} }}}represent theextended complex planeexcluding zero,C~:=C∪{∞}∖{0},{\displaystyle {\widetilde {\mathbb {C} }}:=\mathbb {C} \cup \{\infty \}\smallsetminus \{0\},}andφ{\displaystyle \varphi }thebijective functionfromC{\displaystyle \mathbb {C} }toC~{\displaystyle {\widetilde {\mathbb {C} }}}such thatφ(z)=1/z.{\displaystyle \varphi (z)=1/z.}One has identities and This implies immediately thatC~{\displaystyle {\widetilde {\mathbb {C} }}}is afieldwhere the parallel operator takes the place of the addition, and that this field isisomorphictoC.{\displaystyle \mathbb {C} .} The following properties may be obtained by translating throughφ{\displaystyle \varphi }the corresponding properties of the complex numbers. As for any field,(C~,∥,⋅){\displaystyle ({\widetilde {\mathbb {C} }},\,\parallel \,,\,\cdot \,)}satisfies a variety of basic identities. 
It iscommutativeunder parallel and multiplication: It isassociativeunder parallel and multiplication:[12][7][8] Both operations have anidentityelement; for parallel the identity is∞{\displaystyle \infty }while for multiplication the identity is1: Every elementa{\displaystyle a}ofC~{\displaystyle {\widetilde {\mathbb {C} }}}has aninverseunder parallel, equal to−a,{\displaystyle -a,}the additive inverse under addition. (But0has no inverse under parallel.) The identity element∞{\displaystyle \infty }is its own inverse,∞∥∞=∞.{\displaystyle \infty \parallel \infty =\infty .} Every elementa≠∞{\displaystyle a\neq \infty }ofC~{\displaystyle {\widetilde {\mathbb {C} }}}has amultiplicative inversea−1=1/a{\displaystyle a^{-1}=1/a}: Multiplication isdistributiveover parallel:[1][7][8] Repeated parallel is equivalent to division, Or, multiplying both sides byn, Unlike forrepeated addition, this does not commute: Using the distributive property twice, the product of two parallel binomials can be expanded as The square of a binomial is The cube of a binomial is In general, thenth power of a binomial can be expanded usingbinomial coefficientswhich are the reciprocal of those under addition, resulting in an analog of thebinomial formula: The following identities hold: As with apolynomialunder addition, a parallel polynomial with coefficientsak{\displaystyle a_{k}}inC~{\textstyle {\widetilde {\mathbb {C} }}}(witha0≠∞{\displaystyle a_{0}\neq \infty })can befactoredinto a product of monomials: for some rootsrk{\displaystyle r_{k}}(possibly repeated) inC~.{\textstyle {\widetilde {\mathbb {C} }}.} Analogous to polynomials under addition, the polynomial equation implies thatx=rk{\textstyle x=r_{k}}for somek. Alinear equationcan be easily solved via the parallel inverse: To solve a parallelquadratic equation,complete the squareto obtain an analog of thequadratic formula Theextended complex numbersincludingzero,C¯:=C∪∞,{\displaystyle {\overline {\mathbb {C} }}:=\mathbb {C} \cup \infty ,}is no longer a field under parallel and multiplication, because0has no inverse under parallel. (This is analogous to the way(C¯,+,⋅){\displaystyle {\bigl (}{\overline {\mathbb {C} }},{+},{\cdot }{\bigr )}}is not a field because∞{\displaystyle \infty }has no additive inverse.) For every non-zeroa, The quantity0∥(−0)=0∥0{\displaystyle 0\parallel (-0)=0\parallel 0}can either be left undefined (seeindeterminate form) or defined to equal0. In the absence of parentheses, the parallel operator is defined astaking precedenceover addition or subtraction, similar to multiplication.[1][31][9][10] There are applications of the parallel operator in mechanics, electronics, optics, and study of periodicity: Given massesmandM, thereduced massμ=mMm+M=m∥M{\displaystyle \mu ={\frac {mM}{m+M}}=m\parallel M}is frequently applied in mechanics. For instance, when the masses orbit each other, themoment of inertiais their reduced mass times the distance between them. Inelectrical engineering, the parallel operator can be used to calculate the total impedance of variousserial and parallelelectrical circuits.[nb 2]There is adualitybetween the usual(series) sumand the parallel sum.[7][8] For instance, the totalresistanceofresistors connected in parallelis the reciprocal of the sum of the reciprocals of the individualresistors. 
Likewise for the total capacitance of capacitors connected in series.[nb 2] The coalesced density function f_coalesced(x) of n independent probability density functions f1(x), f2(x), …, fn(x) is equal to the reciprocal of the sum of the reciprocal densities.[32] In geometric optics, the thin lens approximation to the lens maker's equation relates the focal length f to the object distance u and the image distance v by 1/f = 1/u + 1/v, that is, f = u ∥ v. The time between conjunctions of two orbiting bodies is called the synodic period. If the period of the slower body is T2 and the period of the faster is T1, then the synodic period is 1/(1/T1 − 1/T2) = T1 ∥ (−T2). Suggested already by Kent E. Erickson as a subroutine in digital computers in 1959,[22] the parallel operator is implemented as a keyboard operator on the Reverse Polish Notation (RPN) scientific calculators WP 34S since 2008,[33][34][35] as well as on the WP 34C[36] and WP 43S since 2015,[37][38] allowing even cascaded problems to be solved with a few keystrokes, such as 270 ↵ Enter 180 ∥ 120 ∥. Given a field F there are two embeddings of F into the projective line P(F): z → [z : 1] and z → [1 : z]. These embeddings overlap except at [0 : 1] and [1 : 0]. The parallel operator relates the addition operation between the embeddings. In fact, the homographies on the projective line are represented by 2 × 2 matrices M(2, F), and the field operations (+ and ×) are extended to homographies. Each embedding has its addition a + b represented by matrix multiplications in M(2, F). The two matrix products show that there are two subgroups of M(2, F) isomorphic to (F, +), the additive group of F. Depending on which embedding is used, one operation is +, the other is ∥.
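A direct way to experiment with these identities is to implement the operator numerically. The sketch below handles only finite, nonzero real inputs (the extended complex plane and the identity element ∞ are not modelled) and checks it against the resistor and reduced-mass applications mentioned above.

```python
# Minimal numerical sketch of the parallel operator a || b = 1 / (1/a + 1/b)
# for finite, nonzero real inputs; the extended-complex-plane cases are omitted.

def parallel(a, b):
    return 1.0 / (1.0 / a + 1.0 / b)

# Two resistors in parallel:
print(parallel(180, 120))                  # 72.0 ohms
# Cascaded, as in 270 || 180 || 120 (the operator is associative):
print(parallel(270, parallel(180, 120)))   # ~56.8 ohms
# Reduced mass of two bodies, m*M/(m+M):
m, M = 2.0, 6.0
print(parallel(m, M))                      # 1.5
```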
https://en.wikipedia.org/wiki/Parallel_addition_(mathematics)
The decision-matrix method, also Pugh method or Pugh concept selection, invented by Stuart Pugh,[1] is a qualitative technique used to rank the multi-dimensional options of an option set. It is frequently used in engineering for making design decisions but can also be used to rank investment options, vendor options, product options or any other set of multidimensional entities. A basic decision matrix consists of establishing a set of criteria and a group of potential candidate designs, one of which is chosen as a reference. The other designs are then compared to this reference design and ranked as better, worse, or the same on each criterion. The number of times "better" and "worse" appears for each design is then displayed, but not summed up. A weighted decision matrix operates in the same way as the basic decision matrix but introduces weighting of the criteria in order of importance: the more important the criterion, the higher the weighting it is given.[2] The advantage of the decision matrix is that it encourages self-reflection among the members of a design team, helping them analyse each candidate with minimal bias (team members can be biased towards certain designs, such as their own). Another advantage is that sensitivity studies can be performed, for example to see how much a score would have to change in order for a lower-ranked alternative to outrank a competing alternative. The method also has important disadvantages, however. Morphological analysis is another form of a decision matrix, employing a multi-dimensional configuration space linked by way of logical relationships.
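The mechanics of a weighted decision matrix are easy to show in code. In the sketch below the criteria, weights and scores are invented; candidates are scored against a reference design as better (+1), the same (0) or worse (−1), and the weighted totals are then compared.

```python
# Illustrative weighted decision matrix (Pugh-style). Criteria, weights and
# scores are invented; scores are relative to a reference design:
# +1 = better, 0 = same, -1 = worse.
weights = {"cost": 3, "reliability": 5, "ease of manufacture": 2}

candidates = {
    "Design A": {"cost": +1, "reliability": -1, "ease of manufacture": 0},
    "Design B": {"cost": -1, "reliability": +1, "ease of manufacture": +1},
}

for name, scores in candidates.items():
    weighted_total = sum(weights[c] * s for c, s in scores.items())
    print(name, weighted_total)
# Design A: 3*1 + 5*(-1) + 2*0 = -2
# Design B: 3*(-1) + 5*1 + 2*1 = 4
```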
https://en.wikipedia.org/wiki/Decision-matrix_method
lsofis a command meaning "list open files", which is used in manyUnix-likesystems to report a list of all open files and the processes that opened them. Thisopen sourceutility was developed and supported by Victor A. Abell, the retired Associate Director of thePurdue UniversityComputing Center. It works in and supports several Unix flavors.[4] A replacement for Linux,lsfd, is included inutil-linux.[5] In 1985, Cliff Spencer publishes theofilescommand. Itsman pagesays: "ofiles – who has a file open [...] displays the owner and id of any process accessing a specified device". Spencer compiled it for4.2BSDandULTRIX.[6]Moreover, in thenewsgroupnet.unix-wizards, he further remarks:[7] With all the chatter about dismounting active file systems, I have posted my program to indicate who is using a particular filesystem, "ofiles" to net.sources. In 1988, the commandfstat("file status") appears as part of the4.3BSD-Tahoerelease. Its man page says:[8] fstatidentifies open files. A file is considered open if a process has it open, if it is the working directory for a process, or if it is an active pure text file. If no options are specified,fstatreports on all open files. In 1989, in comp.sources.unix, Vic Abell publishes ports of the ofiles and fstat commands from4.3BSD-Tahoeto "DYNIX3.0.1[24] for Sequent Symmetry and Balance,SunOS4.0 andULTRIX2.2".[9][10]Various people had evolved and ported ofiles over the years. Abell contrasted the commands as follows:[10] Fstat is similar to the ofiles program which I recently submitted. Like ofiles, fstat identifies open files. It's orientation differs slightly from that of ofiles: ofiles starts with a file name and paws through the proc and user structures to identify the file; fstat reads all the proc and user structures, displaying information in all files, optionally applying a few filters to the output (including a single file name filter.) In combination with netstat -aA and grep, fstat will identify the process associated with a network connection, just as will ofiles. In 1991, Vic Abell publishes lsof version 1.0 to comp.sources.unix. He notes:[1] Lsof (for LiSt Open Files) lists files opened by processes on selected Unix systems. It is my answer to those who regularly ask me when I am going to make fstat (comp.sources.unix volume 18, number 107) or ofiles (volume 18, number 57) available onSunOS4.1.1 or the like. Lsof is a complete redesign of the fstat/ofiles series, based on the SunOS vnode model. Thus, it has been tested onAIX3.1.[357],HP-UX[78].x,NeXTStep2.[01], SequentDynix3.0.12 and 3.1.2, andSunos4.1 and 4.1.1. Using available kernel access methods, such as nlist() and kvm_read(), lsof reads process table entries, user areas and file pointers to reach the underlying structures that describe files opened by processes. In 2018, Vic Abbell publishes lsof version 4.92. The same year, he initiates the transfer of responsibility. He writes:[11] I will reach 80 years of age later this year and I think it's time for me to end my work on general lsof revision releases. The lsof code is put on Github and maintenance is transferred.[11][12] Open files in the system include disk files,named pipes, networksocketsand devices opened by all processes. One use for this command is when a disk cannot be unmounted because (unspecified) files are in use. The listing of open files can be consulted (suitably filtered if necessary) to identify the process that is using the files. 
lsof can be used to view the port associated with a daemon; for example, filtering the output on port 25 shows that "sendmail" is listening on its standard port of 25. One can also list Unix sockets by using lsof -U. Each line of lsof output describes one open file, including the command that holds it open, its PID, the owning user, the file descriptor, the file type and the file name. For a complete list of options, see the lsof(8) Linux manual page.[13]
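One minimal way to reproduce the two queries mentioned above from Python is to invoke the lsof binary directly. The wrapper below is illustrative only; it assumes lsof is installed, and listing files owned by other users may require elevated privileges.

```python
import subprocess

def lsof(args):
    """Run lsof with the given arguments and return its stdout as text."""
    result = subprocess.run(["lsof", *args], capture_output=True, text=True)
    return result.stdout

# Processes with port 25 open (e.g. a local sendmail daemon listening there).
print(lsof(["-i", ":25"]))

# Unix domain sockets, as mentioned above (lsof -U).
print(lsof(["-U"]))
```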
https://en.wikipedia.org/wiki/Lsof
{n∣∃k∈Z,n=2k}{\displaystyle \{n\mid \exists k\in \mathbb {Z} ,n=2k\}} Inmathematicsand more specifically inset theory,set-builder notationis anotationfor specifying asetby a property that characterizes its members.[1] Specifying sets by member properties is allowed by theaxiom schema of specification. This is also known asset comprehensionandset abstraction. Set-builder notation can be used to describe a set that is defined by apredicate, that is, a logical formula that evaluates totruefor an element of the set, andfalseotherwise.[2]In this form, set-builder notation has three parts: a variable, acolonorvertical barseparator, and a predicate. Thus there is a variable on the left of the separator, and a rule on the right of it. These three parts are contained in curly brackets: or The vertical bar (or colon) is a separator that can be read as "such that", "for which", or "with the property that". The formulaΦ(x)is said to be theruleor thepredicate. All values ofxfor which the predicate holds (is true) belong to the set being defined. All values ofxfor which the predicate does not hold do not belong to the set. Thus{x∣Φ(x)}{\displaystyle \{x\mid \Phi (x)\}}is the set of all values ofxthat satisfy the formulaΦ.[3]It may be theempty set, if no value ofxsatisfies the formula. A domainEcan appear on the left of the vertical bar:[4] or by adjoining it to the predicate: The ∈ symbol here denotesset membership, while the∧{\displaystyle \land }symbol denotes the logical "and" operator, known aslogical conjunction. This notation represents the set of all values ofxthat belong to some given setEfor which the predicate is true (see "Set existence axiom" below). IfΦ(x){\displaystyle \Phi (x)}is a conjunctionΦ1(x)∧Φ2(x){\displaystyle \Phi _{1}(x)\land \Phi _{2}(x)}, then{x∈E∣Φ(x)}{\displaystyle \{x\in E\mid \Phi (x)\}}is sometimes written{x∈E∣Φ1(x),Φ2(x)}{\displaystyle \{x\in E\mid \Phi _{1}(x),\Phi _{2}(x)\}}, using a comma instead of the symbol∧{\displaystyle \land }. In general, it is not a good idea to consider sets without defining adomain of discourse, as this would represent thesubsetofall possible things that may existfor which the predicate is true. This can easily lead to contradictions and paradoxes. For example,Russell's paradoxshows that the expression{x|x∉x},{\displaystyle \{x~|~x\not \in x\},}although seemingly well formed as a set builder expression, cannot define a set without producing a contradiction.[5] In cases where the setEis clear from context, it may be not explicitly specified. It is common in the literature for an author to state the domain ahead of time, and then not specify it in the set-builder notation. For example, an author may say something such as, "Unless otherwise stated, variables are to be taken to be natural numbers," though in less formal contexts where the domain can be assumed, a written mention is often unnecessary. The following examples illustrate particular sets defined by set-builder notation via predicates. In each case, the domain is specified on the left side of the vertical bar, while the rule is specified on the right side. An extension of set-builder notation replaces the single variablexwith anexpression. So instead of{x∣Φ(x)}{\displaystyle \{x\mid \Phi (x)\}}, we may have{f(x)∣Φ(x)},{\displaystyle \{f(x)\mid \Phi (x)\},}which should be read For example: When inverse functions can be explicitly stated, the expression on the left can be eliminated through simple substitution. 
Consider the example set{2t+1∣t∈Z}{\displaystyle \{2t+1\mid t\in \mathbb {Z} \}}. Make the substitutionu=2t+1{\displaystyle u=2t+1}, which is to sayt=(u−1)/2{\displaystyle t=(u-1)/2}, then replacetin the set builder notation to find Two sets are equal if and only if they have the same elements. Sets defined by set builder notation are equal if and only if their set builder rules, including the domain specifiers, are equivalent. That is if and only if Therefore, in order to prove the equality of two sets defined by set builder notation, it suffices to prove the equivalence of their predicates, including the domain qualifiers. For example, because the two rule predicates are logically equivalent: This equivalence holds because, for any real numberx, we havex2=1{\displaystyle x^{2}=1}if and only ifxis a rational number with|x|=1{\displaystyle |x|=1}. In particular, both sets are equal to the set{−1,1}{\displaystyle \{-1,1\}}. In many formal set theories, such asZermelo–Fraenkel set theory, set builder notation is not part of the formal syntax of the theory. Instead, there is aset existence axiom scheme, which states that ifEis a set andΦ(x)is a formula in the language of set theory, then there is a setYwhose members are exactly the elements ofEthat satisfyΦ: The setYobtained from this axiom is exactly the set described in set builder notation as{x∈E∣Φ(x)}{\displaystyle \{x\in E\mid \Phi (x)\}}. A similar notation available in a number ofprogramming languages(notablyPythonandHaskell) is thelist comprehension, which combinesmapandfilteroperations over one or morelists. In Python, the set-builder's braces are replaced with square brackets, parentheses, or curly braces, giving list,generator, and set objects, respectively. Python uses an English-based syntax. Haskell replaces the set-builder's braces with square brackets and uses symbols, including the standard set-builder vertical bar. The same can be achieved inScalausing Sequence Comprehensions, where the "for" keyword returns a list of the yielded variables using the "yield" keyword.[6] Consider these set-builder notation examples in some programming languages: The set builder notation and list comprehension notation are both instances of a more general notation known asmonad comprehensions, which permits map/filter-like operations over anymonadwith azero element.
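The programming-language examples referred to above do not survive in this text; the following Python sketch shows how the sets discussed earlier can be written as comprehensions, with the infinite integer domain replaced by a finite range so that the result is computable.

```python
# Finite analogues of the set-builder examples above, written as Python
# comprehensions. The mathematical sets range over all integers; here the
# domain is restricted to a finite range so the result can be enumerated.

E = range(-10, 11)                      # stand-in for a finite domain

evens = {n for n in E if n % 2 == 0}    # {n in E | n = 2k for some integer k}
odds  = {2 * t + 1 for t in E}          # {2t + 1 | t in E}

as_list      = [n for n in E if n % 2 == 0]   # square brackets: a list
as_generator = (n for n in E if n % 2 == 0)   # parentheses: a generator

print(sorted(evens))
print(sorted(odds))
```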
https://en.wikipedia.org/wiki/Set-builder_notation
Inlinguistics, acatena(English pronunciation:/kəˈtiːnə/, pluralcatenasorcatenae; fromLatinfor "chain")[1]is a unit ofsyntaxandmorphology, closely associated withdependency grammars. It is a more flexible and inclusive unit than theconstituentand its proponents therefore consider it to be better suited than the constituent to serve as the fundamental unit of syntactic and morphosyntactic analysis.[2] The catena has served as the basis for the analysis of a number of phenomena of syntax, such asidiosyncratic meaning,ellipsismechanisms (e.g.gapping,stripping,VP-ellipsis,pseudogapping,sluicing,answer ellipsis, comparative deletion),predicate-argumentstructures, anddiscontinuities(topicalization,wh-fronting,scrambling,extraposition, etc.).[3]The catena concept has also been taken as the basis for a theory of morphosyntax, i.e. for the extension of dependencies into words; dependencies are acknowledged between the morphs that constitute words.[4] While the catena concept has been applied mainly to the syntax of English, other works are also demonstrating its applicability to the syntax and morphology of other languages.[5] Two descriptions and two definitions of the catena unit are now given. An understanding of the catena is established by distinguishing between the catena and other, similarly defined units. There are four units (including the catena) that are pertinent in this regard:string,catena,component, andconstituent. The informal definition of the catena is repeated for easy comparison with the definitions of the other three units:[8] A component is complete if it includes all the elements that its root node dominates. The string and catena complement each other in an obvious way, and the definition of the constituent is essentially the same as one finds in most theories of syntax, where a constituent is understood to consist ofany node plus all the nodes that that node dominates. These definitions will now be illustrated with the help of the following dependency tree. The capital letters serve to abbreviate the words: All of the distinct strings, catenae, components, and constituents in this tree are listed here:[9] Noteworthy is the fact that the tree contains 39 distinct word combinations that are not catenae, e.g. AC, BD, CE, BCE, ADF, ABEF, ABDEF, etc. Observe as well that there are a mere six constituents, but 24 catenae. There are therefore four times more catenae in the tree than there are constituents. The inclusivity and flexibility of the catena unit becomes apparent. The following Venn diagram provides an overview of how the four units relate to each other: The catena concept has been present in linguistics for a few decades. In the 1970s, the German dependency grammarian Jürgen Kunze called the unit aTeilbaum'subtree'.[10]In the early 1990s, the psycholinguists Martin Pickering and Guy Barry acknowledged the catena unit, calling it adependency constituent.[11]However, the catena concept did not generate much interest among linguists until William O'Grady observed in his 1998 article that the words that form idioms are stored as catenae in the lexicon.[12]O'Grady called the relevant syntactic unit achain, however, not acatena. 
The termcatenawas introduced later by Timothy Osborne and colleagues as a means of avoiding confusion with the preexisting chain concept ofMinimalisttheory.[13]Since that time, the catena concept has been developed beyond O'Grady's analysis of idioms to serve as the basis for the analysis of a number central phenomena in the syntax of natural languages (e.g. ellipsis and predicate–argument structures).[14] Idiosyncratic language of all sorts can be captured in terms of catenae. When meaning is constructed in such a manner that does not allow one to acknowledge meaning chunks as constituents, the catena is involved. The meaning-bearing units are catenae, not constituents. This situation is illustrated here in terms of various collocations and proper idioms. Simple collocations (i.e. the co-occurrence of certain words) demonstrate well the catena concept. The idiosyncratic nature ofparticle verbcollocations provide the first group of examples:take after,take in,take on,take over,take up, etc. In its purest form, the verbtakemeans 'seize, grab, possess'. In these collocations with the various particles, however, the meaning oftakeshifts significantly each time depending on the particle. The particle andtakeconvey a distinct meaning together, whereby this distinct meaning cannot be understood as a straightforward combination of the meaning oftakealone and the meaning of the preposition alone. In such cases, one says that the meaning isnon-compositional. Non-compositional meaning can be captured in terms of catenae. The word combinations that assume non-compositional meaning form catenae (but not constituents): Both sentences a and b show that while the verb and its particle do not form a constituent, they do form a catena each time. The contrast in word order across the sentences of each pair illustrates what is known asshifting. Shifting occurs to accommodate the relative weight of the constituents involved. Heavy constituents prefer to appear to the right of lighter sister constituents. The shifting does not change the fact that the verb and particle form a catena each time, even when they do not form a string. Numerous verb-preposition combinations are idiosyncratic collocations insofar as the choice of preposition is strongly restricted by the verb, e.g.account for,count on,fill out,rely on,take after,wait for, etc. The meaning of many of these combinations is also non-compositional, as with the particle verbs. And also as with the particle verbs, the combinations form catenae (but not constituents) in simple declarative sentences: The verb and the preposition that it demands form a single meaning-bearing unit, whereby this unit is a catena. These meaning-bearing units can thus be stored as catenae in the mental lexicon of speakers. As catenae, they are concrete units of syntax. The final type of collocations produced here to illustrate catenae is the complex preposition, e.g.because of,due to,inside of,in spite of,out of,outside of, etc. The intonation pattern for these prepositions suggests that orthographic conventions are correct in writing them as two (or more) words. This situation, however, might be viewed as a problem, since it is not clear that the two words each time can be viewed as forming a constituent. In this regard, they do of course qualify as a catena, e.g. The collocations illustrated in this section have focused mainly on prepositions and particles and they are therefore just a small selection of meaning-bearing collocations. They are, however, quite suggestive. 
It seems likely that all meaning-bearing collocations are stored as catenae in the mental lexicon of language users. Full idioms are the canonical cases of non-compositional meaning. The fixed words of idioms do not bear their productive meaning, e.g.take it on the chin. Someone who "takes it on the chin" does not actually experience any physical contact to their chin, which means thatchindoes not have its normal productive meaning and must hence be part of a greater collocation. This greater collocation is the idiom, which consists of five words in this case. While the idiomtake it on the chincan be stored as a VP constituent (and is therefore not a problem for constituent-based theories), there are many idioms that clearly cannot be stored as constituents. These idioms are a problem for constituent-based theories precisely because they do not qualify as constituents. However, they do of course qualify as catenae. The discussion here focuses on these idioms since they illustrate particularly well the value of the catena concept. Many idioms in English consist of a verb and a noun (and more), whereby the noun takes a possessor that co-indexed with the subject and will thus vary with subject. These idioms are stored as catenae but clearly not as constituents, e.g. Similar idioms have a possessor that is freer insofar as it is not necessarily co-indexed with the subject. These idioms are also stored as catenae (but not as constituents),[15]e.g. The following idioms include the verb, and object, and at least one preposition. It should again be obvious that the fixed words of the idioms can in no way be viewed as forming constituents: The following idioms include the verb and the prepositional phrase at the same time that the object is free: And the following idioms involving a ditransitive verb include the second object at the same time that the first object is free: Certainly sayings are also idiomatic. When an adverb (or some other adjunct) appears in a saying, it is not part of the saying. Nevertheless, the words of the saying still form a catena: Ellipsismechanisms (gapping, stripping, VP-ellipsis, pseudogapping, answer fragments, sluicing, comparative deletion) are eliding catenae, whereby many of these catenae are non-constituents.[16]The following examples illustrategapping:[17] Clauses a are acceptable instances of gapping; the gapped material corresponds to the catena in green. Clauses b are failed attempts at gapping; they fail because the gapped material does not correspond to a catena. The following examples illustratestripping. Many linguists see stripping as a particular manifestation of gapping where just a single remnant remains in the gapped/stripped clause:[18] Clauses a are acceptable instances of stripping, in part because the stripped material corresponds to a catena (in green). Clauses b again fail; they fail because the stripped material does not qualify as a catena. The following examples illustrate answer ellipsis:[19] In each of the acceptable answer fragments (a–e), the elided material corresponds to a catena. In contrast, the elided material corresponds to a non-catena in each of the unacceptable answer fragments (f–h). An analysis ofVP-ellipsisusing thecatenaaims to captureantecedent contained deletionwithoutquantifier raising.[20] Both the elided material (in light grey) and the antecedent (in bold) to the elided material qualify as catenae. As catenae, both are concrete units of syntactic analysis. 
The need for a movement-type analysis (in terms of QR or otherwise) does not occur. One can note that the second of the two examples is an instance ofpseudogapping, pseudogapping being a particular manifestation of VP-ellipsis. Two additional complex examples further illustrate how a catena-based analysis ofanswer fragmentsworks: While the elided material shown in light gray certainly cannot be construed as a constituent, it does qualify as a catena (because it forms a subtree). The following example shows that even when the answer contains two fragments, the elided material still qualifies as a catena: Such answers that contain two (or even more) fragments are rare in English (although they are more common in other languages) and may be less than fully acceptable. A movement analysis of this answer fragment would have to assume that bothSusanandLarryhave moved out of the encompassing constituent so that that constituent can be elided. The catena-based analysis, in contrast, does not need to appeal to movement in this way. In the following example ofpseudogapping, the elided words qualify as a catena in surface syntax, which means movement is not necessary: The elided words in light gray qualify as a catena (but not a constituent). Thus if the catena is taken as the fundamental unit of syntactic analysis, the analysis of pseudogapping can remain entirely with what is present on the surface. The catena unit is suited to an understanding ofpredicatesand theirarguments[21]—a predicate is a property that is assigned to an argument or as a relationship that is established between arguments. A given predicate appears in sentence structure as a catena, and so do its arguments. A standard matrix predicate in a sentence consists of a content verb and potentially one or more auxiliary verbs. The next examples illustrate how predicates and their arguments are manifest in synonymous sentences across languages: The words in green are the main predicate and those in red are that predicate's arguments. The single-word predicatesaidin the English sentence on the left corresponds to the two-word predicatehat gesagtin German. Each predicate shown and each of its arguments shown is a catena. The next example is similar, but this time a French sentence is used to make the point: The matrix predicates are again in green, and their arguments in red. The arrow dependency edge marks anadjunct—this convention was not employed in the examples further above. In this case, the main predicate in English consists of two words corresponding to one word in French. The next examples delivers a sense of the manner in which the main sentence predicate remains a catena as the number of auxiliary verbs increases: Sentence a contains one auxiliary verb, sentence b two, and sentence c three. The appearance of these auxiliary verbs adds functional information to the core content provided by the content verbrevised. As each additional auxiliary verb is added, the predicate grows, the predicate catena gaining links. When assessing the approach to predicate–argument structures in terms of catenae, it is important to keep in mind that the constituent unit of phrase structure grammar is much less helpful in characterizing the actual word combinations that qualify as predicates and their arguments. This fact should be evident from the examples here, where the word combinations in green would not qualify as constituents in phrase structure grammars.
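The counting argument given earlier (24 catenae versus 6 constituents in a six-word tree) can be reproduced mechanically, since a catena is any set of words that is connected with respect to dominance. The sketch below enumerates the catenae of a small, purely hypothetical four-word dependency tree; the tree, the words and the function names are invented for the example and are not the tree discussed in the article.

```python
from itertools import combinations

# A small, purely hypothetical dependency tree: the verb "likes" is the root;
# "Sam" and "beans" depend on it; "green" depends on "beans".
# Each word is mapped to its head; the root's head is None.
heads = {"likes": None, "Sam": "likes", "beans": "likes", "green": "beans"}

def is_catena(words):
    """True if the given words are connected with respect to dominance, i.e.
    the words plus the head-dependent edges among them form one connected
    piece of the tree (the informal definition of a catena)."""
    words = set(words)
    adjacency = {w: set() for w in words}
    for w in words:
        h = heads[w]
        if h in words:
            adjacency[w].add(h)
            adjacency[h].add(w)
    start = next(iter(words))
    seen, stack = {start}, [start]
    while stack:
        for neighbour in adjacency[stack.pop()]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return seen == words

nodes = list(heads)
catenae = [combo for r in range(1, len(nodes) + 1)
           for combo in combinations(nodes, r) if is_catena(combo)]
print(len(catenae), "catenae out of", 2 ** len(nodes) - 1, "word combinations")
```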
https://en.wikipedia.org/wiki/Catena_(linguistics)
Inmathematics, specifically inalgebraic geometry, thefiber product of schemesis a fundamental construction. It has many interpretations and special cases. For example, the fiber product describes how analgebraic varietyover onefielddetermines a variety over a bigger field, or the pullback of a family of varieties, or a fiber of a family of varieties.Base changeis a closely related notion. Thecategoryofschemesis a broad setting for algebraic geometry. A fruitful philosophy (known asGrothendieck's relative point of view) is that much of algebraic geometry should be developed for amorphism of schemesX→Y(called a schemeXoverY), rather than for a single schemeX. For example, rather than simply studyingalgebraic curves, one can study families of curves over any base schemeY. Indeed, the two approaches enrich each other. In particular, a scheme over acommutative ringRmeans a schemeXtogether with a morphismX→Spec(R). The older notion of an algebraic variety over a fieldkis equivalent to a scheme overkwith certain properties. (There are different conventions for exactly which schemes should be called "varieties". One standard choice is that a variety over a fieldkmeans anintegral separatedscheme offinite typeoverk.[1]) In general, a morphism of schemesX→Ycan be imagined as a family of schemes parametrized by the points ofY. Given a morphism from some other schemeZtoY, there should be a "pullback" family of schemes overZ. This is exactly the fiber productX×YZ→Z. Formally: it is a useful property of the category of schemes that thefiber productalways exists.[2]That is, for any morphisms of schemesX→YandZ→Y, there is a schemeX×YZwith morphisms toXandZ, making the diagram commutative, and which isuniversalwith that property. That is, for any schemeWwith morphisms toXandZwhose compositions toYare equal, there is a unique morphism fromWtoX×YZthat makes the diagram commute. As always with universal properties, this condition determines the schemeX×YZup to a unique isomorphism, if it exists. The proof that fiber products of schemes always do exist reduces the problem to thetensor product of commutative rings(cf.gluing schemes). In particular, whenX,Y, andZare allaffine schemes, soX= Spec(A),Y= Spec(B), andZ= Spec(C) for some commutative ringsA,B,C, the fiber product is the affine scheme The morphismX×YZ→Zis called thebase changeorpullbackof the morphismX→Yvia the morphismZ→Y. In some cases, the fiber product of schemes has a right adjoint, therestriction of scalars. Some important properties P of morphisms of schemes arepreserved under arbitrary base change. That is, ifX→Yhas property P andZ→Yis any morphism of schemes, then the base changeXxYZ→Zhas property P. For example,flat morphisms,smooth morphisms,proper morphisms, and many other classes of morphisms are preserved under arbitrary base change.[5] The worddescentrefers to the reverse question: if the pulled-back morphismXxYZ→Zhas some property P, must the original morphismX→Yhave property P? Clearly this is impossible in general: for example,Zmight be the empty scheme, in which case the pulled-back morphism loses all information about the original morphism. But if the morphismZ→Yis flat and surjective (also calledfaithfully flat) andquasi-compact, then many properties do descend fromZtoY. Properties that descend include flatness, smoothness, properness, and many other classes of morphisms.[6]These results form part ofGrothendieck's theory offaithfully flat descent. 
Example: for any field extensionk⊂E, the morphism Spec(E) → Spec(k) is faithfully flat and quasi-compact. So the descent results mentioned imply that a schemeXoverkis smooth overkif and only if the base changeXEis smooth overE. The same goes for properness and many other properties.
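For reference, the affine case described above and the base change along a field extension can be written out explicitly as follows (standard notation; A and C are regarded as B-algebras via the given morphisms).

```latex
% Affine case: X = Spec(A), Y = Spec(B), Z = Spec(C), with A and C viewed as B-algebras.
X \times_{Y} Z \;=\; \operatorname{Spec}(A \otimes_{B} C)

% Base change along a field extension k \subset E, as in the descent example above:
X_{E} \;:=\; X \times_{\operatorname{Spec}(k)} \operatorname{Spec}(E)
```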
https://en.wikipedia.org/wiki/Fiber_product_of_schemes#Base_change_and_descent
Detection theoryorsignal detection theoryis a means to measure the ability to differentiate between information-bearing patterns (calledstimulusin living organisms,signalin machines) and random patterns that distract from the information (callednoise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field ofelectronics,signal recoveryis the separation of such patterns from a disguising background.[1] According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion, however they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done byradarresearchers.[2]By 1954, the theory was fully developed on the theoretical side as described byPeterson, Birdsall and Fox[3]and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, andJohn A. Swets, also in 1954.[4]Detection theory was used in 1966 by John A. Swets and David M. Green forpsychophysics.[5]Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential)response biases.[6] Detection theory has applications in many fields such asdiagnosticsof any kind,quality control,telecommunications, andpsychology. The concept is similar to thesignal-to-noise ratioused in the sciences andconfusion matricesused inartificial intelligence. It is also usable inalarm management, where it is important to separate important events frombackground noise. Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or duringeyewitness identification.[7][8]SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see alsodecision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect. 
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories: Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like thesensitivity indexd'and A',[9]and response bias can be estimated with statistics like c and β.[9]β is the measure of response bias.[10] Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm. Signal Detection Theory has wide application, both in humans andanimals. Topics includememory, stimulus characteristics of schedules of reinforcement, etc. Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-calledsensitivity indexord'. There are alsonon-parametricmeasures, such as the area under theROC-curve.[6] Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias. Another field which is closely related to signal detection theory is calledcompressed sensing(or compressive sensing). The objective of compressed sensing is to recover high dimensional but with low complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is in the recovery of high dimensional signals which are known to be sparse (or nearly sparse) with only a few linear measurements. The number of measurements needed in the recovery of signals is by far smaller than what Nyquist sampling theorem requires provided that the signal is sparse, meaning that it only contains a few non-zero elements. There are different methods of signal recovery in compressed sensing includingbasis pursuit,expander recovery algorithm[11], CoSaMP[12]and alsofastnon-iterative algorithm.[13]In all of the recovery methods mentioned above, choosing an appropriate measurement matrix using probabilistic constructions or deterministic constructions, is of great importance. 
In other words, measurement matrices must satisfy certain specific conditions such asRIP(Restricted Isometry Property) orNull-Space propertyin order to achieve robust sparse recovery. In the case of making a decision between twohypotheses,H1, absent, andH2, present, in the event of a particularobservation,y, a classical approach is to chooseH1whenp(H1|y) > p(H2|y)andH2in the reverse case.[14]In the event that the twoa posterioriprobabilitiesare equal, one might choose to default to a single choice (either always chooseH1or always chooseH2), or might randomly select eitherH1orH2. Thea prioriprobabilities ofH1andH2can guide this choice, e.g. by always choosing the hypothesis with the highera prioriprobability. When taking this approach, usually what one knows are the conditional probabilities,p(y|H1)andp(y|H2), and thea prioriprobabilitiesp(H1)=π1{\displaystyle p(H1)=\pi _{1}}andp(H2)=π2{\displaystyle p(H2)=\pi _{2}}. In this case, p(H1|y)=p(y|H1)⋅π1p(y){\displaystyle p(H1|y)={\frac {p(y|H1)\cdot \pi _{1}}{p(y)}}}, p(H2|y)=p(y|H2)⋅π2p(y){\displaystyle p(H2|y)={\frac {p(y|H2)\cdot \pi _{2}}{p(y)}}} wherep(y)is the total probability of eventy, p(y|H1)⋅π1+p(y|H2)⋅π2{\displaystyle p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}. H2is chosen in case p(y|H2)⋅π2p(y|H1)⋅π1+p(y|H2)⋅π2≥p(y|H1)⋅π1p(y|H1)⋅π1+p(y|H2)⋅π2{\displaystyle {\frac {p(y|H2)\cdot \pi _{2}}{p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}}\geq {\frac {p(y|H1)\cdot \pi _{1}}{p(y|H1)\cdot \pi _{1}+p(y|H2)\cdot \pi _{2}}}} ⇒p(y|H2)p(y|H1)≥π1π2{\displaystyle \Rightarrow {\frac {p(y|H2)}{p(y|H1)}}\geq {\frac {\pi _{1}}{\pi _{2}}}} andH1otherwise. Often, the ratioπ1π2{\displaystyle {\frac {\pi _{1}}{\pi _{2}}}}is calledτMAP{\displaystyle \tau _{MAP}}andp(y|H2)p(y|H1){\displaystyle {\frac {p(y|H2)}{p(y|H1)}}}is calledL(y){\displaystyle L(y)}, thelikelihood ratio. Using this terminology,H2is chosen in caseL(y)≥τMAP{\displaystyle L(y)\geq \tau _{MAP}}. This is called MAP testing, where MAP stands for "maximuma posteriori"). Taking this approach minimizes the expected number of errors one will make. In some cases, it is far more important to respond appropriately toH1than it is to respond appropriately toH2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying anuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect afalse alarm(i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). TheBayescriterion is an approach suitable for such cases.[14] Here autilityis associated with each of four situations: As is shown below, what is important are the differences,U11−U21{\displaystyle U_{11}-U_{21}}andU22−U12{\displaystyle U_{22}-U_{12}}. Similarly, there are four probabilities,P11{\displaystyle P_{11}},P12{\displaystyle P_{12}}, etc., for each of the cases (which are dependent on one's decision strategy). 
The Bayes criterion approach is to maximize the expected utility: E{U}=P11⋅U11+P21⋅U21+P12⋅U12+P22⋅U22{\displaystyle E\{U\}=P_{11}\cdot U_{11}+P_{21}\cdot U_{21}+P_{12}\cdot U_{12}+P_{22}\cdot U_{22}} E{U}=P11⋅U11+(1−P11)⋅U21+P12⋅U12+(1−P12)⋅U22{\displaystyle E\{U\}=P_{11}\cdot U_{11}+(1-P_{11})\cdot U_{21}+P_{12}\cdot U_{12}+(1-P_{12})\cdot U_{22}} E{U}=U21+U22+P11⋅(U11−U21)−P12⋅(U22−U12){\displaystyle E\{U\}=U_{21}+U_{22}+P_{11}\cdot (U_{11}-U_{21})-P_{12}\cdot (U_{22}-U_{12})} Effectively, one may maximize the sum, U′=P11⋅(U11−U21)−P12⋅(U22−U12){\displaystyle U'=P_{11}\cdot (U_{11}-U_{21})-P_{12}\cdot (U_{22}-U_{12})}, and make the following substitutions: P11=π1⋅∫R1p(y|H1)dy{\displaystyle P_{11}=\pi _{1}\cdot \int _{R_{1}}p(y|H1)\,dy} P12=π2⋅∫R1p(y|H2)dy{\displaystyle P_{12}=\pi _{2}\cdot \int _{R_{1}}p(y|H2)\,dy} whereπ1{\displaystyle \pi _{1}}andπ2{\displaystyle \pi _{2}}are thea prioriprobabilities,P(H1){\displaystyle P(H1)}andP(H2){\displaystyle P(H2)}, andR1{\displaystyle R_{1}}is the region of observation events,y, that are responded to as thoughH1is true. ⇒U′=∫R1{π1⋅(U11−U21)⋅p(y|H1)−π2⋅(U22−U12)⋅p(y|H2)}dy{\displaystyle \Rightarrow U'=\int _{R_{1}}\left\{\pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)-\pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)\right\}\,dy} U′{\displaystyle U'}and thusU{\displaystyle U}are maximized by extendingR1{\displaystyle R_{1}}over the region where π1⋅(U11−U21)⋅p(y|H1)−π2⋅(U22−U12)⋅p(y|H2)>0{\displaystyle \pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)-\pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)>0} This is accomplished by deciding H2 in case π2⋅(U22−U12)⋅p(y|H2)≥π1⋅(U11−U21)⋅p(y|H1){\displaystyle \pi _{2}\cdot (U_{22}-U_{12})\cdot p(y|H2)\geq \pi _{1}\cdot (U_{11}-U_{21})\cdot p(y|H1)} ⇒L(y)≡p(y|H2)p(y|H1)≥π1⋅(U11−U21)π2⋅(U22−U12)≡τB{\displaystyle \Rightarrow L(y)\equiv {\frac {p(y|H2)}{p(y|H1)}}\geq {\frac {\pi _{1}\cdot (U_{11}-U_{21})}{\pi _{2}\cdot (U_{22}-U_{12})}}\equiv \tau _{B}} and H1 otherwise, whereL(y)is the so-definedlikelihood ratio. Das and Geisler[15]extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate andconfusion matrixforideal observersand non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
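The following sketch illustrates the two computations discussed above: estimating d′, the criterion c and β from the four trial categories (hits, misses, false alarms, correct rejections) under the equal-variance Gaussian model, and applying the likelihood-ratio (MAP/Bayes) decision rule to two Gaussian hypotheses. The counts, means, priors and utility differences are invented for the example, and SciPy is assumed to be available.

```python
import math
from scipy.stats import norm   # assumes SciPy is available

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT measures from the four trial categories
    (hit: stimulus present / 'yes'; miss: present / 'no';
    false alarm: absent / 'yes'; correct rejection: absent / 'no').
    Rates of exactly 0 or 1 would need a correction in practice."""
    h = hits / (hits + misses)                               # hit rate
    f = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)                      # sensitivity d'
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))                   # criterion c (response bias)
    beta = math.exp(d_prime * c)                             # likelihood ratio at the criterion
    return d_prime, c, beta

def bayes_decide(y, pi1, pi2, du1=1.0, du2=1.0, mean1=0.0, mean2=1.0, sigma=1.0):
    """Likelihood-ratio test for two Gaussian hypotheses
    H1: y ~ N(mean1, sigma) and H2: y ~ N(mean2, sigma).
    du1 = U11 - U21 and du2 = U22 - U12 are the utility differences;
    with du1 = du2 the threshold reduces to the MAP test pi1 / pi2."""
    L = norm.pdf(y, mean2, sigma) / norm.pdf(y, mean1, sigma)   # L(y)
    tau = (pi1 * du1) / (pi2 * du2)                             # tau_B (or tau_MAP)
    return "H2" if L >= tau else "H1"

print(sdt_measures(hits=40, misses=10, false_alarms=15, correct_rejections=35))
print(bayes_decide(y=0.8, pi1=0.5, pi2=0.5))   # observation near the midpoint of the means
print(bayes_decide(y=0.8, pi1=0.9, pi2=0.1))   # strong prior toward H1
```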
https://en.wikipedia.org/wiki/Detection_theory
Inmathematical optimization, theKarush–Kuhn–Tucker(KKT)conditions, also known as theKuhn–Tucker conditions, arefirst derivative tests(sometimes called first-ordernecessary conditions) for a solution innonlinear programmingto beoptimal, provided that someregularity conditionsare satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method ofLagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.[1] The KKT conditions were originally named afterHarold W. KuhnandAlbert W. Tucker, who first published the conditions in 1951.[2]Later scholars discovered that the necessary conditions for this problem had been stated byWilliam Karushin his master's thesis in 1939.[3][4] Consider the following nonlinear optimization problem instandard form: wherex∈X{\displaystyle \mathbf {x} \in \mathbf {X} }is the optimization variable chosen from aconvex subsetofRn{\displaystyle \mathbb {R} ^{n}},f{\displaystyle f}is theobjectiveorutilityfunction,gi(i=1,…,m){\displaystyle g_{i}\ (i=1,\ldots ,m)}are the inequalityconstraintfunctions andhj(j=1,…,ℓ){\displaystyle h_{j}\ (j=1,\ldots ,\ell )}are the equalityconstraintfunctions. The numbers of inequalities and equalities are denoted bym{\displaystyle m}andℓ{\displaystyle \ell }respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function L(x,μ,λ)=f(x)+μ⊤g(x)+λ⊤h(x)=L(x,α)=f(x)+α⊤(g(x)h(x)){\displaystyle {\mathcal {L}}(\mathbf {x} ,\mathbf {\mu } ,\mathbf {\lambda } )=f(\mathbf {x} )+\mathbf {\mu } ^{\top }\mathbf {g} (\mathbf {x} )+\mathbf {\lambda } ^{\top }\mathbf {h} (\mathbf {x} )=L(\mathbf {x} ,\mathbf {\alpha } )=f(\mathbf {x} )+\mathbf {\alpha } ^{\top }{\begin{pmatrix}\mathbf {g} (\mathbf {x} )\\\mathbf {h} (\mathbf {x} )\end{pmatrix}}} where g(x)=[g1(x)⋮gi(x)⋮gm(x)],h(x)=[h1(x)⋮hj(x)⋮hℓ(x)],μ=[μ1⋮μi⋮μm],λ=[λ1⋮λj⋮λℓ]andα=[μλ].{\displaystyle \mathbf {g} \left(\mathbf {x} \right)={\begin{bmatrix}g_{1}\left(\mathbf {x} \right)\\\vdots \\g_{i}\left(\mathbf {x} \right)\\\vdots \\g_{m}\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {h} \left(\mathbf {x} \right)={\begin{bmatrix}h_{1}\left(\mathbf {x} \right)\\\vdots \\h_{j}\left(\mathbf {x} \right)\\\vdots \\h_{\ell }\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {\mu } ={\begin{bmatrix}\mu _{1}\\\vdots \\\mu _{i}\\\vdots \\\mu _{m}\\\end{bmatrix}},\quad \mathbf {\lambda } ={\begin{bmatrix}\lambda _{1}\\\vdots \\\lambda _{j}\\\vdots \\\lambda _{\ell }\end{bmatrix}}\quad {\text{and}}\quad \mathbf {\alpha } ={\begin{bmatrix}\mu \\\lambda \end{bmatrix}}.}TheKarush–Kuhn–Tucker theoremthen states the following. Theorem—(sufficiency) If(x∗,α∗){\displaystyle (\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })}is asaddle pointofL(x,α){\displaystyle L(\mathbf {x} ,\mathbf {\alpha } )}inx∈X{\displaystyle \mathbf {x} \in \mathbf {X} },μ≥0{\displaystyle \mathbf {\mu } \geq \mathbf {0} }, thenx∗{\displaystyle \mathbf {x} ^{\ast }}is an optimal vector for the above optimization problem. 
(necessity) Suppose thatf(x){\displaystyle f(\mathbf {x} )}andgi(x){\displaystyle g_{i}(\mathbf {x} )},i=1,…,m{\displaystyle i=1,\ldots ,m}, areconvexinX{\displaystyle \mathbf {X} }and that there existsx0∈relint⁡(X){\displaystyle \mathbf {x} _{0}\in \operatorname {relint} (\mathbf {X} )}such thatg(x0)<0{\displaystyle \mathbf {g} (\mathbf {x} _{0})<\mathbf {0} }(i.e.,Slater's conditionholds). Then with an optimal vectorx∗{\displaystyle \mathbf {x} ^{\ast }}for the above optimization problem there is associated a vectorα∗=[μ∗λ∗]{\displaystyle \mathbf {\alpha } ^{\ast }={\begin{bmatrix}\mu ^{*}\\\lambda ^{*}\end{bmatrix}}}satisfyingμ∗≥0{\displaystyle \mathbf {\mu } ^{*}\geq \mathbf {0} }such that(x∗,α∗){\displaystyle (\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })}is a saddle point ofL(x,α){\displaystyle L(\mathbf {x} ,\mathbf {\alpha } )}.[5] Since the idea of this approach is to find asupporting hyperplaneon the feasible setΓ={x∈X:gi(x)≤0,i=1,…,m}{\displaystyle \mathbf {\Gamma } =\left\{\mathbf {x} \in \mathbf {X} :g_{i}(\mathbf {x} )\leq 0,i=1,\ldots ,m\right\}}, the proof of the Karush–Kuhn–Tucker theorem makes use of thehyperplane separation theorem.[6] The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where aclosed-formsolution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[7] Suppose that theobjective functionf:Rn→R{\displaystyle f\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }and the constraint functionsgi:Rn→R{\displaystyle g_{i}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }andhj:Rn→R{\displaystyle h_{j}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }havesubderivativesat a pointx∗∈Rn{\displaystyle x^{*}\in \mathbb {R} ^{n}}. Ifx∗{\displaystyle x^{*}}is alocal optimumand the optimization problem satisfies some regularity conditions (see below), then there exist constantsμi(i=1,…,m){\displaystyle \mu _{i}\ (i=1,\ldots ,m)}andλj(j=1,…,ℓ){\displaystyle \lambda _{j}\ (j=1,\ldots ,\ell )}, called KKT multipliers, such that the following four groups of conditions hold:[8] The last condition is sometimes written in the equivalent form:μigi(x∗)=0,fori=1,…,m.{\displaystyle \mu _{i}g_{i}(x^{*})=0,{\text{ for }}i=1,\ldots ,m.} In the particular casem=0{\displaystyle m=0}, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are calledLagrange multipliers. Theorem—(sufficiency) If there exists a solutionx∗{\displaystyle x^{*}}to the primal problem, a solution(μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})}to the dual problem, such that together they satisfy the KKT conditions, then the problem pair has strong duality, andx∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})}is a solution pair to the primal and dual problems. (necessity) If the problem pair has strong duality, then for any solutionx∗{\displaystyle x^{*}}to the primal problem and any solution(μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})}to the dual problem, the pairx∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})}must satisfy the KKT conditions.[9] First, for thex∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})}to satisfy the KKT conditions is equivalent to them being aNash equilibrium. Fix(μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})}, and varyx{\displaystyle x}: equilibrium is equivalent to primal stationarity. 
Fixx∗{\displaystyle x^{*}}, and vary(μ,λ){\displaystyle (\mu ,\lambda )}: equilibrium is equivalent to primal feasibility and complementary slackness. Sufficiency: the solution pairx∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})}satisfies the KKT conditions, thus is a Nash equilibrium, and therefore closes the duality gap. Necessity: any solution pairx∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})}must close the duality gap, thus they must constitute a Nash equilibrium (since neither side could do any better), thus they satisfy the KKT conditions. The primal problem can be interpreted as moving a particle in the space ofx{\displaystyle x}, and subjecting it to three kinds of force fields: Primal stationarity states that the "force" of∂f(x∗){\displaystyle \partial f(x^{*})}is exactly balanced by a linear sum of forces∂hj(x∗){\displaystyle \partial h_{j}(x^{*})}and∂gi(x∗){\displaystyle \partial g_{i}(x^{*})}. Dual feasibility additionally states that all the∂gi(x∗){\displaystyle \partial g_{i}(x^{*})}forces must be one-sided, pointing inwards into the feasible set forx{\displaystyle x}. Complementary slackness states that ifgi(x∗)<0{\displaystyle g_{i}(x^{*})<0}, then the force coming from∂gi(x∗){\displaystyle \partial g_{i}(x^{*})}must be zero i.e.,μi(x∗)=0{\displaystyle \mu _{i}(x^{*})=0}, since the particle is not on the boundary, the one-sided constraint force cannot activate. The necessary conditions can be written withJacobian matricesof the constraint functions. Letg(x):Rn→Rm{\displaystyle \mathbf {g} (x):\,\!\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}}be defined asg(x)=(g1(x),…,gm(x))⊤{\displaystyle \mathbf {g} (x)=\left(g_{1}(x),\ldots ,g_{m}(x)\right)^{\top }}and leth(x):Rn→Rℓ{\displaystyle \mathbf {h} (x):\,\!\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{\ell }}be defined ash(x)=(h1(x),…,hℓ(x))⊤{\displaystyle \mathbf {h} (x)=\left(h_{1}(x),\ldots ,h_{\ell }(x)\right)^{\top }}. Letμ=(μ1,…,μm)⊤{\displaystyle {\boldsymbol {\mu }}=\left(\mu _{1},\ldots ,\mu _{m}\right)^{\top }}andλ=(λ1,…,λℓ)⊤{\displaystyle {\boldsymbol {\lambda }}=\left(\lambda _{1},\ldots ,\lambda _{\ell }\right)^{\top }}. Then the necessary conditions can be written as: One can ask whether a minimizer pointx∗{\displaystyle x^{*}}of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizerx∗{\displaystyle x^{*}}of a functionf(x){\displaystyle f(x)}in an unconstrained problem has to satisfy the condition∇f(x∗)=0{\displaystyle \nabla f(x^{*})=0}. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples for conditions that guarantee this are tabulated in the following, with the LICQ the most frequently used one: The strict implications can be shown and In practice weaker constraint qualifications are preferred since they apply to a broader selection of problems. In some cases, the necessary conditions are also sufficient for optimality. In general, the necessary conditions are not sufficient for optimality and additional information is required, such as the Second Order Sufficient Conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains its name. 
The necessary conditions are sufficient for optimality if the objective functionf{\displaystyle f}of a maximization problem is a differentiableconcave function, the inequality constraintsgj{\displaystyle g_{j}}are differentiableconvex functions, the equality constraintshi{\displaystyle h_{i}}areaffine functions, andSlater's conditionholds.[11]Similarly, if the objective functionf{\displaystyle f}of a minimization problem is a differentiableconvex function, the necessary conditions are also sufficient for optimality. It was shown by Martin in 1985 that the broader class of functions in which KKT conditions guarantees global optimality are the so-called Type 1invex functions.[12][13] For smooth,non-linear optimizationproblems, a second order sufficient condition is given as follows. The solutionx∗,λ∗,μ∗{\displaystyle x^{*},\lambda ^{*},\mu ^{*}}found in the above section is a constrained local minimum if for the Lagrangian, then, wheres≠0{\displaystyle s\neq 0}is a vector satisfying the following, where only those active inequality constraintsgi(x){\displaystyle g_{i}(x)}corresponding to strict complementarity (i.e. whereμi>0{\displaystyle \mu _{i}>0}) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict. IfsT∇xx2L(x∗,λ∗,μ∗)s=0{\displaystyle s^{T}\nabla _{xx}^{2}L(x^{*},\lambda ^{*},\mu ^{*})s=0}, the third order Taylor expansion of the Lagrangian should be used to verify ifx∗{\displaystyle x^{*}}is a local minimum. The minimization off(x1,x2)=(x2−x12)(x2−3x12){\displaystyle f(x_{1},x_{2})=(x_{2}-x_{1}^{2})(x_{2}-3x_{1}^{2})}is a good counter-example, see alsoPeano surface. Often inmathematical economicsthe KKT approach is used in theoretical models in order to obtain qualitative results. For example,[14]consider a firm that maximizes its sales revenue subject to a minimum profit constraint. LettingQ{\displaystyle Q}be the quantity of output produced (to be chosen),R(Q){\displaystyle R(Q)}be sales revenue with a positive first derivative and with a zero value at zero output,C(Q){\displaystyle C(Q)}be production costs with a positive first derivative and with a non-negative value at zero output, andGmin{\displaystyle G_{\min }}be the positive minimal acceptable level ofprofit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is and the KKT conditions are SinceQ=0{\displaystyle Q=0}would violate the minimum profit constraint, we haveQ>0{\displaystyle Q>0}and hence the third condition implies that the first condition holds with equality. Solving that equality gives Because it was given thatdR/dQ{\displaystyle {\text{d}}R/{\text{d}}Q}anddC/dQ{\displaystyle {\text{d}}C/{\text{d}}Q}are strictly positive, this inequality along with the non-negativity condition onμ{\displaystyle \mu }guarantees thatμ{\displaystyle \mu }is positive and so the revenue-maximizing firm operates at a level of output at whichmarginal revenuedR/dQ{\displaystyle {\text{d}}R/{\text{d}}Q}is less thanmarginal costdC/dQ{\displaystyle {\text{d}}C/{\text{d}}Q}— a result that is of interest because it contrasts with the behavior of aprofit maximizingfirm, which operates at a level at which they are equal. 
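The algebraic step summarised above as "solving that equality" can be reconstructed from the stationarity condition of the Lagrangian for the minimization form given earlier; the display below is such a reconstruction, not a quotation of the original derivation.

```latex
% Stationarity of the Lagrangian for: minimize -R(Q) subject to G_min + C(Q) - R(Q) <= 0
-\frac{\mathrm{d}R}{\mathrm{d}Q}
 + \mu\!\left(\frac{\mathrm{d}C}{\mathrm{d}Q} - \frac{\mathrm{d}R}{\mathrm{d}Q}\right) = 0
\quad\Longrightarrow\quad
\frac{\mathrm{d}R}{\mathrm{d}Q} \;=\; \frac{\mu}{1+\mu}\,\frac{\mathrm{d}C}{\mathrm{d}Q}
\;<\; \frac{\mathrm{d}C}{\mathrm{d}Q}
\qquad\text{for } \mu > 0 .
```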
If we reconsider the optimization problem as a maximization problem with constant inequality constraints gi(x)≤ai{\displaystyle g_{i}(x)\leq a_{i}}, the value function V{\displaystyle V} is defined as the optimal value of f{\displaystyle f} as a function of the constraint levels a{\displaystyle a}, so the domain of V{\displaystyle V} is {a∈Rm∣for some x∈X,gi(x)≤ai,i∈{1,…,m}}.{\displaystyle \{a\in \mathbb {R} ^{m}\mid {\text{for some }}x\in X,g_{i}(x)\leq a_{i},i\in \{1,\ldots ,m\}\}.} Given this definition, each coefficient μi{\displaystyle \mu _{i}} is the rate at which the value function increases as ai{\displaystyle a_{i}} increases. Thus if each ai{\displaystyle a_{i}} is interpreted as a resource constraint, the coefficients indicate how much increasing a resource will increase the optimum value of the function f{\displaystyle f}. This interpretation is especially important in economics and is used, for instance, in utility maximization problems. With an extra multiplier μ0≥0{\displaystyle \mu _{0}\geq 0}, which may be zero (as long as (μ0,μ,λ)≠0{\displaystyle (\mu _{0},\mu ,\lambda )\neq 0}), in front of ∇f(x∗){\displaystyle \nabla f(x^{*})}, the KKT stationarity conditions turn into μ0∇f(x∗)+∑i=1mμi∇gi(x∗)+∑j=1ℓλj∇hj(x∗)=0{\displaystyle \mu _{0}\nabla f(x^{*})+\sum _{i=1}^{m}\mu _{i}\nabla g_{i}(x^{*})+\sum _{j=1}^{\ell }\lambda _{j}\nabla h_{j}(x^{*})=0}, which are called the Fritz John conditions. This optimality condition holds without constraint qualifications and is equivalent to the optimality condition KKT or (not-MFCQ). The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
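As a concrete check of the necessary conditions described earlier (stationarity, primal feasibility, dual feasibility and complementary slackness), the following sketch verifies them at the known minimizer of a small convex problem. The problem data and the multiplier value are chosen purely for illustration.

```python
import numpy as np

# Toy convex problem (data chosen for illustration):
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0.
# The minimizer is x* = (0.5, 0.5) with KKT multiplier mu* = 1.

def grad_f(x):
    return np.array([2 * x[0], 2 * x[1]])

def g(x):
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return np.array([-1.0, -1.0])

x_star = np.array([0.5, 0.5])
mu_star = 1.0

# Stationarity: grad f(x*) + mu* grad g(x*) = 0
stationarity = grad_f(x_star) + mu_star * grad_g(x_star)

# Primal feasibility: g(x*) <= 0;  dual feasibility: mu* >= 0;
# complementary slackness: mu* * g(x*) = 0.
print("stationarity residual:  ", stationarity)                  # ~ [0, 0]
print("primal feasibility:     ", g(x_star) <= 1e-12)
print("dual feasibility:       ", mu_star >= 0)
print("complementary slackness:", abs(mu_star * g(x_star)) < 1e-12)
```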
https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
Single-particle trajectories (SPTs) consist of a collection of successive discrete points causal in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule. Molecules can now be visualized based on recent super-resolution microscopy, which allows routine collection of thousands of short and long trajectories.[1] These trajectories explore part of a cell, either on the membrane or in three dimensions, and their paths are critically influenced by the local crowded organization and molecular interactions inside the cell,[2] as emphasized in various cell types such as neuronal cells,[3] astrocytes, immune cells and many others. SPT allows the observation of moving particles. These trajectories are used to investigate cytoplasm or membrane organization,[4] but also cell nucleus dynamics, remodeler dynamics or mRNA production. Due to the constant improvement of the instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms in order to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data.[5][6][7] Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points.[8] Tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise. The redundancy of many short trajectories (SPTs) is a key feature for extracting biophysical parameters from empirical data at a molecular level.[9] In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is to compute the mean-square displacement (MSD) or second-order statistical moment: for a Brownian motion, ⟨|X(t+Δt)−X(t)|2⟩=2nDΔt{\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle =2nD\Delta t}, where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion.[12] The MSD has been widely used in early applications of long but not necessarily redundant single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points can be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging of the different phases of the particle motion. In the context of cellular transport (amoeboid), high-resolution motion analysis of long SPTs[13] in micro-fluidic chambers containing obstacles revealed different types of cell motions.
Depending on the obstacle density, crawling was found at low density of obstacles, and directed motion and random phases can even be differentiated. Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit, and associated models that account for additional localization point identification noise or a memory kernel.[14] The Langevin equation describes a stochastic particle driven by a Brownian force Ξ{\displaystyle \Xi } and a field of force (e.g., electrostatic, mechanical, etc.) with an expression F(x,t){\displaystyle F(x,t)}: where m is the mass of the particle and Γ=6πaρ{\displaystyle \Gamma =6\pi a\rho } is the friction coefficient of a diffusing particle, ρ{\displaystyle \rho } the viscosity. Here Ξ{\displaystyle \Xi } is the δ{\displaystyle \delta }-correlated Gaussian white noise. The force can be derived from a potential well U so that F(x,t)=−U′(x){\displaystyle F(x,t)=-U'(x)}, and in that case the equation takes the form where ε=kBT{\displaystyle \varepsilon =k_{\text{B}}T} is the energy, kB{\displaystyle k_{\text{B}}} the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters. For example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well,[15] the inertia term allows the particle to move away from the attractor and thus prevents immediate rebinding that could plague numerical simulations. In the large friction limit γ→∞{\displaystyle \gamma \to \infty } the trajectories x(t){\displaystyle x(t)} of the Langevin equation converge in probability to those of the Smoluchowski equation, where w˙(t){\displaystyle {\dot {w}}(t)} is δ{\displaystyle \delta }-correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations. The interpretation of the physical forces is not resolved by the Itô versus Stratonovich integral representations or any others. For a timescale much longer than that of an elementary molecular collision, the position of a tracked particle is described by the more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much coarser than that of the thermal fluctuations, rapid events are not resolved in the data. Thus, at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation where b(X){\displaystyle {b}(X)} is the drift field and Be{\displaystyle {B}_{e}} the diffusion matrix. The effective diffusion tensor can vary in space, D(X)=12B(X)B(X)T{\displaystyle D(X)={\frac {1}{2}}B(X)B(X)^{T}} (T{\textstyle {}^{T}} denotes the transpose). This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved by a spatial scaling to analyse the source of the discontinuity (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient γ{\displaystyle \gamma } remains constant as long as the medium stays the same and the microscopic diffusion coefficient (or tensor) could remain isotropic. The development of statistical methods is based on stochastic models and a possible deconvolution procedure applied to the trajectories.
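A minimal numerical illustration of the overdamped (Smoluchowski) limit discussed above is an Euler–Maruyama simulation of dX = a(X) dt + √(2D) dW in one dimension. The drift, diffusion coefficient and time step below are illustrative values, not parameters taken from any experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of the overdamped (Smoluchowski) limit in one
# dimension, dX = a(X) dt + sqrt(2 D) dW, with an illustrative linear drift
# a(x) = -k x (a harmonic potential well) and a constant diffusion coefficient D.
D, k, dt, n_steps = 0.05, 1.0, 0.01, 2000   # parameters chosen for illustration

x = np.empty(n_steps)
x[0] = 1.0
for i in range(1, n_steps):
    drift = -k * x[i - 1]
    x[i] = x[i - 1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

print(x[:5])   # a synthetic single-particle trajectory sampled at time step dt
```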
Numerical simulations could also be used to identify specific features that could be extracted from single-particle trajectory data.[16] The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer environments. It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or local density of obstacles reflecting the presence of biological objects of different sizes. Several empirical estimators have been proposed to recover the local diffusion coefficient, vector field and even organized patterns in the drift, such as potential wells.[17] The construction of empirical estimators serves to recover physical properties from parametric and non-parametric statistics. Retrieving statistical parameters of a diffusion process from one-dimensional time series uses the first moment estimator or Bayesian inference. The models and the analysis assume that processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only a few slow changes may occur on the surface of a neuron, for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions. The coarse-grained model Eq. 1 is recovered from the conditional moments of the trajectory by computing the increments ΔX = X(t+Δt)−X(t){\displaystyle \Delta X=X(t+\Delta t)-X(t)}: the drift is the limit of E[ΔX | X(t)=x]/Δt and the diffusion tensor is the limit of E[ΔX ΔXᵀ | X(t)=x]/(2Δt) as Δt tends to zero. Here the notation E[⋅|X(t)=x]{\displaystyle E[\cdot \,|\,X(t)=x]} means averaging over all trajectories that are at point x at time t. The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of trajectories in the neighborhood of the point x at time t. In practice, the expectations for a and D are estimated by finite sample averages, and Δt{\displaystyle \Delta t} is the time-resolution of the recorded trajectories. Formulas for a and D are approximated at the time step Δt{\displaystyle \Delta t}, with tens to hundreds of points falling in any bin, which is usually enough for the estimation. To estimate the local drift and diffusion coefficients, trajectories are first grouped within a small neighbourhood. The field of observation is partitioned into square bins S(x_k,r){\displaystyle S(x_{k},r)} of side r and centre x_k{\displaystyle x_{k}}, and the local drift and diffusion are estimated for each square. Considering a sample with N_t{\displaystyle N_{t}} trajectories {x^i(t_1),…,x^i(t_{N_s})}{\displaystyle \{x^{i}(t_{1}),\dots ,x^{i}(t_{N_{s}})\}}, where t_j{\displaystyle t_{j}} are the sampling times, the discretization of the equation for the drift a(x_k)=(a_x(x_k),a_y(x_k)){\displaystyle a(x_{k})=(a_{x}(x_{k}),a_{y}(x_{k}))} at position x_k{\displaystyle x_{k}} is given, for each spatial projection on the x and y axes, by the average of the corresponding increments divided by Δt, where N_k{\displaystyle N_{k}} is the number of points of the trajectories that fall in the square S(x_k,r){\displaystyle S(x_{k},r)}.
Similarly, the components of the effective diffusion tensor D(x_k){\displaystyle D(x_{k})} are approximated by the corresponding empirical sums of squared increments. The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution experiments, such as those acquired with the sptPALM technique on biological samples. The exact inversion of Langevin's equation demands in theory an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of radius r or by moving sliding windows (of the order of 50 to 100 nm). Algorithms based on mapping the density of points extracted from trajectories make it possible to reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. The algorithms can be applied to study regions of high density revealed by SPTs. Examples are organelles such as the endoplasmic reticulum or cell membranes. The method is based on spatiotemporal segmentation to detect local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers.[18]
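A minimal sketch of the bin-wise estimators described above, assuming 2-D trajectories sampled at a fixed time step: each square bin collects the increments of all trajectories passing through it, the local drift is the mean increment divided by Δt, and the local (scalar) diffusion is the mean squared increment divided by 2nΔt. Function and variable names are illustrative, not from a published package.

```python
import numpy as np
from collections import defaultdict

def local_drift_diffusion(trajectories, dt, bin_size):
    """Estimate local drift a(x_k) and scalar diffusion D(x_k) on a square grid.

    trajectories : list of (T_i, 2) arrays of positions
    dt           : acquisition time step
    bin_size     : side r of the square bins S(x_k, r)
    """
    increments = defaultdict(list)
    for traj in trajectories:
        dx = np.diff(traj, axis=0)                           # increments ΔX
        bins = np.floor(traj[:-1] / bin_size).astype(int)    # bin of each starting point
        for k, d in zip(map(tuple, bins), dx):
            increments[k].append(d)

    drift, diffusion = {}, {}
    for k, ds in increments.items():
        ds = np.asarray(ds)
        drift[k] = ds.mean(axis=0) / dt                      # a(x_k) ≈ <ΔX> / Δt
        diffusion[k] = (ds**2).sum(axis=1).mean() / (4 * dt) # D ≈ <|ΔX|²> / (2 n Δt), n = 2
    return drift, diffusion

# Hypothetical usage with simulated pure-diffusion trajectories
rng = np.random.default_rng(2)
trajs = [np.cumsum(rng.normal(scale=0.05, size=(200, 2)), axis=0) for _ in range(500)]
drift, diff = local_drift_diffusion(trajs, dt=0.02, bin_size=0.2)
print("number of occupied bins:", len(diff))
```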
https://en.wikipedia.org/wiki/Single_particle_trajectories
Head/tail breaks is a clustering algorithm for data with a heavy-tailed distribution such as power laws and lognormal distributions. A heavy-tailed distribution can be simply described as the scaling pattern of far more small things than large ones, or alternatively numerous smallest, a very few largest, and some in between the smallest and largest. The classification is done by dividing things into large (called the head) and small (called the tail) around the arithmetic mean or average, and then recursively repeating the division for the large things, or the head, until the notion of far more small things than large ones is no longer valid, or until only more or less similar things are left.[1] Head/tail breaks is not just for classification, but also for visualization of big data by keeping the head, since the head is self-similar to the whole. Head/tail breaks can be applied not only to vector data such as points, lines and polygons, but also to raster data like a digital elevation model (DEM). Head/tail breaks is motivated by the inability of conventional classification methods such as equal intervals, quantiles, geometric progressions, standard deviation, and natural breaks - commonly known as Jenks natural breaks optimization or k-means clustering - to reveal the underlying scaling or living structure with the inherent hierarchy (or heterogeneity) characterized by the recurring notion of far more small things than large ones.[2][3] Note that the notion of far more small things than large ones refers not only to geometric properties, but also to topological and semantic properties. In this connection, the notion should be interpreted as far more unpopular (or less-connected) things than popular (or well-connected) ones, or far more meaningless things than meaningful ones. Head/tail breaks uses the mean or average to dichotomize a dataset into small and large values, rather than characterizing classes by average values, which is unlike k-means clustering or natural breaks. Through the head/tail breaks, a dataset is seen as a living structure with an inherent hierarchy of far more smalls than larges, or recursively perceived as the head of the head of the head and so on. It opens up new avenues of analyzing data from a holistic and organic point of view while considering different types of scales and scaling in spatial analysis.[4] Given some variable X that demonstrates a heavy-tailed distribution, there are far more small x than large ones. Take the average of all xi and obtain the first mean m1. Then calculate the second mean for those xi greater than m1 and obtain m2. In the same recursive way, we can get m3, depending on whether the ending condition of no longer far more small x than large ones is met. For simplicity, we assume there are three means, m1, m2, and m3. This classification leads to four classes: [minimum, m1], (m1, m2], (m2, m3], (m3, maximum]. In general, the procedure can be represented as a recursive function (a sketch is given below). The resulting number of classes is referred to as the ht-index, an alternative index to fractal dimension for characterizing the complexity of fractals or geographic features: the higher the ht-index, the more complex the fractals.[5] The criterion to stop the iterative classification process using the head/tail breaks method is that the remaining data (i.e., the head part) are not heavy-tailed, or simply, that the head part is no longer a minority (i.e., the proportion of the head part is no longer less than a threshold such as 40%).
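A minimal Python sketch of the recursive head/tail breaks procedure described above, returning the class break values (the successive means) and the ht-index. The 40% stopping threshold is the default suggested in the text; the relaxed 50% value used in the example call, and the function name, are illustrative choices.

```python
def head_tail_breaks(data, threshold=0.40, breaks=None):
    """Recursive head/tail breaks: split around the mean while the head stays a minority."""
    if breaks is None:
        breaks = []
    if len(data) < 2:
        return breaks
    mean = sum(data) / len(data)
    head = [x for x in data if x > mean]
    breaks.append(mean)
    # Continue only while there are far more small things than large ones,
    # i.e. the head is at most `threshold` of the data and can still be split.
    if head and len(head) / len(data) <= threshold and len(head) < len(data):
        return head_tail_breaks(head, threshold, breaks)
    return breaks

values = [19, 8, 7, 6, 2, 1, 1, 1, 0]
breaks = head_tail_breaks(values, threshold=0.50)   # relaxed 50% condition
print("break values:", breaks)                      # [5.0, 10.0]
print("ht-index:", len(breaks) + 1)                 # 3 classes for this array
```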
This threshold is suggested to be 40% by Jiang et al. (2013),[6] just as in the code above (i.e., length(head)/length(data) ≤ 40%). This process is called head/tail breaks 1.0. But sometimes a larger threshold, for example 50% or more, can be used, as Jiang and Yin (2014)[5] noted in another article: "this condition can be relaxed for many geographic features, such as 50 percent or even more". However, the percentage of all heads on average must be smaller than 40% (or 41%, 42%), indicating far more small things than large ones. Many real-world datasets cannot be fitted to a perfect long-tailed distribution, and therefore the threshold can be relaxed structurally. In head/tail breaks 2.0 the threshold only applies to the overall heads' percentage.[7] This means that the percentages of all heads relative to their tails should be around 40% on average. Individual classes can have any percentage split around the average, as long as this averages out as a whole. For example, if the data have a clearly defined head and tail during the first and second iterations (length(head)/length(data) < 20%) but a much less well defined long-tailed distribution in the third iteration (60% in the head), head/tail breaks 2.0 allows the iteration to continue into the fourth iteration, which can again be distributed 30% head - 70% tail, and so on. As long as the overall threshold is not surpassed, the head/tail breaks classification holds. A good tool to display the scaling pattern, or the heavy-tailed distribution, is the rank-size plot, which is a scatter plot that displays a set of values according to their ranks. With this tool, a new index[8] termed the ratio of areas (RA) in a rank-size plot was defined to characterize the scaling pattern. The RA index has been successfully used in the estimation of traffic conditions. However, the RA index can only be used as a complementary method to the ht-index, because it is ineffective at capturing the scaling structure of geographic features. In addition to the ht-index, the following indices are also derived with the head/tail breaks. Instead of more or less similar things, there are far more small things than large ones surrounding us. Given the ubiquity of the scaling pattern, head/tail breaks has been found to be of use in statistical mapping, map generalization, cognitive mapping and even the perception of beauty.[6][12][13] It helps visualize big data, since big data are likely to show the scaling property of far more small things than large ones. Essentially, geographic phenomena can be scaleful or scale-free. Scaleful phenomena can be explained by conventional mathematical or geographical operations, but scale-free phenomena cannot. Head/tail breaks can be used to characterize the scale-free phenomena, which are in the majority.[14] The visualization strategy is to recursively drop out the tail parts until the head parts are clear or visible enough.[15][16] In addition, it helps delineate cities - or, more precisely, natural cities - from various geographic information such as street networks, social media geolocation data, and nighttime images. As the head/tail breaks method can be used iteratively to obtain head parts of a data set, this method actually captures the underlying hierarchy of the data set. For example, if we divide the array (19, 8, 7, 6, 2, 1, 1, 1, 0) with the head/tail breaks method, we can get two head parts, i.e., the first head part (19, 8, 7, 6) and the second head part (19).
These two head parts, as well as the original array, form a three-level hierarchy: The number of levels of this hierarchy is actually a characterization of the imbalance of the example array, and this number of levels has been termed the ht-index.[5] With the ht-index, we are able to compare the degrees of imbalance of two data sets. For example, the ht-index of the example array (19, 8, 7, 6, 2, 1, 1, 1, 0) is 3, and the ht-index of another array (19, 8, 8, 8, 8, 8, 8, 8, 8) is 2. Therefore, the degree of imbalance of the former array is higher than that of the latter array. The use of fractals in modelling human geography has long been seen as useful in measuring the spatial distribution of human settlements.[17] Head/tail breaks can be used to do just that with a concept called natural cities. The term 'natural cities' refers to the human settlements, or human activities in general on Earth's surface, that are naturally or objectively defined and delineated from massive geographic information based on the head/tail division rule, a non-recursive form of head/tail breaks.[18][19] Such geographic information could come from various sources, such as massive numbers of street junctions[19] and street ends, a massive number of street blocks, nighttime imagery, and social media users' locations. Based on these, the different urban forms and configurations detected in cities can be derived.[20] In contrast with conventional cities, the adjective 'natural' can be explained not only by the sources of natural cities, but also by the approach used to derive them.[1] Natural cities are derived from a meaningful cutoff averaged from a massive number of units extracted from geographic information.[15] Those units vary according to the kind of geographic information; for example, the units could be area units for street blocks and pixel values for nighttime images.[21] A natural cities model has been created using the ArcGIS model builder;[22] it follows the same process of deriving natural cities from location-based social media,[18] namely building up a huge triangular irregular network (TIN) based on the point features (street nodes in this case) and regarding the triangles which are smaller than a mean value as the natural cities. These natural cities can also be created from other open-access information like OpenStreetMap and can further be used as an alternative delineation of administrative boundaries.[23] A scaling law can at the same time be correctly identified, and administrative borders can be created to respect it through the delineation of the natural cities.[24][25] This type of methodology can help urban geographers and planners by correctly identifying the effective urban territorial scope of the areas they work in.[26] Natural cities can vary depending on the scale at which they are delineated, which is why, optimally, they have to be based on data from the whole world. Since that is computationally impossible, a country or county scale is suggested as an alternative.[27] Due to the scale-free nature of natural cities and the data they are based on, there are also possibilities to use the natural cities method for further measurements. One of the main advantages of natural cities is that they are derived bottom-up instead of top-down.
That means that the borders are determined by data about something physical rather than by an administrative government or administration.[28] For example, by calculating the natural cities of a natural city recursively, the dense areas within a natural city are identified. These can be seen as city centers, for example. By using the natural cities method in this way, further border delineations can be made depending on the scale at which the natural cities were generated.[29] Natural cities derived from smaller regional areas will provide less accurate but still usable results in certain analyses, such as determining urban expansion over time.[30] As mentioned before though, natural cities should optimally be based on a massive amount of data, for example street intersections, for an entire country or even the world. This is because natural cities are based on the wisdom of crowds thinking, which needs the biggest set of available data for the best results. Also note that the structure of natural cities can be considered to be fractal in nature.[31] It is important, when head/tail breaks are used to generate natural cities, that the data is not aggregated afterwards. For example, the number of generated natural cities can only be known after they are generated. It is not possible to use a pre-defined number of cities for an area or country and aggregate the results of the natural cities to administratively determined city borders. Natural cities should naturally follow Zipf's law; if they do not, the area is most likely too small, or the data has probably been processed incorrectly. An example of this is seen in a study where head/tail breaks were used to extract natural cities, but the results were aggregated to administrative borders, from which it was concluded that the cities do not follow Zipf's law.[32] This happens more often in science, where papers sometimes produce results that are in fact false.[33] Current color renderings for DEMs or density maps are essentially based on conventional classifications such as natural breaks or equal intervals, so they disproportionately exaggerate high elevations or high densities. As a matter of fact, there are not so many high elevations or high-density locations.[34] It was found that coloring based on head/tail breaks is more favorable than coloring based on other classifications.[35][36][2] The pattern of far more small things than large ones frequently recurs in geographical data. A spiral layout inspired by the golden ratio or Fibonacci sequence can help visualize this recursive notion of scaling hierarchy and the different levels of scale.[37][38] In other words, from the smallest to the largest scale, a map can be seen as a map of a map of a map, and so on. Other applications of head/tail breaks: The following implementations are available under Free/Open Source Software licenses.
https://en.wikipedia.org/wiki/Head/tail_breaks
In mathematics, a well-posed problem is one for which the following properties hold:[a] a solution exists, the solution is unique, and the solution's behaviour changes continuously with the initial conditions (data). Examples of archetypal well-posed problems include the Dirichlet problem for Laplace's equation, and the heat equation with specified initial conditions. These might be regarded as 'natural' problems in that there are physical processes modelled by these problems. Problems that are not well-posed in the sense above are termed ill-posed. A simple example is a global optimization problem, because the location of the optima is generally not a continuous function of the parameters specifying the objective, even when the objective itself is a smooth function of those parameters. Inverse problems are often ill-posed; for example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data. Continuum models must often be discretized in order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer from numerical instability when solved with finite precision, or with errors in the data. Even if a problem is well-posed, it may still be ill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinear complex systems (so-called chaotic systems) provide well-known examples of instability. An ill-conditioned problem is indicated by a large condition number. If the problem is well-posed, then it stands a good chance of solution on a computer using a stable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of the solution. This process is known as regularization.[1] Tikhonov regularization is one of the most commonly used methods for the regularization of linear ill-posed problems. The existence of local solutions is often an important part of the well-posedness problem, and it is the foundation of many estimate methods, for example the energy method below. There are many results on this topic. For example, the Cauchy–Kowalevski theorem for Cauchy initial value problems essentially states that if the terms in a partial differential equation are all made up of analytic functions and a certain transversality condition is satisfied (the hyperplane or more generally hypersurface where the initial data are posed must be non-characteristic with respect to the partial differential operator), then on certain regions there necessarily exist solutions which are analytic functions as well. This is a fundamental result in the study of analytic partial differential equations. Surprisingly, the theorem does not hold in the setting of smooth functions; an example discovered by Hans Lewy in 1957 consists of a linear partial differential equation whose coefficients are smooth (i.e., have derivatives of all orders) but not analytic, and for which no solution exists. So the Cauchy–Kowalevski theorem is necessarily limited in its scope to analytic functions. The energy method is useful for establishing both uniqueness and continuity with respect to initial conditions (i.e. it does not establish existence). The method is based upon deriving an upper bound of an energy-like functional for a given problem. Example: Consider the diffusion equation on the unit interval with homogeneous Dirichlet boundary conditions and suitable initial data f(x){\displaystyle f(x)} (e.g. for which f(0)=f(1)=0{\displaystyle f(0)=f(1)=0}).
ut=Duxx,0<x<1,t>0,D>0,u(x,0)=f(x),u(0,t)=0,u(1,t)=0,{\displaystyle {\begin{aligned}u_{t}&=Du_{xx},&&0<x<1,\,t>0,\,D>0,\\u(x,0)&=f(x),\\u(0,t)&=0,\\u(1,t)&=0,\\\end{aligned}}} Multiply the equation ut=Duxx{\displaystyle u_{t}=Du_{xx}} by u{\displaystyle u} and integrate in space over the unit interval to obtain ∫01uutdx=D∫01uuxxdx⟹∫0112∂tu2dx=Duux|01−D∫01(ux)2dx⟹12∂t‖u‖22=0−D∫01(ux)2dx≤0{\displaystyle {\begin{aligned}&&\int _{0}^{1}uu_{t}dx&=D\int _{0}^{1}uu_{xx}dx\\\Longrightarrow &&\int _{0}^{1}{\frac {1}{2}}\partial _{t}u^{2}dx&=Duu_{x}{\Big |}_{0}^{1}-D\int _{0}^{1}(u_{x})^{2}dx\\\Longrightarrow &&{\frac {1}{2}}\partial _{t}\|u\|_{2}^{2}&=0-D\int _{0}^{1}(u_{x})^{2}dx\leq 0\end{aligned}}} This tells us that ‖u‖2{\displaystyle \|u\|_{2}} (the L2 norm) cannot grow in time. By multiplying by two and integrating in time, from 0{\displaystyle 0} up to t{\displaystyle t}, one finds ‖u(⋅,t)‖22≤‖f(⋅)‖22{\displaystyle \|u(\cdot ,t)\|_{2}^{2}\leq \|f(\cdot )\|_{2}^{2}} This result is the energy estimate for this problem. To show uniqueness of solutions, assume there are two distinct solutions to the problem, call them u{\displaystyle u} and v{\displaystyle v}, each satisfying the same initial data. Upon defining w=u−v{\displaystyle w=u-v} then, via the linearity of the equations, one finds that w{\displaystyle w} satisfies wt=Dwxx,0<x<1,t>0,D>0,w(x,0)=0,w(0,t)=0,w(1,t)=0,{\displaystyle {\begin{aligned}w_{t}&=Dw_{xx},&&0<x<1,\,t>0,\,D>0,\\w(x,0)&=0,\\w(0,t)&=0,\\w(1,t)&=0,\\\end{aligned}}} Applying the energy estimate tells us ‖w(⋅,t)‖22≤0{\displaystyle \|w(\cdot ,t)\|_{2}^{2}\leq 0} which implies u=v{\displaystyle u=v} (almost everywhere). Similarly, to show continuity with respect to initial conditions, assume that u{\displaystyle u} and v{\displaystyle v} are solutions corresponding to different initial data u(x,0)=f(x){\displaystyle u(x,0)=f(x)} and v(x,0)=g(x){\displaystyle v(x,0)=g(x)}. Considering w=u−v{\displaystyle w=u-v} once more, one finds that w{\displaystyle w} satisfies the same equations as above but with w(x,0)=f(x)−g(x){\displaystyle w(x,0)=f(x)-g(x)}. This leads to the energy estimate ‖w(⋅,t)‖22≤‖f(⋅)−g(⋅)‖22{\displaystyle \|w(\cdot ,t)\|_{2}^{2}\leq \|f(\cdot )-g(\cdot )\|_{2}^{2}} which establishes continuity (i.e. as f{\displaystyle f} and g{\displaystyle g} become closer, as measured by the L2{\displaystyle L^{2}} norm of their difference, ‖w(⋅,t)‖2→0{\displaystyle \|w(\cdot ,t)\|_{2}\to 0}). The maximum principle is an alternative approach to establish uniqueness and continuity of solutions with respect to initial conditions for this example. The existence of solutions to this problem can be established using Fourier series. If the solution to a Cauchy problem ∂u∂t=Au,u(0)=u0(1){\displaystyle {\frac {\partial u}{\partial t}}=Au,u(0)=u_{0}{\text{ (1)}}}, where A is a linear operator mapping a dense linear subspace D(A) of X into X, can be written as u(t)=S(t)u0{\displaystyle u(t)=S(t)u_{0}}, where {S(t);t≥0}{\displaystyle \{S(t);t\geq 0\}} is a family of linear operators on X satisfying the properties of a strongly continuous semigroup, then (1) is well-posed. The Hille–Yosida theorem states the criteria on A for such a {S(t);t≥0}{\displaystyle \{S(t);t\geq 0\}} to exist.
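As a quick numerical illustration of the energy estimate above, the following Python sketch solves the same diffusion problem with an explicit finite-difference scheme and checks that the discrete L2 norm of the solution never exceeds that of the initial data. The grid sizes and the initial condition are arbitrary choices for the demonstration.

```python
import numpy as np

D = 1.0
nx, nt = 101, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / D                     # respects the explicit stability limit dt <= dx^2 / (2D)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                    # initial data f(x), with f(0) = f(1) = 0

def l2_norm(v):
    return np.sqrt(np.sum(v**2) * dx)    # discrete approximation of the L2 norm on [0, 1]

norm_f = l2_norm(u)
for _ in range(nt):
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                   # homogeneous Dirichlet boundary conditions
    assert l2_norm(u) <= norm_f + 1e-12  # energy estimate: ||u(.,t)||_2 <= ||f||_2

print("final L2 norm:", l2_norm(u), "<= initial L2 norm:", norm_f)
```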
https://en.wikipedia.org/wiki/Ill-posed_problem
TheOmega ratiois a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Con Keating and William F. Shadwick in 2002 and is defined as the probability weighted ratio of gains versus losses for some threshold return target.[1]The ratio is an alternative for the widely usedSharpe ratioand is based on information the Sharpe ratio discards. Omega is calculated by creating a partition in the cumulative return distribution in order to create an area of losses and an area for gains relative to this threshold. The ratio is calculated as: whereF{\displaystyle F}is thecumulative probability distribution functionof the returns andθ{\displaystyle \theta }is the target return threshold defining what is considered a gain versus a loss. A larger ratio indicates that the asset provides more gains relative to losses for some thresholdθ{\displaystyle \theta }and so would be preferred by an investor. Whenθ{\displaystyle \theta }is set to zero the gain-loss-ratio by Bernardo and Ledoit arises as a special case.[2] Comparisons can be made with the commonly usedSharpe ratiowhich considers the ratio of return versus volatility.[3]The Sharpe ratio considers only the first twomomentsof the return distribution whereas the Omega ratio, by construction, considers all moments. The standard form of the Omega ratio is a non-convex function, but it is possible to optimize a transformed version usinglinear programming.[4]To begin with, Kapsos et al. show that the Omega ratio of a portfolio is:Ω(θ)=wTE⁡(r)−θE⁡[(θ−wTr)+]+1{\displaystyle \Omega (\theta )={w^{T}\operatorname {E} (r)-\theta \over {\operatorname {E} [(\theta -w^{T}r)_{+}]}}+1}The optimization problem that maximizes the Omega ratio is given by:maxwwTE⁡(r)−θE⁡[(θ−wTr)+],s.t.wTE⁡(r)≥θ,wT1=1,w≥0{\displaystyle \max _{w}{w^{T}\operatorname {E} (r)-\theta \over {\operatorname {E} [(\theta -w^{T}r)_{+}]}},\quad {\text{s.t. }}w^{T}\operatorname {E} (r)\geq \theta ,\;w^{T}{\bf {1}}=1,\;w\geq 0}The objective function is non-convex, so several modifications are made. First, note that the discrete analogue of the objective function is:wTE⁡(r)−θ∑jpj(θ−wTr)+{\displaystyle {w^{T}\operatorname {E} (r)-\theta \over {\sum _{j}p_{j}(\theta -w^{T}r)_{+}}}}Form{\displaystyle m}sampled asset class returns, letuj=(θ−wTrj)+{\displaystyle u_{j}=(\theta -w^{T}r_{j})_{+}}andpj=m−1{\displaystyle p_{j}=m^{-1}}. Then the discrete objective function becomes:wTE⁡(r)−θm−11Tu∝wTE⁡(r)−θ1Tu{\displaystyle {w^{T}\operatorname {E} (r)-\theta \over {m^{-1}{\bf {1}}^{T}u}}\propto {w^{T}\operatorname {E} (r)-\theta \over {{\bf {1}}^{T}u}}}Following these substitutions, the non-convex optimization problem is transformed into an instance oflinear-fractional programming. Assuming that the feasible region is non-empty and bounded, it is possible to transform a linear-fractional program into a linear program. Conversion from a linear-fractional program to a linear program yields the final form of the Omega ratio optimization problem:maxy,q,zyTE⁡(r)−θzs.t.yTE⁡(r)≥θz,qT1=1,yT1=zqj≥θz−yTrj,q,z≥0,zL≤y≤zU{\displaystyle {\begin{aligned}\max _{y,q,z}{}&y^{T}\operatorname {E} (r)-\theta z\\{\text{s.t. }}&y^{T}\operatorname {E} (r)\geq \theta z,\;q^{T}{\bf {1}}=1,\;y^{T}{\bf {1}}=z\\&q_{j}\geq \theta z-y^{T}r_{j},\;q,z\geq 0,\;z{\mathcal {L}}\leq y\leq z{\mathcal {U}}\end{aligned}}}whereL,U{\displaystyle {\mathcal {L}},\;{\mathcal {U}}}are the respective lower and upper bounds for the portfolio weights. 
To recover the portfolio weights, normalize the values ofy{\displaystyle y}so that their sum is equal to 1.
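A small Python sketch of the quantities discussed above: it computes the empirical Omega ratio of a return series for a given threshold, as the probability-weighted gains above θ divided by the probability-weighted losses below θ (equivalent to the partition of the cumulative return distribution described earlier), and then normalizes a solved y vector into portfolio weights. The sample returns and the y vector are made up for the illustration; solving the linear program itself would require an LP solver and is not shown.

```python
import numpy as np

def omega_ratio(returns, theta):
    """Empirical Omega ratio: expected gains above theta over expected losses below theta."""
    returns = np.asarray(returns, dtype=float)
    gains = np.maximum(returns - theta, 0.0).mean()    # E[(r - theta)+]
    losses = np.maximum(theta - returns, 0.0).mean()   # E[(theta - r)+]
    return gains / losses

# Made-up monthly returns of a single asset
r = np.array([0.02, -0.01, 0.03, 0.015, -0.02, 0.01, 0.005, -0.005, 0.025, 0.0])
print("Omega(0%):", omega_ratio(r, 0.0))
print("Omega(1%):", omega_ratio(r, 0.01))

# After solving the linear program in (y, q, z), the weights are recovered by normalizing y:
y = np.array([0.8, 1.2, 2.0])            # hypothetical solver output
w = y / y.sum()
print("portfolio weights:", w)
```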
https://en.wikipedia.org/wiki/Omega_ratio
A semantic feature is a component of the concept associated with a lexical item ('female' + 'performer' = 'actress'). More generally, it can also be a component of the concept associated with any grammatical unit, whether composed or not ('female' + 'performer' = 'the female performer' or 'the actress').[1] An individual semantic feature constitutes one component of a word's intension, which is the inherent sense or concept evoked.[2] The linguistic meaning of a word is proposed to arise from contrasts and significant differences with other words. Semantic features enable linguists to explain how words that share certain features may be members of the same semantic domain. Correspondingly, the contrast in the meanings of words is explained by diverging semantic features. For example, father and son share the common components of "human", "kinship", "male" and are thus part of a semantic domain of male family relations. They differ in terms of "generation" and "adulthood", which is what gives each its individual meaning.[3] The analysis of semantic features is utilized in the field of linguistic semantics, more specifically the subfields of lexical semantics[4] and lexicology.[5] One aim of these subfields is to explain the meaning of a word in terms of its relationships with other words.[6] In order to accomplish this aim, one approach is to analyze the internal semantic structure of a word as composed of a number of distinct and minimal components of meaning.[7] This approach is called componential analysis, also known as semantic decomposition.[8] Semantic decomposition allows any given lexical item to be defined based on minimal elements of meaning, which are called semantic features. The term semantic feature is usually used interchangeably with the term semantic component.[9] Additionally, semantic features/semantic components are also often referred to as semantic properties.[10] The theory of componential analysis and semantic features is not the only approach to analyzing the semantic structure of words. An alternative direction of research that contrasts with componential analysis is prototype semantics.[9] The semantic features of a word can be notated using a binary feature notation common to the framework of componential analysis.[11] A semantic property is specified in square brackets, and a plus or minus sign indicates the existence or non-existence of that property.[12] Intersecting semantic classes share the same features. Some features need not be specifically mentioned, as their presence or absence is obvious from another feature. This is a redundancy rule.
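To illustrate the binary feature notation and componential analysis described above, here is a small Python sketch in which a few lexical items are represented as +/- feature vectors and compared by their shared and contrasting features; the particular words and features are a toy example, not a linguistic resource.

```python
# Toy componential analysis: each word is a set of binary (+/-) semantic features.
lexicon = {
    "father": {"human": True,  "kinship": True,  "male": True,  "adult": True},
    "son":    {"human": True,  "kinship": True,  "male": True,  "adult": False},
    "mother": {"human": True,  "kinship": True,  "male": False, "adult": True},
    "colt":   {"human": False, "kinship": False, "male": True,  "adult": False},
}

def shared_features(w1, w2):
    """Features on which two words agree (the basis of a common semantic domain)."""
    f1, f2 = lexicon[w1], lexicon[w2]
    return {k: v for k, v in f1.items() if k in f2 and f2[k] == v}

def contrasting_features(w1, w2):
    """Features on which the words differ, which give each its individual meaning."""
    f1, f2 = lexicon[w1], lexicon[w2]
    return sorted(k for k in f1 if k in f2 and f1[k] != f2[k])

print(shared_features("father", "son"))       # {'human': True, 'kinship': True, 'male': True}
print(contrasting_features("father", "son"))  # ['adult']
```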
https://en.wikipedia.org/wiki/Semantic_feature
A container port, container terminal, or intermodal terminal is a facility where cargo containers are transshipped between different transport vehicles for onward transportation. The transshipment may be between container ships and land vehicles, for example trains or trucks, in which case the terminal is described as a maritime container port. Alternatively, the transshipment may be between land vehicles, typically between train and truck, in which case the terminal is described as an inland container port. In November 1932, the first inland container port in the world was opened by the Pennsylvania Railroad company in Enola, Pennsylvania.[1] Port Newark-Elizabeth on the Newark Bay in the Port of New York and New Jersey is considered the world's first maritime container port. On April 26, 1956, the Ideal X was rigged for an experiment to use standardized cargo containers that were stacked and then unloaded to a compatible truck chassis at Port Newark. The concept had been developed by the McLean Trucking Company. On August 15, 1962, the Port Authority of New York and New Jersey opened the world's first container port, Elizabeth Marine Terminal.[2] Maritime container ports tend to be part of a larger port, and the biggest maritime container ports can be found situated around major harbours. Inland container ports tend to be located in or near major cities, with good rail connections to maritime container ports. It is common for cargo that arrives at a container port on a single ship to be distributed over several modes of transportation for delivery to inland customers. According to a manager from the Port of Rotterdam, it may be fairly typical for the cargo of a large 18,000 TEU container ship to be distributed over 19 container trains (74 TEU each), 32 barges (97 TEU each) and 1,560 trucks (1.6 TEU each, on average).[3] A more recent example of a container terminal, opened in April 2015, is APM Terminals Maasvlakte II, which adopts the advanced technology of remotely controlled STS gantry cranes together with concepts of sustainability, renewable energy, and zero carbon dioxide emissions.[4] Both maritime and inland container ports usually provide storage facilities for both loaded and empty containers. Loaded containers are stored for relatively short periods whilst waiting for onward transportation, whilst unloaded containers may be stored for longer periods awaiting their next use. Containers are normally stacked for storage, and the resulting stores are known as container stacks. In recent years, methodological advances regarding container port operations, such as the container port design process, have been considerable. For a detailed description and a comprehensive list of references see, e.g., the operations research literature.[5][6] This is a list of the world's top 10 largest container port operators in 2024 according to Lloyd's List.[7]
https://en.wikipedia.org/wiki/Container_terminal
Answer set programming (ASP) is a form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers - programs for generating stable models - are used to perform search. The computational process employed in the design of many answer set solvers is an enhancement of the DPLL algorithm and, in principle, it always terminates (unlike Prolog query evaluation, which may lead to an infinite loop). In a more general sense, ASP includes all applications of answer sets to knowledge representation and reasoning[1][2] and the use of Prolog-style query evaluation for solving problems arising in these applications. An early example of answer set programming was the planning method proposed in 1997 by Dimopoulos, Nebel and Köhler.[3][4] Their approach is based on the relationship between plans and stable models.[5] In 1998 Soininen and Niemelä[6] applied what is now known as answer set programming to the problem of product configuration.[4] In 1999, the term "answer set programming" appeared for the first time in the book The Logic Programming Paradigm as the title of a collection of two papers.[4] The first of these papers identified the use of answer set solvers for search as a new programming paradigm.[7] That same year Niemelä also proposed "logic programs with stable model semantics" as a new paradigm.[8] Lparse is the name of the program that was originally created as a grounding tool (front-end) for the answer set solver smodels. The language that Lparse accepts is now commonly called AnsProlog,[9] short for Answer Set Programming in Logic.[10] It is now used in the same way in many other answer set solvers, including assat, clasp, cmodels, gNt, nomore++ and pbmodels. (dlv is an exception; the syntax of ASP programs written for dlv is somewhat different.) An AnsProlog program consists of rules of the form <head> :- <body>. The symbol :- ("if") is dropped if <body> is empty; such rules are called facts. The simplest kind of Lparse rules are rules with constraints. One other useful construct included in this language is choice. For instance, the choice rule {p, q, r}. says: choose arbitrarily which of the atoms p,q,r{\displaystyle p,q,r} to include in the stable model. The Lparse program that contains this choice rule and no other rules has 8 stable models - arbitrary subsets of {p,q,r}{\displaystyle \{p,q,r\}}. The definition of a stable model was generalized to programs with choice rules.[11] Choice rules can be treated also as abbreviations for propositional formulas under the stable model semantics.[12] For instance, the choice rule above can be viewed as shorthand for the conjunction of three "excluded middle" formulas: (p ∨ ¬p) ∧ (q ∨ ¬q) ∧ (r ∨ ¬r). The language of Lparse allows us also to write "constrained" choice rules, such as 1 {p, q, r} 2. This rule says: choose at least 1 of the atoms p,q,r{\displaystyle p,q,r}, but not more than 2. The meaning of this rule under the stable model semantics is represented by a corresponding propositional formula. Cardinality bounds can be used in the body of a rule as well, for instance in a constraint. Adding such a constraint to an Lparse program eliminates the stable models that contain at least 2 of the atoms p,q,r{\displaystyle p,q,r}. The meaning of this rule can likewise be represented by a propositional formula. Variables (capitalized, as in Prolog) are used in Lparse to abbreviate collections of rules that follow the same pattern, and also to abbreviate collections of atoms within the same rule.
For instance, the Lparse program has the same meaning as The program is shorthand for Arangeis of the form: where start and end are constant-valued arithmetic expressions. A range is a notational shortcut that is mainly used to define numerical domains in a compatible way. For example, the fact is a shortcut for Ranges can also be used in rule bodies with the same semantics. Aconditional literalis of the form: If the extension ofqis{q(a1), q(a2), ..., q(aN)}, the above condition is semantically equivalent to writing{p(a1), p(a2), ..., p(aN)}in the place of the condition. For example, is a shorthand for To find a stable model of the Lparse program stored in file${filename}we use the command Option 0 instructs smodels to findallstable models of the program. For instance, if filetestcontains the rules then the command produces the output Ann{\displaystyle n}-coloringof agraphG=⟨V,E⟩{\displaystyle G=\left\langle V,E\right\rangle }is a functioncolor:V→{1,…,n}{\displaystyle \mathrm {color} :V\to \{1,\dots ,n\}}such thatcolor(x)≠color(y){\displaystyle \mathrm {color} (x)\neq \mathrm {color} (y)}for every pair of adjacent vertices(x,y)∈E{\displaystyle (x,y)\in E}. We would like to use ASP to find ann{\displaystyle n}-coloring of a given graph (or determine that it does not exist). This can be accomplished using the following Lparse program: Line 1 defines the numbers1,…,n{\displaystyle 1,\dots ,n}to be colors. According to the choice rule in Line 2, a unique colori{\displaystyle i}should be assigned to each vertexx{\displaystyle x}. The constraint in Line 3 prohibits assigning the same color to verticesx{\displaystyle x}andy{\displaystyle y}if there is an edge connecting them. If we combine this file with a definition ofG{\displaystyle G}, such as and run smodels on it, with the numeric value ofn{\displaystyle n}specified on the command line, then the atoms of the formcolor(…,…){\displaystyle \mathrm {color} (\dots ,\dots )}in the output of smodels will represent ann{\displaystyle n}-coloring ofG{\displaystyle G}. The program in this example illustrates the "generate-and-test" organization that is often found in simple ASP programs. The choice rule describes a set of "potential solutions"—a simple superset of the set of solutions to the given search problem. It is followed by a constraint, which eliminates all potential solutions that are not acceptable. However, the search process employed by smodels and other answer set solvers is not based ontrial and error. Acliquein a graph is a set of pairwise adjacent vertices. The following Lparse program finds a clique of size≥n{\displaystyle \geq n}in a given directed graph, or determines that it does not exist: This is another example of the generate-and-test organization. The choice rule in Line 1 "generates" all sets consisting of≥n{\displaystyle \geq n}vertices. The constraint in Line 2 "weeds out" the sets that are not cliques. AHamiltonian cyclein adirected graphis acyclethat passes through each vertex of the graph exactly once. The following Lparse program can be used to find a Hamiltonian cycle in a given directed graph if it exists; we assume that 0 is one of the vertices. The choice rule in Line 1 "generates" all subsets of the set of edges. The three constraints "weed out" the subsets that are not Hamiltonian cycles. The last of them uses the auxiliary predicater(x){\displaystyle r(x)}("x{\displaystyle x}is reachable from 0") to prohibit the vertices that do not satisfy this condition. This predicate is defined recursively in Lines 6 and 7. 
This program is an example of the more general "generate, define and test" organization: it includes the definition of an auxiliary predicate that helps us eliminate all "bad" potential solutions. Innatural language processing,dependency-based parsingcan be formulated as an ASP problem.[13]The following code parses the Latin sentence "Puella pulchra in villa linguam latinam discit", "the pretty girl is learning Latin in the villa". The syntax tree is expressed by thearcpredicates which represent the dependencies between the words of the sentence. The computed structure is a linearly ordered rooted tree. The ASP standardization working group produced a standard language specification, called ASP-Core-2,[14]towards which recent ASP systems are converging. ASP-Core-2 is the reference language for the Answer Set Programming Competition, in which ASP solvers are periodically benchmarked over a number of reference problems. Early systems, such as smodels, usedbacktrackingto find solutions. As the theory and practice ofBoolean SAT solversevolved, a number of ASP solvers were built on top of SAT solvers, including ASSAT and Cmodels. These converted ASP formula into SAT propositions, applied the SAT solver, and then converted the solutions back to ASP form. More recent systems, such as Clasp, use a hybrid approach, using conflict-driven algorithms inspired by SAT, without fully converting into a Boolean-logic form. These approaches allow for significant improvements of performance, often by an order of magnitude, over earlier backtracking algorithms. ThePotasscoproject acts as an umbrella for many of the systems below, includingclasp, grounding systems (gringo), incremental systems (iclingo), constraint solvers (clingcon),action languageto ASP compilers (coala), distributedMessage Passing Interfaceimplementations (claspar), and many others. Most systems support variables, but only indirectly, by forcing grounding, by using a grounding system such asLparseorgringoas a front end. The need for grounding can cause a combinatorial explosion of clauses; thus, systems that perform on-the-fly grounding might have an advantage.[15] Query-driven implementations of answer set programming, such as the Galliwasp system[16]and s(CASP)[17]avoid grounding altogether by using a combination ofresolutionandcoinduction.
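For readers without an ASP solver at hand, the following Python sketch mimics the "generate-and-test" organization of the graph-colouring program discussed above by brute force: it generates candidate colour assignments and tests the edge constraint. This is only a conceptual illustration; real answer set solvers do not enumerate candidates this way but use conflict-driven search, and the graph and n here are made-up inputs.

```python
from itertools import product

def n_colorings(vertices, edges, n):
    """Generate-and-test: yield assignments color[v] in 1..n with no monochromatic edge."""
    for assignment in product(range(1, n + 1), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))           # "generate" a candidate
        if all(color[x] != color[y] for x, y in edges):   # "test" the constraint
            yield color

# Made-up example graph (a 5-cycle), which needs 3 colours
vertices = [1, 2, 3, 4, 5]
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(next(n_colorings(vertices, edges, 3), None))   # one 3-colouring, or None if none exists
print(next(n_colorings(vertices, edges, 2), None))   # None: an odd cycle is not 2-colourable
```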
https://en.wikipedia.org/wiki/Answer_set_programming
In mathematics, more specifically in group theory, the character of a group representation is a function on the group that associates to each group element the trace of the corresponding matrix. The character carries the essential information about the representation in a more condensed form. Georg Frobenius initially developed representation theory of finite groups entirely based on the characters, and without any explicit matrix realization of representations themselves. This is possible because a complex representation of a finite group is determined (up to isomorphism) by its character. The situation with representations over a field of positive characteristic, so-called "modular representations", is more delicate, but Richard Brauer developed a powerful theory of characters in this case as well. Many deep theorems on the structure of finite groups use characters of modular representations. Characters of irreducible representations encode many important properties of a group and can thus be used to study its structure. Character theory is an essential tool in the classification of finite simple groups. Close to half of the proof of the Feit–Thompson theorem involves intricate calculations with character values. Easier, but still essential, results that use character theory include Burnside's theorem (a purely group-theoretic proof of Burnside's theorem has since been found, but that proof came over half a century after Burnside's original proof), and a theorem of Richard Brauer and Michio Suzuki stating that a finite simple group cannot have a generalized quaternion group as its Sylow 2-subgroup. Let V be a finite-dimensional vector space over a field F and let ρ: G → GL(V) be a representation of a group G on V. The character of ρ is the function χρ: G → F given by χρ(g) = Tr(ρ(g)), where Tr is the trace. A character χρ is called irreducible or simple if ρ is an irreducible representation. The degree of the character χ is the dimension of ρ; in characteristic zero this is equal to the value χ(1). A character of degree 1 is called linear. When G is finite and F has characteristic zero, the kernel of the character χρ is the normal subgroup ker χρ = {g ∈ G : χρ(g) = χρ(1)}, which is precisely the kernel of the representation ρ. However, the character is not a group homomorphism in general. Let ρ and σ be representations of G. Then the following identities hold: χρ⊕σ = χρ + χσ, χρ⊗σ = χρ·χσ, χρ∗(g) is the complex conjugate of χρ(g), χAlt²ρ(g) = ½(χρ(g)² − χρ(g²)), and χSym²ρ(g) = ½(χρ(g)² + χρ(g²)), where ρ⊕σ is the direct sum, ρ⊗σ is the tensor product, ρ∗ denotes the conjugate transpose of ρ, Alt2 is the alternating product Alt2ρ = ρ∧ρ, and Sym2 is the symmetric square, which is determined by ρ⊗ρ=(ρ∧ρ)⊕Sym2ρ{\displaystyle \rho \otimes \rho =\left(\rho \wedge \rho \right)\oplus {\textrm {Sym}}^{2}\rho .} The irreducible complex characters of a finite group form a character table which encodes much useful information about the group G in a compact form. Each row is labelled by an irreducible representation and the entries in the row are the characters of the representation on the respective conjugacy class of G. The columns are labelled by (representatives of) the conjugacy classes of G. It is customary to label the first row by the character of the trivial representation, which is the trivial action of G on a 1-dimensional vector space by ρ(g)=1{\displaystyle \rho (g)=1} for all g∈G{\displaystyle g\in G}. Each entry in the first row is therefore 1. Similarly, it is customary to label the first column by the identity. Therefore, the first column contains the degree of each irreducible character. Here is the character table of the cyclic group with three elements and generator u: the columns are the classes {1}, {u} and {u²}; the trivial character χ1 takes the value 1 on all of them, χ2 takes the values (1, ω, ω²), and χ3 takes the values (1, ω², ω), where ω is a primitive third root of unity.
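The orthogonality relations stated in the next paragraph can be checked numerically on the small character table above. Here is a short Python sketch verifying that the three characters of the cyclic group of order three are orthonormal under the standard inner product on class functions, (1/|G|) times the sum over g of α(g) multiplied by the complex conjugate of β(g).

```python
import numpy as np

w = np.exp(2j * np.pi / 3)          # primitive third root of unity
# Rows: irreducible characters of C3 evaluated on the elements 1, u, u^2
table = np.array([
    [1, 1,    1   ],
    [1, w,    w**2],
    [1, w**2, w   ],
], dtype=complex)

def inner(alpha, beta):
    """Inner product of class functions on C3 (every conjugacy class has size 1 here)."""
    return np.sum(alpha * np.conj(beta)) / 3

for i in range(3):
    for j in range(3):
        val = inner(table[i], table[j])
        print(f"<chi_{i+1}, chi_{j+1}> = {val.real:+.3f}{val.imag:+.3f}j")
# Output: 1 on the diagonal, 0 elsewhere, as required by row orthogonality.
```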
The character table is always square, because the number of irreducible representations is equal to the number of conjugacy classes.[2] The space of complex-valued class functions of a finite group G has a natural inner product ⟨α,β⟩ = (1/|G|) Σ_{g∈G} α(g) β(g)*, where β(g)* is the complex conjugate of β(g). With respect to this inner product, the irreducible characters form an orthonormal basis for the space of class functions, and this yields the orthogonality relation for the rows of the character table: ⟨χi,χj⟩ = δij. For g, h in G, applying the same inner product to the columns of the character table yields Σχi χi(g) χi(h)* = |CG(g)| if g and h are conjugate, and 0 otherwise, where the sum is over all of the irreducible characters χi of G and the symbol |CG(g)| denotes the order of the centralizer of g. Note that since g and h are conjugate iff they are in the same column of the character table, this implies that the columns of the character table are orthogonal. The orthogonality relations can aid many computations including: Certain properties of the group G can be deduced from its character table: The character table does not in general determine the group up to isomorphism: for example, the quaternion group Q and the dihedral group of 8 elements, D4, have the same character table. Brauer asked whether the character table, together with the knowledge of how the powers of elements of its conjugacy classes are distributed, determines a finite group up to isomorphism. In 1964, this was answered in the negative by E. C. Dade. The linear representations of G are themselves a group under the tensor product, since the tensor product of 1-dimensional vector spaces is again 1-dimensional. That is, if ρ1:G→V1{\displaystyle \rho _{1}:G\to V_{1}} and ρ2:G→V2{\displaystyle \rho _{2}:G\to V_{2}} are linear representations, then ρ1⊗ρ2(g)=(ρ1(g)⊗ρ2(g)){\displaystyle \rho _{1}\otimes \rho _{2}(g)=(\rho _{1}(g)\otimes \rho _{2}(g))} defines a new linear representation. This gives rise to a group of linear characters, called the character group under the operation [χ1∗χ2](g)=χ1(g)χ2(g){\displaystyle [\chi _{1}*\chi _{2}](g)=\chi _{1}(g)\chi _{2}(g)}. This group is connected to Dirichlet characters and Fourier analysis. The characters discussed in this section are assumed to be complex-valued. Let H be a subgroup of the finite group G. Given a character χ of G, let χH denote its restriction to H. Let θ be a character of H. Ferdinand Georg Frobenius showed how to construct a character of G from θ, using what is now known as Frobenius reciprocity. Since the irreducible characters of G form an orthonormal basis for the space of complex-valued class functions of G, there is a unique class function θG of G with the property that ⟨θG, χ⟩G = ⟨θ, χH⟩H for each irreducible character χ of G (the leftmost inner product is for class functions of G and the rightmost inner product is for class functions of H). Since the restriction of a character of G to the subgroup H is again a character of H, this definition makes it clear that θG is a non-negative integer combination of irreducible characters of G, so is indeed a character of G. It is known as the character of G induced from θ. The defining formula of Frobenius reciprocity can be extended to general complex-valued class functions. Given a matrix representation ρ of H, Frobenius later gave an explicit way to construct a matrix representation of G, known as the representation induced from ρ, and written analogously as ρG. This led to an alternative description of the induced character θG. This induced character vanishes on all elements of G which are not conjugate to any element of H. Since the induced character is a class function of G, it is only now necessary to describe its values on elements of H.
If one writesGas adisjoint unionof rightcosetsofH, say then, given an elementhofH, we have: Becauseθis a class function ofH, this value does not depend on the particular choice of coset representatives. This alternative description of the induced character sometimes allows explicit computation from relatively little information about the embedding ofHinG, and is often useful for calculation of particular character tables. Whenθis the trivial character ofH, the induced character obtained is known as thepermutation characterofG(on the cosets ofH). The general technique of character induction and later refinements found numerous applications infinite group theoryand elsewhere in mathematics, in the hands of mathematicians such asEmil Artin,Richard Brauer,Walter FeitandMichio Suzuki, as well as Frobenius himself. The Mackey decomposition was defined and explored byGeorge Mackeyin the context ofLie groups, but is a powerful tool in the character theory and representation theory of finite groups. Its basic form concerns the way a character (or module) induced from a subgroupHof a finite groupGbehaves on restriction back to a (possibly different) subgroupKofG, and makes use of the decomposition ofGinto(H,K)-double cosets. IfG=⋃t∈THtK{\textstyle G=\bigcup _{t\in T}HtK}is a disjoint union, andθis a complex class function ofH, then Mackey's formula states that whereθtis the class function oft−1Htdefined byθt(t−1ht) =θ(h)for allhinH. There is a similar formula for the restriction of an induced module to a subgroup, which holds for representations over anyring, and has applications in a wide variety of algebraic andtopologicalcontexts. Mackey decomposition, in conjunction with Frobenius reciprocity, yields a well-known and useful formula for the inner product of two class functionsθandψinduced from respective subgroupsHandK, whose utility lies in the fact that it only depends on how conjugates ofHandKintersect each other. The formula (with its derivation) is: (whereTis a full set of(H,K)-double coset representatives, as before). This formula is often used whenθandψare linear characters, in which case all the inner products appearing in the right hand sum are either1or0, depending on whether or not the linear charactersθtandψhave the same restriction tot−1Ht∩K. Ifθandψare both trivial characters, then the inner product simplifies to|T|. One may interpret the character of a representation as the "twisted"dimension of a vector space.[3]Treating the character as a function of the elements of the groupχ(g), its value at theidentityis the dimension of the space, sinceχ(1) = Tr(ρ(1)) = Tr(IV) = dim(V). Accordingly, one can view the other values of the character as "twisted" dimensions.[clarification needed] One can find analogs or generalizations of statements about dimensions to statements about characters or representations. 
A sophisticated example of this occurs in the theory ofmonstrous moonshine: thej-invariantis thegraded dimensionof an infinite-dimensional graded representation of theMonster group, and replacing the dimension with the character gives theMcKay–Thompson seriesfor each element of the Monster group.[3] IfG{\displaystyle G}is aLie groupandρ{\displaystyle \rho }a finite-dimensional representation ofG{\displaystyle G}, the characterχρ{\displaystyle \chi _{\rho }}ofρ{\displaystyle \rho }is defined precisely as for any group as Meanwhile, ifg{\displaystyle {\mathfrak {g}}}is aLie algebraandρ{\displaystyle \rho }a finite-dimensional representation ofg{\displaystyle {\mathfrak {g}}}, we can define the characterχρ{\displaystyle \chi _{\rho }}by The character will satisfyχρ(Adg⁡(X))=χρ(X){\displaystyle \chi _{\rho }(\operatorname {Ad} _{g}(X))=\chi _{\rho }(X)}for allg{\displaystyle g}in the associated Lie groupG{\displaystyle G}and allX∈g{\displaystyle X\in {\mathfrak {g}}}. If we have a Lie group representation and an associated Lie algebra representation, the characterχρ{\displaystyle \chi _{\rho }}of the Lie algebra representation is related to the characterXρ{\displaystyle \mathrm {X} _{\rho }}of the group representation by the formula Suppose now thatg{\displaystyle {\mathfrak {g}}}is a complexsemisimple Lie algebrawith Cartan subalgebrah{\displaystyle {\mathfrak {h}}}. The value of the characterχρ{\displaystyle \chi _{\rho }}of an irreducible representationρ{\displaystyle \rho }ofg{\displaystyle {\mathfrak {g}}}is determined by its values onh{\displaystyle {\mathfrak {h}}}. The restriction of the character toh{\displaystyle {\mathfrak {h}}}can easily be computed in terms of theweight spaces, as follows: where the sum is over allweightsλ{\displaystyle \lambda }ofρ{\displaystyle \rho }and wheremλ{\displaystyle m_{\lambda }}is the multiplicity ofλ{\displaystyle \lambda }.[4] The (restriction toh{\displaystyle {\mathfrak {h}}}of the) character can be computed more explicitly by the Weyl character formula.
https://en.wikipedia.org/wiki/Character_theory
Informal grammartheory, thedeterministic context-free grammars(DCFGs) are aproper subsetof thecontext-free grammars. They are the subset of context-free grammars that can be derived fromdeterministic pushdown automata, and they generate thedeterministic context-free languages. DCFGs are alwaysunambiguous, and are an important subclass of unambiguous CFGs; there are non-deterministic unambiguous CFGs, however. DCFGs are of great practical interest, as they can be parsed inlinear timeand in fact a parser can be automatically generated from the grammar by aparser generator. They are thus widely used throughout computer science. Various restricted forms of DCFGs can be parsed by simpler, less resource-intensive parsers, and thus are often used. These grammar classes are referred to by the type of parser that parses them, and important examples areLALR,SLR, andLL. In the 1960s, theoretical research in computer science onregular expressionsandfinite automataled to the discovery thatcontext-free grammarsare equivalent to nondeterministicpushdown automata.[1][2][3]These grammars were thought to capture the syntax of computer programming languages. The first high-level computer programming languages were under development at the time (seeHistory of programming languages) and writingcompilerswas difficult. But usingcontext-free grammarsto help automate the parsing part of the compiler simplified the task. Deterministic context-free grammars were particularly useful because they could be parsed sequentially by adeterministic pushdown automaton, which was a requirement due to computer memory constraints.[4]In 1965,Donald Knuthinvented theLR(k) parserand proved that there exists an LR(k) grammar for every deterministic context-free language.[5]This parser still required a lot of memory. In 1969Frank DeRemerinvented theLALRandSimple LRparsers, both based on the LR parser and having greatly reduced memory requirements at the cost of less language recognition power. The LALR parser was the stronger alternative.[6]These two parsers have since been widely used in compilers of many computer languages. Recent research has identified methods by which canonical LR parsers may be implemented with dramatically reduced table requirements over Knuth's table-building algorithm.[7]
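As an illustration of why deterministic grammars are convenient in practice, here is a small Python recursive-descent parser for a toy LL(1)-style grammar of nested parentheses around the terminal 'a', written directly from the grammar rules; one lookahead token always decides which production applies, so each input symbol is examined a bounded number of times and the parse runs in linear time. The grammar and function names are an illustrative example, not drawn from the article.

```python
# Toy deterministic (LL(1)) grammar:
#   E -> 'a'
#   E -> '(' E ')'
def parse(tokens):
    pos = 0

    def expect(tok):
        nonlocal pos
        if pos >= len(tokens) or tokens[pos] != tok:
            raise SyntaxError(f"expected {tok!r} at position {pos}")
        pos += 1

    def parse_E():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == 'a':
            pos += 1                 # production E -> 'a'
        else:
            expect('(')              # production E -> '(' E ')'
            parse_E()
            expect(')')

    parse_E()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return True

print(parse(list("((a))")))          # True: the string is in the language
try:
    parse(list("(a"))
except SyntaxError as e:
    print("rejected:", e)
```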
https://en.wikipedia.org/wiki/Deterministic_context-free_grammar
Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system. The WER metric typically ranges from 0 to 1, where 0 indicates that the compared pieces of text are exactly identical, and 1 (or larger) indicates that they are completely different with no similarity. This way, a WER of 0.8 means that there is an 80% error rate for compared sentences. The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort. This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate.[1] Word error rate can then be computed as $\mathrm{WER} = \frac{S + D + I}{N} = \frac{S + D + I}{S + D + C}$, where $S$ is the number of substitutions, $D$ is the number of deletions, $I$ is the number of insertions, $C$ is the number of correct words, and $N$ is the number of words in the reference ($N = S + D + C$). The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and hypothesis "This _ wikipedia", we call it a deletion. Note that since $N$ is the number of words in the reference, the word error rate can be larger than 1.0, namely if the number of insertions $I$ is larger than the number of correct words $C$. When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead: $\mathrm{WAcc} = 1 - \mathrm{WER} = \frac{N - S - D - I}{N}$. Since the WER can be larger than 1.0, the word accuracy can be smaller than 0.0. It is commonly believed that a lower word error rate shows superior accuracy in recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown that, if people were trained under "that matches the optimization objective for understanding", (Wang, Acero and Chelba, 2003) they would show a higher accuracy in understanding of language than other people who demonstrated a lower word error rate, showing that true understanding of spoken language relies on more than just high word recognition accuracy.[2] One problem with using a generic formula such as the one above, however, is that no account is taken of the effect that different types of error may have on the likelihood of successful outcome, e.g. some errors may be more disruptive than others and some may be corrected more easily than others. These factors are likely to be specific to the syntax being tested. A further problem is that, even with the best alignment, the formula cannot distinguish a substitution error from a combined deletion plus insertion error. Hunt (1990) has proposed the use of a weighted measure of performance accuracy where errors of substitution are weighted at unity but errors of deletion and insertion are both weighted only at 0.5, thus: $\mathrm{WER} = \frac{S + 0.5D + 0.5I}{N}$. There is some debate, however, as to whether Hunt's formula may properly be used to assess the performance of a single system, as it was developed as a means of comparing more fairly competing candidate systems.
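For illustration, a minimal sketch of the computation in Python follows (the function name and implementation details are illustrative, not from the article): the word-level Levenshtein distance is computed by dynamic programming and divided by the number of words in the reference.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (S + D + I) / N via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# The article's example: one deletion out of three reference words, so WER = 1/3.
print(word_error_rate("this is wikipedia", "this wikipedia"))
```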
A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured. Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been "mis-pronounced", i.e. does the fault lie with the user or with the recogniser. This may be particularly relevant in a system which is designed to cope with non-native speakers of a given language or with strong regional accents. The pace at which words should be spoken during the measurement process is also a source of variability between subjects, as is the need for subjects to rest or take a breath. All such factors may need to be controlled in some way. For text dictation it is generally agreed that performance accuracy at a rate below 95% is not acceptable, but this again may be syntax and/or domain specific, e.g. whether there is time pressure on users to complete the task, whether there are alternative methods of completion, and so on. The term "single word error rate" sometimes refers to the percentage of incorrect recognitions for each different word in the system vocabulary. The word error rate may also be referred to as the length normalized edit distance.[3] The normalized edit distance between X and Y, d(X, Y), is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (length of P).[4]
https://en.wikipedia.org/wiki/Word_error_rate
AOL(formerly a company known asAOL Inc.and originally known asAmerica Online)[1]is an Americanweb portalandonline service providerbased in New York City, and a brand marketed byYahoo! Inc. The service traces its history to an online service known asPlayNET. PlayNET licensed its software toQuantum Link(Q-Link), which went online in November 1985. A newIBM PCclient was launched in 1988, and eventually renamed as America Online in 1989. AOL grew to become the largest online service, displacing established players likeCompuServeandThe Source. By 1995, AOL had about three million active users.[2] AOL was at one point the most recognized brand on the Web in the United States. AOL once provided adial-up Internetservice to millions of Americans and pioneeredinstant messagingandchat roomswithAOL Instant Messenger(AIM). In 1998, AOL purchasedNetscapefor US$4.2 billion. By 2000, AOL was providing internet service to over 20 million consumers, dominating the market ofInternet service providers(ISPs).[3]In 2001, at the height of its popularity, it purchased the media conglomerateTime Warnerin the largest merger in US history. AOL shrank rapidly thereafter, partly due to the decline ofdial-upand rise ofbroadband.[4]AOL was eventuallyspun offfrom Time Warner in 2009, withTim Armstrongappointed the new CEO. Under his leadership, the company invested in media brands and advertising technologies. On June 23, 2015, AOL was acquired byVerizon Communicationsfor $4.4 billion.[5][6]On May 3, 2021, Verizon announced it would sell Yahoo and AOL to private equity firmApollo Global Managementfor $5 billion.[7]On September 1, 2021, AOL became part of the newYahoo! Inc. AOL began in 1983, as a short-lived venture calledControl Video Corporation(CVC), founded byWilliam von Meister. Its sole product was an online service calledGameLinefor theAtari 2600video game console, after von Meister's idea of buying music on demand was rejected byWarner Bros.[8]Subscribers bought amodemfrom the company for $49.95 and paid a one-time $15 setup fee. GameLine permitted subscribers to temporarily download games and keep track of high scores, at a cost of $1 per game.[9]The telephone disconnected and the downloaded game would remain in GameLine's Master Module, playable until the user turned off the console or downloaded another game. In January 1983,Steve Casewas hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May 1983,Jim Kimseybecame a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought in by his West Point friendFrank Caufield, an investor in the company.[8]In early 1985, von Meister left the company.[10] On May 24, 1985,Quantum Computer Services, an online services company, was founded by Kimsey from the remnants of Control Video, with Kimsey aschief executive officerandMarc Seriffaschief technology officer. The technical team consisted of Seriff, Tom Ralston, Ray Heinrich, Steve Trus, Ken Huntsman, Janet Hunter, Dave Brown, Craig Dykstra, Doug Coward, and Mike Ficco. In 1987, Case was promoted again to executive vice-president. 
Kimsey soon began to groom Case to take over the role of CEO, which he did when Kimsey retired in 1991.[10] Kimsey changed the company's strategy, and in 1985, launched a dedicated online service forCommodore 64and128computers, originally calledQuantum Link("Q-Link" for short).[9]The Quantum Link software was based on software licensed fromPlayNet, Inc., which was founded in 1983 by Howard Goldberg and Dave Panzl. The service was different from other online services as it used the computing power of the Commodore 64 and theApple IIrather than just a "dumb" terminal. It passed tokens back and forth and provided a fixed-price service tailored for home users. In May 1988, Quantum andApplelaunchedAppleLinkPersonal Edition for Apple II[11]andMacintoshcomputers. In August 1988, Quantum launched PC Link, a service forIBM-compatiblePCsdeveloped in a joint venture with theTandy Corporation. After the company parted ways with Apple in October 1989, Quantum changed the service's name to America Online.[12][13]Case promoted and sold AOL as the online service for people unfamiliar with computers, in contrast toCompuServe, which was well established in the technical community.[10] From the beginning, AOL includedonline gamesin its mix of products; many classic and casual games were included in the original PlayNet software system. The company introduced many innovative online interactive titles and games, including: In February 1991, AOL forDOSwas launched using aGeoWorksinterface; it was followed a year later by AOL forWindows.[9]This coincided with growth in pay-based online services, likeProdigy,CompuServe, andGEnie. During the early 1990s, the average subscription lasted for about 25 months and accounted for $350 in total revenue. Advertisements invited modem owners to "Try America Online FREE", promising free software and trial membership.[14]AOL discontinuedQ-Linkand PC Link in late 1994. In September 1993, AOL addedUsenetaccess to its features.[15]This is commonly referred to as the "Eternal September", as Usenet's cycle of new users was previously dominated by smaller numbers of college and university freshmen gaining access in September and taking a few weeks to acclimate. This also coincided with a new "carpet bombing" marketing campaign by CMOJan Brandtto distribute as many free trial AOL trial disks as possible through nonconventional distribution partners. At one point, 50% of theCDsproduced worldwide had an AOL logo.[16]AOL quickly surpassedGEnie, and by the mid-1990s, it passed Prodigy (which for several years allowed AOL advertising) andCompuServe.[10]In November 1994, AOL purchased Booklink for its web browser, to give its users web access.[17]In 1996, AOL replaced Booklink with a browser based on Internet Explorer, reportedly in exchange for inclusion of AOL in Windows.[18] AOL launched services with theNational Education Association, theAmerican Federation of Teachers,National Geographic, theSmithsonian Institution, theLibrary of Congress,Pearson,Scholastic,ASCD,NSBA, NCTE,Discovery Networks,TurnerEducation Services (CNN Newsroom),NPR,The Princeton Review,Stanley Kaplan,Barron's,Highlights for Kids, theUS Department of Education, and many other education providers. 
AOL offered the first real-time homework help service (the Teacher Pager—1990; prior to this, AOL provided homework help bulletin boards), the first service by children, for children (Kids Only Online, 1991), the first online service for parents (the Parents Information Network, 1991), the first online courses (1988), the first omnibus service for teachers (the Teachers' Information Network, 1990), the first online exhibit (Library of Congress, 1991), the first parental controls, and many other online education firsts.[19] AOL purchased search engine WebCrawler in 1995, but sold it to Excite the following year; the deal made Excite the sole search and directory service on AOL.[20] After the deal closed in March 1997, AOL launched its own branded search engine, based on Excite, called NetFind. This was renamed to AOL Search in 1999.[21] AOL charged its users an hourly fee until December 1996,[22] when the company changed to a flat monthly rate of $19.95.[9] During this time, AOL connections were flooded with users trying to connect, and many canceled their accounts due to constant busy signals. A commercial was made featuring Steve Case telling people AOL was working day and night to fix the problem. Within three years, AOL's user base grew to 10 million people. In 1995, AOL was headquartered at 8619 Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia, in the Washington, D.C. metropolitan area,[23][24] near the Town of Vienna.[25] AOL was quickly running out of room in October 1996 for its network at the Fairfax County campus. In mid-1996, AOL moved to 22000 AOL Way in Dulles, unincorporated Loudoun County, Virginia, to provide room for future growth.[26] In a five-year landmark agreement with the most popular operating system, AOL was bundled with Windows software.[27] On March 31, 1996, the short-lived eWorld was purchased by AOL. In 1997, about half of all US homes with Internet access had it through AOL.[28] During this time, AOL's content channels, under Jason Seiken, including News, Sports, and Entertainment, experienced their greatest growth as AOL became the dominant online service internationally with more than 34 million subscribers. In February 1998, AOL acquired CompuServe Interactive Services (CIS) via WorldCom (later Verizon), which kept CompuServe's networking business.[29] In November 1998, AOL announced it would acquire Netscape, best known for its web browser, in a major $4.2 billion deal.[9] The deal closed on March 17, 1999. Another large acquisition in December 1999 was that of MapQuest, for $1.1 billion.[30] In January 2000, as new broadband technologies were being rolled out around the New York City metropolitan area and elsewhere across the United States, AOL and Time Warner Entertainment announced plans to merge, forming AOL Time Warner, Inc. The terms of the deal called for AOL shareholders to own 55% of the new, combined company. The deal closed on January 11, 2001. The new company was led by executives from AOL, SBI, and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as chairman, J. Michael Kelly (from AOL) was the chief financial officer, Robert W. Pittman (from AOL) and Dick Parsons (from Time Warner) served as co-chief operating officers.[31] In 2002, Jonathan Miller became CEO of AOL.[32] The following year, AOL Time Warner dropped the "AOL" from its name. It was the largest merger in history when completed, with the combined value of the companies at $360 billion.
This value fell sharply, to as low as $120 billion, as markets repriced AOL, now combined with a traditional media and cable business, far more modestly than they had valued it as a pure internet firm. This status did not last long, and the company's value rose again within three months. By the end of that year, the tide had turned against "pure" internet companies, with many collapsing under falling stock prices, and even the strongest companies in the field losing up to 75% of their market value. The decline continued through 2001, but even with the losses, AOL was among the internet giants that continued to outperform brick and mortar companies.[33] In 2004, along with the launch of AOL 9.0 Optimized, AOL also made available the option of personalized greetings which would enable the user to hear his or her name while accessing basic functions and mail alerts, or while logging in or out. In 2005, AOL broadcast the Live 8 concert live over the Internet, and thousands of users downloaded clips of the concert over the following months.[34] In late 2005, AOL released AOL Safety & Security Center, a bundle of McAfee Antivirus, CA anti-spyware, and proprietary firewall and phishing protection software.[35] News reports in late 2005 identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture.[36] Those plans were abandoned when it was revealed on December 20, 2005, that Google would purchase a 5% share of AOL for $1 billion.[37] On April 3, 2006, AOL announced that it would retire the full name America Online. The official name of the service became AOL, and the full name of the Time Warner subdivision became AOL LLC.[38] On June 8, 2006,[39] AOL offered a new program called AOL Active Security Monitor, a diagnostic tool to monitor and rate PC security status, and recommended additional security software from AOL or Download.com. Two months later,[40] AOL released AOL Active Virus Shield, a free product developed by Kaspersky Lab, that did not require an AOL account, only an internet email address. The ISP side of AOL UK was bought by Carphone Warehouse in October 2006 to take advantage of its 100,000 LLU customers, making Carphone Warehouse the largest LLU provider in the UK.[41] In August 2006, AOL announced that it would offer email accounts and software previously available only to its paying customers, provided that users accessed AOL or AOL.com through an access method not owned by AOL (otherwise known as "third party transit", "bring your own access" or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative advertising provider AOL.com.[42] The change from paid to free access was also designed to slow the rate at which members canceled their accounts and defected to Microsoft Hotmail, Yahoo!, or other free email providers. The other free services included:[43] Also in August, AOL informed its US customers of an increase in the price of its dial-up access to $25.90.
The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same as that of its monthlyDSLaccess.[51]However, AOL subsequently began offering unlimited dial-up access for $9.95 a month.[52] On November 16, 2006,Randy FalcosucceededJonathan Milleras CEO.[53]In December 2006, AOL closed its last remaining call center in the United States, "taking the America out of America Online," according to industry pundits. Service centers based inIndiaand thePhilippinescontinue to provide customer support and technical assistance to subscribers.[54] On September 17, 2007, AOL announced the relocation of one of its corporate headquarters fromDulles, Virginia toNew York City[55][56]and the combination of its advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notablyAdvertising.com, and highlighted the company's new focus on advertising-driven business models. AOL management stressed that "significant operations" would remain in Dulles, which included the company's access services and modem banks. In October 2007, AOL announced the relocation of its other headquarters fromLoudoun County, Virginia to New York City, while continuing to operate its Virginia offices.[57]As part of the move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, Falco announced on October 15, 2007, plans to lay off 2,000 employees worldwide by the end of 2007, beginning "immediately".[58]The result was a layoff of approximately 40% of AOL's employees. Most compensation packages associated with the October 2007 layoffs included a minimum of 120 days of severance pay, 60 of which were offered in lieu of the 60-day advance notice requirement by provisions of the 1988 federalWARN Act.[58] By November 2007, AOL's customer base had been reduced to 10.1 million subscribers,[59]slightly more than the number of subscribers ofComcastandAT&T Yahoo!. According to Falco, as of December 2007, the conversion rate of accounts from paid access to free access was more than 80%.[60] On January 3, 2008, AOL announced the closing of itsReston, Virginia, data center, which was sold toCRG West.[61]On February 6, Time Warner CEOJeff Bewkesannounced that Time Warner would divide AOL's internet-access and advertising businesses, with the possibility of later selling the internet-access division.[62] On March 13, 2008, AOL purchased the social networking siteBebofor $850 million (£417 million).[63]On July 25, AOL announced that it was shuttering Xdrive, AOL Pictures and BlueString to save on costs and focus on its core advertising business.[50]AOL Pictures was closed on December 31. On October 31,AOL Hometown(a web-hosting service for the websites of AOL customers) and the AOL Journal blog hosting service were eliminated.[64] On March 12, 2009,Tim Armstrong, formerly withGoogle, was named chairman and CEO of AOL.[65]On May 28, Time Warner announced that it would position AOL as an independent company afterGoogle's shares ceased at the end of the fiscal year.[66]On November 23, AOL unveiled a new brand identity with thewordmark"Aol." superimposed onto canvases created by commissioned artists. 
The new identity, designed byWolff Olins,[67]was integrated with all of AOL's services on December 10, the date upon which AOL traded independently for the first time since the Time Warner merger on theNew York Stock Exchangeunder the symbol AOL.[68] On April 6, 2010, AOL announced plans to shutter or sell Bebo.[69]On June 16, the property was sold toCriterion Capital Partnersfor an undisclosed amount, believed to be approximately $10 million.[70]In December, AIM eliminated access to AOL chat rooms, noting a marked decline in usage in recent months.[71] Under Armstrong's leadership, AOL followed a new business direction marked by a series of acquisitions. It announced the acquisition ofPatch Media, a network of community-specific news and information sites focused on towns and communities.[72]On September 28, 2010, at the San FranciscoTechCrunchDisrupt Conference, AOL signed an agreement to acquireTechCrunch.[73][74]On December 12, 2010, AOL acquiredabout.me, a personal profile and identity platform, four days after the platform's public launch.[75] On January 31, 2011, AOL announced the acquisition of European video distribution network goviral.[76]In March 2011, AOL acquiredHuffPostfor $315 million.[77][78]Shortly after the acquisition was announced,Huffington Postco-founderArianna Huffingtonreplaced AOL content chief David Eun, assuming the role of president and editor-in-chief of the AOL Huffington Post Media Group.[79]On March 10, AOL announced that it would cut approximately 900 workers following theHuffPostacquisition.[80] On September 14, 2011, AOL formed a strategic ad-selling partnership with two of its largest competitors,YahooandMicrosoft. The three companies would begin selling inventory on each other's sites. The strategy was designed to help the companies compete withGoogleand advertising networks.[81] On February 28, 2012, AOL partnered withPBSto launch MAKERS, a digital documentary series focusing on high-achieving women in industries perceived as male-dominated such as war, comedy, space, business, Hollywood and politics.[82][83][84]Subjects for MAKERS episodes have includedOprah Winfrey,Hillary Clinton,Sheryl Sandberg,Martha Stewart,Indra Nooyi,Lena DunhamandEllen DeGeneres. On March 15, 2012, AOL announced the acquisition of Hipster, a mobile photo-sharing app, for an undisclosed amount.[85]On April 9, 2012, AOL announced a deal to sell 800 patents toMicrosoftfor $1.056 billion. The deal included a perpetual license for AOL to use the patents.[86] In April, AOL took several steps to expand its ability to generate revenue throughonline video advertising. The company announced that it would offergross rating point(GRP) guarantee for online video, mirroring the television-ratings system and guaranteeing audience delivery for online-video advertising campaigns bought across its properties.[87]This announcement came just days before theDigital Content NewFront (DCNF)a two-week event held by AOL,Google,Hulu,Microsoft,VevoandYahooto showcase the participating sites' digital video offerings. 
The DCNF was conducted in advance of the traditional television upfronts in the hope of diverting more advertising money into the digital space.[88]On April 24, the company launched theAOL Onnetwork, a single website for its video output.[89] In February 2013, AOL reported its fourth quarter revenue of $599.5 million, its first growth in quarterly revenue in eight years.[90] In August 2013, Armstrong announced thatPatch Mediawould scale back or sell hundreds of its local news sites.[91]Not long afterward, layoffs began, with up to 500 out of 1,100 positions initially impacted.[92]On January 15, 2014, Patch Media was spun off, and majority ownership was held by Hale Global.[93]By the end of 2014, AOL controlled 0.74% of the global advertising market, well behind industry leader Google's 31.4%.[94] On January 23, 2014, AOL acquired Gravity, a software startup that tracked users' online behavior and tailored ads and content based on their interests, for $83 million.[95]The deal, which included approximately 40 Gravity employees and the company's personalization technology, was Armstrong's fourth-largest deal since taking command in 2009. Later that year, AOL acquired Vidible, a company that developed technology to help websites run video content from other publishers, and help video publishers sell their content to these websites. The deal, which was announced December 1, 2014, was reportedly worth roughly $50 million.[96] On July 16, 2014, AOL earned anEmmynomination for the AOL original seriesThe Future Starts Herein the News and Documentary category.[97]This came days after AOL earned its firstPrimetime Emmy Awardnomination and win forPark Bench with Steve Buscemiin theOutstanding Short Form Variety Series.[98]Created and hosted byTiffany Shlain, the series focused on humans' relationship with technology and featured episodes such as "The Future of Our Species", "Why We Love Robots" and "A Case for Optimism". On May 12, 2015,Verizonannounced plans to buy AOL for $50 per share in a deal valued at $4.4 billion. The transaction was completed on June 23.Armstrong, who continued to lead the firm following regulatory approval, called the deal the logical next step for AOL. "If you look forward five years, you're going to be in a space where there are going to be massive, global-scale networks, and there's no better partner for us to go forward with than Verizon." he said. "It's really not about selling the company today. 
It's about setting up for the next five to 10 years."[5] Analyst David Bank said he thought the deal made sense for Verizon.[5]The deal will broaden Verizon's advertising sales platforms and increase its video production ability through websites such asHuffPost,TechCrunch, andEngadget.[94]However, Craig Moffett said it was unlikely the deal would make a big difference to Verizon's bottom line.[5]AOL had about two million dial-up subscribers at the time of the buyout.[94]The announcement caused AOL's stock price to rise 17%, while Verizon's stock price dropped slightly.[5] Shortly before the Verizon purchase, on April 14, 2015, AOL launched ONE by AOL, a digital marketing programmatic platform that unifies buying channels and audience management platforms to track and optimize campaigns over multiple screens.[99]Later that year, on September 15, AOL expanded the product with ONE by AOL: Creative, which is geared towards creative and media agencies to similarly connect marketing and ad distribution efforts.[100] On May 8, 2015, AOL reported its first-quarter revenue of $625.1 million, $483.5 million of which came from advertising and related operations, marking a 7% increase from Q1 2014. Over that year, the AOL Platforms division saw a 21% increase in revenue, but a drop in adjustedOIBDAdue to increased investments in the company's video and programmatic platforms.[101] On June 29, 2015, AOL announced a deal withMicrosoftto take over the majority of its digital advertising business. Under the pact, as many as 1,200 Microsoft employees involved with the business will be transferred to AOL, and the company will take over the sale of display, video, and mobile ads on various Microsoft platforms in nine countries, including Brazil, Canada, the United States, and the United Kingdom. Additionally,Google Searchwill be replaced on AOL properties withBing—which will display advertisingsold by Microsoft. Both advertising deals are subject toaffiliate marketingrevenue sharing.[102][103] On July 22, 2015, AOL received two News and Documentary Emmy nominations, one for MAKERS in the Outstanding Historical Programming category, and the other forTrue Trans WithLaura Jane Grace, which documented the story of Laura Jane Grace, atransgendermusician best known as the founder, lead singer, songwriter and guitarist of the punk rock bandAgainst Me!, and her decision to come out publicly and overall transition experience.[104] On September 3, 2015, AOL agreed to buyMillennial Mediafor $238 million.[105]On October 23, 2015, AOL completed the acquisition.[106] On October 1, 2015, Go90, a free ad-supported mobile video service aimed at young adult and teen viewers that Verizon owns and AOL oversees and operates, launched its content publicly after months of beta testing.[107][108]The initial launch line-up included content fromComedy Central,HuffPost,Nerdist News,UnivisionNews,Vice,ESPNandMTV.[107] On April 20, 2016, AOL acquired virtual reality studioRYOTto bring immersive 360 degree video and VR content toHuffPost's global audience across desktop, mobile, and apps.[109] In July 2016, Verizon Communications announced its intent to purchase the core internet business ofYahoo!. 
Verizon merged AOL with Yahoo into a new company called "Oath Inc.", which in January 2019 rebranded itself as Verizon Media.[110] In April 2018, Oath Inc. sold Moviefone to MoviePass parent Helios and Matheson Analytics.[111][112][113] In November 2020 the Huffington Post was sold to BuzzFeed in a stock deal.[114] On May 3, 2021, Verizon announced it would sell 90 percent of its Verizon Media division to Apollo Global Management for $5 billion. The division became the second incarnation of Yahoo! Inc.[7] As of September 1, 2021, the following media brands became subsidiaries of AOL's parent Yahoo Inc.[115] AOL's content contributors consist of over 20,000 bloggers, including politicians, celebrities, academics, and policy experts, who contribute on a wide range of topics making news.[119] In addition to mobile-optimized web experiences, AOL produces mobile applications for existing AOL properties like Autoblog, Engadget, The Huffington Post, TechCrunch, and products such as Alto, Pip, and Vivv. AOL has a global portfolio of media brands and advertising services across mobile, desktop, and TV. Services include brand integration and sponsorships through its in-house branded content arm, Partner Studio by AOL, as well as data and programmatic offerings through its ad technology stack, ONE by AOL. AOL acquired a number of businesses and technologies that helped to form ONE by AOL. These acquisitions included AdapTV in 2013 and Convertro, Precision Demand, and Vidible in 2014.[120] ONE by AOL is further broken down into ONE by AOL for Publishers (formerly Vidible, AOL On Network and Be On for Publishers) and ONE by AOL for Advertisers, each of which has several sub-platforms.[121][122] On September 10, 2018, AOL's parent company Oath consolidated BrightRoll, One by AOL and Yahoo Gemini to 'simplify' its adtech services by launching a single advertising proposition dubbed Oath Ad Platforms, now Yahoo! Ad Tech.[123] AOL offers a range of integrated products and properties including communication tools, mobile apps and services, and subscription packages. In 2017, before the discontinuation of AIM, "billions of messages" were sent "daily" on it and AOL's other chat services.[1] AOL Desktop is an internet suite produced by AOL from 2007[132][133] that integrates a web browser, a media player and an instant messenger client.[130] Version 10.X was based on AOL OpenRide[134] and was an upgrade from it.[135] The macOS version is based on WebKit. AOL Desktop version 10.X was different from previous AOL browsers and AOL Desktop versions. Its features are focused on web browsing as well as email. For instance, one does not have to sign into AOL in order to use it as a regular browser. In addition, non-AOL email accounts can be accessed through it. Primary buttons include "MAIL", "IM", and several shortcuts to various web pages. The first two require users to sign in, but the shortcuts to web pages can be used without authentication. AOL Desktop version 10.X was later marked as unsupported in favor of supporting the AOL Desktop 9.X versions. Version 9.8 was released, replacing the Internet Explorer components of the web browser with CEF[131] (Chromium Embedded Framework) to give users an improved web browsing experience closer to that of Chrome. Version 11 of AOL Desktop was a total rewrite but maintained a similar user interface to the previous 9.8.X series of releases.[131] In 2017, a new paid version called AOL Desktop Gold was released, available for $4.99 per month after trial.
It replaced the previous free version.[136]After the shutdown of AIM in 2017, AOL's original chat rooms continued to be accessible through AOL Desktop Gold, and some rooms remained active during peak hours. That chat system was shut down on December 15, 2020.[137] In addition to AOL Desktop, the company also offered abrowser toolbarMozilla plug-in,AOL Toolbar, for several web browsers that provided quick access to AOL services. The toolbar was available from 2007 until 2018. In its earlier incarnation as a "walled garden" community and service provider, AOL received criticism for its community policies, terms of service, and customer service. Prior to 2006, AOL was known for its direct mailing of CD-ROMs and 3.5-inch floppy disks containing its software. The disks were distributed in large numbers; at one point, half of the CDs manufactured worldwide had AOL logos on them.[16]The marketing tactic was criticized for its environmental cost, and AOL CDs were recognized asPC World's most annoying tech product.[138][139] AOL used a system of volunteers to moderate its chat rooms, forums and user communities. The program dated back to AOL's early days, when it charged by the hour for access and one of its highest billing services was chat. AOL provided free access to community leaders in exchange for moderating the chat rooms, and this effectively made chat very cheap to operate, and more lucrative than AOL's other services of the era. There were 33,000 community leaders in 1996.[140]All community leaders received hours of training and underwent a probationary period. While most community leaders moderated chat rooms, some ran AOL communities and controlled their layout and design, with as much as 90% of AOL's content being created or overseen by community managers until 1996.[141] By 1996,ISPswere beginning to charge flat rates for unlimited access, which they could do at a profit because they only provided internet access. Even though AOL would lose money with such a pricing scheme, it was forced by market conditions to offer unlimited access in October 1996. In order to return to profitability, AOL rapidly shifted its focus from content creation to advertising, resulting in less of a need to carefully moderate every forum and chat room to keep users willing to pay by the minute to remain connected.[142] After unlimited access, AOL considered scrapping the program entirely, but continued it with a reduced number of community leaders, with scaled-back roles in creating content.[141]Although community leaders continued to receive free access, after 1996 they were motivated more by the prestige of the position and the access to moderator tools and restricted areas within AOL.[140][141]By 1999, there were over 15,000 volunteers in the program.[143] In May 1999, two former volunteers filed a class-action lawsuit alleging AOL violated theFair Labor Standards Actby treating volunteers like employees. Volunteers had to apply for the position, commit to working for at least three to four hours a week, fill out timecards and sign a non-disclosure agreement.[144]On July 22, AOL ended its youth corps, which consisted of 350 underage community leaders.[140]At this time, theUnited States Department of Laborbegan an investigation into the program, but it came to no conclusions about AOL's practices.[144] AOL ended its community leader program on June 8, 2005. The class action lawsuit dragged on for years, even after AOL ended the program and AOL declined as a major internet company. 
In 2010, AOL finally agreed to settle the lawsuit for $15 million.[145]The community leader program was described as an example ofco-productionin a 2009 article inInternational Journal of Cultural Studies.[141] AOL has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. In addition, AOL changed its method of calculating used minutes in response to aclass action lawsuit. Previously, AOL would add 15 seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for 12 minutes and 46 seconds would be charged for 14 minutes).[146][147]AOL claimed this was to account for sign on/sign off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take 15 seconds, especially when connecting via another ISP). AOL disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. In addition, the AOL software would notify the user of exactly how long they were connected and how many minutes they were being charged. AOL was sued by theOhio Attorney Generalin October 2003 for improper billing practices. The case was settled on June 8, 2005. AOL agreed to resolve any consumercomplaintsfiled with theOhioAG's office. In December 2006, AOL agreed to providerestitutionto Florida consumers to settle the case filed against them by theFlorida Attorney General.[148] Many customers complained that AOL personnel ignored their demands to cancel service and stop billing. In response to approximately 300 consumer complaints, theNew York Attorney General's office began an inquiry of AOL's customer service policies.[citation needed]The investigation revealed that the company had an elaborate scheme for rewarding employees who purported toretainor "save" subscribers who had called to cancel their Internet service. In many instances, such retention was done against subscribers' wishes, or without their consent. Under the scheme, customer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service.[citation needed]For several years, AOL had instituted minimum retention or "save" percentages, which consumer representatives were expected to meet. These bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers. On August 24, 2005, America Online agreed to pay $1.25 million to the state of New York and reformed its customer service procedures. Under the agreement, AOL would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus.[148]However the agreement only covered people in the state of New York.[149] On June 13, 2006, Vincent Ferrari documented his account cancellation phone call in a blog post,[150]stating he had switched to broadband years earlier. In the recorded phone call, the AOL representative refused to cancel the account unless the 30-year-old Ferrari explained why AOL hours were still being recorded on it. Ferrari insisted that AOL software was not even installed on the computer. 
When Ferrari demanded that the account be canceled regardless, the AOL representative asked to speak with Ferrari's father, for whom the account had been set up. The conversation was aired on CNBC. When CNBC reporters tried to have an account on AOL cancelled, they were hung up on immediately and it ultimately took more than 45 minutes to cancel the account.[151] On July 19, 2006, AOL's entireretentionmanual was released on the Internet.[152]On August 3, 2006,Time Warnerannounced that the company would be dissolving AOL's retention centers due to its profits hinging on $1 billion in cost cuts. The company estimated that it would lose more than six million subscribers over the following year.[153] Prior to 2006, AOL often sent unsolicited massdirect mailof 31⁄2"floppy disksandCD-ROMscontaining their software. They were the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign.[154]According toPC World, in the 1990s "you couldn't open a magazine (PC Worldincluded) or your mailbox without an AOL disk falling out of it".[149] The mass distribution of these disks was seen as wasteful by the public and led to protest groups. One such was No More AOL CDs, a web-based effort by two IT workers[155]to collect one million disks with the intent to return the disks to AOL.[156]The website was started in August 2001, and an estimated 410,176 CDs were collected by August 2007 when the project was shut down.[156] In 2000, AOL was served with an $8 billion lawsuit alleging that its AOL 5.0 software caused significant difficulties for users attempting to use third-party Internet service providers. The lawsuit sought damages of up to $1000 for each user that had downloaded the software cited at the time of the lawsuit. AOL later agreed to a settlement of $15 million, without admission of wrongdoing.[157]The AOL software then was given a feature called AOL Dialer, or AOL Connect onMac OS X. This feature allowed users to connect to the ISP without running the full interface. This allowed users to use only the applications they wish to use, especially if they do not favor the AOL Browser. AOL 9.0 was once identified byStopbadwareas beingunder investigation[158]for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. However, as of the release of AOL 9.0 VR (Vista Ready) on January 26, 2007, it was no longer considered badware due to changes AOL made in the software.[159] When AOL gave clients access toUsenetin 1993, they hid at least one newsgroup in standard list view:alt.aol-sucks. AOL did list the newsgroup in the alternative description view, but changed the description to "Flames and complaints about America Online". With AOL clients swarmingUsenetnewsgroups, the old, existing user base started to develop a strong distaste for both AOL and its clients, referring to the new state of affairs asEternal September.[160] AOL discontinued access to Usenet on June 25, 2005.[161]No official details were provided as to the cause of decommissioning Usenet access, except providing users the suggestion to access Usenet services from a third-party,Google Groups. AOL then provided community-basedmessage boardsin lieu of Usenet. AOL has a detailed set of guidelines and expectations for users on their service, known as theTerms of Service(TOS, also known as Conditions of Service (COS) in the UK). 
It is separated into three different sections:Member Agreement,Community GuidelinesandPrivacy Policy.[162][163]All three agreements are presented to users at time of registration and digital acceptance is achieved when they access the AOL service. During the period when volunteer chat room hosts and board monitors were used, chat room hosts were given a brief online training session and test on Terms of Service violations. There have been many complaints over rules that govern an AOL user's conduct. Some users disagree with the TOS, citing the guidelines are too strict to follow coupled with the fact the TOS may change without users being made aware. A considerable cause for this was likely due to alleged censorship of user-generated content during the earlier years of growth for AOL.[164][165][166][167] In early 2005, AOL stated its intention to implement acertified emailsystem called Goodmail, which will allow companies to send email to users with whom they have pre-existing business relationships, with a visual indication that the email is from a trusted source and without the risk that the email messages might be blocked or stripped byspam filters. This decision drew fire fromMoveOn, which characterized the program as an "email tax", and theElectronic Frontier Foundation(EFF), which characterized it as a shakedown of non-profits.[168]A website called Dearaol.com[169]was launched, with an online petition and a blog that garnered hundreds of signatures from people and organizations expressing their opposition to AOL's use of Goodmail. Esther Dysondefended the move in an editorial inThe New York Times, saying "I hope Goodmail succeeds, and that it has lots of competition. I also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. Instead of the fees going to Goodmail and AOL, they will also be shared with the individual recipients."[170] Tim Lee of the Technology Liberation Front[171]posted an article that questioned the Electronic Frontier Foundation's adopting a confrontational posture when dealing with private companies. Lee's article cited a series of discussions[172]onDeclan McCullagh's Politechbot mailing list on this subject between the EFF's Danny O'Brien and antispammer Suresh Ramasubramanian, who has also compared[173]the EFF's tactics in opposing Goodmail to tactics used by Republican political strategistKarl Rove.SpamAssassindeveloper Justin Mason posted some criticism of the EFF's and Moveon's "going overboard" in their opposition to the scheme. The dearaol.com campaign lost momentum and disappeared, with the last post to the now defunct dearaol.com blog—"AOL starts the shakedown" being made on May 9, 2006. Comcast, who also used the service, announced on its website that Goodmail had ceased operations and as of February 4, 2011, they no longer used the service.[174] On August 4, 2006, AOL released a compressed text file on one of its websites containing 20 million searchkeywordsfor over 650,000 users over a three-month period between March 1 and May 31, 2006, intended for research purposes. AOL pulled the file from public access by August 7, but not before its wide distribution on the Internet by others. 
Derivative research, titledA Picture of Search[175]was published by authors Pass, Chowdhury and Torgeson for The First International Conference on Scalable Information Systems.[176] The data were used by websites such as AOLstalker[177]for entertainment purposes, where users of AOLstalker are encouraged to judge AOL clients based on the humorousness of personal details revealed by search behavior. In 2003, Jason Smathers, an AOL employee, was convicted of stealing America Online's 92 million screen names and selling them to a known spammer. Smathers pled guilty to conspiracy charges in 2005.[178][179]Smathers pled guilty to violations of the USCAN-SPAM Act of 2003.[180]He was sentenced in August 2005 to 15 months in prison; the sentencing judge also recommended Smathers be forced to pay $84,000 in restitution, triple the $28,000 that he sold the addresses for.[178] On February 27, 2012, aclass action lawsuitwas filed againstSupport.com, Inc. and partnerAOL, Inc.The lawsuit alleged Support.com and AOL's Computer Checkup "scareware" (which uses software developed by Support.com) misrepresented that their software programs would identify and resolve a host of technical problems with computers, offered to perform a free "scan", which often found problems with users' computers. The companies then offered to sell software—for which AOL allegedly charged $4.99 a month and Support.com $29—to remedy those problems.[181]Both AOL, Inc. and Support.com, Inc. settled on May 30, 2013, for $8.5 million. This included $25.00 to each valid class member and $100,000 each toConsumer Watchdogand theElectronic Frontier Foundation.[182]JudgeJacqueline Scott Corleywrote: "Distributing a portion of the [funds] to Consumer Watchdog will meet the interests of the silent class members because the organization will use the funds to help protect consumers across the nation from being subject to the types of fraudulent and misleading conduct that is alleged here," and "EFF's mission includes a strong consumer protection component, especially in regards to online protection."[181] AOL continues to market Computer Checkup.[183] Following media reports aboutPRISM, NSA's massive electronicsurveillance program, in June 2013, several technology companies were identified as participants, including AOL. According to leaks of said program, AOL joined the PRISM program in 2011.[184] At one time, most AOL users had an online "profile" hosted by theAOL Hometownservice. When AOL Hometown was discontinued, users had to create a new profile onBebo. This was an unsuccessful attempt to create a social network that would compete with Facebook. When the value of Bebo decreased to a tiny fraction of the $850 million AOL paid for it, users were forced to recreate their profiles yet again, on a new service called AOL Lifestream. AOL decided to shut down Lifestream on February 24, 2017, and gave users one month's notice to save photos and videos that had been uploaded to Lifestream.[185]Following the shutdown, AOL no longer provides any option for hosting user profiles. During the Hometown/Bebo/Lifestream era, another user's profile could be displayed by clicking the "Buddy Info" button in the AOL Desktop software. After the shutdown of Lifestream, this was no longer supported, but opened to theAIMhome page (www.aim.com), which also became defunct, redirecting to AOL's home page. 40°43′51″N73°59′29″W / 40.7308°N 73.9914°W /40.7308; -73.9914
https://en.wikipedia.org/wiki/Criticism_of_AOL#Community_leaders
Generic data models are generalizations of conventional data models. They define standardised general relation types, together with the kinds of things that may be related by such a relation type. The definition of a generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related. Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardisation of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model. Generic data models are developed as an approach to solve some shortcomings of conventional data models. For example, different modelers usually produce different conventional data models of the same domain. This can lead to difficulty in bringing the models of different people together and is an obstacle for data exchange and data integration. Invariably, however, this difference is attributable to different levels of abstraction in the models and differences in the kinds of facts that can be instantiated (the semantic expression capabilities of the models). The modelers need to communicate and agree on certain elements which are to be rendered more concretely, in order to make the differences less significant. There are generic patterns that can be used to advantage for modeling business. These include entity types for PARTY (with included PERSON and ORGANIZATION), PRODUCT TYPE, PRODUCT INSTANCE, ACTIVITY TYPE, ACTIVITY INSTANCE, CONTRACT, GEOGRAPHIC AREA, and SITE. A model which explicitly includes versions of these entity classes will be both reasonably robust and reasonably easy to understand. More abstract models are suitable for general purpose tools, and consist of variations on THING and THING TYPE, with all actual data being instances of these. Such abstract models are on one hand more difficult to manage, since they are not very expressive of real world things, but on the other hand they have a much wider applicability, especially if they are accompanied by a standardised dictionary. More concrete and specific data models will risk having to change as the scope or environment changes. One approach to generic data modeling has the following characteristics: This way of modeling allows the addition of standard classes and standard relation types as data (instances), which makes the data model flexible and prevents data model changes when the scope of the application changes. A generic data model obeys the following rules:[2] Examples of generic data models are 1. David C. Hay. 1995. Data Model Patterns: Conventions of Thought. (New York: Dorset House). 2. David C. Hay. 2011. Enterprise Model Patterns: Describing the World. (Bradley Beach, New Jersey: Technics Publications). 3. Matthew West. 2011. Developing High Quality Data Models. (Morgan Kaufmann).
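As a minimal sketch of the abstract THING / THING TYPE style described above (all names are illustrative assumptions, not part of any standard): both the kinds and the relation types are stored as ordinary data, so classifying a new kind of thing or adding a new relation type never requires a schema change.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Thing:
    name: str

@dataclass(frozen=True)
class Kind(Thing):
    """A kind (class) is itself a thing, so kinds can themselves be classified."""

@dataclass(frozen=True)
class Fact:
    relation_type: str   # e.g. "classification" or "part-whole", from an extensible list
    left: Thing
    right: Thing

facts = []  # the whole model is just a growing collection of Fact instances

def classify(individual: Thing, kind: Kind) -> None:
    facts.append(Fact("classification", individual, kind))

def part_of(part: Thing, whole: Thing) -> None:
    facts.append(Fact("part-whole", part, whole))

# Adding new kinds, individuals, or relation types means adding data, not changing the schema.
pump = Kind("centrifugal pump")
p101 = Thing("P-101")
plant = Thing("Plant A")
classify(p101, pump)
part_of(p101, plant)
print(facts)
```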
https://en.wikipedia.org/wiki/Generic_data_model
Key Transparency allows communicating parties to verify public keys used in end-to-end encryption.[1] In many end-to-end encryption services, to initiate communication a user will reach out to a central server and request the public keys of the user with which they wish to communicate.[2] If the central server is malicious or becomes compromised, a man-in-the-middle attack can be launched through the issuance of incorrect public keys. The communications can then be intercepted and manipulated.[3] Additionally, legal pressure could be applied by surveillance agencies to manipulate public keys and read messages.[2] With Key Transparency, public keys are posted to a public log that can be universally audited.[4] Communicating parties can verify that the public keys used are accurate.[4]
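A deliberately simplified sketch of the idea follows (illustrative only; deployed designs typically use Merkle trees and signed tree heads rather than the plain hash chain shown here, and every name below is an assumption): the provider appends each user-to-key binding to an append-only log whose entries are hash-chained, so any auditor can recompute the chain and detect whether earlier entries were rewritten.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log, user, public_key):
    """Append a (user, key) binding, chaining it to the previous log head."""
    prev = log[-1]["head"] if log else GENESIS
    record = {"user": user, "key": public_key, "prev": prev}
    head = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append({**record, "head": head})

def audit(log):
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        record = {"user": entry["user"], "key": entry["key"], "prev": prev}
        expected = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["head"] != expected:
            return False
        prev = entry["head"]
    return True

log = []
append_entry(log, "alice", "alice-public-key")
append_entry(log, "bob", "bob-public-key")
print(audit(log))           # True: the published history is self-consistent
log[0]["key"] = "evil-key"  # a retroactive key swap...
print(audit(log))           # ...is detected by any auditor
```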
https://en.wikipedia.org/wiki/Key_Transparency
Instatistics, originally ingeostatistics,krigingorKriging(/ˈkriːɡɪŋ/), also known asGaussian process regression, is a method ofinterpolationbased onGaussian processgoverned by priorcovariances. Under suitable assumptions of the prior, kriging gives thebest linear unbiased prediction(BLUP) at unsampled locations.[1]Interpolating methods based on other criteria such assmoothness(e.g.,smoothing spline) may not yield the BLUP. The method is widely used in the domain ofspatial analysisandcomputer experiments. The technique is also known asWiener–Kolmogorov prediction, afterNorbert WienerandAndrey Kolmogorov. The theoretical basis for the method was developed by the French mathematicianGeorges Matheronin 1960, based on the master's thesis ofDanie G. Krige, the pioneering plotter of distance-weighted average gold grades at theWitwatersrandreef complex inSouth Africa. Krige sought to estimate the most likely distribution of gold based on samples from a few boreholes. The English verb isto krige, and the most common noun iskriging. The word is sometimes capitalized asKrigingin the literature. Though computationally intensive in its basic formulation, kriging can be scaled to larger problems using variousapproximation methods. Kriging predicts the value of a function at a given point by computing a weighted average of the known values of the function in the neighborhood of the point. The method is closely related toregression analysis. Both theories derive abest linear unbiased estimatorbased on assumptions oncovariances, make use ofGauss–Markov theoremto prove independence of the estimate and error, and use very similar formulae. Even so, they are useful in different frameworks: kriging is made for estimation of a single realization of a random field, while regression models are based on multiple observations of a multivariate data set. The kriging estimation may also be seen as asplinein areproducing kernel Hilbert space, with the reproducing kernel given by the covariance function.[2]The difference with the classical kriging approach is provided by the interpretation: while the spline is motivated by a minimum-norm interpolation based on a Hilbert-space structure, kriging is motivated by an expected squared prediction error based on a stochastic model. Kriging withpolynomial trend surfacesis mathematically identical togeneralized least squarespolynomialcurve fitting. Kriging can also be understood as a form ofBayesian optimization.[3]Kriging starts with apriordistributionoverfunctions. This prior takes the form of a Gaussian process:N{\displaystyle N}samples from a function will benormally distributed, where thecovariancebetween any two samples is the covariance function (orkernel) of the Gaussian process evaluated at the spatial location of two points. Asetof values is then observed, each value associated with a spatial location. Now, a new value can be predicted at any new spatial location by combining the Gaussian prior with a Gaussianlikelihood functionfor each of the observed values. The resultingposteriordistribution is also Gaussian, with a mean and covariance that can be simply computed from the observed values, their variance, and the kernel matrix derived from the prior. In geostatistical models, sampled data are interpreted as the result of a random process. 
The fact that these models incorporate uncertainty in their conceptualization does not mean that the phenomenon – the forest, the aquifer, the mineral deposit – has resulted from a random process; rather, it allows one to build a methodological basis for the spatial inference of quantities in unobserved locations and to quantify the uncertainty associated with the estimator. A stochastic process is, in the context of this model, simply a way to approach the set of data collected from the samples. The first step in geostatistical modelling is to create a random process that best describes the set of observed data. A value from location x_1 (generic denomination of a set of geographic coordinates) is interpreted as a realization z(x_1) of the random variable Z(x_1). In the space A, where the set of samples is dispersed, there are N realizations of the random variables Z(x_1), Z(x_2), ..., Z(x_N), correlated between themselves. The set of random variables constitutes a random function, of which only one realization is known – the set z(x_i) of observed data. With only one realization of each random variable, it is theoretically impossible to determine any statistical parameter of the individual variables or the function. The proposed solution in the geostatistical formalism consists in assuming various degrees of stationarity in the random function, in order to make the inference of some statistical values possible. For instance, if one assumes, based on the homogeneity of samples in area A where the variable is distributed, the hypothesis that the first moment is stationary (i.e. all random variables have the same mean), then one is assuming that the mean can be estimated by the arithmetic mean of sampled values. The hypothesis of stationarity related to the second moment is defined in the following way: the correlation between two random variables solely depends on the spatial distance between them and is independent of their location. Thus if h = x_2 − x_1 (a vector) and h = |h|, then: For simplicity, we define C(x_i, x_j) = C(Z(x_i), Z(x_j)) and γ(x_i, x_j) = γ(Z(x_i), Z(x_j)). This hypothesis allows one to infer those two measures – the variogram and the covariogram: where: In this set, (i, j) and (j, i) denote the same element. Generally an "approximate distance" h is used, implemented using a certain tolerance. Spatial inference, or estimation, of a quantity Z : R^n → R, at an unobserved location x_0, is calculated from a linear combination of the observed values z_i = Z(x_i) and weights w_i(x_0), i = 1, ..., N: The weights w_i are intended to summarize two extremely important procedures in a spatial inference process: When calculating the weights w_i, there are two objectives in the geostatistical formalism: unbiasedness and minimal variance of estimation.
If the cloud of real values Z(x_0) is plotted against the estimated values Ẑ(x_0), the criterion for global unbiasedness, intrinsic stationarity or wide-sense stationarity of the field, implies that the mean of the estimations must be equal to the mean of the real values. The second criterion says that the mean of the squared deviations (Ẑ(x) − Z(x)) must be minimal: the more dispersed the cloud of estimated values is around the cloud of real values, the more imprecise the estimator. Depending on the stochastic properties of the random field and the various degrees of stationarity assumed, different methods for calculating the weights can be deduced, i.e. different types of kriging apply. Classical methods are: The unknown value Z(x_0) is interpreted as a random variable located at x_0, as are the values of the neighbouring samples Z(x_i), i = 1, ..., N. The estimator Ẑ(x_0) is also interpreted as a random variable located at x_0, a result of the linear combination of variables. Kriging seeks to minimize the mean square value of the following error in estimating Z(x_0), subject to lack of bias: The two quality criteria referred to previously can now be expressed in terms of the mean and variance of the new random variable ε(x_0): Since the random function is stationary, E[Z(x_i)] = E[Z(x_0)] = m, the weights must sum to 1 in order to ensure that the model is unbiased. This can be seen as follows: Two estimators can have E[ε(x_0)] = 0, but the dispersion around their mean determines the difference between the quality of estimators. To find an estimator with minimum variance, we need to minimize E[ε(x_0)²]. See covariance matrix for a detailed explanation. where the literals {Var_{x_i}, Var_{x_0}, Cov_{x_i x_0}} stand for Once the covariance model or variogram, C(h) or γ(h), valid over the whole field of analysis of Z(x), has been defined, we can write an expression for the estimation variance of any estimator as a function of the covariance between the samples and the covariances between the samples and the point to estimate: Some conclusions can be asserted from this expression. The variance of estimation: Solving this optimization problem (see Lagrange multipliers) results in the kriging system: The additional parameter μ is a Lagrange multiplier used in the minimization of the kriging error σ_k²(x) to honour the unbiasedness condition. Simple kriging is mathematically the simplest, but the least general.[9] It assumes the expectation of the random field is known and relies on a covariance function. However, in most applications neither the expectation nor the covariance are known beforehand. The practical assumptions for the application of simple kriging are: The covariance function is a crucial design choice, since it stipulates the properties of the Gaussian process and thereby the behaviour of the model.
The covariance function encodes information about, for instance, smoothness and periodicity, which is reflected in the estimate produced. A very common covariance function is the squared exponential, which heavily favours smooth function estimates.[10]For this reason, it can produce poor estimates in many real-world applications, especially when the true underlying function contains discontinuities and rapid changes. Thekriging weightsofsimple kriginghave no unbiasedness condition and are given by thesimple kriging equation system: This is analogous to a linear regression ofZ(x0){\displaystyle Z(x_{0})}on the otherz1,…,zn{\displaystyle z_{1},\ldots ,z_{n}}. The interpolation by simple kriging is given by The kriging error is given by which leads to the generalised least-squares version of theGauss–Markov theorem(Chiles & Delfiner 1999, p. 159): See alsoBayesian Polynomial Chaos Although kriging was developed originally for applications in geostatistics, it is a general method of statistical interpolation and can be applied within any discipline to sampled data from random fields that satisfy the appropriate mathematical assumptions. It can be used where spatially related data has been collected (in 2-D or 3-D) and estimates of "fill-in" data are desired in the locations (spatial gaps) between the actual measurements. To date kriging has been used in a variety of disciplines, including the following: Another very important and rapidly growing field of application, inengineering, is the interpolation of data coming out as response variables of deterministic computer simulations,[28]e.g.finite element method(FEM) simulations. In this case, kriging is used as ametamodelingtool, i.e. a black-box model built over a designed set ofcomputer experiments. In many practical engineering problems, such as the design of ametal formingprocess, a single FEM simulation might be several hours or even a few days long. It is therefore more efficient to design and run a limited number of computer simulations, and then use a kriging interpolator to rapidly predict the response in any other design point. Kriging is therefore used very often as a so-calledsurrogate model, implemented insideoptimizationroutines.[29]Kriging-based surrogate models may also be used in the case of mixed integer inputs.[30]
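As a concrete illustration of the simple kriging equations above, the following sketch (plain NumPy; the squared-exponential covariance, its length scale, and the sample data are illustrative assumptions, not values taken from the text) builds the covariance matrix of the observations, solves for the kriging weights, and predicts the field at an unsampled location together with the kriging variance.

```python
import numpy as np

def sq_exp_cov(a, b, variance=1.0, length_scale=1.0):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Hypothetical observations z(x_i) of a field with known mean m (simple kriging)
x_obs = np.array([0.0, 1.0, 2.5, 4.0])
z_obs = np.array([0.3, 0.9, 0.1, -0.4])
m = 0.0                      # assumed-known expectation of the random field

x0 = np.array([1.7])         # unsampled location to predict

C = sq_exp_cov(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))  # Cov(Z(x_i), Z(x_j))
c0 = sq_exp_cov(x_obs, x0)                                  # Cov(Z(x_i), Z(x_0))

w = np.linalg.solve(C, c0)                # simple kriging weights (no sum-to-one constraint)
z_hat = m + w[:, 0] @ (z_obs - m)         # best linear unbiased prediction at x0
var_k = sq_exp_cov(x0, x0)[0, 0] - c0[:, 0] @ w[:, 0]   # kriging variance at x0

print(f"prediction at x0: {z_hat:.3f}, kriging variance: {var_k:.3f}")
```

Ordinary kriging would differ only in adding the sum-to-one constraint on the weights via the Lagrange multiplier described above, which removes the need to know the mean m.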
https://en.wikipedia.org/wiki/Kriging
A number of computeroperating systemsemploy security features to help preventmalicious softwarefrom gaining sufficient privileges to compromise the computer system. Operating systems lacking such features, such asDOS,Windowsimplementations prior toWindows NT(and its descendants), CP/M-80, and all Mac operating systems prior to Mac OS X, had only one category of user who was allowed to do anything. With separate execution contexts it is possible for multiple users to store private files, for multiple users to use a computer at the same time, to protect the system against malicious users, and to protect the system against malicious programs. The first multi-user secure system wasMultics, which began development in the 1960s; it wasn't untilUNIX,BSD,Linux, andNTin the late 80s and early 90s that multi-tasking security contexts were brought tox86consumer machines. A major security consideration is the ability of malicious applications to simulate keystrokes or mouse clicks, thus tricking orspoofingthe security feature into granting malicious applications higher privileges. Another security consideration is the ability of malicious software tospoofdialogs that look like legitimate security confirmation requests. If the user were to input credentials into a fake dialog, thinking the dialog was legitimate, the malicious software would then know the user's password. If the Secure Desktop or similar feature were disabled, the malicious software could use that password to gain higher privileges. Another consideration that has gone into these implementations isusability. In order for an operating system to know when to prompt the user for authorization, an application or action needs to identify itself as requiring elevated privileges. While it is technically possible for the user to be prompted at the exact moment that an operation requiring such privileges is executed, it is often not ideal to ask for privileges partway through completing a task. If the user were unable to provide proper credentials, the work done before requiring administrator privileges would have to be undone because the task could not be seen through to the end. In the case of user interfaces such as the Control Panel in Microsoft Windows, and the Preferences panels in Mac OS X, the exact privilege requirements are hard-coded into the system so that the user is presented with an authorization dialog at an appropriate time (for example, before displaying information that only administrators should see). Different operating systems offer distinct methods for applications to identify their security requirements:
https://en.wikipedia.org/wiki/Comparison_of_privilege_authorization_features
In statistics, interval estimation is the use of sample data to estimate an interval of possible values of a parameter of interest. This is in contrast to point estimation, which gives a single value.[1] The most prevalent forms of interval estimation are confidence intervals (a frequentist method) and credible intervals (a Bayesian method).[2] Less common forms include likelihood intervals, fiducial intervals, tolerance intervals, and prediction intervals. For a non-statistical method, interval estimates can be deduced from fuzzy logic. Confidence intervals are used to estimate the parameter of interest from a sampled data set, commonly the mean or standard deviation. A confidence interval states that there is 100γ% confidence that the parameter of interest lies between a lower and an upper bound. A common misconception is that a confidence interval contains 100γ% of the data set within or above/below its bounds; an interval with that property is a tolerance interval, which is discussed below. There are multiple methods used to build a confidence interval, and the correct choice depends on the data being analyzed. For a normal distribution with a known variance, one uses the z-table to create an interval with confidence level 100γ% centered around the sample mean of a data set of n measurements, x̄ ± z·σ/√n. For a binomial distribution, confidence intervals can be approximated using the Wald approximate method, the Jeffreys interval, and the Clopper–Pearson interval. The Jeffreys method can also be used to approximate intervals for a Poisson distribution.[3] If the underlying distribution is unknown, one can utilize bootstrapping to create bounds about the median of the data set. As opposed to a confidence interval, a credible interval requires a prior assumption, modifying the assumption utilizing a Bayes factor, and determining a posterior distribution. Utilizing the posterior distribution, one can determine a 100γ% probability that the parameter of interest is included, as opposed to the confidence interval, where one can be 100γ% confident that an estimate is included within an interval.[4] While a prior assumption is helpful towards providing more data for building an interval, it removes the objectivity of a confidence interval. A prior is used to inform a posterior; if unchallenged, this prior can lead to incorrect predictions.[5] The credible interval's bounds are variable, unlike those of the confidence interval. There are multiple methods to determine where the correct upper and lower limits should be located. Common techniques to adjust the bounds of the interval include the highest posterior density interval (HPDI), the equal-tailed interval, or centering the interval around the mean. The likelihood-based method utilizes the principles of a likelihood function to estimate the parameter of interest. Utilizing the likelihood-based method, confidence intervals can be found for exponential, Weibull, and lognormal means. Additionally, likelihood-based approaches can give confidence intervals for the standard deviation. It is also possible to create a prediction interval by combining the likelihood function and the future random variable.[3] Fiducial inference utilizes a data set, carefully removes the noise, and recovers a distribution estimator, the Generalized Fiducial Distribution (GFD). Without the use of Bayes' theorem, there is no assumption of a prior, much like confidence intervals. Fiducial inference is a less common form of statistical inference. The founder, R.A. Fisher, who had been developing inverse probability methods, had his own questions about the validity of the process.
While fiducial inference was developed in the early twentieth century, the late twentieth century believed that the method was inferior to the frequentist and Bayesian approaches but held an important place in historical context for statistical inference. However, modern-day approaches have generalized the fiducial interval into Generalized Fiducial Inference (GFI), which can be used to estimate discrete and continuous data sets.[6] Tolerance intervals use collected data set population to obtain an interval, within tolerance limits, containing 100γ% values. Examples typically used to describe tolerance intervals include manufacturing. In this context, a percentage of an existing product set is evaluated to ensure that a percentage of the population is included within tolerance limits. When creating tolerance intervals, the bounds can be written in terms of an upper and lower tolerance limit, utilizing the samplemean,μ{\displaystyle \mu }, and the samplestandard deviation, s. for two-sided intervals And in the case of one-sided intervals where the tolerance is required only above or below a critical value, ki{\displaystyle k_{i}}varies by distribution and the number of sides, i, in the interval estimate. In a normal distribution,k2{\displaystyle k_{2}}can be expressed as[7] Where, zα/2{\displaystyle z_{\alpha /2}}is the critical values obtained from the normal distribution. A prediction interval estimates the interval containing future samples with some confidence, γ. Prediction intervals can be used for bothBayesianandfrequentistcontexts. These intervals are typically used in regression data sets, but prediction intervals are not used for extrapolation beyond the previous data's experimentally controlled parameters.[8] Fuzzy logic is used to handle decision-making in a non-binary fashion for artificial intelligence, medical decisions, and other fields. In general, it takes inputs, maps them throughfuzzy inference systems, and produces an output decision. This process involves fuzzification, fuzzy logic rule evaluation, and defuzzification. When looking at fuzzy logic rule evaluation,membership functionsconvert our non-binary input information into tangible variables. These membership functions are essential to predict the uncertainty of the system. Two-sided intervals estimate a parameter of interest, Θ, with a level of confidence, γ, using a lower (lb{\displaystyle l_{b}}) and upper bound (ub{\displaystyle u_{b}}). Examples may include estimating the average height of males in a geographic region or lengths of a particular desk made by a manufacturer. These cases tend to estimate the central value of a parameter. Typically, this is presented in a form similar to the equation below. Differentiating from the two-sided interval, the one-sided interval utilizes a level of confidence, γ, to construct a minimum or maximum bound which predicts the parameter of interest to γ*100% probability. Typically, a one-sided interval is required when the estimate's minimum or maximum bound is not of interest. When concerned about the minimum predicted value of Θ, one is no longer required to find an upper bounds of the estimate, leading to a form reduced form of the two-sided. As a result of removing the upper bound and maintaining the confidence, the lower-bound (lb{\displaystyle l_{b}}) will increase. Likewise, when concerned with finding only an upper bound of a parameter's estimate, the upper bound will decrease. 
A one-sided interval is commonly found in quality assurance in materials production, where an expected value of a material's strength, Θ, must be above a certain minimum value (l_b) with some confidence (100γ%). In this case, the manufacturer is not concerned with producing a product that is too strong, so there is no upper bound (u_b). When determining the statistical significance of a parameter, it is best to understand the data and its collection methods. Before collecting data, an experiment should be planned such that the sampling error is statistical variability (a random error), as opposed to a statistical bias (a systematic error).[9] After experimenting, a typical first step in creating interval estimates is exploratory analysis, plotting the data using various graphical methods. From this, one can determine the distribution of samples from the data set. Producing interval boundaries with incorrect assumptions about the distribution makes a prediction faulty.[10] When interval estimates are reported, they should have a commonly held interpretation within and beyond the scientific community. Interval estimates derived from fuzzy logic have much more application-specific meanings. In commonly occurring situations there should be sets of standard procedures that can be used, subject to the checking and validity of any required assumptions. This applies for both confidence intervals and credible intervals. However, in more novel situations there should be guidance on how interval estimates can be formulated. In this regard confidence intervals and credible intervals have a similar standing, but there are two differences. First, credible intervals can readily deal with prior information, while confidence intervals cannot. Secondly, confidence intervals are more flexible and can be used practically in more situations than credible intervals: one area where credible intervals suffer in comparison is in dealing with non-parametric models. There should be ways of testing the performance of interval estimation procedures. This arises because many such procedures involve approximations of various kinds, and there is a need to check that the actual performance of a procedure is close to what is claimed. The use of stochastic simulations makes this straightforward in the case of confidence intervals, but it is somewhat more problematic for credible intervals, where prior information needs to be taken properly into account. Checking of credible intervals can be done for situations representing no prior information, but the check involves checking the long-run frequency properties of the procedures. Severini (1993) discusses conditions under which credible intervals and confidence intervals will produce similar results, and also discusses both the coverage probabilities of credible intervals and the posterior probabilities associated with confidence intervals.[11] In decision theory, which is a common approach to and justification for Bayesian statistics, interval estimation is not of direct interest. The outcome is a decision, not an interval estimate, and thus Bayesian decision theorists use a Bayes action: they minimize the expected loss of a loss function with respect to the entire posterior distribution, not a specific interval. Applications of confidence intervals are used to solve a variety of problems dealing with uncertainty.
Katz (1975) proposes various challenges and benefits for utilizing interval estimates in legal proceedings.[12] For use in medical research, Altman (1990) discusses the use of confidence intervals and guidelines for using them.[13] In manufacturing, it is also common to find interval estimates estimating a product's life, or evaluating the tolerances of a product. Meeker and Escobar (1998) present methods to analyze reliability data under parametric and nonparametric estimation, including the prediction of future random variables (prediction intervals).[14]
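To make the two most common constructions concrete, the sketch below (plain NumPy; the sample data, known σ, and confidence level are illustrative choices, not values from the article) computes a two-sided z-based confidence interval for a mean with known variance and a bootstrap percentile interval for the median when the underlying distribution is unknown.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical sample, sigma assumed known

# Two-sided z interval for the mean with known sigma: xbar +/- z * sigma / sqrt(n)
z = 1.959964                      # critical value for 95% two-sided coverage
sigma = 2.0
xbar = data.mean()
half_width = z * sigma / np.sqrt(len(data))
print("z interval for the mean:", (xbar - half_width, xbar + half_width))

# Bootstrap percentile interval for the median (no distributional assumption)
boot_medians = np.array([
    np.median(rng.choice(data, size=len(data), replace=True))
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print("bootstrap interval for the median:", (lo, hi))
```

A one-sided bound with the same confidence would simply drop one tail and use the one-sided critical value (about 1.645 for 95%), which is why the retained bound moves closer to the point estimate.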
https://en.wikipedia.org/wiki/Interval_estimation
Incomputer science,partitioned global address space(PGAS) is aparallel programming modelparadigm. PGAS is typified by communication operations involving a global memoryaddress spaceabstraction that is logically partitioned, where a portion is local to each process, thread, orprocessing element.[1][2]The novelty of PGAS is that the portions of theshared memoryspace may have an affinity for a particular process, thereby exploitinglocality of referencein order to improve performance. A PGAS memory model is featured in various parallel programming languages and libraries, including:Coarray Fortran,Unified Parallel C,Split-C,Fortress,Chapel,X10,UPC++,Coarray C++,Global Arrays,DASHandSHMEM. The PGAS paradigm is now an integrated part of theFortranlanguage, as ofFortran 2008which standardized coarrays. The various languages and libraries offering a PGAS memory model differ widely in other details, such as the base programming language and the mechanisms used to express parallelism. Many PGAS systems combine the advantages of aSPMDprogramming style for distributed memory systems (as employed byMPI) with the data referencing semantics of shared memory systems. In contrast tomessage passing, PGAS programming models frequently offer one-sided communication operations such as Remote Memory Access (RMA), whereby one processing element may directly access memory with affinity to a different (potentially remote) process, without explicit semantic involvement by the passive target process. PGAS offers more efficiency and scalability than traditional shared-memory approaches with a flat address space, because hardware-specificdata localitycan be explicitly exposed in the semantic partitioning of the address space. A variant of the PGAS paradigm,asynchronous partitioned global address space(APGAS) augments the programming model with facilities for both local and remote asynchronous task creation.[3]Two programming languages that use this model areChapelandX10.
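The core idea – a single global index space whose pieces have affinity to particular processing elements, accessed through one-sided get/put operations – can be sketched as a toy model. The Python class below is purely illustrative (real PGAS systems such as UPC or Chapel rely on compiler and communication-layer support, and names like GlobalArray and n_places are invented for this example); it only shows how a global index maps to an owning "place" and a local offset.

```python
# Toy model of a partitioned global address space: one global index range,
# block-partitioned so that each "place" (process/thread) owns a contiguous slice.
class GlobalArray:
    def __init__(self, length: int, n_places: int):
        self.length = length
        self.n_places = n_places
        self.block = -(-length // n_places)                 # ceiling division
        # In a real PGAS runtime each partition lives in a different process's memory;
        # here plain lists stand in for per-place storage.
        self.partitions = [[0] * min(self.block, max(0, length - p * self.block))
                           for p in range(n_places)]

    def owner(self, i: int) -> tuple:
        """Affinity: which place owns global index i, and the local offset there."""
        return i // self.block, i % self.block

    def get(self, i: int):        # one-sided read: no action required from the owner
        place, off = self.owner(i)
        return self.partitions[place][off]

    def put(self, i: int, value):  # one-sided write into possibly remote memory
        place, off = self.owner(i)
        self.partitions[place][off] = value

ga = GlobalArray(length=10, n_places=4)
ga.put(7, 42)                      # index 7 has affinity to place 2 under this blocking
print(ga.owner(7), ga.get(7))
```

The point of the exercise is the owner() mapping: because locality is explicit, a program can arrange its loops so that most accesses hit the local partition, which is the performance argument made above.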
https://en.wikipedia.org/wiki/Partitioned_global_address_space
Bivariate analysisis one of the simplest forms ofquantitative (statistical) analysis.[1]It involves the analysis of twovariables(often denoted asX,Y), for the purpose of determining the empirical relationship between them.[1] Bivariate analysis can be helpful in testing simplehypothesesofassociation. Bivariate analysis can help determine to what extent it becomes easier to know and predict a value for one variable (possibly adependent variable) if we know the value of the other variable (possibly theindependent variable) (see alsocorrelationandsimple linear regression).[2] Bivariate analysis can be contrasted withunivariate analysisin which only one variable is analysed.[1]Like univariate analysis, bivariate analysis can bedescriptiveorinferential. It is the analysis of the relationship between the two variables.[1]Bivariate analysis is a simple (two variable) special case ofmultivariate analysis(where multiple relations between multiple variables are examined simultaneously).[1] Regression is a statistical technique used to help investigate how variation in one or more variables predicts or explains variation in another variable. Bivariate regression aims to identify the equation representing the optimal line that defines the relationship between two variables based on a particular data set. This equation is subsequently applied to anticipate values of the dependent variable not present in the initial dataset. Through regression analysis, one can derive the equation for the curve or straight line and obtain the correlation coefficient. Simple linear regression is a statistical method used to model the linear relationship between an independent variable and a dependent variable. It assumes a linear relationship between the variables and is sensitive to outliers. The best-fitting linear equation is often represented as a straight line to minimize the difference between the predicted values from the equation and the actual observed values of the dependent variable. Equation:y=mx+b{\displaystyle y=mx+b} x{\displaystyle x}: independent variable (predictor) y{\displaystyle y}: dependent variable (outcome) m{\displaystyle m}: slope of the line b{\displaystyle b}:y{\displaystyle y}-intercept The least squares regression line is a method in simple linear regression for modeling the linear relationship between two variables, and it serves as a tool for making predictions based on new values of the independent variable. The calculation is based on the method of theleast squarescriterion. The goal is to minimize the sum of the squared vertical distances (residuals) between the observed y-values and the corresponding predicted y-values of each data point. A bivariate correlation is a measure of whether and how two variables covary linearly, that is, whether the variance of one changes in a linear fashion as the variance of the other changes. Covariance can be difficult to interpret across studies because it depends on the scale or level of measurement used. For this reason, covariance is standardized by dividing by the product of the standard deviations of the two variables to produce the Pearson product–moment correlation coefficient (also referred to as thePearson correlation coefficientor correlation coefficient), which is usually denoted by the letter “r.”[3] Pearson’s correlation coefficient is used when both variables are measured on an interval or ratio scale. Other correlation coefficients or analyses are used when variables are not interval or ratio, or when they are not normally distributed. 
Examples areSpearman’s correlation coefficient,Kendall’s tau,Biserial correlation, and Chi-square analysis. Three important notes should be highlighted with regard to correlation: If thedependent variable—the one whose value is determined to some extent by the other,independent variable— is acategorical variable, such as the preferred brand of cereal, thenprobitorlogitregression (ormultinomial probitormultinomial logit) can be used. If both variables areordinal, meaning they are ranked in a sequence as first, second, etc., then arank correlationcoefficient can be computed. If just the dependent variable is ordinal,ordered probitorordered logitcan be used. If the dependent variable is continuous—either interval level or ratio level, such as a temperature scale or an income scale—thensimple regressioncan be used. If both variables aretime series, a particular type of causality known asGranger causalitycan be tested for, andvector autoregressioncan be performed to examine the intertemporal linkages between the variables. When neither variable can be regarded as dependent on the other, regression is not appropriate but some form ofcorrelationanalysis may be.[4] Graphsthat are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, ascatterplotis a common graph. When one variable is categorical and the other continuous, abox plotis common and when both are categorical amosaic plotis common. These graphs are part ofdescriptive statistics.
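The least-squares line and the Pearson correlation coefficient described above can be computed directly from their definitions. The short NumPy sketch below uses invented paired observations purely for illustration.

```python
import numpy as np

# Hypothetical paired observations of an independent variable x and a dependent variable y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.3])

# Least-squares line y = m*x + b: minimize the sum of squared vertical residuals
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - m * x.mean()

# Pearson product-moment correlation: covariance standardized by both standard deviations
r = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
)

print(f"slope m = {m:.3f}, intercept b = {b:.3f}, Pearson r = {r:.3f}")
print("prediction at x = 7:", m * 7 + b)   # using the fitted line for a new x value
```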
https://en.wikipedia.org/wiki/Bivariate_analysis
Data securityordata protectionmeans protectingdigital data, such as those in adatabase, from destructive forces and from the unwanted actions of unauthorized users,[1]such as acyberattackor adata breach.[2] Disk encryptionrefers toencryptiontechnology that encrypts data on ahard disk drive.  Disk encryption typically takes form in eithersoftware(seedisk encryption software) orhardware(seedisk encryption hardware). Disk encryption is often referred to ason-the-fly encryption(OTFE) or transparent encryption. Software-based security solutions encrypt the data to protect it from theft. However, amalicious programor ahackercouldcorrupt the datato make it unrecoverable, making the system unusable. Hardware-based security solutions prevent read and write access to data, which provides very strong protection against tampering and unauthorized access. Hardware-based security or assistedcomputer securityoffers an alternative to software-only computer security.Security tokenssuch as those usingPKCS#11or a mobile phone may be more secure due to the physical access required in order to be compromised.[3]Access is enabled only when the token is connected and the correctPINis entered (seetwo-factor authentication). However, dongles can be used by anyone who can gain physical access to it. Newer technologies in hardware-based security solve this problem by offering full proof of security for data.[4] Working off hardware-based security: A hardware device allows a user to log in, log out and set different levels through manual actions. The device usesbiometric technologyto prevent malicious users from logging in, logging out, and changing privilege levels. The current state of a user of the device is read by controllers inperipheral devicessuch as hard disks. Illegal access by a malicious user or a malicious program is interrupted based on the current state of a user by hard disk and DVD controllers making illegal access to data impossible. Hardware-based access control is more secure than the protection provided by the operating systems as operating systems are vulnerable to malicious attacks byvirusesand hackers. The data on hard disks can be corrupted after malicious access is obtained. With hardware-based protection, the software cannot manipulate the user privilege levels. Ahackeror a malicious program cannot gain access to secure data protected by hardware or perform unauthorized privileged operations. This assumption is broken only if the hardware itself is malicious or contains a backdoor.[5]The hardware protects the operating system image and file system privileges from being tampered with. Therefore, a completely secure system can be created using a combination of hardware-based security and secure system administration policies. Backupsare used to ensure data that is lost can be recovered from another source. 
It is considered essential to keep a backup of any data in most industries and the process is recommended for any files of importance to a user.[6] Data maskingof structured data is the process of obscuring (masking) specific data within a database table or cell to ensure that data security is maintained and sensitive information is not exposed to unauthorized personnel.[7]This may include masking the data from users (for example so banking customer representatives can only see the last four digits of a customer's national identity number), developers (who need real production data to test new software releases but should not be able to see sensitive financial data), outsourcing vendors, etc.[8] Data erasureis a method of software-based overwriting that completely wipes all electronic data residing on a hard drive or other digital media to ensure that no sensitive data is lost when an asset is retired or reused.[9] In theUK, theData Protection Actis used to ensure that personal data is accessible to those whom it concerns, and provides redress to individuals if there are inaccuracies.[10]This is particularly important to ensure individuals are treated fairly, for example for credit checking purposes. The Data Protection Act states that only individuals and companies with legitimate and lawful reasons can process personal information and cannot be shared.Data Privacy Dayis an internationalholidaystarted by theCouncil of Europethat occurs every January 28.[11] Since theGeneral Data Protection Regulation(GDPR) of theEuropean Union(EU) became law on May 25, 2018, organizations may face significant penalties of up to €20 million or 4% of their annual revenue if they do not comply with the regulation.[12]It is intended that GDPR will force organizations to understand theirdata privacyrisks and take the appropriate measures to reduce the risk of unauthorized disclosure of consumers’ private information.[13] The international standardsISO/IEC 27001:2013 andISO/IEC 27002:2013 cover data security under the topic ofinformation security, and one of its cardinal principles is that all stored information, i.e. data, should be owned so that it is clear whose responsibility it is to protect and control access to that data.[14][15]The following are examples of organizations that help strengthen and standardize computing security: TheTrusted Computing Groupis an organization that helps standardize computing security technologies. ThePayment Card Industry Data Security Standard(PCI DSS) is a proprietary international information security standard for organizations that handle cardholder information for the majordebit,credit, prepaid,e-purse,automated teller machines, and point of sale cards.[16] TheGeneral Data Protection Regulation (GDPR)proposed by the European Commission will strengthen and unify data protection for individuals within the EU, whilst addressing the export of personal data outside the EU. The four types of technical safeguards are access controls, flow controls, inference controls, anddata encryption. Access controls manage user entry and data manipulation, while flow controls regulate data dissemination. Inference controls prevent deduction of confidential information from statistical databases and data encryption prevents unauthorized access to confidential information.[17]
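Data masking of the kind described above (for example, letting a customer representative see only the last four digits of an identifier) amounts to a simple transformation. The helper below is a minimal illustrative sketch, not a production control; real masking is typically enforced in the database or application layer.

```python
def mask_identifier(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Mask all but the last `visible` characters of a sensitive identifier."""
    compact = value.replace(" ", "")
    if len(compact) <= visible:
        return compact
    return mask_char * (len(compact) - visible) + compact[-visible:]

print(mask_identifier("1234 5678 9012 3456"))   # -> ************3456
```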
https://en.wikipedia.org/wiki/Data_security
Inalgebraic geometryand related areas ofmathematics,local analysisis the practice of looking at a problem relative to eachprime numberpfirst, and then later trying to integrate the information gained at each prime into a 'global' picture. These are forms of thelocalizationapproach. Ingroup theory, local analysis was started by theSylow theorems, which contain significant information about the structure of afinite groupGfor each prime numberpdividing the order ofG. This area of study was enormously developed in the quest for theclassification of finite simple groups, starting with theFeit–Thompson theoremthat groups of odd order aresolvable.[1] Innumber theoryone may study aDiophantine equation, for example, modulopfor all primesp, looking for constraints on solutions.[2]The next step is to look modulo prime powers, and then for solutions in thep-adic field. This kind of local analysis provides conditions for solution that arenecessary. In cases where local analysis (plus the condition that there are real solutions) provides alsosufficientconditions, one says that theHasse principleholds: this is the best possible situation. It does forquadratic forms, but certainly not in general (for example forelliptic curves). The point of view that one would like to understand what extra conditions are needed has been very influential, for example forcubic forms. Some form of local analysis underlies both the standard applications of theHardy–Littlewood circle methodinanalytic number theory, and the use ofadele rings, making this one of the unifying principles across number theory.
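The local step for a Diophantine equation can be made concrete by brute force: check whether the equation has solutions modulo each small prime, which yields necessary conditions for a solution in the integers. The example below is illustrative only; the particular equation x² − 5y² = 3 is chosen because it has a genuine local obstruction.

```python
def has_solution_mod_p(p: int) -> bool:
    """Brute-force check of x^2 - 5*y^2 = 3 modulo the prime p."""
    return any((x * x - 5 * y * y - 3) % p == 0
               for x in range(p) for y in range(p))

# Necessary condition: an integer solution would give a solution modulo every prime.
for p in [2, 3, 5, 7, 11, 13]:
    print(p, has_solution_mod_p(p))
# Modulo 5 the equation reduces to x^2 = 3 (mod 5), and 3 is not a quadratic
# residue mod 5, so there is no solution mod 5 and hence none in the integers.
```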
https://en.wikipedia.org/wiki/Local_analysis
Instatisticsandprobability theory, apoint processorpoint fieldis a set of a random number ofmathematical pointsrandomly located on a mathematical space such as thereal lineorEuclidean space.[1][2] Point processes on the real line form an important special case that is particularly amenable to study,[3]because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in aGeiger counter, location of radio stations in atelecommunication network[4]or of searches on theworld-wide web. General point processes on a Euclidean space can be used forspatial data analysis,[5][6]which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience,[7]economics[8]and others. Since point processes were historically developed by different communities, there are different mathematical interpretations of a point process, such as arandom counting measureor a random set,[9][10]and different notations. The notations are described in detail on thepoint process notationpage. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process,[11][12]though it has been remarked that the difference between point processes and stochastic processes is not clear.[12]Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space[a]on which it is defined, such as the real line orn{\displaystyle n}-dimensional Euclidean space.[15][16]Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.[17][12]Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so point process is also called a random point field.[18] In mathematics, a point process is arandom elementwhose values are "point patterns" on asetS. While in the exact mathematical definition a point pattern is specified as alocally finitecounting measure, it is sufficient for more applied purposes to think of a point pattern as acountablesubset ofSthat has nolimit points.[clarification needed] To define general point processes, we start with a probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}, and a measurable space(S,S){\displaystyle (S,{\mathcal {S}})}whereS{\displaystyle S}is alocally compactsecond countableHausdorff spaceandS{\displaystyle {\mathcal {S}}}is itsBorel σ-algebra. Consider now an integer-valued locally finite kernelξ{\displaystyle \xi }from(Ω,F){\displaystyle (\Omega ,{\mathcal {F}})}into(S,S){\displaystyle (S,{\mathcal {S}})}, that is, a mappingΩ×S↦Z+{\displaystyle \Omega \times {\mathcal {S}}\mapsto \mathbb {Z} _{+}}such that: This kernel defines arandom measurein the following way. 
We would like to think of ξ as defining a mapping which maps ω ∈ Ω to a measure ξ_ω ∈ M(S) (namely, Ω ↦ M(S)), where M(S) is the set of all locally finite measures on S. Now, to make this mapping measurable, we need to define a σ-field over M(S). This σ-field is constructed as the minimal algebra so that all evaluation maps of the form π_B : μ ↦ μ(B), where B ∈ 𝒮 is relatively compact, are measurable. Equipped with this σ-field, ξ is then a random element, where for every ω ∈ Ω, ξ_ω is a locally finite measure over S. Now, by a point process on S we simply mean an integer-valued random measure (or equivalently, an integer-valued kernel) ξ constructed as above. The most common example for the state space S is the Euclidean space R^n or a subset thereof, where a particularly interesting special case is given by the real half-line [0, ∞). However, point processes are not limited to these examples and may, among other things, also be used if the points are themselves compact subsets of R^n, in which case ξ is usually referred to as a particle process. The name point process can be misleading: since S might not be a subset of the real line, the word "process" might wrongly suggest that ξ is a stochastic process indexed by time. Every instance (or event) of a point process ξ can be represented as where δ denotes the Dirac measure, n is an integer-valued random variable and the X_i are random elements of S. If the X_i's are almost surely distinct (or equivalently, almost surely ξ(x) ≤ 1 for all x ∈ R^d), then the point process is known as simple. Another different but useful representation of an event (an event in the event space, i.e. a series of points) is the counting notation, where each instance is represented as a function N(t), a right-continuous step function taking non-negative integer values, N : R → Z₀⁺: which is the number of events in the observation interval (t_1, t_2]. It is sometimes denoted by N_{t_1, t_2}, and N_T or N(T) mean N_{0, T}. The expectation measure Eξ (also known as the mean measure) of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is, The Laplace functional Ψ_N(f) of a point process N is a map from the set of all positive valued functions f on the state space of N to [0, ∞), defined as follows: It plays a similar role as the characteristic function of a random variable. One important theorem says that two point processes have the same law if their Laplace functionals are equal.
The n-th power of a point process, ξ^n, is defined on the product space S^n as follows: By the monotone class theorem, this uniquely defines the product measure on (S^n, B(S^n)). The expectation Eξ^n(·) is called the n-th moment measure. The first moment measure is the mean measure. Let S = R^d. The joint intensities of a point process ξ w.r.t. the Lebesgue measure are functions ρ^(k) : (R^d)^k → [0, ∞) such that for any disjoint bounded Borel subsets B_1, …, B_k Joint intensities do not always exist for point processes. Given that moments of a random variable determine the random variable in many cases, a similar result is to be expected for joint intensities. Indeed, this has been shown in many cases.[2] A point process ξ ⊂ R^d is said to be stationary if ξ + x := Σ_{i=1}^{N} δ_{X_i + x} has the same distribution as ξ for all x ∈ R^d. For a stationary point process, the mean measure Eξ(·) = λ‖·‖ for some constant λ ≥ 0, where ‖·‖ stands for the Lebesgue measure. This λ is called the intensity of the point process. A stationary point process on R^d has almost surely either 0 or an infinite number of points in total. For more on stationary point processes and random measures, refer to Chapter 12 of Daley & Vere-Jones.[2] Stationarity has been defined and studied for point processes in more general spaces than R^d. A point process transformation is a function that maps a point process to another point process. We shall see some examples of point processes in R^d. The simplest and most ubiquitous example of a point process is the Poisson point process, which is a spatial generalisation of the Poisson process. A Poisson (counting) process on the line can be characterised by two properties: the number of points (or events) in disjoint intervals are independent and have a Poisson distribution. A Poisson point process can also be defined using these two properties. Namely, we say that a point process ξ is a Poisson point process if the following two conditions hold: 1) ξ(B_1), …, ξ(B_n) are independent for disjoint subsets B_1, …, B_n. 2) For any bounded subset B, ξ(B) has a Poisson distribution with parameter λ‖B‖, where ‖·‖ denotes the Lebesgue measure. The two conditions can be combined and written as follows: for any disjoint bounded subsets B_1, …, B_n and non-negative integers k_1, …, k_n we have that The constant λ is called the intensity of the Poisson point process. Note that the Poisson point process is characterised by the single parameter λ. It is a simple, stationary point process. To be more specific one calls the above point process a homogeneous Poisson point process.
An inhomogeneous Poisson process is defined as above, but with λ‖B‖ replaced by ∫_B λ(x) dx, where λ is a non-negative function on R^d. A Cox process (named after Sir David Cox) is a generalisation of the Poisson point process, in that we use random measures in place of λ‖B‖. More formally, let Λ be a random measure. A Cox point process driven by the random measure Λ is the point process ξ with the following two properties: It is easy to see that Poisson point processes (homogeneous and inhomogeneous) follow as special cases of Cox point processes. The mean measure of a Cox point process is Eξ(·) = EΛ(·), and thus in the special case of a Poisson point process, it is λ‖·‖. For a Cox point process, Λ(·) is called the intensity measure. Further, if Λ(·) has a (random) density (Radon–Nikodym derivative) λ(·), i.e., then λ(·) is called the intensity field of the Cox point process. Stationarity of the intensity measures or intensity fields implies the stationarity of the corresponding Cox point processes. There have been many specific classes of Cox point processes that have been studied in detail, such as: By Jensen's inequality, one can verify that Cox point processes satisfy the following inequality: for all bounded Borel subsets B, where ξ_α stands for a Poisson point process with intensity measure α(·) := Eξ(·) = EΛ(·). Thus points are distributed with greater variability in a Cox point process compared to a Poisson point process. This is sometimes called the clustering or attractive property of the Cox point process. An important class of point processes, with applications to physics, random matrix theory, and combinatorics, is that of determinantal point processes.[25] A Hawkes process N_t, also known as a self-exciting counting process, is a simple point process whose conditional intensity can be expressed as where ν : R⁺ → R⁺ is a kernel function which expresses the positive influence of past events T_i on the current value of the intensity process λ(t), μ(t) is a possibly non-stationary function representing the expected, predictable, or deterministic part of the intensity, and {T_i : T_i < T_{i+1}} ⊂ R are the times of occurrence of the events of the process.[26] Given a sequence of non-negative random variables {X_k, k = 1, 2, …}, if they are independent and the cdf of X_k is given by F(a^{k−1} x) for k = 1, 2, …, where a is a positive constant, then {X_k, k = 1, 2, …} is called a geometric process (GP).[27] The geometric process has several extensions, including the α-series process[28] and the doubly geometric process.[29] Historically the first point processes that were studied had the real half-line R⁺ = [0, ∞) as their state space, which in this context is usually interpreted as time.
These studies were motivated by the wish to model telecommunication systems,[30]in which the points represented events in time, such as calls to a telephone exchange. Point processes onR+are typically described by giving the sequence of their (random) inter-event times (T1,T2, ...), from which the actual sequence (X1,X2, ...) of event times can be obtained as If the inter-event times are independent and identically distributed, the point process obtained is called arenewal process. Theintensityλ(t|Ht) of a point process on the real half-line with respect to a filtrationHtis defined as Htcan denote the history of event-point times preceding timetbut can also correspond to other filtrations (for example in the case of a Cox process). In theN(t){\displaystyle N(t)}-notation, this can be written in a more compact form: Thecompensatorof a point process, also known as thedual-predictable projection, is the integrated conditional intensity function defined by ThePapangelou intensity functionof a point processN{\displaystyle N}in then{\displaystyle n}-dimensional Euclidean spaceRn{\displaystyle \mathbb {R} ^{n}}is defined as whereBδ(x){\displaystyle B_{\delta }(x)}is the ball centered atx{\displaystyle x}of a radiusδ{\displaystyle \delta }, andσ[N(Rn∖Bδ(x))]{\displaystyle \sigma [N(\mathbb {R} ^{n}\setminus B_{\delta }(x))]}denotes the information of the point processN{\displaystyle N}outsideBδ(x){\displaystyle B_{\delta }(x)}. The logarithmic likelihood of a parameterized simple point process conditional upon some observed data is written as The analysis of point pattern data in a compact subsetSofRnis a major object of study withinspatial statistics. Such data appear in a broad range of disciplines,[32]amongst which are The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibitcomplete spatial randomness(i.e. are a realization of a spatialPoisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition. In contrast, many datasets considered in classicalmultivariate statisticsconsist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial). Apart from the applications in spatial statistics, point processes are one of the fundamental objects instochastic geometry. Research has also focussed extensively on various models built on point processes such asVoronoi tessellations,random geometric graphs, andBoolean models.
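A homogeneous Poisson point process can be simulated directly from the two defining properties given above: draw a Poisson-distributed number of points for the window and scatter them uniformly; an inhomogeneous process with bounded intensity can then be obtained by thinning. The NumPy sketch below is illustrative (the window, the constant intensity, and the intensity function lam_fn are arbitrary choices for the example).

```python
import numpy as np

rng = np.random.default_rng(42)

# Homogeneous Poisson point process with intensity lam on the unit square:
# N ~ Poisson(lam * area), then the N points are placed independently and uniformly.
lam = 100.0
n = rng.poisson(lam * 1.0)
points = rng.uniform(0.0, 1.0, size=(n, 2))

# Inhomogeneous process with intensity function lam_fn(x, y) <= lam_max, by thinning:
# simulate a homogeneous process of intensity lam_max and keep each point with
# probability lam_fn / lam_max.
lam_max = 200.0
def lam_fn(x, y):
    return 200.0 * x * y          # illustrative intensity surface, bounded by lam_max

m = rng.poisson(lam_max * 1.0)
candidates = rng.uniform(0.0, 1.0, size=(m, 2))
keep = rng.uniform(size=m) < lam_fn(candidates[:, 0], candidates[:, 1]) / lam_max
inhomogeneous_points = candidates[keep]

print(len(points), "homogeneous points;", len(inhomogeneous_points), "inhomogeneous points")
```

On the half-line, the same homogeneous process can equivalently be generated as a renewal process with independent exponential inter-event times, reflecting the temporal view described above.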
https://en.wikipedia.org/wiki/Point_process
In astatistical-classificationproblem with two classes, adecision boundaryordecision surfaceis ahypersurfacethat partitions the underlyingvector spaceinto two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class. A decision boundary is the region of a problem space in which the output label of aclassifieris ambiguous.[1] If the decision surface is ahyperplane, then the classification problem is linear, and the classes arelinearly separable. Decision boundaries are not always clear cut. That is, the transition from one class in thefeature spaceto another is not discontinuous, but gradual. This effect is common infuzzy logicbased classification algorithms, where membership in one class or another is ambiguous. Decision boundaries can be approximations of optimal stopping boundaries.[2]The decision boundary is the set of points of that hyperplane that pass through zero.[3]For example, the angle between a vector and points in a set must be zero for points that are on or close to the decision boundary.[4] Decision boundary instability can be incorporated with generalization error as a standard for selecting the most accurate and stable classifier.[5] In the case ofbackpropagationbasedartificial neural networksorperceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn anycontinuous functiononcompact subsetsofRnas shown by theuniversal approximation theorem, thus it can have an arbitrary decision boundary. In particular,support vector machinesfind ahyperplanethat separates the feature space into two classes with themaximum margin. If the problem is not originally linearly separable, thekernel trickcan be used to turn it into a linearly separable one, by increasing the number of dimensions. Thus a general hypersurface in a small dimension space is turned into a hyperplane in a space with much larger dimensions. Neural networks try to learn the decision boundary which minimizes the empirical error, while support vector machines try to learn the decision boundary which maximizes the empirical margin between the decision boundary and data points.
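For a linear classifier the decision boundary is the hyperplane where w·x + b = 0, and points are assigned to a class by the sign of that expression. The sketch below (NumPy; the weight vector and points are arbitrary illustrative values, not parameters learned by any particular algorithm) classifies a few points and reports their distance to the boundary, which is one way to see how close a point is to the ambiguous region.

```python
import numpy as np

# Linear decision boundary: the hyperplane w.x + b = 0 in feature space.
w = np.array([1.5, -2.0])        # illustrative weight vector (not learned here)
b = 0.5

points = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 1.3], [-1.0, 2.0]])
scores = points @ w + b          # signed value; zero exactly on the boundary

labels = np.where(scores >= 0, 1, -1)          # side of the hyperplane -> class label
margin = np.abs(scores) / np.linalg.norm(w)    # geometric distance to the boundary

for p, s, lab, d in zip(points, scores, labels, margin):
    print(p, f"score={s:+.2f}", f"class={lab:+d}", f"distance to boundary={d:.2f}")
```

A maximum-margin classifier such as a support vector machine would choose w and b so that the smallest of these distances over the training points is as large as possible.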
https://en.wikipedia.org/wiki/Decision_boundary
Organic computingis computing that behaves and interacts with humans in anorganic manner. The term "organic" is used to describe the system's behavior, and does not imply that they are constructed fromorganic materials. It is based on the insight that we will soon be surrounded by large collections ofautonomous systems, which are equipped withsensorsandactuators, aware of their environment, communicate freely, and organize themselves in order to perform the actions and services that seem to be required. The goal is to construct such systems as robust, safe, flexible, and trustworthy as possible. In particular, a strong orientation towards human needs as opposed to a pure implementation of the technologically possible seems absolutely central. In order to achieve these goals, our technical systems will have to act more independently, flexibly, and autonomously, i.e. they will have to exhibit lifelike properties. We call such systems "organic". Hence, an "Organic Computing System" is a technical system which adapts dynamically to exogenous and endogenous change. It is characterized by the properties of self-organization,self-configuration, self-optimization,self-healing, self-protection,self-explaining, andcontext awareness. It can be seen as an extension of theAutonomic computingvision of IBM. In a variety of research projects the priority research programSPP 1183of the German Research Foundation (DFG) addresses fundamental challenges in the design of Organic Computing systems; its objective is a deeper understanding of emergent global behavior in self-organizing systems and the design of specific concepts and tools to support the construction of Organic Computing systems for technical applications.
https://en.wikipedia.org/wiki/Organic_computing
Aheterogram(fromhetero-, meaning 'different', +-gram, meaning 'written') is a word, phrase, or sentence in which noletterof the alphabet occurs more than once. The termsisogramandnonpattern wordhave also been used to mean the same thing.[1][2][3] It is not clear who coined or popularized the term "heterogram". The concept appears inDmitri Borgmann's 1965 bookLanguage on Vacation: An Olio of Orthographical Odditiesbut he uses the termisogram.[4]In a 1985 article, Borgmann claims to have "launched" the termisogramthen.[5]He also suggests an alternative term,asogram, to avoid confusion with lines of constant value such ascontour lines, but usesisogramin the article itself. Isogramhas also been used to mean a string where each letter present is used the same number of times.[6][2][7]Multiple terms have been used to describe words where each letter used appears a certain number of times. For example, a word where every featured letter appears twice, like "noon", might be called apair isogram,[8]asecond-order isogram,[2]or a2-isogram.[3] A perfectpangramis an example of a heterogram, with the added restriction that it uses all the letters of the alphabet. A ten-letter heterogram can be used as the key to asubstitution cipherfor numbers, with the heterogram encoding the string 1234567890 or 0123456789. This is used in businesses where salespeople and customers traditionally haggle over sales prices, such as used-car lots and pawn shops. The nominal value or minimum sale price for an item can be listed on a tag for the salesperson's reference while being visible but meaningless to the customer.[9][10] A twelve-letter cipher could be used to indicate months of the year. In the bookLanguage on Vacation: An Olio of Orthographical Oddities,Dmitri Borgmanntries to find the longest such word. The longest one he found was "dermatoglyphics" at 15 letters. He coins several longer hypothetical words, such as "thumbscrew-japingly" (18 letters, defined as "as if mocking athumbscrew") and, with the "uttermost limit in the way of verbal creativeness", "pubvexingfjord-schmaltzy" (23 letters, defined as "as if in the manner of the extremesentimentalismgenerated in some individuals by the sight of a majesticfjord, which sentimentalism is annoying to the clientele of an English inn").[4] The word "subdermatoglyphic" was constructed by Edward R. Wolpow.[11]Later, in the bookMaking the Alphabet Dance,[12]Ross Ecklerreports the word "subdermatoglyphic" (17 letters) can be found in an article by Lowell Goldsmith calledChaos: To See a World in a Grain of Sand and a Heaven in a Wild Flower.[13]He also found the name "Melvin Schwarzkopf" (17 letters), a man living inAlton, Illinois, and proposed the name "Emily Jung Schwartzkopf" (21 letters). In an elaborate story, Eckler talked about a group of scientists who name the unavoidable urge to speak inpangramsthe "Hjelmqvist-Gryb-Zock-Pfund-Wax syndrome". The longest German heterogram is "Heizölrückstoßabdämpfung" (heating oil recoil dampening) which uses 24 of the 30 letters in theGerman alphabet, asä,ö,ü, andßare considered distinct letters froma,o,u, andsin German.[citation needed]It is closely followed by "Boxkampfjuryschützlinge" (boxing-match jury protégés) and "Zwölftonmusikbücherjagd" (twelve-tone music book chase) with 23 letters.[citation needed] There are hundreds of eleven-letter isograms, over one thousand ten-letter isograms and thousands of such nine-letter words.[14]
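A heterogram check, and the price-tag cipher described above, are straightforward to sketch in code (the key word "pathfinder" and the digit ordering 1234567890 are illustrative assumptions; any ten-letter heterogram and either digit convention would do):

    def is_heterogram(text):
        """True if no letter of the alphabet occurs more than once (non-letters are ignored)."""
        letters = [c.lower() for c in text if c.isalpha()]
        return len(letters) == len(set(letters))

    def price_cipher(key, price):
        """Encode a digit string with a 10-letter heterogram used as a substitution key.

        The key's letters stand for the digits 1234567890 in order (one of the two
        conventions mentioned above)."""
        assert len(key) == 10 and is_heterogram(key)
        table = {d: k for d, k in zip("1234567890", key)}
        return "".join(table[d] for d in price)

    print(is_heterogram("dermatoglyphics"))    # True: 15 distinct letters
    print(is_heterogram("noon"))               # False: letters repeat
    print(price_cipher("pathfinder", "450"))   # "hfr": readable from a tag, opaque to the customer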
https://en.wikipedia.org/wiki/Isogram
A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two (usually independent) random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution. An example is the Cauchy distribution (also called the normal ratio distribution), which comes about as the ratio of two normally distributed variables with zero mean. Two other distributions often used in test statistics are also ratio distributions: the t-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable, while the F-distribution originates from the ratio of two independent chi-squared distributed random variables. More general ratio distributions have been considered in the literature.[1][2][3][4][5][6][7][8][9]

Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test. A method based on the median has been suggested as a "work-around".[10]

The ratio is one type of algebra for random variables: related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's book from 1979, The Algebra of Random Variables.[8]

The algebraic rules known for ordinary numbers do not apply to the algebra of random variables. For example, if a product is C = AB and a ratio is D = C/A, it does not necessarily mean that the distributions of D and B are the same. Indeed, a peculiar effect is seen for the Cauchy distribution: the product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) give the same distribution.[8] This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: consider two Cauchy random variables, {\displaystyle C_{1}} and {\displaystyle C_{2}}, each constructed from two Gaussian distributions, {\displaystyle C_{1}=G_{1}/G_{2}} and {\displaystyle C_{2}=G_{3}/G_{4}}; then {\displaystyle {\frac {C_{1}}{C_{2}}}={\frac {G_{1}/G_{2}}{G_{3}/G_{4}}}={\frac {G_{1}}{G_{2}}}\cdot {\frac {G_{4}}{G_{3}}}=C_{1}C_{3},} where {\displaystyle C_{3}=G_{4}/G_{3}}. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.

A way of deriving the ratio distribution of {\displaystyle Z=X/Y} from the joint distribution of the two other random variables X, Y, with joint pdf {\displaystyle p_{X,Y}(x,y)}, is by integration of the following form:[3] {\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X,Y}(zy,y)\,dy.} If the two variables are independent then {\displaystyle p_{X,Y}(x,y)=p_{X}(x)\,p_{Y}(y)} and this becomes {\displaystyle p_{Z}(z)=\int _{-\infty }^{+\infty }|y|\,p_{X}(zy)\,p_{Y}(y)\,dy.} This may not be straightforward. By way of example, take the classical problem of the ratio of two standard Gaussian samples.
The joint pdf is {\displaystyle p_{X,Y}(x,y)={\frac {1}{2\pi }}\exp \left(-{\frac {x^{2}+y^{2}}{2}}\right).} Defining {\displaystyle Z=X/Y} we have {\displaystyle p_{Z}(z)={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }|y|\exp \left(-{\frac {(1+z^{2})y^{2}}{2}}\right)dy={\frac {1}{\pi }}\int _{0}^{\infty }y\exp \left(-{\frac {(1+z^{2})y^{2}}{2}}\right)dy.} Using the known definite integral {\textstyle \int _{0}^{\infty }x\exp \left(-cx^{2}\right)\,dx={\frac {1}{2c}}} we get {\displaystyle p_{Z}(z)={\frac {1}{\pi (1+z^{2})}},} which is the Cauchy distribution, or Student's t distribution with n = 1.

The Mellin transform has also been suggested for derivation of ratio distributions.[8]

In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution {\displaystyle f_{x,y}(x,y)=f_{x}(x)f_{y}(y)} which has support in the positive quadrant {\displaystyle x,y>0}, and we wish to find the pdf of the ratio {\displaystyle R=X/Y}. The hatched volume above the line {\displaystyle y=x/R} represents the cumulative distribution of the function {\displaystyle f_{x,y}(x,y)} multiplied with the logical function {\displaystyle X/Y\leq R}. The density is first integrated in horizontal strips; the horizontal strip at height y extends from x = 0 to x = Ry and has incremental probability {\textstyle f_{y}(y)\,dy\int _{0}^{Ry}f_{x}(x)\,dx}. Secondly, integrating the horizontal strips upward over all y yields the volume of probability above the line, {\displaystyle F_{R}(R)=\int _{0}^{\infty }f_{y}(y)\left(\int _{0}^{Ry}f_{x}(x)\,dx\right)dy.} Finally, differentiate {\displaystyle F_{R}(R)} with respect to {\displaystyle R} to get the pdf {\displaystyle f_{R}(R)}. Move the differentiation inside the integral: {\displaystyle f_{R}(R)=\int _{0}^{\infty }f_{y}(y)\left({\frac {d}{dR}}\int _{0}^{Ry}f_{x}(x)\,dx\right)dy,} and since {\displaystyle {\frac {d}{dR}}\int _{0}^{Ry}f_{x}(x)\,dx=y\,f_{x}(Ry),} then {\displaystyle f_{R}(R)=\int _{0}^{\infty }f_{y}(y)\,y\,f_{x}(Ry)\,dy.} As an example, find the pdf of the ratio R when We have thus Differentiation wrt. R yields the pdf of R.

From Mellin transform theory, for distributions existing only on the positive half-line {\displaystyle x\geq 0}, we have the product identity {\displaystyle \operatorname {E} [(UV)^{p}]=\operatorname {E} [U^{p}]\;\operatorname {E} [V^{p}]} provided {\displaystyle U,\;V} are independent. For the case of a ratio of samples like {\displaystyle \operatorname {E} [(X/Y)^{p}]}, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set {\displaystyle 1/Y=Z} such that {\displaystyle \operatorname {E} [(XZ)^{p}]=\operatorname {E} [X^{p}]\;\operatorname {E} [Y^{-p}]}. Thus, if the moments of {\displaystyle X^{p}} and {\displaystyle Y^{-p}} can be determined separately, then the moments of {\displaystyle X/Y} can be found. The moments of {\displaystyle Y^{-p}} are determined from the inverse pdf of {\displaystyle Y}, often a tractable exercise. At simplest, {\textstyle \operatorname {E} [Y^{-p}]=\int _{0}^{\infty }y^{-p}f_{y}(y)\,dy}.
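The Cauchy density obtained above for the ratio of two standard Gaussian samples is easy to confirm by simulation. The sketch below (NumPy only; the sample size and comparison interval are arbitrary choices for the check) draws two independent standard normal samples, forms their ratio, and compares the empirical histogram with 1/(π(1+z²)). The same Monte Carlo pattern can be used to spot-check the moment identities discussed next.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000                         # arbitrary sample size for the check
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    z = x / y                             # ratio of two independent standard normals

    # Compare an empirical histogram with the standard Cauchy pdf on [-5, 5];
    # the far tails are left out because the ratio is very heavy-tailed.
    edges = np.linspace(-5.0, 5.0, 41)
    hist, _ = np.histogram(z, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cauchy_pdf = 1.0 / (np.pi * (1.0 + centers ** 2))

    print(np.max(np.abs(hist - cauchy_pdf)))   # small, typically of order 1e-3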
To illustrate, let X be sampled from a standard Gamma distribution. {\displaystyle Z=Y^{-1}} is sampled from an inverse Gamma distribution with parameter {\displaystyle \beta } and has pdf {\displaystyle \Gamma ^{-1}(\beta )\,z^{-(1+\beta )}e^{-1/z}}. The moments of this pdf are Multiplying the corresponding moments gives Independently, it is known that the ratio of the two Gamma samples {\displaystyle R=X/Y} follows the Beta Prime distribution: Substituting {\displaystyle \mathrm {B} (\alpha ,\beta )={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} we have {\displaystyle \operatorname {E} [R^{p}]={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha +\beta )}}{\Bigg /}{\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}={\frac {\Gamma (\alpha +p)\Gamma (\beta -p)}{\Gamma (\alpha )\Gamma (\beta )}},} which is consistent with the product of moments above.

In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have which, in terms of probability distributions, is equivalent to Note that {\displaystyle \operatorname {E} (1/Y)\neq {\frac {1}{\operatorname {E} (Y)}}}, i.e., {\displaystyle \int _{-\infty }^{\infty }y^{-1}f_{y}(y)\,dy\neq {\frac {1}{\int _{-\infty }^{\infty }yf_{y}(y)\,dy}}}. The variance of a ratio of independent variables is

When X and Y are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting {\displaystyle Z=X/Y=\tan \theta } and then showing that {\displaystyle \theta } has circular symmetry. For a bivariate uncorrelated Gaussian distribution we have If {\displaystyle p(x,y)} is a function only of r then {\displaystyle \theta } is uniformly distributed on {\displaystyle [0,2\pi ]} with density {\displaystyle 1/2\pi }, so the problem reduces to finding the probability distribution of Z under the mapping We have, by conservation of probability, and since {\displaystyle dz/d\theta =1/\cos ^{2}\theta } and setting {\textstyle \cos ^{2}\theta ={\frac {1}{1+(\tan \theta )^{2}}}={\frac {1}{1+z^{2}}}} we get There is a spurious factor of 2 here.
Actually, two values of {\displaystyle \theta } spaced by {\displaystyle \pi } map onto the same value of z, the density is doubled, and the final result is {\displaystyle p_{Z}(z)={\frac {1}{\pi (1+z^{2})}},\qquad -\infty <z<\infty .}

When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley.[6] The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student t in which the density depends only on radius {\textstyle r={\sqrt {x^{2}+y^{2}}}}. It does not extend to the ratio of two independent Student t distributions, which give the Cauchy ratio shown in a section below for one degree of freedom.

In the absence of correlation {\displaystyle (\operatorname {cor} (X,Y)=0)}, the probability density function of the ratio Z = X/Y of two normal variables {\displaystyle X=N(\mu _{X},\sigma _{X}^{2})} and {\displaystyle Y=N(\mu _{Y},\sigma _{Y}^{2})} is given exactly by the following expression, derived in several sources:[6] where

The above expression becomes more complicated when the variables X and Y are correlated. If {\displaystyle \mu _{x}=\mu _{y}=0} but {\displaystyle \sigma _{X}\neq \sigma _{Y}} and {\displaystyle \rho \neq 0}, the more general Cauchy distribution is obtained where ρ is the correlation coefficient between X and Y and The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.[9]

This was shown in Springer 1979 problem 4.28. A transformation to the log domain was suggested by Katz (1978) (see binomial section below). Let the ratio be Take logs to get Since {\displaystyle \log _{e}(1+\delta )=\delta -{\frac {\delta ^{2}}{2}}+{\frac {\delta ^{3}}{3}}+\cdots } then asymptotically Alternatively, Geary (1930) suggested that has approximately a standard Gaussian distribution:[1] This transformation has been called the Geary–Hinkley transformation;[7] the approximation is good if Y is unlikely to assume negative values, basically {\displaystyle \mu _{y}>3\sigma _{y}}.

This is developed by Dale (Springer 1979 problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio {\displaystyle z} could be transformed into a near-Gaussian form and developed an approximation for {\displaystyle t} dependent on the probability of negative denominator values {\displaystyle x+\mu _{x}<0} being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when combining modern math packages with verbal conditions in the older literature. Pham-Ghia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version.

Let the ratio be: in which {\displaystyle x,y} are zero-mean correlated normal variables with variances {\displaystyle \sigma _{x}^{2},\sigma _{y}^{2}} and {\displaystyle X,Y} have means {\displaystyle \mu _{x},\mu _{y}.} Write {\displaystyle x'=x-\rho y\sigma _{x}/\sigma _{y}} such that {\displaystyle x',y} become uncorrelated and {\displaystyle x'} has standard deviation The ratio: is invariant under this transformation and retains the same pdf.
The {\displaystyle y} term in the numerator appears to be made separable by expanding: to get in which {\textstyle \mu '_{x}=\mu _{x}-\rho \mu _{y}{\frac {\sigma _{x}}{\sigma _{y}}}} and z has now become a ratio of uncorrelated non-central normal samples with an invariant z-offset (this is not formally proven, though appears to have been used by Geary).

Finally, to be explicit, the pdf of the ratio {\displaystyle z} for correlated variables is found by inputting the modified parameters {\displaystyle \sigma _{x}',\mu _{x}',\sigma _{y},\mu _{y}} and {\displaystyle \rho '=0} into the Hinkley equation above, which returns the pdf for the correlated ratio with a constant offset {\displaystyle -\rho {\frac {\sigma _{x}}{\sigma _{y}}}} on {\displaystyle z}.

The figures above show an example of a positively correlated ratio with {\displaystyle \sigma _{x}=\sigma _{y}=1,\;\mu _{x}=0,\;\mu _{y}=0.5,\;\rho =0.975}, in which the shaded wedges represent the increment of area selected by a given ratio {\displaystyle x/y\in [r,r+\delta ]}, which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio {\displaystyle z=x/y\approx 1} the wedge has almost bypassed the main distribution mass altogether, and this explains the local minimum in the theoretical pdf {\displaystyle p_{Z}(x/y)}. Conversely, as {\displaystyle x/y} moves either toward or away from one, the wedge spans more of the central mass, accumulating a higher probability.

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al.[13] and has since been extended to the nonzero-mean and nonsymmetric case.[14] In the correlated zero-mean case, the joint distribution of x, y is where {\displaystyle (\cdot )^{H}} is an Hermitian transpose and

The PDF of {\displaystyle Z=X/Y} is found to be In the usual event that {\displaystyle \sigma _{x}=\sigma _{y}} we get Further closed-form results for the CDF are also given.

The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of {\displaystyle \rho =0.7\exp(i\pi /4)}. The pdf peak occurs at roughly the complex conjugate of a scaled-down {\displaystyle \rho }.

The ratio of independent or correlated log-normals is log-normal. This follows, because if {\displaystyle X_{1}} and {\displaystyle X_{2}} are log-normally distributed, then {\displaystyle \ln(X_{1})} and {\displaystyle \ln(X_{2})} are normally distributed.
If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.[note 1] This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of {\displaystyle X_{1}} and {\displaystyle X_{2}} is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when {\displaystyle X_{i}} is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.[15]

With two independent random variables following a uniform distribution, e.g., the ratio distribution becomes

If two independent random variables, X and Y, each follow a Cauchy distribution with median equal to zero and shape factor {\displaystyle a}, then the ratio distribution for the random variable {\displaystyle Z=X/Y} is[16] This distribution does not depend on {\displaystyle a}, and the result stated by Springer[8] (p. 158, Question 4.6) is not correct. The ratio distribution is similar to but not the same as the product distribution of the random variable {\displaystyle W=XY}:

More generally, if two independent random variables X and Y each follow a Cauchy distribution with median equal to zero and shape factors {\displaystyle a} and {\displaystyle b} respectively, then: The result for the ratio distribution can be obtained from the product distribution by replacing {\displaystyle b} with {\displaystyle {\frac {1}{b}}}.

If X has a standard normal distribution and Y has a standard uniform distribution, then Z = X/Y has a distribution known as the slash distribution, with probability density function where φ(z) is the probability density function of the standard normal distribution.[17]

Let G be a normal(0,1) distribution, Y and Z be chi-squared distributions with m and n degrees of freedom respectively, all independent, with {\displaystyle f_{\chi }(x,k)={\frac {x^{{\frac {k}{2}}-1}e^{-x/2}}{2^{k/2}\Gamma (k/2)}}}. Then

If {\displaystyle V_{1}\sim {\chi '}_{k_{1}}^{2}(\lambda )}, a noncentral chi-squared distribution, {\displaystyle V_{2}\sim {\chi '}_{k_{2}}^{2}(0)}, and {\displaystyle V_{1}} is independent of {\displaystyle V_{2}}, then

{\displaystyle {\frac {m}{n}}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}}){\text{ or }}F'_{m,n}=\beta '({\tfrac {m}{2}},{\tfrac {n}{2}},1,{\tfrac {n}{m}})} defines {\displaystyle F'_{m,n}}, Fisher's F density distribution, the PDF of the ratio of two Chi-squares with m, n degrees of freedom.
The CDF of the Fisher density, found in F-tables, is defined in the beta prime distribution article. If we enter an F-test table with m = 3, n = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral

For gamma distributions U and V with arbitrary shape parameters α1 and α2 and their scale parameters both set to unity, that is, {\displaystyle U\sim \Gamma (\alpha _{1},1),V\sim \Gamma (\alpha _{2},1)}, where {\displaystyle \Gamma (x;\alpha ,1)={\frac {x^{\alpha -1}e^{-x}}{\Gamma (\alpha )}}}, then

If {\displaystyle U\sim \Gamma (x;\alpha ,1)}, then {\displaystyle \theta U\sim \Gamma (x;\alpha ,\theta )={\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}}. Note that here θ is a scale parameter, rather than a rate parameter.

If {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),\;V\sim \Gamma (\alpha _{2},\theta _{2})}, then by rescaling the {\displaystyle \theta } parameter to unity we have Thus in which {\displaystyle \beta '(\alpha ,\beta ,p,q)} represents the generalised beta prime distribution.

In the foregoing it is apparent that if {\displaystyle X\sim \beta '(\alpha _{1},\alpha _{2},1,1)\equiv \beta '(\alpha _{1},\alpha _{2})} then {\displaystyle \theta X\sim \beta '(\alpha _{1},\alpha _{2},1,\theta )}. More explicitly, since if {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1}),V\sim \Gamma (\alpha _{2},\theta _{2})} then where

If X, Y are independent samples from the Rayleigh distribution {\displaystyle f_{r}(r)=(r/\sigma ^{2})e^{-r^{2}/2\sigma ^{2}},\;\;r\geq 0}, the ratio Z = X/Y follows the distribution[18] and has cdf

The Rayleigh distribution has scaling as its only parameter. The distribution of {\displaystyle Z=\alpha X/Y} follows and has cdf

The generalized gamma distribution is which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here a is a scale parameter, rather than a rate parameter; d is a shape parameter.
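The F-table figure quoted above (m = 3, n = 4, 5% right tail, critical value 6.59) can be reproduced directly with standard SciPy distribution calls (scipy.stats.f with its ppf and pdf methods; the numbers themselves come from the example above):

    from scipy import stats
    from scipy.integrate import quad

    m, n = 3, 4
    # 5% right-tail critical value of the F distribution with (3, 4) degrees of freedom.
    crit = stats.f.ppf(0.95, m, n)
    print(round(crit, 2))                    # about 6.59, matching the quoted table value

    # Check that the right tail above the critical value carries 5% of the probability.
    tail, _ = quad(lambda x: stats.f.pdf(x, m, n), crit, float("inf"))
    print(round(tail, 4))                    # about 0.05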
In the ratios above, Gamma samples U, V may have differing shape parameters {\displaystyle \alpha _{1},\alpha _{2}} but must be drawn from the same distribution {\displaystyle {\frac {x^{\alpha -1}e^{-{\frac {x}{\theta }}}}{\theta ^{\alpha }\Gamma (\alpha )}}} with equal scaling {\displaystyle \theta }.

In situations where U and V are differently scaled, a variables transformation allows the modified random ratio pdf to be determined. Let {\displaystyle X={\frac {U}{U+V}}={\frac {1}{1+B}}} where {\displaystyle U\sim \Gamma (\alpha _{1},\theta ),V\sim \Gamma (\alpha _{2},\theta ),\theta } arbitrary and, from above, {\displaystyle X\sim Beta(\alpha _{1},\alpha _{2}),\;B=V/U\sim Beta'(\alpha _{2},\alpha _{1})}.

Rescale V arbitrarily, defining {\displaystyle Y\sim {\frac {U}{U+\varphi V}}={\frac {1}{1+\varphi B}},\;\;0\leq \varphi \leq \infty }

We have {\displaystyle B={\frac {1-X}{X}}} and substitution into Y gives {\displaystyle Y={\frac {X}{\varphi +(1-\varphi )X}},\;dY/dX={\frac {\varphi }{(\varphi +(1-\varphi )X)^{2}}}}

Transforming X to Y gives {\displaystyle f_{Y}(Y)={\frac {f_{X}(X)}{|dY/dX|}}={\frac {\beta (X,\alpha _{1},\alpha _{2})}{\varphi /[\varphi +(1-\varphi )X]^{2}}}}

Noting {\displaystyle X={\frac {\varphi Y}{1-(1-\varphi )Y}}} we finally have

Thus, if {\displaystyle U\sim \Gamma (\alpha _{1},\theta _{1})} and {\displaystyle V\sim \Gamma (\alpha _{2},\theta _{2})}, then {\displaystyle Y={\frac {U}{U+V}}} is distributed as {\displaystyle f_{Y}(Y,\varphi )} with {\displaystyle \varphi ={\frac {\theta _{2}}{\theta _{1}}}}. The distribution of Y is limited here to the interval [0,1].
It can be generalized by scaling such that if {\displaystyle Y\sim f_{Y}(Y,\varphi )} then where {\displaystyle f_{Y}(Y,\varphi ,\Theta )={\frac {\varphi /\Theta }{[1-(1-\varphi )Y/\Theta ]^{2}}}\beta \left({\frac {\varphi Y/\Theta }{1-(1-\varphi )Y/\Theta }},\alpha _{1},\alpha _{2}\right),\;\;\;0\leq Y\leq \Theta }

Though not ratio distributions of two variables, the following identities for one variable are useful: combining the latter two equations yields Corollary: if {\displaystyle U\sim \Gamma (\alpha ,1),V\sim \Gamma (\beta ,1)} then {\displaystyle {\frac {U}{V}}\sim \beta '(\alpha ,\beta )} and Further results can be found in the Inverse distribution article.

The following result was derived by Katz et al.[20] Suppose {\displaystyle X\sim {\text{Binomial}}(n,p_{1})} and {\displaystyle Y\sim {\text{Binomial}}(m,p_{2})}, and {\displaystyle X}, {\displaystyle Y} are independent. Let {\displaystyle T={\frac {X/n}{Y/m}}}. Then {\displaystyle \log(T)} is approximately normally distributed with mean {\displaystyle \log(p_{1}/p_{2})} and variance {\displaystyle {\frac {(1/p_{1})-1}{n}}+{\frac {(1/p_{2})-1}{m}}}.

The binomial ratio distribution is of significance in clinical trials: if the distribution of T is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.[citation needed]

In the ratio of Poisson variables R = X/Y there is a problem that Y is zero with finite probability, so R is undefined. To counter this, consider the truncated, or censored, ratio R' = X/Y' where zero samples of Y are discarded. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both X and Y and it may be good practice to ignore the zero samples anyway.

The probability of a null Poisson sample being {\displaystyle e^{-\lambda }}, the generic pdf of a left truncated Poisson distribution is {\displaystyle {\tilde {p}}(x;\lambda )={\frac {1}{1-e^{-\lambda }}}\,{\frac {\lambda ^{x}e^{-\lambda }}{x!}},\qquad x=1,2,3,\ldots ,} which sums to unity.
Following Cohen,[21] for n independent trials, the multidimensional truncated pdf is and the log likelihood becomes On differentiation we get and setting to zero gives the maximum likelihood estimate {\displaystyle {\hat {\lambda }}_{ML}}, which satisfies {\displaystyle {\bar {x}}={\frac {{\hat {\lambda }}_{ML}}{1-e^{-{\hat {\lambda }}_{ML}}}}.}

Note that as {\displaystyle {\hat {\lambda }}\to 0} then {\displaystyle {\bar {x}}\to 1}, so the truncated maximum likelihood {\displaystyle \lambda } estimate, though correct for both truncated and untruncated distributions, gives a truncated mean {\displaystyle {\bar {x}}} value which is highly biased relative to the untruncated one. Nevertheless it appears that {\displaystyle {\bar {x}}} is a sufficient statistic for {\displaystyle \lambda }, since {\displaystyle {\hat {\lambda }}_{ML}} depends on the data only through the sample mean {\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} in the previous equation, which is consistent with the methodology of the conventional Poisson distribution.

Absent any closed form solutions, the following approximate reversion for truncated {\displaystyle \lambda } is valid over the whole range {\displaystyle 0\leq \lambda \leq \infty ;\;1\leq {\bar {x}}\leq \infty }: which compares with the non-truncated version, which is simply {\displaystyle {\hat {\lambda }}={\bar {x}}}.
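Because the maximum-likelihood condition above has no closed-form inverse, the truncated estimate is in practice obtained numerically from the mean of the zero-truncated sample. A minimal sketch (plain-Python bisection on the equation x̄ = λ/(1 − exp(−λ)); the sample mean 1.5 is an arbitrary illustrative value):

    import math

    def truncated_poisson_mle(xbar, tol=1e-12):
        """Solve xbar = lam / (1 - exp(-lam)) for lam by bisection.

        xbar is the mean of a zero-truncated Poisson sample, so xbar > 1 is required.
        """
        if xbar <= 1.0:
            raise ValueError("mean of a zero-truncated Poisson sample must exceed 1")
        lo, hi = 1e-12, xbar                      # the root always lies strictly below xbar
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if mid / (1.0 - math.exp(-mid)) < xbar:
                lo = mid                          # implied mean too small: lambda must be larger
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    lam_hat = truncated_poisson_mle(1.5)          # arbitrary illustrative sample mean
    print(lam_hat)                                # smaller than 1.5, the untruncated estimate
    print(lam_hat / (1.0 - math.exp(-lam_hat)))   # recovers 1.5, confirming the fixed point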
Taking the ratio {\displaystyle R={\hat {\lambda }}_{X}/{\hat {\lambda }}_{Y}} is a valid operation even though {\displaystyle {\hat {\lambda }}_{X}} may use a non-truncated model while {\displaystyle {\hat {\lambda }}_{Y}} has a left-truncated one.

The asymptotic large-{\displaystyle n\lambda } variance of {\displaystyle {\hat {\lambda }}} (and Cramér–Rao bound) is in which substituting L gives Then substituting {\displaystyle {\bar {x}}} from the equation above, we get Cohen's variance estimate

The variance of the point estimate of the mean {\displaystyle \lambda }, on the basis of n trials, decreases asymptotically to zero as n increases to infinity. For small {\displaystyle \lambda } it diverges from the truncated pdf variance in Springael,[22] for example, who quotes a variance of for n samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, {\displaystyle \operatorname {Var} ({\hat {\lambda }})/\operatorname {Var} (\lambda )}, ranges from 1 for large {\displaystyle \lambda } (100% efficient) up to 2 as {\displaystyle \lambda } approaches zero (50% efficient).

These mean and variance parameter estimates, together with parallel estimates for X, can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning[23] and there is a Zero-truncated Poisson distribution Wikipedia entry.

The double Lomax distribution is the ratio of two Laplace distributions.[24] Let X and Y be standard Laplace identically distributed random variables and let z = X/Y. Then the probability distribution of z is Let the mean of X and Y be a. Then the standard double Lomax distribution is symmetric around a. This distribution has an infinite mean and variance. If Z has a standard double Lomax distribution, then 1/Z also has a standard double Lomax distribution. The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution. For 0 < a < 1, the a-th moment exists. where Γ is the gamma function.
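The ratio construction of the standard double Lomax distribution can be checked by simulation. In the sketch below (NumPy only; sample size and histogram range are arbitrary) the ratio of two independent standard Laplace samples is compared against the density 1/(2(1+|z|)^2), which is what the general ratio integral gives for two unit Laplace densities.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000                        # arbitrary sample size for the check
    x = rng.laplace(0.0, 1.0, n)         # standard Laplace samples
    y = rng.laplace(0.0, 1.0, n)
    z = x / y                            # ratio: standard double Lomax, symmetric about 0

    edges = np.linspace(-4.0, 4.0, 33)
    hist, _ = np.histogram(z, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf = 1.0 / (2.0 * (1.0 + np.abs(centers)) ** 2)   # standard double Lomax density

    print(np.max(np.abs(hist - pdf)))    # small; limited by binning and sampling noise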
Ratio distributions also appear in multivariate analysis.[25] If the random matrices X and Y follow a Wishart distribution, then the ratio of the determinants is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions, then the ratio has a Wilks' lambda distribution.

In relation to Wishart matrix distributions, if {\displaystyle S\sim W_{p}(\Sigma ,\nu +1)} is a sample Wishart matrix and vector {\displaystyle V} is arbitrary, but statistically independent, corollary 3.2.9 of Muirhead[26] states The discrepancy of one in the sample numbers arises from estimation of the sample mean when forming the sample covariance, a consequence of Cochran's theorem. Similarly which is Theorem 3.2.12 of Muirhead.[26]
https://en.wikipedia.org/wiki/Ratio_distribution
Stewardshipis a practice committed toethical valuethat embodies the responsible planning and management ofresources. The concepts of stewardship can be applied to the environment and nature,[1][2][3]economics,[4][5]health,[6]places,[7]property,[8]information,[9]theology,[10]and cultural resources. Stewardship was originally made up of the tasks of a domesticsteward, fromstiġ(house,hall) andweard, (ward,guard,guardian,keeper).[11][12]In the beginning, it referred to the household servant's duties for bringing food and drink to the castle's dining hall. Stewardship responsibilities were eventually expanded to include the domestic, service and management needs of the entire household. The NOAA Planet Stewards Education Project (PSEP) is an example of an environmental stewardship program in the United States to advance scientific literacy especially in areas that conserve, restore, and protect human communities and natural resources in the areas of climate, ocean, and atmosphere. It includes professional teachers of students of all ages and abilities, and informal educators who work with the public in nature and science centers, aquaria, and zoos. The project began in 2008 as the NOAA Climate Stewards Project. Its name was changed to NOAA Planet Stewards Educational Project in 2016.
https://en.wikipedia.org/wiki/Stewardship
Power ISAis areduced instruction set computer(RISC)instruction set architecture(ISA) currently developed by theOpenPOWER Foundation, led byIBM. It was originally developed by IBM and the now-defunctPower.orgindustry group. Power ISA is an evolution of thePowerPCISA, created by the mergers of the core PowerPC ISA and the optional Book E for embedded applications. The merger of these two components in 2006 was led by Power.org founders IBM andFreescale Semiconductor. Prior to version 3.0, the ISA is divided into several categories.Processorsimplement a set of these categories as required for theirtask. Different classes of processors are required to implement certain categories, for example a server-class processor includes the categories:Base,Server,Floating-Point,64-Bit, etc. All processors implement the Base category. Power ISA is a RISCload/store architecture. It has multiple sets ofregisters: Instructions up to version 3.0 have a length of 32 bits, with the exception of the VLE (variable-length encoding) subset that provides for highercode densityfor low-end embedded applications, and version 3.1 which introduced prefixing to create 64-bit instructions. Most instructions aretriadic, i.e. have two source operands and one destination. Single- anddouble-precisionIEEE 754compliant floating-point operations are supported, including additionalfused multiply–add(FMA) and decimal floating-point instructions. There are provisions forsingle instruction, multiple data(SIMD) operations on integer and floating-point data on up to 16 elements in one instruction. Power ISA has support forHarvardcache, i.e.split data and instruction caches, and support for unified caches. Memory operations are strictly load/store, but allow forout-of-order execution. There is also support for bothbig and little-endianaddressing with separate categories for moded and per-page endianness, and support for both32-bitand64-bitaddressing. Different modes of operation include user, supervisor and hypervisor. The Power ISA specification is divided into five parts, called "books": New in version 3 of the Power ISA is that you don't have to implement the entire specification to be compliant. The sprawl of instructions and technologies has made the complete specification unwieldy, so the OpenPOWER Foundation have decided to enable tiered compliancy. These levels include optional and mandatory requirements, however one common misunderstanding is that there is nothing stopping an implementation from being compliant at a lower level but having additional selected functions from higher levels and custom extensions. It is however recommended that an option be provided to disable any added functions beyond the design's declared subset level. A design must be compliant at its declared subset level to make use of the Foundation's protection regarding use ofintellectual property, be itpatentsortrademarks. This is explained in the OpenPOWER EULA.[1] A compliant design must:[2] If the extension is general-purpose enough, the OpenPOWER Foundation asks that implementors submit it as a Request for Comments (RFC) to theOpenPOWER ISA Workgroup. Note that it is not strictly necessary to join the OpenPOWER Foundation to submit RFCs.[3] The EABI specifications predate the announcement and creation of the Compliancy subsets. 
Regarding the Linux Compliancy subset having VSX (SIMD) optional: in 2003–4, 64-bit EABI v1.9 made SIMD optional,[4]but in July 2015, to improve performance for IBM POWER9 systems, SIMD was made mandatory in EABI v2.0.[5]This discrepancy between SIMD being optional in the Linux Compliancy level but mandatory in EABI v2.0 cannot be rectified without considerable effort: backwards incompatibility forLinux distributionsis not a viable option. At present this leaves new OpenPOWER implementors wishing to run standard Linux distributions having to implement a massive 962 instructions. By contrast, RISC-V RV64GC, the minimum to run Linux, requires only 165.[6] The specification for Power ISA v.2.03[7]is based on the former PowerPC ISA v.2.02[8]inPOWER5+ and the Book E[9]extension of thePowerPCspecification. The Book I included five new chapters regarding auxiliary processing units likeDSPsand theAltiVecextension. The specification for Power ISA v.2.04[10]was finalized in June 2007. It is based on Power ISA v.2.03 and includes changes primarily to theBook III-Spart regardingvirtualization,hypervisorfunctions,logical partitioningandvirtual pagehandling. The specification for Power ISA v.2.05[11]was released in December 2007. It is based on Power ISA v.2.04 and includes changes primarily toBook IandBook III-S, including significant enhancements such as decimal arithmetic (Category: Decimal Floating-Point inBook I) and server hypervisor improvements. The specification for Power ISA v.2.06[12]was released in February 2009, and revised in July 2010.[13]It is based on Power ISA v.2.05 and includes extensions for the POWER7 processor ande500-mc core. One significant new feature is vector-scalar floating-point instructions (VSX).[14]Book III-Ealso includes significant enhancement for the embedded specification regarding hypervisor and virtualisation on single and multi core implementations. The spec was revised in November 2010 to the Power ISA v.2.06 revision B spec, enhancing virtualization features.[13][15] The specification for Power ISA v.2.07[16]was released in May 2013. It is based on Power ISA v.2.06 and includes major enhancements tological partition functions,transactional memory, expanded performance monitoring, new storage control features, additions to the VMX and VSX vector facilities (VSX-2), along withAES[16]: 257[17]andGalois Counter Mode(GCM), SHA-224, SHA-256,[16]: 258SHA-384 and SHA-512[16]: 258(SHA-2) cryptographic extensions andcyclic redundancy check(CRC)algorithms.[18] The spec was revised in April 2015 to the Power ISA v.2.07 B spec.[19][20] The specification for Power ISA v.3.0[21][22]was released in November 2015. It is the first to come out after the founding of the OpenPOWER Foundation and includes enhancements for a broad spectrum of workloads and removes the server and embedded categories while retaining backwards compatibility and adds support for VSX-3 instructions. New functions include 128-bit quad-precision floating-point operations, arandom number generator, hardware-assistedgarbage collectionand hardware-enforced trusted computing. The spec was revised in March 2017 to the Power ISA v.3.0 B spec,[19][23]and revised again to v3.0C in May 2020.[19][24][25]One major change from v3.0 to v3.0B is the removal of support for hardware assisted garbage collection. The key difference between v3.0B and v3.0C is that the Compliancy Levels listed in v3.1 were also added to v3.0C. The specification for Power ISA v.3.1[19][27]was released in May 2020. 
It mainly adds support for new functions introduced in Power10, but it also introduces the notion of optionality to the Power ISA specification. Instructions can now be eight bytes long ("prefixed instructions"), compared to the usual four-byte "word instructions". Many new SIMD and VSX functions are also added. VSX and the SVP64 extension provide hardware support for 16-bit half-precision floats.[28][29] One key benefit of the new 64-bit prefixed instructions is the extension of immediates in branches to 34 bits. The spec was revised in September 2021 to the Power ISA v.3.1B spec,[19][30] and again in May 2024 to the Power ISA v.3.1C spec.[19][31]
https://en.wikipedia.org/wiki/Power_Architecture
Incomputational geometry, apower diagram, also called aLaguerre–Voronoi diagram,Dirichlet cell complex,radical Voronoi tesselationor asectional Dirichlet tesselation, is a partition of theEuclidean planeintopolygonalcells defined from a set of circles. The cell for a given circleCconsists of all the points for which thepower distancetoCis smaller than the power distance to the other circles. The power diagram is a form of generalizedVoronoi diagram, and coincides with the Voronoi diagram of the circle centers in the case that all the circles have equal radii.[1][2][3][4] IfCis a circle andPis a point outsideC, then thepowerofPwith respect toCis the square of the length of a line segment fromPto a pointTof tangency withC. Equivalently, ifPhas distancedfrom the center of the circle, and the circle has radiusr, then (by thePythagorean theorem) the power isd2−r2. The same formulad2−r2may be extended to all points in the plane, regardless of whether they are inside or outside ofC: points onChave zero power, and points insideChave negative power.[2][3][4] The power diagram of a set ofncirclesCiis a partition of the plane intonregionsRi(called cells), such that a pointPbelongs toRiwhenever circleCiis the circle minimizing the power ofP.[2][3][4] In the casen= 2, the power diagram consists of twohalfplanes, separated by a line called theradical axisor chordale of the two circles. Along the radical axis, both circles have equal power. More generally, in any power diagram, each cellRiis aconvex polygon, the intersection of the halfspaces bounded by the radical axes of circleCiwith each other circle. Triples of cells meet atverticesof the diagram, which are the radical centers of the three circles whose cells meet at the vertex.[2][3][4] The power diagram may be seen as a weighted form of theVoronoi diagramof a set of point sites, a partition of the plane into cells within which one of the sites is closer than all the other sites. Other forms ofweighted Voronoi diagraminclude the additively weighted Voronoi diagram, in which each site has a weight that is added to its distance before comparing it to the distances to the other sites, and the multiplicatively weighted Voronoi diagram, in which the weight of a site is multiplied by its distance before comparing it to the distances to the other sites. In contrast, in the power diagram, we may view each circle center as a site, and each circle's squared radius as a weight that is subtracted from thesquared Euclidean distancebefore comparing it to other squared distances. In the case that all the circle radii are equal, this subtraction makes no difference to the comparison, and the power diagram coincides with the Voronoi diagram.[3][4] A planar power diagram may also be interpreted as a planar cross-section of an unweighted three-dimensional Voronoi diagram. In this interpretation, the set of circle centers in the cross-section plane are the perpendicular projections of the three-dimensional Voronoi sites, and the squared radius of each circle is a constantKminus the squared distance of the corresponding site from the cross-section plane, whereKis chosen large enough to make all these radii positive.[5] Like the Voronoi diagram, the power diagram may be generalized to Euclidean spaces of any dimension. 
The power diagram ofnspheres inddimensions is combinatorially equivalent to the intersection of a set ofnupward-facing halfspaces ind+ 1 dimensions, and vice versa.[3] Two-dimensional power diagrams may be constructed by an algorithm that runs in time O(nlogn).[2][3]More generally, because of the equivalence with higher-dimensional halfspace intersections,d-dimensional power diagrams (ford> 2) may be constructed by an algorithm that runs in timeO(n⌈d/2⌉){\displaystyle O(n^{\lceil d/2\rceil })}.[3] The power diagram may be used as part of an efficient algorithm for computing the volume of a union of spheres. Intersecting each sphere with its power diagram cell gives its contribution to the total union, from which the volume may be computed in time proportional to the complexity of the power diagram.[6] Other applications of power diagrams includedata structuresfor testing whether a point belongs to a union of disks,[2]algorithms for constructing the boundary of a union of disks,[2]and algorithms for finding the closest two balls in a set of balls.[7]It is also used for solving the semi-discreteoptimal transportationproblem[8]which in turn has numerous applications, such as early universe reconstruction[9]or fluid dynamics.[10] Aurenhammer (1987)traces the definition of the power distance to the work of 19th-century mathematiciansEdmond LaguerreandGeorgy Voronoy.[3]Fejes Tóth (1977)defined power diagrams and used them to show that the boundary of a union ofncircular disks can always be illuminated from at most 2npoint light sources.[11]Power diagrams have appeared in the literature under other names including the "Laguerre–Voronoi diagram", "Dirichlet cell complex", "radical Voronoi tesselation" and "sectional Dirichlet tesselation".[12]
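The definitions above translate directly into a brute-force computation: for each query point, evaluate the power d^2 − r^2 against every circle and assign the point to the circle with the smallest value. The sketch below uses made-up circles and only illustrates the definition; it is not one of the O(n log n) constructions mentioned above.

    # Hypothetical circles given as (center_x, center_y, radius).
    circles = [(0.0, 0.0, 1.0), (4.0, 0.0, 2.0), (2.0, 3.0, 0.5)]

    def power(point, circle):
        """Power of a point with respect to a circle: d^2 - r^2.

        Zero on the circle, negative inside it, positive outside it."""
        (px, py), (cx, cy, r) = point, circle
        return (px - cx) ** 2 + (py - cy) ** 2 - r ** 2

    def power_cell(point, circles):
        """Index of the circle whose power-diagram cell contains the point."""
        return min(range(len(circles)), key=lambda i: power(point, circles[i]))

    print(power((2.0, 0.0), circles[0]))    # 3.0: squared tangent length from (2, 0) to the unit circle
    print(power_cell((1.0, 0.5), circles))  # 0: smallest power distance is to the first circle
    print(power_cell((1.8, 0.0), circles))  # 1: the larger circle wins although its center is farther away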
https://en.wikipedia.org/wiki/Power_diagram
Inmathematics, aconstraintis a condition of anoptimizationproblem that the solution must satisfy. There are several types of constraints—primarilyequalityconstraints,inequalityconstraints, andinteger constraints. The set ofcandidate solutionsthat satisfy all constraints is called thefeasible set.[1] The following is a simple optimization problem: subject to and wherex{\displaystyle \mathbf {x} }denotes the vector (x1,x2). In this example, the first line defines the function to be minimized (called theobjective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints arehard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions. Without the constraints, the solution would be (0,0), wheref(x){\displaystyle f(\mathbf {x} )}has the lowest value. But this solution does not satisfy the constraints. The solution of theconstrained optimizationproblem stated above isx=(1,1){\displaystyle \mathbf {x} =(1,1)}, which is the point with the smallest value off(x){\displaystyle f(\mathbf {x} )}that satisfies the two constraints. If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to ashard constraints. However, in some problems, calledflexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known assoft constraints. Soft constraints arise in, for example,preference-based planning. In aMAX-CSPproblem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints. Global constraints[2]are constraints representing a specific relation on a number of variables, taken altogether. Some of them, such as thealldifferentconstraint, can be rewritten as a conjunction of atomic constraints in a simpler language: thealldifferentconstraint holds onnvariablesx1...xn{\displaystyle x_{1}...x_{n}}, and is satisfied if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalitiesx1≠x2,x1≠x3...,x2≠x3,x2≠x4...xn−1≠xn{\displaystyle x_{1}\neq x_{2},x_{1}\neq x_{3}...,x_{2}\neq x_{3},x_{2}\neq x_{4}...x_{n-1}\neq x_{n}}. Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, theregularconstraint expresses that a sequence of variables is accepted by adeterministic finite automaton. Global constraints are used[3]to simplify the modeling ofconstraint satisfaction problems, to extend the expressivity of constraint languages, and also to improve theconstraint resolution: indeed, by considering the variables altogether, infeasible situations can be seen earlier in the solving process. Many of the global constraints are referenced into anonline catalog.
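The constrained example above can also be explored numerically. Since the explicit formulas are not reproduced in the text, the objective below is an assumed stand-in, f(x) = x1^2 + x2^4 with constraints x1 >= 1 and x2 = 1, chosen only because it is consistent with the stated solutions (unconstrained minimum at (0, 0), constrained minimum at (1, 1)); scipy.optimize.minimize with SLSQP constraint dictionaries is used purely for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Assumed objective, consistent with the discussion above: the unconstrained
    # minimum is at (0, 0) and the constrained minimum is at (1, 1).
    def f(x):
        return x[0] ** 2 + x[1] ** 4

    constraints = [
        {"type": "ineq", "fun": lambda x: x[0] - 1.0},   # inequality constraint: x1 >= 1
        {"type": "eq",   "fun": lambda x: x[1] - 1.0},   # equality constraint:   x2 == 1
    ]

    unconstrained = minimize(f, x0=np.array([2.0, 2.0]), method="SLSQP")
    constrained = minimize(f, x0=np.array([2.0, 2.0]), method="SLSQP", constraints=constraints)

    print(unconstrained.x)   # near (0, 0): the hard constraints are ignored
    print(constrained.x)     # near (1, 1): the smallest f value inside the feasible set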
https://en.wikipedia.org/wiki/Constraint_(mathematics)
Unit testing,a.k.a.componentormoduletesting, is a form ofsoftware testingby which isolatedsource codeis tested to validate expected behavior.[1] Unit testing describes tests that are run at the unit-level to contrast testing at theintegrationorsystemlevel.[11] Unit testing, as a principle for testing separately smaller parts of large software systems, dates back to the early days of software engineering. In June 1956 at US Navy's Symposium on Advanced Programming Methods for Digital Computers, H.D. Benington presented theSAGEproject. It featured a specification-based approach where the coding phase was followed by "parameter testing" to validate component subprograms against their specification, followed then by an "assembly testing" for parts put together.[2][3] In 1964, a similar approach was described for the software of theMercury project, where individual units developed by different programmers underwent "unit tests" before being integrated together.[4]In 1969, testing methodologies appeared more structured, with unit tests, component tests and integration tests collectively validating individual parts written separately and their progressive assembly into larger blocks.[5]Some public standards adopted in the late 1960s, such as MIL-STD-483[6]and MIL-STD-490, contributed further to a wide acceptance of unit testing in large projects. Unit testing was in those times interactive[3]or automated,[7]using either coded tests or capture and replay testing tools. In 1989,Kent Beckdescribed a testing framework forSmalltalk(later calledSUnit) in "Simple Smalltalk Testing: With Patterns". In 1997,Kent BeckandErich Gammadeveloped and releasedJUnit, a unit test framework that became popular withJavadevelopers.[8]Googleembraced automated testing around 2005–2006.[9] A unit is defined as a single behaviour exhibited by the system under test (SUT), usually corresponding to a requirement[definition needed]. While a unit may correspond to a single function or module (inprocedural programming) or a single method or class (inobject-oriented programming), functions/methods and modules/classes do not necessarily correspond to units. From the system requirements perspective only the perimeter of the system is relevant, thus only entry points to externally visible system behaviours define units.[clarification needed][10] Unit tests can be performed manually or viaautomated testexecution. Automated tests include benefits such as: running tests often, running tests without staffing cost, and consistent and repeatable testing. Testing is often performed by the programmer who writes and modifies the code under test. Unit testing may be viewed as part of the process of writing code. Aparameterized testis a test that accepts a set of values that can be used to enable the test to run with multiple, different input values. A testing framework that supports parametrized tests provides a way to encode parameter sets and to run the test with each set. Use of parametrized tests can reduce test code duplication. Parameterized tests are supported byTestNG,JUnit,[14]XUnitandNUnit, as well as in various JavaScript test frameworks.[citation needed] Parameters for the unit tests may be coded manually or in some cases are automatically generated by the test framework.
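The frameworks named above target Java, .NET and JavaScript; as an illustration of the same idea in another language, the sketch below uses Python's pytest and its parametrize marker (an assumption chosen for illustration, not one of the frameworks listed): a single test body is run once per parameter set.

```python
import pytest

def add(a, b):
    # Trivial code under test (an assumed example, not from the article).
    return a + b

# One test body, several parameter sets: the framework runs it once per tuple.
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (2.5, 2.5, 5.0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```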
In recent years support was added for writing more powerful (unit) tests, leveraging the concept of theories, test cases that execute the same steps, but using test data generated at runtime, unlike regular parameterized tests that use the same execution steps with input sets that are pre-defined.[citation needed] Sometimes, in the agile software development, unit testing is done peruser storyand comes in the later half of the sprint after requirements gathering and development are complete. Typically, the developers or other members from the development team, such asconsultants, will write step-by-step 'test scripts' for the developers to execute in the tool. Test scripts are generally written to prove the effective and technical operation of specific developed features in the tool, as opposed to full fledged business processes that would be interfaced by theend user, which is typically done duringuser acceptance testing. If the test-script can be fully executed from start to finish without incident, the unit test is considered to have "passed", otherwise errors are noted and the user story is moved back to development in an 'in-progress' state. User stories that successfully pass unit tests are moved on to the final steps of the sprint - Code review, peer review, and then lastly a 'show-back' session demonstrating the developed tool to stakeholders. In test-driven development (TDD), unit tests are written while the production code is written. Starting with working code, the developer adds test code for a required behavior, then addsjust enoughcode to make the test pass, then refactors the code (including test code) as makes sense and then repeats by adding another test. Unit testing is intended to ensure that the units meet theirdesignand behave as intended.[15] By writing tests first for the smallest testable units, then the compound behaviors between those, one can build up comprehensive tests for complex applications.[15] One goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[1]A unit test provides a strict, writtencontractthat the piece of code must satisfy. Unit testing finds problems early in thedevelopment cycle. This includes both bugs in the programmer's implementation and flaws or missing parts of the specification for the unit. The process of writing a thorough set of tests forces the author to think through inputs, outputs, and error conditions, and thus more crisply define the unit's desired behavior.[citation needed] The cost of finding a bug before coding begins or when the code is first written is considerably lower than the cost of detecting, identifying, and correcting the bug later. Bugs in released code may also cause costly problems for the end-users of the software.[16][17][18]Code can be impossible or difficult to unit test if poorly written, thus unit testing can force developers to structure functions and objects in better ways. Unit testing enables more frequent releases in software development. By testing individual components in isolation, developers can quickly identify and address issues, leading to faster iteration and release cycles.[19] Unit testing allows the programmer torefactorcode or upgrade system libraries at a later date, and make sure the module still works correctly (e.g., inregression testing). The procedure is to write test cases for allfunctionsandmethodsso that whenever a change causes a fault, it can be identified quickly. Unit tests detect changes which may break adesign contract. 
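A minimal sketch of the test-first rhythm described above, using Python's built-in unittest with illustrative names: the test is written before the production code, fails ("red") while the function is unimplemented, passes ("green") once just enough code is added, and both test and code are then refactored while kept passing.

```python
import unittest

# Step 1 (red): this test is written first; with `slugify` unimplemented
# (e.g., raising NotImplementedError), running it fails.
# Step 2 (green): add just enough code to make the test pass.
# Step 3: refactor test and production code, keeping the test green.

def slugify(title):
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```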
Unit testing may reduce uncertainty in the units themselves and can be used in abottom-uptesting style approach. By testing the parts of a program first and then testing the sum of its parts,integration testingbecomes much easier.[citation needed] Some programmers contend that unit tests provide a form of documentation of the code. Developers wanting to learn what functionality is provided by a unit, and how to use it, can review the unit tests to gain an understanding of it.[citation needed] Test cases can embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A test case documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.[citation needed] In some processes, the act of writing tests and the code under test, plus associated refactoring, may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior.[citation needed] Testing will not catch every error in the program, because it cannot evaluate every execution path in any but the most trivial programs. Thisproblemis a superset of thehalting problem, which isundecidable. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such asperformance). Unit testing should be done in conjunction with othersoftware testingactivities, as they can only show the presence or absence of particular errors; they cannot prove a complete absence of errors. To guarantee correct behavior for every execution path and every possible input, and ensure the absence of errors, other techniques are required, namely the application offormal methodsto prove that a software component has no unexpected behavior.[citation needed] An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests.[citation needed]Integration testing typically still relies heavily on humanstesting manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.[citation needed] Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[citation needed]This obviously takes time and its investment may not be worth the effort. There are problems that cannot easily be tested at all – for example those that arenondeterministicor involve multiplethreads. In addition, code for a unit test is as likely to be buggy as the code it is testing.Fred BrooksinThe Mythical Man-Monthquotes: "Never go to sea with two chronometers; take one or three."[20]Meaning, if twochronometerscontradict, how do you know which one is correct? Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful tests. 
It is necessary to create relevant initial conditions so the part of the application being tested behaves like part of the complete system. If these initial conditions are not set correctly, the test will not be exercising the code in a realistic context, which diminishes the value and accuracy of unit test results.[citation needed] To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of aversion controlsystem is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time.[citation needed] It is also essential to implement a sustainable process for ensuring that test case failures are reviewed regularly and addressed immediately.[21]If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite. Unit testing embedded system software presents a unique challenge: Because the software is being developed on a different platform than the one it will eventually run on, you cannot readily run a test program in the actual deployment environment, as is possible with desktop programs.[22] Unit tests tend to be easiest when a method has input parameters and some output. It is not as easy to create unit tests when a major function of the method is to interact with something external to the application. For example, a method that will work with a database might require a mock up of database interactions to be created, which probably won't be as comprehensive as the real database interactions.[23][better source needed] Below is an example of a JUnit test suite. It focuses on theAdderclass. The test suite usesassertstatements to verify the expected result of various input values to thesummethod. Using unit-tests as a design specification has one significant advantage over other design methods: The design document (the unit-tests themselves) can itself be used to verify the implementation. The tests will never pass unless the developer implements a solution according to the design. Unit testing lacks some of the accessibility of a diagrammatic specification such as aUMLdiagram, but they may be generated from the unit test using automated tools. Most modern languages have free tools (usually available as extensions toIDEs). Free tools, like those based on thexUnitframework, outsource to another system the graphical rendering of a view for human consumption.[24] Unit testing is the cornerstone ofextreme programming, which relies on an automatedunit testing framework. This automated unit testing framework can be either third party, e.g.,xUnit, or created within the development group. Extreme programming uses the creation of unit tests fortest-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass. 
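The JUnit listing referred to above ("an example of a JUnit test suite... the Adder class") is not reproduced in this extract. Below is a rough analogue in Python's unittest showing what such a suite typically looks like; the Adder class and the specific assertions are assumptions, not the original Java code.

```python
import unittest

class Adder:
    """Assumed stand-in for the article's Adder class."""
    def sum(self, a, b):
        return a + b

class AdderTest(unittest.TestCase):
    def setUp(self):
        self.adder = Adder()

    def test_sum_of_zeros(self):
        self.assertEqual(self.adder.sum(0, 0), 0)

    def test_sum_of_positive_numbers(self):
        self.assertEqual(self.adder.sum(1, 2), 3)

    def test_sum_with_negative_number(self):
        self.assertEqual(self.adder.sum(-1, 5), 4)

if __name__ == "__main__":
    unittest.main()
```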
Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming mandates a "test everything that can possibly break" strategy, over the traditional "test every execution path" method. This leads developers to develop fewer tests than classical methods, but this isn't really a problem, more a restatement of fact, as classical methods have rarely ever been followed methodically enough for all execution paths to have been thoroughly tested.[citation needed]Extreme programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources. Crucially, the test code is considered a first class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development andrefactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form ofregression test. Unit testing is also critical to the concept ofEmergent Design. As emergent design is heavily dependent upon refactoring, unit tests are an integral component.[citation needed] An automated testing framework provides features for automating test execution and can accelerate writing and running tests. Frameworks have been developed fora wide variety of programming languages. Generally, frameworks arethird-party; not distributed with a compiler orintegrated development environment(IDE). Tests can be written without using a framework to exercise the code under test usingassertions,exception handling, and othercontrol flowmechanisms to verify behavior and report failure. Some note that testing without a framework is valuable since there is abarrier to entryfor the adoption of a framework; that having some tests is better than none, but once a framework is in place, adding tests can be easier.[25] In some frameworks advanced test features are missing and must be hand-coded. Some programming languages directly support unit testing. Their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Additionally, the Boolean conditions of the unit tests can be expressed in the same syntax as Boolean expressions used in non-unit test code, such as what is used forifandwhilestatements. Languages with built-in unit testing support include: Languages with standard unit testing framework support include: Some languages do not have built-in unit-testing support but have established unit testing libraries or frameworks. These languages include:
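As noted above, tests can also be written without a framework, using plain assertions and ordinary control flow to report failures. A minimal sketch with illustrative names:

```python
def reverse_words(s):
    # Code under test (illustrative).
    return " ".join(reversed(s.split()))

# Each check is a name plus a zero-argument callable returning True on success.
checks = [
    ("reverses word order", lambda: reverse_words("unit testing works") == "works testing unit"),
    ("handles empty input", lambda: reverse_words("") == ""),
]

if __name__ == "__main__":
    failed = [name for name, check in checks if not check()]
    for name in failed:
        print("FAIL:", name)
    print("OK" if not failed else f"{len(failed)} failure(s)")
```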
https://en.wikipedia.org/wiki/Unit_testing
Inmathematics, theFourier transform(FT) is anintegral transformthat takes afunctionas input then outputs another function that describes the extent to which variousfrequenciesare present in the original function. The output of the transform is acomplex-valued function of frequency. The termFourier transformrefers to both this complex-valued function and themathematical operation. When a distinction needs to be made, the output of the operation is sometimes called thefrequency domainrepresentation of the original function. The Fourier transform is analogous to decomposing thesoundof a musicalchordinto theintensitiesof its constituentpitches. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as theuncertainty principle. Thecriticalcase for this principle is theGaussian function, of substantial importance inprobability theoryandstatisticsas well as in the study of physical phenomena exhibitingnormal distribution(e.g.,diffusion). The Fourier transform of a Gaussian function is another Gaussian function.Joseph Fourierintroducedsine and cosine transforms(whichcorrespond to the imaginary and real componentsof the modern Fourier transform) in his study ofheat transfer, where Gaussian functions appear as solutions of theheat equation. The Fourier transform can be formally defined as animproperRiemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory.[note 1]For example, many relatively simple applications use theDirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.[note 2] The Fourier transform can also be generalized to functions of several variables onEuclidean space, sending a function of3-dimensional'position space' to a function of3-dimensionalmomentum (or a function of space and time to a function of4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as inquantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possiblyvector-valued.[note 3]Still further generalization is possible to functions ongroups, which, besides the original Fourier transform onRorRn, notably includes thediscrete-time Fourier transform(DTFT, group =Z), thediscrete Fourier transform(DFT, group =ZmodN) and theFourier seriesor circular Fourier transform (group =S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handleperiodic functions. Thefast Fourier transform(FFT) is an algorithm for computing the DFT. The Fourier transform of a complex-valued (Lebesgue) integrable functionf(x){\displaystyle f(x)}on the real line, is the complex valued functionf^(ξ){\displaystyle {\hat {f}}(\xi )}, defined by the integral[1] f^(ξ)=∫−∞∞f(x)e−i2πξxdx,∀ξ∈R.{\displaystyle {\widehat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\ e^{-i2\pi \xi x}\,dx,\quad \forall \xi \in \mathbb {R} .} Evaluating the Fourier transform for all values ofξ{\displaystyle \xi }produces thefrequency-domainfunction, and it converges at all frequencies to a continuous function tending to zero at infinity. 
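As a numerical illustration of the definition (not part of the article), the sketch below approximates the integral in Eq. 1 by a Riemann sum on a truncated grid and applies it to the Gaussian f(x) = e^(−πx²); the result matches e^(−πξ²), the self-transform property of the Gaussian mentioned earlier. Grid sizes and ranges are arbitrary choices.

```python
import numpy as np

def fourier_transform(f, xs, xis):
    """Riemann-sum approximation of f_hat(xi) = ∫ f(x) e^{-i 2π xi x} dx on the grid xs."""
    dx = xs[1] - xs[0]
    kernel = np.exp(-1j * 2 * np.pi * np.outer(xis, xs))   # one row per frequency xi
    return kernel @ f(xs) * dx

f   = lambda x: np.exp(-np.pi * x ** 2)    # Gaussian: rapidly decaying, so truncation is harmless
xs  = np.linspace(-10.0, 10.0, 4001)       # integration grid
xis = np.linspace(-3.0, 3.0, 13)           # frequencies at which to evaluate f_hat

f_hat = fourier_transform(f, xs, xis)
print(np.max(np.abs(f_hat - np.exp(-np.pi * xis ** 2))))   # tiny: the Gaussian is its own transform
```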
Iff(x){\displaystyle f(x)}decays with all derivatives, i.e.,lim|x|→∞f(n)(x)=0,∀n∈N,{\displaystyle \lim _{|x|\to \infty }f^{(n)}(x)=0,\quad \forall n\in \mathbb {N} ,}thenf^{\displaystyle {\widehat {f}}}converges for all frequencies and, by theRiemann–Lebesgue lemma,f^{\displaystyle {\widehat {f}}}also decays with all derivatives. First introduced inFourier'sAnalytical Theory of Heat.,[2][3][4][5]the corresponding inversion formula for "sufficiently nice" functions is given by theFourier inversion theorem, i.e., f(x)=∫−∞∞f^(ξ)ei2πξxdξ,∀x∈R.{\displaystyle f(x)=\int _{-\infty }^{\infty }{\widehat {f}}(\xi )\ e^{i2\pi \xi x}\,d\xi ,\quad \forall \ x\in \mathbb {R} .} The functionsf{\displaystyle f}andf^{\displaystyle {\widehat {f}}}are referred to as aFourier transform pair.[6]A common notation for designating transform pairs is:[7]f(x)⟷Ff^(ξ),{\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\widehat {f}}(\xi ),}for examplerect⁡(x)⟷Fsinc⁡(ξ).{\displaystyle \operatorname {rect} (x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \operatorname {sinc} (\xi ).} By analogy, theFourier seriescan be regarded as an abstract Fourier transform on the groupZ{\displaystyle \mathbb {Z} }ofintegers. That is, thesynthesisof a sequence of complex numberscn{\displaystyle c_{n}}is defined by the Fourier transformf(x)=∑n=−∞∞cnei2πnPx,{\displaystyle f(x)=\sum _{n=-\infty }^{\infty }c_{n}\,e^{i2\pi {\tfrac {n}{P}}x},}such thatcn{\displaystyle c_{n}}are given by the inversion formula, i.e., theanalysiscn=1P∫−P/2P/2f(x)e−i2πnPxdx,{\displaystyle c_{n}={\frac {1}{P}}\int _{-P/2}^{P/2}f(x)\,e^{-i2\pi {\frac {n}{P}}x}\,dx,}for some complex-valued,P{\displaystyle P}-periodic functionf(x){\displaystyle f(x)}defined on a bounded interval[−P/2,P/2]∈R{\displaystyle [-P/2,P/2]\in \mathbb {R} }. WhenP→∞,{\displaystyle P\to \infty ,}the constituentfrequenciesare a continuum:nP→ξ∈R,{\displaystyle {\tfrac {n}{P}}\to \xi \in \mathbb {R} ,}[8][9][10]andcn→f^(ξ)∈C{\displaystyle c_{n}\to {\hat {f}}(\xi )\in \mathbb {C} }.[11] In other words, on the finite interval[−P/2,P/2]{\displaystyle [-P/2,P/2]}the functionf(x){\displaystyle f(x)}has a discrete decomposition in the periodic functionsei2πxn/P{\displaystyle e^{i2\pi xn/P}}. On the infinite interval(−∞,∞){\displaystyle (-\infty ,\infty )}the functionf(x){\displaystyle f(x)}has a continuous decomposition in periodic functionsei2πxξ{\displaystyle e^{i2\pi x\xi }}. Ameasurable functionf:R→C{\displaystyle f:\mathbb {R} \to \mathbb {C} }is called (Lebesgue) integrable if theLebesgue integralof its absolute value is finite:‖f‖1=∫R|f(x)|dx<∞.{\displaystyle \|f\|_{1}=\int _{\mathbb {R} }|f(x)|\,dx<\infty .}Iff{\displaystyle f}is Lebesgue integrable then the Fourier transform, given byEq.1, is well-defined for allξ∈R{\displaystyle \xi \in \mathbb {R} }.[12]Furthermore,f^∈L∞∩C(R){\displaystyle {\widehat {f}}\in L^{\infty }\cap C(\mathbb {R} )}is bounded,uniformly continuousand (by theRiemann–Lebesgue lemma) zero at infinity. The spaceL1(R){\displaystyle L^{1}(\mathbb {R} )}is the space of measurable functions for which the norm‖f‖1{\displaystyle \|f\|_{1}}is finite, modulo theequivalence relationof equalityalmost everywhere. The Fourier transform onL1(R){\displaystyle L^{1}(\mathbb {R} )}isone-to-one. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular,Eq.2is no longer valid, as it was stated only under the hypothesis thatf(x){\displaystyle f(x)}decayed with all derivatives. 
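A quick numerical check of the rect ⟷ sinc transform pair quoted above, using the same convention as Eq. 1; NumPy's np.sinc is the normalized sinc, sin(πξ)/(πξ). The grid choices are arbitrary.

```python
import numpy as np

rect = lambda x: (np.abs(x) <= 0.5).astype(float)     # unit rectangular pulse

xs = np.linspace(-0.5, 0.5, 20001)                    # the support of rect
dx = xs[1] - xs[0]
w = np.full_like(xs, dx); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights
xis = np.linspace(-4.0, 4.0, 17)

f_hat = np.array([np.sum(w * rect(xs) * np.exp(-1j * 2 * np.pi * xi * xs)) for xi in xis])
print(np.max(np.abs(f_hat - np.sinc(xis))))           # close to zero: rect ↦ sinc
```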
WhileEq.1defines the Fourier transform for (complex-valued) functions inL1(R){\displaystyle L^{1}(\mathbb {R} )}, it is not well-defined for other integrability classes, most importantly the space ofsquare-integrable functionsL2(R){\displaystyle L^{2}(\mathbb {R} )}. For example, the functionf(x)=(1+x2)−1/2{\displaystyle f(x)=(1+x^{2})^{-1/2}}is inL2{\displaystyle L^{2}}but notL1{\displaystyle L^{1}}and therefore the Lebesgue integralEq.1does not exist. However, the Fourier transform on the dense subspaceL1∩L2(R)⊂L2(R){\displaystyle L^{1}\cap L^{2}(\mathbb {R} )\subset L^{2}(\mathbb {R} )}admits a unique continuous extension to aunitary operatoronL2(R){\displaystyle L^{2}(\mathbb {R} )}. This extension is important in part because, unlike the case ofL1{\displaystyle L^{1}}, the Fourier transform is anautomorphismof the spaceL2(R){\displaystyle L^{2}(\mathbb {R} )}. In such cases, the Fourier transform can be obtained explicitly byregularizingthe integral, and then passing to a limit. In practice, the integral is often regarded as animproper integralinstead of a proper Lebesgue integral, but sometimes for convergence one needs to useweak limitorprincipal valueinstead of the (pointwise) limits implicit in an improper integral.Titchmarsh (1986)andDym & McKean (1985)each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with theL2{\displaystyle L^{2}}Fourier transform is that Gaussians are dense inL1∩L2{\displaystyle L^{1}\cap L^{2}}, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform, can then be proven from two facts about Gaussians:[13] A feature of theL1{\displaystyle L^{1}}Fourier transform is that it is a homomorphism of Banach algebras fromL1{\displaystyle L^{1}}equipped with the convolution operation to the Banach algebra of continuous functions under theL∞{\displaystyle L^{\infty }}(supremum) norm. The conventions chosen in this article are those ofharmonic analysis, and are characterized as the unique conventions such that the Fourier transform is bothunitaryonL2and an algebra homomorphism fromL1toL∞, without renormalizing the Lebesgue measure.[14] When the independent variable (x{\displaystyle x}) representstime(often denoted byt{\displaystyle t}), the transform variable (ξ{\displaystyle \xi }) representsfrequency(often denoted byf{\displaystyle f}). For example, if time is measured inseconds, then frequency is inhertz. The Fourier transform can also be written in terms ofangular frequency,ω=2πξ,{\displaystyle \omega =2\pi \xi ,}whose units areradiansper second. The substitutionξ=ω2π{\displaystyle \xi ={\tfrac {\omega }{2\pi }}}intoEq.1produces this convention, where functionf^{\displaystyle {\widehat {f}}}is relabeledf1^:{\displaystyle {\widehat {f_{1}}}:}f3^(ω)≜∫−∞∞f(x)⋅e−iωxdx=f1^(ω2π),f(x)=12π∫−∞∞f3^(ω)⋅eiωxdω.{\displaystyle {\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}}Unlike theEq.1definition, the Fourier transform is no longer aunitary transformation, and there is less symmetry between the formulas for the transform and its inverse. 
Those properties are restored by splitting the2π{\displaystyle 2\pi }factor evenly between the transform and its inverse, which leads to another convention:f2^(ω)≜12π∫−∞∞f(x)⋅e−iωxdx=12πf1^(ω2π),f(x)=12π∫−∞∞f2^(ω)⋅eiωxdω.{\displaystyle {\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\ \ {\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}}Variations of all three conventions can be created by conjugating the complex-exponentialkernelof both the forward and the reverse transform. The signs must be opposites. In 1822, Fourier claimed (seeJoseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines.[15]That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since. In general, the coefficientsf^(ξ){\displaystyle {\widehat {f}}(\xi )}are complex numbers, which have two equivalent forms (seeEuler's formula):f^(ξ)=Aeiθ⏟polar coordinate form=Acos⁡(θ)+iAsin⁡(θ)⏟rectangular coordinate form.{\displaystyle {\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.} The product withei2πξx{\displaystyle e^{i2\pi \xi x}}(Eq.2) has these forms:f^(ξ)⋅ei2πξx=Aeiθ⋅ei2πξx=Aei(2πξx+θ)⏟polar coordinate form=Acos⁡(2πξx+θ)+iAsin⁡(2πξx+θ)⏟rectangular coordinate form.{\displaystyle {\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}}.\end{aligned}}}which conveys bothamplitudeandphaseof frequencyξ.{\displaystyle \xi .}Likewise, the intuitive interpretation ofEq.1is that multiplyingf(x){\displaystyle f(x)}bye−i2πξx{\displaystyle e^{-i2\pi \xi x}}has the effect of subtractingξ{\displaystyle \xi }from every frequency component of functionf(x).{\displaystyle f(x).}[note 4]Only the component that was at frequencyξ{\displaystyle \xi }can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see§ Example) It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. Euler's formula introduces the possibility of negativeξ.{\displaystyle \xi .}AndEq.1is defined∀ξ∈R.{\displaystyle \forall \xi \in \mathbb {R} .}Only certain complex-valuedf(x){\displaystyle f(x)}have transformsf^=0,∀ξ<0{\displaystyle {\widehat {f}}=0,\ \forall \ \xi <0}(SeeAnalytic signal. A simple example isei2πξ0x(ξ0>0).{\displaystyle e^{i2\pi \xi _{0}x}\ (\xi _{0}>0).})  But negative frequency is necessary to characterize all other complex-valuedf(x),{\displaystyle f(x),}found insignal processing,partial differential equations,radar,nonlinear optics,quantum mechanics, and others. For a real-valuedf(x),{\displaystyle f(x),}Eq.1has the symmetry propertyf^(−ξ)=f^∗(ξ){\displaystyle {\widehat {f}}(-\xi )={\widehat {f}}^{*}(\xi )}(see§ Conjugationbelow). 
This redundancy enablesEq.2to distinguishf(x)=cos⁡(2πξ0x){\displaystyle f(x)=\cos(2\pi \xi _{0}x)}fromei2πξ0x.{\displaystyle e^{i2\pi \xi _{0}x}.}But of course it cannot tell us the actual sign ofξ0,{\displaystyle \xi _{0},}becausecos⁡(2πξ0x){\displaystyle \cos(2\pi \xi _{0}x)}andcos⁡(2π(−ξ0)x){\displaystyle \cos(2\pi (-\xi _{0})x)}are indistinguishable on just the real numbers line. The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for integral inEq.1to be defined the function must beabsolutely integrable. Instead it is common to useFourier series. It is possible to extend the definition to include periodic functions by viewing them astempered distributions. This makes it possible to see a connection between theFourier seriesand the Fourier transform for periodic functions that have aconvergent Fourier series. Iff(x){\displaystyle f(x)}is aperiodic function, with periodP{\displaystyle P}, that has a convergent Fourier series, then:f^(ξ)=∑n=−∞∞cn⋅δ(ξ−nP),{\displaystyle {\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),}wherecn{\displaystyle c_{n}}are the Fourier series coefficients off{\displaystyle f}, andδ{\displaystyle \delta }is theDirac delta function. In other words, the Fourier transform is aDirac combfunction whoseteethare multiplied by the Fourier series coefficients. The Fourier transform of anintegrablefunctionf{\displaystyle f}can be sampled at regular intervals of arbitrary length1P.{\displaystyle {\tfrac {1}{P}}.}These samples can be deduced from one cycle of a periodic functionfP{\displaystyle f_{P}}which hasFourier seriescoefficients proportional to those samples by thePoisson summation formula:fP(x)≜∑n=−∞∞f(x+nP)=1P∑k=−∞∞f^(kP)ei2πkPx,∀k∈Z{\displaystyle f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x},\quad \forall k\in \mathbb {Z} } The integrability off{\displaystyle f}ensures the periodic summation converges. Therefore, the samplesf^(kP){\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)}can be determined by Fourier series analysis:f^(kP)=∫PfP(x)⋅e−i2πkPxdx.{\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.} Whenf(x){\displaystyle f(x)}hascompact support,fP(x){\displaystyle f_{P}(x)}has a finite number of terms within the interval of integration. Whenf(x){\displaystyle f(x)}does not have compact support, numerical evaluation offP(x){\displaystyle f_{P}(x)}requires an approximation, such as taperingf(x){\displaystyle f(x)}or truncating the number of terms. The frequency variable must have inverse units to the units of the original function's domain (typically namedt{\displaystyle t}orx{\displaystyle x}). For example, ift{\displaystyle t}is measured in seconds,ξ{\displaystyle \xi }should be in cycles per second orhertz. If the scale of time is in units of2π{\displaystyle 2\pi }seconds, then another Greek letterω{\displaystyle \omega }is typically used instead to representangular frequency(whereω=2πξ{\displaystyle \omega =2\pi \xi }) in units ofradiansper second. If usingx{\displaystyle x}for units of length, thenξ{\displaystyle \xi }must be in inverse length, e.g.,wavenumbers. 
That is to say, there are two versions of the real line: one which is therangeoft{\displaystyle t}and measured in units oft,{\displaystyle t,}and the other which is the range ofξ{\displaystyle \xi }and measured in inverse units to the units oft.{\displaystyle t.}These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general,ξ{\displaystyle \xi }must always be taken to be alinear formon the space of its domain, which is to say that the second real line is thedual spaceof the first real line. See the article onlinear algebrafor a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to generalsymmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform hasiin the exponent instead of−i, and vice versa for the inversion formula. This convention is common in modern physics[16]and is the default forWolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. It simply means thatf^(ξ){\displaystyle {\hat {f}}(\xi )}is the amplitude of the wavee−i2πξx{\displaystyle e^{-i2\pi \xi x}}instead of the waveei2πξx{\displaystyle e^{i2\pi \xi x}}(the former, with its minus sign, is often seen in the time dependence forSinusoidal plane-wave solutions of the electromagnetic wave equation, or in thetime dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involveihave it replaced by−i. InElectrical engineeringthe letterjis typically used for theimaginary unitinstead ofibecauseiis used for current. When usingdimensionless units, the constant factors might not even be written in the transform definition. For instance, inprobability theory, the characteristic functionΦof the probability density functionfof a random variableXof continuous type is defined without a negative sign in the exponential, and since the units ofxare ignored, there is no 2πeither:ϕ(λ)=∫−∞∞f(x)eiλxdx.{\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.} (In probability theory, and in mathematical statistics, the use of the Fourier—Stieltjes transform is preferred, because so many random variables are not of continuous type, and do not possess a density function, and one must treat not functions butdistributions, i.e., measures which possess "atoms".) From the higher point of view ofgroup characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on alocally compact Abelian group. 
Letf(x){\displaystyle f(x)}andh(x){\displaystyle h(x)}representintegrable functionsLebesgue-measurableon the real line satisfying:∫−∞∞|f(x)|dx<∞.{\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .}We denote the Fourier transforms of these functions asf^(ξ){\displaystyle {\hat {f}}(\xi )}andh^(ξ){\displaystyle {\hat {h}}(\xi )}respectively. The Fourier transform has the following basic properties:[17] af(x)+bh(x)⟺Faf^(ξ)+bh^(ξ);a,b∈C{\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} } f(x−x0)⟺Fe−i2πx0ξf^(ξ);x0∈R{\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} } ei2πξ0xf(x)⟺Ff^(ξ−ξ0);ξ0∈R{\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} } f(ax)⟺F1|a|f^(ξa);a≠0{\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0}The casea=−1{\displaystyle a=-1}leads to thetime-reversal property:f(−x)⟺Ff^(−ξ){\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )} When the real and imaginary parts of a complex function are decomposed into theireven and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:[18] Timedomainf=fRE+fRO+ifIE+ifIO⏟⇕F⇕F⇕F⇕F⇕FFrequencydomainf^=f^RE+if^IO⏞+if^IE+f^RO{\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}} From this, various relationships are apparent, for example: (f(x))∗⟺F(f^(−ξ))∗{\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}}(Note: the ∗ denotescomplex conjugation.) 
In particular, iff{\displaystyle f}isreal, thenf^{\displaystyle {\widehat {f}}}iseven symmetric(akaHermitian function):f^(−ξ)=(f^(ξ))∗.{\displaystyle {\widehat {f}}(-\xi )={\bigl (}{\widehat {f}}(\xi ){\bigr )}^{*}.} And iff{\displaystyle f}is purely imaginary, thenf^{\displaystyle {\widehat {f}}}isodd symmetric:f^(−ξ)=−(f^(ξ))∗.{\displaystyle {\widehat {f}}(-\xi )=-({\widehat {f}}(\xi ))^{*}.} Re⁡{f(x)}⟺F12(f^(ξ)+(f^(−ξ))∗){\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)}Im⁡{f(x)}⟺F12i(f^(ξ)−(f^(−ξ))∗){\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} Substitutingξ=0{\displaystyle \xi =0}in the definition, we obtain:f^(0)=∫−∞∞f(x)dx.{\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.} The integral off{\displaystyle f}over its domain is known as the average value orDC biasof the function. The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transformf^{\displaystyle {\hat {f}}}of any integrable functionf{\displaystyle f}isuniformly continuousand[19][20]‖f^‖∞≤‖f‖1{\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}} By theRiemann–Lebesgue lemma,[21]f^(ξ)→0as|ξ|→∞.{\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .} However,f^{\displaystyle {\hat {f}}}need not be integrable. For example, the Fourier transform of therectangular function, which is integrable, is thesinc function, which is notLebesgue integrable, because itsimproper integralsbehave analogously to thealternating harmonic series, in converging to a sum without beingabsolutely convergent. It is not generally possible to write theinverse transformas aLebesgue integral. However, when bothf{\displaystyle f}andf^{\displaystyle {\hat {f}}}are integrable, the inverse equalityf(x)=∫−∞∞f^(ξ)ei2πxξdξ{\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi }holds for almost everyx. As a result, the Fourier transform isinjectiveonL1(R). Letf(x)andg(x)be integrable, and letf̂(ξ)andĝ(ξ)be their Fourier transforms. Iff(x)andg(x)are alsosquare-integrable, then the Parseval formula follows:[22]⟨f,g⟩L2=∫−∞∞f(x)g(x)¯dx=∫−∞∞f^(ξ)g^(ξ)¯dξ,{\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,}where the bar denotescomplex conjugation. ThePlancherel theorem, which follows from the above, states that[23]‖f‖L22=∫−∞∞|f(x)|2dx=∫−∞∞|f^(ξ)|2dξ.{\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .} Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to aunitary operatoronL2(R). OnL1(R) ∩L2(R), this extension agrees with original Fourier transform defined onL1(R), thus enlarging the domain of the Fourier transform toL1(R) +L2(R)(and consequently toLp(R)for1 ≤p≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves theenergyof the original quantity. The terminology of these formulas is not quite standardised. 
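Plancherel's energy-preservation statement has an exact discrete counterpart for the DFT, which is easy to verify numerically; with NumPy's unnormalized FFT the factor 1/N appears on the frequency side. An illustrative check, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)   # arbitrary test signal
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)    # 1/N compensates the unnormalized DFT
print(np.isclose(energy_time, energy_freq))      # True: energy is preserved
```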
Parseval's theorem was proved only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. SeePontryagin dualityfor a general formulation of this concept in the context of locally compact abelian groups. The Fourier transform translates betweenconvolutionand multiplication of functions. Iff(x)andg(x)are integrable functions with Fourier transformsf̂(ξ)andĝ(ξ)respectively, then the Fourier transform of the convolution is given by the product of the Fourier transformsf̂(ξ)andĝ(ξ)(under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if:h(x)=(f∗g)(x)=∫−∞∞f(y)g(x−y)dy,{\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,}where∗denotes the convolution operation, then:h^(ξ)=f^(ξ)g^(ξ).{\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).} Inlinear time invariant (LTI) system theory, it is common to interpretg(x)as theimpulse responseof an LTI system with inputf(x)and outputh(x), since substituting theunit impulseforf(x)yieldsh(x) =g(x). In this case,ĝ(ξ)represents thefrequency responseof the system. Conversely, iff(x)can be decomposed as the product of two square integrable functionsp(x)andq(x), then the Fourier transform off(x)is given by the convolution of the respective Fourier transformsp̂(ξ)andq̂(ξ). In an analogous manner, it can be shown that ifh(x)is thecross-correlationoff(x)andg(x):h(x)=(f⋆g)(x)=∫−∞∞f(y)¯g(x+y)dy{\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy}then the Fourier transform ofh(x)is:h^(ξ)=f^(ξ)¯g^(ξ).{\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).} As a special case, theautocorrelationof functionf(x)is:h(x)=(f⋆f)(x)=∫−∞∞f(y)¯f(x+y)dy{\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy}for whichh^(ξ)=f^(ξ)¯f^(ξ)=|f^(ξ)|2.{\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.} Supposef(x)is an absolutely continuous differentiable function, and bothfand its derivativef′are integrable. Then the Fourier transform of the derivative is given byf′^(ξ)=F{ddxf(x)}=i2πξf^(ξ).{\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).}More generally, the Fourier transformation of thenth derivativef(n)is given byf(n)^(ξ)=F{dndxnf(x)}=(i2πξ)nf^(ξ).{\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).} Analogously,F{dndξnf^(ξ)}=(i2πx)nf(x){\displaystyle {\mathcal {F}}\left\{{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )\right\}=(i2\pi x)^{n}f(x)}, soF{xnf(x)}=(i2π)ndndξnf^(ξ).{\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).} By applying the Fourier transform and using these formulas, someordinary differential equationscan be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x)is smoothif and only iff̂(ξ)quickly falls to 0 for|ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x)quickly falls to 0 for|x| → ∞if and only iff̂(ξ)is smooth." 
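The convolution theorem takes the same form for the DFT with circular convolution, which gives a quick numerical check (illustrative random signals, not part of the article): the transform of the convolution equals the pointwise product of the transforms.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Direct circular convolution: h[n] = sum_m f[m] * g[(n - m) mod N]
h_direct = np.array([np.sum(f * g[(n - np.arange(N)) % N]) for n in range(N)])

# Via the transform: pointwise product of the DFTs, then the inverse DFT.
h_via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.max(np.abs(h_direct - h_via_fft)))   # ~1e-12: the two agree
```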
The Fourier transform is a linear transform which has eigenfunctions obeyingF[ψ]=λψ,{\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,}withλ∈C.{\displaystyle \lambda \in \mathbb {C} .} A set of eigenfunctions is found by noting that the homogeneous differential equation[U(12πddx)+U(x)]ψ(x)=0{\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0}leads to eigenfunctionsψ(x){\displaystyle \psi (x)}of the Fourier transformF{\displaystyle {\mathcal {F}}}as long as the form of the equation remains invariant under Fourier transform.[note 5]In other words, every solutionψ(x){\displaystyle \psi (x)}and its Fourier transformψ^(ξ){\displaystyle {\hat {\psi }}(\xi )}obey the same equation. Assuminguniquenessof the solutions, every solutionψ(x){\displaystyle \psi (x)}must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform ifU(x){\displaystyle U(x)}can be expanded in a power series in which for all terms the same factor of either one of±1,±i{\displaystyle \pm 1,\pm i}arises from the factorsin{\displaystyle i^{n}}introduced by thedifferentiationrules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowableU(x)=x{\displaystyle U(x)=x}leads to thestandard normal distribution.[24] More generally, a set of eigenfunctions is also found by noting that thedifferentiationrules imply that theordinary differential equation[W(i2πddx)+W(x)]ψ(x)=Cψ(x){\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)}withC{\displaystyle C}constant andW(x){\displaystyle W(x)}being a non-constant even function remains invariant in form when applying the Fourier transformF{\displaystyle {\mathcal {F}}}to both sides of the equation. The simplest example is provided byW(x)=x2{\displaystyle W(x)=x^{2}}which is equivalent to considering the Schrödinger equation for thequantum harmonic oscillator.[25]The corresponding solutions provide an important choice of an orthonormal basis forL2(R)and are given by the "physicist's"Hermite functions. Equivalently one may useψn(x)=24n!e−πx2Hen(2xπ),{\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),}whereHen(x)are the "probabilist's"Hermite polynomials, defined asHen(x)=(−1)ne12x2(ddx)ne−12x2.{\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.} Under this convention for the Fourier transform, we have thatψ^n(ξ)=(−i)nψn(ξ).{\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).} In other words, the Hermite functions form a completeorthonormalsystem ofeigenfunctionsfor the Fourier transform onL2(R).[17][26]However, this choice of eigenfunctions is not unique. Because ofF4=id{\displaystyle {\mathcal {F}}^{4}=\mathrm {id} }there are only four differenteigenvaluesof the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction.[27]As a consequence of this, it is possible to decomposeL2(R)as a direct sum of four spacesH0,H1,H2, andH3where the Fourier transform acts onHeksimply by multiplication byik. Since the complete set of Hermite functionsψnprovides a resolution of the identity they diagonalize the Fourier operator, i.e. 
the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed:F[f](ξ)=∫dxf(x)∑n≥0(−i)nψn(x)ψn(ξ).{\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.} This approach to define the Fourier transform was first proposed byNorbert Wiener.[28]Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely thefractional Fourier transformused in time–frequency analysis.[29]Inphysics, this transform was introduced byEdward Condon.[30]This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the rightconventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generatorN{\displaystyle N}via[31]F[ψ]=e−itNψ.{\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .} The operatorN{\displaystyle N}is thenumber operatorof the quantum harmonic oscillator written as[32][33]N≡12(x−∂∂x)(x+∂∂x)=12(−∂2∂x2+x2−1).{\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).} It can be interpreted as thegeneratoroffractional Fourier transformsfor arbitrary values oft, and of the conventional continuous Fourier transformF{\displaystyle {\mathcal {F}}}for the particular valuet=π/2,{\displaystyle t=\pi /2,}with theMehler kernelimplementing the correspondingactive transform. The eigenfunctions ofN{\displaystyle N}are theHermite functionsψn(x){\displaystyle \psi _{n}(x)}which are therefore also eigenfunctions ofF.{\displaystyle {\mathcal {F}}.} Upon extending the Fourier transform todistributionstheDirac combis also an eigenfunction of the Fourier transform. Under suitable conditions on the functionf{\displaystyle f}, it can be recovered from its Fourier transformf^{\displaystyle {\hat {f}}}. Indeed, denoting the Fourier transform operator byF{\displaystyle {\mathcal {F}}}, soFf:=f^{\displaystyle {\mathcal {F}}f:={\hat {f}}}, then for suitable functions, applying the Fourier transform twice simply flips the function:(F2f)(x)=f(−x){\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)}, which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yieldsF4(f)=f{\displaystyle {\mathcal {F}}^{4}(f)=f}, so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times:F3(f^)=f{\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f}. In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining theparity operatorP{\displaystyle {\mathcal {P}}}such that(Pf)(x)=f(−x){\displaystyle ({\mathcal {P}}f)(x)=f(-x)}, we have:F0=id,F1=F,F2=P,F3=F−1=P∘F=F∘P,F4=id{\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}}These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equalityalmost everywhere?) 
and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of theFourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in thetime–frequency domain(considering time as thex-axis and frequency as they-axis), and the Fourier transform can be generalized to thefractional Fourier transform, which involves rotations by other angles. This can be further generalized tolinear canonical transformations, which can be visualized as the action of thespecial linear groupSL2(R)on the time–frequency plane, with the preserved symplectic form corresponding to theuncertainty principle, below. This approach is particularly studied insignal processing, undertime–frequency analysis. TheHeisenberg groupis a certaingroupofunitary operatorson theHilbert spaceL2(R)of square integrable complex valued functionsfon the real line, generated by the translations(Tyf)(x) =f(x+y)and multiplication byei2πξx,(Mξf)(x) =ei2πξxf(x). These operators do not commute, as their (group) commutator is(Mξ−1Ty−1MξTyf)(x)=ei2πξyf(x){\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)}which is multiplication by the constant (independent ofx)ei2πξy∈U(1)(thecircle groupof unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensionalLie groupof triples(x,ξ,z) ∈R2×U(1), with the group law(x1,ξ1,t1)⋅(x2,ξ2,t2)=(x1+x2,ξ1+ξ2,t1t2ei2π(x1ξ1+x2ξ2+x1ξ2)).{\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).} Denote the Heisenberg group byH1. The above procedure describes not only the group structure, but also a standardunitary representationofH1on a Hilbert space, which we denote byρ:H1→B(L2(R)). Define the linear automorphism ofR2byJ(xξ)=(−ξx){\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}}so thatJ2= −I. ThisJcan be extended to a unique automorphism ofH1:j(x,ξ,t)=(−ξ,x,te−i2πξx).{\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).} According to theStone–von Neumann theorem, the unitary representationsρandρ∘jare unitarily equivalent, so there is a unique intertwinerW∈U(L2(R))such thatρ∘j=WρW∗.{\displaystyle \rho \circ j=W\rho W^{*}.}This operatorWis the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework.[34]For example, the square of the Fourier transform,W2, is an intertwiner associated withJ2= −I, and so we have(W2f)(x) =f(−x)is the reflection of the original functionf. Theintegralfor the Fourier transformf^(ξ)=∫−∞∞e−i2πξtf(t)dt{\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt}can be studied forcomplexvalues of its argumentξ. 
Depending on the properties off, this might not converge off the real axis at all, or it might converge to acomplexanalytic functionfor all values ofξ=σ+iτ, or something in between.[35] ThePaley–Wiener theoremsays thatfis smooth (i.e.,n-times differentiable for all positive integersn) and compactly supported if and only iff̂(σ+iτ)is aholomorphic functionfor which there exists aconstanta> 0such that for anyintegern≥ 0,|ξnf^(ξ)|≤Cea|τ|{\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }}for some constantC. (In this case,fis supported on[−a,a].) This can be expressed by saying thatf̂is anentire functionwhich israpidly decreasinginσ(for fixedτ) and of exponential growth inτ(uniformly inσ).[36] (Iffis not smooth, but onlyL2, the statement still holds providedn= 0.[37]) The space of such functions of acomplex variableis called the Paley—Wiener space. This theorem has been generalised to semisimpleLie groups.[38] Iffis supported on the half-linet≥ 0, thenfis said to be "causal" because theimpulse response functionof a physically realisablefiltermust have this property, as no effect can precede its cause.Paleyand Wiener showed that thenf̂extends to aholomorphic functionon the complex lower half-planeτ< 0which tends to zero asτgoes to infinity.[39]The converse is false and it is not known how to characterise the Fourier transform of a causal function.[40] The Fourier transformf̂(ξ)is related to theLaplace transformF(s), which is also used for the solution ofdifferential equationsand the analysis offilters. It may happen that a functionffor which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of thecomplex plane. For example, iff(t)is of exponential growth, i.e.,|f(t)|<Cea|t|{\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }}for some constantsC,a≥ 0, then[41]f^(iτ)=∫−∞∞e2πτtf(t)dt,{\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,}convergent for all2πτ< −a, is thetwo-sided Laplace transformoff. The more usual version ("one-sided") of the Laplace transform isF(s)=∫0∞f(t)e−stdt.{\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.} Iffis also causal, and analytical, then:f^(iτ)=F(−2πτ).{\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).}Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions—but with the change of variables=i2πξ. From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where highly nonlinear phase response is sought for, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. 
There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea ofharmonic analysis. Still withξ=σ+iτ{\displaystyle \xi =\sigma +i\tau }, iff^{\displaystyle {\widehat {f}}}is complex analytic fora≤τ≤b, then ∫−∞∞f^(σ+ia)ei2πξtdσ=∫−∞∞f^(σ+ib)ei2πξtdσ{\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma }byCauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.[42] Theorem: Iff(t) = 0fort< 0, and|f(t)| <Cea|t|for some constantsC,a> 0, thenf(t)=∫−∞∞f^(σ+iτ)ei2πξtdσ,{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,}for anyτ< −⁠a/2π⁠. This theorem implies theMellin inversion formulafor the Laplace transformation,[41]f(t)=1i2π∫b−i∞b+i∞F(s)estds{\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds}for anyb>a, whereF(s)is the Laplace transform off(t). The hypotheses can be weakened, as in the results of Carleson and Hunt, tof(t)e−atbeingL1, provided thatfbe of bounded variation in a closed neighborhood oft(cf.Dini test), the value offattbe taken to be thearithmetic meanof the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.[43] L2versions of these inversion formulas are also available.[44] The Fourier transform can be defined in any arbitrary number of dimensionsn. As with the one-dimensional case, there are many conventions. For an integrable functionf(x), this article takes the definition:f^(ξ)=F(f)(ξ)=∫Rnf(x)e−i2πξ⋅xdx{\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} }wherexandξaren-dimensionalvectors, andx·ξis thedot productof the vectors. Alternatively,ξcan be viewed as belonging to thedual vector spaceRn⋆{\displaystyle \mathbb {R} ^{n\star }}, in which case the dot product becomes thecontractionofxandξ, usually written as⟨x,ξ⟩. All of the basic properties listed above hold for then-dimensional Fourier transform, as do Plancherel's and Parseval's theorem. When the function is integrable, the Fourier transform is still uniformly continuous and theRiemann–Lebesgue lemmaholds.[21] Generally speaking, the more concentratedf(x)is, the more spread out its Fourier transformf̂(ξ)must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function inx, its Fourier transform stretches out inξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform. 
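Returning to the relation with the Laplace transform described earlier, the correspondence s = i2πξ for causal functions can be checked numerically. The following sketch, assuming NumPy and arbitrary illustrative values a = 2 and ξ = 1.5, compares the truncated Fourier integral of the causal signal f(t) = e^(−at), t ≥ 0, evaluated by the trapezoidal rule, with its one-sided Laplace transform F(s) = 1/(s + a) on the imaginary axis:

import numpy as np

a, xi = 2.0, 1.5                                    # decay rate and test frequency (arbitrary choices)
t = np.linspace(0.0, 40.0, 400_001)                 # truncating at t = 40 is harmless since e^(-a*40) is negligible
g = np.exp(-a * t) * np.exp(-1j * 2 * np.pi * xi * t)
dt = t[1] - t[0]

fourier_value = dt * (np.sum(g) - 0.5 * (g[0] + g[-1]))   # trapezoidal rule for the Fourier integral
laplace_value = 1.0 / (a + 1j * 2 * np.pi * xi)            # F(s) = 1/(s + a) evaluated at s = i*2*pi*xi

print(abs(fourier_value - laplace_value))                  # very small, illustrating s = i*2*pi*xi

The agreement here relies on f being both causal and integrable, so the Fourier integral itself converges on the real axis.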
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of anuncertainty principleby viewing a function and its Fourier transform asconjugate variableswith respect to thesymplectic formon thetime–frequency domain: from the point of view of thelinear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves thesymplectic form. Supposef(x)is an integrable andsquare-integrablefunction. Without loss of generality, assume thatf(x)is normalized:∫−∞∞|f(x)|2dx=1.{\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.} It follows from thePlancherel theoremthatf̂(ξ)is also normalized. The spread aroundx= 0may be measured by thedispersion about zerodefined by[45]D0(f)=∫−∞∞x2|f(x)|2dx.{\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.} In probability terms, this is thesecond momentof|f(x)|2about zero. The uncertainty principle states that, iff(x)is absolutely continuous and the functionsx·f(x)andf′(x)are square integrable, thenD0(f)D0(f^)≥116π2.{\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.} The equality is attained only in the casef(x)=C1e−πx2σ2∴f^(ξ)=σC1e−πσ2ξ2{\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}}whereσ> 0is arbitrary andC1=⁠4√2/√σ⁠so thatfisL2-normalized. In other words, wherefis a (normalized)Gaussian functionwith varianceσ2/2π, centered at zero, and its Fourier transform is a Gaussian function with varianceσ−2/2π. Gaussian functions are examples ofSchwartz functions(see the discussion on tempered distributions below). In fact, this inequality implies that:(∫−∞∞(x−x0)2|f(x)|2dx)(∫−∞∞(ξ−ξ0)2|f^(ξ)|2dξ)≥116π2,∀x0,ξ0∈R.{\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .}Inquantum mechanics, themomentumand positionwave functionsare Fourier transform pairs, up to a factor of thePlanck constant. With this constant properly taken into account, the inequality above becomes the statement of theHeisenberg uncertainty principle.[46] A stronger uncertainty principle is theHirschman uncertainty principle, which is expressed as:H(|f|2)+H(|f^|2)≥log⁡(e2){\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)}whereH(p)is thedifferential entropyof theprobability density functionp(x):H(p)=−∫−∞∞p(x)log⁡(p(x))dx{\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx}where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable functionffor which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically[47])λbyf(t)=∫0∞(a(λ)cos⁡(2πλt)+b(λ)sin⁡(2πλt))dλ.{\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .} This is called an expansion as a trigonometric integral, or a Fourier integral expansion. 
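Before turning to the coefficient functions of the trigonometric expansion below, here is a small numerical check of the uncertainty bound just discussed. It uses the fact, noted above, that a suitably normalised Gaussian attains equality and, with σ = 1 under this article's convention, equals its own Fourier transform; the grid limits are arbitrary illustrative choices:

import numpy as np

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
f = 2**0.25 * np.exp(-np.pi * x**2)        # L2-normalised Gaussian (sigma = 1); it is its own Fourier transform

print(np.sum(np.abs(f)**2) * dx)           # approx 1.0: f is normalised
D0 = np.sum(x**2 * np.abs(f)**2) * dx      # dispersion about zero, D0(f)
print(D0 * D0, 1.0 / (16 * np.pi**2))      # the product D0(f)*D0(f_hat) attains the bound 1/(16*pi^2)

Returning to the sine and cosine expansion introduced above: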
The coefficient functionsaandbcan be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised):a(λ)=2∫−∞∞f(t)cos⁡(2πλt)dt{\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt}andb(λ)=2∫−∞∞f(t)sin⁡(2πλt)dt.{\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform,a, and the Fourier sine transform,b. The functionfcan be recovered from the sine and cosine transform usingf(t)=2∫0∞∫−∞∞f(τ)cos⁡(2πλ(τ−t))dτdλ.{\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .}together with trigonometric identities. This is referred to as Fourier's integral formula.[41][48][49][50] Let the set ofhomogeneousharmonicpolynomialsof degreekonRnbe denoted byAk. The setAkconsists of thesolid spherical harmonicsof degreek. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, iff(x) =e−π|x|2P(x)for someP(x)inAk, thenf̂(ξ) =i−kf(ξ). Let the setHkbe the closure inL2(Rn)of linear combinations of functions of the formf(|x|)P(x)whereP(x)is inAk. The spaceL2(Rn)is then a direct sum of the spacesHkand the Fourier transform maps each spaceHkto itself and is possible to characterize the action of the Fourier transform on each spaceHk.[21] Letf(x) =f0(|x|)P(x)(withP(x)inAk), thenf^(ξ)=F0(|ξ|)P(ξ){\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )}whereF0(r)=2πi−kr−n+2k−22∫0∞f0(s)Jn+2k−22(2πrs)sn+2k2ds.{\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} HereJ(n+ 2k− 2)/2denotes theBessel functionof the first kind with order⁠n+ 2k− 2/2⁠. Whenk= 0this gives a useful formula for the Fourier transform of a radial function.[51]This is essentially theHankel transform. Moreover, there is a simple recursion relating the casesn+ 2andn[52]allowing to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. In higher dimensions it becomes interesting to studyrestriction problemsfor the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a generalclassof square integrable functions. As such, the restriction of the Fourier transform of anL2(Rn)function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems inLpfor1 <p< 2. It is possible in some cases to define the restriction of a Fourier transform to a setS, providedShas non-zero curvature. The case whenSis the unit sphere inRnis of particular interest. In this case the Tomas–Steinrestriction theorem states that the restriction of the Fourier transform to the unit sphere inRnis a bounded operator onLpprovided1 ≤p≤⁠2n+ 2/n+ 3⁠. One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable setsERindexed byR∈ (0,∞): such as balls of radiusRcentered at the origin, or cubes of side2R. 
For a given integrable functionf, consider the functionfRdefined by:fR(x)=∫ERf^(ξ)ei2πx⋅ξdξ,x∈Rn.{\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition thatf∈Lp(Rn). Forn= 1and1 <p< ∞, if one takesER= (−R,R), thenfRconverges tofinLpasRtends to infinity, by the boundedness of theHilbert transform. Naively one may hope the same holds true forn> 1. In the case thatERis taken to be a cube with side lengthR, then convergence still holds. Another natural candidate is the Euclidean ballER= {ξ: |ξ| <R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded inLp(Rn). Forn≥ 2it is a celebrated theorem ofCharles Feffermanthat the multiplier for the unit ball is never bounded unlessp= 2.[28]In fact, whenp≠ 2, this shows that not only mayfRfail to converge tofinLp, but for some functionsf∈Lp(Rn),fRis not even an element ofLp. The definition of the Fourier transform naturally extends fromL1(R){\displaystyle L^{1}(\mathbb {R} )}toL1(Rn){\displaystyle L^{1}(\mathbb {R} ^{n})}. That is, iff∈L1(Rn){\displaystyle f\in L^{1}(\mathbb {R} ^{n})}then the Fourier transformF:L1(Rn)→L∞(Rn){\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})}is given byf(x)↦f^(ξ)=∫Rnf(x)e−i2πξ⋅xdx,∀ξ∈Rn.{\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.}This operator isboundedassupξ∈Rn|f^(ξ)|≤∫Rn|f(x)|dx,{\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,}which shows that itsoperator normis bounded by1. TheRiemann–Lebesgue lemmashows that iff∈L1(Rn){\displaystyle f\in L^{1}(\mathbb {R} ^{n})}then its Fourier transform actually belongs to thespace of continuous functions which vanish at infinity, i.e.,f^∈C0(Rn)⊂L∞(Rn){\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})}.[53]Furthermore, theimageofL1{\displaystyle L^{1}}underF{\displaystyle {\mathcal {F}}}is a strict subset ofC0(Rn){\displaystyle C_{0}(\mathbb {R} ^{n})}. Similarly to the case of one variable, the Fourier transform can be defined onL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}. The Fourier transform inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}is no longer given by an ordinary Lebesgue integral, although it can be computed by animproper integral, i.e.,f^(ξ)=limR→∞∫|x|≤Rf(x)e−i2πξ⋅xdx{\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx}where the limit is taken in theL2sense.[54][55] Furthermore,F:L2(Rn)→L2(Rn){\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})}is aunitary operator.[56]For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for anyf,g∈L2(Rn)we have∫Rnf(x)Fg(x)dx=∫RnFf(x)g(x)dx.{\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image ofL2(Rn)is itself under the Fourier transform. For1<p<2{\displaystyle 1<p<2}, the Fourier transform can be defined onLp(R){\displaystyle L^{p}(\mathbb {R} )}byMarcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part inL2plus a fat body part inL1. 
In each of these spaces, the Fourier transform of a function inLp(Rn)is inLq(Rn), whereq=⁠p/p− 1⁠is theHölder conjugateofp(by theHausdorff–Young inequality). However, except forp= 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions inLpfor the range2 <p< ∞requires the study of distributions.[57]In fact, it can be shown that there are functions inLpwithp> 2so that the Fourier transform is not defined as a function.[21] One might consider enlarging the domain of the Fourier transform fromL1+L2{\displaystyle L^{1}+L^{2}}by consideringgeneralized functions, or distributions. A distribution onRn{\displaystyle \mathbb {R} ^{n}}is a continuous linear functional on the spaceCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}of compactly supported smooth functions (i.e.bump functions), equipped with a suitable topology. SinceCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}is dense inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}, thePlancherel theoremallows one to extend the definition of the Fourier transform to general functions inL2(Rn){\displaystyle L^{2}(\mathbb {R} ^{n})}by continuity arguments. The strategy is then to consider the action of the Fourier transform onCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not mapCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}toCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}. In fact the Fourier transform of an element inCc∞(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}can not vanish on an open set; see the above discussion on the uncertainty principle.[58][59] The Fourier transform can also be defined fortempered distributionsS′(Rn){\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})}, dual to the space ofSchwartz functionsS(Rn){\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})}. A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, henceCc∞(Rn)⊂S(Rn){\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})}and:F:Cc∞(Rn)→S(Rn)∖Cc∞(Rn).{\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).}The Fourier transform is anautomorphismof the Schwartz space and, by duality, also an automorphism of the space of tempered distributions.[21][60]The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, letf{\displaystyle f}andg{\displaystyle g}be integrable functions, and letf^{\displaystyle {\hat {f}}}andg^{\displaystyle {\hat {g}}}be their Fourier transforms respectively. 
Then the Fourier transform obeys the following multiplication formula,[21]∫Rnf^(x)g(x)dx=∫Rnf(x)g^(x)dx.{\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.} Every integrable functionf{\displaystyle f}defines (induces) a distributionTf{\displaystyle T_{f}}by the relationTf(ϕ)=∫Rnf(x)ϕ(x)dx,∀ϕ∈S(Rn).{\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}So it makes sense to define the Fourier transform of a tempered distributionTf∈S′(R){\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} )}by the duality:⟨T^f,ϕ⟩=⟨Tf,ϕ^⟩,∀ϕ∈S(Rn).{\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}Extending this to all tempered distributionsT{\displaystyle T}gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. The Fourier transform of afiniteBorel measureμonRnis given by the continuous function:[61]μ^(ξ)=∫Rne−i2πx⋅ξdμ,{\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,}and called theFourier-Stieltjes transformdue to its connection with theRiemann-Stieltjes integralrepresentation of(Radon) measures.[62]Ifμ{\displaystyle \mu }is theprobability distributionof arandom variableX{\displaystyle X}then its Fourier–Stieltjes transform is, by definition, acharacteristic function.[63]If, in addition, the probability distribution has aprobability density function, this definition is subject to the usual Fourier transform.[64]Stated more generally, whenμ{\displaystyle \mu }isabsolutely continuouswith respect to the Lebesgue measure, i.e.,dμ=f(x)dx,{\displaystyle d\mu =f(x)dx,}thenμ^(ξ)=f^(ξ),{\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),}and the Fourier-Stieltjes transform reduces to the usual definition of the Fourier transform. That is, the notable difference with the Fourier transform of integrable functions is that the Fourier-Stieltjes transform need not vanish at infinity, i.e., theRiemann–Lebesgue lemmafails for measures.[65] Bochner's theoremcharacterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is theDirac measure.[66]Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). The Fourier transform may be generalized to anylocally compact abelian group, i.e., anabelian groupthat is also alocally compact Hausdorff spacesuch that the group operation is continuous. IfGis a locally compact abelian group, it has a translation invariant measureμ, calledHaar measure. For a locally compact abelian groupG, the set of irreducible, i.e. one-dimensional, unitary representations are called itscharacters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by thecompact-open topologyon the space of all continuous functions fromG{\displaystyle G}to thecircle group), the set of charactersĜis itself a locally compact abelian group, called thePontryagin dualofG. 
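As a concrete illustration of the Fourier–Stieltjes transform of a probability measure mentioned above, the characteristic function of a standard normal random variable can be estimated by Monte Carlo and compared with its closed form e^(−t²/2). This sketch uses the probabilists' convention E[e^(itX)] rather than the e^(−i2πξx) kernel, and the sample size and evaluation point t = 1.3 are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)        # draws of a standard normal random variable X
t = 1.3

empirical = np.mean(np.exp(1j * t * samples))   # Monte Carlo estimate of E[exp(i t X)]
exact = np.exp(-t**2 / 2)                       # characteristic function of N(0, 1)

print(empirical, exact)                         # close agreement, up to Monte Carlo error of order 1/sqrt(10^6)

Returning to the general locally compact abelian group G, its Haar measure μ and its Pontryagin dual introduced above: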
For a functionfinL1(G), its Fourier transform is defined by[57]f^(ξ)=∫Gξ(x)f(x)dμfor anyξ∈G^.{\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case;f̂(ξ)is a function vanishing at infinity onĜ. The Fourier transform onT= R/Zis an example; hereTis a locally compact abelian group, and the Haar measureμonTcan be thought of as the Lebesgue measure on [0,1). Consider the representation ofTon the complex planeCthat is a 1-dimensional complex vector space. There are a group of representations (which are irreducible sinceCis 1-dim){ek:T→GL1(C)=C∗∣k∈Z}{\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}}whereek(x)=ei2πkx{\displaystyle e_{k}(x)=e^{i2\pi kx}}forx∈T{\displaystyle x\in T}. The character of such representation, that is the trace ofek(x){\displaystyle e_{k}(x)}for eachx∈T{\displaystyle x\in T}andk∈Z{\displaystyle k\in Z}, isei2πkx{\displaystyle e^{i2\pi kx}}itself. In the case of representation of finite group, the character table of the groupGare rows of vectors such that each row is the character of one irreducible representation ofG, and these vectors form an orthonormal basis of the space of class functions that map fromGtoCby Schur's lemma. Now the groupTis no longer finite but still compact, and it preserves the orthonormality of character table. Each row of the table is the functionek(x){\displaystyle e_{k}(x)}ofx∈T,{\displaystyle x\in T,}and the inner product between two class functions (all functions being class functions sinceTis abelian)f,g∈L2(T,dμ){\displaystyle f,g\in L^{2}(T,d\mu )}is defined as⟨f,g⟩=1|T|∫[0,1)f(y)g¯(y)dμ(y){\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)}with the normalizing factor|T|=1{\displaystyle |T|=1}. The sequence{ek∣k∈Z}{\displaystyle \{e_{k}\mid k\in Z\}}is an orthonormal basis of the space of class functionsL2(T,dμ){\displaystyle L^{2}(T,d\mu )}. For any representationVof a finite groupG,χv{\displaystyle \chi _{v}}can be expressed as the span∑i⟨χv,χvi⟩χvi{\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}}(Vi{\displaystyle V_{i}}are the irreps ofG), such that⟨χv,χvi⟩=1|G|∑g∈Gχv(g)χ¯vi(g){\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)}. Similarly forG=T{\displaystyle G=T}andf∈L2(T,dμ){\displaystyle f\in L^{2}(T,d\mu )},f(x)=∑k∈Zf^(k)ek{\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}}. The Pontriagin dualT^{\displaystyle {\hat {T}}}is{ek}(k∈Z){\displaystyle \{e_{k}\}(k\in Z)}and forf∈L2(T,dμ){\displaystyle f\in L^{2}(T,d\mu )},f^(k)=1|T|∫[0,1)f(y)e−i2πkydy{\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy}is its Fourier transform forek∈T^{\displaystyle e_{k}\in {\hat {T}}}. The Fourier transform is also a special case ofGelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelianlocally compactHausdorfftopological groupG, as before we consider spaceL1(G), defined using a Haar measure. With convolution as multiplication,L1(G)is an abelianBanach algebra. It also has aninvolution* given byf∗(g)=f(g−1)¯.{\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.} Taking the completion with respect to the largest possiblyC*-norm gives its envelopingC*-algebra, called the groupC*-algebraC*(G)ofG. (AnyC*-norm onL1(G)is bounded by theL1norm, therefore their supremum exists.) 
Given any abelianC*-algebraA, the Gelfand transform gives an isomorphism betweenAandC0(A^), whereA^is the multiplicative linear functionals, i.e. one-dimensional representations, onAwith the weak-* topology. The map is simply given bya↦(φ↦φ(a)){\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}}It turns out that the multiplicative linear functionals ofC*(G), after suitable identification, are exactly the characters ofG, and the Gelfand transform, when restricted to the dense subsetL1(G)is the Fourier–Pontryagin transform. The Fourier transform can also be defined for functions on a non-abelian group, provided that the group iscompact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators.[67]The Fourier transform on compact groups is a major tool inrepresentation theory[68]andnon-commutative harmonic analysis. LetGbe a compactHausdorfftopological group. LetΣdenote the collection of all isomorphism classes of finite-dimensional irreducibleunitary representations, along with a definite choice of representationU(σ)on theHilbert spaceHσof finite dimensiondσfor eachσ∈ Σ. Ifμis a finiteBorel measureonG, then the Fourier–Stieltjes transform ofμis the operator onHσdefined by⟨μ^ξ,η⟩Hσ=∫G⟨U¯g(σ)ξ,η⟩dμ(g){\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)}whereU(σ)is the complex-conjugate representation ofU(σ)acting onHσ. Ifμisabsolutely continuouswith respect to theleft-invariant probability measureλonG,representedasdμ=fdλ{\displaystyle d\mu =f\,d\lambda }for somef∈L1(λ), one identifies the Fourier transform offwith the Fourier–Stieltjes transform ofμ. The mappingμ↦μ^{\displaystyle \mu \mapsto {\hat {\mu }}}defines an isomorphism between theBanach spaceM(G)of finite Borel measures (seerca space) and a closed subspace of the Banach spaceC∞(Σ)consisting of all sequencesE= (Eσ)indexed byΣof (bounded) linear operatorsEσ:Hσ→Hσfor which the norm‖E‖=supσ∈Σ‖Eσ‖{\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|}is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism ofC*-algebrasinto a subspace ofC∞(Σ). Multiplication onM(G)is given byconvolutionof measures and the involution * defined byf∗(g)=f(g−1)¯,{\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},}andC∞(Σ)has a naturalC*-algebra structure as Hilbert space operators. ThePeter–Weyl theoremholds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: iff∈L2(G), thenf(g)=∑σ∈Σdσtr⁡(f^(σ)Ug(σ)){\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)}where the summation is understood as convergent in theL2sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development ofnoncommutative geometry.[citation needed]In this context, a categorical generalization of the Fourier transform to noncommutative groups isTannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. 
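To make the representation-theoretic picture concrete in the simplest case, the following sketch takes the finite (hence compact) abelian group Z/N, whose irreducible representations are the one-dimensional characters χ_k(n) = e^(2πikn/N). With respect to normalised counting (Haar) measure, the Fourier coefficients and the inversion formula then reduce to an ordinary DFT pair; the group size N = 6 is an arbitrary choice:

import numpy as np

N = 6
rng = np.random.default_rng(1)
f = rng.standard_normal(N)                       # a function on the cyclic group Z/N

n = np.arange(N)
chars = np.exp(2j * np.pi * np.outer(n, n) / N)  # character table: chars[k, m] = chi_k(m) = e^(2*pi*i*k*m/N)

f_hat = (chars.conj() @ f) / N                   # Fourier coefficients w.r.t. normalised Haar (counting/N) measure
f_rec = f_hat @ chars                            # inversion: f(m) = sum_k f_hat(k) * chi_k(m)

print(np.allclose(f_rec, f))                     # True

For a non-abelian compact group the characters are replaced by matrix-valued representations, and the inversion sum acquires the dimension factors dσ and traces, exactly as in the Peter–Weyl formula above.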
Insignal processingterms, a function (of time) is a representation of a signal with perfecttime resolution, but no frequency information, while the Fourier transform has perfectfrequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), andstanding wavesare not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notablytransients, or any signal of finite extent. As alternatives to the Fourier transform, intime–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as theshort-time Fourier transform,fractional Fourier transform, Synchrosqueezing Fourier transform,[69]or other functions to represent signals, as inwavelet transformsandchirplet transforms, with the wavelet analog of the (continuous) Fourier transform being thecontinuous wavelet transform.[29] The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the functionf(t)=cos⁡(2π3t)e−πt2,{\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},}which is a 3Hzcosine wave (the first term) shaped by aGaussianenvelope function(the second term) that smoothly turns the wave on and off. The next 2 images show the productf(t)e−i2π3t,{\displaystyle f(t)e^{-i2\pi 3t},}which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs off(t){\displaystyle f(t)}andRe⁡(e−i2π3t){\displaystyle \operatorname {Re} (e^{-i2\pi 3t})}oscillate at the same rate and in phase, whereasf(t){\displaystyle f(t)}andIm⁡(e−i2π3t){\displaystyle \operatorname {Im} (e^{-i2\pi 3t})}oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. However, when you try to measure a frequency that is not present, both the real and imaginary component of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a functionf(t).{\displaystyle f(t).} To re-enforce an earlier point, the reason for the response atξ=−3{\displaystyle \xi =-3}Hz  is becausecos⁡(2π3t){\displaystyle \cos(2\pi 3t)}andcos⁡(2π(−3)t){\displaystyle \cos(2\pi (-3)t)}are indistinguishable. 
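The computation behind the figures referred to above can be reproduced numerically; the sketch below evaluates the Fourier integral of the windowed 3 Hz cosine at the two probe frequencies discussed, with grid limits and resolution chosen arbitrarily for illustration:

import numpy as np

t = np.linspace(-8.0, 8.0, 160_001)
dt = t[1] - t[0]
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)   # the 3 Hz cosine under a Gaussian envelope

def probe(xi):
    # plain numerical evaluation of the Fourier integral at a single frequency xi
    return np.sum(f * np.exp(-1j * 2 * np.pi * xi * t)) * dt

print(abs(probe(3.0)))   # approx 0.5: a large response, the 3 Hz component is present
print(abs(probe(5.0)))   # approx 0:   essentially no 5 Hz component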
The transform ofei2π3t⋅e−πt2{\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}}would have just one response, whose amplitude is the integral of the smooth envelope:e−πt2,{\displaystyle e^{-\pi t^{2}},}whereasRe⁡(f(t)⋅e−i2π3t){\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})}ise−πt2(1+cos⁡(2π6t))/2.{\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation ofdifferentiationin the time domain corresponds to multiplication by the frequency,[note 6]so somedifferential equationsare easier to analyze in the frequency domain. Also,convolutionin the time domain corresponds to ordinary multiplication in the frequency domain (seeConvolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain.Harmonic analysisis the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. Perhaps the most important use of the Fourier transformation is to solvepartial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is∂2y(x,t)∂2x=∂y(x,t)∂t.{\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial ^{2}x}}={\frac {\partial y(x,t)}{\partial t}}.}The example we will give, a slightly more difficult one, is the wave equation in one dimension,∂2y(x,t)∂2x=∂2y(x,t)∂2t.{\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial ^{2}x}}={\frac {\partial ^{2}y(x,t)}{\partial ^{2}t}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions"y(x,0)=f(x),∂y(x,0)∂t=g(x).{\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here,fandgare given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutionsywhich satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transformŷof the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. Afterŷis determined, we can apply the inverse Fourier transformation to findy. Fourier's method is as follows. First, note that any function of the formscos⁡(2πξ(x±t))orsin⁡(2πξ(x±t)){\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}}satisfies the wave equation. These are called the elementary solutions. 
Second, note that therefore any integraly(x,t)=∫0∞dξ[a+(ξ)cos⁡(2πξ(x+t))+a−(ξ)cos⁡(2πξ(x−t))+b+(ξ)sin⁡(2πξ(x+t))+b−(ξ)sin⁡(2πξ(x−t))]{\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}}satisfies the wave equation for arbitrarya+,a−,b+,b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform ofa±andb±in the variablex. The third step is to examine how to find the specific unknown coefficient functionsa±andb±that will lead toysatisfying the boundary conditions. We are interested in the values of these solutions att= 0. So we will sett= 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variablex) of both sides and obtain2∫−∞∞y(x,0)cos⁡(2πξx)dx=a++a−{\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}}and2∫−∞∞y(x,0)sin⁡(2πξx)dx=b++b−.{\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative ofywith respect totand then applying the Fourier sine and cosine transformations yields2∫−∞∞∂y(u,0)∂tsin⁡(2πξx)dx=(2πξ)(−a++a−){\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(u,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)}and2∫−∞∞∂y(u,0)∂tcos⁡(2πξx)dx=(2πξ)(b+−b−).{\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(u,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknownsa±andb±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized byξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameterξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functionsfandg. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functionsa±andb±in terms of the given boundary conditionsfandg. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in bothxandtrather than operate as Fourier did, who only transformed in the spatial variables. Note thatŷmust be considered in the sense of a distribution sincey(x,t)is not going to beL1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation inxto multiplication byi2πξand differentiation with respect totto multiplication byi2πfwherefis the frequency. 
Then the wave equation becomes an algebraic equation inŷ:ξ2y^(ξ,f)=f2y^(ξ,f).{\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).}This is equivalent to requiringŷ(ξ,f) = 0unlessξ= ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviouslyf̂=δ(ξ±f)will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conicξ2−f2= 0. We may as well consider the distributions supported on the conic that are given by distributions of one variable on the lineξ=fplus distributions on the lineξ= −fas follows: ifΦis any test function,∬y^ϕ(ξ,f)dξdf=∫s+ϕ(ξ,ξ)dξ+∫s−ϕ(ξ,−ξ)dξ,{\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,}wheres+, ands−, are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (putΦ(ξ,f) =ei2π(xξ+tf), which is clearly of polynomial growth):y(x,0)=∫{s+(ξ)+s−(ξ)}ei2πξx+0dξ{\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x+0}\,d\xi }and∂y(x,0)∂t=∫{s+(ξ)−s−(ξ)}i2πξei2πξx+0dξ.{\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x+0}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variablexto these functions ofxyields two equations in the two unknown distributionss±(which can be taken to be ordinary functions if the boundary conditions areL1orL2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. The Fourier transform is also used innuclear magnetic resonance(NMR) and in other kinds ofspectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used inmagnetic resonance imaging(MRI) andmass spectrometry. The Fourier transform is useful inquantum mechanicsin at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs ofcomplementary variables, connected by theHeisenberg uncertainty principle. For example, in one dimension, the spatial variableqof, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentumpof the particle. 
Therefore, the physical state of the particle can either be described by a function, called "the wave function", ofqor by a function ofpbut not by a function of both variables. The variablepis called the conjugate variable toq. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to bothpandqsimultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with ap-axis and aq-axis called thephase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, theq-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing thep-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such thatϕ(p)=∫dqψ(q)e−ipq/h,{\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},}or, equivalently,ψ(q)=∫dpϕ(p)eipq/h.{\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states areL2, and so by thePlancherel theorem, their Fourier transforms are alsoL2. (Note that sinceqis in units of distance andpis in units of momentum, the presence of the Planck constant in the exponent makes the exponentdimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberguncertainty principle. The other use of the Fourier transform in both quantum mechanics andquantum field theoryis to solve the applicable wave equation. In non-relativistic quantum mechanics,Schrödinger's equationfor a time-varying wave function in one-dimension, not subject to external forces, is−∂2∂x2ψ(x,t)=ih2π∂∂tψ(x,t).{\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary uniti. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy functionV(x), the equation becomes−∂2∂x2ψ(x,t)+V(x)ψ(x,t)=ih2π∂∂tψ(x,t).{\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution ofψgiven its values fort= 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function is not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, Schrödinger's equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. 
A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units,(∂2∂x2+1)ψ(x,t)=∂2∂t2ψ(x,t).{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, thenumber operatorof thequantum harmonic oscillatorcan be interpreted, for example via theMehler kernel, as thegeneratorof theFourier transformF{\displaystyle {\mathcal {F}}}.[32] The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation functionRof a functionfis defined byRf(τ)=limT→∞12T∫−TTf(t)f(t+τ)dt.{\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lagτelapsing between the values offto be correlated. For most functionsfthat occur in practice,Ris a bounded even function of the time-lagτand for typical noisy signals it turns out to be uniformly continuous with a maximum atτ= 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values offseparated by a time lag. This is a way of searching for the correlation offwith its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, iff(t)represents the temperature at timet, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform,Pf(ξ)=∫−∞∞Rf(τ)e−i2πξτdτ.{\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called thepower spectral densityfunction off. (Unless all periodic components are first filtered out fromf, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density functionP, measures the amount of variance contributed to the data by the frequencyξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). 
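A minimal numerical sketch of this procedure, assuming NumPy, an arbitrary test signal and a circular, biased autocorrelation estimate used purely for illustration, recovers the dominant frequency of a sinusoid buried in noise from the Fourier transform of its autocorrelation:

import numpy as np

rng = np.random.default_rng(0)
n = 4096
m = np.arange(n)
signal = np.sin(2 * np.pi * 0.05 * m) + rng.standard_normal(n)   # a tone at 0.05 cycles/sample buried in noise

acf = np.fft.ifft(np.abs(np.fft.fft(signal))**2).real / n   # circular (biased) autocorrelation estimate
psd = np.fft.rfft(acf).real                                  # its Fourier transform: a power spectral density estimate
freqs = np.fft.rfftfreq(n)

print(freqs[np.argmax(psd)])    # approx 0.05: the frequency contributing the most power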
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. Other common notations forf^(ξ){\displaystyle {\hat {f}}(\xi )}include:f~(ξ),F(ξ),F(f)(ξ),(Ff)(ξ),F(f),F{f},F(f(t)),F{f(t)}.{\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these:ξ→f,x→t,f→x,f^→X.{\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pairf(x)⟺Ff^(ξ){\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )}can becomex(t)⟺FX(f){\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)} A disadvantage of the capital letter notation is when expressing a transform such asf⋅g^{\displaystyle {\widehat {f\cdot g}}}orf′^,{\displaystyle {\widehat {f'}},}which become the more awkwardF{f⋅g}{\displaystyle {\mathcal {F}}\{f\cdot g\}}andF{f′}.{\displaystyle {\mathcal {F}}\{f'\}.} In some contexts such as particle physics, the same symbolf{\displaystyle f}may be used for both for a function as well as it Fourier transform, with the two only distinguished by theirargumentI.e.f(k1+k2){\displaystyle f(k_{1}+k_{2})}would refer to the Fourier transform because of the momentum argument, whilef(x0+πr→){\displaystyle f(x_{0}+\pi {\vec {r}})}would refer to the original function because of the positional argument. Although tildes may be used as inf~{\displaystyle {\tilde {f}}}to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a moreLorentz invariantform, such asdk~=dk(2π)32ω{\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}}, so care must be taken. Similarly,f^{\displaystyle {\hat {f}}}often denotes theHilbert transformoff{\displaystyle f}. The interpretation of the complex functionf̂(ξ)may be aided by expressing it inpolar coordinateformf^(ξ)=A(ξ)eiφ(ξ){\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}}in terms of the two real functionsA(ξ)andφ(ξ)where:A(ξ)=|f^(ξ)|,{\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,}is theamplitudeandφ(ξ)=arg⁡(f^(ξ)),{\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),}is thephase(seearg function). Then the inverse transform can be written:f(x)=∫−∞∞A(ξ)ei(2πξx+φ(ξ))dξ,{\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,}which is a recombination of all the frequency components off(x). Each component is a complexsinusoidof the forme2πixξwhose amplitude isA(ξ)and whose initialphase angle(atx= 0) isφ(ξ). The Fourier transform may be thought of as a mapping on function spaces. 
This mapping is here denotedFandF(f)is used to denote the Fourier transform of the functionf. This mapping is linear, which means thatFcan also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the functionf) can be used to writeFfinstead ofF(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the valueξfor its variable, and this is denoted either asFf(ξ)or as(Ff)(ξ). Notice that in the former case, it is implicitly understood thatFis applied first tofand then the resulting function is evaluated atξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a functionfand the value offwhen its variable equalsx, denotedf(x). This means that a notation likeF(f(x))formally can be interpreted as the Fourier transform of the values offatx. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example,F(rect⁡(x))=sinc⁡(ξ){\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )}is sometimes used to express that the Fourier transform of arectangular functionis asinc function, orF(f(x+x0))=F(f(x))ei2πx0ξ{\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }}is used to express the shift property of the Fourier transform. Notice, that the last example is only correct under the assumption that the transformed function is a function ofx, not ofx0. As discussed above, thecharacteristic functionof a random variable is the same as theFourier–Stieltjes transformof its distribution measure, but in this context it is typical to take a different convention for the constants. Typically characteristic function is definedE(eit⋅X)=∫eit⋅xdμX(x).{\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2πappears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. The appropriate computation method largely depends how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable,f(x),{\displaystyle f(x),}and functions of a discrete variable (i.e. ordered pairs ofx{\displaystyle x}andf{\displaystyle f}values). For discrete-valuedx,{\displaystyle x,}the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency (ξ{\displaystyle \xi }orω{\displaystyle \omega }). When the sinusoids are harmonically related (i.e. when thex{\displaystyle x}-values are spaced at integer multiples of an interval), the transform is calleddiscrete-time Fourier transform(DTFT). Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described atDiscrete-time Fourier transform § Sampling the DTFT. Thediscrete Fourier transform(DFT), used there, is usually computed by afast Fourier transform(FFT) algorithm. 
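As a sketch of the sampling approach just described, the continuous transform of the Gaussian e^(−πt²), which is its own transform under this article's convention, can be approximated from samples with the FFT. The phase factor accounts for the sampling grid starting at t₀ rather than 0; the grid parameters are arbitrary illustrative choices:

import numpy as np

n, T = 4096, 16.0                               # number of samples and record length (arbitrary choices)
dt = T / n
t = -T / 2 + dt * np.arange(n)                  # samples of t on [-T/2, T/2)
f = np.exp(-np.pi * t**2)

xi = np.fft.fftfreq(n, d=dt)                    # frequencies at which the DFT samples the transform
F_approx = dt * np.exp(-1j * 2 * np.pi * xi * t[0]) * np.fft.fft(f)   # Riemann-sum approximation of the Fourier integral
F_exact = np.exp(-np.pi * xi**2)                # the Gaussian is its own Fourier transform

print(np.max(np.abs(F_approx - F_exact)))       # very small, near machine precision for this well-resolved example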
Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency (ξ or ω).[70] When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems, such as Matlab and Mathematica, that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt)e−πt² one might enter the command integrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha.[note 7] Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired.[71][72][73] The numerical integration approach works on a much broader class of functions than the analytic approach. If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs.[74] The DTFT is a common subcase of this more general situation. The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. The Fourier transforms in these tables may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix).
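The numerical-integration route mentioned above can be sketched as follows (an illustration, not part of the article; it assumes NumPy, a uniform sample spacing, and reuses the cos(6πt)e−πt² example). For data given as ordered pairs, the transform integral reduces to a weighted sum that can be evaluated at any frequency of interest, not just on an FFT grid:

import numpy as np

def fourier_at(xi, x, f, dx):
    # Riemann-sum approximation of the integral of f(x) * exp(-2*pi*i*xi*x) dx
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

dx = 0.01
x = np.arange(-8, 8, dx)
f = np.cos(6 * np.pi * x) * np.exp(-np.pi * x**2)

for xi in (0.0, 3.0, 5.0):                     # arbitrary frequencies, not an FFT grid
    print(xi, abs(fourier_at(xi, x, f, dx)))

The magnitude is largest near ξ = 3, as expected for a Gaussian modulated by cos(2π·3t).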
https://en.wikipedia.org/wiki/Fourier_transform
Alarge memory storage and retrieval neural network(LAMSTAR)[1][2]is a fastdeep learningneural networkof many layers that can use many filters simultaneously. These filters may be nonlinear, stochastic, logic,non-stationary, or even non-analytical. They are biologically motivated and learn continuously. A LAMSTAR neural network may serve as a dynamic neural network in spatial or time domains or both. Its speed is provided byHebbianlink-weights[3]that integrate the various and usually different filters (preprocessing functions) into its many layers and to dynamically rank the significance of the various layers and functions relative to a given learning task. This vaguely imitates biological learning that integrates various preprocessors (cochlea,retina,etc.) and cortexes (auditory,visual,etc.) and their various regions. Its deep learning capability is further enhanced by using inhibition, correlation and by its ability to cope with incomplete data, or "lost" neurons or layers even amidst a task. It is fully transparent due to its link weights. The link-weights allow dynamic determination of innovation and redundancy, and facilitate the ranking of layers, of filters or of individual neurons relative to a task. LAMSTAR has been applied to many domains, including medical[4][5][6]and financial predictions,[7]adaptive filtering of noisy speech in unknown noise,[8]still-image recognition,[9]video image recognition,[10]software security[11]and adaptive control of non-linear systems.[12]LAMSTAR had a much faster learning speed and somewhat lower error rate than a CNN based onReLU-function filters and max pooling, in 20 comparative studies.[13] These applications demonstrate delving into aspects of the data that are hidden from shallow learning networks and the human senses, such as in the cases of predicting onset ofsleep apneaevents,[5]of an electrocardiogram of a fetus as recorded from skin-surface electrodes placed on the mother's abdomen early in pregnancy,[6]of financial prediction[1]or in blind filtering of noisy speech.[8] LAMSTAR was proposed in 1996 and was further developed Graupe and Kordylewski from 1997–2002.[14][15][16]A modified version, known as LAMSTAR 2, was developed by Schneider and Graupe in 2008.[17][18]
https://en.wikipedia.org/wiki/Large_memory_storage_and_retrieval_neural_networks
Inmathematics, ahomogeneous functionis afunction of several variablessuch that the following holds: If each of the function's arguments is multiplied by the samescalar, then the function's value is multiplied by some power of this scalar; the power is called thedegree of homogeneity, or simply thedegree. That is, ifkis an integer, a functionfofnvariables is homogeneous of degreekif for everyx1,…,xn,{\displaystyle x_{1},\ldots ,x_{n},}ands≠0.{\displaystyle s\neq 0.}This is also referred to akth-degreeorkth-orderhomogeneous function. For example, ahomogeneous polynomialof degreekdefines a homogeneous function of degreek. The above definition extends to functions whosedomainandcodomainarevector spacesover afieldF: a functionf:V→W{\displaystyle f:V\to W}between twoF-vector spaces ishomogeneousof degreek{\displaystyle k}if for all nonzeros∈F{\displaystyle s\in F}andv∈V.{\displaystyle v\in V.}This definition is often further generalized to functions whose domain is notV, but aconeinV, that is, a subsetCofVsuch thatv∈C{\displaystyle \mathbf {v} \in C}impliessv∈C{\displaystyle s\mathbf {v} \in C}for every nonzero scalars. In the case offunctions of several real variablesandreal vector spaces, a slightly more general form of homogeneity calledpositive homogeneityis often considered, by requiring only that the above identities hold fors>0,{\displaystyle s>0,}and allowing any real numberkas a degree of homogeneity. Every homogeneous real function ispositively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. Anormover a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is theabsolute valueof real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition ofprojective schemes. The concept of a homogeneous function was originally introduced forfunctions of several real variables. With the definition ofvector spacesat the end of 19th century, the concept has been naturally extended to functions between vector spaces, since atupleof variable values can be considered as acoordinate vector. It is this more general point of view that is described in this article. There are two commonly used definitions. The general one works for vector spaces over arbitraryfields, and is restricted to degrees of homogeneity that areintegers. The second one supposes to work over the field ofreal numbers, or, more generally, over anordered field. This definition restricts to positive values the scaling factor that occurs in the definition, and is therefore calledpositive homogeneity, the qualificativepositivebeing often omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, theabsolute valueand allnormsare positively homogeneous functions that are not homogeneous. The restriction of the scaling factor to real positive values allows also considering homogeneous functions whose degree of homogeneity is any real number. LetVandWbe twovector spacesover afieldF. 
Alinear coneinVis a subsetCofVsuch thatsx∈C{\displaystyle sx\in C}for allx∈C{\displaystyle x\in C}and all nonzeros∈F.{\displaystyle s\in F.} Ahomogeneous functionffromVtoWis apartial functionfromVtoWthat has a linear coneCas itsdomain, and satisfies for someintegerk, everyx∈C,{\displaystyle x\in C,}and every nonzeros∈F.{\displaystyle s\in F.}The integerkis called thedegree of homogeneity, or simply thedegreeoff. A typical example of a homogeneous function of degreekis the function defined by ahomogeneous polynomialof degreek. Therational functiondefined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; itscone of definitionis the linear cone of the points where the value of denominator is not zero. Homogeneous functions play a fundamental role inprojective geometrysince any homogeneous functionffromVtoWdefines a well-defined function between theprojectivizationsofVandW. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomial of the same degree) play an essential role in theProj constructionofprojective schemes. When working over thereal numbers, or more generally over anordered field, it is commonly convenient to considerpositive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzeros" replaced by "s> 0" in the definitions of a linear cone and a homogeneous function. This change allow considering (positively) homogeneous functions with any real number as their degrees, sinceexponentiationwith a positive real base is well defined. Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of theabsolute valuefunction andnorms, which are all positively homogeneous of degree1. They are not homogeneous since|−x|=|x|≠−|x|{\displaystyle |-x|=|x|\neq -|x|}ifx≠0.{\displaystyle x\neq 0.}This remains true in thecomplexcase, since the field of the complex numbersC{\displaystyle \mathbb {C} }and every complex vector space can be considered as real vector spaces. Euler's homogeneous function theoremis a characterization of positively homogeneousdifferentiable functions, which may be considered as thefundamental theorem on homogeneous functions. The functionf(x,y)=x2+y2{\displaystyle f(x,y)=x^{2}+y^{2}}is homogeneous of degree 2:f(tx,ty)=(tx)2+(ty)2=t2(x2+y2)=t2f(x,y).{\displaystyle f(tx,ty)=(tx)^{2}+(ty)^{2}=t^{2}\left(x^{2}+y^{2}\right)=t^{2}f(x,y).} Theabsolute valueof areal numberis a positively homogeneous function of degree1, which is not homogeneous, since|sx|=s|x|{\displaystyle |sx|=s|x|}ifs>0,{\displaystyle s>0,}and|sx|=−s|x|{\displaystyle |sx|=-s|x|}ifs<0.{\displaystyle s<0.} The absolute value of acomplex numberis a positively homogeneous function of degree1{\displaystyle 1}over the real numbers (that is, when considering the complex numbers as avector spaceover the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers. More generally, everynormandseminormis a positively homogeneous function of degree1which is not a homogeneous function. As for the absolute value, if the norm or semi-norm is defined on a vector space over the complex numbers, this vector space has to be considered as vector space over the real number for applying the definition of a positively homogeneous function. 
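The absolute-value example above can be checked in a couple of lines (a sketch using only the standard library; the test values are arbitrary): scaling by a positive factor pulls the factor out unchanged, while scaling by a negative factor shows that plain homogeneity of degree 1 fails.

x, s_pos, s_neg = 2.5, 3.0, -3.0
print(abs(s_pos * x) == s_pos * abs(x))    # True: positively homogeneous of degree 1
print(abs(s_neg * x) == s_neg * abs(x))    # False: not homogeneous over all of R
print(abs(s_neg * x) == -s_neg * abs(x))   # True: |s*x| = -s*|x| when s < 0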
Anylinear mapf:V→W{\displaystyle f:V\to W}betweenvector spacesover afieldFis homogeneous of degree 1, by the definition of linearity:f(αv)=αf(v){\displaystyle f(\alpha \mathbf {v} )=\alpha f(\mathbf {v} )}for allα∈F{\displaystyle \alpha \in {F}}andv∈V.{\displaystyle v\in V.} Similarly, anymultilinear functionf:V1×V2×⋯Vn→W{\displaystyle f:V_{1}\times V_{2}\times \cdots V_{n}\to W}is homogeneous of degreen,{\displaystyle n,}by the definition of multilinearity:f(αv1,…,αvn)=αnf(v1,…,vn){\displaystyle f\left(\alpha \mathbf {v} _{1},\ldots ,\alpha \mathbf {v} _{n}\right)=\alpha ^{n}f(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})}for allα∈F{\displaystyle \alpha \in {F}}andv1∈V1,v2∈V2,…,vn∈Vn.{\displaystyle v_{1}\in V_{1},v_{2}\in V_{2},\ldots ,v_{n}\in V_{n}.} Monomialsinn{\displaystyle n}variables define homogeneous functionsf:Fn→F.{\displaystyle f:\mathbb {F} ^{n}\to \mathbb {F} .}For example,f(x,y,z)=x5y2z3{\displaystyle f(x,y,z)=x^{5}y^{2}z^{3}\,}is homogeneous of degree 10 sincef(αx,αy,αz)=(αx)5(αy)2(αz)3=α10x5y2z3=α10f(x,y,z).{\displaystyle f(\alpha x,\alpha y,\alpha z)=(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3}=\alpha ^{10}x^{5}y^{2}z^{3}=\alpha ^{10}f(x,y,z).\,}The degree is the sum of the exponents on the variables; in this example,10=5+2+3.{\displaystyle 10=5+2+3.} Ahomogeneous polynomialis apolynomialmade up of a sum of monomials of the same degree. For example,x5+2x3y2+9xy4{\displaystyle x^{5}+2x^{3}y^{2}+9xy^{4}}is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions. Given a homogeneous polynomial of degreek{\displaystyle k}with real coefficients that takes only positive values, one gets a positively homogeneous function of degreek/d{\displaystyle k/d}by raising it to the power1/d.{\displaystyle 1/d.}So for example, the following function is positively homogeneous of degree 1 but not homogeneous:(x2+y2+z2)12.{\displaystyle \left(x^{2}+y^{2}+z^{2}\right)^{\frac {1}{2}}.} For every set of weightsw1,…,wn,{\displaystyle w_{1},\dots ,w_{n},}the following functions are positively homogeneous of degree 1, but not homogeneous: Rational functionsformed as the ratio of twohomogeneouspolynomials are homogeneous functions in theirdomain, that is, off of thelinear coneformed by thezerosof the denominator. Thus, iff{\displaystyle f}is homogeneous of degreem{\displaystyle m}andg{\displaystyle g}is homogeneous of degreen,{\displaystyle n,}thenf/g{\displaystyle f/g}is homogeneous of degreem−n{\displaystyle m-n}away from the zeros ofg.{\displaystyle g.} The homogeneousreal functionsof a single variable have the formx↦cxk{\displaystyle x\mapsto cx^{k}}for some constantc. So, theaffine functionx↦x+5,{\displaystyle x\mapsto x+5,}thenatural logarithmx↦ln⁡(x),{\displaystyle x\mapsto \ln(x),}and theexponential functionx↦ex{\displaystyle x\mapsto e^{x}}are not homogeneous. Roughly speaking,Euler's homogeneous function theoremasserts that the positively homogeneous functions of a given degree are exactly the solution of a specificpartial differential equation. 
More precisely: Euler's homogeneous function theorem—Iffis a(partial) functionofnreal variables that is positively homogeneous of degreek, andcontinuously differentiablein some open subset ofRn,{\displaystyle \mathbb {R} ^{n},}then it satisfies in this open set thepartial differential equationkf(x1,…,xn)=∑i=1nxi∂f∂xi(x1,…,xn).{\displaystyle k\,f(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(x_{1},\ldots ,x_{n}).} Conversely, every maximal continuously differentiable solution of this partial differentiable equation is a positively homogeneous function of degreek, defined on a positive cone (here,maximalmeans that the solution cannot be prolongated to a function with a larger domain). For having simpler formulas, we setx=(x1,…,xn).{\displaystyle \mathbf {x} =(x_{1},\ldots ,x_{n}).}The first part results by using thechain rulefor differentiating both sides of the equationf(sx)=skf(x){\displaystyle f(s\mathbf {x} )=s^{k}f(\mathbf {x} )}with respect tos,{\displaystyle s,}and taking the limit of the result whenstends to1. The converse is proved by integrating a simpledifferential equation. Letx{\displaystyle \mathbf {x} }be in the interior of the domain off. Forssufficiently close to1, the functiong(s)=f(sx){\textstyle g(s)=f(s\mathbf {x} )}is well defined. The partial differential equation implies thatsg′(s)=kf(sx)=kg(s).{\displaystyle sg'(s)=kf(s\mathbf {x} )=kg(s).}The solutions of thislinear differential equationhave the formg(s)=g(1)sk.{\displaystyle g(s)=g(1)s^{k}.}Therefore,f(sx)=g(s)=skg(1)=skf(x),{\displaystyle f(s\mathbf {x} )=g(s)=s^{k}g(1)=s^{k}f(\mathbf {x} ),}ifsis sufficiently close to1. If this solution of the partial differential equation would not be defined for all positives, then thefunctional equationwould allow to prolongate the solution, and the partial differential equation implies that this prolongation is unique. So, the domain of a maximal solution of the partial differential equation is a linear cone, and the solution is positively homogeneous of degreek.◻{\displaystyle \square } As a consequence, iff:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is continuously differentiable and homogeneous of degreek,{\displaystyle k,}its first-orderpartial derivatives∂f/∂xi{\displaystyle \partial f/\partial x_{i}}are homogeneous of degreek−1.{\displaystyle k-1.}This results from Euler's theorem by differentiating the partial differential equation with respect to one variable. In the case of a function of a single real variable (n=1{\displaystyle n=1}), the theorem implies that a continuously differentiable and positively homogeneous function of degreekhas the formf(x)=c+xk{\displaystyle f(x)=c_{+}x^{k}}forx>0{\displaystyle x>0}andf(x)=c−xk{\displaystyle f(x)=c_{-}x^{k}}forx<0.{\displaystyle x<0.}The constantsc+{\displaystyle c_{+}}andc−{\displaystyle c_{-}}are not necessarily the same, as it is the case for theabsolute value. 
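Both the scaling definition and Euler's identity above are easy to verify numerically for a concrete case. The sketch below (standard library only; the monomial x⁵y²z³ is the degree-10 example given earlier, and the evaluation points, step size, and tolerances are arbitrary illustrative choices) checks f(αx, αy, αz) = α¹⁰ f(x, y, z) directly and then approximates the sum of xᵢ ∂f/∂xᵢ with central differences:

def f(x, y, z):
    return x**5 * y**2 * z**3          # positively homogeneous of degree k = 10

# Direct check of the scaling property.
x, y, z, alpha = 1.3, 0.7, 2.1, 1.9
scaled = f(alpha * x, alpha * y, alpha * z)
print(abs(scaled - alpha**10 * f(x, y, z)) <= 1e-9 * scaled)   # True

# Numerical check of Euler's identity  k*f = x*df/dx + y*df/dy + z*df/dz.
def partial(g, args, i, h=1e-6):
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

k, pt = 10, (1.2, 0.8, 1.5)
euler_sum = sum(pt[i] * partial(f, pt, i) for i in range(3))
print(abs(euler_sum - k * f(*pt)) <= 1e-4 * abs(k * f(*pt)))   # True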
The substitutionv=y/x{\displaystyle v=y/x}converts theordinary differential equationI(x,y)dydx+J(x,y)=0,{\displaystyle I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,}whereI{\displaystyle I}andJ{\displaystyle J}are homogeneous functions of the same degree, into theseparable differential equationxdvdx=−J(1,v)I(1,v)−v.{\displaystyle x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.} The definitions given above are all specialized cases of the following more general notion of homogeneity in whichX{\displaystyle X}can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of amonoid. LetM{\displaystyle M}be amonoidwith identity element1∈M,{\displaystyle 1\in M,}letX{\displaystyle X}andY{\displaystyle Y}be sets, and suppose that on bothX{\displaystyle X}andY{\displaystyle Y}there are defined monoid actions ofM.{\displaystyle M.}Letk{\displaystyle k}be a non-negative integer and letf:X→Y{\displaystyle f:X\to Y}be a map. Thenf{\displaystyle f}is said to behomogeneous of degreek{\displaystyle k}overM{\displaystyle M}if for everyx∈X{\displaystyle x\in X}andm∈M,{\displaystyle m\in M,}f(mx)=mkf(x).{\displaystyle f(mx)=m^{k}f(x).}If in addition there is a functionM→M,{\displaystyle M\to M,}denoted bym↦|m|,{\displaystyle m\mapsto |m|,}called anabsolute valuethenf{\displaystyle f}is said to beabsolutely homogeneous of degreek{\displaystyle k}overM{\displaystyle M}if for everyx∈X{\displaystyle x\in X}andm∈M,{\displaystyle m\in M,}f(mx)=|m|kf(x).{\displaystyle f(mx)=|m|^{k}f(x).} A function ishomogeneous overM{\displaystyle M}(resp.absolutely homogeneous overM{\displaystyle M}) if it is homogeneous of degree1{\displaystyle 1}overM{\displaystyle M}(resp. absolutely homogeneous of degree1{\displaystyle 1}overM{\displaystyle M}). More generally, it is possible for the symbolsmk{\displaystyle m^{k}}to be defined form∈M{\displaystyle m\in M}withk{\displaystyle k}being something other than an integer (for example, ifM{\displaystyle M}is the real numbers andk{\displaystyle k}is a non-zero real number thenmk{\displaystyle m^{k}}is defined even thoughk{\displaystyle k}is not an integer). If this is the case thenf{\displaystyle f}will be calledhomogeneous of degreek{\displaystyle k}overM{\displaystyle M}if the same equality holds:f(mx)=mkf(x)for everyx∈Xandm∈M.{\displaystyle f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.} The notion of beingabsolutely homogeneous of degreek{\displaystyle k}overM{\displaystyle M}is generalized similarly. A continuous functionf{\displaystyle f}onRn{\displaystyle \mathbb {R} ^{n}}is homogeneous of degreek{\displaystyle k}if and only if∫Rnf(tx)φ(x)dx=tk∫Rnf(x)φ(x)dx{\displaystyle \int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx}for allcompactly supportedtest functionsφ{\displaystyle \varphi }; and nonzero realt.{\displaystyle t.}Equivalently, making achange of variabley=tx,{\displaystyle y=tx,}f{\displaystyle f}is homogeneous of degreek{\displaystyle k}if and only ift−n∫Rnf(y)φ(yt)dy=tk∫Rnf(y)φ(y)dy{\displaystyle t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy}for allt{\displaystyle t}and all test functionsφ.{\displaystyle \varphi .}The last display makes it possible to define homogeneity ofdistributions. 
A distributionS{\displaystyle S}is homogeneous of degreek{\displaystyle k}ift−n⟨S,φ∘μt⟩=tk⟨S,φ⟩{\displaystyle t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle }for all nonzero realt{\displaystyle t}and all test functionsφ.{\displaystyle \varphi .}Here the angle brackets denote the pairing between distributions and test functions, andμt:Rn→Rn{\displaystyle \mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}is the mapping of scalar division by the real numbert.{\displaystyle t.} Letf:X→Y{\displaystyle f:X\to Y}be a map between twovector spacesover a fieldF{\displaystyle \mathbb {F} }(usually thereal numbersR{\displaystyle \mathbb {R} }orcomplex numbersC{\displaystyle \mathbb {C} }). IfS{\displaystyle S}is a set of scalars, such asZ,{\displaystyle \mathbb {Z} ,}[0,∞),{\displaystyle [0,\infty ),}orR{\displaystyle \mathbb {R} }for example, thenf{\displaystyle f}is said to behomogeneous overS{\displaystyle S}iff(sx)=sf(x){\textstyle f(sx)=sf(x)}for everyx∈X{\displaystyle x\in X}and scalars∈S.{\displaystyle s\in S.}For instance, everyadditive mapbetween vector spaces ishomogeneous over the rational numbersS:=Q{\displaystyle S:=\mathbb {Q} }although itmight not behomogeneous over the real numbersS:=R.{\displaystyle S:=\mathbb {R} .} The following commonly encountered special cases and variations of this definition have their own terminology: All of the above definitions can be generalized by replacing the conditionf(rx)=rf(x){\displaystyle f(rx)=rf(x)}withf(rx)=|r|f(x),{\displaystyle f(rx)=|r|f(x),}in which case that definition is prefixed with the word"absolute"or"absolutely."For example, Ifk{\displaystyle k}is a fixed real number then the above definitions can be further generalized by replacing the conditionf(rx)=rf(x){\displaystyle f(rx)=rf(x)}withf(rx)=rkf(x){\displaystyle f(rx)=r^{k}f(x)}(and similarly, by replacingf(rx)=|r|f(x){\displaystyle f(rx)=|r|f(x)}withf(rx)=|r|kf(x){\displaystyle f(rx)=|r|^{k}f(x)}for conditions using the absolute value, etc.), in which case the homogeneity is said to be"of degreek{\displaystyle k}"(where in particular, all of the above definitions are"of degree1{\displaystyle 1}"). For instance, A nonzerocontinuous functionthat is homogeneous of degreek{\displaystyle k}onRn∖{0}{\displaystyle \mathbb {R} ^{n}\backslash \lbrace 0\rbrace }extends continuously toRn{\displaystyle \mathbb {R} ^{n}}if and only ifk>0.{\displaystyle k>0.} Proofs
https://en.wikipedia.org/wiki/Homogeneous_function
Pair programmingis asoftware developmenttechnique in which twoprogrammerswork together at one workstation. One, thedriver, writescodewhile the other, theobserverornavigator,[1]reviewseach line of code as it is typed in. The two programmers switch roles frequently. While reviewing, the observer also considers the "strategic" direction of the work, coming up with ideas for improvements and likely future problems to address. This is intended to free the driver to focus all of their attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide. Pair programming increases theman-hoursrequired to deliver code compared to programmers working individually.[2]However, the resulting code has fewer defects.[3]Along with code development time, other factors like field support costs and quality assurance also figure into the return on investment. Pair programming might theoretically offset these expenses by reducing defects in the programs.[3] In addition to preventing mistakes as they are made, other intangible benefits may exist. For example, the courtesy of rejecting phone calls or other distractions while working together, taking fewer breaks at agreed-upon intervals or sharing breaks to return phone calls (but returning to work quickly since someone is waiting). One member of the team might have more focus and help drive or awaken the other if they lose focus, and that role might periodically change. One member might know about a topic or technique that the other does not, which might eliminate delays to finding or testing a solution, or allow for a better solution, thus effectively expanding the skill set, knowledge, and experience of a programmer as compared to working alone. Each of these intangible benefits, and many more, may be challenging to accurately measure but can contribute to more efficient working hours.[citation needed] A system with two programmers possesses greater potential for the generation of more diverse solutions to problems for three reasons: In an attempt to share goals and plans, the programmers must overtly negotiate a shared course of action when a conflict arises between them. In doing so, they consider a larger number of ways of solving the problem than a single programmer alone might do. This significantly improves the design quality of the program as it reduces the chances of selecting a poor method.[4] In an online survey of pair programmers from 2000, 96% of programmers stated that they enjoyed working more while pair programming than programming alone. Furthermore, 95% said that they were more confident in their work when they pair programmed. However, as the survey was among self-selected pair programmers, it did not account for programmers who were forced to pair program.[5] Knowledge is constantly shared between pair programmers, whether in the industry or in a classroom. 
Many sources suggest that students show higher confidence when programming in pairs,[5]and many learn whether it be from tips on programming language rules to overall design skills.[6]In "promiscuous pairing", each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which causes knowledge of the system to spread throughout the whole team.[3]Pair programming allows programmers to examine their partner's code and provide feedback, which is necessary to increase their own ability to develop monitoring mechanisms for their own learning activities.[6] Pair programming allows team members to share quickly, making them less likely to have agendas hidden from each other. This helps pair programmers learn to communicate more easily. "This raises the communication bandwidth and frequency within the project, increasing overall information flow within the team."[3] There are both empirical studies and meta-analyses of pair programming. The empirical studies tend to examine the level of productivity and the quality of the code, while meta-analyses may focus on biases introduced by the process of testing and publishing. Ameta-analysisfound pairs typically consider more design alternatives than programmers working alone, arrive at simpler, more maintainable designs, and catch design defects earlier. However, it raised concerns that its findings may have been influenced by "signs ofpublication biasamong published studies on pair programming." It concluded that "pair programming is not uniformly beneficial or effective."[7] Although pair programmers may complete a task faster than a solo programmer, the total number ofman-hoursincreases.[2]A manager would have to balance faster completion of the work and reduced testing and debugging time against the higher cost of coding. The relative weight of these factors can vary by project and task. The benefit of pairing is greatest on tasks that the programmers do not fully understand before they begin: that is, challenging tasks that call for creativity and sophistication, and for novices as compared to experts.[2]Pair programming could be helpful for attaining high quality and correctness on complex programming tasks, but it would also increase the development effort (cost) significantly.[7] On simple tasks, which the pair already fully understands, pairing results in a net drop in productivity.[2][8]It may reduce the code development time but also risks reducing the quality of the program.[7]Productivity can also drop when novice–novice pairing is used without sufficient availability of a mentor to coach them.[9] A study of programmers using AI assistance tools such asGitHub Copilotfound that while some programmers conceived of AI assistance as similar to pair programming, in practice the use of such tools is very different in terms of the programmer experience, with the human programmer having to transition repeatedly between driver and navigator roles.[10] There are indicators that a pair is not performing well:[opinion] Remote pair programming, also known asvirtual pair programmingordistributed pair programming, is pair programming in which the two programmers are in different locations,[12]working via acollaborative real-time editor, shared desktop, or a remote pair programmingIDEplugin. 
Remote pairing introduces difficulties not present in face-to-face pairing, such as extra delays for coordination, depending more on "heavyweight" task-tracking tools instead of "lightweight" ones like index cards, and loss of verbal communication resulting in confusion and conflicts over such things as who "has the keyboard".[13] Tool support could be provided by:
https://en.wikipedia.org/wiki/Pair_programming
Password strengthis a measure of the effectiveness of apasswordagainst guessing orbrute-force attacks. In its usual form, it estimates how many trials an attacker who does not have direct access to the password would need, on average, to guess it correctly. The strength of a password is a function of length, complexity, and unpredictability.[1] Using strong passwords lowers the overallriskof a security breach, but strong passwords do not replace the need for other effectivesecurity controls.[2]The effectiveness of a password of a given strength is strongly determined by the design and implementation of theauthentication factors(knowledge, ownership, inherence). The first factor is the main focus of this article. The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g. three) of failed password entry attempts. In the absence of othervulnerabilities, such systems can be effectively secured with relatively simple passwords. However, the system store information about the user's passwords in some form and if that information is stolen, say by breaching system security, the user's passwords can be at risk. In 2019, the United Kingdom'sNCSCanalyzed public databases of breached accounts to see which words, phrases, and strings people used. The most popular password on the list was 123456, appearing in more than 23 million passwords. The second-most popular string, 123456789, was not much harder to crack, while the top five included "qwerty", "password", and 1111111.[3] Passwords are created either automatically (using randomizing equipment) or by a human; the latter case is more common. While the strength of randomly chosen passwords against abrute-force attackcan be calculated with precision, determining the strength of human-generated passwords is difficult. Typically, humans are asked to choose a password, sometimes guided by suggestions or restricted by a set of rules, when creating a new account for a computer system or internet website. Only rough estimates of strength are possible since humans tend to follow patterns in such tasks, and those patterns can usually assist an attacker.[4]In addition, lists of commonly chosen passwords are widely available for use by password-guessing programs. Such lists include the numerous online dictionaries for various human languages, breached databases ofplaintextandhashedpasswords from various online business and social accounts, along with other common passwords. All items in such lists are considered weak, as are passwords that are simple modifications of them. Although random password generation programs are available and are intended to be easy to use, they usually generate random, hard-to-remember passwords, often resulting in people preferring to choose their own. However, this is inherently insecure because the person's lifestyle, entertainment preferences, and other key individualistic qualities usually come into play to influence the choice of password, while the prevalence of onlinesocial mediahas made obtaining information about people much easier. Systems that use passwords forauthenticationmust have some way to check any password entered to gain access. 
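A minimal sketch of one common way to implement that check, storing only a salted and key-stretched hash of each password as discussed in the following passage, is shown here (standard library only; the PBKDF2 iteration count and 16-byte salt length are illustrative assumptions, not recommendations from the article):

import hashlib
import hmac
import os

ITERATIONS = 600_000                    # illustrative key-stretching work factor

def store(password: str):
    salt = os.urandom(16)               # random per-password salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                 # store the salt and hash, never the password itself

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("example passphrase")
print(verify("example passphrase", salt, digest))   # True
print(verify("123456", salt, digest))               # False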
If the valid passwords are simply stored in a system file or database, an attacker who gains sufficient access to the system will obtain all user passwords, giving the attacker access to all accounts on the attacked system and possibly other systems where users employ the same or similar passwords. One way to reduce this risk is to store only acryptographic hashof each password instead of the password itself. Standard cryptographic hashes, such as theSecure Hash Algorithm(SHA) series, are very hard to reverse, so an attacker who gets hold of the hash value cannot directly recover the password. However, knowledge of the hash value lets the attacker quickly test guesses offline.Password crackingprograms are widely available that will test a large number of trial passwords against a purloined cryptographic hash. Improvements in computing technology keep increasing the rate at which guessed passwords can be tested. For example, in 2010, theGeorgia Tech Research Institutedeveloped a method of usingGPGPUto crack passwords much faster.[5]Elcomsoftinvented the usage of common graphic cards for quicker password recovery in August 2007 and soon filed a corresponding patent in the US.[6]By 2011, commercial products were available that claimed the ability to test up to 112,000 passwords per second on a standard desktop computer, using a high-end graphics processor for that time.[7]Such a device will crack a six-letter single-case password in one day. The work can be distributed over many computers for an additional speedup proportional to the number of available computers with comparable GPUs. Specialkey stretchinghashes are available that take a relatively long time to compute, reducing the rate at which guessing can take place. Although it is considered best practice to use key stretching, many common systems do not. Another situation where quick guessing is possible is when the password is used to form acryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data. For example, one commercial product claims to test 103,000WPAPSK passwords per second.[8] If a password system only stores the hash of the password, an attacker can pre-compute hash values for common password variants and all passwords shorter than a certain length, allowing very rapid recovery of the password once its hash is obtained. Very long lists of pre-computed password hashes can be efficiently stored usingrainbow tables. This method of attack can be foiled by storing a random value, called acryptographic salt, along with the hash. The salt is combined with the password when computing the hash, so an attacker precomputing a rainbow table would have to store for each password its hash with every possible salt value. This becomes infeasible if the salt has a big enough range, say a 32-bit number. Many authentication systems in common use do not employ salts and rainbow tables are available on the Internet for several such systems. Password strength is specified by the amount ofinformation entropy, which is measured inshannon(Sh) and is a concept frominformation theory. It can be regarded as the minimum number ofbitsnecessary to hold the information in a password of a given type. A related measure is thebase-2 logarithmof the number of guesses needed to find the password with certainty, which is commonly referred to as the "bits of entropy".[9]A password with 42 bits of entropy would be as strong as a string of 42 bits chosen randomly, for example by afair cointoss. 
Put another way, a password with 42 bits of entropy would require 2⁴² (4,398,046,511,104) attempts to exhaust all possibilities during a brute force search. Thus, increasing the entropy of the password by one bit doubles the number of guesses required, making an attacker's task twice as difficult. On average, an attacker will have to try half the possible number of passwords before finding the correct one.[4] Random passwords consist of a string of symbols of specified length taken from some set of symbols using a random selection process in which each symbol is equally likely to be selected. The symbols can be individual characters from a character set (e.g., the ASCII character set), syllables designed to form pronounceable passwords or even words from a word list (thus forming a passphrase). The strength of random passwords depends on the actual entropy of the underlying number generator; however, these are often not truly random, but pseudorandom. Many publicly available password generators use random number generators found in programming libraries that offer limited entropy. However, most modern operating systems offer cryptographically strong random number generators that are suitable for password generation. It is also possible to use ordinary dice to generate random passwords (see Random password generator § Stronger methods). Random password programs often can ensure that the resulting password complies with a local password policy; for instance, by always producing a mix of letters, numbers, and special characters. For passwords generated by a process that randomly selects a string of symbols of length L from a set of N possible symbols, the number of possible passwords can be found by raising the number of symbols to the power L, i.e. N^L. Increasing either L or N will strengthen the generated password. The strength of a random password as measured by the information entropy is just the base-2 logarithm or log2 of the number of possible passwords, assuming each symbol in the password is produced independently. Thus a random password's information entropy, H, is given by the formula: {\displaystyle H=\log _{2}N^{L}=L\log _{2}N=L{\log N \over \log 2}} where N is the number of possible symbols and L is the number of symbols in the password. H is measured in bits.[4][10] In the last expression, log can be to any base. A binary byte is usually expressed using two hexadecimal characters. To find the length, L, needed to achieve a desired strength H, with a password drawn randomly from a set of N symbols, one computes: {\displaystyle L=\left\lceil {\frac {H}{\log _{2}N}}\right\rceil } where ⌈ ⌉ denotes the mathematical ceiling function, i.e. rounding up to the next largest whole number. The following table uses this formula to show the required lengths of truly randomly generated passwords to achieve desired password entropies for common symbol sets: People are notoriously poor at achieving sufficient entropy to produce satisfactory passwords. According to one study involving half a million users, the average password entropy was estimated at 40.54 bits.[11] Thus, in one analysis of over 3 million eight-character passwords, the letter "e" was used over 1.5 million times, while the letter "f" was used only 250,000 times. A uniform distribution would have had each character being used about 900,000 times.
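The two formulas above translate directly into code. The sketch below (standard library only; the 62-symbol alphabet and the 80-bit target are example choices made for illustration, not recommendations) computes the entropy of a random password, the length needed for a target entropy, and generates such a password with a cryptographically strong source:

import math
import secrets
import string

def entropy_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)                   # H = L * log2(N)

def length_needed(target_bits, alphabet_size):
    return math.ceil(target_bits / math.log2(alphabet_size))   # L = ceil(H / log2(N))

alphabet = string.ascii_letters + string.digits                 # N = 62
print(entropy_bits(8, len(alphabet)))                           # about 47.6 bits for 8 symbols
print(length_needed(80, len(alphabet)))                         # 14 symbols for at least 80 bits

password = "".join(secrets.choice(alphabet) for _ in range(length_needed(80, len(alphabet))))
print(password)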
The most common number used is "1", whereas the most common letters are a, e, o, and r.[12] Users rarely make full use of larger character sets in forming passwords. For example, hacking results obtained from a MySpace phishing scheme in 2006 revealed 34,000 passwords, of which only 8.3% used mixed case, numbers, and symbols.[13] The full strength associated with using the entire ASCII character set (numerals, mixed case letters, and special characters) is only achieved if each possible password is equally likely. This seems to suggest that all passwords must contain characters from each of several character classes, perhaps upper and lower-case letters, numbers, and non-alphanumeric characters. Such a requirement is a pattern in password choice and can be expected to reduce an attacker's "work factor" (in Claude Shannon's terms). This is a reduction in password "strength". A better requirement would be to require a password not to contain any word in an online dictionary, or list of names, or any license plate pattern from any state (in the US) or country (as in the EU). If patterned choices are required, humans are likely to use them in predictable ways, such as capitalizing a letter, adding one or two numbers, and a special character. This predictability means that the increase in password strength is minor when compared to random passwords. As part of password safety awareness efforts, Google developed Interland to teach children about staying safe online. In the chapter called Tower of Treasure, a game advises players to use unusual names paired with characters such as ₺&@#%.[14] NIST Special Publication 800-63 of June 2004 (revision two) suggested a scheme to approximate the entropy of human-generated passwords:[4] Using this scheme, an eight-character human-selected password without uppercase characters and non-alphabetic characters, or with either but not both of those two character sets, is estimated to have eighteen bits of entropy. The NIST publication concedes that at the time of development, little information was available on the real-world selection of passwords. Later research into human-selected password entropy using newly available real-world data has demonstrated that the NIST scheme does not provide a valid metric for entropy estimation of human-selected passwords.[15] The June 2017 revision of SP 800-63 (Revision three) drops this approach.[16] Because national keyboard implementations vary, not all 94 ASCII printable characters can be used everywhere. This can present a problem to an international traveler who wishes to log into a remote system using a keyboard on a local computer (see the article on keyboard layouts). Many handheld devices, such as tablet computers and smart phones, require complex shift sequences or keyboard app swapping to enter special characters. Authentication programs can vary as to the list of allowable password characters. Some do not recognize case differences (e.g., the upper-case "E" is considered equivalent to the lower-case "e"), and others prohibit some of the other symbols. In the past few decades, systems have permitted more characters in passwords, but limitations still exist. Systems also vary as to the maximum length of passwords allowed. As a practical matter, passwords must be both reasonable and functional for the end user as well as strong enough for the intended purpose.
Passwords that are too difficult to remember may be forgotten and so are more likely to be written on paper, which some consider a security risk.[17]In contrast, others argue that forcing users to remember passwords without assistance can only accommodate weak passwords, and thus poses a greater security risk. According toBruce Schneier, most people are good at securing their wallets or purses, which is a "great place" to store a written password.[18] The minimum number of bits of entropy needed for a password depends on thethreat modelfor the given application. Ifkey stretchingis not used, passwords with more entropy are needed. RFC 4086, "Randomness Requirements for Security", published June 2005, presents some example threat models and how to calculate the entropy desired for each one.[19]Their answers vary between 29 bits of entropy needed if only online attacks are expected, and up to 96 bits of entropy needed for important cryptographic keys used in applications like encryption where the password or key needs to be secure for a long period and stretching isn't applicable. A 2010Georgia Tech Research Institutestudy based on unstretched keys recommended a 12-character random password but as a minimum length requirement.[5][20]It pays to bear in mind that since computing power continually grows, to prevent offline attacks the required number of bits of entropy should also increase over time. The upper end is related to the stringent requirements of choosing keys used in encryption. In 1999,an Electronic Frontier Foundation projectbroke 56-bitDESencryption in less than a day using specially designed hardware.[21]In 2002,distributed.netcracked a 64-bit key in 4 years, 9 months, and 23 days.[22]As of October 12, 2011,distributed.netestimates that cracking a 72-bit key using current hardware will take about 45,579 days or 124.8 years.[23]Due to currently understood limitations from fundamental physics, there is no expectation that anydigital computer(or combination) will be capable of breaking 256-bit encryption via a brute-force attack.[24]Whether or notquantum computerswill be able to do so in practice is still unknown, though theoretical analysis suggests such possibilities.[25] Guidelines for choosing good passwords are typically designed to make passwords harder to discover by intelligent guessing. Common guidelines advocated by proponents of software system security have included:[26][27][28][29][30] Forcing the inclusion of lowercase letters, uppercase letters, numbers, and symbols in passwords was a common policy but has been found to decrease security, by making it easier to crack. Research has shown how predictable the common use of such symbols are, and the US[34]and UK[35]government cyber security departments advise against forcing their inclusion in password policy. Complex symbols also make remembering passwords much harder, which increases writing down, password resets, and password reuse – all of which lower rather than improve password security. The original author of password complexity rules, Bill Burr, has apologized and admits they decrease security, as research has found; this was widely reported in the media in 2017.[36]Online security researchers[37]and consultants are also supportive of the change[38]in best practice advice on passwords. 
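The way entropy requirements scale with the assumed attack, as discussed above, can be made concrete with a short sketch (standard library only; the guess rates are illustrative assumptions, not figures from the text). An attacker must try on the order of 2^H guesses, so each added bit doubles the work:

def exhaust_seconds(entropy_bits, guesses_per_second):
    return 2**entropy_bits / guesses_per_second

YEAR = 365.25 * 86_400
print(exhaust_seconds(29, 10) / YEAR)      # ~1.7 years at 10 online guesses per second
print(exhaust_seconds(29, 1e9))            # ~0.5 seconds offline at a billion guesses per second
print(exhaust_seconds(96, 1e9) / YEAR)     # ~2.5e12 years: why long-lived keys need ~96 bits

This is only an order-of-magnitude calculation, but it shows why roughly 29 bits can be adequate against rate-limited online guessing while offline attacks on unstretched hashes or keys call for far more entropy.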
Some guidelines advise against writing passwords down, while others, noting the large numbers of password-protected systems users must access, encourage writing down passwords as long as the written password lists are kept in a safe place, not attached to a monitor or in an unlocked desk drawer.[39]Use of apassword manageris recommended by the NCSC.[40] The possible character set for a password can be constrained by different websites or by the range of keyboards on which the password must be entered.[41] As with any security measure, passwords vary in strength; some are weaker than others. For example, the difference in strength between a dictionary word and a word with obfuscation (e.g. letters in the password are substituted by, say, numbers — a common approach) may cost a password-cracking device a few more seconds; this adds little strength. The examples below illustrate various ways weak passwords might be constructed, all of which are based on simple patterns which result in extremely low entropy, allowing them to be tested automatically at high speeds.:[12] There are many other ways a password can be weak,[44]corresponding to the strengths of various attack schemes; the core principle is that a password should have high entropy (usually taken to be equivalent to randomness) andnotbe readily derivable by any "clever" pattern, nor should passwords be mixed with information identifying the user. Online services often provide a restore password function that a hacker can figure out and by doing so bypass a password. In the landscape of 2012, as delineated byWilliam Cheswickin an article for ACM magazine, password security predominantly emphasized an alpha-numeric password of eight characters or more. Such a password, it was deduced, could resist ten million attempts per second for a duration of 252 days. However, with the assistance of contemporary GPUs at the time, this period was truncated to just about 9 hours, given a cracking rate of 7 billion attempts per second. A 13-character password was estimated to withstand GPU-computed attempts for over 900,000 years.[45][46] In the context of 2023 hardware technology, the 2012 standard of an eight-character alpha-numeric password has become vulnerable, succumbing in a few hours. The time needed to crack a 13-character password is reduced to a few years. The current emphasis, thus, has shifted. Password strength is now gauged not just by its complexity but its length, with recommendations leaning towards passwords comprising at least 13-16 characters. This era has also seen the rise of Multi-Factor Authentication (MFA) as a crucial fortification measure. The advent and widespread adoption of password managers have further aided users in cultivating and maintaining an array of strong, unique passwords.[47] A password policy is a guide to choosing satisfactory passwords. It is intended to: Previous password policies used to prescribe the characters which passwords must contain, such as numbers, symbols, or upper/lower case. While this is still in use, it has been debunked as less secure by university research,[48]by the original instigator[49]of this policy, and by the cyber security departments (and other related government security bodies[50]) of USA[51]and UK.[52]Password complexity rules of enforced symbols were previously used by major platforms such as Google[53]and Facebook,[54]but these have removed the requirement following the discovery that they actually reduced security. 
This is because the human element is a far greater risk than cracking, and enforced complexity leads most users to highly predictable patterns (number at the end, swap 3 for E, etc.) which helps crack passwords. So password simplicity and length (passphrases) are the new best practice and complexity is discouraged. Forced complexity rules also increase support costs, and user friction and discourage user signups. Password expiration was in some older password policies but has been debunked[36]as best practice and is not supported by USA or UK governments, or Microsoft which removed[55]the password expiry feature. Password expiration was previously trying to serve two purposes:[56] However, password expiration has its drawbacks:[57][58] The hardest passwords to crack, for a given length and character set, are random character strings; if long enough they resist brute force attacks (because there are many characters) and guessing attacks (due to high entropy). However, such passwords are typically the hardest to remember. The imposition of a requirement for such passwords in a password policy may encourage users to write them down, store them inmobile devices, or share them with others as a safeguard against memory failure. While some people consider each of these user resorts to increase security risks, others suggest the absurdity of expecting users to remember distinct complex passwords for each of the dozens of accounts they access. For example, in 2005, security expertBruce Schneierrecommended writing down one's password: Simply, people can no longer remember passwords good enough to reliably defend against dictionary attacks, and are much more secure if they choose a password too complicated to remember and then write it down. We're all good at securing small pieces of paper. I recommend that people write their passwords down on a small piece of paper, and keep it with their other valuable small pieces of paper: in their wallet.[39] The following measures may increase acceptance of strong password requirements if carefully used: Password policies sometimes suggestmemory techniquesto assist remembering passwords: A reasonable compromise for using large numbers of passwords is to record them in a password manager program, which include stand-alone applications, web browser extensions, or a manager built into the operating system. A password manager allows the user to use hundreds of different passwords, and only have to remember a single password, the one which opens the encrypted password database.[65]Needless to say, this single password should be strong and well-protected (not recorded anywhere). Most password managers can automatically create strong passwords using a cryptographically securerandom password generator, as well as calculating the entropy of the generated password. A good password manager will provide resistance against attacks such askey logging, clipboard logging and various other memory spying techniques.
https://en.wikipedia.org/wiki/Password_strength#Dictionary_attack
In a legal dispute, one party has theburden of proofto show that they are correct, while the other party has no such burden and is presumed to be correct. The burden of proof requires a party to produce evidence to establish the truth of facts needed to satisfy all the required legal elements of the dispute. It is also known as theonus of proof. The burden of proof is usually on the person who brings a claim in a dispute. It is often associated with the Latinmaximsemper necessitas probandi incumbit ei qui agit, a translation of which is: "the necessity of proof always lies with the person who lays charges."[1]In civil suits, for example, the plaintiff bears the burden of proof that the defendant's action or inaction caused injury to the plaintiff, and the defendant bears the burden of proving anaffirmative defense. The burden of proof is on the prosecutor forcriminal cases, and thedefendantispresumed innocent. If the claimant fails to discharge the burden of proof to prove their case, the claim will be dismissed. A "burden of proof" is a party's duty to prove a disputed assertion or charge, and includes the burden of production (providing enough evidence on an issue so that the trier-of-fact decides it rather than in a peremptory ruling like a directed verdict) and the burden of persuasion (standard of proof such as preponderance of the evidence).[2][3] A "burden of persuasion" or "risk of non-persuasion"[4]is an obligation that remains on a single party for the duration of the court proceeding.[5]Once the burden has been entirely discharged to the satisfaction of thetrier of fact, the party carrying the burden will succeed in its claim. For example, thepresumption of innocencein a criminal case places a legal burden upon the prosecution to prove all elements of the offense (generally beyond a reasonable doubt), and to disprove all the defenses except foraffirmative defensesin which the proof of non-existence of all affirmative defense(s) is not constitutionally required of the prosecution.[6] The burden of persuasion should not be confused with theevidential burden, or burden of production, or duty of producing (or going forward with evidence)[7]which is an obligation that may shift between parties over the course of the hearing or trial. The evidential burden is the burden to adduce sufficient evidence to properly raise an issue at court. There is no burden of proof with regard to motive or animus in criminal cases in the United States. The intent surrounding an offense is nevertheless crucial to the elements of the offense in a first-degree-murder conviction.[8]This brings up the ethical dilemma of whether or not a death sentence should be imposed when the defendant's motives or intentions are the contingent factors in sentencing. However, in some cases such as defamation suits with a public figure as the defamed party, the public figure must prove actual malice. Burden of proof refers most generally to the obligation of a party to prove its allegations at trial. In a civil case, the plaintiff sets forth its allegations in a complaint, petition or other pleading. The defendant is then required to file a responsive pleading denying some or all of the allegations and setting forth anyaffirmative facts in defense. Each party has the burden of proof of its allegations. PerSuperintendent v. 
Hill(1985), in order to take away a prisoner'sgood conduct timefor a disciplinary violation, prison officials need only have "some evidence", i.e., "a modicum of evidence"; however, the sentencing judge is under no obligation to adhere to good/work time constraints, nor are they required to credit time served.[9] "Reasonable indication (also known as reasonable suspicion) is substantially lower than probable cause; factors to consider are those facts and circumstances a prudent investigator would consider, but must include facts or circumstances indicating a past, current, or impending violation; an objective factual basis must be present, a mere 'hunch' is insufficient."[10] The reasonable indication standard is used in interpreting trade law in determining if the United States has been materially injured.[11] Reasonable suspicion is a low standard of proof to determine whether abriefinvestigative stop or search by a police officer or any government agent is warranted. This stop or search must be brief; its thoroughness is proportional to, and limited by, the low standard of evidence. A more definite standard of proof (oftenprobable cause) would be required to justify a more thorough stop/search. InTerry v. Ohio,392U.S.1(1968), theSupreme Courtruled that reasonable suspicion requires specific, articulable, and individualized suspicion that crime is afoot. A mere guess or "hunch" is not enough to constitute reasonable suspicion.[12] An investigatory stop is a seizure under theFourth Amendment.[12]The state must justify the seizure by showing that the officer conducting the stop had a reasonable articulable suspicion that criminal activity was afoot.[12]The important point is that officers cannot deprive a citizen of liberty unless the officer can point to specific facts and circumstances and inferences therefrom that would amount to a reasonable suspicion.[12]The officer must be prepared to establish that criminal activity was a logical explanation for what they perceived. The requirement serves to prevent officers from stopping individuals based merely on hunches or unfounded suspicions.[12]The purpose of the stop and detention is to investigate to the extent necessary to confirm or dispel the original suspicion.[12]If the initial confrontation with the person stopped dispels suspicion of criminal activity the officer must end the detention and allow the person to go about their business.[12]If the investigation confirms the officer's initial suspicion or reveals evidence that would justify continued detention the officer may require the person detained to remain at the scene until further investigation is complete, and may give rise to the level of probable cause.[12] InArizona v. Gant(2009), the United States Supreme Court arguably defined a new standard, that of "reasonable to believe". This standard applies only to vehicle searches after the suspect has been placed under arrest. The Court overruledNew York v. Belton(1981) and concluded that police officers are allowed to go back and search a vehicle incident to a suspect's arrest only where it is "reasonable to believe" that there is more evidence in the vehicle of the crime for which the suspect was arrested. There is still an ongoing debate as to the exact meaning of this phrase. Some courts have said it should be a new standard while others have equated it with the "reasonable suspicion" of theTerrystop. Most courts have agreed it is somewhere less than probable cause. 
Probable cause is a higher standard of proof than reasonable suspicion, which is used in the United States to determine whether a search, or an arrest, is unreasonable. It is also used bygrand juriesto determine whether to issue anindictment. In the civil context, this standard is often used where plaintiffs are seeking a prejudgementremedy. In the criminal context, the U.S. Supreme Court inUnited States v. Sokolow,490U.S.1(1989), determined that probable cause requires "a fairprobabilitythat contraband or evidence of a crime will be found". The primary issue was whetherDrug Enforcement Administrationagents had a reason to execute a search. Courts have traditionally interpreted the idea of "a fair probability" as meaning whether a fair-minded evaluator would have reason to find it more likely than not that a fact (or ultimate fact) is true, which is quantified as a 51% certainty standard (using whole numbers as the increment of measurement). Some courts and scholars have suggested probable cause could, in some circumstances, allow for a fact to be established as true to a standard of less than 51%,[13]but as of August 2019, the United States Supreme Court has never ruled that the quantification of probable cause is anything less than 51%. Probable cause can be contrasted with "reasonable articulable suspicion" which requires a police officer to have an unquantified amount of certainty the courts say is well below 51% before briefly detaining a suspect (without consent) to pat them down and attempt to question them.[12]The "beyond reasonable doubt" standard, used by criminal juries in the United States to determine guilt for a crime, also contrasts with probable cause which courts hold requires an unquantified level of proof well above that of probable cause's 51%.[citation needed]Though it is beyond the scope of this topic, when courts review whether 51% probable cause certainty was a reasonable judgment, the legal inquiry is different for police officers in the field than it would be for grand jurors. InFranks v. Delaware(1978), the U.S. Supreme Court held that probable cause requires that there not be "reckless disregard for the truth" of the facts asserted.[14] Examples of a police officer's truth-certainty standards in the field and their practical consequences are offered below: Some credible evidence is one of the least demanding standards of proof. This proof standard is often used in administrative law settings and in some states to initiateChild Protective Services(CPS) proceedings. This proof standard is used where short-term intervention is needed urgently, such as when a child is arguably in immediate danger from a parent or guardian. The "some credible evidence" standard is used as a legal placeholder to bring some controversy before a trier of fact, and into a legal process. It is on the order of the factual standard of proof needed to achieve a finding of "probable cause" used inex partethreshold determinations needed before a court will issue a search warrant.[citation needed]It is a lower standard of proof than the "preponderance of the evidence" standard. The standard does not require the fact-finder to weigh conflicting evidence, and merely requires the investigator or prosecutor to present the bare minimum of material credible evidence to support the allegations against the subject, or in support of the allegation; seeValmonte v. Bane,18 F.3d 992 (2nd Cir. 1994). 
In some Federal Appellate Circuit Courts, such as the Second Circuit, the "some credible evidence" standard has been found constitutionally insufficient to protect liberty interests of the parties in controversy at CPS hearings.[citation needed] Preponderance of the evidence (American English), also known as balance of probabilities (British English), is the standard required in civil cases, includingfamily courtdeterminations solely involving money, such aschild supportunder theChild Support Standards Act, and inchild custodydeterminations between parties having equal legal rights respecting a child. It is also the standard of proof by which the defendant must proveaffirmative defensesormitigating circumstancesin civil or criminal court in the United States. In civil courts,aggravating circumstancesalso only have to be proven by a preponderance of the evidence, as opposed to beyond reasonable doubt (as in criminal court). The standard is met if the proposition is more likely to be true than not true. Another high-level way of interpreting that is that the plaintiff's case (evidence) be 51% likely. A more precise statement is that "the weight [of the evidence, including in calculating such a percentage] is determined not by the amount of evidence, but by its quality."[15]The author goes on to affirm that preponderance is "merely enough to tip the scales" towards one party; however, that tilt need only be so slight as the weight of a "feather." Until 1970, it was also the standard used in juvenile court in theUnited States.[16]Compared to the criminal standard of "proof beyond a reasonable doubt," the preponderance of the evidence standard is "a somewhat easier standard to meet."[15] Preponderance of the evidence is also the standard of proof used inUnited States administrative law. In at least one case, there is a statutory definition of the standard. While there is no federal definition, such as by definition of the courts or by statute applicable to all cases, TheMerit Systems Protection Board's has codified their definition at 5 CFR 1201.56(c)(2). MSPB defines the standard as "The degree of relevant evidence that a reasonable person, considering the record as a whole, would accept as sufficient to find that a contested fact is more likely to be true than untrue." One author highlights the phrase "more likely to be true than untrue" as the critical component of the definition.[15] From 2013 to 2020, theDepartment of Educationrequired schools to use a preponderance of evidence standard in evaluating sexual assault claims.[17] Clear and convincing evidence is a higher level of burden of persuasion than "preponderance of the evidence", but less than "beyond reasonable doubt". It is employed intra-adjudicatively in administrative court determinations, as well as inciviland certaincriminal procedurein the United States. 
For example, a prisoner seekinghabeas corpusrelief fromcapital punishmentmust prove his factual innocence by clear and convincing evidence.[18]New York State uses this standard when a court must determine whether to involuntarily hospitalize a mentally ill patient or to issue anAssisted Outpatient TreatmentOrder.[19]This standard was also codified by the United States Supreme Court in all mental health civil commitment cases.[20] This standard is used in many types ofequitycases, includingpaternity,persons in need of supervision,child custody, theprobateof both wills andliving wills, petitions to remove a person fromlife support("right to die" cases),[21]mental hygiene and involuntary hospitalizations, and many similar cases. Clear and convincing evidence is the standard of proof used for immunity from prosecution under Florida'sstand-your-ground law.[22][non-primary source needed][original research?]Once raised by the defense, the state must present its evidence in a pre-trial hearing, showing that the statutory prerequisites have not been met, and then request that the court deny a motion for declaration of immunity. The judge must then decide from clear and convincing evidence whether to grant immunity.[23]This is a lower burden than "beyond a reasonable doubt", the threshold a prosecutor must meet at any proceeding criminal trial,[24]but higher than the "probable cause" threshold generally required forindictment. Clear and convincing proof means that the evidence presented by a party during the trial must be highly and substantially more probable to be true than not and the trier of fact must have a firm belief or conviction in its factuality.[25]In this standard, a greater degree of believability must be met than the common standard of proof in civil actions (i.e. preponderance of the evidence), which only requires that the facts as a threshold be more likely than not to prove the issue for which they are asserted. This standard is also known as "clear, convincing, and satisfactory evidence"; "clear, cognizant, and convincing evidence", and is applied in cases or situations involving an equitable remedy or where a presumptive civil liberty interest exists. For example, this is the standard or quantum of evidence use toprobatealast will and testament. This is the highest standard used as the burden of proof in Anglo-American jurisprudence and typically only applies in juvenile delinquency proceedings, criminal proceedings, and when consideringaggravating circumstancesin criminal proceedings. It has been described, in negative terms, as a proof having been met if there is no plausible reason to believe otherwise. If there is a real doubt, based upon reason and common sense after careful and impartial consideration of all the evidence, or lack of evidence, in a case, then the level of proof has not been met. Proof beyond a reasonable doubt, therefore, is proof of such a convincing character that one would be willing to rely and act upon it without hesitation in the most important of one's own affairs. However, it does not mean an absolute certainty. The standard that must be met by the prosecution's evidence in a criminal prosecution is that no other logical explanation can be derived from the facts except that the defendant committed the crime, thereby overcoming the presumption that a person is innocent unless and until proven guilty. 
If the trier of fact has no doubt as to the defendant's guilt, or if their only doubts are unreasonable doubts, then the prosecutor has proved the defendant's guilt beyond a reasonable doubt and the defendant should be pronounced guilty. The term connotes that evidence establishes a particular point to a moral certainty which precludes the existence of any reasonable alternatives. It does not mean that no doubt exists as to the accused's guilt, but only that noreasonabledoubt is possible from the evidence presented.[26]Further to this notion of moral certainty, where the trier of fact relies on proof that is solely circumstantial,i.e., when conviction is based entirely oncircumstantial evidence, certain jurisdictions specifically require the prosecution's burden of proof to be such that the facts proved must exclude to a moral certainty every reasonable hypothesis or inference other than guilt. The main reason that this high level of proof is demanded in criminal trials is that such proceedings can result in the deprivation of a defendant's liberty or even in their death. These outcomes are far more severe than in civil trials, in which monetary damages are the common remedy. Another noncriminal instance in which proof beyond a reasonable doubt is applied isLPS conservatorship. In the three jurisdictions of the UK (Northern Ireland;England and Wales; and Scotland) there are only two standards of proof in trials. There are others which are defined in statutes, such as those relating to police powers. The criminal standard was formerly described as "beyond reasonable doubt". That standard remains,[citation needed]and the words commonly used,[citation needed]though the Judicial Studies Board guidance is that juries might be assisted by being told that to convict they must be persuaded "so that you are sure". The civil standard is 'the balance of probabilities', often referred to in judgments as "more likely than not".Lord Denning, inMiller v. Minister of Pensions, worded the standard as "more probable than not".[27] The civil standard is also used in criminal trials in relation to those defences which must be proven by the defendant (for example, the statutory defence todrunk in chargethat there was no likelihood of the accused driving while still over the alcohol limit[28]). However, where the law does not stipulate a reverse burden of proof, the defendant need only raise the issue and it is then for the prosecution to negate the defence to the criminal standard in the usual way (for example, that of self-defence[29]). Prior to the decision of the House of Lords inRe B (A Child)[2008] UKHL 35[30]there had been some confusion – even at the Court of Appeal – as to whether there was some intermediate standard, described as the 'heightened standard'. The House of Lords found that there was not. As the above description of the American system shows, anxiety by judges about making decisions on very serious matters on the basis of the balance of probabilities had led to a departure from the common law principles of just two standards.Baroness Halesaid: 70. ... Neither the seriousness of the allegation nor the seriousness of the consequences should make any difference to the standard of proof to be applied in determining the facts. The inherent probabilities are simply something to be taken into account, where relevant, in deciding where the truth lies. 72. ... there is no logical or necessary connection between seriousness and probability. 
Some seriously harmful behaviour, such as murder, is sufficiently rare to be inherently improbable in most circumstances. Even then there are circumstances, such as a body with its throat cut and no weapon to hand, where it is not at all improbable. Other seriously harmful behaviour, such as alcohol or drug abuse, is regrettably all too common and not at all improbable. Nor are serious allegations made in a vacuum. Consider the famous example of the animal seen in Regent's Park. If it is seen outside the zoo on a stretch of greensward regularly used for walking dogs, then of course it is more likely to be a dog than a lion. If it is seen in the zoo next to the lions' enclosure when the door is open, then it may well be more likely to be a lion than a dog. The task for the tribunal then when faced with serious allegations is to recognize that their seriousness generally means they are inherently unlikely, such that to be satisfied that a fact is more likely than not the evidence must be of a good quality. But the standard of proof remains 'the balance of probabilities'. In Australia, two standards of proof are applied at common law: the criminal standard and the civil standard.[31]It is possible for other standards of proof to be applied where required by law.[citation needed] The criminal standard in Australia is 'beyond reasonable doubt'.[32]An offence against a Commonwealth law, with a term of imprisonment in excess of 12 months is an 'indictable offence';[33]and is constitutionally required to be tried before jury of 12 people.[34][35]Offences that do not carry a term of imprisonment exceeding 12 months are called 'Summary Offences'. Some offences (with a term of imprisonment <10 years) may be heard by a court of summary jurisdiction,a.k.a.Magistrates Court with the consent of all parties; however the court may not impose a sentence greater than 12 months. Juries are required to make findings of guilt 'beyond reasonable doubt' for criminal matters.[32] The Australian constitution does not expressly provide that criminal trials must be 'fair', nor does it set out the elements of a fair trial, but it may by implication protect other attributes.[36]The High Court has moved toward, but not yet, entrenched procedural fairness as a constitutional right. If it did so, this would have the potential to constitutionalise the 'beyond reasonable doubt' standard in criminal proceedings.[37] State offences are not subject to the constitution's section 80 requirement for a jury. However, the case ofKirkconstrains the way that State courts may operate during criminal trials per theKable Doctrine.[38] In Australia, the civil standard is termed the 'balance of probabilities'.[39]In Australia, the 'balance of probabilities' involves considerations that the evidence required to establish a fact at the civil standard will vary with the seriousness of what is being alleged.[40]Although it has been noted a similar approach is taken in Canada.[41][42]In the United Kingdom the evidential requirements of the civil standard of proof don't vary with the seriousness of an allegation.[30] The case law that establishes this isBriginshaw v Briginshaw, which is the fifth most cited decision of Australia's High Court.[43]The case has since been incorporated into the uniform evidence law.[44]TheBriginshawprinciple was articulated by Dixon in that case in these terms:[45] ...it is enough that the affirmative of an allegation is made out to the reasonable satisfaction of the tribunal. 
But reasonable satisfaction is not a state of mind that is attained or established independently of the nature and consequence of the fact or facts to be proved. The seriousness of an allegation made, the inherent unlikelihood of an occurrence of a given description, or the gravity of the consequences flowing from a particular finding are considerations which must affect the answer to the question whether the issue has been proved to the reasonable satisfaction of the tribunal. In such matters "reasonable satisfaction" should not be produced by inexact proofs, indefinite testimony, or indirect inferences. Everyone must feel that, when, for instance, the issue is on which of two dates an admitted occurrence took place, a satisfactory conclusion may be reached on materials of a kind that would not satisfy any sound and prudent judgment if the question was whether some act had been done involving grave moral delinquency TheBriginshawprinciple is sometimes incorrectly referred to as theBriginshawstandard of proof,[39]inQantas Airways Limited v. GamaJustices French and Jacobson stated the "Briginshaw test does not create any third standard of proof between the civil and the criminal."[46] In theHigh Courtcase ofG v. HJustices Deane, Dawson and Gaudron stated "Not every case involves issues of importance and gravity in theBriginshaw v. Briginshawsense. The need to proceed with caution is clear if, for example, there is an allegation of fraud or an allegation of criminal or moral wrongdoing..".[47] An example of theBriginshawprinciple applied in practice is the case ofBen Roberts-Smithwhere, due to the gravity of the allegations,Fairfax Mediawas required to rely on stronger proof than in the context of a normal allegation to win their case.[48][Note 1]In the end, despite the high burden of proof required, Fairfax won the trial, with Besanko ruling that it was proven he "broke the moral and legal rules of military engagement and is therefore a criminal".[49][50][51] Melbourne Law SchoolprofessorJeremy Gans, has noted that for particularly serious allegations, such as sexual assault, "It's hard to see how theBriginshawprinciple is much different to beyond reasonable doubt".[52]The decision has also been noted for affecting the ability of litigants to seek redress in anti-discrimination lawsuits, due to the seriousness of such allegations.[39] The "air of reality" is a standard of proof used inCanadato determine whether a criminal defense may be used. The test asks whether a defense can be successful if it is assumed that all the claimed facts are to be true. In most cases, the burden of proof rests solely on the prosecution, negating the need for a defense of this kind. However, when exceptions arise and the burden of proof has been shifted to the defendant, they are required to establish a defense that bears an "air of reality". Two instances in which such a case might arise are, first, when aprima faciecase has been made against the defendant or, second, when the defense mounts anaffirmative defense, such as theinsanity defense. This is similar to the concept ofsummary judgmentin the United States, though not identical.[53] Depending on the legal venue or intra-case hearing, varying levels of reliability of proof are considered dispositive of the inquiry being entertained. If the subject threshold level of reliability has been met by the presentation of the evidence, then the thing is considered legally proved for that trial, hearing or inquest. 
For example, in California, several evidentiary presumptions are codified, including a presumption that the owner of legal title is the beneficial owner (rebuttable only by clear and convincing evidence).[54] Criminalcases usually place the burden of proof on theprosecutor(expressed in theLatinbrocardei incumbit probatio qui dicit, non qui negat, "the burden of proof rests on who asserts, not on who denies"). This principle is known as thepresumption of innocence, and is summed up with "innocent until proven guilty", but is not upheld in all legal systems orjurisdictions. Where it is upheld, the accused will be found not guilty if this burden of proof is not sufficiently shown by the prosecution.[55]The presumption of innocence means three things: For example, if the defendant (D) is charged with murder, the prosecutor (P) bears the burden of proof to show the jury that D did indeed murder someone. However, in England and Wales, theMagistrates' Courts Act 1980, s.101 stipulates that where a defendant relies on some "exception, exemption, proviso, excuse or qualification" in their defence in a summary trial, the legal burden of proof as to that exception falls on the defendant, though only on the balance of probabilities. For example, a person charged with beingdrunk in chargeof a motor vehicle can raise the defense that there was no likelihood of their driving while drunk.[58]The prosecution has the legal burden of proof beyond reasonable doubt that the defendant exceeded the legal limit of alcohol and was in control of a motor vehicle. Possession of the keys is usually sufficient to prove control, even if the defendant is not in the vehicle and is perhaps in a nearby bar. That being proved, the defendant has the legal burden of proof on the balance of probabilities that they were not likely to drive.[59] In 2002, such practice in England and Wales was challenged as contrary to theEuropean Convention on Human Rights(ECHR), art.6(2) guaranteeing right to a fair trial. TheHouse of Lordsheld that:[59][60] In some cases, there is areverse onuson the accused. A typical example is that of ahit-and-runcharge prosecuted under the CanadianCriminal Code. The defendant is presumed to have fled the scene of a crash, to avoid civil or criminal liability, if the prosecution can prove the remaining essential elements of the offense. Incivil lawcases, such as a dispute over a contract or a claim about an accidentalinjury, the burden of proof usually requires the plaintiff to convince the trier of fact (whether judge or jury) of the plaintiff's entitlement to the relief sought. This means that the plaintiff must prove each element of the claim, or cause of action, in order to recover. This rule is not absolute in civil lawsuits; unlike with criminal offenses, laws may establish a different burden of proof, or the burden in an individual case may be reversed as a matter of fairness.[61]For example, if a bank or government agency has alegal dutyto keep certain records, and a lawsuit alleges that the proper records were not kept, then the plaintiff may not be required toprove a negative; instead, the respondent could be required to prove to the court that the records were kept. InKeyes v. Sch. Dist. No. 1, theUnited States Supreme Courtstated: "There are no hard-and-fast standards governing the allocation of the burden of proof in every situation. The issue, rather, 'is merely a question of policy and fairness based on experience in the different situations'."[62]For support, the Court cited 9 John H. 
Wigmore, Evidence § 2486, at 275 (3d ed. 1940). InKeyes, the Supreme Court held that if "school authorities have been found to have practiced purposeful segregation in part of a school system", the burden of persuasion shifts to the school to prove that it did not engage in such discrimination in other segregated schools in the same system.[62] InDirector, Office of Workers' Compensation Programs v. Greenwich Collieries, the Supreme Court explained that "burden of proof" is ambiguous because it has historically referred to two distinct burdens: theburden of persuasion, and theburden of production.[63] The Supreme Court discussed how courts should allocate the burden of proof (i.e., the burden of persuasion) inSchaffer ex rel. Schaffer v. Weast.[61]The Supreme Court explained that if a statute is silent about the burden of persuasion, the court will "begin with the ordinary default rule that plaintiffs bear the risk of failing to prove their claims".[61]In support of this proposition, the Court cited 2 J. Strong,McCormick on Evidence§ 337, 412 (5th ed. 1999), which states: The burdens of pleading and proof with regard to most facts have been and should be assigned to the plaintiff who generally seeks to change the present state of affairs and who therefore naturally should be expected to bear the risk of failure of proof or persuasion.[61] At the same time, the Supreme Court also recognized "The ordinary default rule, of course, admits of exceptions. ... For example, the burden of persuasion as to certain elements of a plaintiff's claim may be shifted to defendants, when such elements can fairly be characterized as affirmative defenses or exemptions. ... Under some circumstances this Court has even placed the burden of persuasion over an entire claim on the defendant. ... [Nonetheless,] [a]bsent some reason to believe that Congress intended otherwise, therefore, [the Supreme Court] will conclude that the burden of persuasion lies where it usually falls, upon the party seeking relief."[61]
https://en.wikipedia.org/wiki/Burden_of_proof_(law)
Thehorizon effect, also known as thehorizon problem, is a problem inartificial intelligencewhereby, in many games, the number of possible states or positions is immense and computers can only feasibly search a small portion of them, typically a fewpliesdown thegame tree. Thus, for a computer searching only a fixed number of plies, there is a possibility that it will make a poor long-term move. The drawbacks of the move are not "visible" because the computer does not search to the depth at which itsevaluation functionreveals the true evaluation of the line. The analogy is to peering at a distance on a sphere like the earth, but a threat being beneath the horizon and hence unseen. When evaluating a largegame treeusing techniques such asminimaxwithalpha-beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result. When a significant change exists just over the horizon of the search depth, the computational device falls victim to the horizon effect. In 1973Hans Berlinernamed this phenomenon, which he and other researchers had observed, the "Horizon Effect."[1]He split the effect into two: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form." The horizon effect can be somewhat mitigated byquiescence search. This technique extends the effort and time spent searching board states left in volatile positions and allocates less effort to easier-to-assess board states. For example, "scoring" the worth of a chess position often involves amaterial value count, but this count is misleading if there arehanging piecesor an imminent checkmate. A board state after the white queen has captured a protected black knight would appear to the naive material count to be advantageous to white as they are now up a knight, but is probably disastrous as the queen will be taken in the exchange one ply later. A quiescence search may tell a search algorithm to play out thecapturesandchecksbefore scoringleaf nodeswith volatile positions. Inchess, assume a situation where the computer only searches the game tree to sixpliesand from the current position determines that the queen is lost in the sixth ply; and suppose there is a move in the search depth where it maysacrificea rook, and the loss of the queen is pushed to the eighth ply. This is, of course, a worse move than sacrificing the queen because it leads to losing both a queen and a rook. However, because the loss of the queen was pushed over the horizon of search, it is not discovered and evaluated by the search. Losing the rook seems to be better than losing the queen, so the sacrifice is returned as the best option whereas delaying the sacrifice of the queen has in fact additionally weakened the computer's position. As another example, while someperpetual checksquickly trigger a threefold-repetition draw, others can involve a queen chasing a king around across the board, varying the position each time, meaning the actual forced draw could be many moves into the future. If a quiescence search playing out the checks isn't done, then the AI might not detect the possibility, and can let the engine blunder a winning position into a drawn one. 
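The interplay between a fixed-depth search and the quiescence extension described above can be sketched in a few lines. The following TypeScript is a minimal illustration, not a real engine: the Game interface (evaluate, generateMoves, generateCaptures, applyMove) is a hypothetical stand-in for whatever move generator and evaluation function an actual program would supply.

```typescript
// A minimal sketch of a fixed-depth negamax search that hands off to a
// quiescence search at depth 0, assuming a hypothetical Game interface
// (evaluate, generateMoves, generateCaptures, applyMove) supplied by the
// caller. Illustrative only, not a real engine.
interface Game<P, M> {
  evaluate(pos: P): number;       // static evaluation from the side to move
  generateMoves(pos: P): M[];     // all legal moves
  generateCaptures(pos: P): M[];  // only "noisy" moves: captures, checks, promotions
  applyMove(pos: P, move: M): P;  // successor position (pos itself is not mutated)
}

// Quiescence search: keep playing out noisy moves so that the static
// evaluation is only taken in a quiet position, mitigating the horizon effect.
function quiesce<P, M>(g: Game<P, M>, pos: P, alpha: number, beta: number): number {
  const standPat = g.evaluate(pos);
  if (standPat >= beta) return beta;
  if (standPat > alpha) alpha = standPat;
  for (const move of g.generateCaptures(pos)) {
    const score = -quiesce(g, g.applyMove(pos, move), -beta, -alpha);
    if (score >= beta) return beta;
    if (score > alpha) alpha = score;
  }
  return alpha;
}

// Fixed-depth negamax with alpha-beta pruning.
function search<P, M>(g: Game<P, M>, pos: P, depth: number, alpha: number, beta: number): number {
  if (depth === 0) return quiesce(g, pos, alpha, beta); // not g.evaluate(pos) directly
  const moves = g.generateMoves(pos);
  if (moves.length === 0) return g.evaluate(pos);       // terminal position
  for (const move of moves) {
    const score = -search(g, g.applyMove(pos, move), depth - 1, -beta, -alpha);
    if (score >= beta) return beta;
    if (score > alpha) alpha = score;
  }
  return alpha;
}
```

Because depth 0 hands off to quiesce() rather than returning the raw evaluation, a line that ends in the middle of a capture sequence is played out before being scored, which is exactly the mitigation of the horizon effect described above.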
InGo, the horizon effect is a major concern for writing an AI capable of even beginner-level play, and part of why alpha-beta search was a weak approach toComputer Gocompared to later machine learning and pattern recognition approaches. It is a very common situation for certain stones to be "dead" yet require many moves to actually capture them if fought over. The horizon effect may cause a naive algorithm to incorrectly assess the situation and believe that the stones are savable by calculating a play that seems to keep the doomed stones alive as of the move the search tree stops at. While the death of the group can indeed be delayed, it cannot be stopped, and contesting this will only allow more stones to be captured. A classic example that beginners learn areGo ladders, but the same general idea occurs even in situations that aren't strictly ladders.[2]
https://en.wikipedia.org/wiki/Horizon_effect
TheWeb Cryptography APIis theWorld Wide Web Consortium’s (W3C) recommendation for a low-level interface that would increase the security ofweb applicationsby allowing them to performcryptographic functionswithout having to access raw keying material.[1]This agnosticAPIwould perform basic cryptographic operations, such ashashing,signature generationand verification andencryptionas well asdecryptionfrom within a web application.[2] On 26 January 2017, theW3Creleased its recommendation for a Web Cryptography API[3]that could perform basic cryptographic operations in web applications. This agnostic API would utilizeJavaScriptto perform operations that would increase the security of data exchange withinweb applications. The API would provide a low-level interface to create and/or managepublic keysandprivate keysforhashing,digital signaturegeneration and verification andencryptionanddecryptionfor use with web applications. The Web Cryptography API could be used for a wide range of uses, including: Because the Web Cryptography API is agnostic in nature, it can be used on anyplatform. It would provide a common set ofinterfacesthat would permitweb applicationsandprogressive web applicationsto conduct cryptographic functions without the need to access raw keying material. This would be done with the assistance of the SubtleCrypto interface, which defines a group of methods to perform the above cryptographic operations. Additional interfaces within the Web Cryptography API would allow for key generation, key derivation and key import and export.[1] The W3C’s specification for the Web Cryptography API places focus on the common functionality and features that currently exist between platform-specific and standardized cryptographic APIs versus those that are known to just a few implementations. The group’s recommendation for the use of the Web Cryptography API does not dictate that a mandatory set of algorithms must be implemented. This is because of the awareness that cryptographic implementations will vary amongst conforming user agents because ofgovernment regulations, localpolicies, securitypracticesandintellectual propertyconcerns. There are many types of existing web applications that the Web Cryptography API would be well suited for use with.[1] Todaymulti-factor authenticationis considered one of the most reliable methods for verifying the identity of a user of a web application, such asonline banking. Many web applications currently depend on this authentication method to protect both the user and theuser agent. With the Web Cryptography API, a web application would have the ability to provide authentication from within itself instead of having to rely on transport-layer authentication to secret keying material to authenticate user access. This process would provide a richer experience for the user. The Web Cryptography API would allow the application to locate suitable client keys that were previously created by the user agent or had been pre-provisioned by the web application. The application would be able to give the user agent the ability to either generate a new key or re-use an existing key in the event the user does not have a key already associated with their account. 
By binding this process to the Transport Layer Security that the user is authenticating through, the multi-factor authentication process can be additionally strengthened by the derivation of a key that is based on the underlying transport.[1][2] The API can be used to protect sensitive or confidential documents from unauthorized viewing from within a web application, even if they have been previously securely received. The web application would use the Web Cryptography API to encrypt the document with a secret key and then wrap it with public keys that have been associated with users who are authorized to view the document. Upon navigating to the web application, the authorized user would receive the encrypted document and would be instructed to use their private key to begin the unwrapping process that would allow them to decrypt and view the document.[2] Many businesses and individuals rely on cloud storage. For protection, a remote service provider might want its web application to give users the ability to protect their confidential documents before uploading them or any other data. The Web Cryptography API would allow users to encrypt their documents or other data before they are uploaded. The ability to electronically sign documents saves time, enhances the security of important documents and can serve as legal proof of a user's acceptance of a document. Many web applications choose to accept electronic signatures instead of requiring written signatures. With the Web Cryptography API, a user would be prompted to choose a key that could be generated or pre-provisioned specifically for the web application. The key could then be used during the signing operation. Web applications often cache data locally, which puts the data at risk for compromise if an offline attack were to occur. The Web Cryptography API permits the web application to use a public key deployed from within itself to verify the integrity of the data cache.[2] The Web Cryptography API can enhance the security of messaging for use in off-the-record (OTR) and other types of message-signing schemes through the use of key agreement. The message sender and intended recipient would negotiate shared encryption and message authentication code (MAC) keys to encrypt and decrypt messages to prevent unauthorized access.[2] The Web Cryptography API can be used by web applications to interact with message formats and structures that are defined by the JOSE Working Group.[4] The application can read and import JSON Web Key (JWK) keys, validate messages that have been protected through electronic signing or MAC keys, and decrypt JWE messages. The W3C recommends that vendors avoid using vendor-specific proprietary extensions with specifications for the Web Cryptography API. This is because it could reduce the interoperability of the API and fragment the user base, since not all users would be able to access the particular content. It is recommended that when a vendor-specific extension cannot be avoided, the vendor should prefix it with vendor-specific strings to prevent clashes with future generations of the API's specifications.
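As a concrete illustration of the SubtleCrypto methods mentioned above, the following TypeScript sketch hashes and then encrypts a string in a browser-like runtime that exposes globalThis.crypto. The algorithm choices (SHA-256, AES-GCM) and the function name hashAndEncrypt are illustrative only, not a recommended configuration.

```typescript
// A minimal sketch of the SubtleCrypto interface described above, assuming a
// runtime that exposes globalThis.crypto. Algorithm parameters are illustrative.
async function hashAndEncrypt(plaintext: string): Promise<ArrayBuffer> {
  const data = new TextEncoder().encode(plaintext);

  // Hashing: the digest is computed without the page handling any key material.
  const digest = await crypto.subtle.digest("SHA-256", data);
  console.log("SHA-256 digest length:", new Uint8Array(digest).length, "bytes");

  // Key generation: marking the key non-extractable means the application can
  // use it for encrypt/decrypt but can never read the raw key bytes.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false, // extractable
    ["encrypt", "decrypt"],
  );

  // Encryption with a fresh random IV; the IV would be stored with the ciphertext.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  return crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, data);
}
```

Because the CryptoKey is created non-extractable, the application can encrypt and decrypt without ever holding the raw key bytes, which is the "no access to raw keying material" property emphasised in the specification.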
https://en.wikipedia.org/wiki/Web_Cryptography_API
Alist ofCDMA2000networksworldwide.
https://en.wikipedia.org/wiki/List_of_CDMA2000_networks
In computing, the Executable and Linkable Format[2] (ELF, formerly named Extensible Linking Format) is a common standard file format for executable files, object code, shared libraries, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4),[3] and later in the Tool Interface Standard,[1] it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project. By design, the ELF format is flexible, extensible, and cross-platform. For instance, it supports different endiannesses and address sizes, so it does not exclude any particular CPU or instruction set architecture. This has allowed it to be adopted by many different operating systems on many different hardware platforms. Each ELF file is made up of one ELF header, followed by file data. The data can include a program header table (describing zero or more memory segments), a section header table (describing zero or more sections), and the data referred to by entries in those tables. The segments contain information that is needed for run-time execution of the file, while sections contain important data for linking and relocation. Any byte in the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section. The ELF header defines whether to use 32-bit or 64-bit addresses. The header contains three fields that are affected by this setting and that offset the fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries, respectively. When e_ident[EI_OSABI] == 3, glibc 2.12+ treats the e_ident[EI_ABIVERSION] field as the ABI version of the dynamic linker:[6] it defines a list of the dynamic linker's features,[7] treats e_ident[EI_ABIVERSION] as the feature level requested by the shared object (executable or dynamic library), and refuses to load the object if an unknown feature is requested, i.e. if e_ident[EI_ABIVERSION] is greater than the largest known feature.[8] The program header table tells the system how to create a process image. It is found at file offset e_phoff, and consists of e_phnum entries, each with size e_phentsize. The layout is slightly different in 32-bit ELF vs 64-bit ELF, because the p_flags field is in a different structure location for alignment reasons; each entry records the segment's type and flags, its offset within the file, its virtual (and physical) address, its size in the file and in memory, and its alignment. The ELF format has replaced older executable formats in various environments. It has replaced the a.out and COFF formats in Unix-like operating systems. ELF has also seen some adoption in non-Unix operating systems; Microsoft Windows also uses the ELF format, but only for its Windows Subsystem for Linux compatibility system.[17] ELF is likewise used by some game consoles, by other (operating) systems running on PowerPC, and by some operating systems for mobile phones and mobile devices. Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known as ELFPack in the underground modding culture. The ELF file format is also used with the Atmel AVR (8-bit) and AVR32[22] microcontroller architectures and with the Texas Instruments MSP430. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced.
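To make the header layout above concrete, here is a small TypeScript (Node.js) sketch that checks the ELF magic number, reads e_ident[EI_CLASS] to decide between the 32- and 64-bit layouts, and then pulls out the program-header fields mentioned above. It assumes a little-endian file and does minimal validation; a robust reader would also honour e_ident[EI_DATA] and check the remaining fields.

```typescript
// A small sketch (Node.js) of reading the ELF header fields discussed above.
// Assumes a little-endian file; only the magic number is validated.
import { readFileSync } from "fs";

function readElfHeader(path: string) {
  const buf = readFileSync(path);

  // e_ident[0..3] must be 0x7F 'E' 'L' 'F'.
  if (buf.readUInt32LE(0) !== 0x464c457f) throw new Error("not an ELF file");

  const is64 = buf[4] === 2; // e_ident[EI_CLASS]: 1 = 32-bit, 2 = 64-bit

  // e_entry, e_phoff and e_shoff grow from 4 to 8 bytes in 64-bit ELF, which
  // shifts every field that follows them; hence the two sets of offsets.
  const e_phoff = is64 ? Number(buf.readBigUInt64LE(32)) : buf.readUInt32LE(28);
  const e_phentsize = buf.readUInt16LE(is64 ? 54 : 42);
  const e_phnum = buf.readUInt16LE(is64 ? 56 : 44);

  return { is64, e_phoff, e_phentsize, e_phnum };
}

// The program header table starts at e_phoff and holds e_phnum entries of
// e_phentsize bytes each, as described in the text above.
console.log(readElfHeader("/bin/ls"));
```

Iterating from e_phoff in steps of e_phentsize for e_phnum entries then walks the program header table that the loader uses to build the process image.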
86openwas a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture.[24]The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150". The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be ade factostandard supported by all involved vendors and operating systems. The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997. The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch,Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD,Intel, Linux, NetBSD, SCO and SunSoft. The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999,[25]and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications. With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.[26] FatELF is an ELF binary-format extension that adds fat binary capabilities.[27]It is aimed for Linux and other Unix-like operating systems. Additionally to the CPU architecture abstraction (byte order, word size,CPUinstruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021[update], FatELF has not been integrated into the mainline Linux kernel.[28][29][30] [1]
https://en.wikipedia.org/wiki/Executable_and_Linkable_Format
Incomputer science,symbolic execution(alsosymbolic evaluationorsymbex) is a means ofanalyzing a programto determine whatinputscause each part of a program toexecute. Aninterpreterfollows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the possible inputs that trigger a branch can be determined by solving the constraints. The field ofsymbolic simulationapplies the same concept to hardware.Symbolic computationapplies the concept to the analysis of mathematical expressions. Consider the program below, which reads in a value and fails if the input is 6. During a normal execution ("concrete" execution), the program would read a concrete input value (e.g., 5) and assign it toy. Execution would then proceed with the multiplication and the conditional branch, which would evaluate to false and printOK. During symbolic execution, the program reads a symbolic value (e.g.,λ) and assigns it toy. The program would then proceed with the multiplication and assignλ * 2toz. When reaching theifstatement, it would evaluateλ * 2 == 12. At this point of the program,λcould take any value, and symbolic execution can therefore proceed along both branches, by "forking" two paths. Each path gets assigned a copy of the program state at the branch instruction as well as a path constraint. In this example, the path constraint isλ * 2 == 12for theifbranch andλ * 2 != 12for theelsebranch. Both paths can be symbolically executed independently. When paths terminate (e.g., as a result of executingfail()or simply exiting), symbolic execution computes a concrete value forλby solving the accumulated path constraints on each path. These concrete values can be thought of as concrete test cases that can, e.g., help developers reproduce bugs. In this example, theconstraint solverwould determine that in order to reach thefail()statement,λwould need to equal 6. Symbolically executing all feasible program paths does not scale to large programs. The number of feasible paths in a program grows exponentially with an increase in program size and can even be infinite in the case of programs with unbounded loop iterations.[1]Solutions to thepath explosionproblem generally use either heuristics for path-finding to increase code coverage,[2]reduce execution time by parallelizing independent paths,[3]or by merging similar paths.[4]One example of merging isveritesting, which "employs static symbolic execution to amplify the effect of dynamic symbolic execution".[5] Symbolic execution is used to reason about a program path-by-path which is an advantage over reasoning about a program input-by-input as other testing paradigms use (e.g.dynamic program analysis). However, if few inputs take the same path through the program, there is little savings over testing each of the inputs separately. Symbolic execution is harder when the same memory location can be accessed through different names (aliasing). Aliasing cannot always be recognized statically, so the symbolic execution engine can't recognize that a change to the value of one variable also changes the other.[6] Since an array is a collection of many distinct values, symbolic executors must either treat the entire array as one value or treat each array element as a separate location. 
The problem with treating each array element separately is that a reference such as "A[i]" can only be specified dynamically, when the value for i has a concrete value.[6] Programs interact with their environment by performingsystem calls, receiving signals, etc. Consistency problems may arise when execution reaches components that are not under control of the symbolic execution tool (e.g., kernel or libraries). Consider the following example: This program opens a file and, based on some condition, writes different kind of data to the file. It then later reads back the written data. In theory, symbolic execution would fork two paths at line 5 and each path from there on would have its own copy of the file. The statement at line 11 would therefore return data that is consistent with the value of "condition" at line 5. In practice, file operations are implemented as system calls in the kernel, and are outside the control of the symbolic execution tool. The main approaches to address this challenge are: Executing calls to the environment directly.The advantage of this approach is that it is simple to implement. The disadvantage is that the side effects of such calls will clobber all states managed by the symbolic execution engine. In the example above, the instruction at line 11 would return "some datasome other data" or "some other datasome data" depending on the sequential ordering of the states. Modeling the environment.In this case, the engine instruments the system calls with a model that simulates their effects and that keeps all the side effects in per-state storage. The advantage is that one would get correct results when symbolically executing programs that interact with the environment. The disadvantage is that one needs to implement and maintain many potentially complex models of system calls. Tools such as KLEE,[7]Cloud9, and Otter[8]take this approach by implementing models for file system operations, sockets,IPC, etc. Forking the entire system state.Symbolic execution tools based on virtual machines solve the environment problem by forking the entire VM state. For example, in S2E[9]each state is an independent VM snapshot that can be executed separately. This approach alleviates the need for writing and maintaining complex models and allows virtually any program binary to be executed symbolically. However, it has higher memory usage overheads (VM snapshots may be large). The concept of symbolic execution was introduced academically in the 1970s with descriptions of: the Select system,[13]the EFFIGY system,[14]the DISSECT system,[15]and Clarke's system.[16]
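To make the y/z example above concrete without a real SMT solver, the following TypeScript sketch restricts symbolic values to linear expressions a·λ + b, so each path constraint can be "solved" by simple inversion. Everything here (the Lin type, Path, execute) is an illustrative toy under that assumption; production tools such as KLEE represent state and constraints very differently.

```typescript
// A toy sketch of symbolic execution for the y/z example above. Symbolic
// values are restricted to linear expressions a*lambda + b, so each path
// constraint can be "solved" by inversion instead of calling an SMT solver.
type Lin = { a: number; b: number };          // the expression a*lambda + b

const input: Lin = { a: 1, b: 0 };            // y = lambda, the symbolic input
const mul = (x: Lin, k: number): Lin => ({ a: x.a * k, b: x.b * k });

interface Path { constraint: string; testInput: number }

// Symbolically "execute": z = y * 2; if (z == 12) fail(); else print("OK");
function execute(): Path[] {
  const z = mul(input, 2);                    // z = 2*lambda + 0

  // The branch forks two paths, each carrying its own constraint.
  // Solving z == 12 for lambda: lambda = (12 - b) / a = 6.
  const hit = (12 - z.b) / z.a;
  return [
    { constraint: `${z.a}*lambda + ${z.b} == 12`, testInput: hit },     // reaches fail()
    { constraint: `${z.a}*lambda + ${z.b} != 12`, testInput: hit + 1 }, // prints OK
  ];
}

console.log(execute()); // testInput 6 is the concrete test case that reproduces the failure
```

Path explosion is visible even in this toy: every additional branch would double the number of Path records, which is why the path-finding heuristics and path-merging techniques mentioned above matter for real programs.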
https://en.wikipedia.org/wiki/Symbolic_execution
The parametron is a logic circuit element invented by Eiichi Goto in 1954.[1] The parametron is essentially a resonant circuit with a nonlinear reactive element which oscillates at half the driving frequency.[2] The oscillation can be made to represent a binary digit by the choice between two stationary phases π radians (180 degrees) apart.[3] Parametrons were used in early Japanese computers from 1954 through the early 1960s because they were reliable and inexpensive, but they were ultimately surpassed by transistors due to differences in speed.[4] A prototype parametron-based computer, the PC-1, was built at the University of Tokyo in 1958.
https://en.wikipedia.org/wiki/Parametron
I can remember Bertrand Russell telling me of a horrible dream. He was in the top floor of the University Library, about A.D. 2100. A library assistant was going round the shelves carrying an enormous bucket, taking down books, glancing at them, restoring them to the shelves or dumping them into the bucket. At last he came to three large volumes which Russell could recognize as the last surviving copy of Principia Mathematica. He took down one of the volumes, turned over a few pages, seemed puzzled for a moment by the curious symbolism, closed the volume, balanced it in his hand and hesitated.... He [Russell] said once, after some contact with the Chinese language, that he was horrified to find that the language of Principia Mathematica was an Indo-European one. The Principia Mathematica (often abbreviated PM) is a three-volume work on the foundations of mathematics written by the mathematician–philosophers Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. In 1925–1927, it appeared in a second edition with an important Introduction to the Second Edition, an Appendix A that replaced ✱9, and all-new Appendix B and Appendix C. PM was conceived as a sequel to Russell's 1903 The Principles of Mathematics, but as PM states, this became an unworkable suggestion for practical and philosophical reasons: "The present work was originally intended by us to be comprised in a second volume of Principles of Mathematics... But as we advanced, it became increasingly evident that the subject is a very much larger one than we had supposed; moreover on many fundamental questions which had been left obscure and doubtful in the former work, we have now arrived at what we believe to be satisfactory solutions." PM, according to its introduction, had three aims: (1) to analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms, and inference rules; (2) to precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; (3) to solve the paradoxes that plagued logic and set theory at the turn of the 20th century, like Russell's paradox.[3] This third aim motivated the adoption of the theory of types in PM. The theory of types adopts grammatical restrictions on formulas that rule out the unrestricted comprehension of classes, properties, and functions. The effect of this is that formulas which would allow the comprehension of objects like the Russell set turn out to be ill-formed: they violate the grammatical restrictions of the system of PM. PM sparked interest in symbolic logic and advanced the subject, popularizing it and demonstrating its power.[4] The Modern Library placed PM 23rd in their list of the top 100 English-language nonfiction books of the twentieth century.[5] The Principia covered only set theory, cardinal numbers, ordinal numbers, and real numbers. Deeper theorems from real analysis were not included, but by the end of the third volume it was clear to experts that a large amount of known mathematics could in principle be developed in the adopted formalism. It was also clear how lengthy such a development would be. A fourth volume on the foundations of geometry had been planned, but the authors admitted to intellectual exhaustion upon completion of the third. As noted in the criticism of the theory by Kurt Gödel (below), unlike a formalist theory, the "logicistic" theory of PM has no "precise statement of the syntax of the formalism".
Furthermore in the theory, it is almost immediately observable thatinterpretations(in the sense ofmodel theory) are presented in terms oftruth-valuesfor the behaviour of the symbols "⊢" (assertion of truth), "~" (logical not), and "V" (logical inclusive OR). Truth-values:PMembeds the notions of "truth" and "falsity" in the notion "primitive proposition". A raw (pure) formalist theory would not provide the meaning of the symbols that form a "primitive proposition"—the symbols themselves could be absolutely arbitrary and unfamiliar. The theory would specify onlyhow the symbols behave based on the grammar of the theory. Then later, byassignmentof "values", a model would specify aninterpretationof what the formulas are saying. Thus in the formalKleenesymbol set below, the "interpretation" of what the symbols commonly mean, and by implication how they end up being used, is given in parentheses, e.g., "¬ (not)". But this is not a pure Formalist theory. The following formalist theory is offered as contrast to the logicistic theory ofPM. A contemporary formal system would be constructed as follows: The theory ofPMhas both significant similarities, and similar differences, to a contemporary formal theory.[clarification needed]Kleene states that "this deduction of mathematics from logic was offered as intuitive axiomatics. The axioms were intended to be believed, or at least to be accepted as plausible hypotheses concerning the world".[10]Indeed, unlike a Formalist theory that manipulates symbols according to rules of grammar,PMintroduces the notion of "truth-values", i.e., truth and falsity in thereal-worldsense, and the "assertion of truth" almost immediately as the fifth and sixth elements in the structure of the theory (PM1962:4–36): Cf.PM1962:90–94, for the first edition: Thefirstedition (see discussion relative to the second edition, below) begins with a definition of the sign "⊃" ✱1.01.p⊃q.=.~p∨q.Df. ✱1.1. Anything implied by a true elementary proposition is true.Ppmodus ponens (✱1.11was abandoned in the second edition.) ✱1.2. ⊦:p∨p.⊃.p.Ppprinciple of tautology ✱1.3. ⊦:q.⊃.p∨q.Ppprinciple of addition ✱1.4. ⊦:p∨q.⊃.q∨p.Ppprinciple of permutation ✱1.5. ⊦:p∨ (q∨r).⊃.q∨ (p∨r).Ppassociative principle ✱1.6. ⊦:.q⊃r.⊃:p∨q.⊃.p∨r.Ppprinciple of summation ✱1.7. Ifpis an elementary proposition, ~pis an elementary proposition.Pp ✱1.71. Ifpandqare elementary propositions,p∨qis an elementary proposition.Pp ✱1.72. If φpand ψpare elementary propositional functions which take elementary propositions as arguments, φp∨ ψpis an elementary proposition.Pp Together with the "Introduction to the Second Edition", the second edition's Appendix A abandons the entire section✱9. This includes six primitive propositions✱9through✱9.15together with the Axioms of reducibility. The revised theory is made difficult by the introduction of theSheffer stroke("|") to symbolise "incompatibility" (i.e., if both elementary propositionspandqare true, their "stroke"p|qis false), the contemporary logicalNAND(not-AND). In the revised theory, the Introduction presents the notion of "atomic proposition", a "datum" that "belongs to the philosophical part of logic". These have no parts that are propositions and do not contain the notions "all" or "some". For example: "this is red", or "this is earlier than that". Such things can existad finitum, i.e., even an "infinite enumeration" of them to replace "generality" (i.e., the notion of "for all").[12]PMthen "advance[s] to molecular propositions" that are all linked by "the stroke". 
Definitions give equivalences for "~", "∨", "⊃", and ".". The new introduction defines "elementary propositions" as atomic and molecular positions together. It then replaces all the primitive propositions✱1.2to✱1.72with a single primitive proposition framed in terms of the stroke: The new introduction keeps the notation for "there exists" (now recast as "sometimes true") and "for all" (recast as "always true"). Appendix A strengthens the notion of "matrix" or "predicative function" (a "primitive idea",PM1962:164) and presents four new Primitive propositions as✱8.1–✱8.13. ✱88. Multiplicative axiom ✱120. Axiom of infinity In simple type theory objects are elements of various disjoint "types". Types are implicitly built up as follows. If τ1,...,τmare types then there is a type (τ1,...,τm) that can be thought of as the class of propositional functions of τ1,...,τm(which in set theory is essentially the set of subsets of τ1×...×τm). In particular there is a type () of propositions, and there may be a type ι (iota) of "individuals" from which other types are built. Russell and Whitehead's notation for building up types from other types is rather cumbersome, and the notation here is due toChurch. In theramified type theoryof PM all objects are elements of various disjoint ramified types. Ramified types are implicitly built up as follows. If τ1,...,τm,σ1,...,σnare ramified types then as in simple type theory there is a type (τ1,...,τm,σ1,...,σn) of "predicative" propositional functions of τ1,...,τm,σ1,...,σn. However, there are also ramified types (τ1,...,τm|σ1,...,σn) that can be thought of as the classes of propositional functions of τ1,...τmobtained from propositional functions of type (τ1,...,τm,σ1,...,σn) by quantifying over σ1,...,σn. Whenn=0 (so there are no σs) these propositional functions are called predicative functions or matrices. This can be confusing because modern mathematical practice does not distinguish between predicative and non-predicative functions, and in any case PM never defines exactly what a "predicative function" actually is: this is taken as a primitive notion. Russell and Whitehead found it impossible to develop mathematics while maintaining the difference between predicative and non-predicative functions, so they introduced theaxiom of reducibility, saying that for every non-predicative function there is a predicative function taking the same values. In practice this axiom essentially means that the elements of type (τ1,...,τm|σ1,...,σn) can be identified with the elements of type (τ1,...,τm), which causes the hierarchy of ramified types to collapse down to simple type theory. (Strictly speaking, PM allows two propositional functions to be different even if they take the same values on all arguments; this differs from modern mathematical practice where one normally identifies two such functions.) InZermeloset theory one can model the ramified type theory of PM as follows. One picks a set ι to be the type of individuals. For example, ι might be the set of natural numbers, or the set of atoms (in a set theory with atoms) or any other set one is interested in. Then if τ1,...,τmare types, the type (τ1,...,τm) is the power set of the product τ1×...×τm, which can also be thought of informally as the set of (propositional predicative) functions from this product to a 2-element set {true,false}. 
The ramified type (τ1,...,τm|σ1,...,σn) can be modeled as the product of the type (τ1,...,τm,σ1,...,σn) with the set of sequences ofnquantifiers (∀ or ∃) indicating which quantifier should be applied to each variable σi. (One can vary this slightly by allowing the σs to be quantified in any order, or allowing them to occur before some of the τs, but this makes little difference except to the bookkeeping.) The introduction to the second edition cautions: One point in regard to which improvement is obviously desirable is the axiom of reducibility ... . This axiom has a purely pragmatic justification ... but it is clearly not the sort of axiom with which we can rest content. On this subject, however, it cannot be said that a satisfactory solution is yet obtainable. DrLeon Chwistek[Theory of Constructive Types] took the heroic course of dispensing with the axiom without adopting any substitute; from his work it is clear that this course compels us to sacrifice a great deal of ordinary mathematics. There is another course, recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur in a proposition through its values. (...) [Working through the consequences] ... the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2n > n breaks down unless n is finite.[13] It might be possible to sacrifice infinite well-ordered series to logical rigour, but the theory of real numbers is an integral part of ordinary mathematics, and can hardly be the subject of reasonable doubt. We are therefore justified (sic) in supposing that some logical axioms which is true will justify it. The axiom required may be more restricted than the axiom of reducibility, but if so, it remains to be discovered.[14] One author[4]observes that "The notation in that work has been superseded by the subsequent development of logic during the 20th century, to the extent that the beginner has trouble reading PM at all"; while much of the symbolic content can be converted to modern notation, the original notation itself is "a subject of scholarly dispute", and some notation "embodies substantive logical doctrines so that it cannot simply be replaced by contemporary symbolism".[15] Kurt Gödelwas harshly critical of the notation: "What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs."[16]This is reflected in the example below of the symbols "p", "q", "r" and "⊃" that can be formed into the string "p⊃q⊃r".PMrequires adefinitionof what this symbol-string means in terms of other symbols; in contemporary treatments the "formation rules" (syntactical rules leading to "well formed formulas") would have prevented the formation of this string. Source of the notation: Chapter I "Preliminary Explanations of Ideas and Notations" begins with the source of the elementary parts of the notation (the symbols =⊃≡−ΛVε and the system of dots): PM changed Peano's Ɔ to ⊃, and also adopted a few of Peano's later symbols, such as ℩ and ι, and Peano's practice of turning letters upside down. 
PMadopts the assertion sign "⊦" from Frege's 1879Begriffsschrift:[18] Thus to assert a propositionpPMwrites: (Observe that, as in the original, the left dot is square and of greater size than the full stop on the right.) Most of the rest of the notation in PM was invented by Whitehead.[20] PM's dots[21]are used in a manner similar to parentheses. Each dot (or multiple dot) represents either a left or right parenthesis or the logical symbol ∧. More than one dot indicates the "depth" of the parentheses, for example, ".", ":" or ":.", "::". However the position of the matching right or left parenthesis is not indicated explicitly in the notation but has to be deduced from some rules that are complex and at times ambiguous. Moreover, when the dots stand for a logical symbol ∧ its left and right operands have to be deduced using similar rules. First one has to decide based on context whether the dots stand for a left or right parenthesis or a logical symbol. Then one has to decide how far the other corresponding parenthesis is: here one carries on until one meets either a larger number of dots, or the same number of dots next that have equal or greater "force", or the end of the line. Dots next to the signs ⊃, ≡,∨, =Df have greater force than dots next to (x), (∃x) and so on, which have greater force than dots indicating a logical product ∧. Example 1. The line corresponds to The two dots standing together immediately following the assertion-sign indicate that what is asserted is the entire line: since there are two of them, their scope is greater than that of any of the single dots to their right. They are replaced by a left parenthesis standing where the dots are and a right parenthesis at the end of the formula, thus: (In practice, these outermost parentheses, which enclose an entire formula, are usually suppressed.) The first of the single dots, standing between two propositional variables, represents conjunction. It belongs to the third group and has the narrowest scope. Here it is replaced by the modern symbol for conjunction "∧", thus The two remaining single dots pick out the main connective of the whole formula. They illustrate the utility of the dot notation in picking out those connectives which are relatively more important than the ones which surround them. The one to the left of the "⊃" is replaced by a pair of parentheses, the right one goes where the dot is and the left one goes as far to the left as it can without crossing a group of dots of greater force, in this case the two dots which follow the assertion-sign, thus The dot to the right of the "⊃" is replaced by a left parenthesis which goes where the dot is and a right parenthesis which goes as far to the right as it can without going beyond the scope already established by a group of dots of greater force (in this case the two dots which followed the assertion-sign). So the right parenthesis which replaces the dot to the right of the "⊃" is placed in front of the right parenthesis which replaced the two dots following the assertion-sign, thus Example 2, with double, triple, and quadruple dots: stands for Example 3, with a double dot indicating a logical symbol (from volume 1, page 10): stands for where the double dot represents the logical symbol ∧ and can be viewed as having the higher priority as a non-logical single dot. Later in section✱14, brackets "[ ]" appear, and in sections✱20and following, braces "{ }" appear. Whether these symbols have specific meanings or are just for visual clarification is unclear. 
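As a worked illustration (mine, not PM's own exposition, since the formulas of the examples above are not reproduced here), the dot notation of ✱1.4 quoted earlier can be expanded step by step:

```latex
% PM's ✱1.4:   ⊦:p∨q.⊃.q∨p
% Step 1: the ":" after the assertion sign scopes the entire line.
% Step 2: the single dots flanking "⊃" mark it as the main connective,
%         so each is replaced by a parenthesis reaching out to that scope.
\vdash \; \bigl( (p \lor q) \supset (q \lor p) \bigr)
```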
Unfortunately the single dot (but also ":", ":.", "::", etc.) is also used to symbolise "logical product" (contemporary logical AND often symbolised by "&" or "∧"). Logical implication is represented by Peano's "Ɔ" simplified to "⊃", logical negation is symbolised by an elongated tilde, i.e., "~" (contemporary "~" or "¬"), the logical OR by "v". The symbol "=" together with "Df" is used to indicate "is defined as", whereas in sections✱13and following, "=" is defined as (mathematically) "identical with", i.e., contemporary mathematical "equality" (cf. discussion in section✱13). Logical equivalence is represented by "≡" (contemporary "if and only if"); "elementary" propositional functions are written in the customary way, e.g., "f(p)", but later the function sign appears directly before the variable without parenthesis e.g., "φx", "χx", etc. Example,PMintroduces the definition of "logical product" as follows: Translation of the formulas into contemporary symbols: Various authors use alternate symbols, so no definitive translation can be given. However, because of criticisms such as that ofKurt Gödelbelow, the best contemporary treatments will be very precise with respect to the "formation rules" (the syntax) of the formulas. The first formula might be converted into modern symbolism as follows:[22] alternately alternately etc. The second formula might be converted as follows: But note that this is not (logically) equivalent to (p→ (q→r)) nor to ((p→q) →r), and these two are not logically equivalent either. These sections concern what is now known aspredicate logic, and predicate logic with identity (equality). Section✱10: The existential and universal "operators":PMadds "(x)" to represent the contemporary symbolism "for allx" i.e., " ∀x", and it uses a backwards serifed E to represent "there exists anx", i.e., "(Ǝx)", i.e., the contemporary "∃x". The typical notation would be similar to the following: Sections✱10, ✱11, ✱12: Properties of a variable extended to all individuals: section✱10introduces the notion of "a property" of a "variable".PMgives the example: φ is a function that indicates "is a Greek", and ψ indicates "is a man", and χ indicates "is a mortal" these functions then apply to a variablex.PMcan now write, and evaluate: The notation above means "for allx,xis a man". Given a collection of individuals, one can evaluate the above formula for truth or falsity. For example, given the restricted collection of individuals { Socrates, Plato, Russell, Zeus } the above evaluates to "true" if we allow for Zeus to be a man. But it fails for: because Russell is not Greek. And it fails for because Zeus is not a mortal. Equipped with this notationPMcan create formulas to express the following: "If all Greeks are men and if all men are mortals then all Greeks are mortals". (PM1962:138) Another example: the formula: means "The symbols representing the assertion 'There exists at least onexthat satisfies function φ' is defined by the symbols representing the assertion 'It's not true that, given all values ofx, there are no values ofxsatisfying φ'". The symbolisms ⊃xand "≡x" appear at✱10.02and✱10.03. Both are abbreviations for universality (i.e., for all) that bind the variablexto the logical operator. Contemporary notation would have simply used parentheses outside of the equality ("=") sign: PMattributes the first symbolism to Peano. Section✱11applies this symbolism to two variables. Thus the following notations: ⊃x, ⊃y, ⊃x, ycould all appear in a single formula. 
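Returning to the restricted collection { Socrates, Plato, Russell, Zeus } used above, the evaluation of such universally quantified formulas can be spelled out by brute force. The Python sketch below is illustrative only; the particular truth-value assignments (for example, counting Zeus as a man and as Greek) are assumptions made for the example:

```python
# The restricted collection of individuals from the text.
domain = ["Socrates", "Plato", "Russell", "Zeus"]

# Assumed truth-value assignments for φ ("is a Greek"), ψ ("is a man"),
# χ ("is a mortal"); Zeus is counted as a man, as the text allows.
is_greek  = {"Socrates": True, "Plato": True, "Russell": False, "Zeus": True}
is_man    = {"Socrates": True, "Plato": True, "Russell": True,  "Zeus": True}
is_mortal = {"Socrates": True, "Plato": True, "Russell": True,  "Zeus": False}

def for_all(pred):               # "(x). pred x" evaluated over the finite domain
    return all(pred(x) for x in domain)

print(for_all(lambda x: is_man[x]))     # True  -- "for all x, x is a man"
print(for_all(lambda x: is_greek[x]))   # False -- fails because Russell is not Greek
print(for_all(lambda x: is_mortal[x]))  # False -- fails because Zeus is not a mortal

# "If all Greeks are men and all men are mortals then all Greeks are mortals":
antecedent = (for_all(lambda x: not is_greek[x] or is_man[x])
              and for_all(lambda x: not is_man[x] or is_mortal[x]))
consequent = for_all(lambda x: not is_greek[x] or is_mortal[x])
print(not antecedent or consequent)     # True (vacuously here: not all men are mortal)
```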
Section✱12reintroduces the notion of "matrix" (contemporarytruth table), the notion of logical types, and in particular the notions offirst-orderandsecond-orderfunctions and propositions. New symbolism "φ!x" represents any value of a first-order function. If a circumflex "^" is placed over a variable, then this is an "individual" value ofy, meaning that "ŷ" indicates "individuals" (e.g., a row in a truth table); this distinction is necessary because of the matrix/extensional nature of propositional functions. Now equipped with the matrix notion,PMcan assert its controversialaxiom of reducibility: a function of one or two variables (two being sufficient forPM's use)where all its values are given(i.e., in its matrix) is (logically) equivalent ("≡") to some "predicative" function of the same variables. The one-variable definition is given below as an illustration of the notation (PM1962:166–167): ✱12.1⊢:(Ǝf):φx.≡x.f!xPp; This means: "We assert the truth of the following: There exists a functionfwith the property that: given all values ofx, their evaluations in function φ (i.e., resulting their matrix) is logically equivalent to somefevaluated at those same values ofx. (and vice versa, hence logical equivalence)". In other words: given a matrix determined by property φ applied to variablex, there exists a functionfthat, when applied to thexis logically equivalent to the matrix. Or: every matrix φxcan be represented by a functionfapplied tox, and vice versa. ✱13: The identity operator "=": This is a definition that uses the sign in two different ways, as noted by the quote fromPM: means: The not-equals sign "≠" makes its appearance as a definition at✱13.02. ✱14: Descriptions: From thisPMemploys two new symbols, a forward "E" and an inverted iota "℩". Here is an example: This has the meaning: The text leaps from section✱14directly to the foundational sections✱20 GENERAL THEORY OF CLASSESand✱21 GENERAL THEORY OF RELATIONS. "Relations" are what is known in contemporaryset theoryas sets ofordered pairs. Sections✱20and✱22introduce many of the symbols still in contemporary usage. These include the symbols "ε", "⊂", "∩", "∪", "–", "Λ", and "V": "ε" signifies "is an element of" (PM1962:188); "⊂" (✱22.01) signifies "is contained in", "is a subset of"; "∩" (✱22.02) signifies the intersection (logical product) of classes (sets); "∪" (✱22.03) signifies the union (logical sum) of classes (sets); "–" (✱22.03) signifies negation of a class (set); "Λ" signifies the null class; and "V" signifies the universal class or universe of discourse. Small Greek letters (other than "ε", "ι", "π", "φ", "ψ", "χ", and "θ") represent classes (e.g., "α", "β", "γ", "δ", etc.) (PM1962:188): When applied to relations in section✱23 CALCULUS OF RELATIONS, the symbols "⊂", "∩", "∪", and "–" acquire a dot: for example: "⊍", "∸".[26] The notion, and notation, of "a class" (set): In the first editionPMasserts that no new primitive ideas are necessary to define what is meant by "a class", and only two new "primitive propositions" called theaxioms of reducibilityfor classes and relations respectively (PM1962:25).[27]But before this notion can be defined,PMfeels it necessary to create a peculiar notation "ẑ(φz)" that it calls a "fictitious object". (PM1962:188) At leastPMcan tell the reader how these fictitious objects behave, because "A class is wholly determinate when its membership is known, that is, there cannot be two different classes having the same membership" (PM1962:26). 
This is symbolised by the following equality (similar to✱13.01above: Perhaps the above can be made clearer by the discussion of classes inIntroduction to the Second Edition, which disposes of theAxiom of Reducibilityand replaces it with the notion: "All functions of functions are extensional" (PM1962:xxxix), i.e., This has the reasonable meaning that "IF for all values ofxthetruth-valuesof the functions φ and ψ ofxare [logically] equivalent, THEN the function ƒ of a given φẑand ƒ of ψẑare [logically] equivalent."PMasserts this is "obvious": Observe the change to the equality "=" sign on the right.PMgoes on to state that will continue to hang onto the notation "ẑ(φz)", but this is merely equivalent to φẑ, and this is a class. (all quotes:PM1962:xxxix). According toCarnap's "Logicist Foundations of Mathematics", Russell wanted a theory that could plausibly be said to derive all of mathematics from purely logical axioms. However,Principia Mathematicarequired, in addition to the basic axioms of type theory, three further axioms that seemed to not be true as mere matters of logic, namely theaxiom of infinity, theaxiom of choice, and theaxiom of reducibility. Since the first two were existential axioms, Russell phrased mathematical statements depending on them as conditionals. But reducibility was required to be sure that the formal statements even properly express statements of real analysis, so that statements depending on it could not be reformulated as conditionals.Frank Ramseytried to argue that Russell's ramification of the theory of types was unnecessary, so that reducibility could be removed, but these arguments seemed inconclusive. Beyond the status of the axioms aslogical truths, one can ask the following questions about any system such as PM: Propositional logicitself was known to be consistent, but the same had not been established forPrincipia's axioms of set theory. (SeeHilbert's second problem.) Russell and Whitehead suspected that the system in PM is incomplete: for example, they pointed out that it does not seem powerful enough to show that the cardinal ℵωexists. However, one can ask if some recursively axiomatizable extension of it is complete and consistent. In 1930,Gödel's completeness theoremshowed that first-order predicate logic itself was complete in a much weaker sense—that is, any sentence that is unprovable from a given set of axioms must actually be false in somemodelof the axioms. However, this is not the stronger sense of completeness desired forPrincipia Mathematica, since a given system of axioms (such as those ofPrincipia Mathematica) may have many models, in some of which a given statement is true and in others of which that statement is false, so that the statement is left undecided by the axioms. Gödel's incompleteness theoremscast unexpected light on these two related questions. Gödel's first incompleteness theorem showed that no recursive extension ofPrincipiacould be both consistent and complete for arithmetic statements. (As mentioned above, Principia itself was already known to be incomplete for some non-arithmetic statements.) According to the theorem, within every sufficiently powerful recursivelogical system(such asPrincipia), there exists a statementGthat essentially reads, "The statementGcannot be proved." Such a statement is a sort ofCatch-22: ifGis provable, then it is false, and the system is therefore inconsistent; and ifGis not provable, then it is true, and the system is therefore incomplete. 
Gödel's second incompleteness theorem(1931) shows that noformal systemextending basic arithmetic can be used to prove its own consistency. Thus, the statement "there are no contradictions in thePrincipiasystem" cannot be proven in thePrincipiasystem unless therearecontradictions in the system (in which case it can be proven both true and false). By the second edition ofPM, Russell had removed hisaxiom of reducibilityto a new axiom (although he does not state it as such). Gödel 1944:126 describes it this way: This change is connected with the new axiom that functions can occur in propositions only "through their values", i.e., extensionally (...) [this is] quite unobjectionable even from the constructive standpoint (...) provided that quantifiers are always restricted to definite orders". This change from a quasi-intensionalstance to a fullyextensionalstance also restrictspredicate logicto the second order, i.e. functions of functions: "We can decide that mathematics is to confine itself to functions of functions which obey the above assumption". This new proposal resulted in a dire outcome. An "extensional stance" and restriction to a second-order predicate logic means that a propositional function extended to all individuals such as "All 'x' are blue" now has to list all of the 'x' that satisfy (are true in) the proposition, listing them in a possibly infinite conjunction: e.g.x1∧x2∧ . . . ∧xn∧ . . .. Ironically, this change came about as the result of criticism fromLudwig Wittgensteinin his 1919Tractatus Logico-Philosophicus. As described by Russell in the Introduction to the Second Edition ofPM: There is another course, recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur in a proposition through its values. (...) [Working through the consequences] it appears that everything in Vol. I remains true (though often new proofs are required); the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2n>nbreaks down unlessnis finite." In other words, the fact that an infinite list cannot realistically be specified means that the concept of "number" in the infinite sense (i.e. the continuum) cannot be described by the new theory proposed inPM Second Edition. Wittgenstein in hisLectures on the Foundations of Mathematics, Cambridge 1939criticisedPrincipiaon various grounds, such as: Wittgenstein did, however, concede thatPrincipiamay nonetheless make some aspects of everyday arithmetic clearer. Gödeloffered a "critical but sympathetic discussion of the logicistic order of ideas" in his 1944 article "Russell's Mathematical Logic".[28]He wrote: It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is] so greatly lacking in formal precision in the foundations (contained in✱1–✱21ofPrincipia[i.e., sections✱1–✱5(propositional logic),✱8–14(predicate logic with identity/equality),✱20(introduction to set theory), and✱21(introduction to relations theory)]) that it represents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism. 
Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs ... The matter is especially doubtful for the rule of substitution and of replacing defined symbols by theirdefiniens... it is chiefly the rule of substitution which would have to be proved.[16] This section describes the propositional and predicate calculus, and gives the basic properties of classes, relations, and types. This part covers various properties of relations, especially those needed for cardinal arithmetic. This covers the definition and basic properties of cardinals. A cardinal is defined to be an equivalence class of similar classes (as opposed toZFC, where a cardinal is a special sort of von Neumann ordinal). Each type has its own collection of cardinals associated with it, and there is a considerable amount of bookkeeping necessary for comparing cardinals of different types. PM define addition, multiplication and exponentiation of cardinals, and compare different definitions of finite and infinite cardinals. ✱120.03 is the Axiom of infinity. A "relation-number" is an equivalence class of isomorphic relations. PM defines analogues of addition, multiplication, and exponentiation for arbitrary relations. The addition and multiplication is similar to the usual definition of addition and multiplication of ordinals in ZFC, though the definition of exponentiation of relations in PM is not equivalent to the usual one used in ZFC. This covers series, which is PM's term for what is now called a totally ordered set. In particular it covers complete series, continuous functions between series with the order topology (though of course they do not use this terminology), well-ordered series, and series without "gaps" (those with a member strictly between any two given members). This section constructs the ring of integers, the fields of rational and real numbers, and "vector-families", which are related to what are now called torsors over abelian groups. This section compares the system in PM with the usual mathematical foundations of ZFC. The system of PM is roughly comparable in strength with Zermelo set theory (or more precisely a version of it where the axiom of separation has all quantifiers bounded). Apart from corrections of misprints, the main text of PM is unchanged between the first and second editions. The main text in Volumes 1 and 2 was reset, so that it occupies fewer pages in each. In the second edition, Volume 3 was not reset, being photographically reprinted with the same page numbering; corrections were still made. The total number of pages (excluding the endpapers) in the first edition is 1,996; in the second, 2,000. Volume 1 has five new additions: In 1962, Cambridge University Press published a shortened paperback edition containing parts of the second edition of Volume 1: the new introduction (and the old), the main text up to *56, and Appendices A and C.. The first edition was reprinted in 2009 by Merchant Books,ISBN978-1-60386-182-3,ISBN978-1-60386-183-0,ISBN978-1-60386-184-7. Andrew D. Irvinesays thatPMsparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness.[4]PMwas in part brought about by an interest inlogicism, the view on which all mathematical truths are logical truths. 
Though flawed, PM would be influential in several later advances in meta-logic, including Gödel's incompleteness theorems.[citation needed] The logical notation in PM was not widely adopted, possibly because its foundations are often considered a form of Zermelo–Fraenkel set theory.[citation needed] Scholarly, historical, and philosophical interest in PM is great and ongoing, and mathematicians continue to work with PM, whether for the historical reason of understanding the text or its authors, or for furthering insight into the formalizations of math and logic.[citation needed] The Modern Library placed PM 23rd in their list of the top 100 English-language nonfiction books of the twentieth century.[5]
https://en.wikipedia.org/wiki/Principia_Mathematica
An application server is a server that hosts applications[1] or software that delivers a business application through a communication protocol.[2] For a typical web application, the application server sits behind the web servers. An application server framework is a service layer model. It includes software components available to a software developer through an application programming interface. An application server may have features such as clustering, fail-over, and load-balancing. The goal is for developers to focus on the business logic.[3] Jakarta EE (formerly Java EE or J2EE) defines the core set of APIs and features of Java application servers. The Jakarta EE infrastructure is partitioned into logical containers. Microsoft's .NET positions its middle-tier applications and services infrastructure in the Windows Server operating system and the .NET Framework technologies in the role of an application server.[4] The Windows Application Server role includes Internet Information Services (IIS) to provide web server support, the .NET Framework to provide application support, ASP.NET to provide server-side scripting, COM+ for application component communication, Message Queuing for multithreaded processing, and the Windows Communication Foundation (WCF) for application communication.[5] PHP application servers run and manage PHP applications. Mobile application servers provide data delivery to mobile devices. Core capabilities of mobile application services include: Although most standards-based infrastructure (including SOAs) is designed to connect independently of any vendor, product or technology, most enterprises have trouble connecting back-end systems to mobile applications, because mobile devices add the following technological challenges:[6] An application server can be deployed:
https://en.wikipedia.org/wiki/Application_server
Infunctional programming,fold(also termedreduce,accumulate,aggregate,compress, orinject) refers to a family ofhigher-order functionsthatanalyzearecursivedata structure and through use of a given combining operation, recombine the results ofrecursivelyprocessing its constituent parts, building up a return value. Typically, a fold is presented with a combiningfunction, a topnodeof adata structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure'shierarchy, using the function in a systematic way. Folds are in a sense dual tounfolds, which take aseedvalue and apply a functioncorecursivelyto decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on itsterminalvalues and the recursive results (catamorphism, versusanamorphismof unfolds). Folds can be regarded as consistently replacing the structural components of a data structure with functions and values.Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly callednil([]), or is constructed by prefixing an element in front of another list, creating what is called aconsnode(Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil)))))), resulting from application of aconsfunction (written down as a colon(:)inHaskell). One can view a fold on lists asreplacingthenilat the end of the list with a specific value, andreplacingeachconswith a specific function. These replacements can be viewed as a diagram: There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function: These pictures illustraterightandleftfold of a listvisually. They also highlight the fact thatfoldr (:) []is the identity function on lists (ashallow copyinLispparlance), as replacingconswithconsandnilwithnilwill not change the result. The left fold diagram suggests an easy way to reverse a list,foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. Another easy result to see from this vantage-point is to write the higher-ordermap functionin terms offoldr, by composing the function to act on the elements withcons, as: where the period (.) is an operator denotingfunction composition. This way of looking at things provides a simple route to designing fold-like functions on otheralgebraic data typesand structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as acatamorphism. The folding of the list[1,2,3,4,5]with the addition operator would result in 15, the sum of the elements of the list[1,2,3,4,5]. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving1 + 2 + 3 + 4 + 5.[1] In the example above, + is anassociative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result's value. 
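The effect of the combining order is easy to see concretely. A short Python sketch (Python's functools.reduce is a left fold; the particular numbers are just an example):

```python
from functools import reduce

# With an associative operation the grouping does not matter:
print(reduce(lambda a, b: a + b, [1, 2, 3, 4, 5]))   # 15, as in the text

# With a non-associative operation it does: left-to-right reduction of
# subtraction gives (((1 - 2) - 3) - 4) - 5, not 1 - (2 - (3 - (4 - 5))).
print(reduce(lambda a, b: a - b, [1, 2, 3, 4, 5]))   # -13
print(1 - (2 - (3 - (4 - 5))))                       # 3
```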
On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called aright fold), or by combining the result of recursively combining all elements but the last one, with the last element (called aleft fold). This corresponds to a binaryoperatorbeing either right-associative or left-associative, inHaskell's orProlog's terminology. With a right fold, the sum would be parenthesized as1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as(((1 + 2) + 3) + 4) + 5. In practice, it is convenient and natural to have an initial value which in the case of a right fold is used when one reaches the end of the list, and in the case of a left fold is what is initially combined with the first element of the list. In the example above, the value 0 (theadditive identity) would be chosen as an initial value, giving1 + (2 + (3 + (4 + (5 + 0))))for the right fold, and((((0 + 1) + 2) + 3) + 4) + 5for the left fold. For multiplication, an initial choice of 0 wouldn't work:0 * 1 * 2 * 3 * 4 * 5 = 0. Theidentity elementfor multiplication is 1. This would give us the outcome1 * 1 * 2 * 3 * 4 * 5 = 120 = 5!. The use of an initial value is necessary when the combining functionfis asymmetrical in its types (e.g.a → b → b), i.e. when the type of its result is different from the type of the list's elements. Then an initial value must be used, with the same type as that off's result, for alinearchain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function. If it is the second argument that must be of the same type as the result, thenfcould be seen as a binary operation thatassociates on the right, and vice versa. When the function is amagma, i.e. symmetrical in its types (a → a → a), and the result type is the same as the list elements' type, the parentheses may be placed in arbitrary fashion thus creating abinary treeof nested sub-expressions, e.g.,((1 + 2) + (3 + 4)) + 5. If the binary operationfis associative this value will be well-defined, i.e., same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency iffisnon-strict. Whereas linear folds arenode-orientedand operate in a consistent manner for eachnodeof alist, tree-like folds are whole-list oriented and operate in a consistent manner acrossgroupsof nodes. One often wants to choose theidentity elementof the operationfas the initial valuez. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants offoldrandfoldlwhich use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are calledfoldr1andfoldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element. These folds use type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. 
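A minimal Python transliteration of the two directions, with the explicit initial values described above (iterative rather than lazy, so it is only a sketch of the finite-list behaviour):

```python
def foldr(f, z, xs):
    """Right fold: f(x1, f(x2, ... f(xn, z)))."""
    acc = z
    for x in reversed(xs):          # simple iterative version for finite lists
        acc = f(x, acc)
    return acc

def foldl(f, z, xs):
    """Left fold: f(... f(f(z, x1), x2) ..., xn)."""
    acc = z
    for x in xs:
        acc = f(acc, x)
    return acc

print(foldr(lambda x, acc: x + acc, 0, [1, 2, 3, 4, 5]))  # 1+(2+(3+(4+(5+0)))) = 15
print(foldl(lambda acc, x: acc + x, 0, [1, 2, 3, 4, 5]))  # ((((0+1)+2)+3)+4)+5 = 15

# A type-asymmetric combiner (a -> b -> b): fold integers into a string.
print(foldr(lambda x, acc: str(x) + acc, "", [1, 2, 3]))  # "123"
```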
Richard Bird in his 2010 book proposes[2]"a general fold function on non-empty lists"foldrnwhich transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use type-asymmetrical binary operation like the regularfoldrto produce a result of type different from the list's elements type. Using Haskell as an example,foldlandfoldrcan be formulated in a few equations. If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element. If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest. Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists: In the case offoldifunction, to avoid its runaway evaluation onindefinitelydefined lists the functionfmustnot alwaysdemand its second argument's value, at least not all of it, or not immediately (seeexamplebelow). In the presence oflazy, ornon-strictevaluation,foldrwill immediately return the application offto the head of the list and the recursive case of folding over the rest of the list. Thus, iffis able to produce some part of its result without reference to the recursive case on its "right" i.e., in itssecondargument, and the rest of the result is never demanded, then the recursion will stop (e.g.,head==foldr(\ab->a)(error"empty list")). This allows right folds to operate on infinite lists. By contrast,foldlwill immediately call itself with new parameters until it reaches the end of the list. Thistail recursioncan be efficiently compiled as a loop, but can't deal with infinite lists at all — it will recurse forever in aninfinite loop. Having reached the end of the list, anexpressionis in effect built byfoldlof nested left-deepeningf-applications, which is then presented to the caller to be evaluated. Were the functionfto refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on itslefti.e., in itsfirstargument), then the recursion would stop. This means that whilefoldrrecurseson the right, it allows for a lazy combining function to inspect list's elements from the left; and conversely, whilefoldlrecurseson the left, it allows for a lazy combining function to inspect list's elements from the right, if it so chooses (e.g.,last==foldl(\ab->b)(error"empty list")). Reversing a list is also tail-recursive (it can be implemented usingrev=foldl(\ysx->x:ys)[]). Onfinitelists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf.1+>(2+>(3+>0))==((0<+3)<+2)<+1), with a modification to the functionfso it reverses the order of its arguments (i.e.,foldrfz==foldl(flipf)z.foldl(flip(:))[]), tail-recursively building a representation of expression that right-fold would build. The extraneous intermediate list structure can be eliminated with thecontinuation-passing styletechnique,foldrfzxs==foldl(\kx->k.fx)idxsz; similarly,foldlfzxs==foldr(\xk->k.flipfx)idxsz(flipis only needed in languages like Haskell with its flipped order of arguments to the combining function offoldlunlike e.g., in Scheme where the same order of arguments is used for combining functions to bothfoldlandfoldr). 
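The last identity above, which expresses a right fold as a left fold over continuations, can be checked concretely. A Python transliteration (Python is strict, so this only demonstrates the algebraic equivalence on finite lists; the helper names are mine):

```python
def foldr(f, z, xs):
    acc = z
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def foldr_via_foldl_cps(f, z, xs):
    # foldl (\k x -> k . f x) id xs, then apply the built-up continuation to z.
    k = lambda v: v
    for x in xs:
        k = (lambda k, x: lambda v: k(f(x, v)))(k, x)
    return k(z)

sub = lambda a, b: a - b
xs = [1, 2, 3, 4, 5]
print(foldr(sub, 0, xs), foldr_via_foldl_cps(sub, 0, xs))  # both 3: 1-(2-(3-(4-(5-0))))
```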
Another technical point is that, in the case of left folds using lazy evaluation, the new initial parameter is not being evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is thefoldl'(note the apostrophe, pronounced 'prime') function in theData.Listlibrary (one needs to be aware of the fact though that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant space operation, when lazy evaluation of the final result is impossible or undesirable. Using aHaskellinterpreter, the structural transformations which fold functions perform can be illustrated by constructing a string: Infinite tree-like folding is demonstrated e.g., inrecursiveprimes production byunbounded sieve of EratosthenesinHaskell: where the functionunionoperates on ordered lists in a local manner to efficiently produce theirset union, andminustheirset difference. A finite prefix of primes is concisely defined as a folding of set difference operation over the lists of enumerated multiples of integers, as For finite lists, e.g.,merge sort(and its duplicates-removing variety,nubsort) could be easily defined using tree-like folding as with the functionmergea duplicates-preserving variant ofunion. Functionsheadandlastcould have been defined through folding as Scala also features the tree-like folds using the methodlist.fold(z)(op).[11] Fold is apolymorphicfunction. For anyghaving a definition thengcan be expressed as[12] Also, in alazy languagewith infinite lists, afixed point combinatorcan be implemented via fold,[13]proving that iterations can be reduced to folds:
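As a concrete reading of the merge-sort remark above, here is a Python sketch of a tree-like fold that repeatedly combines adjacent pairs, used to sort by merging (the helper names and the pairing strategy are my own choices):

```python
def merge(xs, ys):
    """Merge two sorted lists, keeping duplicates."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

def tree_fold(f, z, xs):
    """Fold by repeatedly combining adjacent pairs (balanced, tree-like)."""
    if not xs:
        return z
    while len(xs) > 1:
        xs = [f(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

def merge_sort(xs):
    # Each element becomes a one-element sorted list; the tree fold merges them.
    return tree_fold(merge, [], [[x] for x in xs])

print(merge_sort([5, 3, 8, 1, 4, 1]))   # [1, 1, 3, 4, 5, 8]
```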
https://en.wikipedia.org/wiki/Fold_(higher-order_function)
HAMMER is a high-availability 64-bit file system developed by Matthew Dillon for DragonFly BSD using B+ trees. Its major features include infinite NFS-exportable snapshots, master–multislave operation, configurable history retention, fsck-less mount, and checksums to deal with data corruption.[5] HAMMER also supports data block deduplication, meaning that identical data blocks will be stored only once on a file system.[6] A successor, HAMMER2, was announced in 2011 and became the default in DragonFly 5.2 (April 2018).[7] The HAMMER file system provides configurable fine-grained and coarse-grained filesystem histories with online snapshot availability. Up to 65536 master (read–write) and slave (read-only) pseudo file systems (PFSs), with independent individual retention parameters and inode numbering, may be created for each file system; a PFS may be mirrored to multiple slaves either locally or over a network connection with near real-time performance. No file system checking is required on remount.[5][8][9][10] HAMMER supports volumes of up to 1 EiB of storage capacity. The file system supports CRC checksumming of data and metadata, online layout correction and data deduplication, and dynamic inode allocation with an effectively unlimited number of inodes.[8][11][12] As of May 2020[update], regular maintenance is required to keep the file system clean and regain space after file deletions. By default, a cron job performs the necessary actions on DragonFly BSD daily. HAMMER does not support multi-master configurations.[8][10] HAMMER is optimized to reduce the number of physical I/O operations to cover the most likely path,[13] ensuring sequential access for optimal performance. The following performance-related improvements were introduced in July 2011:[14] HAMMER was developed specifically for DragonFly BSD to provide a feature-rich yet better-designed analogue[according to whom?] of the then increasingly popular ZFS. HAMMER was declared production-ready with DragonFly 2.2 in 2009;[9] in 2012, design-level work shifted onto HAMMER2, which was declared stable with DragonFly 5.2 in 2018. As of 2019[update], HAMMER is often referred to as HAMMER1 to avoid confusion with HAMMER2, although an official renaming has not happened. The two filesystems are independent of each other due to different on-disk formats,[15][16] and continue to receive updates and improvements independently.[17]
https://en.wikipedia.org/wiki/HAMMER_(file_system)
Multiple-try Metropolis (MTM) is a sampling method that is a modified form of the Metropolis–Hastings method, first presented by Liu, Liang, and Wong in 2000. It is designed to help the sampling trajectory converge faster, by increasing both the step size and the acceptance rate. In Markov chain Monte Carlo, the Metropolis–Hastings algorithm (MH) can be used to sample from a probability distribution which is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary. In many cases, one uses a Gaussian distribution centered on the current point in the probability space, of the form $Q(x'; x^{t}) = \mathcal{N}(x^{t}; \sigma^{2}I)$. This proposal distribution is convenient to sample from and may be the best choice if one has little knowledge about the target distribution, $\pi(x)$. If desired, one can use the more general multivariate normal distribution, $Q(x'; x^{t}) = \mathcal{N}(x^{t}; \Sigma)$, where $\Sigma$ is the covariance matrix which the user believes is similar to the target distribution. Although this method must converge to the stationary distribution in the limit of infinite sample size, in practice the progress can be exceedingly slow. If $\sigma^{2}$ is too large, almost all steps under the MH algorithm will be rejected. On the other hand, if $\sigma^{2}$ is too small, almost all steps will be accepted, and the Markov chain will be similar to a random walk through the probability space. In the simpler case of $Q(x'; x^{t}) = \mathcal{N}(x^{t}; I)$, we see that $N$ steps only take us a distance of about $\sqrt{N}$. In this event, the Markov chain will not fully explore the probability space in any reasonable amount of time. Thus the MH algorithm requires reasonable tuning of the scale parameter ($\sigma^{2}$ or $\Sigma$). Even if the scale parameter is well-tuned, as the dimensionality of the problem increases, progress can still remain exceedingly slow. To see this, again consider $Q(x'; x^{t}) = \mathcal{N}(x^{t}; I)$. In one dimension, this corresponds to a Gaussian distribution with mean 0 and variance 1. For one dimension, this distribution has a mean step of zero; however, the mean squared step size is given by $\langle x^{2} \rangle = \int_{-\infty}^{\infty} x^{2}\,\mathcal{N}(x; 0, 1)\,dx = 1$. As the number of dimensions increases, the expected step size becomes larger and larger. In $N$ dimensions, the probability of moving a radial distance $r$ is related to the chi distribution, and is given by $P(r) \propto r^{N-1} e^{-r^{2}/2}$. This distribution is peaked at $r = \sqrt{N-1}$, which is $\approx \sqrt{N}$ for large $N$. This means that the step size will increase roughly as the square root of the number of dimensions. For the MH algorithm, large steps will almost always land in regions of low probability, and therefore be rejected. If we now add the scale parameter $\sigma^{2}$ back in, we find that to retain a reasonable acceptance rate, we must make the transformation $\sigma^{2} \rightarrow \sigma^{2}/N$. In this situation, the acceptance rate can now be made reasonable, but the exploration of the probability space becomes increasingly slow.
To see this, consider a slice along any one dimension of the problem. By making the scale transformation above, the expected step size in any one dimension is not $\sigma$ but instead is $\sigma/\sqrt{N}$. As this step size is much smaller than the "true" scale of the probability distribution (assuming that $\sigma$ is somehow known a priori, which is the best possible case), the algorithm executes a random walk along every parameter. Suppose $Q(\mathbf{x}, \mathbf{y})$ is an arbitrary proposal function. We require that $Q(\mathbf{x}, \mathbf{y}) > 0$ only if $Q(\mathbf{y}, \mathbf{x}) > 0$. Additionally, $\pi(\mathbf{x})$ is the likelihood function. Define $w(\mathbf{x}, \mathbf{y}) = \pi(\mathbf{x})\,Q(\mathbf{x}, \mathbf{y})\,\lambda(\mathbf{x}, \mathbf{y})$, where $\lambda(\mathbf{x}, \mathbf{y})$ is a non-negative symmetric function in $\mathbf{x}$ and $\mathbf{y}$ that can be chosen by the user. Now suppose the current state is $\mathbf{x}$. The MTM algorithm is as follows: 1) Draw $k$ independent trial proposals $\mathbf{y}_{1}, \ldots, \mathbf{y}_{k}$ from $Q(\mathbf{x}, \cdot)$. Compute the weights $w(\mathbf{y}_{j}, \mathbf{x})$ for each of these. 2) Select $\mathbf{y}$ from the $\mathbf{y}_{i}$ with probability proportional to the weights. 3) Now produce a reference set by drawing $\mathbf{x}_{1}, \ldots, \mathbf{x}_{k-1}$ from the distribution $Q(\mathbf{y}, \cdot)$. Set $\mathbf{x}_{k} = \mathbf{x}$ (the current point). 4) Accept $\mathbf{y}$ with probability $r = \min\!\left(1, \dfrac{w(\mathbf{y}_{1}, \mathbf{x}) + \cdots + w(\mathbf{y}_{k}, \mathbf{x})}{w(\mathbf{x}_{1}, \mathbf{y}) + \cdots + w(\mathbf{x}_{k}, \mathbf{y})}\right)$. It can be shown that this method satisfies the detailed balance property and therefore produces a reversible Markov chain with $\pi(\mathbf{x})$ as the stationary distribution. If $Q(\mathbf{x}, \mathbf{y})$ is symmetric (as is the case for the multivariate normal distribution), then one can choose $\lambda(\mathbf{x}, \mathbf{y}) = \frac{1}{Q(\mathbf{x}, \mathbf{y})}$, which gives $w(\mathbf{x}, \mathbf{y}) = \pi(\mathbf{x})$. Multiple-try Metropolis needs to compute the energy of $2k-1$ other states at every step. If the slow part of the process is calculating the energy, then this method can be slower. If the slow part of the process is finding neighbors of a given point, or generating random numbers, then again this method can be slower. It can be argued that this method only appears faster because it puts much more computation into a "single step" than Metropolis–Hastings does.
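A minimal NumPy sketch of one MTM update for a one-dimensional target, following the four steps above (illustrative only: the standard-normal target, the proposal scale, and the choice λ ≡ 1 are assumptions made for the example; with λ ≡ 1, w(x, y) = π(x)Q(x, y) as defined above):

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):                       # unnormalised π(x); assumption: standard normal target
    return np.exp(-0.5 * x**2)

def q_density(y, x, sigma=2.0):      # symmetric Gaussian proposal density Q(x, y)
    return np.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def weight(a, b):                    # w(a, b) = π(a) Q(a, b) λ(a, b), with λ ≡ 1 here
    return target(a) * q_density(b, a)

def mtm_step(x, k=5, sigma=2.0):
    # 1) k independent trial proposals from Q(x, .) and their weights w(y_j, x)
    ys = x + sigma * rng.standard_normal(k)
    wy = np.array([weight(y, x) for y in ys])
    # 2) select y with probability proportional to the weights
    y = rng.choice(ys, p=wy / wy.sum())
    # 3) reference set: k-1 draws from Q(y, .), plus the current point x
    xs = np.append(y + sigma * rng.standard_normal(k - 1), x)
    wx = np.array([weight(xj, y) for xj in xs])
    # 4) accept y with probability min(1, sum of trial weights / sum of reference weights)
    accept = rng.random() < min(1.0, wy.sum() / wx.sum())
    return y if accept else x

# Run a short chain started at x = 0.
chain = [0.0]
for _ in range(1000):
    chain.append(mtm_step(chain[-1]))
print(np.mean(chain), np.std(chain))  # should be roughly 0 and 1 for this target
```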
https://en.wikipedia.org/wiki/Multiple-try_Metropolis
Amaster boot record(MBR) is a type ofboot sectorin the first block ofpartitionedcomputermass storage deviceslikefixed disksorremovable drivesintended for use withIBM PC-compatiblesystems and beyond. The concept of MBRs was publicly introduced in 1983 withPC DOS 2.0. The MBR holds the information on how the disc's sectors (A.K.A. "blocks") are divided into partitions, each partition notionally containing a file system. The MBR also contains executable code to function as a loader for the installed operating system—usually by passing control over to the loader's second stage, or in conjunction with each partition's volume boot record (VBR). This MBR code is usually referred to as a boot loader. The organization of the partition table in the MBR limits the maximum addressable storage space of a partitioned disk to 2TiB(232× 512 bytes).[1]Approaches to slightly raise this limit utilizing 32-bit arithmetic or 4096-byte sectors are not officially supported, as they fatally break compatibility with existing boot loaders, most MBR-compliant operating systems and associated system tools, and may cause serious data corruption when used outside of narrowly controlled system environments. Therefore, the MBR-based partitioning scheme is in the process of being superseded by theGUID Partition Table(GPT) scheme in new computers. A GPT can coexist with an MBR in order to provide some limited form of backward compatibility for older systems. MBRs are not present on non-partitioned media such asfloppies,superfloppiesor other storage devices configured to behave as such, nor are they necessarily present on drives used in non-PC platforms. Support for partitioned media, and thereby the master boot record (MBR), was introduced with IBMPC DOS2.0 in March 1983 in order to support the 10 MBhard diskof the then-newIBM Personal Computer XT, still using theFAT12file system. The original version of the MBR was written by David Litton of IBM in June 1982. The partition table supported up to fourprimary partitions. This did not change whenFAT16was introduced as a new file system with DOS 3.0. Support for anextended partition, a special primary partition type used as a container to hold other partitions, was added with DOS 3.2, and nestedlogical drivesinside an extended partition came with DOS 3.30. Since MS-DOS, PC DOS, OS/2 and Windows were never enabled to boot off them, the MBR format and boot code remained almost unchanged in functionality (except some third-party implementations) throughout the eras of DOS and OS/2 up to 1996. In 1996, support forlogical block addressing(LBA) was introduced in Windows 95B and MS-DOS 7.10 (Not to be confused with IBM PC-DOS 7.1) in order to support disks larger than 8 GB.Disk timestampswere also introduced.[2]This also reflected the idea that the MBR is meant to be operating system and file system independent. However, this design rule was partially compromised in more recent Microsoft implementations of the MBR, which enforceCHSaccess forFAT16BandFAT32partition types0x06/0x0B, whereas LBA is used for0x0E/0x0C. Despite sometimes poor documentation of certain intrinsic details of the MBR format (which occasionally caused compatibility problems), it has been widely adopted as a de facto industry standard, due to the broad popularity of PC-compatible computers and its semi-static nature over decades. This was even to the extent of being supported by computer operating systems for other platforms. 
Sometimes this was in addition to other pre-existing orcross-platformstandards for bootstrapping and partitioning.[3] MBR partition entries and the MBR boot code used in commercial operating systems, however, are limited to 32 bits.[1]Therefore, the maximum disk size supported on disks using 512-byte sectors (whether real or emulated) by the MBR partitioning scheme (without 32-bit arithmetic) is limited to 2 TiB.[1]Consequently, a different partitioning scheme must be used for larger disks, as they have become widely available since 2010. The MBR partitioning scheme is therefore in the process of being superseded by theGUID Partition Table(GPT). The official approach does little more than ensuring data integrity by employing aprotective MBR. Specifically, it does not provide backward compatibility with operating systems that do not support the GPT scheme as well. Meanwhile, multiple forms ofhybrid MBRshave been designed and implemented by third parties in order to maintain partitions located in the first physical 2 TiB of a disk in both partitioning schemes "in parallel" and/or to allow older operating systems to boot off GPT partitions as well. The present non-standard nature of these solutions causes various compatibility problems in certain scenarios. The MBR consists of 512 or morebyteslocated in the firstsectorof the drive. It may contain one or more of: IBMPC DOS2.0 introduced theFDISKutility to set up and maintain MBR partitions. When a storage device has been partitioned according to this scheme, its MBR contains a partition table describing the locations, sizes, and other attributes of linear regions referred to as partitions. The partitions themselves may also contain data to describe more complex partitioning schemes, such asextended boot records(EBRs),BSD disklabels, orLogical Disk Managermetadata partitions.[8] The MBR is not located in a partition; it is located at a first sector of the device (physical offset 0), preceding the first partition. (The boot sector present on a non-partitioned device or within an individual partition is called avolume boot recordinstead.) In cases where the computer is running aDDO BIOS overlayorboot manager, the partition table may be moved to some other physical location on the device; e.g.,Ontrack Disk Manageroften placed a copy of the original MBR contents in the second sector, then hid itself from any subsequently booted OS or application, so the MBR copy was treated as if it were still residing in the first sector. By convention, there are exactly four primary partition table entries in the MBR partition table scheme, although some operating systems and system tools extended this to five (Advanced Active Partitions (AAP) withPTS-DOS6.60[9]andDR-DOS7.07), eight (ASTandNECMS-DOS3.x[10][11]as well asStorage DimensionsSpeedStor), or even sixteen entries (withOntrack Disk Manager). An artifact of hard disk technology from the era of thePC XT, the partition table subdivides a storage medium using units ofcylinders,heads, andsectors(CHSaddressing). These values no longer correspond to their namesakes in modern disk drives, as well as being irrelevant in other devices such assolid-state drives, which do not physically have cylinders or heads. In the CHS scheme, sector indices have (almost) always begun with sector 1 rather than sector 0 by convention, and due to an error in all versions of MS-DOS/PC DOS up to including 7.10, the number of heads is generally limited to 255[h]instead of 256. 
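For reference, the conventional conversion from a CHS triple (C, H, S) to a logical block address is shown below (this formula is not stated explicitly in the text; H_max denotes heads per cylinder, S_max sectors per track, and sectors count from 1 as noted above):

```latex
% Conventional CHS-to-LBA conversion:
\mathrm{LBA} = (C \cdot H_{\max} + H) \cdot S_{\max} + (S - 1)
```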
When a CHS address is too large to fit into these fields, thetuple(1023, 254, 63) is typically used today, although on older systems, and with older disk tools, the cylinder value often wrapped around modulo the CHS barrier near 8 GB, causing ambiguity and risks of data corruption. (If the situation involves a "protective" MBR on a disk with a GPT, Intel'sExtensible Firmware Interfacespecification requires that the tuple (1023, 255, 63) be used.) The 10-bit cylinder value is recorded within two bytes in order to facilitate making calls to the original/legacyINT 13hBIOS disk access routines, where 16 bits were divided into sector and cylinder parts, and not on byte boundaries.[13] Due to the limits of CHS addressing,[16][17]a transition was made to using LBA, orlogical block addressing. Both the partition length and partition start address are sector values stored in the partition table entries as 32-bit quantities. The sector size used to be considered fixed at 512 (29) bytes, and a broad range of important components includingchipsets,boot sectors,operating systems,database engines,partitioningtools,backupandfile systemutilities and other software had this value hard-coded. Since the end of 2009, disk drives employing 4096-byte sectors (4KnorAdvanced Format) have been available, although the size of the sector for some of these drives was still reported as 512 bytes to the host system through conversion in the hard-drive firmware and referred to as 512 emulation drives (512e). Since block addresses and sizes are stored in the partition table of an MBR using 32 bits, the maximum size, as well as the highest start address, of a partition using drives that have 512-byte sectors (actual or emulated) cannot exceed 2TiB−512 bytes (2199023255040bytes or4294967295(232−1) sectors × 512 (29) bytes per sector).[1]Alleviating this capacity limitation was one of the prime motivations for the development of the GPT. Since partitioning information is stored in the MBR partition table using a beginning block address and a length, it may in theory be possible to define partitions in such a way that the allocated space for a disk with 512-byte sectors gives a total size approaching 4 TiB, if all but one partition are located below the 2 TiB limit and the last one is assigned as starting at or close to block 232−1 and specify the size as up to 232−1, thereby defining a partition that requires 33 rather than 32 bits for the sector address to be accessed. However, in practice, only certainLBA-48-enabled operating systems, including Linux, FreeBSD and Windows 7[18]that use 64-bit sector addresses internally actually support this. Due to code space constraints and the nature of the MBR partition table to only support 32 bits, boot sectors, even if enabled to support LBA-48 rather thanLBA-28, often use 32-bit calculations, unless they are specifically designed to support the full address range of LBA-48 or are intended to run on 64-bit platforms only. Any boot code or operating system using 32-bit sector addresses internally would cause addresses to wrap around accessing this partition and thereby result in serious data corruption over all partitions. For disks that present a sector size other than 512 bytes, such asUSBexternal drives, there are limitations as well. 
A sector size of 4096 results in an eight-fold increase in the size of a partition that can be defined using MBR, allowing partitions up to 16 TiB (2^32 × 4096 bytes) in size.[19] Versions of Windows more recent than Windows XP support the larger sector sizes, as does Mac OS X, and Linux has supported larger sector sizes since 2.6.31[20] or 2.6.32,[21] but issues with boot loaders, partitioning tools and computer BIOS implementations present certain limitations,[22] since they are often hard-wired to reserve only 512 bytes for sector buffers, causing memory to be overwritten when larger sector sizes are used. This may cause unpredictable behaviour as well, and should therefore be avoided when compatibility and standards conformity are an issue.

Where a data storage device has been partitioned with the GPT scheme, the master boot record will still contain a partition table, but its only purpose is to indicate the existence of the GPT and to prevent utility programs that understand only the MBR partition table scheme from creating any partitions in what they would otherwise see as free space on the disk, thereby accidentally erasing the GPT.

On IBM PC-compatible computers, the bootstrapping firmware (contained within the ROM BIOS) loads and executes the master boot record.[23] The PC/XT (type 5160) used an Intel 8088 microprocessor. In order to remain compatible, all x86 BIOS architecture systems start with the microprocessor in an operating mode referred to as real mode. The BIOS reads the MBR from the storage device into physical memory, and then it directs the microprocessor to the start of the boot code. The BIOS will switch the processor to real mode, then begin to execute the MBR program, and so the beginning of the MBR is expected to contain real-mode machine code.[23]

Since the BIOS bootstrap routine loads and runs exactly one sector from the physical disk, having the partition table in the MBR with the boot code simplifies the design of the MBR program. It contains a small program that loads the volume boot record (VBR) of the targeted partition. Control is then passed to this code, which is responsible for loading the actual operating system. This process is known as chain loading.

Popular MBR code programs were created for booting PC DOS and MS-DOS, and similar boot code remains in wide use. These boot sectors expect the FDISK partition table scheme to be in use and scan the list of partitions in the MBR's embedded partition table to find the only one that is marked with the active flag.[24] They then load and run the volume boot record (VBR) of the active partition.

There are alternative boot code implementations, some of which are installed by boot managers, which operate in a variety of ways. Some MBR code loads additional code for a boot manager from the first track of the disk, which it assumes to be "free" space that is not allocated to any disk partition, and executes it. An MBR program may interact with the user to determine which partition on which drive should boot, and may transfer control to the MBR of a different drive. Other MBR code contains a list of disk locations (often corresponding to the contents of files in a filesystem) of the remainder of the boot manager code to load and to execute. (The first approach relies on behavior that is not universal across all disk partitioning utilities, most notably those that read and write GPTs. The last requires that the embedded list of disk locations be updated when changes are made that would relocate the remainder of the code.)
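The classic chain-loading behaviour described above can be paraphrased in a few lines of high-level code. The following Python sketch is only an analogue of what the real-mode boot code does with INT 13h calls, not actual MBR code; it reuses the hypothetical parse_mbr() helper from the earlier sketch and works against a raw disk image.

```python
SECTOR_SIZE = 512

def load_active_vbr(image_path: str) -> bytes:
    """Find the single partition flagged active in a disk image's MBR and
    return its volume boot record, mimicking classic chain loading."""
    with open(image_path, "rb") as disk:
        entries = parse_mbr(disk.read(SECTOR_SIZE))   # helper from the earlier sketch
        active = [e for e in entries if e["active"]]
        if len(active) != 1:
            raise RuntimeError("standard MBR code expects exactly one active partition")
        # Real MBR code historically addressed the partition via its CHS fields
        # and INT 13h; here we simply seek by the 32-bit starting LBA.
        disk.seek(active[0]["lba_start"] * SECTOR_SIZE)
        vbr = disk.read(SECTOR_SIZE)
        if vbr[-2:] != b"\x55\xAA":
            raise RuntimeError("active partition's VBR lacks the 0x55AA signature")
        return vbr
```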
On machines that do not usex86processors, or on x86 machines with non-BIOS firmware such asOpen FirmwareorExtensible Firmware Interface(EFI) firmware, this design is unsuitable, and the MBR is not used as part of the system bootstrap.[25]EFI firmware is instead capable of directly understanding the GPT partitioning scheme and theFATfilesystem format, and loads and runs programs held as files in theEFI System partition.[26]The MBR will be involved only insofar as it might contain a partition table for compatibility purposes if the GPT partition table scheme has been used. There is some MBR replacement code that emulates EFI firmware's bootstrap, which makes non-EFI machines capable of booting from disks using the GPT partitioning scheme. It detects a GPT, places the processor in the correct operating mode, and loads the EFI compatible code from disk to complete this task. In addition to the bootstrap code and a partition table, master boot records may contain adisk signature. This is a 32-bit value that is intended to identify uniquely the disk medium (as opposed to the disk unit—the two not necessarily being the same for removable hard disks). The disk signature was introduced by Windows NT version 3.5, but it is now used by several operating systems, including theLinux kernelversion 2.6 and later. Linux tools can use the NT disk signature to determine which disk the machine booted from.[27] Windows NT (and later Microsoft operating systems) uses the disk signature as an index to all the partitions on any disk ever connected to the computer under that OS; these signatures are kept inWindows Registrykeys, primarily for storing the persistent mappings between disk partitions and drive letters. It may also be used in Windows NTBOOT.INIfiles (though most do not), to describe the location of bootable Windows NT (or later) partitions.[28]One key (among many), where NT disk signatures appear in a Windows 2000/XP registry, is: If a disk's signature stored in the MBR wasA8 E1 B9 D2(in that order) and its first partition corresponded with logical drive C: under Windows, then theREG_BINARYdata under the key value\DosDevices\C:would be: The first four bytes are said disk signature. (In other keys, these bytes may appear in reverse order from that found in the MBR sector.) These are followed by eight more bytes, forming a 64-bit integer, inlittle-endiannotation, which are used to locate the byte offset of this partition. In this case,00 7Ecorresponds to the hexadecimal value0x7E00(32,256). Under the assumption that the drive in question reports a sector size of 512 bytes, then dividing this byte offset by 512 results in 63, which is the physical sector number (or LBA) containing the first sector of the partition (unlike thesector countused in the sectors value of CHS tuples, which counts fromone, the absolute or LBA sector value startscounting fromzero). If this disk had another partition with the values00 F8 93 71 02following the disk signature (under, e.g., the key value\DosDevices\D:), it would begin at byte offset0x00027193F800(10,495,457,280), which is also the first byte of physical sector20,498,940. 
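The 12-byte registry values described above can be decoded mechanically. Below is a small illustrative sketch (the helper name and the synthetic example bytes are mine, but the layout, four signature bytes followed by a 64-bit little-endian partition byte offset, is as described in the text):

```python
def parse_mounted_device_value(data: bytes, sector_size: int = 512):
    """Decode an MBR-style 12-byte value: 4 disk-signature bytes followed by a
    64-bit little-endian byte offset of the partition's first sector."""
    if len(data) != 12:
        raise ValueError("MBR-style values are exactly 12 bytes long")
    signature = data[:4].hex(" ").upper()              # bytes shown in stored order
    byte_offset = int.from_bytes(data[4:12], "little")
    return signature, byte_offset, byte_offset // sector_size

# Reconstructing the example from the text: signature A8 E1 B9 D2, offset 0x7E00.
value = bytes.fromhex("A8E1B9D2") + (0x7E00).to_bytes(8, "little")
print(parse_mounted_device_value(value))   # ('A8 E1 B9 D2', 32256, 63)
```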
Starting withWindows Vista, the disk signature is also stored in theBoot Configuration Data(BCD) store, and the boot process depends on it.[29]If the disk signature changes, cannot be found or has a conflict, Windows is unable to boot.[30]Unless Windows is forced to use the overlapping part of the LBA address of the Advanced Active Partition entry as pseudo-disk signature, Windows' usage is conflictive with the Advanced Active Partition feature of PTS-DOS 7 and DR-DOS 7.07, in particular if their boot code is located outside the first 8 GB of the disk, so that LBA addressing must be used. The MBR originated in thePC XT.[31]IBM PC-compatiblecomputers arelittle-endian, which means theprocessorstores numeric values spanning two or more bytes in memoryleast significant bytefirst. The format of the MBR on media reflects this convention. Thus, the MBR signature will appear in adisk editoras the sequence55 AA.[a] The bootstrap sequence in the BIOS will load the first valid MBR that it finds into the computer'sphysical memoryataddress0x7C00to0x7DFF.[31]The last instruction executed in the BIOS code will be a "jump" to that address in order to direct execution to the beginning of the MBR copy. The primary validation for most BIOSes is the signature at offset0x01FE, although a BIOS implementer may choose to include other checks, such as verifying that the MBR contains a valid partition table without entries referring to sectors beyond the reported capacity of the disk. To the BIOS, removable (e.g. floppy) and fixed disks are essentially the same. For either, the BIOS reads the first physical sector of the media into RAM at absolute address0x7C00, checks the signature in the last two bytes of the loaded sector, and then, if the correct signature is found, transfers control to the first byte of the sector with a jump (JMP) instruction. The only real distinction that the BIOS makes is that (by default, or if the boot order is not configurable) it attempts to boot from the first removable disk before trying to boot from the first fixed disk. From the perspective of the BIOS, the action of the MBR loading a volume boot record into RAM is exactly the same as the action of a floppy disk volume boot record loading the object code of an operating system loader into RAM. In either case, the program that the BIOS loaded is going about the work of chain loading an operating system. While the MBRboot sectorcode expects to be loaded at physical address0x0000:0x7C00,[i]all the memory from physical address0x0000:0x0501(address0x0000:0x0500is the last one used by a Phoenix BIOS)[13]to0x0000:0x7FFF,[31]later relaxed to0x0000:0xFFFF[32](and sometimes[j]up to0x9000:0xFFFF)‍—‌the end of the first 640KB‍—‌is available in real mode.[k]TheINT 12hBIOS interrupt callmay help in determining how much memory can be allocated safely (by default, it simply reads the base memory size in KB fromsegment:offset location0x0040:0x0013, but it may be hooked by other resident pre-boot software like BIOS overlays,RPLcode or viruses to reduce the reported amount of available memory in order to keep other boot stage software like boot sectors from overwriting them). The last 66 bytes of the 512-byte MBR are reserved for the partition table and other information, so the MBR boot sector program must be small enough to fit within 446 bytes of memory or less. 
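The layout constraints mentioned here, boot code confined to the first 446 bytes followed by the 66 reserved bytes (the 64-byte partition table and the two-byte signature at offset 0x01FE), are easy to check programmatically. A minimal sketch with hypothetical helper names, assuming a plain 512-byte MBR:

```python
BOOT_CODE_SIZE = 446    # bytes available to the bootstrap program
SIG_OFFSET = 0x1FE      # location of the 0x55AA signature

def split_mbr(sector: bytes):
    """Split a 512-byte MBR into (boot code, partition table, signature) and
    perform the same signature check most BIOSes use as their primary validation."""
    if len(sector) < 512:
        raise ValueError("need at least one 512-byte sector")
    if sector[SIG_OFFSET:SIG_OFFSET + 2] != b"\x55\xAA":
        raise ValueError("missing 0x55AA signature at offset 0x1FE")
    boot_code = sector[:BOOT_CODE_SIZE]            # up to 446 bytes of real-mode code
    table = sector[BOOT_CODE_SIZE:SIG_OFFSET]      # four 16-byte partition entries
    signature = sector[SIG_OFFSET:SIG_OFFSET + 2]
    return boot_code, table, signature
```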
The MBR code examines the partition table, selects a suitable partition and loads the program that will perform the next stage of the boot process, usually by making use of INT 13hBIOS calls. The MBR bootstrap code loads and runs (a boot loader- or operating system-dependent)volume boot recordcode that is located at the beginning of the "active" partition. The volume boot record will fit within a 512-byte sector, but it is safe for the MBR code to load additional sectors to accommodate boot loaders longer than one sector, provided they do not make any assumptions on what the sector size is. In fact, at least 1 KB of RAM is available at address0x7C00in every IBM XT- and AT-class machine, so a 1 KB sector could be used with no problem. Like the MBR, a volume boot record normally expects to be loaded at address0x0000:0x7C00. This derives from the fact that the volume boot record design originated on unpartitioned media, where a volume boot record would be directly loaded by the BIOS boot procedure; as mentioned above, the BIOS treats MBRs and volume boot records (VBRs)[l]exactly alike. Since this is the same location where the MBR is loaded, one of the first tasks of an MBR is torelocateitself somewhere else in memory. The relocation address is determined by the MBR, but it is most often0x0000:0x0600(for MS-DOS/PC DOS, OS/2 and Windows MBR code) or0x0060:0x0000(most DR-DOS MBRs). (Even though both of these segmented addresses resolve to the same physical memory address in real mode, forApple Darwinto boot, the MBR must be relocated to0x0000:0x0600instead of0x0060:0x0000, since the code depends on the DS:SI pointer to the partition entry provided by the MBR, but it erroneously refers to it via0x0000:SI only.[33]) It is important not to relocate to other addresses in memory because manyVBRswill assume a certain standard memory layout when loading their boot file. TheStatusfield in a partition table record is used to indicate an active partition. Standard-conformant MBRs will allow only one partition marked active and use this as part of a sanity-check to determine the existence of a valid partition table. They will display an error message, if more than one partition has been marked active. Some non-standard MBRs will not treat this as an error condition and just use the first marked partition in the row. Traditionally, values other than0x00(not active) and0x80(active) were invalid and the bootstrap program would display an error message upon encountering them. However, thePlug and Play BIOS SpecificationandBIOS Boot Specification(BBS) allowed other devices to become bootable as well since 1994.[32][34]Consequently, with the introduction of MS-DOS 7.10 (Windows 95B) and higher, the MBR started to treat a set bit 7 as active flag and showed an error message for values0x01..0x7Fonly. It continued to treat the entry as physical drive unit to be used when loading the corresponding partition's VBR later on, thereby now also accepting other boot drives than0x80as valid, however, MS-DOS did not make use of this extension by itself. Storing the actual physical drive number in the partition table does not normally cause backward compatibility problems, since the value will differ from0x80onlyon drives other than the first one (which have not been bootable before, anyway). However, even with systems enabled to boot off other drives, the extension may still not work universally, for example, after the BIOS assignment of physical drives has changed when drives are removed, added or swapped. 
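As a compact restatement of the Status-field convention just described (MS-DOS 7.10 and later treating a set bit 7 as the active flag, with the value doubling as a BIOS drive unit, and 0x01–0x7F triggering an error), here is an illustrative helper; the function name and output strings are mine:

```python
def classify_status(status: int) -> str:
    """Interpret a partition entry's Status byte per the convention described above."""
    if status == 0x00:
        return "inactive"
    if status & 0x80:
        return f"active (boot drive 0x{status:02X})"
    return "invalid (0x01..0x7F)"

for s in (0x00, 0x80, 0x81, 0x05):
    print(f"0x{s:02X}: {classify_status(s)}")
```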
Therefore, per theBIOS Boot Specification(BBS),[32]it is best practice for a modern MBR accepting bit 7 as active flag to pass on the DL value originally provided by the BIOS instead of using the entry in the partition table. The MBR is loaded at memory location0x0000:0x7C00and with the followingCPUregisters set up when the prior bootstrap loader (normally theIPLin the BIOS) passes execution to it by jumping to0x0000:0x7C00in the CPU'sreal mode. Systems withPlug-and-PlayBIOS or BBS support will provide a pointer to PnP data in addition to DL:[32][34] By convention, a standard conformant MBR passes execution to a successfully loaded VBR, loaded at memory location0x0000:0x7C00, by jumping to0x0000:0x7C00in the CPU's real mode with the following registers maintained or specifically set up: The MBR code passes additional information to the VBR in many implementations: Under DR-DOS 7.07 an extended interface may be optionally provided by the extended MBR and in conjunction with LOADER: In conjunction with GPT, anEnhanced Disk Drive Specification(EDD) 4Hybrid MBRproposal recommends another extension to the interface:[37] Though it is possible to manipulate thebytesin the MBR sector directly using variousdisk editors, there are tools to write fixed sets of functioning code to the MBR. Since MS-DOS 5.0, the programFDISKhas included the switch/MBR, which will rewrite the MBR code.[38]UnderWindows 2000andWindows XP, theRecovery Consolecan be used to write new MBR code to a storage device using itsfixmbrcommand. UnderWindows VistaandWindows 7, theRecovery Environmentcan be used to write new MBR code using theBOOTREC /FIXMBRcommand. Some third-party utilities may also be used for directly editing the contents of partition tables (without requiring any knowledge of hexadecimal or disk/sector editors), such as MBRWizard.[o] ddis a POSIX command commonly used to read or write any location on a storage device, MBR included. InLinux, ms-sys may be used to install a Windows MBR. TheGRUBandLILOprojects have tools for writing code to the MBR sector, namelygrub-installandlilo -mbr. The GRUB Legacy interactive console can write to the MBR, using thesetupandembedcommands, but GRUB2 currently requiresgrub-installto be run from within an operating system. Various programs are able to create a "backup" of both the primary partition table and the logical partitions in the extended partition. Linuxsfdisk(on aSystemRescueCD) is able to save a backup of the primary and extended partition table. It creates a file that can be read in a text editor, or this file can be used by sfdisk to restore the primary/extended partition table. An example command to back up the partition table issfdisk -d /dev/hda > hda.outand to restore issfdisk /dev/hda < hda.out. It is possible to copy the partition table from one disk to another this way, useful for setting up mirroring, but sfdisk executes the command without prompting/warnings usingsfdisk -d /dev/sda | sfdisk /dev/sdb.[39]
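Alongside dd and sfdisk, the raw 512-byte sector can also be copied with ordinary file I/O. A minimal, illustrative sketch (device paths and file names are placeholders, and reading block devices requires appropriate privileges):

```python
def backup_mbr(device: str, out_path: str) -> None:
    """Save the first 512 bytes (boot code, partition table, and signature),
    roughly what `dd if=<device> of=<file> bs=512 count=1` does."""
    with open(device, "rb") as src, open(out_path, "wb") as dst:
        dst.write(src.read(512))

def restore_boot_code_only(device: str, backup_path: str) -> None:
    """Write back only the first 446 bytes, leaving the partition table and
    signature untouched, in the spirit of tools that rewrite just the MBR code."""
    with open(backup_path, "rb") as src:
        code = src.read(446)
    with open(device, "r+b") as dst:
        dst.write(code)

# backup_mbr("/dev/sda", "mbr.bin")   # illustrative only
```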
https://en.wikipedia.org/wiki/Master_Boot_Record
James Samuel Coleman (May 12, 1926 – March 25, 1995) was an American sociologist, theorist, and empirical researcher, based chiefly at the University of Chicago.[1][2]

He served as president of the American Sociological Association in 1991–1992. He studied the sociology of education and public policy, and was one of the earliest users of the term social capital.[3] He may be considered one of the original neoconservatives in sociology.[4] His work Foundations of Social Theory (1990) influenced countless sociological theories, and his works The Adolescent Society (1961) and the "Coleman Report" (Equality of Educational Opportunity, 1966) were two of the most cited books in educational sociology. The landmark Coleman Report helped transform educational theory, reshape national education policies, and influence public and scholarly opinion regarding the role of schooling in determining equality and productivity in the United States.[3][5]

The son of James and Maurine Coleman, he spent his early childhood in Bedford, Indiana, and then moved to Louisville, Kentucky. After graduating in 1944, he enrolled in a small school in Virginia, but left to enlist in the US Navy during World War II. After he was discharged from the US Navy in 1946, he enrolled in Indiana University. He eventually transferred, receiving his bachelor's degree in chemical engineering from Purdue University in 1949. He had initially intended to study chemistry but quickly became fascinated with sociology during his university years. He then worked at Eastman Kodak until 1952.[6] Having become interested in sociology, he pursued his degree at Columbia University. During his time there, he spent two years as a research assistant with the Bureau of Applied Social Research, and published a chapter in Mathematical Thinking in the Social Sciences, which was edited by Paul Lazarsfeld. He went on to receive his doctorate from Columbia University in 1955.[6]

He is best known today for his work on the massive study that produced "Equality of Educational Opportunity" (EEO), or the Coleman Report, although his intellectual appetite was prodigious.[7]

In 1949 he married Lucille Richey, with whom he had three children, Thomas, John, and Stephen. Lucille and James divorced in 1973, and he later married his second wife, Zdzislawa Walaszek, with whom he had one son, Daniel Coleman. He died on March 25, 1995, at University Hospital in Chicago, Illinois, and was survived by his wife Zdzislawa Walaszek and his sons.

Coleman achieved success with two studies on problem solving: Introduction to Mathematical Sociology (1964) and Mathematics of Collective Action (1973). He was a fellow at the Center for Advanced Study in the Behavioral Sciences and taught at the University of Chicago. In 1959, he moved to Johns Hopkins University, where he served as an associate professor and founded the Sociology department. In 1965 he became involved in Project Camelot, an academic research project funded by the United States military through the Special Operations Research Office to train in counter-insurgency techniques.
He eventually became a professor of social relations, a post he held until 1973, when he returned to Chicago to teach as a University Professor of Sociology and Education at the University of Chicago.[6] During the mid-1960s and early 1970s, Coleman was an elected member of the American Academy of Arts and Sciences, the American Philosophical Society, and the United States National Academy of Sciences.[8][9][10]

In his work on mathematical sociology, Coleman proceeded on the assumption that the study of human society can become a true science, and examined the contribution that various mathematical techniques might make to the systematic conceptual elaboration of social behavior, claiming that in this way mathematics would ultimately become useful in sociology.[11]

Upon his return to Chicago, he became a professor and senior study director at the National Opinion Research Center. In 1991, Coleman was elected as the eighty-third President of the American Sociological Association.[12] In 2001, Coleman was named among the top 100 American intellectuals, as measured by academic citations, in Richard Posner's book, Public Intellectuals: A Study of Decline.[13] Over his lifetime he published nearly 30 books and more than 300 articles and book chapters, which contributed to the understanding of education in the United States.[14]

He was influenced by Ernest Nagel and Paul Lazarsfeld, both of whom interested Coleman in mathematical sociology, and Robert Merton, who introduced Coleman to Émile Durkheim and Max Weber.[6] Coleman is associated with adolescence, corporate action and rational choice. He shares common ground with sociologists Peter Blau, Daniel Bell, and Seymour Martin Lipset, with whom Coleman first did research after obtaining his PhD.[15]

Coleman is widely cited in the field of sociology of education. In the 1960s, during his time teaching at Johns Hopkins University, Coleman and several other scholars were commissioned by the National Center for Education Statistics[6] to write a report on educational equality in the US. It was one of the largest studies in history, with more than 650,000 students in the sample, and the result was a massive report of over 700 pages. The 1966 report, titled Equality of Educational Opportunity (otherwise known as the "Coleman Report"), fueled debates about "school effects" that are still relevant today.[16] The report is commonly presented as evidence that school funding has little effect on student achievement, a key finding of the report and subsequent research.[17][18][5] It found that, in terms of physical facilities, formal curricula, and other measurable criteria, there was little difference between black and white schools. Also, a significant gap in achievement scores between black and white children already existed in the first grade; despite the similar conditions of black and white schools, the gap became even wider by the end of elementary school. The only consistent variable explaining the differences in scores within each racial or ethnic group was the educational and economic attainment of the parents.[19] Student background and socioeconomic status were therefore found to be more important in determining a student's educational outcomes. Specifically, the key factors were the attitudes toward education of parents and caregivers at home and of peers at school.
Differences in the quality of schools and teachers did have a small impact on student outcomes.[17][18][5]

The study cost approximately $1.5 million and remains one of the largest studies in history, involving 600,000 students and 60,000 teachers in the research sample. The participants included black, Native American, Mexican American, poor white, Puerto Rican, and Asian students. The study was a driving factor in the debate over "school effects", a debate that continues to this day. Among its major findings and controversies were that dropout rates among black students were twice as high as those of white students, and that poor home environments were a major influence on poor academic performance among minorities.

Eric Hanushek criticized the focus on the statistical methodology and the estimation of the impacts of various factors on achievement, which took attention away from the achievement comparisons in the Coleman Report. The study tested students around the United States, and the differences in achievement by race and region were enormous: the average black twelfth-grade student in the rural South was achieving at the level of a seventh-grade white student in the urban Northeast. At the fiftieth anniversary of the report's publication, Hanushek assessed how far the black-white achievement gap had closed. He found that achievement differences had narrowed, largely from improvements in the South, but that at the pace of the previous half-century it would take two and a half centuries to close the mathematics achievement gap.[20][21]

In Foundations of Social Theory (1990), Coleman discusses his theory of social capital, the set of resources found in family relations and in a community's social organization.[3][22] Coleman believed that social capital is important for the development of a child or young person, and that functional communities are important as sources of social capital that can support families in terms of youth development.[3] He discusses three main types of capital: human, physical, and social.[23] Human capital is an individual's skills, knowledge, and experience, which determine their value in society.[24] Physical capital, being completely tangible and generally a private good, originates from the creation of tools to facilitate production. Together with social capital, these types of investment create the three main aspects of society's exchange of capital.[25] According to Coleman, social capital and human capital often go hand in hand: by having certain skill sets, experiences, and knowledge, an individual can gain social status and so receive more social capital.[22]

Coleman was opposed to segregation. When he and his wife Lucille Richey brought their three children, John, Tom, and Steve, to a whites-only amusement park outside Baltimore, they attempted to enter the park with a black family and, as anticipated, were arrested along with approximately 300 other demonstrators.

Coleman was a pioneer in the construction of mathematical models in sociology with his book, Introduction to Mathematical Sociology (1964).
His later treatise, Foundations of Social Theory (1990), made major contributions toward a more rigorous form of theorizing in sociology based on rational choice.[3][26][27] Coleman wrote more than thirty books and 300 articles.[3] He also created an educational corporation that developed and marketed "mental games" aimed at improving the abilities of disadvantaged students.[28] Coleman made it a practice to send his most controversial research findings "to his worst critics" prior to their publication, calling it "the best way to ensure validity."[citation needed]

At the time of his death, he was engaged in a long-term study titled High School and Beyond, which examined the lives and careers of 75,000 people who had been high school juniors and seniors in 1980.[29]

In 1966, Coleman presented his findings to the U.S. Congress and spoke of how to reach a racial balance in public schools; his most controversial finding was that poor black children did better academically when integrated into middle-class schools.

Coleman published lasting theories of education, which helped shape the field.[30][31] His focus on the allocation of rights offered a way of understanding conflicts between rights. Towards the end of his life, Coleman questioned how to make education systems more accountable, which caused educators to question their use and interpretation of standardized testing.[32]

The findings published in the "Coleman Report" were greatly influential and pioneered aspects of the desegregation of American public schools, as did his theories of integration. He also raised the issue of narrowing the educational gap between students with money and those without, arguing that a well-rounded student body can greatly benefit a student's educational experience.[3]

As time passed, many of the people who were interested in and trusted his research, including Coleman himself, became reluctant to follow some of his more startling conclusions. Coleman's later studies suggested that desegregation efforts via busing failed due to "white flight" from areas in which students were bussed. Despite the difficulties that followed his contested findings, Coleman devoted himself to teaching, intending to share his passion for sociology and continue his legacy.

Having been one of the pioneers of mathematical sociology, Coleman was often asked to review papers submitted to various scholarly journals. With little spare time as one of the best-known sociologists in the United States, he instead built a seminar on the mathematics of sociology to train more people with the capability and education necessary to broaden and strengthen the field.
https://en.wikipedia.org/wiki/James_Samuel_Coleman
TheBradley effect, less commonly known as theWilder effect,[1][2]is a theory concerning observed discrepancies between voter opinion polls and election outcomes in some United States government elections where awhiteand anon-whitecandidate run against each other.[3][4][5]The theory proposes that some white voters who intend to vote for the white candidate would nonetheless tell pollsters that they are undecided or likely to vote for the non-white candidate. It was named after Los Angeles mayorTom Bradley, an African-American who lost the1982 California gubernatorial electionto California attorney generalGeorge Deukmejian, an Armenian-American,[6]despite Bradley’s being ahead in voter polls going into the elections.[7] The Bradley effect posits that the inaccurate polls were skewed by the phenomenon ofsocial desirability bias.[8][9]Specifically, some voters give inaccurate polling responses for fear that, by stating their true preference, they will open themselves to criticism of racial motivation, even when neither candidate is considered "white" under the traditional rubric of Americanracism, as was the case in the Bradley/Deukmejian election.[10]Members of the public may feel under pressure to provide an answer that is deemed to be more publicly acceptable, orpolitically correct. The reluctance to give accurate polling answers has sometimes extended to post-electionexit pollsas well. Theraceof the pollster conducting the interview may factor into voters' answers. Some analysts have dismissed the validity of the Bradley effect.[11]Others have argued that it may have existed in past elections but not in more recent ones, such as when the African-AmericanBarack Obamawas elected President of the United States in 2008 and 2012, both times against white opponents.[12]Others believe that it is a persistent phenomenon.[13]Similar effects have been posited in other contexts, for example thespiral of silenceand theshy Tory factor.[12]Bradley himself was unpopular at the state level for a variety of reasons, and went on to lose the1986 California gubernatorial electionto Deukmejian by more than 22 percentage points before settling a longstanding federal corruption probe into his alleged involvement with several multi-million-dollar illegal schemes.[14]In 1991, Bradley would infamously describe the Justice Department's decision not to indict him, after he repaid a portion of illegally transferred funds at issue in the probe, as a "Christmas gift."[14] In 1982,Tom Bradley, the long-time mayor of Los Angeles, ran as theDemocraticParty's candidate forGovernor of CaliforniaagainstRepublicancandidateGeorge Deukmejian, who was ofArmeniandescent. Most polls in the final days before the election showed Bradley with a significant lead.[15]Based onexit polls, a number of media outlets projected Bradley as the winner and early editions of the next day'sSan Francisco Chroniclefeatured a headline proclaiming "Bradley Win Projected." However, despite winning a majority of the votes cast on election day,Bradley narrowly lost the overall raceonce absentee ballots were included.[11]Post-election research indicated that a smaller percentage of white voters actually voted for Bradley than polls had predicted, and that previously undecided voters had voted for Deukmejian in statistically anomalous numbers.[4][16] A month prior to the election, Bill Roberts, Deukmejian's campaign manager, predicted that white voters would break for his candidate. 
He told reporters that he expected Deukmejian to receive approximately 5 percent more votes than polling numbers indicated because white voters were giving inaccurate polling responses to conceal the appearance of racial prejudice. Deukmejian disavowed Roberts's comments, and Roberts resigned his post as campaign manager.[17] Some news outlets and columnists have attributed the theory's origin to Charles Henry, a professor ofAfrican-American Studiesat theUniversity of California, Berkeley.[18][19][20]Henry researched the election in its aftermath, and in a 1983 study reached the controversial conclusion that race was the most likely factor in Bradley's defeat. One critic of the Bradley effect theory charged thatMervin Fieldof The Field Poll had already offered the theory as explanation for his poll's errors, suggesting it (without providing supporting data for the claim) on the day after the election.[11]Ken Khachigian, a senior strategist and day-to-day tactician in Deukmejian's 1982 campaign, has noted that Field's final pre-election poll was badly timed, since it was taken over the weekend, and most late polls failed to register a surge in support for Deukmejian in the campaign's final two weeks.[21]In addition, the exit polling failed to consider absentee balloting in an election which saw an "unprecedented wave of absentee voters" organized on Deukmejian's behalf. In short, Khachigian argues, the "Bradley effect" was simply an attempt to come up with an excuse for what was really the result of flawed opinion polling practices.[22] Other elections which have been cited as possible demonstrations of the Bradley effect include the 1983 race forMayor of Chicago, the 1988Democraticprimaryrace inWisconsinfor President of the United States, and the 1989 race forMayor of New York City.[23][24][25] The 1983 race in Chicago featured a black candidate,Harold Washington, running against a white candidate,Bernard Epton. More so than the California governor's race the year before,[26]the Washington-Epton matchup evinced strong and overt racial overtones throughout the campaign.[27][28]Two polls conducted approximately two weeks before the election showed Washington with a 14-point lead in the race. A third conducted just three days before the election confirmed Washington continuing to hold a lead of 14 points. But in the election's final results, Washington won by less than four points.[23] In the 1988 Democratic presidential primary in Wisconsin, pre-election polls pegged black candidateJesse Jackson—at the time, a legitimate challenger to white candidate and frontrunnerMichael Dukakis—as likely to receive approximately one-third of the white vote.[29]Ultimately, however, Jackson carried only about one quarter of that vote, with the discrepancy in the heavily white state contributing to a large margin of victory for Dukakis over the second-place Jackson.[30] In the 1989 race for Mayor of New York, a poll conducted just over a week before the election showed black candidateDavid Dinkinsholding an 18-point lead over white candidateRudy Giuliani. Four days before the election, a new poll showed that lead to have shrunk, but still standing at 14 points. On the day of the election, Dinkins prevailed by only two points.[23] Similar voter behavior was noted in the 1989 race forGovernor of Virginiabetween DemocratL. Douglas Wilder, an African-American, and RepublicanMarshall Coleman, who was white. 
In that race, Wilder prevailed, but by less than half of one percent, when pre-election poll numbers showed him on average with a 9 percent lead.[31][23]The discrepancy was attributed to white voters telling pollsters that they were undecided when they actually voted for Coleman.[32] After the 1989 Virginia gubernatorial election, the Bradley effect was sometimes called the Wilder effect.[33][24]Both terms are still used; and less commonly, the term "Dinkins effect" is also used.[5] Also sometimes mentioned are: In 1995, whenColin Powell's name was floated as a possible 1996 Republican presidential candidate, Powell reportedly spoke of being cautioned by publisherEarl G. Gravesabout the phenomenon described by the Bradley effect. With regard to opinion polls showing Powell leading a hypothetical race with then-incumbentBill Clinton, Powell was quoted as saying, "Every time I see Earl Graves, he says, 'Look, man, don't let them hand you no crap. When [white voters] go in that booth, they ain't going to vote for you.'"[24][37] Analyses of recent elections suggest that there may be some evidence of a diminution in the 'Bradley Effect'. However, at this stage, such evidence is too limited to confirm a trend. A few analysts, such as political commentator andThe Weekly StandardeditorFred Barnes, attributed the four-point loss by Indian American candidateBobby Jindalin the2003 Louisiana gubernatorial runoff electionto the Bradley effect. In making his argument, Barnes mentioned polls that had shown Jindal with a lead.[38]Others, such asNational ReviewcontributorRod Dreher, countered that later polls taken just before the election correctly showed that lead to have evaporated, and reported the candidates to be statistically tied.[39][40]In 2007,Jindal ran again, this time securing an easy victory, with his final vote total[41]remaining in line with or stronger than the predictions of the polls conducted shortly before the election.[42] In 2006, there was speculation that the Bradley effect might appear in theTennessee race for United States SenatorbetweenHarold Ford, Jr.and white candidateBob Corker.[43][24][36][44][45]Ford lost by a slim margin, but an examination of exit polling data indicated that the percentage of white voters who voted for him remained close to the percentage that indicated they would do so in polls conducted prior to the election.[24][46]Several other 2006 biracial contests saw pre-election polls predict their respective elections' final results with similar accuracy.[23] In therace for United States Senator from Maryland, blackRepublicancandidateMichael Steelelost by a wider margin than predicted by late polls. However, those polls correctly predicted Steele's numbers, with the discrepancy in his margin of defeat resulting from their underestimating the numbers for his whiteDemocraticopponent, then longtime RepresentativeBen Cardin. 
Those same polls also underestimated the Democratic candidate in the state'srace for governor—a race in which both candidates were white.[23] The overall accuracy of the polling data from the 2006 elections was cited, both by those who argue that the Bradley effect has diminished in American politics,[23][45][47]and those who doubt its existence in the first place.[48]When asked about the issue in 2007, Douglas Wilder indicated that while he believed there was still a need for black candidates to be wary of polls, he felt that voters were displaying "more openness" in their polling responses and becoming "less resistant" to giving an accurate answer than was the case at the time of his gubernatorial election.[49]When asked about the possibility of seeing a Bradley effect in 2008, Joe Trippi, who had been a deputy campaign manager for Tom Bradley in 1982, offered a similar assessment, saying, "The country has come a hell of a long way. I think it's a mistake to think that there'll be any kind of big surprise like there was in the Bradley campaign in 1982. But I also think it'd be a mistake to say, 'It's all gone.'"[50] Inaccurate polling statistics attributed to the Bradley effect are not limited to pre-election polls. In the initial hours after voting concluded in the Bradley-Deukmejian race in 1982, similarly inaccurate exit polls led some news organizations to project Bradley to have won.[51]Republican pollsterV. Lance Tarrance, Jr.argues that this was not indicative of the Bradley effect; rather the exit polls were wrong because Bradley actually won on election day turnout, but lost the absentee vote.[52] Exit polls in the Wilder-Coleman race in 1989 also proved inaccurate in their projection of a ten-point win for Wilder, despite those same exit polls accurately predicting other statewide races.[23][31][53]In 2006, a ballot measure inMichiganto endaffirmative actiongenerated exit poll numbers showing the race to be too close to call. Ultimately, the measure passed by a wide margin.[54] The causes of the polling errors are debated, but pollsters generally believe that perceived societal pressures have led some white voters to be less than forthcoming in their poll responses. These voters supposedly have harbored a concern that declaring their support for a white candidate over a non-white candidate will create a perception that the voter is racially prejudiced.[45][55]During the 1988 Jackson presidential campaign, Murray Edelman, a veteran election poll analyst for news organizations and a former president of theAmerican Association for Public Opinion Research, found the race of the pollster conducting the interview to be a factor in the discrepancy. 
Edelman's research showed white voters to be more likely to indicate support for Jackson when asked by a black interviewer than when asked by a white interviewer.[5] Andrew Kohut, who was the president ofthe Gallup Organizationduring the 1989 Dinkins/Giuliani race and later president of thePew Research Center, which conducted research into the phenomenon, has suggested that the discrepancies may arise, not from white participants giving false answers, but rather from white voters who have negative opinions of blacks being less likely to participate in polling at all than white voters who do not share such negative sentiments with regard to blacks.[56][57] While there is widespread belief in a racial component as at least a partial explanation for the polling inaccuracies in the elections in question, it is not universally accepted that this is the primary factor. Peter Brodnitz, a pollster and contributor to the newsletterThe Polling Report, worked on the 2006 campaign of blackU.S. SenatecandidateHarold Ford, Jr., and contrary to Edelman's findings in 1988, Brodnitz indicated that he did not find the race of the interviewer to be a factor in voter responses in pre-election polls. Brodnitz suggested that late-deciding voters tend to have moderate-to-conservativepolitical opinions and that this may account in part for last-minute decision-makers breaking largely away from black candidates, who have generally been moreliberalthan their white opponents in the elections in question.[5]Another prominent skeptic of the Bradley effect is Gary Langer, the director of polling forABC News. Langer has described the Bradley effect as "a theory in search of data." He has argued that inconsistency of its appearance, particularly in more recent elections, casts doubt upon its validity as a theory.[48][58] Of all of the races presented as possible examples of the Bradley effect theory, perhaps the one most fiercely rebutted by the theory's critics is the 1982 Bradley/Deukmejian contest itself. People involved with both campaigns, as well as those involved with the inaccurate polls have refuted the significance of the Bradley effect in determining that election's outcome. FormerLos Angeles Timesreporter Joe Mathews said that he talked to more than a dozen people who played significant roles in either the Bradley or Deukmejian campaign and that only two felt there was a significant race-based component to the polling failures.[59]Mark DiCamillo, Director of The Field Poll, which was among those that had shown Bradley with a strong lead, has not ruled out the possibility of a Bradley effect as a minor factor, but also said that the organization's own internal examination after that election identified other possible factors that may have contributed to their error, including a shift in voter preference after the final pre-election polls and a high-profile ballot initiative in the same election, a Republican absentee ballot program and a low minority turnout, each of which may have caused pre-election polls to inaccurately predict which respondents were likely voters.[60] Prominent Republican pollsterV. Lance Tarrance, Jr.flatly denies that the Bradley effect occurred during that election, echoing the absentee ballot factor cited by DiCamillo.[11]Tarrance also reports that his own firm's pre-election polls done for the Deukmejian campaign showed the race as having closed from a wide lead for Bradley one month prior to the election down to a statistical dead heat by the day of the election. 
While acknowledging that some news sources projected a Bradley victory based upon Field Poll exit polls which were also inaccurate, he counters that at the same time, other news sources were able to correctly predict Deukmejian's victory by using other exit polls that were more accurate. Tarrance claims that The Field Poll speculated, without supplying supporting data, in offering the Bradley effect theory as an explanation for why its polling had failed, and he attributes the emergence of the Bradley effect theory to media outlets focusing on this, while ignoring that there were other conflicting polls which had been correct all along.[11] Sal Russo, a consultant for Deukmejian in the race, has said that another private pollster working for the campaign, Lawrence Research, also accurately captured the late surge in favor of Deukmejian, polling as late as the night before the election. According to Russo, that firm's prediction after its final poll was an extremely narrow victory for Deukmejian. He asserts that the failure of pre-election polls such as The Field Poll arose, largely because they stopped polling too soon, and that the failure of the exit polls was due to their inability to account for absentee ballots.[61] Blair Levin, a staffer on the Bradley campaign in 1982 said that as he reviewed early returns at a Bradley hotel on election night, he saw that Deukmejian would probably win. In those early returns, he had taken particular note of the high number of absentee ballots, as well as a higher-than-expected turnout in California'sCentral Valleyby conservative voters who had been mobilized to defeat the handgun ballot initiative mentioned by DiCamillo. According to Levin, even as he heard the "victory" celebration going on among Bradley supporters downstairs, those returns had led him to the conclusion that Bradley was likely to lose.[62][63]John Phillips, the primary sponsor of the controversial gun control proposition, said that he felt as though he, rather than polling inaccuracies, was the primary target of the blame assigned by those present at the Bradley hotel that night.[59]Nelson Rising, Bradley's campaign chair, spoke of having warned Bradley long before any polling concerns arose that endorsing the ballot initiative would ultimately doom his campaign. Rejecting the idea that the Bradley effect theory was a factor in the outcome, Rising said, "If there is such an effect, it shouldn't be named for Bradley, or associated with him in any way."[59] In 2008, several political analysts[64][65][66][67]discussing the Bradley effect referred to a study authored by Daniel J. Hopkins, a post-doctoral fellow inHarvard University's Department of Government, which sought to determine whether the Bradley effect theory was valid, and whether an analogous phenomenon might be observed in races between a female candidate and a male candidate. Hopkins analyzed data from 133 elections between 1989 and 2006, compared the results of those elections to the corresponding pre-election poll numbers, and considered some of the alternate explanations which have been offered for any discrepancies therein. The study concluded finally that the Bradley effect was a real phenomenon, amounting to a median gap of 3.1 percentage points before 1996, but that it was likely not the sole factor in those discrepancies, and further that it had ceased to manifest itself at all by 1996. 
The study also suggested a connection between the Bradley effect and the level of racial rhetoric exhibited in the discussion of the political issues of the day. It asserted that the timing of the disappearance of the Bradley effect coincided with that of a decrease in such rhetoric in American politics over such potentially racially charged issues as crime andwelfare. The study found no evidence of a corresponding effect based upon gender—in fact, female Senate candidates received on average 1.2 percentage points more votes than polls had predicted.[68] The2008 presidential campaignofBarack Obama, a blackUnited States Senator, brought a heightened level of scrutiny to the Bradley effect,[69]as observers searched for signs of the effect in comparing Obama's polling numbers to the actual election results during the Democratic primary elections.[5][24][46][70][71]After a victorious showing in theIowa caucuses, where votes were cast publicly, polls predicted that Obama would also capture theNew Hampshire Democratic primary electionby a large margin overHillary Clinton, a white senator. However, Clinton defeated Obama by three points in the New Hampshire race, where ballots were cast secretly, immediately initiating suggestions by some analysts that the Bradley effect may have been at work.[72][58]Other analysts cast doubt on that hypothesis, saying that the polls underestimated Clinton rather than overestimated Obama.[73]Clinton may have also benefited from theprimacy effectin the New Hampshire primary as she was listed ahead of Obama on every New Hampshire ballot.[74] After theSuper Tuesdayprimaries of February 5, 2008,political scienceresearchers from theUniversity of Washingtonfound trends suggesting the possibility that with regard to Obama, the effect's presence or absence may be dependent on the percentage of the electorate that is black. The researchers noted that to that point in the election season, opinion polls taken just prior to an election tended to overestimate Obama in states with a black population below eight percent, to track him within the polls' margins of error in states with a black population between ten and twenty percent, and to underestimate him in states with a black population exceeding twenty-five percent. The first finding suggested the possibility of the Bradley effect, while the last finding suggested the possibility of a "reverse" Bradley effect in which black voters might have been reluctant to declare to pollsters their support for Obama or are underpolled. For example, many general election polls in North Carolina and Virginia assume that black voters will be 15% to 20% of each state's electorate; they were around a quarter of each state's electorate in 2004.[75][76]That high support effect has been attributed to high black voter turnout in those states' primaries, with blacks supporting Obama by margins that often exceeded 97%. 
With only one exception, each state in which opinion polls incorrectly predicted the outcome of the Democratic contest also had polls that accurately predicted the outcome of the state's Republican contest (which featured only white candidates).[77]

Alternatively, Douglas Wilder has suggested that a 'reverse Bradley effect' may be possible because some Republicans may not openly say they will vote for a black candidate, but may do so on election day.[78] The "Fishtown Effect" is a scenario where prejudiced or racist white voters cast their vote for a black candidate solely on economic concerns.[79][80] Fishtown, a mostly white and economically depressed neighborhood in Philadelphia, voted 81% for Obama in the 2008 election.[81] Alternatively, writer Alisa Valdes-Rodriguez suggested that another plausible factor is the "Huxtable effect", in which the positive image of the African American character Cliff Huxtable, a respected middle-class obstetrician and father on the 1980s television series The Cosby Show, made young voters who grew up with that series' initial run comfortable with the idea of an African American man being a viable presidential candidate, enhancing Obama's election chances with that population.[82] Others have called it the "Palmer effect", on the theory that David Palmer, a fictional president played by Dennis Haysbert during the second and third seasons of the television drama 24, showed viewers that an African American man can be a strong commander in chief.[83]
Nevertheless, Trump won the key Rust Belt states of Ohio, Michigan, Pennsylvania, and Wisconsin, giving him more electoral votes than Secretary Clinton. Post-election analysis of public opinion polling showed that Trump's base was larger than predicted, leading some experts to suggest that some "shy Trumpers" were hiding their preferences to avoid being seen as prejudiced by pollsters.[89] There may also have been some cases in which male respondents were hiding their preferences to avoid being seen as sexist, as Hillary Clinton was the first female major party candidate for President.[89]

In a 2019 press conference, Trump estimated the effect to be between 6 and 10% in his favor, remarking, "I don't know if I consider that to be a compliment, but in one way it is a compliment."[90]

However, many pollsters have disputed this claim. A 2016 poll conducted by Morning Consult showed that Trump performed better in general election polls regardless of whether the poll was conducted online or by live interviewer over the phone. This finding led Morning Consult's chief research officer to conclude that there was little evidence that poll respondents were feeling pressured to downplay their true general election preferences.[91] Harry Enten, an analyst for FiveThirtyEight.com, noted that Trump generally underperformed his polling in Democratic-leaning states like California and New York, where the stigma against voting for Trump likely would have been stronger, and overperformed his polls in places like Wisconsin and Ohio. Enten concluded that, although Trump did better than the polls predicted in many states, he "didn't do so in a pattern consistent with a 'shy Trump' effect".[92]

The Bradley effect, as well as a variant of the so-called shy Tory factor involving prospective voters' expressed intentions to vote for candidates belonging to the U.S. Republican Party, reportedly skewed a number of opinion polls running up to the 2018 U.S. elections.[93] Notably, the effect was arguably present in the Florida gubernatorial election between black Democrat Andrew Gillum, the mayor of Tallahassee, and white Republican Ron DeSantis, a U.S. Congressman. Despite Gillum having led in most polls before the election, DeSantis ultimately won by a margin of 0.4%.[94]
https://en.wikipedia.org/wiki/Bradley_effect
In mathematics, for a function f : X → Y, the image of an input value x is the single output value produced by f when passed x. The preimage of an output value y is the set of input values that produce y.

More generally, evaluating f at each element of a given subset A of its domain X produces a set, called the "image of A under (or through) f". Similarly, the inverse image (or preimage) of a given subset B of the codomain Y is the set of all elements of X that map to a member of B.

The image of the function f is the set of all output values it may produce, that is, the image of X. The preimage of f, that is, the preimage of Y under f, always equals X (the domain of f); therefore, the former notion is rarely used.

Image and inverse image may also be defined for general binary relations, not just functions.

The word "image" is used in three related ways. In these definitions, f : X → Y is a function from the set X to the set Y.

If x is a member of X, then the image of x under f, denoted f(x), is the value of f when applied to x. f(x) is alternatively known as the output of f for argument x.

Given y, the function f is said to take the value y, or take y as a value, if there exists some x in the function's domain such that f(x) = y. Similarly, given a set S, f is said to take a value in S if there exists some x in the function's domain such that f(x) ∈ S. However, "f takes [all] values in S" and "f is valued in S" mean that f(x) ∈ S for every point x in the domain of f.

Throughout, let f : X → Y be a function. The image under f of a subset A of X is the set of all f(a) for a ∈ A. It is denoted by f[A], or by f(A) when there is no risk of confusion. Using set-builder notation, this definition can be written as[1][2] f[A] = {f(a) : a ∈ A}.

This induces a function f[⋅] : P(X) → P(Y), where P(S) denotes the power set of a set S, that is, the set of all subsets of S. See § Notation below for more.
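Since the definition f[A] = {f(a) : a ∈ A} is constructive, it translates directly into code. Below is a minimal sketch in Python; the function square and the example set A are hypothetical illustrations, not part of the article.

```python
def image(f, A):
    """Image of the set A under the function f, i.e. {f(a) for a in A}."""
    return {f(a) for a in A}

def square(x):
    return x * x

A = {-2, -1, 0, 1, 2}          # a finite subset of the domain
print(image(square, A))        # {0, 1, 4} -- only distinct outputs, since the image is a set
```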
The image of a function is the image of its entire domain, also known as the range of the function.[3] This last usage should be avoided because the word "range" is also commonly used to mean the codomain of f.

If R is an arbitrary binary relation on X × Y, then the set {y ∈ Y : xRy for some x ∈ X} is called the image, or the range, of R. Dually, the set {x ∈ X : xRy for some y ∈ Y} is called the domain of R.

Let f be a function from X to Y. The preimage or inverse image of a set B ⊆ Y under f, denoted by f⁻¹[B], is the subset of X defined by f⁻¹[B] = {x ∈ X : f(x) ∈ B}.

Other notations include f⁻¹(B) and f⁻(B).[4] The inverse image of a singleton set, denoted by f⁻¹[{y}] or by f⁻¹(y), is also called the fiber or fiber over y, or the level set of y. The set of all the fibers over the elements of Y is a family of sets indexed by Y.

For example, for the function f(x) = x², the inverse image of {4} would be {−2, 2}. Again, if there is no risk of confusion, f⁻¹[B] can be denoted by f⁻¹(B), and f⁻¹ can also be thought of as a function from the power set of Y to the power set of X. The notation f⁻¹ should not be confused with that for the inverse function, although it coincides with the usual one for bijections in that the inverse image of B under f is the image of B under f⁻¹.

The traditional notations used in the previous section do not distinguish the original function f : X → Y from the image-of-sets function f : P(X) → P(Y); likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the power sets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative[5] is to give explicit names for the image and preimage as functions between power sets.

A number of standard properties hold for every function f : X → Y and all subsets A ⊆ X and B ⊆ Y; further properties hold for compositions, that is, for functions f : X → Y and g : Y → Z with subsets A ⊆ X and C ⊆ Z, and for a single function f : X → Y with subsets A, B ⊆ X and S, T ⊆ Y. The results relating images and preimages to the (Boolean) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets (here, the indexing family S can be infinite, even uncountably infinite). With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism, while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections).
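The worked example in the text (the preimage of {4} under f(x) = x² is {−2, 2}) can be checked with a short Python sketch; restricting to an explicitly enumerated finite domain is an assumption made here purely so the set can be computed.

```python
def preimage(f, B, domain):
    """Inverse image of B under f, restricted to an explicitly enumerated domain."""
    return {x for x in domain if f(x) in B}

def square(x):
    return x * x

domain = range(-10, 11)                 # finite stand-in for the real domain
print(preimage(square, {4}, domain))    # the set {-2, 2}, matching the example in the text
```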
This article incorporates material from Fibre on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Image_(mathematics)#Properties
Linear discriminant analysis (LDA), normal discriminant analysis (NDA), canonical variates analysis (CVA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements.[2][3] However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label).[4] Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.

LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data.[5] LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.

LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.[6][7]

Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure.[8] In simple terms, discriminant function analysis is classification: the act of distributing things into groups, classes or categories of the same type.

The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936.[9] It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.[10]

Consider a set of observations x⃗ (also called features, attributes, variables or measurements) for each sample of an object or event with known class y. This set of samples is called the training set in a supervised learning context.
The classification problem is then to find a good predictor for the class y of any sample of the same distribution (not necessarily from the training set) given only an observation x⃗.[11]: 338

LDA approaches the problem by assuming that the conditional probability density functions p(x⃗ | y = 0) and p(x⃗ | y = 1) are both normal distributions with mean and covariance parameters (μ⃗0, Σ0) and (μ⃗1, Σ1), respectively. Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold T (this criterion is written out below).

Without any further assumptions, the resulting classifier is referred to as quadratic discriminant analysis (QDA). LDA instead makes the additional simplifying homoscedasticity assumption (i.e. that the class covariances are identical, so Σ0 = Σ1 = Σ) and that the covariances have full rank. In this case, several terms cancel, and the decision criterion reduces to a threshold on the dot product w⃗ ⋅ x⃗ > c for some threshold constant c and weight vector w⃗ (given explicitly below).

This means that the criterion of an input x⃗ being in a class y is purely a function of this linear combination of the known observations. It is often useful to see this conclusion in geometrical terms: the criterion of an input x⃗ being in a class y is purely a function of the projection of the multidimensional-space point x⃗ onto the vector w⃗ (thus, we only consider its direction). In other words, the observation belongs to y if the corresponding x⃗ is located on a certain side of a hyperplane perpendicular to w⃗. The location of the plane is defined by the threshold c.

The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers, and the size of the smallest group must be larger than the number of predictor variables.[8]

It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions,[12] and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).[13]

Discriminant analysis works by creating one or more linear combinations of predictors, creating a new latent variable for each function. These functions are called discriminant functions. The number of functions possible is either Ng − 1, where Ng is the number of groups, or p (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions, with the requirement that each new function not be correlated with any of the previous functions.

Given group j, with ℝj sets of sample space, there is a discriminant rule such that if x ∈ ℝj, then x ∈ j.
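In the standard presentation under the Gaussian assumptions just stated, the criterion takes the following form; this is a sketch, with the explicit threshold assuming equal class priors (T = 0) and the threshold in the first line rescaled by a constant factor.

```latex
% Log-likelihood-ratio criterion (quadratic discriminant analysis):
% predict class 1 when
(\vec{x}-\vec{\mu}_0)^{\mathsf T}\Sigma_0^{-1}(\vec{x}-\vec{\mu}_0) + \ln|\Sigma_0|
\;-\;(\vec{x}-\vec{\mu}_1)^{\mathsf T}\Sigma_1^{-1}(\vec{x}-\vec{\mu}_1) - \ln|\Sigma_1| \;>\; T .

% With the LDA assumption \Sigma_0=\Sigma_1=\Sigma the quadratic terms cancel and
% the rule becomes a threshold on a dot product:
\vec{w}\cdot\vec{x} > c, \qquad
\vec{w} = \Sigma^{-1}(\vec{\mu}_1-\vec{\mu}_0), \qquad
c = \tfrac{1}{2}\,\vec{w}\cdot(\vec{\mu}_0+\vec{\mu}_1) \quad\text{(equal priors).}
```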
Discriminant analysis then finds "good" regions of ℝj to minimize classification error, therefore leading to a high percentage correctly classified in the classification table.[14]

Each function is given a discriminant score[clarification needed] to determine how well it predicts group placement. An eigenvalue in discriminant analysis is the characteristic root of each function.[clarification needed] It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates.[8] This, however, should be interpreted with caution, as eigenvalues have no upper limit.[10][8] The eigenvalue can be viewed as a ratio of SSbetween and SSwithin, as in ANOVA when the dependent variable is the discriminant function and the groups are the levels of the IV[clarification needed].[10] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on.

Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported.[10] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of SSbetween and SStotal. It is the correlation between groups and the function.[10] Another popular measure of effect size is the percent of variance[clarification needed] for each function. This is calculated by (λx / Σλi) × 100, where λx is the eigenvalue for the function and Σλi is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others.[10] Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement.[10] Kappa normalizes across all categories rather than being biased by significantly well- or poorly performing classes.[clarification needed][17]

Canonical discriminant analysis (CDA) finds axes (k − 1 canonical coordinates, k being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal k − 1 dimensional space through the n-dimensional cloud of data that best separates (the projections in that space of) the k groups. See "Multiclass LDA" below for details. Because LDA uses canonical variates, it was initially often referred to as the "method of canonical variates"[18] or canonical variates analysis (CVA).[19]

The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's original article[2] actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances.

Suppose two classes of observations have means μ⃗0, μ⃗1 and covariances Σ0, Σ1. Then the linear combination of features w⃗ᵀx⃗ will have means w⃗ᵀμ⃗i and variances w⃗ᵀΣiw⃗ for i = 0, 1. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes (written out below). This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs at a particular choice of w⃗, also given below. When the assumptions of LDA are satisfied, that choice is equivalent to LDA.
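The usual statements of Fisher's separation criterion and its maximizer, given here as a sketch, are:

```latex
% Fisher's separation: between-class variance of the projection over within-class variance
S \;=\; \frac{\sigma_{\text{between}}^{2}}{\sigma_{\text{within}}^{2}}
  \;=\; \frac{\bigl(\vec{w}\cdot(\vec{\mu}_1-\vec{\mu}_0)\bigr)^{2}}
             {\vec{w}^{\mathsf T}\Sigma_1\vec{w} + \vec{w}^{\mathsf T}\Sigma_0\vec{w}} .

% The separation is maximized (up to scaling of \vec{w}) when
\vec{w} \;\propto\; (\Sigma_0+\Sigma_1)^{-1}\,(\vec{\mu}_1-\vec{\mu}_0) .
```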
Note that the vector w⃗ is the normal to the discriminant hyperplane. As an example, in a two-dimensional problem, the line that best divides the two groups is perpendicular to w⃗.

Generally, the data points to be discriminated are projected onto w⃗; the threshold that best separates the data is then chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane halfway between the projections of the two means, w⃗ ⋅ μ⃗0 and w⃗ ⋅ μ⃗1. In this case the parameter c in the threshold condition w⃗ ⋅ x⃗ > c can be found explicitly as the midpoint of the two projected means, c = w⃗ ⋅ (μ⃗0 + μ⃗1)/2 (a short numerical sketch follows below).

Otsu's method is related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between the grayscales assigned to the black and white pixel classes.

In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability.[20] This generalization is due to C. R. Rao.[21] Suppose that each of C classes has a mean μi and the same covariance Σ. Then the between-class scatter Σb may be defined by the sample covariance of the class means, where μ is the mean of the class means. The class separation in a direction w⃗ in this case is given by the ratio of the between-class to the within-class variance of the projection, S = (w⃗ᵀΣb w⃗)/(w⃗ᵀΣ w⃗). This means that when w⃗ is an eigenvector of Σ⁻¹Σb, the separation will be equal to the corresponding eigenvalue.

If Σ⁻¹Σb is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the C − 1 largest eigenvalues (since Σb is of rank C − 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section.

If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest", where the points from one class are put in one group and everything else in the other, and then LDA is applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving C(C − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification.

The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream.
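Here is the promised numerical sketch of the two-class rule, using NumPy; the toy data and class means are illustrative assumptions only. It estimates the class means and covariances, forms w⃗ proportional to (Σ0 + Σ1)⁻¹(μ⃗1 − μ⃗0), and thresholds the projection at the midpoint of the projected means.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-class data: 100 samples per class in 2 dimensions (illustrative only).
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(100, 2))

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S0 = np.cov(X0, rowvar=False)
S1 = np.cov(X1, rowvar=False)

# Fisher's direction: w proportional to (Sigma0 + Sigma1)^(-1) (mu1 - mu0).
w = np.linalg.solve(S0 + S1, mu1 - mu0)

# Midpoint threshold between the projected class means (assumes similar projected spreads).
c = w @ (mu0 + mu1) / 2.0

def classify(x):
    """Return 1 if the projection of x onto w exceeds the threshold c, else 0."""
    return int(w @ x > c)

print(classify(np.array([0.1, -0.2])))  # expected 0: close to the class-0 mean
print(classify(np.array([2.2, 1.9])))   # expected 1: close to the class-1 mean
```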
In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades.[22] Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features.[23] In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and Hebbian learning rules.[24] Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing new samples.[22]

In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct.

Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class.[5] In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use a pseudo-inverse instead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned by Σb.[25] Another strategy to deal with small sample size is to use a shrinkage estimator of the covariance matrix, which blends the sample covariance with the identity matrix I, weighted by a shrinkage intensity or regularisation parameter λ (a common form is given below). This leads to the framework of regularized discriminant analysis[26] or shrinkage discriminant analysis.[27]

Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher-dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is the kernel Fisher discriminant.

LDA can be generalized to multiple discriminant analysis, where c becomes a categorical variable with N possible states, instead of only two. Analogously, if the class-conditional densities p(x⃗ ∣ c = i) are normal with shared covariances, the sufficient statistic for P(c ∣ x⃗) is the set of N projections onto the subspace spanned by the N means, affine projected by the inverse covariance matrix.
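One common form of such a shrinkage estimator, given here as a sketch (the variants in the cited references may differ, for example by scaling the identity by the average sample variance), is:

```latex
\Sigma_{\text{shrink}} \;=\; (1-\lambda)\,\widehat{\Sigma} \;+\; \lambda\, I ,
\qquad 0 \le \lambda \le 1 ,
```
where Σ̂ is the sample covariance matrix, I is the identity matrix, and λ is the shrinkage intensity.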
These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix (a sketch of this computation follows below). See "Multiclass LDA" above for details.

In addition to the examples given below, LDA is applied in positioning and product management.

In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy and which survived. Despite limitations, including the known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model[28] is still a leading model in practical applications.[29][30][31]

In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces.

In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described as a sequence of steps.

The main application of discriminant analysis in medicine is the assessment of the severity state of a patient and the prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease: mild, moderate, and severe form. Then results of clinical and laboratory analyses are studied to reveal variables which are statistically different in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, linear discriminant analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance.[32]

In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra,[33] to detect the animal source of Escherichia coli by studying its virulence factors,[34] etc.

This method can be used to separate the alteration zones[clarification needed]. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.[35]

Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions.[10] Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis' assumptions are met, it is more powerful than logistic regression.[36] Unlike logistic regression, discriminant analysis can be used with small sample sizes.
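The generalized eigenvalue problem mentioned above can be solved directly with standard numerical routines. The sketch below uses NumPy and SciPy, with toy three-class data as an illustrative assumption: the between-class scatter Sb is formed from the class means, the pooled within-class covariance Sw is the denominator, and the generalized eigenvectors for the C − 1 largest eigenvalues give the discriminant directions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
# Toy data: three classes, 50 samples each, in 4 dimensions (illustrative only).
class_means = [np.array([0.0, 0.0, 0.0, 0.0]),
               np.array([3.0, 0.0, 1.0, 0.0]),
               np.array([0.0, 3.0, 0.0, 1.0])]
X = np.vstack([rng.normal(m, 1.0, size=(50, 4)) for m in class_means])
y = np.repeat([0, 1, 2], 50)

classes = np.unique(y)
overall_mean = X.mean(axis=0)

# Between-class scatter: sample covariance of the class means around the overall mean.
Sb = sum(np.outer(X[y == c].mean(axis=0) - overall_mean,
                  X[y == c].mean(axis=0) - overall_mean) for c in classes) / len(classes)

# Shared within-class covariance, pooled over the classes.
Sw = sum(np.cov(X[y == c], rowvar=False) for c in classes) / len(classes)

# Generalized symmetric eigenproblem Sb v = lambda Sw v (eigenvalues in ascending order).
eigvals, eigvecs = eigh(Sb, Sw)
W = eigvecs[:, -(len(classes) - 1):]    # directions for the C - 1 largest eigenvalues

Z = X @ W                               # project the data into the discriminant subspace
print(Z.shape)                          # (150, 2): C - 1 = 2 discriminant coordinates
```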
It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate.[8] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.[9][8]

Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier.[37] An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional, then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples.[38] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.[39] In particular, such theorems are proven for log-concave distributions, including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures[40]), and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension.[41]
https://en.wikipedia.org/wiki/Linear_discriminant_analysis
The Knowledge Query and Manipulation Language, or KQML, is a language and protocol for communication among software agents and knowledge-based systems.[1] It was developed in the early 1990s as part of the DARPA Knowledge Sharing Effort, which was aimed at developing techniques for building large-scale knowledge bases which are shareable and reusable. While originally conceived of as an interface to knowledge-based systems, it was soon repurposed as an agent communication language.[2][3]

Work on KQML was led by Tim Finin of the University of Maryland, Baltimore County and Jay Weber of EITech, and involved contributions from many researchers.

The KQML message format and protocol can be used to interact with an intelligent system, either by an application program or by another intelligent system. KQML's "performatives" are operations that agents perform on each other's knowledge and goal stores. Higher-level interactions such as contract nets and negotiation are built using these. KQML's "communication facilitators" coordinate the interactions of other agents to support knowledge sharing.

Experimental prototype systems support concurrent engineering, intelligent design, intelligent planning, and scheduling.

KQML is superseded by FIPA-ACL.
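KQML messages are conventionally written as Lisp-style expressions in which a performative (such as ask-one or tell) is followed by keyword parameters naming the sender, receiver, content language, ontology, and content. The sketch below simply formats such a message as a string in Python; the particular agent names, ontology, and query content are illustrative assumptions, not taken from the article.

```python
def kqml_message(performative, **params):
    """Format a KQML-style performative with :keyword value parameters."""
    fields = " ".join(f":{key.replace('_', '-')} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# Hypothetical query: one agent asks a stock-quote agent for a price.
msg = kqml_message(
    "ask-one",
    sender="trader-agent",
    receiver="stock-server",
    language="KIF",
    ontology="NYSE-TICKS",
    reply_with="q1",
    content="(PRICE IBM ?price)",
)
print(msg)
# -> (ask-one :sender trader-agent :receiver stock-server :language KIF
#             :ontology NYSE-TICKS :reply-with q1 :content (PRICE IBM ?price))
```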
https://en.wikipedia.org/wiki/Knowledge_Query_and_Manipulation_Language
In population genetics, Ewens's sampling formula describes the probabilities associated with counts of how many different alleles are observed a given number of times in the sample.

Ewens's sampling formula, introduced by Warren Ewens, states that under certain conditions (specified below), if a random sample of n gametes is taken from a population and classified according to the gene at a particular locus, then the probability that there are a1 alleles represented once in the sample, a2 alleles represented twice, and so on, is given by an explicit expression (written out below) involving a positive number θ representing the population mutation rate, whenever a1, …, an is a sequence of nonnegative integers such that a1 + 2a2 + ⋯ + nan = n.

The phrase "under certain conditions" used above is made precise by a set of assumptions on the population model.

This is a probability distribution on the set of all partitions of the integer n. Among probabilists and statisticians it is often called the multivariate Ewens distribution.

When θ = 0, the probability is 1 that all n genes are the same. When θ = 1, the distribution is precisely that of the integer partition induced by a uniformly distributed random permutation. As θ → ∞, the probability that no two of the n genes are the same approaches 1.

This family of probability distributions enjoys the property that if, after the sample of n is taken, m of the n gametes are chosen without replacement, then the resulting probability distribution on the set of all partitions of the smaller integer m is just what the formula above would give if m were put in place of n. The Ewens distribution arises naturally from the Chinese restaurant process.
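The standard statement of the formula referred to above is:

```latex
\Pr(a_1,\dots,a_n)
 \;=\; \frac{n!}{\theta(\theta+1)\cdots(\theta+n-1)}
       \prod_{j=1}^{n} \frac{\theta^{a_j}}{j^{a_j}\, a_j!} ,
\qquad a_j \ge 0,\quad \sum_{j=1}^{n} j\,a_j = n .
```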
https://en.wikipedia.org/wiki/Ewens%27s_sampling_formula